| problem_id (stringlengths 18-22) | source (stringclasses 1 value) | task_type (stringclasses 1 value) | in_source_id (stringlengths 13-58) | prompt (stringlengths 1.1k-25.4k) | golden_diff (stringlengths 145-5.13k) | verification_info (stringlengths 582-39.1k) | num_tokens (int64 271-4.1k) | num_tokens_diff (int64 47-1.02k) |
|---|---|---|---|---|---|---|---|---|
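The fields are easier to inspect programmatically than in this flattened view. The sketch below is a minimal, assumption-laden example rather than documented usage: the Hub repo id (`rasdani/github-patches`, taken from the `source` column), the split name, and the treatment of `verification_info` as a JSON-encoded string are all inferred from this page, not confirmed by it.

```python
# Hypothetical loading example; the repo id and split name are assumptions.
import json

from datasets import load_dataset

ds = load_dataset("rasdani/github-patches", split="train")

row = ds[0]
# Short fields listed in the schema above.
for field in ("problem_id", "source", "task_type", "in_source_id",
              "num_tokens", "num_tokens_diff"):
    print(f"{field}: {row[field]}")

# Long text fields: the task prompt and the reference patch.
print(row["prompt"][:300])
print(row["golden_diff"][:300])

# verification_info appears to bundle the issue text plus before/after
# file snapshots; parse it if it is indeed stored as a JSON string.
info = json.loads(row["verification_info"])
print(sorted(info))                              # e.g. after_files, before_files, golden_diff, issue
print([f["path"] for f in info["before_files"]])
```

Each row below pairs a prompt (the issue text plus candidate files) with a `golden_diff` patch and the verification payload used to check it.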
problem_id: gh_patches_debug_13173 | source: rasdani/github-patches | task_type: git_diff | in_source_id: ros__ros_comm-187 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
roslaunch --files prints error if multiple anon nodes have same id
When running:
```
roslaunch --files package_name launch_file.launch
```
I get the following error:
```
roslaunch file contains multiple nodes named [/$(anon foo)].
Please check all <node> 'name' attributes to make sure they are unique.
Also check that $(anon id) use different ids.
```
Note that when actually launching the launch file, this will not be a problem because the anonymous node names will be expanded with unique suffixes. Also, the list of files does not relate to the expansion of anonymous node names, so the file list should be printable regardless.
This is similar in spirit to bugs #94 and #65, both of which suffer because roslaunch simply prints an error and quits when it finds non-unique node names instead of doing as much as possible to help you find your error.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `tools/roslaunch/src/roslaunch/rlutil.py`
Content:
```
1 # Software License Agreement (BSD License)
2 #
3 # Copyright (c) 2009, Willow Garage, Inc.
4 # All rights reserved.
5 #
6 # Redistribution and use in source and binary forms, with or without
7 # modification, are permitted provided that the following conditions
8 # are met:
9 #
10 # * Redistributions of source code must retain the above copyright
11 # notice, this list of conditions and the following disclaimer.
12 # * Redistributions in binary form must reproduce the above
13 # copyright notice, this list of conditions and the following
14 # disclaimer in the documentation and/or other materials provided
15 # with the distribution.
16 # * Neither the name of Willow Garage, Inc. nor the names of its
17 # contributors may be used to endorse or promote products derived
18 # from this software without specific prior written permission.
19 #
20 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
21 # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
22 # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
23 # FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
24 # COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
25 # INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
26 # BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
27 # LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
28 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
29 # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
30 # ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
31 # POSSIBILITY OF SUCH DAMAGE.
32
33 """
34 Uncategorized utility routines for roslaunch.
35
36 This API should not be considered stable.
37 """
38
39 from __future__ import print_function
40
41 import os
42 import sys
43 import time
44
45 import roslib.packages
46
47 import rosclean
48 import rospkg
49 import rosgraph
50
51 import roslaunch.core
52 import roslaunch.config
53 import roslaunch.depends
54 from rosmaster import DEFAULT_MASTER_PORT
55
56 def check_log_disk_usage():
57 """
58 Check size of log directory. If high, print warning to user
59 """
60 try:
61 d = rospkg.get_log_dir()
62 roslaunch.core.printlog("Checking log directory for disk usage. This may take awhile.\nPress Ctrl-C to interrupt")
63 disk_usage = rosclean.get_disk_usage(d)
64 # warn if over a gig
65 if disk_usage > 1073741824:
66 roslaunch.core.printerrlog("WARNING: disk usage in log directory [%s] is over 1GB.\nIt's recommended that you use the 'rosclean' command."%d)
67 else:
68 roslaunch.core.printlog("Done checking log file disk usage. Usage is <1GB.")
69 except:
70 pass
71
72 def resolve_launch_arguments(args):
73 """
74 Resolve command-line args to roslaunch filenames.
75
76 :returns: resolved filenames, ``[str]``
77 """
78
79 # strip remapping args for processing
80 args = rosgraph.myargv(args)
81
82 # user can either specify:
83 # - filename + launch args
84 # - package + relative-filename + launch args
85 if not args:
86 return args
87 resolved_args = None
88 top = args[0]
89 if os.path.isfile(top):
90 resolved_args = [top] + args[1:]
91 elif len(args) == 1:
92 raise roslaunch.core.RLException("[%s] does not exist. please specify a package and launch file"%(top))
93 else:
94 try:
95 resolved = roslib.packages.find_resource(top, args[1])
96 if len(resolved) == 1:
97 resolved = resolved[0]
98 elif len(resolved) > 1:
99 raise roslaunch.core.RLException("multiple files named [%s] in package [%s]:%s\nPlease specify full path instead" % (args[1], top, ''.join(['\n- %s' % r for r in resolved])))
100 except rospkg.ResourceNotFound as e:
101 raise roslaunch.core.RLException("[%s] is not a package or launch file name"%top)
102 if not resolved:
103 raise roslaunch.core.RLException("cannot locate [%s] in package [%s]"%(args[1], top))
104 else:
105 resolved_args = [resolved] + args[2:]
106 return resolved_args
107
108 def _wait_for_master():
109 """
110 Block until ROS Master is online
111
112 :raise: :exc:`RuntimeError` If unexpected error occurs
113 """
114 m = roslaunch.core.Master() # get a handle to the default master
115 is_running = m.is_running()
116 if not is_running:
117 roslaunch.core.printlog("roscore/master is not yet running, will wait for it to start")
118 while not is_running:
119 time.sleep(0.1)
120 is_running = m.is_running()
121 if is_running:
122 roslaunch.core.printlog("master has started, initiating launch")
123 else:
124 raise RuntimeError("unknown error waiting for master to start")
125
126 _terminal_name = None
127
128 def _set_terminal(s):
129 import platform
130 if platform.system() in ['FreeBSD', 'Linux', 'Darwin', 'Unix']:
131 try:
132 print('\033]2;%s\007'%(s))
133 except:
134 pass
135
136 def update_terminal_name(ros_master_uri):
137 """
138 append master URI to the terminal name
139 """
140 if _terminal_name:
141 _set_terminal(_terminal_name + ' ' + ros_master_uri)
142
143 def change_terminal_name(args, is_core):
144 """
145 use echo (where available) to change the name of the terminal window
146 """
147 global _terminal_name
148 _terminal_name = 'roscore' if is_core else ','.join(args)
149 _set_terminal(_terminal_name)
150
151 def get_or_generate_uuid(options_runid, options_wait_for_master):
152 """
153 :param options_runid: run_id value from command-line or ``None``, ``str``
154 :param options_wait_for_master: the wait_for_master command
155 option. If this is True, it means that we must retrieve the
156 value from the parameter server and need to avoid any race
157 conditions with the roscore being initialized. ``bool``
158 """
159
160 # Three possible sources of the run_id:
161 #
162 # - if we're a child process, we get it from options_runid
163 # - if there's already a roscore running, read from the param server
164 # - generate one if we're running the roscore
165 if options_runid:
166 return options_runid
167
168 # #773: Generate a run_id to use if we launch a master
169 # process. If a master is already running, we'll get the
170 # run_id from it instead
171 param_server = rosgraph.Master('/roslaunch')
172 val = None
173 while val is None:
174 try:
175 val = param_server.getParam('/run_id')
176 except:
177 if not options_wait_for_master:
178 val = roslaunch.core.generate_run_id()
179 return val
180
181 def check_roslaunch(f):
182 """
183 Check roslaunch file for errors, returning error message if check fails. This routine
184 is mainly to support rostest's roslaunch_check.
185
186 :param f: roslaunch file name, ``str``
187 :returns: error message or ``None``
188 """
189 try:
190 rl_config = roslaunch.config.load_config_default([f], DEFAULT_MASTER_PORT, verbose=False)
191 except roslaunch.core.RLException as e:
192 return str(e)
193
194 errors = []
195 # check for missing deps
196 base_pkg, file_deps, missing = roslaunch.depends.roslaunch_deps([f])
197 for pkg, miss in missing.iteritems():
198 if miss:
199 errors.append("Missing manifest dependencies: %s/manifest.xml: %s"%(pkg, ', '.join(miss)))
200
201 # load all node defs
202 nodes = []
203 for filename, rldeps in file_deps.iteritems():
204 nodes.extend(rldeps.nodes)
205
206 # check for missing packages
207 rospack = rospkg.RosPack()
208 for pkg, node_type in nodes:
209 try:
210 rospack.get_path(pkg)
211 except:
212 errors.append("cannot find package [%s] for node [%s]"%(pkg, node_type))
213
214 # check for missing nodes
215 for pkg, node_type in nodes:
216 try:
217 if not roslib.packages.find_node(pkg, node_type):
218 errors.append("cannot find node [%s] in package [%s]"%(node_type, pkg))
219 except Exception as e:
220 errors.append("unable to find node [%s/%s]: %s"%(pkg, node_type, str(e)))
221
222 # Check for configuration errors, #2889
223 for err in rl_config.config_errors:
224 errors.append('ROSLaunch config error: %s' % err)
225
226 if errors:
227 return '\n'.join(errors)
228
229 def print_file_list(roslaunch_files):
230 """
231 :param roslaunch_files: list of launch files to load, ``str``
232
233 :returns: list of files involved in processing roslaunch_files, including the files themselves.
234 """
235 from roslaunch.config import load_config_default, get_roscore_filename
236 import roslaunch.xmlloader
237 try:
238 loader = roslaunch.xmlloader.XmlLoader(resolve_anon=False)
239 config = load_config_default(roslaunch_files, None, loader=loader, verbose=False, assign_machines=False)
240 files = [os.path.abspath(x) for x in set(config.roslaunch_files) - set([get_roscore_filename()])]
241 print('\n'.join(files))
242 except roslaunch.core.RLException as e:
243 print(str(e), file=sys.stderr)
244 sys.exit(1)
245
246
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/tools/roslaunch/src/roslaunch/rlutil.py b/tools/roslaunch/src/roslaunch/rlutil.py
--- a/tools/roslaunch/src/roslaunch/rlutil.py
+++ b/tools/roslaunch/src/roslaunch/rlutil.py
@@ -235,7 +235,7 @@
from roslaunch.config import load_config_default, get_roscore_filename
import roslaunch.xmlloader
try:
- loader = roslaunch.xmlloader.XmlLoader(resolve_anon=False)
+ loader = roslaunch.xmlloader.XmlLoader(resolve_anon=True)
config = load_config_default(roslaunch_files, None, loader=loader, verbose=False, assign_machines=False)
files = [os.path.abspath(x) for x in set(config.roslaunch_files) - set([get_roscore_filename()])]
print('\n'.join(files))
|
{"golden_diff": "diff --git a/tools/roslaunch/src/roslaunch/rlutil.py b/tools/roslaunch/src/roslaunch/rlutil.py\n--- a/tools/roslaunch/src/roslaunch/rlutil.py\n+++ b/tools/roslaunch/src/roslaunch/rlutil.py\n@@ -235,7 +235,7 @@\n from roslaunch.config import load_config_default, get_roscore_filename\n import roslaunch.xmlloader\n try:\n- loader = roslaunch.xmlloader.XmlLoader(resolve_anon=False)\n+ loader = roslaunch.xmlloader.XmlLoader(resolve_anon=True)\n config = load_config_default(roslaunch_files, None, loader=loader, verbose=False, assign_machines=False)\n files = [os.path.abspath(x) for x in set(config.roslaunch_files) - set([get_roscore_filename()])]\n print('\\n'.join(files))\n", "issue": "roslaunch --files prints error if multiple anon nodes have same id\nWhen running:\n\n```\nroslaunch --files package_name launch_file.launch\n```\n\nI get the following error:\n\n```\nroslaunch file contains multiple nodes named [/$(anon foo)].\nPlease check all <node> 'name' attributes to make sure they are unique.\nAlso check that $(anon id) use different ids.\n```\n\nNote that when actually launching the launch file, this will not be a problem because the anonymous node names will be expanded with unique suffixes. Also, the list of files does not relate to the expansion of anonymous node names, so the file list should be printable regardless.\n\nThis is similar in spirit to bugs #94 and #65, both of which suffer because roslaunch simply prints an error and quits when it finds non-unique node names instead of doing as much as possible to help you find your error.\n\n", "before_files": [{"content": "# Software License Agreement (BSD License)\n#\n# Copyright (c) 2009, Willow Garage, Inc.\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions\n# are met:\n#\n# * Redistributions of source code must retain the above copyright\n# notice, this list of conditions and the following disclaimer.\n# * Redistributions in binary form must reproduce the above\n# copyright notice, this list of conditions and the following\n# disclaimer in the documentation and/or other materials provided\n# with the distribution.\n# * Neither the name of Willow Garage, Inc. nor the names of its\n# contributors may be used to endorse or promote products derived\n# from this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n# \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS\n# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE\n# COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,\n# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,\n# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\n# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT\n# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN\n# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n# POSSIBILITY OF SUCH DAMAGE.\n\n\"\"\"\nUncategorized utility routines for roslaunch.\n\nThis API should not be considered stable.\n\"\"\"\n\nfrom __future__ import print_function\n\nimport os\nimport sys\nimport time\n\nimport roslib.packages\n\nimport rosclean\nimport rospkg\nimport rosgraph\n\nimport roslaunch.core\nimport roslaunch.config\nimport roslaunch.depends\nfrom rosmaster import DEFAULT_MASTER_PORT\n\ndef check_log_disk_usage():\n \"\"\"\n Check size of log directory. If high, print warning to user\n \"\"\"\n try:\n d = rospkg.get_log_dir()\n roslaunch.core.printlog(\"Checking log directory for disk usage. This may take awhile.\\nPress Ctrl-C to interrupt\") \n disk_usage = rosclean.get_disk_usage(d)\n # warn if over a gig\n if disk_usage > 1073741824:\n roslaunch.core.printerrlog(\"WARNING: disk usage in log directory [%s] is over 1GB.\\nIt's recommended that you use the 'rosclean' command.\"%d)\n else:\n roslaunch.core.printlog(\"Done checking log file disk usage. Usage is <1GB.\") \n except:\n pass\n\ndef resolve_launch_arguments(args):\n \"\"\"\n Resolve command-line args to roslaunch filenames.\n\n :returns: resolved filenames, ``[str]``\n \"\"\"\n\n # strip remapping args for processing\n args = rosgraph.myargv(args)\n \n # user can either specify:\n # - filename + launch args\n # - package + relative-filename + launch args\n if not args:\n return args\n resolved_args = None\n top = args[0]\n if os.path.isfile(top):\n resolved_args = [top] + args[1:]\n elif len(args) == 1:\n raise roslaunch.core.RLException(\"[%s] does not exist. 
please specify a package and launch file\"%(top))\n else:\n try:\n resolved = roslib.packages.find_resource(top, args[1])\n if len(resolved) == 1:\n resolved = resolved[0]\n elif len(resolved) > 1:\n raise roslaunch.core.RLException(\"multiple files named [%s] in package [%s]:%s\\nPlease specify full path instead\" % (args[1], top, ''.join(['\\n- %s' % r for r in resolved])))\n except rospkg.ResourceNotFound as e:\n raise roslaunch.core.RLException(\"[%s] is not a package or launch file name\"%top)\n if not resolved:\n raise roslaunch.core.RLException(\"cannot locate [%s] in package [%s]\"%(args[1], top))\n else:\n resolved_args = [resolved] + args[2:]\n return resolved_args\n\ndef _wait_for_master():\n \"\"\"\n Block until ROS Master is online\n \n :raise: :exc:`RuntimeError` If unexpected error occurs\n \"\"\"\n m = roslaunch.core.Master() # get a handle to the default master\n is_running = m.is_running()\n if not is_running:\n roslaunch.core.printlog(\"roscore/master is not yet running, will wait for it to start\")\n while not is_running:\n time.sleep(0.1)\n is_running = m.is_running()\n if is_running:\n roslaunch.core.printlog(\"master has started, initiating launch\")\n else:\n raise RuntimeError(\"unknown error waiting for master to start\")\n\n_terminal_name = None\n\ndef _set_terminal(s):\n import platform\n if platform.system() in ['FreeBSD', 'Linux', 'Darwin', 'Unix']:\n try:\n print('\\033]2;%s\\007'%(s))\n except:\n pass\n \ndef update_terminal_name(ros_master_uri):\n \"\"\"\n append master URI to the terminal name\n \"\"\"\n if _terminal_name:\n _set_terminal(_terminal_name + ' ' + ros_master_uri)\n\ndef change_terminal_name(args, is_core):\n \"\"\"\n use echo (where available) to change the name of the terminal window\n \"\"\"\n global _terminal_name\n _terminal_name = 'roscore' if is_core else ','.join(args)\n _set_terminal(_terminal_name)\n\ndef get_or_generate_uuid(options_runid, options_wait_for_master):\n \"\"\"\n :param options_runid: run_id value from command-line or ``None``, ``str``\n :param options_wait_for_master: the wait_for_master command\n option. If this is True, it means that we must retrieve the\n value from the parameter server and need to avoid any race\n conditions with the roscore being initialized. ``bool``\n \"\"\"\n\n # Three possible sources of the run_id:\n #\n # - if we're a child process, we get it from options_runid\n # - if there's already a roscore running, read from the param server\n # - generate one if we're running the roscore\n if options_runid:\n return options_runid\n\n # #773: Generate a run_id to use if we launch a master\n # process. If a master is already running, we'll get the\n # run_id from it instead\n param_server = rosgraph.Master('/roslaunch')\n val = None\n while val is None:\n try:\n val = param_server.getParam('/run_id')\n except:\n if not options_wait_for_master:\n val = roslaunch.core.generate_run_id()\n return val\n \ndef check_roslaunch(f):\n \"\"\"\n Check roslaunch file for errors, returning error message if check fails. 
This routine\n is mainly to support rostest's roslaunch_check.\n\n :param f: roslaunch file name, ``str``\n :returns: error message or ``None``\n \"\"\"\n try:\n rl_config = roslaunch.config.load_config_default([f], DEFAULT_MASTER_PORT, verbose=False)\n except roslaunch.core.RLException as e:\n return str(e)\n \n errors = []\n # check for missing deps\n base_pkg, file_deps, missing = roslaunch.depends.roslaunch_deps([f])\n for pkg, miss in missing.iteritems():\n if miss:\n errors.append(\"Missing manifest dependencies: %s/manifest.xml: %s\"%(pkg, ', '.join(miss)))\n \n # load all node defs\n nodes = []\n for filename, rldeps in file_deps.iteritems():\n nodes.extend(rldeps.nodes)\n\n # check for missing packages\n rospack = rospkg.RosPack()\n for pkg, node_type in nodes:\n try:\n rospack.get_path(pkg)\n except:\n errors.append(\"cannot find package [%s] for node [%s]\"%(pkg, node_type))\n\n # check for missing nodes\n for pkg, node_type in nodes:\n try:\n if not roslib.packages.find_node(pkg, node_type):\n errors.append(\"cannot find node [%s] in package [%s]\"%(node_type, pkg))\n except Exception as e:\n errors.append(\"unable to find node [%s/%s]: %s\"%(pkg, node_type, str(e)))\n \n # Check for configuration errors, #2889\n for err in rl_config.config_errors:\n errors.append('ROSLaunch config error: %s' % err)\n\n if errors:\n return '\\n'.join(errors)\n \ndef print_file_list(roslaunch_files):\n \"\"\"\n :param roslaunch_files: list of launch files to load, ``str``\n\n :returns: list of files involved in processing roslaunch_files, including the files themselves.\n \"\"\"\n from roslaunch.config import load_config_default, get_roscore_filename\n import roslaunch.xmlloader\n try:\n loader = roslaunch.xmlloader.XmlLoader(resolve_anon=False)\n config = load_config_default(roslaunch_files, None, loader=loader, verbose=False, assign_machines=False)\n files = [os.path.abspath(x) for x in set(config.roslaunch_files) - set([get_roscore_filename()])]\n print('\\n'.join(files))\n except roslaunch.core.RLException as e:\n print(str(e), file=sys.stderr)\n sys.exit(1)\n\n", "path": "tools/roslaunch/src/roslaunch/rlutil.py"}], "after_files": [{"content": "# Software License Agreement (BSD License)\n#\n# Copyright (c) 2009, Willow Garage, Inc.\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions\n# are met:\n#\n# * Redistributions of source code must retain the above copyright\n# notice, this list of conditions and the following disclaimer.\n# * Redistributions in binary form must reproduce the above\n# copyright notice, this list of conditions and the following\n# disclaimer in the documentation and/or other materials provided\n# with the distribution.\n# * Neither the name of Willow Garage, Inc. nor the names of its\n# contributors may be used to endorse or promote products derived\n# from this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n# \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS\n# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE\n# COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,\n# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,\n# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\n# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT\n# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN\n# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n# POSSIBILITY OF SUCH DAMAGE.\n\n\"\"\"\nUncategorized utility routines for roslaunch.\n\nThis API should not be considered stable.\n\"\"\"\n\nfrom __future__ import print_function\n\nimport os\nimport sys\nimport time\n\nimport roslib.packages\n\nimport rosclean\nimport rospkg\nimport rosgraph\n\nimport roslaunch.core\nimport roslaunch.config\nimport roslaunch.depends\nfrom rosmaster import DEFAULT_MASTER_PORT\n\ndef check_log_disk_usage():\n \"\"\"\n Check size of log directory. If high, print warning to user\n \"\"\"\n try:\n d = rospkg.get_log_dir()\n roslaunch.core.printlog(\"Checking log directory for disk usage. This may take awhile.\\nPress Ctrl-C to interrupt\") \n disk_usage = rosclean.get_disk_usage(d)\n # warn if over a gig\n if disk_usage > 1073741824:\n roslaunch.core.printerrlog(\"WARNING: disk usage in log directory [%s] is over 1GB.\\nIt's recommended that you use the 'rosclean' command.\"%d)\n else:\n roslaunch.core.printlog(\"Done checking log file disk usage. Usage is <1GB.\") \n except:\n pass\n\ndef resolve_launch_arguments(args):\n \"\"\"\n Resolve command-line args to roslaunch filenames.\n\n :returns: resolved filenames, ``[str]``\n \"\"\"\n\n # strip remapping args for processing\n args = rosgraph.myargv(args)\n \n # user can either specify:\n # - filename + launch args\n # - package + relative-filename + launch args\n if not args:\n return args\n resolved_args = None\n top = args[0]\n if os.path.isfile(top):\n resolved_args = [top] + args[1:]\n elif len(args) == 1:\n raise roslaunch.core.RLException(\"[%s] does not exist. 
please specify a package and launch file\"%(top))\n else:\n try:\n resolved = roslib.packages.find_resource(top, args[1])\n if len(resolved) == 1:\n resolved = resolved[0]\n elif len(resolved) > 1:\n raise roslaunch.core.RLException(\"multiple files named [%s] in package [%s]:%s\\nPlease specify full path instead\" % (args[1], top, ''.join(['\\n- %s' % r for r in resolved])))\n except rospkg.ResourceNotFound as e:\n raise roslaunch.core.RLException(\"[%s] is not a package or launch file name\"%top)\n if not resolved:\n raise roslaunch.core.RLException(\"cannot locate [%s] in package [%s]\"%(args[1], top))\n else:\n resolved_args = [resolved] + args[2:]\n return resolved_args\n\ndef _wait_for_master():\n \"\"\"\n Block until ROS Master is online\n \n :raise: :exc:`RuntimeError` If unexpected error occurs\n \"\"\"\n m = roslaunch.core.Master() # get a handle to the default master\n is_running = m.is_running()\n if not is_running:\n roslaunch.core.printlog(\"roscore/master is not yet running, will wait for it to start\")\n while not is_running:\n time.sleep(0.1)\n is_running = m.is_running()\n if is_running:\n roslaunch.core.printlog(\"master has started, initiating launch\")\n else:\n raise RuntimeError(\"unknown error waiting for master to start\")\n\n_terminal_name = None\n\ndef _set_terminal(s):\n import platform\n if platform.system() in ['FreeBSD', 'Linux', 'Darwin', 'Unix']:\n try:\n print('\\033]2;%s\\007'%(s))\n except:\n pass\n \ndef update_terminal_name(ros_master_uri):\n \"\"\"\n append master URI to the terminal name\n \"\"\"\n if _terminal_name:\n _set_terminal(_terminal_name + ' ' + ros_master_uri)\n\ndef change_terminal_name(args, is_core):\n \"\"\"\n use echo (where available) to change the name of the terminal window\n \"\"\"\n global _terminal_name\n _terminal_name = 'roscore' if is_core else ','.join(args)\n _set_terminal(_terminal_name)\n\ndef get_or_generate_uuid(options_runid, options_wait_for_master):\n \"\"\"\n :param options_runid: run_id value from command-line or ``None``, ``str``\n :param options_wait_for_master: the wait_for_master command\n option. If this is True, it means that we must retrieve the\n value from the parameter server and need to avoid any race\n conditions with the roscore being initialized. ``bool``\n \"\"\"\n\n # Three possible sources of the run_id:\n #\n # - if we're a child process, we get it from options_runid\n # - if there's already a roscore running, read from the param server\n # - generate one if we're running the roscore\n if options_runid:\n return options_runid\n\n # #773: Generate a run_id to use if we launch a master\n # process. If a master is already running, we'll get the\n # run_id from it instead\n param_server = rosgraph.Master('/roslaunch')\n val = None\n while val is None:\n try:\n val = param_server.getParam('/run_id')\n except:\n if not options_wait_for_master:\n val = roslaunch.core.generate_run_id()\n return val\n \ndef check_roslaunch(f):\n \"\"\"\n Check roslaunch file for errors, returning error message if check fails. 
This routine\n is mainly to support rostest's roslaunch_check.\n\n :param f: roslaunch file name, ``str``\n :returns: error message or ``None``\n \"\"\"\n try:\n rl_config = roslaunch.config.load_config_default([f], DEFAULT_MASTER_PORT, verbose=False)\n except roslaunch.core.RLException as e:\n return str(e)\n \n errors = []\n # check for missing deps\n base_pkg, file_deps, missing = roslaunch.depends.roslaunch_deps([f])\n for pkg, miss in missing.iteritems():\n if miss:\n errors.append(\"Missing manifest dependencies: %s/manifest.xml: %s\"%(pkg, ', '.join(miss)))\n \n # load all node defs\n nodes = []\n for filename, rldeps in file_deps.iteritems():\n nodes.extend(rldeps.nodes)\n\n # check for missing packages\n rospack = rospkg.RosPack()\n for pkg, node_type in nodes:\n try:\n rospack.get_path(pkg)\n except:\n errors.append(\"cannot find package [%s] for node [%s]\"%(pkg, node_type))\n\n # check for missing nodes\n for pkg, node_type in nodes:\n try:\n if not roslib.packages.find_node(pkg, node_type):\n errors.append(\"cannot find node [%s] in package [%s]\"%(node_type, pkg))\n except Exception as e:\n errors.append(\"unable to find node [%s/%s]: %s\"%(pkg, node_type, str(e)))\n \n # Check for configuration errors, #2889\n for err in rl_config.config_errors:\n errors.append('ROSLaunch config error: %s' % err)\n\n if errors:\n return '\\n'.join(errors)\n \ndef print_file_list(roslaunch_files):\n \"\"\"\n :param roslaunch_files: list of launch files to load, ``str``\n\n :returns: list of files involved in processing roslaunch_files, including the files themselves.\n \"\"\"\n from roslaunch.config import load_config_default, get_roscore_filename\n import roslaunch.xmlloader\n try:\n loader = roslaunch.xmlloader.XmlLoader(resolve_anon=True)\n config = load_config_default(roslaunch_files, None, loader=loader, verbose=False, assign_machines=False)\n files = [os.path.abspath(x) for x in set(config.roslaunch_files) - set([get_roscore_filename()])]\n print('\\n'.join(files))\n except roslaunch.core.RLException as e:\n print(str(e), file=sys.stderr)\n sys.exit(1)\n\n", "path": "tools/roslaunch/src/roslaunch/rlutil.py"}]}
| num_tokens: 3,231 | num_tokens_diff: 202 |
problem_id: gh_patches_debug_9186 | source: rasdani/github-patches | task_type: git_diff | in_source_id: fidals__shopelectro-199 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
SE Iterate tags in templates
[trello origin](https://trello.com/c/zulRj7lF/294-se-iterate-tags-in-templates)
[seo templates doc](https://docs.google.com/document/d/18DFBsuh6NT8hjyihOJ2bxw9zEe8z_070MBQrbAq0kvE/edit#)
**Problem**
After applying certain combinations of the multi-property (tag) filters, the SEO team found the heading *Блоки питания для ноутбуков устанавливает пользователь и от сети 220 В* (roughly, "Laptop power supplies, user-set and 220 V mains") [at this link](https://www.shopelectro.ru/catalog/categories/bloki-pitaniia-288/tags/ustanavlivaet-polzovatel-and-ot-seti-220-v/)
It reads badly. They propose changing the heading to:
*Блоки питания для ноутбуков от сети 220 В, выбор выходного напряжения - устанавливает пользователь* (roughly, "Laptop power supplies, 220 V mains, output voltage set by the user")
**Solution**
To rework the heading, we need to let the template reference the name of one specific tag.
This can be solved as follows: we pass the full-fledged tags into the SEO templates.
Right now we only have the truncated tags.titles
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `shopelectro/views/catalog.py`
Content:
```
1 from functools import partial
2
3 from django.conf import settings
4 from django.http import HttpResponse, HttpResponseForbidden
5 from django.shortcuts import render, get_object_or_404
6 from django.views.decorators.http import require_POST
7 from django.urls import reverse
8 from django_user_agents.utils import get_user_agent
9
10 from catalog.views import catalog
11 from images.models import Image
12 from pages import views as pages_views
13
14 from shopelectro import config
15 from shopelectro import models
16 from shopelectro.views.helpers import set_csrf_cookie
17
18 PRODUCTS_ON_PAGE_PC = 48
19 PRODUCTS_ON_PAGE_MOB = 10
20
21
22 def get_products_count(request):
23 """Get Products count for response context depends on the `user_agent`."""
24 mobile_view = get_user_agent(request).is_mobile
25 return PRODUCTS_ON_PAGE_MOB if mobile_view else PRODUCTS_ON_PAGE_PC
26
27
28 # CATALOG VIEWS
29 class CategoryTree(catalog.CategoryTree):
30 category_model = models.Category
31
32
33 @set_csrf_cookie
34 class ProductPage(catalog.ProductPage):
35 pk_url_kwarg = None
36 slug_url_kwarg = 'product_vendor_code'
37 slug_field = 'vendor_code'
38
39 queryset = (
40 models.Product.objects
41 .filter(category__isnull=False)
42 .prefetch_related('product_feedbacks', 'page__images')
43 .select_related('page')
44 )
45
46 def get_context_data(self, **kwargs):
47 context = super(ProductPage, self).get_context_data(**kwargs)
48
49 group_tags_pairs = (
50 models.Tag.objects
51 .filter(products=self.object)
52 .get_group_tags_pairs()
53 )
54
55 return {
56 **context,
57 'price_bounds': config.PRICE_BOUNDS,
58 'group_tags_pairs': group_tags_pairs
59 }
60
61
62 # SHOPELECTRO-SPECIFIC VIEWS
63 @set_csrf_cookie
64 class IndexPage(pages_views.CustomPageView):
65
66 def get_context_data(self, **kwargs):
67 """Extended method. Add product's images to context."""
68 context = super(IndexPage, self).get_context_data(**kwargs)
69 mobile_view = get_user_agent(self.request).is_mobile
70
71 top_products = (
72 models.Product.objects
73 .filter(id__in=settings.TOP_PRODUCTS)
74 .prefetch_related('category')
75 .select_related('page')
76 )
77
78 images = Image.objects.get_main_images_by_pages(
79 models.ProductPage.objects.filter(
80 shopelectro_product__in=top_products
81 )
82 )
83
84 categories = models.Category.objects.get_root_categories_by_products(
85 top_products)
86
87 prepared_top_products = []
88 if not mobile_view:
89 prepared_top_products = [
90 (product, images.get(product.page), categories.get(product))
91 for product in top_products
92 ]
93
94 return {
95 **context,
96 'category_tile': config.MAIN_PAGE_TILE,
97 'prepared_top_products': prepared_top_products,
98 }
99
100
101 def merge_products_and_images(products):
102 images = Image.objects.get_main_images_by_pages(
103 models.ProductPage.objects.filter(shopelectro_product__in=products)
104 )
105
106 return [
107 (product, images.get(product.page))
108 for product in products
109 ]
110
111
112 @set_csrf_cookie
113 class CategoryPage(catalog.CategoryPage):
114
115 def get_context_data(self, **kwargs):
116 """Add sorting options and view_types in context."""
117 context = super(CategoryPage, self).get_context_data(**kwargs)
118 products_on_page = get_products_count(self.request)
119
120 # tile is default view_type
121 view_type = self.request.session.get('view_type', 'tile')
122
123 category = context['category']
124
125 sorting = int(self.kwargs.get('sorting', 0))
126 sorting_option = config.category_sorting(sorting)
127
128 all_products = (
129 models.Product.objects
130 .prefetch_related('page__images')
131 .select_related('page')
132 .get_by_category(category, ordering=(sorting_option, ))
133 )
134
135 group_tags_pairs = (
136 models.Tag.objects
137 .filter(products__in=all_products)
138 .get_group_tags_pairs()
139 )
140
141 tags = self.kwargs.get('tags')
142 tags_metadata = {
143 'titles': '',
144 }
145
146 if tags:
147 slugs = models.Tag.parse_url_tags(tags)
148 tags = models.Tag.objects.filter(slug__in=slugs)
149
150 all_products = (
151 all_products
152 .filter(tags__in=tags)
153 # Use distinct because filtering by QuerySet tags,
154 # that related with products by many-to-many relation.
155 .distinct(sorting_option.lstrip('-'))
156 )
157
158 tags_titles = models.Tag.serialize_title_tags(
159 tags.get_group_tags_pairs()
160 )
161
162 tags_metadata['titles'] = tags_titles
163
164 def template_context(page, tags):
165 return {
166 'page': page,
167 'tags': tags,
168 }
169
170 page = context['page']
171 page.get_template_render_context = partial(
172 template_context, page, tags_metadata)
173
174 products = all_products.get_offset(0, products_on_page)
175
176 return {
177 **context,
178 'product_image_pairs': merge_products_and_images(products),
179 'group_tags_pairs': group_tags_pairs,
180 'total_products': all_products.count(),
181 'sorting_options': config.category_sorting(),
182 'sort': sorting,
183 'tags': tags,
184 'view_type': view_type,
185 'tags_metadata': tags_metadata,
186 'skip_canonical': bool(tags),
187 }
188
189
190 def load_more(request, category_slug, offset=0, sorting=0, tags=None):
191 """
192 Load more products of a given category.
193
194 :param sorting: preferred sorting index from CATEGORY_SORTING tuple
195 :param request: HttpRequest object
196 :param category_slug: Slug for a given category
197 :param offset: used for slicing QuerySet.
198 :return:
199 """
200 products_on_page = get_products_count(request)
201
202 category = get_object_or_404(models.CategoryPage, slug=category_slug).model
203 sorting_option = config.category_sorting(int(sorting))
204
205 products = (
206 models.Product.objects
207 .prefetch_related('page__images')
208 .select_related('page')
209 .get_by_category(category, ordering=(sorting_option,))
210 )
211
212 if tags:
213 tag_entities = models.Tag.objects.filter(
214 slug__in=models.Tag.parse_url_tags(tags)
215 )
216
217 products = (
218 products
219 .filter(tags__in=tag_entities)
220 # Use distinct because filtering by QuerySet tags,
221 # that related with products by many-to-many relation.
222 .distinct(sorting_option.lstrip('-'))
223 )
224
225 products = products.get_offset(int(offset), products_on_page)
226 view = request.session.get('view_type', 'tile')
227
228 return render(request, 'catalog/category_products.html', {
229 'product_image_pairs': merge_products_and_images(products),
230 'view_type': view,
231 'prods': products_on_page,
232 })
233
234
235 @require_POST
236 def save_feedback(request):
237 def get_keys_from_post(*args):
238 return {arg: request.POST.get(arg, '') for arg in args}
239
240 product_id = request.POST.get('id')
241 product = models.Product.objects.filter(id=product_id).first()
242 if not (product_id and product):
243 return HttpResponse(status=422)
244
245 fields = ['rating', 'name', 'dignities', 'limitations', 'general']
246 feedback_data = get_keys_from_post(*fields)
247
248 models.ProductFeedback.objects.create(product=product, **feedback_data)
249 return HttpResponse('ok')
250
251
252 @require_POST
253 def delete_feedback(request):
254 if not request.user.is_authenticated:
255 return HttpResponseForbidden('Not today, sly guy...')
256
257 feedback_id = request.POST.get('id')
258 feedback = models.ProductFeedback.objects.filter(id=feedback_id).first()
259 if not (feedback_id and feedback):
260 return HttpResponse(status=422)
261
262 feedback.delete()
263 return HttpResponse('Feedback with id={} was deleted.'.format(feedback_id))
264
265
266 class ProductsWithoutImages(catalog.ProductsWithoutImages):
267 model = models.Product
268
269
270 class ProductsWithoutText(catalog.ProductsWithoutText):
271 model = models.Product
272
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/shopelectro/views/catalog.py b/shopelectro/views/catalog.py
--- a/shopelectro/views/catalog.py
+++ b/shopelectro/views/catalog.py
@@ -141,6 +141,7 @@
tags = self.kwargs.get('tags')
tags_metadata = {
'titles': '',
+ 'raw': [],
}
if tags:
@@ -160,6 +161,7 @@
)
tags_metadata['titles'] = tags_titles
+ tags_metadata['raw'] = tags
def template_context(page, tags):
return {
|
{"golden_diff": "diff --git a/shopelectro/views/catalog.py b/shopelectro/views/catalog.py\n--- a/shopelectro/views/catalog.py\n+++ b/shopelectro/views/catalog.py\n@@ -141,6 +141,7 @@\n tags = self.kwargs.get('tags')\n tags_metadata = {\n 'titles': '',\n+ 'raw': [],\n }\n \n if tags:\n@@ -160,6 +161,7 @@\n )\n \n tags_metadata['titles'] = tags_titles\n+ tags_metadata['raw'] = tags\n \n def template_context(page, tags):\n return {\n", "issue": "SE Iterate tags in templates\n[trello origin](https://trello.com/c/zulRj7lF/294-se-iterate-tags-in-templates)\r\n[seo templates doc](https://docs.google.com/document/d/18DFBsuh6NT8hjyihOJ2bxw9zEe8z_070MBQrbAq0kvE/edit#)\r\n\r\n**\u041f\u0440\u043e\u0431\u043b\u0435\u043c\u0430**\r\n\u0421\u0435\u043e\u0448\u043d\u0438\u043a\u0438 \u043f\u043e\u0441\u043b\u0435 \u043d\u0435\u043a\u043e\u0442\u043e\u0440\u044b\u0445 \u043f\u0440\u0438\u043c\u0435\u043d\u0435\u043d\u0438\u0439 \u043c\u0443\u043b\u044c\u0442\u0438\u0441\u0432-\u0432 \u043e\u0431\u043d\u0430\u0440\u0443\u0436\u0438\u043b\u0438 \u0437\u0430\u0433\u043e\u043b\u043e\u0432\u043e\u043a *\u0411\u043b\u043e\u043a\u0438 \u043f\u0438\u0442\u0430\u043d\u0438\u044f \u0434\u043b\u044f \u043d\u043e\u0443\u0442\u0431\u0443\u043a\u043e\u0432 \u0443\u0441\u0442\u0430\u043d\u0430\u0432\u043b\u0438\u0432\u0430\u0435\u0442 \u043f\u043e\u043b\u044c\u0437\u043e\u0432\u0430\u0442\u0435\u043b\u044c \u0438 \u043e\u0442 \u0441\u0435\u0442\u0438 220 \u0412* [\u043f\u043e \u044d\u0442\u043e\u043c\u0443 \u043b\u0438\u043d\u043a\u0443](https://www.shopelectro.ru/catalog/categories/bloki-pitaniia-288/tags/ustanavlivaet-polzovatel-and-ot-seti-220-v/)\r\n\r\n\u041f\u043e\u043b\u0443\u0447\u0438\u043b\u043e\u0441\u044c \u043d\u0435\u0445\u043e\u0440\u043e\u0448\u043e. \u041f\u0440\u0435\u0434\u043b\u0430\u0433\u0430\u044e\u0442 \u0441\u0434\u0435\u043b\u0430\u0442\u044c \u0437\u0430\u0433\u043e\u043b\u043e\u0432\u043e\u043a \u0442\u0430\u043a\u0438\u043c:\r\n*\u0411\u043b\u043e\u043a\u0438 \u043f\u0438\u0442\u0430\u043d\u0438\u044f \u0434\u043b\u044f \u043d\u043e\u0443\u0442\u0431\u0443\u043a\u043e\u0432 \u043e\u0442 \u0441\u0435\u0442\u0438 220 \u0412, \u0432\u044b\u0431\u043e\u0440 \u0432\u044b\u0445\u043e\u0434\u043d\u043e\u0433\u043e \u043d\u0430\u043f\u0440\u044f\u0436\u0435\u043d\u0438\u044f - \u0443\u0441\u0442\u0430\u043d\u0430\u0432\u043b\u0438\u0432\u0430\u0435\u0442 \u043f\u043e\u043b\u044c\u0437\u043e\u0432\u0430\u0442\u0435\u043b\u044c*\r\n\r\n\r\n**\u0420\u0435\u0448\u0435\u043d\u0438\u0435**\r\n\u0427\u0442\u043e\u0431\u044b \u043f\u0435\u0440\u0435\u0434\u0435\u043b\u0430\u0442\u044c \u0437\u0430\u0433\u043e\u043b\u043e\u0432\u043e\u043a, \u043d\u0430\u043c \u043d\u0443\u0436\u043d\u043e \u0434\u043e\u0431\u0430\u0432\u0438\u0442\u044c \u0432 \u0448\u0430\u0431\u043b\u043e\u043d \u0432\u043e\u0437\u043c\u043e\u0436\u043d\u043e\u0441\u0442\u044c \u0443\u043a\u0430\u0437\u044b\u0432\u0430\u0442\u044c \u0438\u043c\u044f \u043e\u0434\u043d\u043e\u0433\u043e \u043a\u043e\u043d\u043a\u0440\u0435\u0442\u043d\u043e\u0433\u043e \u0442\u0435\u0433\u0430.\r\n\r\n\u0410 \u0440\u0435\u0448\u0438\u0442\u044c \u044d\u0442\u043e \u043c\u043e\u0436\u043d\u043e \u0442\u0430\u043a: \u0434\u043e\u0431\u0430\u0432\u043b\u044f\u0435\u043c \u043f\u043e\u043b\u043d\u043e\u0446\u0435\u043d\u043d\u044b\u0439 tags \u0432 seo-\u0448\u0430\u0431\u043b\u043e\u043d\u044b.\r\n\u0421\u0435\u0439\u0447\u0430\u0441 \u0443 \u043d\u0430\u0441 \u0442\u043e\u043b\u044c\u043a\u043e \u043e\u0431\u0440\u0435\u0437\u0430\u043d\u043d\u044b\u0439 
tags.titles\n", "before_files": [{"content": "from functools import partial\n\nfrom django.conf import settings\nfrom django.http import HttpResponse, HttpResponseForbidden\nfrom django.shortcuts import render, get_object_or_404\nfrom django.views.decorators.http import require_POST\nfrom django.urls import reverse\nfrom django_user_agents.utils import get_user_agent\n\nfrom catalog.views import catalog\nfrom images.models import Image\nfrom pages import views as pages_views\n\nfrom shopelectro import config\nfrom shopelectro import models\nfrom shopelectro.views.helpers import set_csrf_cookie\n\nPRODUCTS_ON_PAGE_PC = 48\nPRODUCTS_ON_PAGE_MOB = 10\n\n\ndef get_products_count(request):\n \"\"\"Get Products count for response context depends on the `user_agent`.\"\"\"\n mobile_view = get_user_agent(request).is_mobile\n return PRODUCTS_ON_PAGE_MOB if mobile_view else PRODUCTS_ON_PAGE_PC\n\n\n# CATALOG VIEWS\nclass CategoryTree(catalog.CategoryTree):\n category_model = models.Category\n\n\n@set_csrf_cookie\nclass ProductPage(catalog.ProductPage):\n pk_url_kwarg = None\n slug_url_kwarg = 'product_vendor_code'\n slug_field = 'vendor_code'\n\n queryset = (\n models.Product.objects\n .filter(category__isnull=False)\n .prefetch_related('product_feedbacks', 'page__images')\n .select_related('page')\n )\n\n def get_context_data(self, **kwargs):\n context = super(ProductPage, self).get_context_data(**kwargs)\n\n group_tags_pairs = (\n models.Tag.objects\n .filter(products=self.object)\n .get_group_tags_pairs()\n )\n\n return {\n **context,\n 'price_bounds': config.PRICE_BOUNDS,\n 'group_tags_pairs': group_tags_pairs\n }\n\n\n# SHOPELECTRO-SPECIFIC VIEWS\n@set_csrf_cookie\nclass IndexPage(pages_views.CustomPageView):\n\n def get_context_data(self, **kwargs):\n \"\"\"Extended method. 
Add product's images to context.\"\"\"\n context = super(IndexPage, self).get_context_data(**kwargs)\n mobile_view = get_user_agent(self.request).is_mobile\n\n top_products = (\n models.Product.objects\n .filter(id__in=settings.TOP_PRODUCTS)\n .prefetch_related('category')\n .select_related('page')\n )\n\n images = Image.objects.get_main_images_by_pages(\n models.ProductPage.objects.filter(\n shopelectro_product__in=top_products\n )\n )\n\n categories = models.Category.objects.get_root_categories_by_products(\n top_products)\n\n prepared_top_products = []\n if not mobile_view:\n prepared_top_products = [\n (product, images.get(product.page), categories.get(product))\n for product in top_products\n ]\n\n return {\n **context,\n 'category_tile': config.MAIN_PAGE_TILE,\n 'prepared_top_products': prepared_top_products,\n }\n\n\ndef merge_products_and_images(products):\n images = Image.objects.get_main_images_by_pages(\n models.ProductPage.objects.filter(shopelectro_product__in=products)\n )\n\n return [\n (product, images.get(product.page))\n for product in products\n ]\n\n\n@set_csrf_cookie\nclass CategoryPage(catalog.CategoryPage):\n\n def get_context_data(self, **kwargs):\n \"\"\"Add sorting options and view_types in context.\"\"\"\n context = super(CategoryPage, self).get_context_data(**kwargs)\n products_on_page = get_products_count(self.request)\n\n # tile is default view_type\n view_type = self.request.session.get('view_type', 'tile')\n\n category = context['category']\n\n sorting = int(self.kwargs.get('sorting', 0))\n sorting_option = config.category_sorting(sorting)\n\n all_products = (\n models.Product.objects\n .prefetch_related('page__images')\n .select_related('page')\n .get_by_category(category, ordering=(sorting_option, ))\n )\n\n group_tags_pairs = (\n models.Tag.objects\n .filter(products__in=all_products)\n .get_group_tags_pairs()\n )\n\n tags = self.kwargs.get('tags')\n tags_metadata = {\n 'titles': '',\n }\n\n if tags:\n slugs = models.Tag.parse_url_tags(tags)\n tags = models.Tag.objects.filter(slug__in=slugs)\n\n all_products = (\n all_products\n .filter(tags__in=tags)\n # Use distinct because filtering by QuerySet tags,\n # that related with products by many-to-many relation.\n .distinct(sorting_option.lstrip('-'))\n )\n\n tags_titles = models.Tag.serialize_title_tags(\n tags.get_group_tags_pairs()\n )\n\n tags_metadata['titles'] = tags_titles\n\n def template_context(page, tags):\n return {\n 'page': page,\n 'tags': tags,\n }\n\n page = context['page']\n page.get_template_render_context = partial(\n template_context, page, tags_metadata)\n\n products = all_products.get_offset(0, products_on_page)\n\n return {\n **context,\n 'product_image_pairs': merge_products_and_images(products),\n 'group_tags_pairs': group_tags_pairs,\n 'total_products': all_products.count(),\n 'sorting_options': config.category_sorting(),\n 'sort': sorting,\n 'tags': tags,\n 'view_type': view_type,\n 'tags_metadata': tags_metadata,\n 'skip_canonical': bool(tags),\n }\n\n\ndef load_more(request, category_slug, offset=0, sorting=0, tags=None):\n \"\"\"\n Load more products of a given category.\n\n :param sorting: preferred sorting index from CATEGORY_SORTING tuple\n :param request: HttpRequest object\n :param category_slug: Slug for a given category\n :param offset: used for slicing QuerySet.\n :return:\n \"\"\"\n products_on_page = get_products_count(request)\n\n category = get_object_or_404(models.CategoryPage, slug=category_slug).model\n sorting_option = config.category_sorting(int(sorting))\n\n 
products = (\n models.Product.objects\n .prefetch_related('page__images')\n .select_related('page')\n .get_by_category(category, ordering=(sorting_option,))\n )\n\n if tags:\n tag_entities = models.Tag.objects.filter(\n slug__in=models.Tag.parse_url_tags(tags)\n )\n\n products = (\n products\n .filter(tags__in=tag_entities)\n # Use distinct because filtering by QuerySet tags,\n # that related with products by many-to-many relation.\n .distinct(sorting_option.lstrip('-'))\n )\n\n products = products.get_offset(int(offset), products_on_page)\n view = request.session.get('view_type', 'tile')\n\n return render(request, 'catalog/category_products.html', {\n 'product_image_pairs': merge_products_and_images(products),\n 'view_type': view,\n 'prods': products_on_page,\n })\n\n\n@require_POST\ndef save_feedback(request):\n def get_keys_from_post(*args):\n return {arg: request.POST.get(arg, '') for arg in args}\n\n product_id = request.POST.get('id')\n product = models.Product.objects.filter(id=product_id).first()\n if not (product_id and product):\n return HttpResponse(status=422)\n\n fields = ['rating', 'name', 'dignities', 'limitations', 'general']\n feedback_data = get_keys_from_post(*fields)\n\n models.ProductFeedback.objects.create(product=product, **feedback_data)\n return HttpResponse('ok')\n\n\n@require_POST\ndef delete_feedback(request):\n if not request.user.is_authenticated:\n return HttpResponseForbidden('Not today, sly guy...')\n\n feedback_id = request.POST.get('id')\n feedback = models.ProductFeedback.objects.filter(id=feedback_id).first()\n if not (feedback_id and feedback):\n return HttpResponse(status=422)\n\n feedback.delete()\n return HttpResponse('Feedback with id={} was deleted.'.format(feedback_id))\n\n\nclass ProductsWithoutImages(catalog.ProductsWithoutImages):\n model = models.Product\n\n\nclass ProductsWithoutText(catalog.ProductsWithoutText):\n model = models.Product\n", "path": "shopelectro/views/catalog.py"}], "after_files": [{"content": "from functools import partial\n\nfrom django.conf import settings\nfrom django.http import HttpResponse, HttpResponseForbidden\nfrom django.shortcuts import render, get_object_or_404\nfrom django.views.decorators.http import require_POST\nfrom django.urls import reverse\nfrom django_user_agents.utils import get_user_agent\n\nfrom catalog.views import catalog\nfrom images.models import Image\nfrom pages import views as pages_views\n\nfrom shopelectro import config\nfrom shopelectro import models\nfrom shopelectro.views.helpers import set_csrf_cookie\n\nPRODUCTS_ON_PAGE_PC = 48\nPRODUCTS_ON_PAGE_MOB = 10\n\n\ndef get_products_count(request):\n \"\"\"Get Products count for response context depends on the `user_agent`.\"\"\"\n mobile_view = get_user_agent(request).is_mobile\n return PRODUCTS_ON_PAGE_MOB if mobile_view else PRODUCTS_ON_PAGE_PC\n\n\n# CATALOG VIEWS\nclass CategoryTree(catalog.CategoryTree):\n category_model = models.Category\n\n\n@set_csrf_cookie\nclass ProductPage(catalog.ProductPage):\n pk_url_kwarg = None\n slug_url_kwarg = 'product_vendor_code'\n slug_field = 'vendor_code'\n\n queryset = (\n models.Product.objects\n .filter(category__isnull=False)\n .prefetch_related('product_feedbacks', 'page__images')\n .select_related('page')\n )\n\n def get_context_data(self, **kwargs):\n context = super(ProductPage, self).get_context_data(**kwargs)\n\n group_tags_pairs = (\n models.Tag.objects\n .filter(products=self.object)\n .get_group_tags_pairs()\n )\n\n return {\n **context,\n 'price_bounds': config.PRICE_BOUNDS,\n 
'group_tags_pairs': group_tags_pairs\n }\n\n\n# SHOPELECTRO-SPECIFIC VIEWS\n@set_csrf_cookie\nclass IndexPage(pages_views.CustomPageView):\n\n def get_context_data(self, **kwargs):\n \"\"\"Extended method. Add product's images to context.\"\"\"\n context = super(IndexPage, self).get_context_data(**kwargs)\n mobile_view = get_user_agent(self.request).is_mobile\n\n top_products = (\n models.Product.objects\n .filter(id__in=settings.TOP_PRODUCTS)\n .prefetch_related('category')\n .select_related('page')\n )\n\n images = Image.objects.get_main_images_by_pages(\n models.ProductPage.objects.filter(\n shopelectro_product__in=top_products\n )\n )\n\n categories = models.Category.objects.get_root_categories_by_products(\n top_products)\n\n prepared_top_products = []\n if not mobile_view:\n prepared_top_products = [\n (product, images.get(product.page), categories.get(product))\n for product in top_products\n ]\n\n return {\n **context,\n 'category_tile': config.MAIN_PAGE_TILE,\n 'prepared_top_products': prepared_top_products,\n }\n\n\ndef merge_products_and_images(products):\n images = Image.objects.get_main_images_by_pages(\n models.ProductPage.objects.filter(shopelectro_product__in=products)\n )\n\n return [\n (product, images.get(product.page))\n for product in products\n ]\n\n\n@set_csrf_cookie\nclass CategoryPage(catalog.CategoryPage):\n\n def get_context_data(self, **kwargs):\n \"\"\"Add sorting options and view_types in context.\"\"\"\n context = super(CategoryPage, self).get_context_data(**kwargs)\n products_on_page = get_products_count(self.request)\n\n # tile is default view_type\n view_type = self.request.session.get('view_type', 'tile')\n\n category = context['category']\n\n sorting = int(self.kwargs.get('sorting', 0))\n sorting_option = config.category_sorting(sorting)\n\n all_products = (\n models.Product.objects\n .prefetch_related('page__images')\n .select_related('page')\n .get_by_category(category, ordering=(sorting_option, ))\n )\n\n group_tags_pairs = (\n models.Tag.objects\n .filter(products__in=all_products)\n .get_group_tags_pairs()\n )\n\n tags = self.kwargs.get('tags')\n tags_metadata = {\n 'titles': '',\n 'raw': [],\n }\n\n if tags:\n slugs = models.Tag.parse_url_tags(tags)\n tags = models.Tag.objects.filter(slug__in=slugs)\n\n all_products = (\n all_products\n .filter(tags__in=tags)\n # Use distinct because filtering by QuerySet tags,\n # that related with products by many-to-many relation.\n .distinct(sorting_option.lstrip('-'))\n )\n\n tags_titles = models.Tag.serialize_title_tags(\n tags.get_group_tags_pairs()\n )\n\n tags_metadata['titles'] = tags_titles\n tags_metadata['raw'] = tags\n\n def template_context(page, tags):\n return {\n 'page': page,\n 'tags': tags,\n }\n\n page = context['page']\n page.get_template_render_context = partial(\n template_context, page, tags_metadata)\n\n products = all_products.get_offset(0, products_on_page)\n\n return {\n **context,\n 'product_image_pairs': merge_products_and_images(products),\n 'group_tags_pairs': group_tags_pairs,\n 'total_products': all_products.count(),\n 'sorting_options': config.category_sorting(),\n 'sort': sorting,\n 'tags': tags,\n 'view_type': view_type,\n 'tags_metadata': tags_metadata,\n 'skip_canonical': bool(tags),\n }\n\n\ndef load_more(request, category_slug, offset=0, sorting=0, tags=None):\n \"\"\"\n Load more products of a given category.\n\n :param sorting: preferred sorting index from CATEGORY_SORTING tuple\n :param request: HttpRequest object\n :param category_slug: Slug for a given category\n 
:param offset: used for slicing QuerySet.\n :return:\n \"\"\"\n products_on_page = get_products_count(request)\n\n category = get_object_or_404(models.CategoryPage, slug=category_slug).model\n sorting_option = config.category_sorting(int(sorting))\n\n products = (\n models.Product.objects\n .prefetch_related('page__images')\n .select_related('page')\n .get_by_category(category, ordering=(sorting_option,))\n )\n\n if tags:\n tag_entities = models.Tag.objects.filter(\n slug__in=models.Tag.parse_url_tags(tags)\n )\n\n products = (\n products\n .filter(tags__in=tag_entities)\n # Use distinct because filtering by QuerySet tags,\n # that related with products by many-to-many relation.\n .distinct(sorting_option.lstrip('-'))\n )\n\n products = products.get_offset(int(offset), products_on_page)\n view = request.session.get('view_type', 'tile')\n\n return render(request, 'catalog/category_products.html', {\n 'product_image_pairs': merge_products_and_images(products),\n 'view_type': view,\n 'prods': products_on_page,\n })\n\n\n@require_POST\ndef save_feedback(request):\n def get_keys_from_post(*args):\n return {arg: request.POST.get(arg, '') for arg in args}\n\n product_id = request.POST.get('id')\n product = models.Product.objects.filter(id=product_id).first()\n if not (product_id and product):\n return HttpResponse(status=422)\n\n fields = ['rating', 'name', 'dignities', 'limitations', 'general']\n feedback_data = get_keys_from_post(*fields)\n\n models.ProductFeedback.objects.create(product=product, **feedback_data)\n return HttpResponse('ok')\n\n\n@require_POST\ndef delete_feedback(request):\n if not request.user.is_authenticated:\n return HttpResponseForbidden('Not today, sly guy...')\n\n feedback_id = request.POST.get('id')\n feedback = models.ProductFeedback.objects.filter(id=feedback_id).first()\n if not (feedback_id and feedback):\n return HttpResponse(status=422)\n\n feedback.delete()\n return HttpResponse('Feedback with id={} was deleted.'.format(feedback_id))\n\n\nclass ProductsWithoutImages(catalog.ProductsWithoutImages):\n model = models.Product\n\n\nclass ProductsWithoutText(catalog.ProductsWithoutText):\n model = models.Product\n", "path": "shopelectro/views/catalog.py"}]}
| 3,027 | 134 |
gh_patches_debug_17994
|
rasdani/github-patches
|
git_diff
|
alltheplaces__alltheplaces-6938
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Anthropologie spider produces transposed coordinates
https://www.alltheplaces.xyz/map/#7.69/-75.171/39.95

The cause is the upstream data:
https://www.anthropologie.com/stores/rittenhouse-square-philadelphia

It might be worth doing any of the following:
- Suspend the lat/long from the parser for now
- Contact the company (I'll probably do that shortly) about the bug
- Any kind of high-level validations that can check the expected bounds for a scraper vs. the results?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/anthropologie.py`
Content:
```
1 from scrapy.spiders import SitemapSpider
2
3 from locations.structured_data_spider import StructuredDataSpider
4
5
6 class AnthropologieSpider(SitemapSpider, StructuredDataSpider):
7 name = "anthropologie"
8 item_attributes = {"brand": "Anthropologie", "brand_wikidata": "Q4773903"}
9 allowed_domains = ["anthropologie.com"]
10 sitemap_urls = ["https://www.anthropologie.com/store_sitemap.xml"]
11 sitemap_rules = [("/stores/", "parse_sd")]
12 requires_proxy = True
13
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/locations/spiders/anthropologie.py b/locations/spiders/anthropologie.py
--- a/locations/spiders/anthropologie.py
+++ b/locations/spiders/anthropologie.py
@@ -1,5 +1,6 @@
from scrapy.spiders import SitemapSpider
+from locations.items import set_closed
from locations.structured_data_spider import StructuredDataSpider
@@ -10,3 +11,17 @@
sitemap_urls = ["https://www.anthropologie.com/store_sitemap.xml"]
sitemap_rules = [("/stores/", "parse_sd")]
requires_proxy = True
+
+ def pre_process_data(self, ld_data, **kwargs):
+ ld_data["geo"]["latitude"], ld_data["geo"]["longitude"] = (
+ ld_data["geo"]["longitude"],
+ ld_data["geo"]["latitude"],
+ )
+
+ def post_process_item(self, item, response, ld_data, **kwargs):
+ item["branch"] = item.pop("name").removeprefix(" - Anthropologie Store")
+
+ if item["branch"].startswith("Closed - ") or item["branch"].endswith(" - Closed"):
+ set_closed(item)
+
+ yield item
|
{"golden_diff": "diff --git a/locations/spiders/anthropologie.py b/locations/spiders/anthropologie.py\n--- a/locations/spiders/anthropologie.py\n+++ b/locations/spiders/anthropologie.py\n@@ -1,5 +1,6 @@\n from scrapy.spiders import SitemapSpider\n \n+from locations.items import set_closed\n from locations.structured_data_spider import StructuredDataSpider\n \n \n@@ -10,3 +11,17 @@\n sitemap_urls = [\"https://www.anthropologie.com/store_sitemap.xml\"]\n sitemap_rules = [(\"/stores/\", \"parse_sd\")]\n requires_proxy = True\n+\n+ def pre_process_data(self, ld_data, **kwargs):\n+ ld_data[\"geo\"][\"latitude\"], ld_data[\"geo\"][\"longitude\"] = (\n+ ld_data[\"geo\"][\"longitude\"],\n+ ld_data[\"geo\"][\"latitude\"],\n+ )\n+\n+ def post_process_item(self, item, response, ld_data, **kwargs):\n+ item[\"branch\"] = item.pop(\"name\").removeprefix(\" - Anthropologie Store\")\n+\n+ if item[\"branch\"].startswith(\"Closed - \") or item[\"branch\"].endswith(\" - Closed\"):\n+ set_closed(item)\n+\n+ yield item\n", "issue": "Anthropologie spider produces transposed coordinates\nhttps://www.alltheplaces.xyz/map/#7.69/-75.171/39.95\r\n\r\n\r\n\r\nThe cause is the upstream data:\r\n\r\nhttps://www.anthropologie.com/stores/rittenhouse-square-philadelphia\r\n\r\n\r\nIt might be worth doing any of the following:\r\n\r\n- Suspend the lat/long from the parser for now\r\n- Contact the company (I'll probably do that shortly) about the bug\r\n- Any kind of high level validations that can check the expected bounds for a scraper, vs the results?\r\n\r\n\n", "before_files": [{"content": "from scrapy.spiders import SitemapSpider\n\nfrom locations.structured_data_spider import StructuredDataSpider\n\n\nclass AnthropologieSpider(SitemapSpider, StructuredDataSpider):\n name = \"anthropologie\"\n item_attributes = {\"brand\": \"Anthropologie\", \"brand_wikidata\": \"Q4773903\"}\n allowed_domains = [\"anthropologie.com\"]\n sitemap_urls = [\"https://www.anthropologie.com/store_sitemap.xml\"]\n sitemap_rules = [(\"/stores/\", \"parse_sd\")]\n requires_proxy = True\n", "path": "locations/spiders/anthropologie.py"}], "after_files": [{"content": "from scrapy.spiders import SitemapSpider\n\nfrom locations.items import set_closed\nfrom locations.structured_data_spider import StructuredDataSpider\n\n\nclass AnthropologieSpider(SitemapSpider, StructuredDataSpider):\n name = \"anthropologie\"\n item_attributes = {\"brand\": \"Anthropologie\", \"brand_wikidata\": \"Q4773903\"}\n allowed_domains = [\"anthropologie.com\"]\n sitemap_urls = [\"https://www.anthropologie.com/store_sitemap.xml\"]\n sitemap_rules = [(\"/stores/\", \"parse_sd\")]\n requires_proxy = True\n\n def pre_process_data(self, ld_data, **kwargs):\n ld_data[\"geo\"][\"latitude\"], ld_data[\"geo\"][\"longitude\"] = (\n ld_data[\"geo\"][\"longitude\"],\n ld_data[\"geo\"][\"latitude\"],\n )\n\n def post_process_item(self, item, response, ld_data, **kwargs):\n item[\"branch\"] = item.pop(\"name\").removeprefix(\" - Anthropologie Store\")\n\n if item[\"branch\"].startswith(\"Closed - \") or item[\"branch\"].endswith(\" - Closed\"):\n set_closed(item)\n\n yield item\n", "path": "locations/spiders/anthropologie.py"}]}
| 630 | 268 |
gh_patches_debug_29189
|
rasdani/github-patches
|
git_diff
|
pytorch__pytorch-53822
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Problems in TensorPipeRpcBackendOptions device mapping documentation?
## 📚 Documentation
The new release of PyTorch 1.8 introduces CUDA-support in RPC.
I've referred to the RPC documentation, and the only reference for the CUDA-support I could find is under [`TensorPipeRpcBackendOptions`](https://pytorch.org/docs/1.8.0/rpc.html#torch.distributed.rpc.TensorPipeRpcBackendOptions) and [`set_device_map`](https://pytorch.org/docs/1.8.0/rpc.html#torch.distributed.rpc.TensorPipeRpcBackendOptions.set_device_map).
Seems like setting up CUDA-support is simply done by supplying a device mapping in the `TensorPipeRpcBackendOptions`, pretty cool.
However, I find the documentation for the `device_maps`/`device_map` to be unclear. It seems that `TensorPipeRpcBackendOptions`'s `device_maps` is a dictionary where the keys are worker names, but I'm not exactly sure what the structure of the dictionary's values should be like? Supposedly each value should be some sort of dictionary (as indicated by the parameter's type - `Dict[str, Dict]`), yet the example code provides a set: `device_maps={"worker1": {0, 1}}`. I don't really understand how does this "map worker0's cuda:0 to worker1's cuda:1"?
Same for `set_device_map`'s `device_map`, the parameter's type also indicates it's a dictionary (`(Dict of python:int, str, or torch.device)`), but doesn't quite explain its structure. And again, the example code provides a set: `options.set_device_map("worker1", {1, 2})`.
It is also not explained how to define a GPU->CPU mapping (or vice versa).
Apart for this, there are 2 obvious errors in the example code provided in that documentation:
1. There is a missing comma in the following part:
```python
>>> rpc.init_rpc(
>>> "worker0",
>>> rank=0,
>>> world_size=2 # <-- missing comma
>>> backend=rpc.BackendType.TENSORPIPE,
>>> rpc_backend_options=options
>>> )
```
2. I don't see how it is possible that those two `print`s will give different results. I'm guessing that the second line should read `print(rets[1])`?
```python
>>> print(rets[0]) # tensor([2., 2.], device='cuda:0')
>>> print(rets[0]) # tensor([2., 2.], device='cuda:1')
```
cc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @rohan-varma @jjlilley @osalpekar @jiayisuse @mrzzd @agolynski @SciPioneer @H-Huang @cbalioglu
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torch/distributed/rpc/options.py`
Content:
```
1 from torch._C._distributed_rpc import _TensorPipeRpcBackendOptionsBase
2 from . import constants as rpc_contants
3
4 import torch
5
6 from typing import Dict, List
7
8
9 class TensorPipeRpcBackendOptions(_TensorPipeRpcBackendOptionsBase):
10 r"""
11 The backend options for
12 :class:`~torch.distributed.rpc.TensorPipeAgent`, derived from
13 :class:`~torch.distributed.rpc.RpcBackendOptions`.
14
15 Args:
16 num_worker_threads (int, optional): The number of threads in the
17 thread-pool used by
18 :class:`~torch.distributed.rpc.TensorPipeAgent` to execute
19 requests (default: 16).
20 rpc_timeout (float, optional): The default timeout, in seconds,
21 for RPC requests (default: 60 seconds). If the RPC has not
22 completed in this timeframe, an exception indicating so will
23 be raised. Callers can override this timeout for individual
24 RPCs in :meth:`~torch.distributed.rpc.rpc_sync` and
25 :meth:`~torch.distributed.rpc.rpc_async` if necessary.
26 init_method (str, optional): The URL to initialize the distributed
27 store used for rendezvous. It takes any value accepted for the
28 same argument of :meth:`~torch.distributed.init_process_group`
29 (default: ``env://``).
30 device_maps (Dict[str, Dict]): Device placement mappings from this
31 worker to the callee. Key is the callee worker name and value the
32 dictionary (``Dict`` of ``int``, ``str``, or ``torch.device``) that
33 maps this worker's devices to the callee worker's devices.
34 (default: ``None``)
35 """
36 def __init__(
37 self,
38 *,
39 num_worker_threads: int = rpc_contants.DEFAULT_NUM_WORKER_THREADS,
40 rpc_timeout: float = rpc_contants.DEFAULT_RPC_TIMEOUT_SEC,
41 init_method: str = rpc_contants.DEFAULT_INIT_METHOD,
42 device_maps: Dict = None,
43 _transports: List = None,
44 _channels: List = None,
45 ):
46 super().__init__(
47 num_worker_threads,
48 _transports,
49 _channels,
50 rpc_timeout,
51 init_method,
52 device_maps if device_maps else {}
53 )
54
55 def set_device_map(self, to: str, device_map: Dict):
56 r"""
57 Set device mapping between each RPC caller and callee pair. This
58 function can be called multiple times to incrementally add
59 device placement configurations.
60
61 Args:
62 worker_name (str): Callee name.
63 device_map (Dict of int, str, or torch.device): Device placement
64 mappings from this worker to the callee. This map must be
65 invertible.
66
67 Example::
68 >>> # both workers
69 >>> def add(x, y):
70 >>> print(x) # tensor([1., 1.], device='cuda:1')
71 >>> return x + y, (x + y).to(2)
72 >>>
73 >>> # on worker 0
74 >>> options = TensorPipeRpcBackendOptions(
75 >>> num_worker_threads=8,
76 >>> device_maps={"worker1": {0, 1}}
77 >>> # maps worker0's cuda:0 to worker1's cuda:1
78 >>> )
79 >>> options.set_device_map("worker1", {1, 2})
80 >>> # maps worker0's cuda:1 to worker1's cuda:2
81 >>>
82 >>> rpc.init_rpc(
83 >>> "worker0",
84 >>> rank=0,
85 >>> world_size=2
86 >>> backend=rpc.BackendType.TENSORPIPE,
87 >>> rpc_backend_options=options
88 >>> )
89 >>>
90 >>> x = torch.ones(2)
91 >>> rets = rpc.rpc_sync("worker1", add, args=(x.to(0), 1))
92 >>> # The first argument will be moved to cuda:1 on worker1. When
93 >>> # sending the return value back, it will follow the invert of
94 >>> # the device map, and hence will be moved back to cuda:0 and
95 >>> # cuda:1 on worker0
96 >>> print(rets[0]) # tensor([2., 2.], device='cuda:0')
97 >>> print(rets[0]) # tensor([2., 2.], device='cuda:1')
98 """
99 device_index_map = {}
100 curr_device_maps = super().device_maps
101 for k in device_map:
102 v = device_map[k]
103 k, v = torch.device(k), torch.device(v)
104 if k.type != 'cuda' or v.type != 'cuda':
105 raise ValueError(
106 "`set_device_map` only supports CUDA devices, "
107 f"but got device pair {k}: {v}"
108
109 )
110 if to in curr_device_maps and k.index in curr_device_maps[to]:
111 curr_v = super().device_maps[to][k.index]
112 if curr_v != v.index:
113 raise ValueError(
114 "`set_device_map` only supports 1-to-1 mapping, "
115 f"trying to map {k} to {v} and {curr_v}"
116 )
117 device_index_map[k.index] = v.index
118 super().set_device_map(to, device_index_map)
119
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/torch/distributed/rpc/options.py b/torch/distributed/rpc/options.py
--- a/torch/distributed/rpc/options.py
+++ b/torch/distributed/rpc/options.py
@@ -73,16 +73,16 @@
>>> # on worker 0
>>> options = TensorPipeRpcBackendOptions(
>>> num_worker_threads=8,
- >>> device_maps={"worker1": {0, 1}}
+ >>> device_maps={"worker1": {0: 1}}
>>> # maps worker0's cuda:0 to worker1's cuda:1
>>> )
- >>> options.set_device_map("worker1", {1, 2})
+ >>> options.set_device_map("worker1", {1: 2})
>>> # maps worker0's cuda:1 to worker1's cuda:2
>>>
>>> rpc.init_rpc(
>>> "worker0",
>>> rank=0,
- >>> world_size=2
+ >>> world_size=2,
>>> backend=rpc.BackendType.TENSORPIPE,
>>> rpc_backend_options=options
>>> )
@@ -94,7 +94,7 @@
>>> # the device map, and hence will be moved back to cuda:0 and
>>> # cuda:1 on worker0
>>> print(rets[0]) # tensor([2., 2.], device='cuda:0')
- >>> print(rets[0]) # tensor([2., 2.], device='cuda:1')
+ >>> print(rets[1]) # tensor([2., 2.], device='cuda:1')
"""
device_index_map = {}
curr_device_maps = super().device_maps
|
{"golden_diff": "diff --git a/torch/distributed/rpc/options.py b/torch/distributed/rpc/options.py\n--- a/torch/distributed/rpc/options.py\n+++ b/torch/distributed/rpc/options.py\n@@ -73,16 +73,16 @@\n >>> # on worker 0\n >>> options = TensorPipeRpcBackendOptions(\n >>> num_worker_threads=8,\n- >>> device_maps={\"worker1\": {0, 1}}\n+ >>> device_maps={\"worker1\": {0: 1}}\n >>> # maps worker0's cuda:0 to worker1's cuda:1\n >>> )\n- >>> options.set_device_map(\"worker1\", {1, 2})\n+ >>> options.set_device_map(\"worker1\", {1: 2})\n >>> # maps worker0's cuda:1 to worker1's cuda:2\n >>>\n >>> rpc.init_rpc(\n >>> \"worker0\",\n >>> rank=0,\n- >>> world_size=2\n+ >>> world_size=2,\n >>> backend=rpc.BackendType.TENSORPIPE,\n >>> rpc_backend_options=options\n >>> )\n@@ -94,7 +94,7 @@\n >>> # the device map, and hence will be moved back to cuda:0 and\n >>> # cuda:1 on worker0\n >>> print(rets[0]) # tensor([2., 2.], device='cuda:0')\n- >>> print(rets[0]) # tensor([2., 2.], device='cuda:1')\n+ >>> print(rets[1]) # tensor([2., 2.], device='cuda:1')\n \"\"\"\n device_index_map = {}\n curr_device_maps = super().device_maps\n", "issue": "Problems in TensorPipeRpcBackendOptions device mapping documentation?\n## \ud83d\udcda Documentation\r\n\r\nThe new release of PyTorch 1.8 introduces CUDA-support in RPC.\r\nI've referred to the RPC documentation, and the only reference for the CUDA-support I could find is under [`TensorPipeRpcBackendOptions`](https://pytorch.org/docs/1.8.0/rpc.html#torch.distributed.rpc.TensorPipeRpcBackendOptions) and [`set_device_map`](https://pytorch.org/docs/1.8.0/rpc.html#torch.distributed.rpc.TensorPipeRpcBackendOptions.set_device_map).\r\nSeems like setting up CUDA-support is simply done by supplying a device mapping in the `TensorPipeRpcBackendOptions`, pretty cool.\r\n\r\nHowever, I find the documentation for the `device_maps`/`device_map` to be unclear. It seems that `TensorPipeRpcBackendOptions`'s `device_maps` is a dictionary where the keys are worker names, but I'm not exactly sure what the structure of the dictionary's values should be like? Supposedly each value should be some sort of dictionary (as indicated by the parameter's type - `Dict[str, Dict]`), yet the example code provides a set: `device_maps={\"worker1\": {0, 1}}`. I don't really understand how does this \"map worker0's cuda:0 to worker1's cuda:1\"?\r\n\r\nSame for `set_device_map`'s `device_map`, the parameter's type also indicates it's a dictionary (`(Dict of python:int, str, or torch.device)`), but doesn't quite explain its structure. And again, the example code provides a set: `options.set_device_map(\"worker1\", {1, 2})`.\r\n\r\nIt is also not explained how to define a GPU->CPU mapping (or vice versa).\r\n\r\nApart for this, there are 2 obvious errors in the example code provided in that documentation:\r\n\r\n1. There is a missing comma in the following part:\r\n```python\r\n>>> rpc.init_rpc(\r\n>>> \"worker0\",\r\n>>> rank=0,\r\n>>> world_size=2 # <-- missing comma\r\n>>> backend=rpc.BackendType.TENSORPIPE,\r\n>>> rpc_backend_options=options\r\n>>> )\r\n```\r\n2. I don't see how it is possible that those two `print`s will give different results. 
I'm guessing that the second line should read `print(rets[1])`?\r\n```python\r\n>>> print(rets[0]) # tensor([2., 2.], device='cuda:0')\r\n>>> print(rets[0]) # tensor([2., 2.], device='cuda:1')\r\n```\n\ncc @pietern @mrshenli @pritamdamania87 @zhaojuanmao @satgera @gqchen @aazzolini @rohan-varma @jjlilley @osalpekar @jiayisuse @mrzzd @agolynski @SciPioneer @H-Huang @cbalioglu\n", "before_files": [{"content": "from torch._C._distributed_rpc import _TensorPipeRpcBackendOptionsBase\nfrom . import constants as rpc_contants\n\nimport torch\n\nfrom typing import Dict, List\n\n\nclass TensorPipeRpcBackendOptions(_TensorPipeRpcBackendOptionsBase):\n r\"\"\"\n The backend options for\n :class:`~torch.distributed.rpc.TensorPipeAgent`, derived from\n :class:`~torch.distributed.rpc.RpcBackendOptions`.\n\n Args:\n num_worker_threads (int, optional): The number of threads in the\n thread-pool used by\n :class:`~torch.distributed.rpc.TensorPipeAgent` to execute\n requests (default: 16).\n rpc_timeout (float, optional): The default timeout, in seconds,\n for RPC requests (default: 60 seconds). If the RPC has not\n completed in this timeframe, an exception indicating so will\n be raised. Callers can override this timeout for individual\n RPCs in :meth:`~torch.distributed.rpc.rpc_sync` and\n :meth:`~torch.distributed.rpc.rpc_async` if necessary.\n init_method (str, optional): The URL to initialize the distributed\n store used for rendezvous. It takes any value accepted for the\n same argument of :meth:`~torch.distributed.init_process_group`\n (default: ``env://``).\n device_maps (Dict[str, Dict]): Device placement mappings from this\n worker to the callee. Key is the callee worker name and value the\n dictionary (``Dict`` of ``int``, ``str``, or ``torch.device``) that\n maps this worker's devices to the callee worker's devices.\n (default: ``None``)\n \"\"\"\n def __init__(\n self,\n *,\n num_worker_threads: int = rpc_contants.DEFAULT_NUM_WORKER_THREADS,\n rpc_timeout: float = rpc_contants.DEFAULT_RPC_TIMEOUT_SEC,\n init_method: str = rpc_contants.DEFAULT_INIT_METHOD,\n device_maps: Dict = None,\n _transports: List = None,\n _channels: List = None,\n ):\n super().__init__(\n num_worker_threads,\n _transports,\n _channels,\n rpc_timeout,\n init_method,\n device_maps if device_maps else {}\n )\n\n def set_device_map(self, to: str, device_map: Dict):\n r\"\"\"\n Set device mapping between each RPC caller and callee pair. This\n function can be called multiple times to incrementally add\n device placement configurations.\n\n Args:\n worker_name (str): Callee name.\n device_map (Dict of int, str, or torch.device): Device placement\n mappings from this worker to the callee. This map must be\n invertible.\n\n Example::\n >>> # both workers\n >>> def add(x, y):\n >>> print(x) # tensor([1., 1.], device='cuda:1')\n >>> return x + y, (x + y).to(2)\n >>>\n >>> # on worker 0\n >>> options = TensorPipeRpcBackendOptions(\n >>> num_worker_threads=8,\n >>> device_maps={\"worker1\": {0, 1}}\n >>> # maps worker0's cuda:0 to worker1's cuda:1\n >>> )\n >>> options.set_device_map(\"worker1\", {1, 2})\n >>> # maps worker0's cuda:1 to worker1's cuda:2\n >>>\n >>> rpc.init_rpc(\n >>> \"worker0\",\n >>> rank=0,\n >>> world_size=2\n >>> backend=rpc.BackendType.TENSORPIPE,\n >>> rpc_backend_options=options\n >>> )\n >>>\n >>> x = torch.ones(2)\n >>> rets = rpc.rpc_sync(\"worker1\", add, args=(x.to(0), 1))\n >>> # The first argument will be moved to cuda:1 on worker1. 
When\n >>> # sending the return value back, it will follow the invert of\n >>> # the device map, and hence will be moved back to cuda:0 and\n >>> # cuda:1 on worker0\n >>> print(rets[0]) # tensor([2., 2.], device='cuda:0')\n >>> print(rets[0]) # tensor([2., 2.], device='cuda:1')\n \"\"\"\n device_index_map = {}\n curr_device_maps = super().device_maps\n for k in device_map:\n v = device_map[k]\n k, v = torch.device(k), torch.device(v)\n if k.type != 'cuda' or v.type != 'cuda':\n raise ValueError(\n \"`set_device_map` only supports CUDA devices, \"\n f\"but got device pair {k}: {v}\"\n\n )\n if to in curr_device_maps and k.index in curr_device_maps[to]:\n curr_v = super().device_maps[to][k.index]\n if curr_v != v.index:\n raise ValueError(\n \"`set_device_map` only supports 1-to-1 mapping, \"\n f\"trying to map {k} to {v} and {curr_v}\"\n )\n device_index_map[k.index] = v.index\n super().set_device_map(to, device_index_map)\n", "path": "torch/distributed/rpc/options.py"}], "after_files": [{"content": "from torch._C._distributed_rpc import _TensorPipeRpcBackendOptionsBase\nfrom . import constants as rpc_contants\n\nimport torch\n\nfrom typing import Dict, List\n\n\nclass TensorPipeRpcBackendOptions(_TensorPipeRpcBackendOptionsBase):\n r\"\"\"\n The backend options for\n :class:`~torch.distributed.rpc.TensorPipeAgent`, derived from\n :class:`~torch.distributed.rpc.RpcBackendOptions`.\n\n Args:\n num_worker_threads (int, optional): The number of threads in the\n thread-pool used by\n :class:`~torch.distributed.rpc.TensorPipeAgent` to execute\n requests (default: 16).\n rpc_timeout (float, optional): The default timeout, in seconds,\n for RPC requests (default: 60 seconds). If the RPC has not\n completed in this timeframe, an exception indicating so will\n be raised. Callers can override this timeout for individual\n RPCs in :meth:`~torch.distributed.rpc.rpc_sync` and\n :meth:`~torch.distributed.rpc.rpc_async` if necessary.\n init_method (str, optional): The URL to initialize the distributed\n store used for rendezvous. It takes any value accepted for the\n same argument of :meth:`~torch.distributed.init_process_group`\n (default: ``env://``).\n device_maps (Dict[str, Dict]): Device placement mappings from this\n worker to the callee. Key is the callee worker name and value the\n dictionary (``Dict`` of ``int``, ``str``, or ``torch.device``) that\n maps this worker's devices to the callee worker's devices.\n (default: ``None``)\n \"\"\"\n def __init__(\n self,\n *,\n num_worker_threads: int = rpc_contants.DEFAULT_NUM_WORKER_THREADS,\n rpc_timeout: float = rpc_contants.DEFAULT_RPC_TIMEOUT_SEC,\n init_method: str = rpc_contants.DEFAULT_INIT_METHOD,\n device_maps: Dict = None,\n _transports: List = None,\n _channels: List = None,\n ):\n super().__init__(\n num_worker_threads,\n _transports,\n _channels,\n rpc_timeout,\n init_method,\n device_maps if device_maps else {}\n )\n\n def set_device_map(self, to: str, device_map: Dict):\n r\"\"\"\n Set device mapping between each RPC caller and callee pair. This\n function can be called multiple times to incrementally add\n device placement configurations.\n\n Args:\n worker_name (str): Callee name.\n device_map (Dict of int, str, or torch.device): Device placement\n mappings from this worker to the callee. 
This map must be\n invertible.\n\n Example::\n >>> # both workers\n >>> def add(x, y):\n >>> print(x) # tensor([1., 1.], device='cuda:1')\n >>> return x + y, (x + y).to(2)\n >>>\n >>> # on worker 0\n >>> options = TensorPipeRpcBackendOptions(\n >>> num_worker_threads=8,\n >>> device_maps={\"worker1\": {0: 1}}\n >>> # maps worker0's cuda:0 to worker1's cuda:1\n >>> )\n >>> options.set_device_map(\"worker1\", {1: 2})\n >>> # maps worker0's cuda:1 to worker1's cuda:2\n >>>\n >>> rpc.init_rpc(\n >>> \"worker0\",\n >>> rank=0,\n >>> world_size=2,\n >>> backend=rpc.BackendType.TENSORPIPE,\n >>> rpc_backend_options=options\n >>> )\n >>>\n >>> x = torch.ones(2)\n >>> rets = rpc.rpc_sync(\"worker1\", add, args=(x.to(0), 1))\n >>> # The first argument will be moved to cuda:1 on worker1. When\n >>> # sending the return value back, it will follow the invert of\n >>> # the device map, and hence will be moved back to cuda:0 and\n >>> # cuda:1 on worker0\n >>> print(rets[0]) # tensor([2., 2.], device='cuda:0')\n >>> print(rets[1]) # tensor([2., 2.], device='cuda:1')\n \"\"\"\n device_index_map = {}\n curr_device_maps = super().device_maps\n for k in device_map:\n v = device_map[k]\n k, v = torch.device(k), torch.device(v)\n if k.type != 'cuda' or v.type != 'cuda':\n raise ValueError(\n \"`set_device_map` only supports CUDA devices, \"\n f\"but got device pair {k}: {v}\"\n\n )\n if to in curr_device_maps and k.index in curr_device_maps[to]:\n curr_v = super().device_maps[to][k.index]\n if curr_v != v.index:\n raise ValueError(\n \"`set_device_map` only supports 1-to-1 mapping, \"\n f\"trying to map {k} to {v} and {curr_v}\"\n )\n device_index_map[k.index] = v.index\n super().set_device_map(to, device_index_map)\n", "path": "torch/distributed/rpc/options.py"}]}
| 2,273 | 389 |
gh_patches_debug_4009
|
rasdani/github-patches
|
git_diff
|
easybuilders__easybuild-easyblocks-2889
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
New sanity check commands for netCDF break build of older netCDF
This one took me a while to chase down. It might be a wontfix due to old software, but maybe this issue can at least serve as an FFR.
N.b. the easybuild and netCDF tag numbers are very similar here.
[In easybuild v4.6.2 some `nc-config` and `ncgen` sanity check commands were added to the netCDF easyblock](https://github.com/easybuilders/easybuild-easyblocks/commit/d8aa9420be572ab4df2c5993c5a3cdf370623404).
The command `ncgen -h`, in particular, should show the help text for `ncgen`. However, up until netCDF v4.6.1, `ncgen -h` meant running `ncgen` in 'header-only' mode, while `-H` was used for help. These flags were switched in [this commit](https://github.com/Unidata/netcdf-c/commit/2ea1cf5f1bc2a7352e3f66721f5181e26e556011#diff-6f23c24b125838dbee16fd3fd9edf84acc7a0492bd223c5ed03e9095cd50b15e) which first appeared in netCDF v4.6.2.
The problem is that when `ncgen` is run without any other args, or similarly, in "header-only" mode, it waits for stdin until an EOF. This causes the `ncgen` sanity check in the new easyblock to fail.
I think the fix would be to check `ncgen -H` instead of `ncgen -h` for netCDF older than v4.6.2.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `easybuild/easyblocks/n/netcdf.py`
Content:
```
1 ##
2 # Copyright 2009-2023 Ghent University
3 #
4 # This file is part of EasyBuild,
5 # originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),
6 # with support of Ghent University (http://ugent.be/hpc),
7 # the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),
8 # Flemish Research Foundation (FWO) (http://www.fwo.be/en)
9 # and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).
10 #
11 # https://github.com/easybuilders/easybuild
12 #
13 # EasyBuild is free software: you can redistribute it and/or modify
14 # it under the terms of the GNU General Public License as published by
15 # the Free Software Foundation v2.
16 #
17 # EasyBuild is distributed in the hope that it will be useful,
18 # but WITHOUT ANY WARRANTY; without even the implied warranty of
19 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
20 # GNU General Public License for more details.
21 #
22 # You should have received a copy of the GNU General Public License
23 # along with EasyBuild. If not, see <http://www.gnu.org/licenses/>.
24 ##
25 """
26 EasyBuild support for building and installing netCDF, implemented as an easyblock
27
28 @author: Stijn De Weirdt (Ghent University)
29 @author: Dries Verdegem (Ghent University)
30 @author: Kenneth Hoste (Ghent University)
31 @author: Pieter De Baets (Ghent University)
32 @author: Jens Timmerman (Ghent University)
33 """
34
35 import os
36 from distutils.version import LooseVersion
37
38 import easybuild.tools.environment as env
39 import easybuild.tools.toolchain as toolchain
40 from easybuild.easyblocks.generic.cmakemake import CMakeMake
41 from easybuild.easyblocks.generic.configuremake import ConfigureMake
42 from easybuild.tools.build_log import EasyBuildError
43 from easybuild.tools.modules import get_software_root, get_software_version, get_software_libdir
44 from easybuild.tools.systemtools import get_shared_lib_ext
45
46
47 class EB_netCDF(CMakeMake):
48 """Support for building/installing netCDF"""
49
50 @staticmethod
51 def extra_options():
52 extra_vars = CMakeMake.extra_options()
53 extra_vars['separate_build_dir'][0] = True
54 return extra_vars
55
56 def configure_step(self):
57 """Configure build: set config options and configure"""
58
59 shlib_ext = get_shared_lib_ext()
60
61 if LooseVersion(self.version) < LooseVersion("4.3"):
62 self.cfg.update('configopts', "--enable-shared")
63
64 if self.toolchain.options['pic']:
65 self.cfg.update('configopts', '--with-pic')
66
67 tup = (os.getenv('FFLAGS'), os.getenv('MPICC'), os.getenv('F90'))
68 self.cfg.update('configopts', 'FCFLAGS="%s" CC="%s" FC="%s"' % tup)
69
70 # add -DgFortran to CPPFLAGS when building with GCC
71 if self.toolchain.comp_family() == toolchain.GCC: # @UndefinedVariable
72 self.cfg.update('configopts', 'CPPFLAGS="%s -DgFortran"' % os.getenv('CPPFLAGS'))
73
74 ConfigureMake.configure_step(self)
75
76 else:
77 for (dep, libname) in [('cURL', 'curl'), ('HDF5', 'hdf5'), ('Szip', 'sz'), ('zlib', 'z'),
78 ('PnetCDF', 'pnetcdf')]:
79 dep_root = get_software_root(dep)
80 dep_libdir = get_software_libdir(dep)
81
82 if dep_root:
83 incdir = os.path.join(dep_root, 'include')
84 self.cfg.update('configopts', '-D%s_INCLUDE_DIR=%s ' % (dep.upper(), incdir))
85
86 if dep == 'HDF5':
87 env.setvar('HDF5_ROOT', dep_root)
88 self.cfg.update('configopts', '-DUSE_HDF5=ON')
89
90 hdf5cmvars = {
91 # library name: (cmake option suffix in netcdf<4.4, cmake option suffix in netcfd>=4.4)
92 'hdf5': ('LIB', 'C_LIBRARY'),
93 'hdf5_hl': ('HL_LIB', 'HL_LIBRARY'),
94 }
95
96 for libname in hdf5cmvars:
97 if LooseVersion(self.version) < LooseVersion("4.4"):
98 cmvar = hdf5cmvars[libname][0]
99 else:
100 cmvar = hdf5cmvars[libname][1]
101 libhdf5 = os.path.join(dep_root, dep_libdir, 'lib%s.%s' % (libname, shlib_ext))
102 self.cfg.update('configopts', '-DHDF5_%s=%s ' % (cmvar, libhdf5))
103 # 4.4 forgot to set HDF5_<lang>_LIBRARIES
104 if LooseVersion(self.version) == LooseVersion("4.4.0"):
105 lang = 'HL' if cmvar[0] == 'H' else 'C'
106 self.cfg.update('configopts', '-DHDF5_%s_LIBRARIES=%s ' % (lang, libhdf5))
107
108 elif dep == 'PnetCDF':
109 self.cfg.update('configopts', '-DENABLE_PNETCDF=ON')
110
111 else:
112 libso = os.path.join(dep_root, dep_libdir, 'lib%s.%s' % (libname, shlib_ext))
113 self.cfg.update('configopts', '-D%s_LIBRARY=%s ' % (dep.upper(), libso))
114
115 CMakeMake.configure_step(self)
116
117 def sanity_check_step(self):
118 """
119 Custom sanity check for netCDF
120 """
121
122 shlib_ext = get_shared_lib_ext()
123
124 incs = ["netcdf.h"]
125 libs = ["libnetcdf.%s" % shlib_ext, "libnetcdf.a"]
126 # since v4.2, the non-C libraries have been split off in seperate extensions_step
127 # see netCDF-Fortran and netCDF-C++
128 if LooseVersion(self.version) < LooseVersion("4.2"):
129 incs += ["netcdf%s" % x for x in ["cpp.h", ".hh", ".inc", ".mod"]]
130 incs += ["ncvalues.h", "typesizes.mod"]
131 libs += ["libnetcdf_c++.%s" % shlib_ext, "libnetcdff.%s" % shlib_ext,
132 "libnetcdf_c++.a", "libnetcdff.a"]
133 binaries = ["nc%s" % x for x in ["-config", "copy", "dump", "gen", "gen3"]]
134
135 custom_paths = {
136 'files': (
137 [os.path.join("bin", x) for x in binaries] +
138 [os.path.join("lib", x) for x in libs] +
139 [os.path.join("include", x) for x in incs]
140 ),
141 'dirs': []
142 }
143
144 custom_commands = [
145 "nc-config --help",
146 "ncgen -h",
147 ]
148
149 super(EB_netCDF, self).sanity_check_step(custom_commands=custom_commands, custom_paths=custom_paths)
150
151
152 def set_netcdf_env_vars(log):
153 """Set netCDF environment variables used by other software."""
154
155 netcdf = get_software_root('netCDF')
156 if not netcdf:
157 raise EasyBuildError("netCDF module not loaded?")
158 else:
159 env.setvar('NETCDF', netcdf)
160 log.debug("Set NETCDF to %s" % netcdf)
161 netcdff = get_software_root('netCDF-Fortran')
162 netcdf_ver = get_software_version('netCDF')
163 if not netcdff:
164 if LooseVersion(netcdf_ver) >= LooseVersion("4.2"):
165 raise EasyBuildError("netCDF v4.2 no longer supplies Fortran library, also need netCDF-Fortran")
166 else:
167 env.setvar('NETCDFF', netcdff)
168 log.debug("Set NETCDFF to %s" % netcdff)
169
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/easybuild/easyblocks/n/netcdf.py b/easybuild/easyblocks/n/netcdf.py
--- a/easybuild/easyblocks/n/netcdf.py
+++ b/easybuild/easyblocks/n/netcdf.py
@@ -143,7 +143,7 @@
custom_commands = [
"nc-config --help",
- "ncgen -h",
+ "ncgen -h" if LooseVersion(self.version) > LooseVersion("4.6.1") else "ncgen -H",
]
super(EB_netCDF, self).sanity_check_step(custom_commands=custom_commands, custom_paths=custom_paths)
|
{"golden_diff": "diff --git a/easybuild/easyblocks/n/netcdf.py b/easybuild/easyblocks/n/netcdf.py\n--- a/easybuild/easyblocks/n/netcdf.py\n+++ b/easybuild/easyblocks/n/netcdf.py\n@@ -143,7 +143,7 @@\n \n custom_commands = [\n \"nc-config --help\",\n- \"ncgen -h\",\n+ \"ncgen -h\" if LooseVersion(self.version) > LooseVersion(\"4.6.1\") else \"ncgen -H\",\n ]\n \n super(EB_netCDF, self).sanity_check_step(custom_commands=custom_commands, custom_paths=custom_paths)\n", "issue": "New sanity check commands for netCDF break build of older netCDF\nThis one took me a while to chase down. It might be a wontfix due to old software, but maybe this issue can at least serve as an FFR.\r\n\r\nN.b. the easybuild and netCDF tag numbers are very similar here.\r\n\r\n[In easybuild v4.6.2 some `nc-config` and `ncgen` sanity check commands were added to the netCDF easyblock](https://github.com/easybuilders/easybuild-easyblocks/commit/d8aa9420be572ab4df2c5993c5a3cdf370623404).\r\n\r\nThe command `ncgen -h`, in particular, should show the help text for `ncgen`. However, up until netCDF v4.6.1, `ncgen -h` meant running `ncgen` in 'header-only\" mode, while `-H` was used for help. These flags were switched in [this commit](https://github.com/Unidata/netcdf-c/commit/2ea1cf5f1bc2a7352e3f66721f5181e26e556011#diff-6f23c24b125838dbee16fd3fd9edf84acc7a0492bd223c5ed03e9095cd50b15e) which first appeared in netCDF v4.6.2.\r\n\r\nThe problem is that when `ncgen` is run without any other args, or similarly, in \"header-only\" mode, it waits for stdin until an EOF. This causes the `ncgen` sanity check in the new easyblock to fail.\r\n\r\nI think the fix would be to check `ncgen -H` instead of `ncgen -h` for netCDF older than v4.6.2.\n", "before_files": [{"content": "##\n# Copyright 2009-2023 Ghent University\n#\n# This file is part of EasyBuild,\n# originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),\n# with support of Ghent University (http://ugent.be/hpc),\n# the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),\n# Flemish Research Foundation (FWO) (http://www.fwo.be/en)\n# and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).\n#\n# https://github.com/easybuilders/easybuild\n#\n# EasyBuild is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation v2.\n#\n# EasyBuild is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with EasyBuild. 
If not, see <http://www.gnu.org/licenses/>.\n##\n\"\"\"\nEasyBuild support for building and installing netCDF, implemented as an easyblock\n\n@author: Stijn De Weirdt (Ghent University)\n@author: Dries Verdegem (Ghent University)\n@author: Kenneth Hoste (Ghent University)\n@author: Pieter De Baets (Ghent University)\n@author: Jens Timmerman (Ghent University)\n\"\"\"\n\nimport os\nfrom distutils.version import LooseVersion\n\nimport easybuild.tools.environment as env\nimport easybuild.tools.toolchain as toolchain\nfrom easybuild.easyblocks.generic.cmakemake import CMakeMake\nfrom easybuild.easyblocks.generic.configuremake import ConfigureMake\nfrom easybuild.tools.build_log import EasyBuildError\nfrom easybuild.tools.modules import get_software_root, get_software_version, get_software_libdir\nfrom easybuild.tools.systemtools import get_shared_lib_ext\n\n\nclass EB_netCDF(CMakeMake):\n \"\"\"Support for building/installing netCDF\"\"\"\n\n @staticmethod\n def extra_options():\n extra_vars = CMakeMake.extra_options()\n extra_vars['separate_build_dir'][0] = True\n return extra_vars\n\n def configure_step(self):\n \"\"\"Configure build: set config options and configure\"\"\"\n\n shlib_ext = get_shared_lib_ext()\n\n if LooseVersion(self.version) < LooseVersion(\"4.3\"):\n self.cfg.update('configopts', \"--enable-shared\")\n\n if self.toolchain.options['pic']:\n self.cfg.update('configopts', '--with-pic')\n\n tup = (os.getenv('FFLAGS'), os.getenv('MPICC'), os.getenv('F90'))\n self.cfg.update('configopts', 'FCFLAGS=\"%s\" CC=\"%s\" FC=\"%s\"' % tup)\n\n # add -DgFortran to CPPFLAGS when building with GCC\n if self.toolchain.comp_family() == toolchain.GCC: # @UndefinedVariable\n self.cfg.update('configopts', 'CPPFLAGS=\"%s -DgFortran\"' % os.getenv('CPPFLAGS'))\n\n ConfigureMake.configure_step(self)\n\n else:\n for (dep, libname) in [('cURL', 'curl'), ('HDF5', 'hdf5'), ('Szip', 'sz'), ('zlib', 'z'),\n ('PnetCDF', 'pnetcdf')]:\n dep_root = get_software_root(dep)\n dep_libdir = get_software_libdir(dep)\n\n if dep_root:\n incdir = os.path.join(dep_root, 'include')\n self.cfg.update('configopts', '-D%s_INCLUDE_DIR=%s ' % (dep.upper(), incdir))\n\n if dep == 'HDF5':\n env.setvar('HDF5_ROOT', dep_root)\n self.cfg.update('configopts', '-DUSE_HDF5=ON')\n\n hdf5cmvars = {\n # library name: (cmake option suffix in netcdf<4.4, cmake option suffix in netcfd>=4.4)\n 'hdf5': ('LIB', 'C_LIBRARY'),\n 'hdf5_hl': ('HL_LIB', 'HL_LIBRARY'),\n }\n\n for libname in hdf5cmvars:\n if LooseVersion(self.version) < LooseVersion(\"4.4\"):\n cmvar = hdf5cmvars[libname][0]\n else:\n cmvar = hdf5cmvars[libname][1]\n libhdf5 = os.path.join(dep_root, dep_libdir, 'lib%s.%s' % (libname, shlib_ext))\n self.cfg.update('configopts', '-DHDF5_%s=%s ' % (cmvar, libhdf5))\n # 4.4 forgot to set HDF5_<lang>_LIBRARIES\n if LooseVersion(self.version) == LooseVersion(\"4.4.0\"):\n lang = 'HL' if cmvar[0] == 'H' else 'C'\n self.cfg.update('configopts', '-DHDF5_%s_LIBRARIES=%s ' % (lang, libhdf5))\n\n elif dep == 'PnetCDF':\n self.cfg.update('configopts', '-DENABLE_PNETCDF=ON')\n\n else:\n libso = os.path.join(dep_root, dep_libdir, 'lib%s.%s' % (libname, shlib_ext))\n self.cfg.update('configopts', '-D%s_LIBRARY=%s ' % (dep.upper(), libso))\n\n CMakeMake.configure_step(self)\n\n def sanity_check_step(self):\n \"\"\"\n Custom sanity check for netCDF\n \"\"\"\n\n shlib_ext = get_shared_lib_ext()\n\n incs = [\"netcdf.h\"]\n libs = [\"libnetcdf.%s\" % shlib_ext, \"libnetcdf.a\"]\n # since v4.2, the non-C libraries have been split off in seperate 
extensions_step\n # see netCDF-Fortran and netCDF-C++\n if LooseVersion(self.version) < LooseVersion(\"4.2\"):\n incs += [\"netcdf%s\" % x for x in [\"cpp.h\", \".hh\", \".inc\", \".mod\"]]\n incs += [\"ncvalues.h\", \"typesizes.mod\"]\n libs += [\"libnetcdf_c++.%s\" % shlib_ext, \"libnetcdff.%s\" % shlib_ext,\n \"libnetcdf_c++.a\", \"libnetcdff.a\"]\n binaries = [\"nc%s\" % x for x in [\"-config\", \"copy\", \"dump\", \"gen\", \"gen3\"]]\n\n custom_paths = {\n 'files': (\n [os.path.join(\"bin\", x) for x in binaries] +\n [os.path.join(\"lib\", x) for x in libs] +\n [os.path.join(\"include\", x) for x in incs]\n ),\n 'dirs': []\n }\n\n custom_commands = [\n \"nc-config --help\",\n \"ncgen -h\",\n ]\n\n super(EB_netCDF, self).sanity_check_step(custom_commands=custom_commands, custom_paths=custom_paths)\n\n\ndef set_netcdf_env_vars(log):\n \"\"\"Set netCDF environment variables used by other software.\"\"\"\n\n netcdf = get_software_root('netCDF')\n if not netcdf:\n raise EasyBuildError(\"netCDF module not loaded?\")\n else:\n env.setvar('NETCDF', netcdf)\n log.debug(\"Set NETCDF to %s\" % netcdf)\n netcdff = get_software_root('netCDF-Fortran')\n netcdf_ver = get_software_version('netCDF')\n if not netcdff:\n if LooseVersion(netcdf_ver) >= LooseVersion(\"4.2\"):\n raise EasyBuildError(\"netCDF v4.2 no longer supplies Fortran library, also need netCDF-Fortran\")\n else:\n env.setvar('NETCDFF', netcdff)\n log.debug(\"Set NETCDFF to %s\" % netcdff)\n", "path": "easybuild/easyblocks/n/netcdf.py"}], "after_files": [{"content": "##\n# Copyright 2009-2023 Ghent University\n#\n# This file is part of EasyBuild,\n# originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),\n# with support of Ghent University (http://ugent.be/hpc),\n# the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),\n# Flemish Research Foundation (FWO) (http://www.fwo.be/en)\n# and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).\n#\n# https://github.com/easybuilders/easybuild\n#\n# EasyBuild is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation v2.\n#\n# EasyBuild is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with EasyBuild. 
If not, see <http://www.gnu.org/licenses/>.\n##\n\"\"\"\nEasyBuild support for building and installing netCDF, implemented as an easyblock\n\n@author: Stijn De Weirdt (Ghent University)\n@author: Dries Verdegem (Ghent University)\n@author: Kenneth Hoste (Ghent University)\n@author: Pieter De Baets (Ghent University)\n@author: Jens Timmerman (Ghent University)\n\"\"\"\n\nimport os\nfrom distutils.version import LooseVersion\n\nimport easybuild.tools.environment as env\nimport easybuild.tools.toolchain as toolchain\nfrom easybuild.easyblocks.generic.cmakemake import CMakeMake\nfrom easybuild.easyblocks.generic.configuremake import ConfigureMake\nfrom easybuild.tools.build_log import EasyBuildError\nfrom easybuild.tools.modules import get_software_root, get_software_version, get_software_libdir\nfrom easybuild.tools.systemtools import get_shared_lib_ext\n\n\nclass EB_netCDF(CMakeMake):\n \"\"\"Support for building/installing netCDF\"\"\"\n\n @staticmethod\n def extra_options():\n extra_vars = CMakeMake.extra_options()\n extra_vars['separate_build_dir'][0] = True\n return extra_vars\n\n def configure_step(self):\n \"\"\"Configure build: set config options and configure\"\"\"\n\n shlib_ext = get_shared_lib_ext()\n\n if LooseVersion(self.version) < LooseVersion(\"4.3\"):\n self.cfg.update('configopts', \"--enable-shared\")\n\n if self.toolchain.options['pic']:\n self.cfg.update('configopts', '--with-pic')\n\n tup = (os.getenv('FFLAGS'), os.getenv('MPICC'), os.getenv('F90'))\n self.cfg.update('configopts', 'FCFLAGS=\"%s\" CC=\"%s\" FC=\"%s\"' % tup)\n\n # add -DgFortran to CPPFLAGS when building with GCC\n if self.toolchain.comp_family() == toolchain.GCC: # @UndefinedVariable\n self.cfg.update('configopts', 'CPPFLAGS=\"%s -DgFortran\"' % os.getenv('CPPFLAGS'))\n\n ConfigureMake.configure_step(self)\n\n else:\n for (dep, libname) in [('cURL', 'curl'), ('HDF5', 'hdf5'), ('Szip', 'sz'), ('zlib', 'z'),\n ('PnetCDF', 'pnetcdf')]:\n dep_root = get_software_root(dep)\n dep_libdir = get_software_libdir(dep)\n\n if dep_root:\n incdir = os.path.join(dep_root, 'include')\n self.cfg.update('configopts', '-D%s_INCLUDE_DIR=%s ' % (dep.upper(), incdir))\n\n if dep == 'HDF5':\n env.setvar('HDF5_ROOT', dep_root)\n self.cfg.update('configopts', '-DUSE_HDF5=ON')\n\n hdf5cmvars = {\n # library name: (cmake option suffix in netcdf<4.4, cmake option suffix in netcfd>=4.4)\n 'hdf5': ('LIB', 'C_LIBRARY'),\n 'hdf5_hl': ('HL_LIB', 'HL_LIBRARY'),\n }\n\n for libname in hdf5cmvars:\n if LooseVersion(self.version) < LooseVersion(\"4.4\"):\n cmvar = hdf5cmvars[libname][0]\n else:\n cmvar = hdf5cmvars[libname][1]\n libhdf5 = os.path.join(dep_root, dep_libdir, 'lib%s.%s' % (libname, shlib_ext))\n self.cfg.update('configopts', '-DHDF5_%s=%s ' % (cmvar, libhdf5))\n # 4.4 forgot to set HDF5_<lang>_LIBRARIES\n if LooseVersion(self.version) == LooseVersion(\"4.4.0\"):\n lang = 'HL' if cmvar[0] == 'H' else 'C'\n self.cfg.update('configopts', '-DHDF5_%s_LIBRARIES=%s ' % (lang, libhdf5))\n\n elif dep == 'PnetCDF':\n self.cfg.update('configopts', '-DENABLE_PNETCDF=ON')\n\n else:\n libso = os.path.join(dep_root, dep_libdir, 'lib%s.%s' % (libname, shlib_ext))\n self.cfg.update('configopts', '-D%s_LIBRARY=%s ' % (dep.upper(), libso))\n\n CMakeMake.configure_step(self)\n\n def sanity_check_step(self):\n \"\"\"\n Custom sanity check for netCDF\n \"\"\"\n\n shlib_ext = get_shared_lib_ext()\n\n incs = [\"netcdf.h\"]\n libs = [\"libnetcdf.%s\" % shlib_ext, \"libnetcdf.a\"]\n # since v4.2, the non-C libraries have been split off in seperate 
extensions_step\n # see netCDF-Fortran and netCDF-C++\n if LooseVersion(self.version) < LooseVersion(\"4.2\"):\n incs += [\"netcdf%s\" % x for x in [\"cpp.h\", \".hh\", \".inc\", \".mod\"]]\n incs += [\"ncvalues.h\", \"typesizes.mod\"]\n libs += [\"libnetcdf_c++.%s\" % shlib_ext, \"libnetcdff.%s\" % shlib_ext,\n \"libnetcdf_c++.a\", \"libnetcdff.a\"]\n binaries = [\"nc%s\" % x for x in [\"-config\", \"copy\", \"dump\", \"gen\", \"gen3\"]]\n\n custom_paths = {\n 'files': (\n [os.path.join(\"bin\", x) for x in binaries] +\n [os.path.join(\"lib\", x) for x in libs] +\n [os.path.join(\"include\", x) for x in incs]\n ),\n 'dirs': []\n }\n\n custom_commands = [\n \"nc-config --help\",\n \"ncgen -h\" if LooseVersion(self.version) > LooseVersion(\"4.6.1\") else \"ncgen -H\",\n ]\n\n super(EB_netCDF, self).sanity_check_step(custom_commands=custom_commands, custom_paths=custom_paths)\n\n\ndef set_netcdf_env_vars(log):\n \"\"\"Set netCDF environment variables used by other software.\"\"\"\n\n netcdf = get_software_root('netCDF')\n if not netcdf:\n raise EasyBuildError(\"netCDF module not loaded?\")\n else:\n env.setvar('NETCDF', netcdf)\n log.debug(\"Set NETCDF to %s\" % netcdf)\n netcdff = get_software_root('netCDF-Fortran')\n netcdf_ver = get_software_version('netCDF')\n if not netcdff:\n if LooseVersion(netcdf_ver) >= LooseVersion(\"4.2\"):\n raise EasyBuildError(\"netCDF v4.2 no longer supplies Fortran library, also need netCDF-Fortran\")\n else:\n env.setvar('NETCDFF', netcdff)\n log.debug(\"Set NETCDFF to %s\" % netcdff)\n", "path": "easybuild/easyblocks/n/netcdf.py"}]}
| 2,862 | 146 |
gh_patches_debug_25775
|
rasdani/github-patches
|
git_diff
|
apache__tvm-2759
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[TEST][FLAKY] test_dlpack
Both #2749 and #2353 encountered a seg fault error at test_dlpack.
http://ci.tvm.ai:8080/blue/organizations/jenkins/tvm/detail/PR-2749/2/pipeline
http://ci.tvm.ai:8080/blue/organizations/jenkins/tvm/detail/PR-2353/48/pipeline
cc @eqy , could you help look at this?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/tvm/_ffi/_ctypes/ndarray.py`
Content:
```
1 # pylint: disable=invalid-name
2 """Runtime NDArray api"""
3 from __future__ import absolute_import
4
5 import ctypes
6 from ..base import _LIB, check_call, c_str
7 from ..runtime_ctypes import TVMArrayHandle, TVMNDArrayContainerHandle
8 from .types import RETURN_SWITCH, C_TO_PY_ARG_SWITCH, _wrap_arg_func, _return_handle
9
10
11 TVMPyCapsuleDestructor = ctypes.CFUNCTYPE(None, ctypes.c_void_p)
12 _c_str_dltensor = c_str('dltensor')
13 _c_str_used_dltensor = c_str('used_dltensor')
14
15
16 # used for PyCapsule manipulation
17 if hasattr(ctypes, 'pythonapi'):
18 ctypes.pythonapi.PyCapsule_GetName.restype = ctypes.c_char_p
19 ctypes.pythonapi.PyCapsule_GetPointer.restype = ctypes.c_void_p
20 ctypes.pythonapi.PyCapsule_New.restype = ctypes.py_object
21
22
23 def _from_dlpack(dltensor):
24 dltensor = ctypes.py_object(dltensor)
25 if ctypes.pythonapi.PyCapsule_IsValid(dltensor, _c_str_dltensor):
26 ptr = ctypes.pythonapi.PyCapsule_GetPointer(dltensor, _c_str_dltensor)
27 handle = TVMArrayHandle()
28 check_call(_LIB.TVMArrayFromDLPack(ptr, ctypes.byref(handle)))
29 ctypes.pythonapi.PyCapsule_SetName(dltensor, _c_str_used_dltensor)
30 ctypes.pythonapi.PyCapsule_SetDestructor(dltensor, TVMPyCapsuleDestructor(0))
31 return _make_array(handle, False, False)
32 raise ValueError("Expect a dltensor field, PyCapsule can only be consumed once")
33
34
35 def _dlpack_deleter(pycapsule):
36 pycapsule = ctypes.cast(pycapsule, ctypes.py_object)
37 if ctypes.pythonapi.PyCapsule_IsValid(pycapsule, _c_str_dltensor):
38 ptr = ctypes.pythonapi.PyCapsule_GetPointer(pycapsule, _c_str_dltensor)
39 _LIB.TVMDLManagedTensorCallDeleter(ptr)
40 ctypes.pythonapi.PyCapsule_SetDestructor(dltensor, TVMPyCapsuleDestructor(0))
41
42 _c_dlpack_deleter = TVMPyCapsuleDestructor(_dlpack_deleter)
43
44
45 class NDArrayBase(object):
46 """A simple Device/CPU Array object in runtime."""
47 __slots__ = ["handle", "is_view"]
48 # pylint: disable=no-member
49 def __init__(self, handle, is_view=False):
50 """Initialize the function with handle
51
52 Parameters
53 ----------
54 handle : TVMArrayHandle
55 the handle to the underlying C++ TVMArray
56 """
57 self.handle = handle
58 self.is_view = is_view
59
60 def __del__(self):
61 if not self.is_view and _LIB:
62 check_call(_LIB.TVMArrayFree(self.handle))
63
64 @property
65 def _tvm_handle(self):
66 return ctypes.cast(self.handle, ctypes.c_void_p).value
67
68 def to_dlpack(self):
69 """Produce an array from a DLPack Tensor without copying memory
70
71 Returns
72 -------
73 dlpack : DLPack tensor view of the array data
74 """
75 handle = ctypes.c_void_p()
76 check_call(_LIB.TVMArrayToDLPack(self.handle, ctypes.byref(handle)))
77 return ctypes.pythonapi.PyCapsule_New(handle, _c_str_dltensor, _c_dlpack_deleter)
78
79
80 def _make_array(handle, is_view, is_container):
81 global _TVM_ND_CLS
82 handle = ctypes.cast(handle, TVMArrayHandle)
83 fcreate = _CLASS_NDARRAY
84 if is_container and _TVM_ND_CLS:
85 array_type_info = ctypes.cast(handle, TVMNDArrayContainerHandle).array_type_info.value
86 if array_type_info > 0:
87 fcreate = _TVM_ND_CLS[array_type_info]
88 return fcreate(handle, is_view)
89
90 _TVM_COMPATS = ()
91
92 def _reg_extension(cls, fcreate):
93 global _TVM_COMPATS
94 _TVM_COMPATS += (cls,)
95 if fcreate:
96 fret = lambda x: fcreate(_return_handle(x))
97 RETURN_SWITCH[cls._tvm_tcode] = fret
98 C_TO_PY_ARG_SWITCH[cls._tvm_tcode] = _wrap_arg_func(fret, cls._tvm_tcode)
99
100 _TVM_ND_CLS = {}
101
102 def _reg_ndarray(cls, fcreate):
103 global _TVM_ND_CLS
104 _TVM_ND_CLS[cls._array_type_code] = fcreate
105
106 _CLASS_NDARRAY = None
107
108 def _set_class_ndarray(cls):
109 global _CLASS_NDARRAY
110 _CLASS_NDARRAY = cls
111
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/python/tvm/_ffi/_ctypes/ndarray.py b/python/tvm/_ffi/_ctypes/ndarray.py
--- a/python/tvm/_ffi/_ctypes/ndarray.py
+++ b/python/tvm/_ffi/_ctypes/ndarray.py
@@ -24,6 +24,8 @@
dltensor = ctypes.py_object(dltensor)
if ctypes.pythonapi.PyCapsule_IsValid(dltensor, _c_str_dltensor):
ptr = ctypes.pythonapi.PyCapsule_GetPointer(dltensor, _c_str_dltensor)
+ # enforce type to make sure it works for all ctypes
+ ptr = ctypes.cast(ptr, ctypes.c_void_p)
handle = TVMArrayHandle()
check_call(_LIB.TVMArrayFromDLPack(ptr, ctypes.byref(handle)))
ctypes.pythonapi.PyCapsule_SetName(dltensor, _c_str_used_dltensor)
@@ -36,6 +38,8 @@
pycapsule = ctypes.cast(pycapsule, ctypes.py_object)
if ctypes.pythonapi.PyCapsule_IsValid(pycapsule, _c_str_dltensor):
ptr = ctypes.pythonapi.PyCapsule_GetPointer(pycapsule, _c_str_dltensor)
+ # enforce type to make sure it works for all ctypes
+ ptr = ctypes.cast(ctypes.c_void_p, ptr)
_LIB.TVMDLManagedTensorCallDeleter(ptr)
ctypes.pythonapi.PyCapsule_SetDestructor(dltensor, TVMPyCapsuleDestructor(0))
|
{"golden_diff": "diff --git a/python/tvm/_ffi/_ctypes/ndarray.py b/python/tvm/_ffi/_ctypes/ndarray.py\n--- a/python/tvm/_ffi/_ctypes/ndarray.py\n+++ b/python/tvm/_ffi/_ctypes/ndarray.py\n@@ -24,6 +24,8 @@\n dltensor = ctypes.py_object(dltensor)\n if ctypes.pythonapi.PyCapsule_IsValid(dltensor, _c_str_dltensor):\n ptr = ctypes.pythonapi.PyCapsule_GetPointer(dltensor, _c_str_dltensor)\n+ # enforce type to make sure it works for all ctypes\n+ ptr = ctypes.cast(ptr, ctypes.c_void_p)\n handle = TVMArrayHandle()\n check_call(_LIB.TVMArrayFromDLPack(ptr, ctypes.byref(handle)))\n ctypes.pythonapi.PyCapsule_SetName(dltensor, _c_str_used_dltensor)\n@@ -36,6 +38,8 @@\n pycapsule = ctypes.cast(pycapsule, ctypes.py_object)\n if ctypes.pythonapi.PyCapsule_IsValid(pycapsule, _c_str_dltensor):\n ptr = ctypes.pythonapi.PyCapsule_GetPointer(pycapsule, _c_str_dltensor)\n+ # enforce type to make sure it works for all ctypes\n+ ptr = ctypes.cast(ctypes.c_void_p, ptr)\n _LIB.TVMDLManagedTensorCallDeleter(ptr)\n ctypes.pythonapi.PyCapsule_SetDestructor(dltensor, TVMPyCapsuleDestructor(0))\n", "issue": "[TEST][FLAKY] test_dlpack\nBoth #2749 and #2353 encountered seg fault error at test_dlpack.\r\nhttp://ci.tvm.ai:8080/blue/organizations/jenkins/tvm/detail/PR-2749/2/pipeline\r\nhttp://ci.tvm.ai:8080/blue/organizations/jenkins/tvm/detail/PR-2353/48/pipeline\r\n\r\ncc @eqy , could you help look at this?\n", "before_files": [{"content": "# pylint: disable=invalid-name\n\"\"\"Runtime NDArray api\"\"\"\nfrom __future__ import absolute_import\n\nimport ctypes\nfrom ..base import _LIB, check_call, c_str\nfrom ..runtime_ctypes import TVMArrayHandle, TVMNDArrayContainerHandle\nfrom .types import RETURN_SWITCH, C_TO_PY_ARG_SWITCH, _wrap_arg_func, _return_handle\n\n\nTVMPyCapsuleDestructor = ctypes.CFUNCTYPE(None, ctypes.c_void_p)\n_c_str_dltensor = c_str('dltensor')\n_c_str_used_dltensor = c_str('used_dltensor')\n\n\n# used for PyCapsule manipulation\nif hasattr(ctypes, 'pythonapi'):\n ctypes.pythonapi.PyCapsule_GetName.restype = ctypes.c_char_p\n ctypes.pythonapi.PyCapsule_GetPointer.restype = ctypes.c_void_p\n ctypes.pythonapi.PyCapsule_New.restype = ctypes.py_object\n\n\ndef _from_dlpack(dltensor):\n dltensor = ctypes.py_object(dltensor)\n if ctypes.pythonapi.PyCapsule_IsValid(dltensor, _c_str_dltensor):\n ptr = ctypes.pythonapi.PyCapsule_GetPointer(dltensor, _c_str_dltensor)\n handle = TVMArrayHandle()\n check_call(_LIB.TVMArrayFromDLPack(ptr, ctypes.byref(handle)))\n ctypes.pythonapi.PyCapsule_SetName(dltensor, _c_str_used_dltensor)\n ctypes.pythonapi.PyCapsule_SetDestructor(dltensor, TVMPyCapsuleDestructor(0))\n return _make_array(handle, False, False)\n raise ValueError(\"Expect a dltensor field, PyCapsule can only be consumed once\")\n\n\ndef _dlpack_deleter(pycapsule):\n pycapsule = ctypes.cast(pycapsule, ctypes.py_object)\n if ctypes.pythonapi.PyCapsule_IsValid(pycapsule, _c_str_dltensor):\n ptr = ctypes.pythonapi.PyCapsule_GetPointer(pycapsule, _c_str_dltensor)\n _LIB.TVMDLManagedTensorCallDeleter(ptr)\n ctypes.pythonapi.PyCapsule_SetDestructor(dltensor, TVMPyCapsuleDestructor(0))\n\n_c_dlpack_deleter = TVMPyCapsuleDestructor(_dlpack_deleter)\n\n\nclass NDArrayBase(object):\n \"\"\"A simple Device/CPU Array object in runtime.\"\"\"\n __slots__ = [\"handle\", \"is_view\"]\n # pylint: disable=no-member\n def __init__(self, handle, is_view=False):\n \"\"\"Initialize the function with handle\n\n Parameters\n ----------\n handle : TVMArrayHandle\n the handle to the underlying C++ TVMArray\n \"\"\"\n self.handle = 
handle\n self.is_view = is_view\n\n def __del__(self):\n if not self.is_view and _LIB:\n check_call(_LIB.TVMArrayFree(self.handle))\n\n @property\n def _tvm_handle(self):\n return ctypes.cast(self.handle, ctypes.c_void_p).value\n\n def to_dlpack(self):\n \"\"\"Produce an array from a DLPack Tensor without copying memory\n\n Returns\n -------\n dlpack : DLPack tensor view of the array data\n \"\"\"\n handle = ctypes.c_void_p()\n check_call(_LIB.TVMArrayToDLPack(self.handle, ctypes.byref(handle)))\n return ctypes.pythonapi.PyCapsule_New(handle, _c_str_dltensor, _c_dlpack_deleter)\n\n\ndef _make_array(handle, is_view, is_container):\n global _TVM_ND_CLS\n handle = ctypes.cast(handle, TVMArrayHandle)\n fcreate = _CLASS_NDARRAY\n if is_container and _TVM_ND_CLS:\n array_type_info = ctypes.cast(handle, TVMNDArrayContainerHandle).array_type_info.value\n if array_type_info > 0:\n fcreate = _TVM_ND_CLS[array_type_info]\n return fcreate(handle, is_view)\n\n_TVM_COMPATS = ()\n\ndef _reg_extension(cls, fcreate):\n global _TVM_COMPATS\n _TVM_COMPATS += (cls,)\n if fcreate:\n fret = lambda x: fcreate(_return_handle(x))\n RETURN_SWITCH[cls._tvm_tcode] = fret\n C_TO_PY_ARG_SWITCH[cls._tvm_tcode] = _wrap_arg_func(fret, cls._tvm_tcode)\n\n_TVM_ND_CLS = {}\n\ndef _reg_ndarray(cls, fcreate):\n global _TVM_ND_CLS\n _TVM_ND_CLS[cls._array_type_code] = fcreate\n\n_CLASS_NDARRAY = None\n\ndef _set_class_ndarray(cls):\n global _CLASS_NDARRAY\n _CLASS_NDARRAY = cls\n", "path": "python/tvm/_ffi/_ctypes/ndarray.py"}], "after_files": [{"content": "# pylint: disable=invalid-name\n\"\"\"Runtime NDArray api\"\"\"\nfrom __future__ import absolute_import\n\nimport ctypes\nfrom ..base import _LIB, check_call, c_str\nfrom ..runtime_ctypes import TVMArrayHandle, TVMNDArrayContainerHandle\nfrom .types import RETURN_SWITCH, C_TO_PY_ARG_SWITCH, _wrap_arg_func, _return_handle\n\n\nTVMPyCapsuleDestructor = ctypes.CFUNCTYPE(None, ctypes.c_void_p)\n_c_str_dltensor = c_str('dltensor')\n_c_str_used_dltensor = c_str('used_dltensor')\n\n\n# used for PyCapsule manipulation\nif hasattr(ctypes, 'pythonapi'):\n ctypes.pythonapi.PyCapsule_GetName.restype = ctypes.c_char_p\n ctypes.pythonapi.PyCapsule_GetPointer.restype = ctypes.c_void_p\n ctypes.pythonapi.PyCapsule_New.restype = ctypes.py_object\n\n\ndef _from_dlpack(dltensor):\n dltensor = ctypes.py_object(dltensor)\n if ctypes.pythonapi.PyCapsule_IsValid(dltensor, _c_str_dltensor):\n ptr = ctypes.pythonapi.PyCapsule_GetPointer(dltensor, _c_str_dltensor)\n # enforce type to make sure it works for all ctypes\n ptr = ctypes.cast(ptr, ctypes.c_void_p)\n handle = TVMArrayHandle()\n check_call(_LIB.TVMArrayFromDLPack(ptr, ctypes.byref(handle)))\n ctypes.pythonapi.PyCapsule_SetName(dltensor, _c_str_used_dltensor)\n ctypes.pythonapi.PyCapsule_SetDestructor(dltensor, TVMPyCapsuleDestructor(0))\n return _make_array(handle, False, False)\n raise ValueError(\"Expect a dltensor field, PyCapsule can only be consumed once\")\n\n\ndef _dlpack_deleter(pycapsule):\n pycapsule = ctypes.cast(pycapsule, ctypes.py_object)\n if ctypes.pythonapi.PyCapsule_IsValid(pycapsule, _c_str_dltensor):\n ptr = ctypes.pythonapi.PyCapsule_GetPointer(pycapsule, _c_str_dltensor)\n # enforce type to make sure it works for all ctypes\n ptr = ctypes.cast(ctypes.c_void_p, ptr)\n _LIB.TVMDLManagedTensorCallDeleter(ptr)\n ctypes.pythonapi.PyCapsule_SetDestructor(dltensor, TVMPyCapsuleDestructor(0))\n\n_c_dlpack_deleter = TVMPyCapsuleDestructor(_dlpack_deleter)\n\n\nclass NDArrayBase(object):\n \"\"\"A simple Device/CPU Array object 
in runtime.\"\"\"\n __slots__ = [\"handle\", \"is_view\"]\n # pylint: disable=no-member\n def __init__(self, handle, is_view=False):\n \"\"\"Initialize the function with handle\n\n Parameters\n ----------\n handle : TVMArrayHandle\n the handle to the underlying C++ TVMArray\n \"\"\"\n self.handle = handle\n self.is_view = is_view\n\n def __del__(self):\n if not self.is_view and _LIB:\n check_call(_LIB.TVMArrayFree(self.handle))\n\n @property\n def _tvm_handle(self):\n return ctypes.cast(self.handle, ctypes.c_void_p).value\n\n def to_dlpack(self):\n \"\"\"Produce an array from a DLPack Tensor without copying memory\n\n Returns\n -------\n dlpack : DLPack tensor view of the array data\n \"\"\"\n handle = ctypes.c_void_p()\n check_call(_LIB.TVMArrayToDLPack(self.handle, ctypes.byref(handle)))\n return ctypes.pythonapi.PyCapsule_New(handle, _c_str_dltensor, _c_dlpack_deleter)\n\n\ndef _make_array(handle, is_view, is_container):\n global _TVM_ND_CLS\n handle = ctypes.cast(handle, TVMArrayHandle)\n fcreate = _CLASS_NDARRAY\n if is_container and _TVM_ND_CLS:\n array_type_info = ctypes.cast(handle, TVMNDArrayContainerHandle).array_type_info.value\n if array_type_info > 0:\n fcreate = _TVM_ND_CLS[array_type_info]\n return fcreate(handle, is_view)\n\n_TVM_COMPATS = ()\n\ndef _reg_extension(cls, fcreate):\n global _TVM_COMPATS\n _TVM_COMPATS += (cls,)\n if fcreate:\n fret = lambda x: fcreate(_return_handle(x))\n RETURN_SWITCH[cls._tvm_tcode] = fret\n C_TO_PY_ARG_SWITCH[cls._tvm_tcode] = _wrap_arg_func(fret, cls._tvm_tcode)\n\n_TVM_ND_CLS = {}\n\ndef _reg_ndarray(cls, fcreate):\n global _TVM_ND_CLS\n _TVM_ND_CLS[cls._array_type_code] = fcreate\n\n_CLASS_NDARRAY = None\n\ndef _set_class_ndarray(cls):\n global _CLASS_NDARRAY\n _CLASS_NDARRAY = cls\n", "path": "python/tvm/_ffi/_ctypes/ndarray.py"}]}
| 1,663 | 347 |
gh_patches_debug_4541
|
rasdani/github-patches
|
git_diff
|
holoviz__panel-2109
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Video widget appears to be broken in 0.11.x?
#### ALL software version info
Python 3.8.3 x64
Output of `pip list` in the virtualenv I tested this in:
```
Package Version
------------------- ---------
argon2-cffi 20.1.0
async-generator 1.10
attrs 20.3.0
backcall 0.2.0
bleach 3.3.0
bokeh 2.3.0
certifi 2020.12.5
cffi 1.14.5
chardet 4.0.0
colorama 0.4.4
decorator 4.4.2
defusedxml 0.7.1
entrypoints 0.3
idna 2.10
ipykernel 5.5.0
ipython 7.21.0
ipython-genutils 0.2.0
ipywidgets 7.6.3
jedi 0.18.0
Jinja2 2.11.3
jsonschema 3.2.0
jupyter 1.0.0
jupyter-client 6.1.12
jupyter-console 6.3.0
jupyter-core 4.7.1
jupyterlab-pygments 0.1.2
jupyterlab-widgets 1.0.0
Markdown 3.3.4
MarkupSafe 1.1.1
mistune 0.8.4
nbclient 0.5.3
nbconvert 6.0.7
nbformat 5.1.2
nest-asyncio 1.5.1
notebook 6.1.0
numpy 1.20.1
packaging 20.9
pandocfilters 1.4.3
panel 0.11.1
param 1.10.1
parso 0.8.1
pickleshare 0.7.5
Pillow 8.1.2
pip 20.1.1
prometheus-client 0.9.0
prompt-toolkit 3.0.17
pycparser 2.20
pyct 0.4.8
Pygments 2.8.1
pyparsing 2.4.7
pyrsistent 0.17.3
python-dateutil 2.8.1
pyviz-comms 2.0.1
pywin32 300
pywinpty 0.5.7
PyYAML 5.4.1
pyzmq 22.0.3
qtconsole 5.0.3
QtPy 1.9.0
requests 2.25.1
Send2Trash 1.5.0
setuptools 46.4.0
six 1.15.0
terminado 0.9.3
testpath 0.4.4
tornado 6.1
tqdm 4.59.0
traitlets 5.0.5
typing-extensions 3.7.4.3
urllib3 1.26.4
wcwidth 0.2.5
webencodings 0.5.1
wheel 0.34.2
widgetsnbextension 3.5.1
```
Tested on recent versions of Firefox and Chrome, Win10 and Linux.
The problem occurs on both 0.11.0 and 0.11.1, but does not happen if I simply downgrade to 0.10.3.
#### Description of expected behavior and the observed behavior
Initially noticed this trying to play other videos, but it happens with the reference [Video pane example notebook](https://raw.githubusercontent.com/holoviz/panel/master/examples/reference/panes/Video.ipynb). When the cell creating the widget is executed, the video never loads. Checking the generated HTML reveals that the src attribute is empty:
```
<video height="360" width="640" controls="" src="" loop="" style="object-fit: fill; min-width: 100%; min-height: 100%;"></video>
```
compared to the working version from 0.10.3:
```
<video height="360" width="640" controls="" src="https://file-examples-com.github.io/uploads/2017/04/file_example_MP4_640_3MG.mp4" loop="" style="object-fit: fill; min-width: 100%; min-height: 100%;"></video>
```
#### Complete, minimal, self-contained example code that reproduces the issue
Just run the reference [Video.ipynb notebook](https://raw.githubusercontent.com/holoviz/panel/master/examples/reference/panes/Video.ipynb).
#### Stack traceback and/or browser JavaScript console output
There don't seem to be any obvious errors in the JS console or the jupyter server output. A sample log from the JS console when restarting and then running the notebook:
```
kernel.js:106 Kernel: kernel_restarting (28828522-1f07-401a-bb70-0aaa5f7fbf15)
kernel.js:106 Kernel: kernel_created (28828522-1f07-401a-bb70-0aaa5f7fbf15)
kernel.js:463 Starting WebSockets: ws://localhost:8888/api/kernels/28828522-1f07-401a-bb70-0aaa5f7fbf15
kernel.js:106 Kernel: kernel_connected (28828522-1f07-401a-bb70-0aaa5f7fbf15)
kernel.js:106 Kernel: kernel_starting (28828522-1f07-401a-bb70-0aaa5f7fbf15)
kernel.js:106 Kernel: kernel_ready (28828522-1f07-401a-bb70-0aaa5f7fbf15)
kernel.js:106 Kernel: kernel_ready (28828522-1f07-401a-bb70-0aaa5f7fbf15)
bokeh-2.3.0.min.js:184 [bokeh] setting log level to: 'info'
bokeh-2.3.0.min.js:165 [bokeh] document idle at 14 ms
```
#### Screenshots or screencasts of the bug in action
How the video widget appears in both Chrome and Firefox:


--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `panel/pane/media.py`
Content:
```
1 """
2 Contains Media panes including renderers for Audio and Video content.
3 """
4 import os
5
6 from base64 import b64encode
7 from io import BytesIO
8 from six import string_types
9
10 import numpy as np
11 import param
12
13 from ..models import Audio as _BkAudio, Video as _BkVideo
14 from ..util import isfile, isurl
15 from .base import PaneBase
16
17
18 class _MediaBase(PaneBase):
19
20 loop = param.Boolean(default=False, doc="""
21 Whether the meida should loop""")
22
23 time = param.Number(default=0, doc="""
24 The current timestamp""")
25
26 throttle = param.Integer(default=250, doc="""
27 How frequently to sample the current playback time in milliseconds""")
28
29 paused = param.Boolean(default=True, doc="""
30 Whether the media is currently paused""")
31
32 object = param.String(default='', allow_None=True, doc="""
33 The media file either local or remote.""")
34
35 volume = param.Number(default=None, bounds=(0, 100), doc="""
36 The volume of the media player.""")
37
38 _default_mime = None
39
40 _formats = []
41
42 _media_type = None
43
44 _rename = {'name': None, 'sample_rate': None, 'object': 'value'}
45
46 _updates = True
47
48 __abstract = True
49
50 @classmethod
51 def applies(cls, obj):
52 if isinstance(obj, string_types):
53 if isfile(obj) and any(obj.endswith('.'+fmt) for fmt in cls._formats):
54 return True
55 if isurl(obj, cls._formats):
56 return True
57 if hasattr(obj, 'read'): # Check for file like object
58 return True
59 return False
60
61 def _get_model(self, doc, root=None, parent=None, comm=None):
62 props = self._process_param_change(self._init_params())
63 model = self._bokeh_model(**props)
64 if root is None:
65 root = model
66 self._models[root.ref['id']] = (model, parent)
67 self._link_props(model, list(model.properties()), doc, root, comm)
68 return model
69
70 def _from_numpy(self, data):
71 from scipy.io import wavfile
72 buffer = BytesIO()
73 wavfile.write(buffer, self.sample_rate, data)
74 return buffer
75
76 def _process_param_change(self, msg):
77 msg = super()._process_param_change(msg)
78 if 'value' in msg:
79 value = msg['value']
80 if isinstance(value, np.ndarray):
81 fmt = 'wav'
82 buffer = self._from_numpy(value)
83 data = b64encode(buffer.getvalue())
84 elif os.path.isfile(value):
85 fmt = value.split('.')[-1]
86 with open(value, 'rb') as f:
87 data = f.read()
88 data = b64encode(data)
89 elif value.lower().startswith('http'):
90 return msg
91 elif not value:
92 data, fmt = b'', self._default_mime
93 else:
94 raise ValueError('Object should be either path to a sound file or numpy array')
95 template = 'data:audio/{mime};base64,{data}'
96 msg['value'] = template.format(data=data.decode('utf-8'),
97 mime=fmt)
98
99 return msg
100
101
102 class Audio(_MediaBase):
103
104 object = param.ClassSelector(default='', class_=(string_types + (np.ndarray,)),
105 allow_None=True, doc="""
106 The audio file either local or remote.""")
107
108 sample_rate = param.Integer(default=44100, doc="""
109 The sample_rate of the audio when given a NumPy array.""")
110
111 _bokeh_model = _BkAudio
112
113 _default_mime = 'wav'
114
115 _formats = ['mp3', 'wav', 'ogg']
116
117 _media_type = 'audio'
118
119 @classmethod
120 def applies(cls, obj):
121 return (super().applies(obj) or
122 (isinstance(obj, np.ndarray) and obj.ndim==1 and obj.dtype in [np.int16, np.uint16]))
123
124
125 class Video(_MediaBase):
126
127 _bokeh_model = _BkVideo
128
129 _default_mime = 'mp4'
130
131 _formats = ['mp4', 'webm', 'ogg']
132
133 _media_type = 'video'
134
135
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/panel/pane/media.py b/panel/pane/media.py
--- a/panel/pane/media.py
+++ b/panel/pane/media.py
@@ -60,6 +60,8 @@
def _get_model(self, doc, root=None, parent=None, comm=None):
props = self._process_param_change(self._init_params())
+ if self.object is not None:
+ props['value'] = self.object
model = self._bokeh_model(**props)
if root is None:
root = model
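
For illustration only, here is a minimal usage sketch of the patched behaviour. It is a hypothetical driver, not part of the record above, and it assumes a Panel 0.11.x install with this fix applied; the sample URL is the one quoted in the issue.

```python
# Minimal sketch: with the fix, _get_model() passes value=self.object to the
# Bokeh model, so the rendered <video> element gets a non-empty src attribute.
import panel as pn

pn.extension()

video = pn.pane.Video(
    "https://file-examples-com.github.io/uploads/2017/04/file_example_MP4_640_3MG.mp4",
    width=640,
    height=360,
    loop=True,
)
video.servable()  # e.g. `panel serve video_check.py`, then inspect the <video> src
```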
|
{"golden_diff": "diff --git a/panel/pane/media.py b/panel/pane/media.py\n--- a/panel/pane/media.py\n+++ b/panel/pane/media.py\n@@ -60,6 +60,8 @@\n \n def _get_model(self, doc, root=None, parent=None, comm=None):\n props = self._process_param_change(self._init_params())\n+ if self.object is not None:\n+ props['value'] = self.object\n model = self._bokeh_model(**props)\n if root is None:\n root = model\n", "issue": "Video widget appears to be broken in 0.11.x?\n#### ALL software version info\r\nPython 3.8.3 x64\r\n\r\nOutput of `pip list` in the virtualenv I tested this in:\r\n```\r\nPackage Version\r\n------------------- ---------\r\nargon2-cffi 20.1.0\r\nasync-generator 1.10\r\nattrs 20.3.0\r\nbackcall 0.2.0\r\nbleach 3.3.0\r\nbokeh 2.3.0\r\ncertifi 2020.12.5\r\ncffi 1.14.5\r\nchardet 4.0.0\r\ncolorama 0.4.4\r\ndecorator 4.4.2\r\ndefusedxml 0.7.1\r\nentrypoints 0.3\r\nidna 2.10\r\nipykernel 5.5.0\r\nipython 7.21.0\r\nipython-genutils 0.2.0\r\nipywidgets 7.6.3\r\njedi 0.18.0\r\nJinja2 2.11.3\r\njsonschema 3.2.0\r\njupyter 1.0.0\r\njupyter-client 6.1.12\r\njupyter-console 6.3.0\r\njupyter-core 4.7.1\r\njupyterlab-pygments 0.1.2\r\njupyterlab-widgets 1.0.0\r\nMarkdown 3.3.4\r\nMarkupSafe 1.1.1\r\nmistune 0.8.4\r\nnbclient 0.5.3\r\nnbconvert 6.0.7\r\nnbformat 5.1.2\r\nnest-asyncio 1.5.1\r\nnotebook 6.1.0\r\nnumpy 1.20.1\r\npackaging 20.9\r\npandocfilters 1.4.3\r\npanel 0.11.1\r\nparam 1.10.1\r\nparso 0.8.1\r\npickleshare 0.7.5\r\nPillow 8.1.2\r\npip 20.1.1\r\nprometheus-client 0.9.0\r\nprompt-toolkit 3.0.17\r\npycparser 2.20\r\npyct 0.4.8\r\nPygments 2.8.1\r\npyparsing 2.4.7\r\npyrsistent 0.17.3\r\npython-dateutil 2.8.1\r\npyviz-comms 2.0.1\r\npywin32 300\r\npywinpty 0.5.7\r\nPyYAML 5.4.1\r\npyzmq 22.0.3\r\nqtconsole 5.0.3\r\nQtPy 1.9.0\r\nrequests 2.25.1\r\nSend2Trash 1.5.0\r\nsetuptools 46.4.0\r\nsix 1.15.0\r\nterminado 0.9.3\r\ntestpath 0.4.4\r\ntornado 6.1\r\ntqdm 4.59.0\r\ntraitlets 5.0.5\r\ntyping-extensions 3.7.4.3\r\nurllib3 1.26.4\r\nwcwidth 0.2.5\r\nwebencodings 0.5.1\r\nwheel 0.34.2\r\nwidgetsnbextension 3.5.1\r\n```\r\nTested on recent versions of Firefox and Chrome, Win10 and Linux. \r\n\r\nThe problem occurs on both 0.11.0 and 0.11.1, but does not happen if I simply downgrade to 0.10.3. \r\n\r\n#### Description of expected behavior and the observed behavior\r\n\r\nInitially noticed this trying to play other videos, but it happens with the reference [Video pane example notebook](https://raw.githubusercontent.com/holoviz/panel/master/examples/reference/panes/Video.ipynb). When the cell creating the widget is executed, the video never loads. Checking the generated HTML reveals that the src attribute is empty:\r\n\r\n```\r\n<video height=\"360\" width=\"640\" controls=\"\" src=\"\" loop=\"\" style=\"object-fit: fill; min-width: 100%; min-height: 100%;\"></video>\r\n```\r\ncompared to the working version from 0.10.3:\r\n```\r\n<video height=\"360\" width=\"640\" controls=\"\" src=\"https://file-examples-com.github.io/uploads/2017/04/file_example_MP4_640_3MG.mp4\" loop=\"\" style=\"object-fit: fill; min-width: 100%; min-height: 100%;\"></video>\r\n```\r\n\r\n#### Complete, minimal, self-contained example code that reproduces the issue\r\n\r\nJust run the reference [Video.ipynb notebook](https://raw.githubusercontent.com/holoviz/panel/master/examples/reference/panes/Video.ipynb). \r\n\r\n#### Stack traceback and/or browser JavaScript console output\r\n\r\nThere don't seem to be any obvious errors in the JS console or the jupyter server output. 
A sample log from the JS console when restarting and then running the notebook:\r\n\r\n```\r\nkernel.js:106 Kernel: kernel_restarting (28828522-1f07-401a-bb70-0aaa5f7fbf15)\r\nkernel.js:106 Kernel: kernel_created (28828522-1f07-401a-bb70-0aaa5f7fbf15)\r\nkernel.js:463 Starting WebSockets: ws://localhost:8888/api/kernels/28828522-1f07-401a-bb70-0aaa5f7fbf15\r\nkernel.js:106 Kernel: kernel_connected (28828522-1f07-401a-bb70-0aaa5f7fbf15)\r\nkernel.js:106 Kernel: kernel_starting (28828522-1f07-401a-bb70-0aaa5f7fbf15)\r\nkernel.js:106 Kernel: kernel_ready (28828522-1f07-401a-bb70-0aaa5f7fbf15)\r\nkernel.js:106 Kernel: kernel_ready (28828522-1f07-401a-bb70-0aaa5f7fbf15)\r\nbokeh-2.3.0.min.js:184 [bokeh] setting log level to: 'info'\r\nbokeh-2.3.0.min.js:165 [bokeh] document idle at 14 ms\r\n```\r\n\r\n#### Screenshots or screencasts of the bug in action\r\n\r\nHow the video widget appears in both Chrome and Firefox: \r\n\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nContains Media panes including renderers for Audio and Video content.\n\"\"\"\nimport os\n\nfrom base64 import b64encode\nfrom io import BytesIO\nfrom six import string_types\n\nimport numpy as np\nimport param\n\nfrom ..models import Audio as _BkAudio, Video as _BkVideo\nfrom ..util import isfile, isurl\nfrom .base import PaneBase\n\n\nclass _MediaBase(PaneBase):\n\n loop = param.Boolean(default=False, doc=\"\"\"\n Whether the meida should loop\"\"\")\n\n time = param.Number(default=0, doc=\"\"\"\n The current timestamp\"\"\")\n\n throttle = param.Integer(default=250, doc=\"\"\"\n How frequently to sample the current playback time in milliseconds\"\"\")\n\n paused = param.Boolean(default=True, doc=\"\"\"\n Whether the media is currently paused\"\"\")\n\n object = param.String(default='', allow_None=True, doc=\"\"\"\n The media file either local or remote.\"\"\")\n\n volume = param.Number(default=None, bounds=(0, 100), doc=\"\"\"\n The volume of the media player.\"\"\")\n\n _default_mime = None\n\n _formats = []\n\n _media_type = None\n\n _rename = {'name': None, 'sample_rate': None, 'object': 'value'}\n\n _updates = True\n\n __abstract = True\n\n @classmethod\n def applies(cls, obj):\n if isinstance(obj, string_types):\n if isfile(obj) and any(obj.endswith('.'+fmt) for fmt in cls._formats):\n return True\n if isurl(obj, cls._formats):\n return True\n if hasattr(obj, 'read'): # Check for file like object\n return True\n return False\n\n def _get_model(self, doc, root=None, parent=None, comm=None):\n props = self._process_param_change(self._init_params())\n model = self._bokeh_model(**props)\n if root is None:\n root = model\n self._models[root.ref['id']] = (model, parent)\n self._link_props(model, list(model.properties()), doc, root, comm)\n return model\n\n def _from_numpy(self, data):\n from scipy.io import wavfile\n buffer = BytesIO()\n wavfile.write(buffer, self.sample_rate, data)\n return buffer\n\n def _process_param_change(self, msg):\n msg = super()._process_param_change(msg)\n if 'value' in msg:\n value = msg['value']\n if isinstance(value, np.ndarray):\n fmt = 'wav'\n buffer = self._from_numpy(value)\n data = b64encode(buffer.getvalue())\n elif os.path.isfile(value):\n fmt = value.split('.')[-1]\n with open(value, 'rb') as f:\n data = f.read()\n data = b64encode(data)\n elif value.lower().startswith('http'):\n return msg\n elif not value:\n data, fmt = b'', self._default_mime\n else:\n raise ValueError('Object should be either path to a sound file or numpy array')\n template = 'data:audio/{mime};base64,{data}'\n 
msg['value'] = template.format(data=data.decode('utf-8'),\n mime=fmt)\n \n return msg\n\n\nclass Audio(_MediaBase):\n\n object = param.ClassSelector(default='', class_=(string_types + (np.ndarray,)),\n allow_None=True, doc=\"\"\"\n The audio file either local or remote.\"\"\")\n\n sample_rate = param.Integer(default=44100, doc=\"\"\"\n The sample_rate of the audio when given a NumPy array.\"\"\")\n\n _bokeh_model = _BkAudio\n\n _default_mime = 'wav'\n\n _formats = ['mp3', 'wav', 'ogg']\n\n _media_type = 'audio'\n\n @classmethod\n def applies(cls, obj):\n return (super().applies(obj) or \n (isinstance(obj, np.ndarray) and obj.ndim==1 and obj.dtype in [np.int16, np.uint16]))\n\n\nclass Video(_MediaBase):\n\n _bokeh_model = _BkVideo\n\n _default_mime = 'mp4'\n\n _formats = ['mp4', 'webm', 'ogg']\n\n _media_type = 'video'\n\n", "path": "panel/pane/media.py"}], "after_files": [{"content": "\"\"\"\nContains Media panes including renderers for Audio and Video content.\n\"\"\"\nimport os\n\nfrom base64 import b64encode\nfrom io import BytesIO\nfrom six import string_types\n\nimport numpy as np\nimport param\n\nfrom ..models import Audio as _BkAudio, Video as _BkVideo\nfrom ..util import isfile, isurl\nfrom .base import PaneBase\n\n\nclass _MediaBase(PaneBase):\n\n loop = param.Boolean(default=False, doc=\"\"\"\n Whether the meida should loop\"\"\")\n\n time = param.Number(default=0, doc=\"\"\"\n The current timestamp\"\"\")\n\n throttle = param.Integer(default=250, doc=\"\"\"\n How frequently to sample the current playback time in milliseconds\"\"\")\n\n paused = param.Boolean(default=True, doc=\"\"\"\n Whether the media is currently paused\"\"\")\n\n object = param.String(default='', allow_None=True, doc=\"\"\"\n The media file either local or remote.\"\"\")\n\n volume = param.Number(default=None, bounds=(0, 100), doc=\"\"\"\n The volume of the media player.\"\"\")\n\n _default_mime = None\n\n _formats = []\n\n _media_type = None\n\n _rename = {'name': None, 'sample_rate': None, 'object': 'value'}\n\n _updates = True\n\n __abstract = True\n\n @classmethod\n def applies(cls, obj):\n if isinstance(obj, string_types):\n if isfile(obj) and any(obj.endswith('.'+fmt) for fmt in cls._formats):\n return True\n if isurl(obj, cls._formats):\n return True\n if hasattr(obj, 'read'): # Check for file like object\n return True\n return False\n\n def _get_model(self, doc, root=None, parent=None, comm=None):\n props = self._process_param_change(self._init_params())\n if self.object is not None:\n props['value'] = self.object\n model = self._bokeh_model(**props)\n if root is None:\n root = model\n self._models[root.ref['id']] = (model, parent)\n self._link_props(model, list(model.properties()), doc, root, comm)\n return model\n\n def _from_numpy(self, data):\n from scipy.io import wavfile\n buffer = BytesIO()\n wavfile.write(buffer, self.sample_rate, data)\n return buffer\n\n def _process_param_change(self, msg):\n msg = super()._process_param_change(msg)\n if 'value' in msg:\n value = msg['value']\n if isinstance(value, np.ndarray):\n fmt = 'wav'\n buffer = self._from_numpy(value)\n data = b64encode(buffer.getvalue())\n elif os.path.isfile(value):\n fmt = value.split('.')[-1]\n with open(value, 'rb') as f:\n data = f.read()\n data = b64encode(data)\n elif value.lower().startswith('http'):\n return msg\n elif not value:\n data, fmt = b'', self._default_mime\n else:\n raise ValueError('Object should be either path to a sound file or numpy array')\n template = 'data:audio/{mime};base64,{data}'\n msg['value'] = 
template.format(data=data.decode('utf-8'),\n mime=fmt)\n \n return msg\n\n\nclass Audio(_MediaBase):\n\n object = param.ClassSelector(default='', class_=(string_types + (np.ndarray,)),\n allow_None=True, doc=\"\"\"\n The audio file either local or remote.\"\"\")\n\n sample_rate = param.Integer(default=44100, doc=\"\"\"\n The sample_rate of the audio when given a NumPy array.\"\"\")\n\n _bokeh_model = _BkAudio\n\n _default_mime = 'wav'\n\n _formats = ['mp3', 'wav', 'ogg']\n\n _media_type = 'audio'\n\n @classmethod\n def applies(cls, obj):\n return (super().applies(obj) or \n (isinstance(obj, np.ndarray) and obj.ndim==1 and obj.dtype in [np.int16, np.uint16]))\n\n\nclass Video(_MediaBase):\n\n _bokeh_model = _BkVideo\n\n _default_mime = 'mp4'\n\n _formats = ['mp4', 'webm', 'ogg']\n\n _media_type = 'video'\n\n", "path": "panel/pane/media.py"}]}
| 3,256 | 121 |
gh_patches_debug_15795
|
rasdani/github-patches
|
git_diff
|
zestedesavoir__zds-site-6544
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Make scheduling front-page features ("Unes") easier by changing the publication date input type.
As of today, when creating a front-page feature ("Une") you have to fill in the "Date de publication" field as free text in a format like "2023/08/21 10:00", which is fairly unpleasant to do.

The suggestions shown are front-page features I had already created.
Adding the "datetime-local" type to this input would give access to the native browser/OS interfaces for this kind of input.

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `zds/featured/forms.py`
Content:
```
1 from crispy_forms.bootstrap import StrictButton
2 from crispy_forms.helper import FormHelper
3 from crispy_forms.layout import Layout, Field, ButtonHolder
4 from django import forms
5 from django.urls import reverse
6 from django.utils.translation import gettext_lazy as _
7
8 from zds.featured.models import FeaturedResource, FeaturedMessage
9
10
11 class FeaturedResourceForm(forms.ModelForm):
12 class Meta:
13 model = FeaturedResource
14
15 fields = ["title", "type", "authors", "image_url", "url"]
16
17 widgets = {
18 "title": forms.TextInput(attrs={"placeholder": _("Titre de la Une")}),
19 "type": forms.TextInput(attrs={"placeholder": _("ex: Un projet, Un article, Un tutoriel...")}),
20 "authors": forms.TextInput(attrs={"placeholder": _("Des auteurs (ou pas) ?")}),
21 "image_url": forms.URLInput(
22 attrs={"placeholder": _("Lien vers l'image de la Une (dimensions: 228x228px).")}
23 ),
24 "url": forms.URLInput(attrs={"placeholder": _("Lien vers la ressource.")}),
25 }
26
27 major_update = forms.BooleanField(
28 label=_("Mise à jour majeure (fera passer la Une en première position lors d'un changement)"),
29 initial=False,
30 required=False,
31 )
32
33 pubdate = forms.DateTimeField(
34 label=_("Date de publication (exemple: 25/12/2015 15:00 ou 2015-12-25T15:00)"),
35 input_formats=[
36 "%d/%m/%Y %H:%M:%S",
37 "%Y-%m-%d %H:%M:%S", # full format with second
38 "%Y-%m-%dT%H:%M", # datetime field format
39 "%Y-%m-%d %H:%M",
40 "%d/%m/%Y %H:%M", # without second
41 "%Y-%m-%d",
42 "%d/%m/%Y", # day only
43 ],
44 widget=forms.DateTimeInput(
45 attrs={"placeholder": _("Exemple : 25/12/2016 10:00"), "type": "text"},
46 format="%d/%m/%Y %H:%M", # datetime field format
47 ),
48 )
49
50 request = forms.IntegerField(widget=forms.HiddenInput(), required=False)
51
52 def __init__(self, *args, **kwargs):
53 hide_major_update_field = kwargs.pop("hide_major_update_field", False)
54
55 super().__init__(*args, **kwargs)
56 self.helper = FormHelper()
57 self.helper.form_class = "content-wrapper"
58 self.helper.form_method = "post"
59 self.helper.form_action = reverse("featured:resource-create")
60
61 fields = [Field("request"), Field("title"), Field("type"), Field("authors"), Field("image_url"), Field("url")]
62
63 if not hide_major_update_field:
64 fields.append(Field("major_update"))
65
66 fields.extend(
67 [
68 Field("pubdate"),
69 ButtonHolder(
70 StrictButton(_("Enregistrer"), type="submit"),
71 ),
72 ]
73 )
74
75 self.helper.layout = Layout(*fields)
76
77
78 class FeaturedMessageForm(forms.ModelForm):
79 class Meta:
80 model = FeaturedMessage
81
82 fields = ["hook", "message", "url"]
83
84 widgets = {
85 "hook": forms.TextInput(attrs={"placeholder": _('Mot d\'accroche court ("Nouveau !")')}),
86 "message": forms.TextInput(attrs={"placeholder": _("Message à afficher")}),
87 "url": forms.URLInput(attrs={"placeholder": _("Lien vers la description de la ressource")}),
88 }
89
90 def __init__(self, *args, **kwargs):
91 super().__init__(*args, **kwargs)
92 self.helper = FormHelper()
93 self.helper.form_class = "content-wrapper"
94 self.helper.form_method = "post"
95 self.helper.form_action = reverse("featured:message-create")
96
97 self.helper.layout = Layout(
98 Field("hook"),
99 Field("message"),
100 Field("url"),
101 ButtonHolder(
102 StrictButton(_("Enregistrer"), type="submit"),
103 ),
104 )
105
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/zds/featured/forms.py b/zds/featured/forms.py
--- a/zds/featured/forms.py
+++ b/zds/featured/forms.py
@@ -31,20 +31,8 @@
)
pubdate = forms.DateTimeField(
- label=_("Date de publication (exemple: 25/12/2015 15:00 ou 2015-12-25T15:00)"),
- input_formats=[
- "%d/%m/%Y %H:%M:%S",
- "%Y-%m-%d %H:%M:%S", # full format with second
- "%Y-%m-%dT%H:%M", # datetime field format
- "%Y-%m-%d %H:%M",
- "%d/%m/%Y %H:%M", # without second
- "%Y-%m-%d",
- "%d/%m/%Y", # day only
- ],
- widget=forms.DateTimeInput(
- attrs={"placeholder": _("Exemple : 25/12/2016 10:00"), "type": "text"},
- format="%d/%m/%Y %H:%M", # datetime field format
- ),
+ label=_("Date de publication (exemple: 25/12/2015 15:00)"),
+ widget=forms.DateTimeInput(attrs={"type": "datetime-local"}),
)
request = forms.IntegerField(widget=forms.HiddenInput(), required=False)
|
{"golden_diff": "diff --git a/zds/featured/forms.py b/zds/featured/forms.py\n--- a/zds/featured/forms.py\n+++ b/zds/featured/forms.py\n@@ -31,20 +31,8 @@\n )\n \n pubdate = forms.DateTimeField(\n- label=_(\"Date de publication (exemple: 25/12/2015 15:00 ou 2015-12-25T15:00)\"),\n- input_formats=[\n- \"%d/%m/%Y %H:%M:%S\",\n- \"%Y-%m-%d %H:%M:%S\", # full format with second\n- \"%Y-%m-%dT%H:%M\", # datetime field format\n- \"%Y-%m-%d %H:%M\",\n- \"%d/%m/%Y %H:%M\", # without second\n- \"%Y-%m-%d\",\n- \"%d/%m/%Y\", # day only\n- ],\n- widget=forms.DateTimeInput(\n- attrs={\"placeholder\": _(\"Exemple : 25/12/2016 10:00\"), \"type\": \"text\"},\n- format=\"%d/%m/%Y %H:%M\", # datetime field format\n- ),\n+ label=_(\"Date de publication (exemple: 25/12/2015 15:00)\"),\n+ widget=forms.DateTimeInput(attrs={\"type\": \"datetime-local\"}),\n )\n \n request = forms.IntegerField(widget=forms.HiddenInput(), required=False)\n", "issue": "Faciliter la programmation des Unes en modifiant le type de la date de publication.\n\u00c0 ce jour, quand on cr\u00e9er une Unes il faut remplir le champ \"Date de publication\" avec un format texte de style \"2023/08/21 10:00\". C'est assez d\u00e9sagr\u00e9able \u00e0 remplir. \r\n\r\n\r\nLes propositions sont des Unes que j'ai d\u00e9j\u00e0 faite.\r\n\r\nEn ajoutant le type \"datetime-local\" \u00e0 cette input on pourrait acc\u00e8der aux interfaces natives des navigateurs/OS pour ce genre d'input.\r\n\r\n\n", "before_files": [{"content": "from crispy_forms.bootstrap import StrictButton\nfrom crispy_forms.helper import FormHelper\nfrom crispy_forms.layout import Layout, Field, ButtonHolder\nfrom django import forms\nfrom django.urls import reverse\nfrom django.utils.translation import gettext_lazy as _\n\nfrom zds.featured.models import FeaturedResource, FeaturedMessage\n\n\nclass FeaturedResourceForm(forms.ModelForm):\n class Meta:\n model = FeaturedResource\n\n fields = [\"title\", \"type\", \"authors\", \"image_url\", \"url\"]\n\n widgets = {\n \"title\": forms.TextInput(attrs={\"placeholder\": _(\"Titre de la Une\")}),\n \"type\": forms.TextInput(attrs={\"placeholder\": _(\"ex: Un projet, Un article, Un tutoriel...\")}),\n \"authors\": forms.TextInput(attrs={\"placeholder\": _(\"Des auteurs (ou pas)\u00a0?\")}),\n \"image_url\": forms.URLInput(\n attrs={\"placeholder\": _(\"Lien vers l'image de la Une (dimensions: 228x228px).\")}\n ),\n \"url\": forms.URLInput(attrs={\"placeholder\": _(\"Lien vers la ressource.\")}),\n }\n\n major_update = forms.BooleanField(\n label=_(\"Mise \u00e0 jour majeure (fera passer la Une en premi\u00e8re position lors d'un changement)\"),\n initial=False,\n required=False,\n )\n\n pubdate = forms.DateTimeField(\n label=_(\"Date de publication (exemple: 25/12/2015 15:00 ou 2015-12-25T15:00)\"),\n input_formats=[\n \"%d/%m/%Y %H:%M:%S\",\n \"%Y-%m-%d %H:%M:%S\", # full format with second\n \"%Y-%m-%dT%H:%M\", # datetime field format\n \"%Y-%m-%d %H:%M\",\n \"%d/%m/%Y %H:%M\", # without second\n \"%Y-%m-%d\",\n \"%d/%m/%Y\", # day only\n ],\n widget=forms.DateTimeInput(\n attrs={\"placeholder\": _(\"Exemple : 25/12/2016 10:00\"), \"type\": \"text\"},\n format=\"%d/%m/%Y %H:%M\", # datetime field format\n ),\n )\n\n request = forms.IntegerField(widget=forms.HiddenInput(), required=False)\n\n def __init__(self, *args, **kwargs):\n hide_major_update_field = kwargs.pop(\"hide_major_update_field\", False)\n\n super().__init__(*args, **kwargs)\n self.helper = FormHelper()\n self.helper.form_class = \"content-wrapper\"\n 
self.helper.form_method = \"post\"\n self.helper.form_action = reverse(\"featured:resource-create\")\n\n fields = [Field(\"request\"), Field(\"title\"), Field(\"type\"), Field(\"authors\"), Field(\"image_url\"), Field(\"url\")]\n\n if not hide_major_update_field:\n fields.append(Field(\"major_update\"))\n\n fields.extend(\n [\n Field(\"pubdate\"),\n ButtonHolder(\n StrictButton(_(\"Enregistrer\"), type=\"submit\"),\n ),\n ]\n )\n\n self.helper.layout = Layout(*fields)\n\n\nclass FeaturedMessageForm(forms.ModelForm):\n class Meta:\n model = FeaturedMessage\n\n fields = [\"hook\", \"message\", \"url\"]\n\n widgets = {\n \"hook\": forms.TextInput(attrs={\"placeholder\": _('Mot d\\'accroche court (\"Nouveau\u00a0!\")')}),\n \"message\": forms.TextInput(attrs={\"placeholder\": _(\"Message \u00e0 afficher\")}),\n \"url\": forms.URLInput(attrs={\"placeholder\": _(\"Lien vers la description de la ressource\")}),\n }\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.helper = FormHelper()\n self.helper.form_class = \"content-wrapper\"\n self.helper.form_method = \"post\"\n self.helper.form_action = reverse(\"featured:message-create\")\n\n self.helper.layout = Layout(\n Field(\"hook\"),\n Field(\"message\"),\n Field(\"url\"),\n ButtonHolder(\n StrictButton(_(\"Enregistrer\"), type=\"submit\"),\n ),\n )\n", "path": "zds/featured/forms.py"}], "after_files": [{"content": "from crispy_forms.bootstrap import StrictButton\nfrom crispy_forms.helper import FormHelper\nfrom crispy_forms.layout import Layout, Field, ButtonHolder\nfrom django import forms\nfrom django.urls import reverse\nfrom django.utils.translation import gettext_lazy as _\n\nfrom zds.featured.models import FeaturedResource, FeaturedMessage\n\n\nclass FeaturedResourceForm(forms.ModelForm):\n class Meta:\n model = FeaturedResource\n\n fields = [\"title\", \"type\", \"authors\", \"image_url\", \"url\"]\n\n widgets = {\n \"title\": forms.TextInput(attrs={\"placeholder\": _(\"Titre de la Une\")}),\n \"type\": forms.TextInput(attrs={\"placeholder\": _(\"ex: Un projet, Un article, Un tutoriel...\")}),\n \"authors\": forms.TextInput(attrs={\"placeholder\": _(\"Des auteurs (ou pas)\u00a0?\")}),\n \"image_url\": forms.URLInput(\n attrs={\"placeholder\": _(\"Lien vers l'image de la Une (dimensions: 228x228px).\")}\n ),\n \"url\": forms.URLInput(attrs={\"placeholder\": _(\"Lien vers la ressource.\")}),\n }\n\n major_update = forms.BooleanField(\n label=_(\"Mise \u00e0 jour majeure (fera passer la Une en premi\u00e8re position lors d'un changement)\"),\n initial=False,\n required=False,\n )\n\n pubdate = forms.DateTimeField(\n label=_(\"Date de publication (exemple: 25/12/2015 15:00)\"),\n widget=forms.DateTimeInput(attrs={\"type\": \"datetime-local\"}),\n )\n\n request = forms.IntegerField(widget=forms.HiddenInput(), required=False)\n\n def __init__(self, *args, **kwargs):\n hide_major_update_field = kwargs.pop(\"hide_major_update_field\", False)\n\n super().__init__(*args, **kwargs)\n self.helper = FormHelper()\n self.helper.form_class = \"content-wrapper\"\n self.helper.form_method = \"post\"\n self.helper.form_action = reverse(\"featured:resource-create\")\n\n fields = [Field(\"request\"), Field(\"title\"), Field(\"type\"), Field(\"authors\"), Field(\"image_url\"), Field(\"url\")]\n\n if not hide_major_update_field:\n fields.append(Field(\"major_update\"))\n\n fields.extend(\n [\n Field(\"pubdate\"),\n ButtonHolder(\n StrictButton(_(\"Enregistrer\"), type=\"submit\"),\n ),\n ]\n )\n\n self.helper.layout = 
Layout(*fields)\n\n\nclass FeaturedMessageForm(forms.ModelForm):\n class Meta:\n model = FeaturedMessage\n\n fields = [\"hook\", \"message\", \"url\"]\n\n widgets = {\n \"hook\": forms.TextInput(attrs={\"placeholder\": _('Mot d\\'accroche court (\"Nouveau\u00a0!\")')}),\n \"message\": forms.TextInput(attrs={\"placeholder\": _(\"Message \u00e0 afficher\")}),\n \"url\": forms.URLInput(attrs={\"placeholder\": _(\"Lien vers la description de la ressource\")}),\n }\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.helper = FormHelper()\n self.helper.form_class = \"content-wrapper\"\n self.helper.form_method = \"post\"\n self.helper.form_action = reverse(\"featured:message-create\")\n\n self.helper.layout = Layout(\n Field(\"hook\"),\n Field(\"message\"),\n Field(\"url\"),\n ButtonHolder(\n StrictButton(_(\"Enregistrer\"), type=\"submit\"),\n ),\n )\n", "path": "zds/featured/forms.py"}]}
| 1,605 | 346 |
gh_patches_debug_8191
|
rasdani/github-patches
|
git_diff
|
TheAlgorithms__Python-10361
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
missing_number algorithm doesn't work as intended (bit_manipulation/missing_number.py)
### Repository commit
d0c54acd75cedf14cff353869482a0487fea1697
### Python version (python --version)
Python 3.12.0
### Dependencies version (pip freeze)
setuptools==68.2.2
wheel==0.41.2
### Expected behavior
for array [1,3,4,5,6] the output should be 2
### Actual behavior
the output obtained is 4
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bit_manipulation/missing_number.py`
Content:
```
1 def find_missing_number(nums: list[int]) -> int:
2 """
3 Finds the missing number in a list of consecutive integers.
4
5 Args:
6 nums: A list of integers.
7
8 Returns:
9 The missing number.
10
11 Example:
12 >>> find_missing_number([0, 1, 3, 4])
13 2
14 """
15 n = len(nums)
16 missing_number = n
17
18 for i in range(n):
19 missing_number ^= i ^ nums[i]
20
21 return missing_number
22
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/bit_manipulation/missing_number.py b/bit_manipulation/missing_number.py
--- a/bit_manipulation/missing_number.py
+++ b/bit_manipulation/missing_number.py
@@ -11,11 +11,18 @@
Example:
>>> find_missing_number([0, 1, 3, 4])
2
+ >>> find_missing_number([1, 3, 4, 5, 6])
+ 2
+ >>> find_missing_number([6, 5, 4, 2, 1])
+ 3
+ >>> find_missing_number([6, 1, 5, 3, 4])
+ 2
"""
- n = len(nums)
- missing_number = n
+ low = min(nums)
+ high = max(nums)
+ missing_number = high
- for i in range(n):
- missing_number ^= i ^ nums[i]
+ for i in range(low, high):
+ missing_number ^= i ^ nums[i - low]
return missing_number
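
A quick self-contained sanity check of the patched XOR approach; the function below mirrors the fixed code, and the extra cases correspond to the report in the issue.

```python
# Mirror of the patched find_missing_number for a standalone check.
def find_missing_number(nums: list[int]) -> int:
    low, high = min(nums), max(nums)
    missing = high
    for i in range(low, high):
        missing ^= i ^ nums[i - low]
    return missing


assert find_missing_number([0, 1, 3, 4]) == 2
assert find_missing_number([1, 3, 4, 5, 6]) == 2   # case from the issue, now 2 not 4
assert find_missing_number([6, 5, 4, 2, 1]) == 3   # unordered input
print("all checks passed")
```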
|
{"golden_diff": "diff --git a/bit_manipulation/missing_number.py b/bit_manipulation/missing_number.py\n--- a/bit_manipulation/missing_number.py\n+++ b/bit_manipulation/missing_number.py\n@@ -11,11 +11,18 @@\n Example:\n >>> find_missing_number([0, 1, 3, 4])\n 2\n+ >>> find_missing_number([1, 3, 4, 5, 6])\n+ 2\n+ >>> find_missing_number([6, 5, 4, 2, 1])\n+ 3\n+ >>> find_missing_number([6, 1, 5, 3, 4])\n+ 2\n \"\"\"\n- n = len(nums)\n- missing_number = n\n+ low = min(nums)\n+ high = max(nums)\n+ missing_number = high\n \n- for i in range(n):\n- missing_number ^= i ^ nums[i]\n+ for i in range(low, high):\n+ missing_number ^= i ^ nums[i - low]\n \n return missing_number\n", "issue": "missing_number algorithm dosen't work as intended (bit_manipulation/missing_number.py)\n### Repository commit\n\nd0c54acd75cedf14cff353869482a0487fea1697\n\n### Python version (python --version)\n\nPython 3.12.0\n\n### Dependencies version (pip freeze)\n\nsetuptools==68.2.2\r\nwheel==0.41.2\n\n### Expected behavior\n\nfor array [1,3,4,5,6] the output should be 2\n\n### Actual behavior\n\nthe output got is 4\n", "before_files": [{"content": "def find_missing_number(nums: list[int]) -> int:\n \"\"\"\n Finds the missing number in a list of consecutive integers.\n\n Args:\n nums: A list of integers.\n\n Returns:\n The missing number.\n\n Example:\n >>> find_missing_number([0, 1, 3, 4])\n 2\n \"\"\"\n n = len(nums)\n missing_number = n\n\n for i in range(n):\n missing_number ^= i ^ nums[i]\n\n return missing_number\n", "path": "bit_manipulation/missing_number.py"}], "after_files": [{"content": "def find_missing_number(nums: list[int]) -> int:\n \"\"\"\n Finds the missing number in a list of consecutive integers.\n\n Args:\n nums: A list of integers.\n\n Returns:\n The missing number.\n\n Example:\n >>> find_missing_number([0, 1, 3, 4])\n 2\n >>> find_missing_number([1, 3, 4, 5, 6])\n 2\n >>> find_missing_number([6, 5, 4, 2, 1])\n 3\n >>> find_missing_number([6, 1, 5, 3, 4])\n 2\n \"\"\"\n low = min(nums)\n high = max(nums)\n missing_number = high\n\n for i in range(low, high):\n missing_number ^= i ^ nums[i - low]\n\n return missing_number\n", "path": "bit_manipulation/missing_number.py"}]}
| 540 | 243 |
gh_patches_debug_15727
|
rasdani/github-patches
|
git_diff
|
crytic__slither-561
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
AssertionError when obtaining address of library
```solidity
library UnsafeMath {
function add(uint a, uint b) external returns (uint) {
return a + b;
}
}
contract Test {
function getUnsafeMathAddr() public view returns (address) {
return address(UnsafeMath);
}
}
```
https://solidity.readthedocs.io/en/latest/contracts.html#libraries:~:text=It%20is%20possible%20to%20obtain%20the%20address%20of%20a%20library
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `slither/slithir/operations/type_conversion.py`
Content:
```
1 from slither.core.solidity_types.type import Type
2 from slither.slithir.operations.lvalue import OperationWithLValue
3 from slither.slithir.utils.utils import is_valid_lvalue, is_valid_rvalue
4
5
6 class TypeConversion(OperationWithLValue):
7
8 def __init__(self, result, variable, variable_type):
9 super().__init__()
10 assert is_valid_rvalue(variable)
11 assert is_valid_lvalue(result)
12 assert isinstance(variable_type, Type)
13
14 self._variable = variable
15 self._type = variable_type
16 self._lvalue = result
17
18
19 @property
20 def variable(self):
21 return self._variable
22
23 @property
24 def type(self):
25 return self._type
26
27 @property
28 def read(self):
29 return [self.variable]
30
31 def __str__(self):
32 return str(self.lvalue) +' = CONVERT {} to {}'.format(self.variable, self.type)
33
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/slither/slithir/operations/type_conversion.py b/slither/slithir/operations/type_conversion.py
--- a/slither/slithir/operations/type_conversion.py
+++ b/slither/slithir/operations/type_conversion.py
@@ -1,3 +1,4 @@
+from slither.core.declarations import Contract
from slither.core.solidity_types.type import Type
from slither.slithir.operations.lvalue import OperationWithLValue
from slither.slithir.utils.utils import is_valid_lvalue, is_valid_rvalue
@@ -7,7 +8,7 @@
def __init__(self, result, variable, variable_type):
super().__init__()
- assert is_valid_rvalue(variable)
+ assert is_valid_rvalue(variable) or isinstance(variable, Contract)
assert is_valid_lvalue(result)
assert isinstance(variable_type, Type)
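
A hypothetical driver, not part of the patch, that exercises the fixed conversion on the contracts from the issue; it assumes `slither-analyzer` with this change, a compatible `solc` on PATH, and the Solidity snippet saved as `Test.sol`.

```python
# Sketch: print the SlithIR of every function; with the fix, the
# `CONVERT UnsafeMath to address` operation is emitted instead of an AssertionError.
from slither.slither import Slither

sl = Slither("Test.sol")  # file containing the UnsafeMath / Test example above
for contract in sl.contracts:
    for function in contract.functions:
        for node in function.nodes:
            for ir in node.irs:
                print(f"{contract.name}.{function.name}: {ir}")
```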
|
{"golden_diff": "diff --git a/slither/slithir/operations/type_conversion.py b/slither/slithir/operations/type_conversion.py\n--- a/slither/slithir/operations/type_conversion.py\n+++ b/slither/slithir/operations/type_conversion.py\n@@ -1,3 +1,4 @@\n+from slither.core.declarations import Contract\n from slither.core.solidity_types.type import Type\n from slither.slithir.operations.lvalue import OperationWithLValue\n from slither.slithir.utils.utils import is_valid_lvalue, is_valid_rvalue\n@@ -7,7 +8,7 @@\n \n def __init__(self, result, variable, variable_type):\n super().__init__()\n- assert is_valid_rvalue(variable)\n+ assert is_valid_rvalue(variable) or isinstance(variable, Contract)\n assert is_valid_lvalue(result)\n assert isinstance(variable_type, Type)\n", "issue": "AssertionError when obtaining address of library\n```solidity\r\nlibrary UnsafeMath {\r\n function add(uint a, uint b) external returns (uint) {\r\n return a + b;\r\n }\r\n}\r\n\r\ncontract Test {\r\n function getUnsafeMathAddr() public view returns (address) {\r\n return address(UnsafeMath);\r\n }\r\n}\r\n```\r\n\r\nhttps://solidity.readthedocs.io/en/latest/contracts.html#libraries:~:text=It%20is%20possible%20to%20obtain%20the%20address%20of%20a%20library\n", "before_files": [{"content": "from slither.core.solidity_types.type import Type\nfrom slither.slithir.operations.lvalue import OperationWithLValue\nfrom slither.slithir.utils.utils import is_valid_lvalue, is_valid_rvalue\n\n\nclass TypeConversion(OperationWithLValue):\n\n def __init__(self, result, variable, variable_type):\n super().__init__()\n assert is_valid_rvalue(variable)\n assert is_valid_lvalue(result)\n assert isinstance(variable_type, Type)\n\n self._variable = variable\n self._type = variable_type\n self._lvalue = result\n \n\n @property\n def variable(self):\n return self._variable\n\n @property\n def type(self):\n return self._type\n\n @property\n def read(self):\n return [self.variable]\n\n def __str__(self):\n return str(self.lvalue) +' = CONVERT {} to {}'.format(self.variable, self.type)\n", "path": "slither/slithir/operations/type_conversion.py"}], "after_files": [{"content": "from slither.core.declarations import Contract\nfrom slither.core.solidity_types.type import Type\nfrom slither.slithir.operations.lvalue import OperationWithLValue\nfrom slither.slithir.utils.utils import is_valid_lvalue, is_valid_rvalue\n\n\nclass TypeConversion(OperationWithLValue):\n\n def __init__(self, result, variable, variable_type):\n super().__init__()\n assert is_valid_rvalue(variable) or isinstance(variable, Contract)\n assert is_valid_lvalue(result)\n assert isinstance(variable_type, Type)\n\n self._variable = variable\n self._type = variable_type\n self._lvalue = result\n \n\n @property\n def variable(self):\n return self._variable\n\n @property\n def type(self):\n return self._type\n\n @property\n def read(self):\n return [self.variable]\n\n def __str__(self):\n return str(self.lvalue) +' = CONVERT {} to {}'.format(self.variable, self.type)\n", "path": "slither/slithir/operations/type_conversion.py"}]}
| 650 | 188 |
gh_patches_debug_35196
|
rasdani/github-patches
|
git_diff
|
facebookresearch__CompilerGym-692
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add support for loading URLs to CompilerEnvStateReader.read_paths()
## 🚀 Feature
Extend [CompilerEnvStateReader.read_paths()](https://github.com/facebookresearch/CompilerGym/blob/de07d4867e0bb0b47f6fa4bce5e262ea8f014c3e/tests/compiler_env_state_test.py#L212-L335) so that any combination of file path or URL can be loaded.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `compiler_gym/compiler_env_state.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates.
2 #
3 # This source code is licensed under the MIT license found in the
4 # LICENSE file in the root directory of this source tree.
5 """This module defines a class to represent a compiler environment state."""
6 import csv
7 import sys
8 from typing import Iterable, List, Optional, TextIO
9
10 from pydantic import BaseModel, Field, validator
11
12 from compiler_gym.datasets.uri import BenchmarkUri
13 from compiler_gym.util.truncate import truncate
14
15
16 class CompilerEnvState(BaseModel):
17 """The representation of a compiler environment state.
18
19 The state of an environment is defined as a benchmark and a sequence of
20 actions that has been applied to it. For a given environment, the state
21 contains the information required to reproduce the result.
22 """
23
24 benchmark: str = Field(
25 allow_mutation=False,
26 examples=[
27 "benchmark://cbench-v1/crc32",
28 "generator://csmith-v0/0",
29 ],
30 )
31 """The URI of the benchmark used for this episode."""
32
33 commandline: str
34 """The list of actions that produced this state, as a commandline."""
35
36 walltime: float
37 """The walltime of the episode in seconds. Must be non-negative."""
38
39 reward: Optional[float] = Field(
40 required=False,
41 default=None,
42 allow_mutation=True,
43 )
44 """The cumulative reward for this episode. Optional."""
45
46 @validator("walltime")
47 def walltime_nonnegative(cls, v):
48 if v is not None:
49 assert v >= 0, "Walltime cannot be negative"
50 return v
51
52 @validator("benchmark", pre=True)
53 def validate_benchmark(cls, value):
54 if isinstance(value, BenchmarkUri):
55 return str(value)
56 return value
57
58 @property
59 def has_reward(self) -> bool:
60 """Return whether the state has a reward value."""
61 return self.reward is not None
62
63 def __eq__(self, rhs) -> bool:
64 if not isinstance(rhs, CompilerEnvState):
65 return False
66 epsilon = 1e-5
67 # Only compare reward if both states have it.
68 if not (self.has_reward and rhs.has_reward):
69 reward_equal = True
70 else:
71 reward_equal = abs(self.reward - rhs.reward) < epsilon
72 # Note that walltime is excluded from equivalence checks as two states
73 # are equivalent if they define the same point in the optimization space
74 # irrespective of how long it took to get there.
75 return (
76 self.benchmark == rhs.benchmark
77 and reward_equal
78 and self.commandline == rhs.commandline
79 )
80
81 def __ne__(self, rhs) -> bool:
82 return not self == rhs
83
84 class Config:
85 validate_assignment = True
86
87
88 class CompilerEnvStateWriter:
89 """Serialize compiler environment states to CSV.
90
91 Example use:
92
93 >>> with CompilerEnvStateWriter(open("results.csv", "wb")) as writer:
94 ... writer.write_state(env.state)
95 """
96
97 def __init__(self, f: TextIO, header: bool = True):
98 """Constructor.
99
100 :param f: The file to write to.
101 :param header: Whether to include a header row.
102 """
103 self.f = f
104 self.writer = csv.writer(self.f, lineterminator="\n")
105 self.header = header
106
107 def write_state(self, state: CompilerEnvState, flush: bool = False) -> None:
108 """Write the state to file.
109
110 :param state: A compiler environment state.
111
112 :param flush: Write to file immediately.
113 """
114 if self.header:
115 self.writer.writerow(("benchmark", "reward", "walltime", "commandline"))
116 self.header = False
117 self.writer.writerow(
118 (state.benchmark, state.reward, state.walltime, state.commandline)
119 )
120 if flush:
121 self.f.flush()
122
123 def __enter__(self):
124 """Support with-statement for the writer."""
125 return self
126
127 def __exit__(self, *args):
128 """Support with-statement for the writer."""
129 self.f.close()
130
131
132 class CompilerEnvStateReader:
133 """Read states from a CSV file.
134
135 Example usage:
136
137 >>> with CompilerEnvStateReader(open("results.csv", "rb")) as reader:
138 ... for state in reader:
139 ... print(state)
140 """
141
142 def __init__(self, f: TextIO):
143 """Constructor.
144
145 :param f: The file to read.
146 """
147 self.f = f
148 self.reader = csv.reader(self.f)
149
150 def __iter__(self) -> Iterable[CompilerEnvState]:
151 """Read the states from the file."""
152 columns_in_order = ["benchmark", "reward", "walltime", "commandline"]
153 # Read the CSV and coerce the columns into the expected order.
154 for (
155 benchmark,
156 reward,
157 walltime,
158 commandline,
159 ) in self._iterate_columns_in_order(self.reader, columns_in_order):
160 yield CompilerEnvState(
161 benchmark=benchmark,
162 reward=None if reward == "" else float(reward),
163 walltime=0 if walltime == "" else float(walltime),
164 commandline=commandline,
165 )
166
167 @staticmethod
168 def _iterate_columns_in_order(
169 reader: csv.reader, columns: List[str]
170 ) -> Iterable[List[str]]:
171 """Read the input CSV and return each row in the given column order.
172
173 Supports CSVs both with and without a header. If no header, columns are
174 expected to be in the correct order. Else the header row is used to
175 determine column order.
176
177 Header row detection is case insensitive.
178
179 :param reader: The CSV file to read.
180
181 :param columns: A list of column names in the order that they are
182 expected.
183
184 :return: An iterator over rows.
185 """
186 try:
187 row = next(reader)
188 except StopIteration:
189 # Empty file.
190 return
191
192 if len(row) != len(columns):
193 raise ValueError(
194 f"Expected {len(columns)} columns in the first row of CSV: {truncate(row)}"
195 )
196
197 # Convert the maybe-header columns to lowercase for case-insensitive
198 # comparison.
199 maybe_header = [v.lower() for v in row]
200 if set(maybe_header) == set(columns):
201 # The first row matches the expected columns names, so use it to
202 # determine the column order.
203 column_order = [maybe_header.index(v) for v in columns]
204 yield from ([row[v] for v in column_order] for row in reader)
205 else:
206 # The first row isn't a header, so assume that all rows are in
207 # expected column order.
208 yield row
209 yield from reader
210
211 def __enter__(self):
212 """Support with-statement for the reader."""
213 return self
214
215 def __exit__(self, *args):
216 """Support with-statement for the reader."""
217 self.f.close()
218
219 @staticmethod
220 def read_paths(paths: Iterable[str]) -> Iterable[CompilerEnvState]:
221 """Read a states from a list of file paths.
222
223 Read states from stdin using a special path :code:`"-"`.
224
225 :param: A list of paths.
226
227 :return: A generator of compiler env states.
228 """
229 for path in paths:
230 if path == "-":
231 yield from iter(CompilerEnvStateReader(sys.stdin))
232 else:
233 with open(path) as f:
234 yield from iter(CompilerEnvStateReader(f))
235
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/compiler_gym/compiler_env_state.py b/compiler_gym/compiler_env_state.py
--- a/compiler_gym/compiler_env_state.py
+++ b/compiler_gym/compiler_env_state.py
@@ -4,9 +4,12 @@
# LICENSE file in the root directory of this source tree.
"""This module defines a class to represent a compiler environment state."""
import csv
+import re
import sys
+from io import StringIO
from typing import Iterable, List, Optional, TextIO
+import requests
from pydantic import BaseModel, Field, validator
from compiler_gym.datasets.uri import BenchmarkUri
@@ -23,10 +26,7 @@
benchmark: str = Field(
allow_mutation=False,
- examples=[
- "benchmark://cbench-v1/crc32",
- "generator://csmith-v0/0",
- ],
+ examples=["benchmark://cbench-v1/crc32", "generator://csmith-v0/0",],
)
"""The URI of the benchmark used for this episode."""
@@ -37,9 +37,7 @@
"""The walltime of the episode in seconds. Must be non-negative."""
reward: Optional[float] = Field(
- required=False,
- default=None,
- allow_mutation=True,
+ required=False, default=None, allow_mutation=True,
)
"""The cumulative reward for this episode. Optional."""
@@ -229,6 +227,16 @@
for path in paths:
if path == "-":
yield from iter(CompilerEnvStateReader(sys.stdin))
+ elif (
+ re.match(r"^(http|https)://[a-zA-Z0-9.-_/]+(\.csv)$", path) is not None
+ ):
+ response: requests.Response = requests.get(path)
+ if response.status_code == 200:
+ yield from iter(CompilerEnvStateReader(StringIO(response.text)))
+ else:
+ raise requests.exceptions.InvalidURL(
+ f"Url {path} content could not be obtained"
+ )
else:
with open(path) as f:
yield from iter(CompilerEnvStateReader(f))
|
{"golden_diff": "diff --git a/compiler_gym/compiler_env_state.py b/compiler_gym/compiler_env_state.py\n--- a/compiler_gym/compiler_env_state.py\n+++ b/compiler_gym/compiler_env_state.py\n@@ -4,9 +4,12 @@\n # LICENSE file in the root directory of this source tree.\n \"\"\"This module defines a class to represent a compiler environment state.\"\"\"\n import csv\n+import re\n import sys\n+from io import StringIO\n from typing import Iterable, List, Optional, TextIO\n \n+import requests\n from pydantic import BaseModel, Field, validator\n \n from compiler_gym.datasets.uri import BenchmarkUri\n@@ -23,10 +26,7 @@\n \n benchmark: str = Field(\n allow_mutation=False,\n- examples=[\n- \"benchmark://cbench-v1/crc32\",\n- \"generator://csmith-v0/0\",\n- ],\n+ examples=[\"benchmark://cbench-v1/crc32\", \"generator://csmith-v0/0\",],\n )\n \"\"\"The URI of the benchmark used for this episode.\"\"\"\n \n@@ -37,9 +37,7 @@\n \"\"\"The walltime of the episode in seconds. Must be non-negative.\"\"\"\n \n reward: Optional[float] = Field(\n- required=False,\n- default=None,\n- allow_mutation=True,\n+ required=False, default=None, allow_mutation=True,\n )\n \"\"\"The cumulative reward for this episode. Optional.\"\"\"\n \n@@ -229,6 +227,16 @@\n for path in paths:\n if path == \"-\":\n yield from iter(CompilerEnvStateReader(sys.stdin))\n+ elif (\n+ re.match(r\"^(http|https)://[a-zA-Z0-9.-_/]+(\\.csv)$\", path) is not None\n+ ):\n+ response: requests.Response = requests.get(path)\n+ if response.status_code == 200:\n+ yield from iter(CompilerEnvStateReader(StringIO(response.text)))\n+ else:\n+ raise requests.exceptions.InvalidURL(\n+ f\"Url {path} content could not be obtained\"\n+ )\n else:\n with open(path) as f:\n yield from iter(CompilerEnvStateReader(f))\n", "issue": "Add support for loading URLs to CompilerEnvStateReader.read_paths()\n## \ud83d\ude80 Feature\r\n\r\nExtend [CompilerEnvStateReader.read_paths()](https://github.com/facebookresearch/CompilerGym/blob/de07d4867e0bb0b47f6fa4bce5e262ea8f014c3e/tests/compiler_env_state_test.py#L212-L335) so that any combination of file path or URL can be loaded.\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates.\n#\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\"\"\"This module defines a class to represent a compiler environment state.\"\"\"\nimport csv\nimport sys\nfrom typing import Iterable, List, Optional, TextIO\n\nfrom pydantic import BaseModel, Field, validator\n\nfrom compiler_gym.datasets.uri import BenchmarkUri\nfrom compiler_gym.util.truncate import truncate\n\n\nclass CompilerEnvState(BaseModel):\n \"\"\"The representation of a compiler environment state.\n\n The state of an environment is defined as a benchmark and a sequence of\n actions that has been applied to it. For a given environment, the state\n contains the information required to reproduce the result.\n \"\"\"\n\n benchmark: str = Field(\n allow_mutation=False,\n examples=[\n \"benchmark://cbench-v1/crc32\",\n \"generator://csmith-v0/0\",\n ],\n )\n \"\"\"The URI of the benchmark used for this episode.\"\"\"\n\n commandline: str\n \"\"\"The list of actions that produced this state, as a commandline.\"\"\"\n\n walltime: float\n \"\"\"The walltime of the episode in seconds. Must be non-negative.\"\"\"\n\n reward: Optional[float] = Field(\n required=False,\n default=None,\n allow_mutation=True,\n )\n \"\"\"The cumulative reward for this episode. 
Optional.\"\"\"\n\n @validator(\"walltime\")\n def walltime_nonnegative(cls, v):\n if v is not None:\n assert v >= 0, \"Walltime cannot be negative\"\n return v\n\n @validator(\"benchmark\", pre=True)\n def validate_benchmark(cls, value):\n if isinstance(value, BenchmarkUri):\n return str(value)\n return value\n\n @property\n def has_reward(self) -> bool:\n \"\"\"Return whether the state has a reward value.\"\"\"\n return self.reward is not None\n\n def __eq__(self, rhs) -> bool:\n if not isinstance(rhs, CompilerEnvState):\n return False\n epsilon = 1e-5\n # Only compare reward if both states have it.\n if not (self.has_reward and rhs.has_reward):\n reward_equal = True\n else:\n reward_equal = abs(self.reward - rhs.reward) < epsilon\n # Note that walltime is excluded from equivalence checks as two states\n # are equivalent if they define the same point in the optimization space\n # irrespective of how long it took to get there.\n return (\n self.benchmark == rhs.benchmark\n and reward_equal\n and self.commandline == rhs.commandline\n )\n\n def __ne__(self, rhs) -> bool:\n return not self == rhs\n\n class Config:\n validate_assignment = True\n\n\nclass CompilerEnvStateWriter:\n \"\"\"Serialize compiler environment states to CSV.\n\n Example use:\n\n >>> with CompilerEnvStateWriter(open(\"results.csv\", \"wb\")) as writer:\n ... writer.write_state(env.state)\n \"\"\"\n\n def __init__(self, f: TextIO, header: bool = True):\n \"\"\"Constructor.\n\n :param f: The file to write to.\n :param header: Whether to include a header row.\n \"\"\"\n self.f = f\n self.writer = csv.writer(self.f, lineterminator=\"\\n\")\n self.header = header\n\n def write_state(self, state: CompilerEnvState, flush: bool = False) -> None:\n \"\"\"Write the state to file.\n\n :param state: A compiler environment state.\n\n :param flush: Write to file immediately.\n \"\"\"\n if self.header:\n self.writer.writerow((\"benchmark\", \"reward\", \"walltime\", \"commandline\"))\n self.header = False\n self.writer.writerow(\n (state.benchmark, state.reward, state.walltime, state.commandline)\n )\n if flush:\n self.f.flush()\n\n def __enter__(self):\n \"\"\"Support with-statement for the writer.\"\"\"\n return self\n\n def __exit__(self, *args):\n \"\"\"Support with-statement for the writer.\"\"\"\n self.f.close()\n\n\nclass CompilerEnvStateReader:\n \"\"\"Read states from a CSV file.\n\n Example usage:\n\n >>> with CompilerEnvStateReader(open(\"results.csv\", \"rb\")) as reader:\n ... for state in reader:\n ... print(state)\n \"\"\"\n\n def __init__(self, f: TextIO):\n \"\"\"Constructor.\n\n :param f: The file to read.\n \"\"\"\n self.f = f\n self.reader = csv.reader(self.f)\n\n def __iter__(self) -> Iterable[CompilerEnvState]:\n \"\"\"Read the states from the file.\"\"\"\n columns_in_order = [\"benchmark\", \"reward\", \"walltime\", \"commandline\"]\n # Read the CSV and coerce the columns into the expected order.\n for (\n benchmark,\n reward,\n walltime,\n commandline,\n ) in self._iterate_columns_in_order(self.reader, columns_in_order):\n yield CompilerEnvState(\n benchmark=benchmark,\n reward=None if reward == \"\" else float(reward),\n walltime=0 if walltime == \"\" else float(walltime),\n commandline=commandline,\n )\n\n @staticmethod\n def _iterate_columns_in_order(\n reader: csv.reader, columns: List[str]\n ) -> Iterable[List[str]]:\n \"\"\"Read the input CSV and return each row in the given column order.\n\n Supports CSVs both with and without a header. If no header, columns are\n expected to be in the correct order. 
Else the header row is used to\n determine column order.\n\n Header row detection is case insensitive.\n\n :param reader: The CSV file to read.\n\n :param columns: A list of column names in the order that they are\n expected.\n\n :return: An iterator over rows.\n \"\"\"\n try:\n row = next(reader)\n except StopIteration:\n # Empty file.\n return\n\n if len(row) != len(columns):\n raise ValueError(\n f\"Expected {len(columns)} columns in the first row of CSV: {truncate(row)}\"\n )\n\n # Convert the maybe-header columns to lowercase for case-insensitive\n # comparison.\n maybe_header = [v.lower() for v in row]\n if set(maybe_header) == set(columns):\n # The first row matches the expected columns names, so use it to\n # determine the column order.\n column_order = [maybe_header.index(v) for v in columns]\n yield from ([row[v] for v in column_order] for row in reader)\n else:\n # The first row isn't a header, so assume that all rows are in\n # expected column order.\n yield row\n yield from reader\n\n def __enter__(self):\n \"\"\"Support with-statement for the reader.\"\"\"\n return self\n\n def __exit__(self, *args):\n \"\"\"Support with-statement for the reader.\"\"\"\n self.f.close()\n\n @staticmethod\n def read_paths(paths: Iterable[str]) -> Iterable[CompilerEnvState]:\n \"\"\"Read a states from a list of file paths.\n\n Read states from stdin using a special path :code:`\"-\"`.\n\n :param: A list of paths.\n\n :return: A generator of compiler env states.\n \"\"\"\n for path in paths:\n if path == \"-\":\n yield from iter(CompilerEnvStateReader(sys.stdin))\n else:\n with open(path) as f:\n yield from iter(CompilerEnvStateReader(f))\n", "path": "compiler_gym/compiler_env_state.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates.\n#\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\"\"\"This module defines a class to represent a compiler environment state.\"\"\"\nimport csv\nimport re\nimport sys\nfrom io import StringIO\nfrom typing import Iterable, List, Optional, TextIO\n\nimport requests\nfrom pydantic import BaseModel, Field, validator\n\nfrom compiler_gym.datasets.uri import BenchmarkUri\nfrom compiler_gym.util.truncate import truncate\n\n\nclass CompilerEnvState(BaseModel):\n \"\"\"The representation of a compiler environment state.\n\n The state of an environment is defined as a benchmark and a sequence of\n actions that has been applied to it. For a given environment, the state\n contains the information required to reproduce the result.\n \"\"\"\n\n benchmark: str = Field(\n allow_mutation=False,\n examples=[\"benchmark://cbench-v1/crc32\", \"generator://csmith-v0/0\",],\n )\n \"\"\"The URI of the benchmark used for this episode.\"\"\"\n\n commandline: str\n \"\"\"The list of actions that produced this state, as a commandline.\"\"\"\n\n walltime: float\n \"\"\"The walltime of the episode in seconds. Must be non-negative.\"\"\"\n\n reward: Optional[float] = Field(\n required=False, default=None, allow_mutation=True,\n )\n \"\"\"The cumulative reward for this episode. 
Optional.\"\"\"\n\n @validator(\"walltime\")\n def walltime_nonnegative(cls, v):\n if v is not None:\n assert v >= 0, \"Walltime cannot be negative\"\n return v\n\n @validator(\"benchmark\", pre=True)\n def validate_benchmark(cls, value):\n if isinstance(value, BenchmarkUri):\n return str(value)\n return value\n\n @property\n def has_reward(self) -> bool:\n \"\"\"Return whether the state has a reward value.\"\"\"\n return self.reward is not None\n\n def __eq__(self, rhs) -> bool:\n if not isinstance(rhs, CompilerEnvState):\n return False\n epsilon = 1e-5\n # Only compare reward if both states have it.\n if not (self.has_reward and rhs.has_reward):\n reward_equal = True\n else:\n reward_equal = abs(self.reward - rhs.reward) < epsilon\n # Note that walltime is excluded from equivalence checks as two states\n # are equivalent if they define the same point in the optimization space\n # irrespective of how long it took to get there.\n return (\n self.benchmark == rhs.benchmark\n and reward_equal\n and self.commandline == rhs.commandline\n )\n\n def __ne__(self, rhs) -> bool:\n return not self == rhs\n\n class Config:\n validate_assignment = True\n\n\nclass CompilerEnvStateWriter:\n \"\"\"Serialize compiler environment states to CSV.\n\n Example use:\n\n >>> with CompilerEnvStateWriter(open(\"results.csv\", \"wb\")) as writer:\n ... writer.write_state(env.state)\n \"\"\"\n\n def __init__(self, f: TextIO, header: bool = True):\n \"\"\"Constructor.\n\n :param f: The file to write to.\n :param header: Whether to include a header row.\n \"\"\"\n self.f = f\n self.writer = csv.writer(self.f, lineterminator=\"\\n\")\n self.header = header\n\n def write_state(self, state: CompilerEnvState, flush: bool = False) -> None:\n \"\"\"Write the state to file.\n\n :param state: A compiler environment state.\n\n :param flush: Write to file immediately.\n \"\"\"\n if self.header:\n self.writer.writerow((\"benchmark\", \"reward\", \"walltime\", \"commandline\"))\n self.header = False\n self.writer.writerow(\n (state.benchmark, state.reward, state.walltime, state.commandline)\n )\n if flush:\n self.f.flush()\n\n def __enter__(self):\n \"\"\"Support with-statement for the writer.\"\"\"\n return self\n\n def __exit__(self, *args):\n \"\"\"Support with-statement for the writer.\"\"\"\n self.f.close()\n\n\nclass CompilerEnvStateReader:\n \"\"\"Read states from a CSV file.\n\n Example usage:\n\n >>> with CompilerEnvStateReader(open(\"results.csv\", \"rb\")) as reader:\n ... for state in reader:\n ... print(state)\n \"\"\"\n\n def __init__(self, f: TextIO):\n \"\"\"Constructor.\n\n :param f: The file to read.\n \"\"\"\n self.f = f\n self.reader = csv.reader(self.f)\n\n def __iter__(self) -> Iterable[CompilerEnvState]:\n \"\"\"Read the states from the file.\"\"\"\n columns_in_order = [\"benchmark\", \"reward\", \"walltime\", \"commandline\"]\n # Read the CSV and coerce the columns into the expected order.\n for (\n benchmark,\n reward,\n walltime,\n commandline,\n ) in self._iterate_columns_in_order(self.reader, columns_in_order):\n yield CompilerEnvState(\n benchmark=benchmark,\n reward=None if reward == \"\" else float(reward),\n walltime=0 if walltime == \"\" else float(walltime),\n commandline=commandline,\n )\n\n @staticmethod\n def _iterate_columns_in_order(\n reader: csv.reader, columns: List[str]\n ) -> Iterable[List[str]]:\n \"\"\"Read the input CSV and return each row in the given column order.\n\n Supports CSVs both with and without a header. If no header, columns are\n expected to be in the correct order. 
Else the header row is used to\n determine column order.\n\n Header row detection is case insensitive.\n\n :param reader: The CSV file to read.\n\n :param columns: A list of column names in the order that they are\n expected.\n\n :return: An iterator over rows.\n \"\"\"\n try:\n row = next(reader)\n except StopIteration:\n # Empty file.\n return\n\n if len(row) != len(columns):\n raise ValueError(\n f\"Expected {len(columns)} columns in the first row of CSV: {truncate(row)}\"\n )\n\n # Convert the maybe-header columns to lowercase for case-insensitive\n # comparison.\n maybe_header = [v.lower() for v in row]\n if set(maybe_header) == set(columns):\n # The first row matches the expected columns names, so use it to\n # determine the column order.\n column_order = [maybe_header.index(v) for v in columns]\n yield from ([row[v] for v in column_order] for row in reader)\n else:\n # The first row isn't a header, so assume that all rows are in\n # expected column order.\n yield row\n yield from reader\n\n def __enter__(self):\n \"\"\"Support with-statement for the reader.\"\"\"\n return self\n\n def __exit__(self, *args):\n \"\"\"Support with-statement for the reader.\"\"\"\n self.f.close()\n\n @staticmethod\n def read_paths(paths: Iterable[str]) -> Iterable[CompilerEnvState]:\n \"\"\"Read a states from a list of file paths.\n\n Read states from stdin using a special path :code:`\"-\"`.\n\n :param: A list of paths.\n\n :return: A generator of compiler env states.\n \"\"\"\n for path in paths:\n if path == \"-\":\n yield from iter(CompilerEnvStateReader(sys.stdin))\n elif (\n re.match(r\"^(http|https)://[a-zA-Z0-9.-_/]+(\\.csv)$\", path) is not None\n ):\n response: requests.Response = requests.get(path)\n if response.status_code == 200:\n yield from iter(CompilerEnvStateReader(StringIO(response.text)))\n else:\n raise requests.exceptions.InvalidURL(\n f\"Url {path} content could not be obtained\"\n )\n else:\n with open(path) as f:\n yield from iter(CompilerEnvStateReader(f))\n", "path": "compiler_gym/compiler_env_state.py"}]}
| 2,601 | 480 |
gh_patches_debug_26253
|
rasdani/github-patches
|
git_diff
|
streamlink__streamlink-5551
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
plugins.Pluto: Freezes at commercials and plays no audio on TV Shows
### Checklist
- [X] This is a [plugin issue](https://streamlink.github.io/plugins.html) and not [a different kind of issue](https://github.com/streamlink/streamlink/issues/new/choose)
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)
- [x] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)
### Streamlink version
Latest build from the master branch
### Description
Hello! The Pluto TV plugin has problems. The plugin plays fine, but it freezes at commercials, and on some TV shows that don't have commercials it plays no audio because it thinks it is at a commercial. This happens on all Pluto TV channels.
### Debug log
```text
C:\Windows\system32>streamlink -l debug https://pluto.tv/live-tv/forever-kids best
[cli][debug] OS: Windows 10
[cli][debug] Python: 3.11.5
[cli][debug] Streamlink: 6.1.0
[cli][debug] Dependencies:
[cli][debug] certifi: 2023.7.22
[cli][debug] isodate: 0.6.1
[cli][debug] lxml: 4.9.3
[cli][debug] pycountry: 22.3.5
[cli][debug] pycryptodome: 3.18.0
[cli][debug] PySocks: 1.7.1
[cli][debug] requests: 2.31.0
[cli][debug] trio: 0.22.2
[cli][debug] trio-websocket: 0.10.3
[cli][debug] typing-extensions: 4.7.1
[cli][debug] urllib3: 2.0.4
[cli][debug] websocket-client: 1.6.1
[cli][debug] Arguments:
[cli][debug] url=https://pluto.tv/live-tv/forever-kids
[cli][debug] stream=['best']
[cli][debug] --loglevel=debug
[cli][debug] --ffmpeg-ffmpeg=C:\Program Files (x86)\Streamlink\ffmpeg\ffmpeg.exe
[cli][info] Found matching plugin pluto for URL https://pluto.tv/live-tv/forever-kids
[plugins.pluto][debug] slug=forever-kids
[plugins.pluto][debug] app_version=7.7.0-18f7ab32608969ea5bcbce8d0e23b9d0e1b24717
[stream.hls][warning] Encountered a stream discontinuity. This is unsupported and will result in incoherent output data.
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/streamlink/plugins/pluto.py`
Content:
```
1 """
2 $description Live TV and video on-demand service owned by Paramount Streaming.
3 $url pluto.tv
4 $type live, vod
5 $metadata id
6 $metadata author
7 $metadata category
8 $metadata title
9 """
10
11 import logging
12 import re
13 from urllib.parse import parse_qs, urljoin
14 from uuid import uuid4
15
16 from streamlink.plugin import Plugin, pluginmatcher
17 from streamlink.plugin.api import validate
18 from streamlink.stream.hls import HLSStream, HLSStreamReader, HLSStreamWriter
19 from streamlink.utils.url import update_qsd
20
21
22 log = logging.getLogger(__name__)
23
24
25 class PlutoHLSStreamWriter(HLSStreamWriter):
26 ad_re = re.compile(r"_ad/creative/|dai\.google\.com|Pluto_TV_OandO/.*Bumper")
27
28 def should_filter_sequence(self, sequence):
29 return self.ad_re.search(sequence.segment.uri) is not None or super().should_filter_sequence(sequence)
30
31
32 class PlutoHLSStreamReader(HLSStreamReader):
33 __writer__ = PlutoHLSStreamWriter
34
35
36 class PlutoHLSStream(HLSStream):
37 __shortname__ = "hls-pluto"
38 __reader__ = PlutoHLSStreamReader
39
40
41 @pluginmatcher(re.compile(r"""
42 https?://(?:www\.)?pluto\.tv/(?:\w{2}/)?(?:
43 live-tv/(?P<slug_live>[^/]+)
44 |
45 on-demand/series/(?P<slug_series>[^/]+)(?:/season/\d+)?/episode/(?P<slug_episode>[^/]+)
46 |
47 on-demand/movies/(?P<slug_movies>[^/]+)
48 )/?$
49 """, re.VERBOSE))
50 class Pluto(Plugin):
51 def _get_api_data(self, kind, slug, slugfilter=None):
52 log.debug(f"slug={slug}")
53 app_version = self.session.http.get(self.url, schema=validate.Schema(
54 validate.parse_html(),
55 validate.xml_xpath_string(".//head/meta[@name='appVersion']/@content"),
56 validate.any(None, str),
57 ))
58 if not app_version:
59 return
60
61 log.debug(f"app_version={app_version}")
62
63 return self.session.http.get(
64 "https://boot.pluto.tv/v4/start",
65 params={
66 "appName": "web",
67 "appVersion": app_version,
68 "deviceVersion": "94.0.0",
69 "deviceModel": "web",
70 "deviceMake": "firefox",
71 "deviceType": "web",
72 "clientID": str(uuid4()),
73 "clientModelNumber": "1.0",
74 kind: slug,
75 },
76 schema=validate.Schema(
77 validate.parse_json(), {
78 "servers": {
79 "stitcher": validate.url(),
80 },
81 validate.optional("EPG"): [{
82 "name": str,
83 "id": str,
84 "slug": str,
85 "stitched": {
86 "path": str,
87 },
88 }],
89 validate.optional("VOD"): [{
90 "name": str,
91 "id": str,
92 "slug": str,
93 "genre": str,
94 "stitched": {
95 "path": str,
96 },
97 validate.optional("seasons"): [{
98 "episodes": validate.all(
99 [{
100 "name": str,
101 "_id": str,
102 "slug": str,
103 "stitched": {
104 "path": str,
105 },
106 }],
107 validate.filter(lambda k: slugfilter and k["slug"] == slugfilter),
108 ),
109 }],
110 }],
111 "sessionToken": str,
112 "stitcherParams": str,
113 },
114 ),
115 )
116
117 def _get_playlist(self, host, path, params, token):
118 qs = parse_qs(params)
119 qs["jwt"] = token
120 yield from PlutoHLSStream.parse_variant_playlist(self.session, update_qsd(urljoin(host, path), qs)).items()
121
122 @staticmethod
123 def _get_media_data(data, key, slug):
124 media = data.get(key)
125 if media and media[0]["slug"] == slug:
126 return media[0]
127
128 def _get_streams(self):
129 m = self.match.groupdict()
130 if m["slug_live"]:
131 data = self._get_api_data("channelSlug", m["slug_live"])
132 media = self._get_media_data(data, "EPG", m["slug_live"])
133 if not media:
134 return
135
136 self.id = media["id"]
137 self.title = media["name"]
138 path = media["stitched"]["path"]
139
140 elif m["slug_series"] and m["slug_episode"]:
141 data = self._get_api_data("episodeSlugs", m["slug_series"], slugfilter=m["slug_episode"])
142 media = self._get_media_data(data, "VOD", m["slug_series"])
143 if not media or "seasons" not in media:
144 return
145
146 for season in media["seasons"]:
147 if season["episodes"]:
148 episode = season["episodes"][0]
149 if episode["slug"] == m["slug_episode"]:
150 break
151 else:
152 return
153
154 self.author = media["name"]
155 self.category = media["genre"]
156 self.id = episode["_id"]
157 self.title = episode["name"]
158 path = episode["stitched"]["path"]
159
160 elif m["slug_movies"]:
161 data = self._get_api_data("episodeSlugs", m["slug_movies"])
162 media = self._get_media_data(data, "VOD", m["slug_movies"])
163 if not media:
164 return
165
166 self.category = media["genre"]
167 self.id = media["id"]
168 self.title = media["name"]
169 path = media["stitched"]["path"]
170
171 else:
172 return
173
174 log.trace(f"data={data!r}")
175 log.debug(f"path={path}")
176
177 return self._get_playlist(
178 data["servers"]["stitcher"],
179 path,
180 data["stitcherParams"],
181 data["sessionToken"],
182 )
183
184
185 __plugin__ = Pluto
186
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/streamlink/plugins/pluto.py b/src/streamlink/plugins/pluto.py
--- a/src/streamlink/plugins/pluto.py
+++ b/src/streamlink/plugins/pluto.py
@@ -10,7 +10,7 @@
import logging
import re
-from urllib.parse import parse_qs, urljoin
+from urllib.parse import parse_qsl, urljoin
from uuid import uuid4
from streamlink.plugin import Plugin, pluginmatcher
@@ -23,7 +23,7 @@
class PlutoHLSStreamWriter(HLSStreamWriter):
- ad_re = re.compile(r"_ad/creative/|dai\.google\.com|Pluto_TV_OandO/.*Bumper")
+ ad_re = re.compile(r"_ad/creative/|dai\.google\.com|Pluto_TV_OandO/.*(Bumper|plutotv_filler)")
def should_filter_sequence(self, sequence):
return self.ad_re.search(sequence.segment.uri) is not None or super().should_filter_sequence(sequence)
@@ -115,9 +115,13 @@
)
def _get_playlist(self, host, path, params, token):
- qs = parse_qs(params)
- qs["jwt"] = token
- yield from PlutoHLSStream.parse_variant_playlist(self.session, update_qsd(urljoin(host, path), qs)).items()
+ qsd = dict(parse_qsl(params))
+ qsd["jwt"] = token
+
+ url = urljoin(host, path)
+ url = update_qsd(url, qsd)
+
+ return PlutoHLSStream.parse_variant_playlist(self.session, url)
@staticmethod
def _get_media_data(data, key, slug):
|
{"golden_diff": "diff --git a/src/streamlink/plugins/pluto.py b/src/streamlink/plugins/pluto.py\n--- a/src/streamlink/plugins/pluto.py\n+++ b/src/streamlink/plugins/pluto.py\n@@ -10,7 +10,7 @@\n \n import logging\n import re\n-from urllib.parse import parse_qs, urljoin\n+from urllib.parse import parse_qsl, urljoin\n from uuid import uuid4\n \n from streamlink.plugin import Plugin, pluginmatcher\n@@ -23,7 +23,7 @@\n \n \n class PlutoHLSStreamWriter(HLSStreamWriter):\n- ad_re = re.compile(r\"_ad/creative/|dai\\.google\\.com|Pluto_TV_OandO/.*Bumper\")\n+ ad_re = re.compile(r\"_ad/creative/|dai\\.google\\.com|Pluto_TV_OandO/.*(Bumper|plutotv_filler)\")\n \n def should_filter_sequence(self, sequence):\n return self.ad_re.search(sequence.segment.uri) is not None or super().should_filter_sequence(sequence)\n@@ -115,9 +115,13 @@\n )\n \n def _get_playlist(self, host, path, params, token):\n- qs = parse_qs(params)\n- qs[\"jwt\"] = token\n- yield from PlutoHLSStream.parse_variant_playlist(self.session, update_qsd(urljoin(host, path), qs)).items()\n+ qsd = dict(parse_qsl(params))\n+ qsd[\"jwt\"] = token\n+\n+ url = urljoin(host, path)\n+ url = update_qsd(url, qsd)\n+\n+ return PlutoHLSStream.parse_variant_playlist(self.session, url)\n \n @staticmethod\n def _get_media_data(data, key, slug):\n", "issue": "plugins.Pluto: Freezes at commercials and plays no audio on TV Shows\n### Checklist\n\n- [X] This is a [plugin issue](https://streamlink.github.io/plugins.html) and not [a different kind of issue](https://github.com/streamlink/streamlink/issues/new/choose)\n- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)\n- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)\n- [x] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)\n\n### Streamlink version\n\nLatest build from the master branch\n\n### Description\n\nHello! Pluto TV plugin have problems. The plugin plays fine, but it freezes at commercials and some TV shows don't have commercials, it plays no audio because it thinks it at a commercial. This happens on all Pluto TV channels.\n\n### Debug log\n\n```text\nC:\\Windows\\system32>streamlink -l debug https://pluto.tv/live-tv/forever-kids best\r\n[cli][debug] OS: Windows 10\r\n[cli][debug] Python: 3.11.5\r\n[cli][debug] Streamlink: 6.1.0\r\n[cli][debug] Dependencies:\r\n[cli][debug] certifi: 2023.7.22\r\n[cli][debug] isodate: 0.6.1\r\n[cli][debug] lxml: 4.9.3\r\n[cli][debug] pycountry: 22.3.5\r\n[cli][debug] pycryptodome: 3.18.0\r\n[cli][debug] PySocks: 1.7.1\r\n[cli][debug] requests: 2.31.0\r\n[cli][debug] trio: 0.22.2\r\n[cli][debug] trio-websocket: 0.10.3\r\n[cli][debug] typing-extensions: 4.7.1\r\n[cli][debug] urllib3: 2.0.4\r\n[cli][debug] websocket-client: 1.6.1\r\n[cli][debug] Arguments:\r\n[cli][debug] url=https://pluto.tv/live-tv/forever-kids\r\n[cli][debug] stream=['best']\r\n[cli][debug] --loglevel=debug\r\n[cli][debug] --ffmpeg-ffmpeg=C:\\Program Files (x86)\\Streamlink\\ffmpeg\\ffmpeg.exe\r\n[cli][info] Found matching plugin pluto for URL https://pluto.tv/live-tv/forever-kids\r\n[plugins.pluto][debug] slug=forever-kids\r\n[plugins.pluto][debug] app_version=7.7.0-18f7ab32608969ea5bcbce8d0e23b9d0e1b24717\r\n[stream.hls][warning] Encountered a stream discontinuity. 
This is unsupported and will result in incoherent output data.\n```\n\n", "before_files": [{"content": "\"\"\"\n$description Live TV and video on-demand service owned by Paramount Streaming.\n$url pluto.tv\n$type live, vod\n$metadata id\n$metadata author\n$metadata category\n$metadata title\n\"\"\"\n\nimport logging\nimport re\nfrom urllib.parse import parse_qs, urljoin\nfrom uuid import uuid4\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.hls import HLSStream, HLSStreamReader, HLSStreamWriter\nfrom streamlink.utils.url import update_qsd\n\n\nlog = logging.getLogger(__name__)\n\n\nclass PlutoHLSStreamWriter(HLSStreamWriter):\n ad_re = re.compile(r\"_ad/creative/|dai\\.google\\.com|Pluto_TV_OandO/.*Bumper\")\n\n def should_filter_sequence(self, sequence):\n return self.ad_re.search(sequence.segment.uri) is not None or super().should_filter_sequence(sequence)\n\n\nclass PlutoHLSStreamReader(HLSStreamReader):\n __writer__ = PlutoHLSStreamWriter\n\n\nclass PlutoHLSStream(HLSStream):\n __shortname__ = \"hls-pluto\"\n __reader__ = PlutoHLSStreamReader\n\n\n@pluginmatcher(re.compile(r\"\"\"\n https?://(?:www\\.)?pluto\\.tv/(?:\\w{2}/)?(?:\n live-tv/(?P<slug_live>[^/]+)\n |\n on-demand/series/(?P<slug_series>[^/]+)(?:/season/\\d+)?/episode/(?P<slug_episode>[^/]+)\n |\n on-demand/movies/(?P<slug_movies>[^/]+)\n )/?$\n\"\"\", re.VERBOSE))\nclass Pluto(Plugin):\n def _get_api_data(self, kind, slug, slugfilter=None):\n log.debug(f\"slug={slug}\")\n app_version = self.session.http.get(self.url, schema=validate.Schema(\n validate.parse_html(),\n validate.xml_xpath_string(\".//head/meta[@name='appVersion']/@content\"),\n validate.any(None, str),\n ))\n if not app_version:\n return\n\n log.debug(f\"app_version={app_version}\")\n\n return self.session.http.get(\n \"https://boot.pluto.tv/v4/start\",\n params={\n \"appName\": \"web\",\n \"appVersion\": app_version,\n \"deviceVersion\": \"94.0.0\",\n \"deviceModel\": \"web\",\n \"deviceMake\": \"firefox\",\n \"deviceType\": \"web\",\n \"clientID\": str(uuid4()),\n \"clientModelNumber\": \"1.0\",\n kind: slug,\n },\n schema=validate.Schema(\n validate.parse_json(), {\n \"servers\": {\n \"stitcher\": validate.url(),\n },\n validate.optional(\"EPG\"): [{\n \"name\": str,\n \"id\": str,\n \"slug\": str,\n \"stitched\": {\n \"path\": str,\n },\n }],\n validate.optional(\"VOD\"): [{\n \"name\": str,\n \"id\": str,\n \"slug\": str,\n \"genre\": str,\n \"stitched\": {\n \"path\": str,\n },\n validate.optional(\"seasons\"): [{\n \"episodes\": validate.all(\n [{\n \"name\": str,\n \"_id\": str,\n \"slug\": str,\n \"stitched\": {\n \"path\": str,\n },\n }],\n validate.filter(lambda k: slugfilter and k[\"slug\"] == slugfilter),\n ),\n }],\n }],\n \"sessionToken\": str,\n \"stitcherParams\": str,\n },\n ),\n )\n\n def _get_playlist(self, host, path, params, token):\n qs = parse_qs(params)\n qs[\"jwt\"] = token\n yield from PlutoHLSStream.parse_variant_playlist(self.session, update_qsd(urljoin(host, path), qs)).items()\n\n @staticmethod\n def _get_media_data(data, key, slug):\n media = data.get(key)\n if media and media[0][\"slug\"] == slug:\n return media[0]\n\n def _get_streams(self):\n m = self.match.groupdict()\n if m[\"slug_live\"]:\n data = self._get_api_data(\"channelSlug\", m[\"slug_live\"])\n media = self._get_media_data(data, \"EPG\", m[\"slug_live\"])\n if not media:\n return\n\n self.id = media[\"id\"]\n self.title = media[\"name\"]\n path = media[\"stitched\"][\"path\"]\n\n elif 
m[\"slug_series\"] and m[\"slug_episode\"]:\n data = self._get_api_data(\"episodeSlugs\", m[\"slug_series\"], slugfilter=m[\"slug_episode\"])\n media = self._get_media_data(data, \"VOD\", m[\"slug_series\"])\n if not media or \"seasons\" not in media:\n return\n\n for season in media[\"seasons\"]:\n if season[\"episodes\"]:\n episode = season[\"episodes\"][0]\n if episode[\"slug\"] == m[\"slug_episode\"]:\n break\n else:\n return\n\n self.author = media[\"name\"]\n self.category = media[\"genre\"]\n self.id = episode[\"_id\"]\n self.title = episode[\"name\"]\n path = episode[\"stitched\"][\"path\"]\n\n elif m[\"slug_movies\"]:\n data = self._get_api_data(\"episodeSlugs\", m[\"slug_movies\"])\n media = self._get_media_data(data, \"VOD\", m[\"slug_movies\"])\n if not media:\n return\n\n self.category = media[\"genre\"]\n self.id = media[\"id\"]\n self.title = media[\"name\"]\n path = media[\"stitched\"][\"path\"]\n\n else:\n return\n\n log.trace(f\"data={data!r}\")\n log.debug(f\"path={path}\")\n\n return self._get_playlist(\n data[\"servers\"][\"stitcher\"],\n path,\n data[\"stitcherParams\"],\n data[\"sessionToken\"],\n )\n\n\n__plugin__ = Pluto\n", "path": "src/streamlink/plugins/pluto.py"}], "after_files": [{"content": "\"\"\"\n$description Live TV and video on-demand service owned by Paramount Streaming.\n$url pluto.tv\n$type live, vod\n$metadata id\n$metadata author\n$metadata category\n$metadata title\n\"\"\"\n\nimport logging\nimport re\nfrom urllib.parse import parse_qsl, urljoin\nfrom uuid import uuid4\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.hls import HLSStream, HLSStreamReader, HLSStreamWriter\nfrom streamlink.utils.url import update_qsd\n\n\nlog = logging.getLogger(__name__)\n\n\nclass PlutoHLSStreamWriter(HLSStreamWriter):\n ad_re = re.compile(r\"_ad/creative/|dai\\.google\\.com|Pluto_TV_OandO/.*(Bumper|plutotv_filler)\")\n\n def should_filter_sequence(self, sequence):\n return self.ad_re.search(sequence.segment.uri) is not None or super().should_filter_sequence(sequence)\n\n\nclass PlutoHLSStreamReader(HLSStreamReader):\n __writer__ = PlutoHLSStreamWriter\n\n\nclass PlutoHLSStream(HLSStream):\n __shortname__ = \"hls-pluto\"\n __reader__ = PlutoHLSStreamReader\n\n\n@pluginmatcher(re.compile(r\"\"\"\n https?://(?:www\\.)?pluto\\.tv/(?:\\w{2}/)?(?:\n live-tv/(?P<slug_live>[^/]+)\n |\n on-demand/series/(?P<slug_series>[^/]+)(?:/season/\\d+)?/episode/(?P<slug_episode>[^/]+)\n |\n on-demand/movies/(?P<slug_movies>[^/]+)\n )/?$\n\"\"\", re.VERBOSE))\nclass Pluto(Plugin):\n def _get_api_data(self, kind, slug, slugfilter=None):\n log.debug(f\"slug={slug}\")\n app_version = self.session.http.get(self.url, schema=validate.Schema(\n validate.parse_html(),\n validate.xml_xpath_string(\".//head/meta[@name='appVersion']/@content\"),\n validate.any(None, str),\n ))\n if not app_version:\n return\n\n log.debug(f\"app_version={app_version}\")\n\n return self.session.http.get(\n \"https://boot.pluto.tv/v4/start\",\n params={\n \"appName\": \"web\",\n \"appVersion\": app_version,\n \"deviceVersion\": \"94.0.0\",\n \"deviceModel\": \"web\",\n \"deviceMake\": \"firefox\",\n \"deviceType\": \"web\",\n \"clientID\": str(uuid4()),\n \"clientModelNumber\": \"1.0\",\n kind: slug,\n },\n schema=validate.Schema(\n validate.parse_json(), {\n \"servers\": {\n \"stitcher\": validate.url(),\n },\n validate.optional(\"EPG\"): [{\n \"name\": str,\n \"id\": str,\n \"slug\": str,\n \"stitched\": {\n \"path\": str,\n },\n }],\n 
validate.optional(\"VOD\"): [{\n \"name\": str,\n \"id\": str,\n \"slug\": str,\n \"genre\": str,\n \"stitched\": {\n \"path\": str,\n },\n validate.optional(\"seasons\"): [{\n \"episodes\": validate.all(\n [{\n \"name\": str,\n \"_id\": str,\n \"slug\": str,\n \"stitched\": {\n \"path\": str,\n },\n }],\n validate.filter(lambda k: slugfilter and k[\"slug\"] == slugfilter),\n ),\n }],\n }],\n \"sessionToken\": str,\n \"stitcherParams\": str,\n },\n ),\n )\n\n def _get_playlist(self, host, path, params, token):\n qsd = dict(parse_qsl(params))\n qsd[\"jwt\"] = token\n\n url = urljoin(host, path)\n url = update_qsd(url, qsd)\n\n return PlutoHLSStream.parse_variant_playlist(self.session, url)\n\n @staticmethod\n def _get_media_data(data, key, slug):\n media = data.get(key)\n if media and media[0][\"slug\"] == slug:\n return media[0]\n\n def _get_streams(self):\n m = self.match.groupdict()\n if m[\"slug_live\"]:\n data = self._get_api_data(\"channelSlug\", m[\"slug_live\"])\n media = self._get_media_data(data, \"EPG\", m[\"slug_live\"])\n if not media:\n return\n\n self.id = media[\"id\"]\n self.title = media[\"name\"]\n path = media[\"stitched\"][\"path\"]\n\n elif m[\"slug_series\"] and m[\"slug_episode\"]:\n data = self._get_api_data(\"episodeSlugs\", m[\"slug_series\"], slugfilter=m[\"slug_episode\"])\n media = self._get_media_data(data, \"VOD\", m[\"slug_series\"])\n if not media or \"seasons\" not in media:\n return\n\n for season in media[\"seasons\"]:\n if season[\"episodes\"]:\n episode = season[\"episodes\"][0]\n if episode[\"slug\"] == m[\"slug_episode\"]:\n break\n else:\n return\n\n self.author = media[\"name\"]\n self.category = media[\"genre\"]\n self.id = episode[\"_id\"]\n self.title = episode[\"name\"]\n path = episode[\"stitched\"][\"path\"]\n\n elif m[\"slug_movies\"]:\n data = self._get_api_data(\"episodeSlugs\", m[\"slug_movies\"])\n media = self._get_media_data(data, \"VOD\", m[\"slug_movies\"])\n if not media:\n return\n\n self.category = media[\"genre\"]\n self.id = media[\"id\"]\n self.title = media[\"name\"]\n path = media[\"stitched\"][\"path\"]\n\n else:\n return\n\n log.trace(f\"data={data!r}\")\n log.debug(f\"path={path}\")\n\n return self._get_playlist(\n data[\"servers\"][\"stitcher\"],\n path,\n data[\"stitcherParams\"],\n data[\"sessionToken\"],\n )\n\n\n__plugin__ = Pluto\n", "path": "src/streamlink/plugins/pluto.py"}]}
| 2,725 | 375 |
gh_patches_debug_22747
|
rasdani/github-patches
|
git_diff
|
scrapy__scrapy-2275
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Idea: warn users when trying to use TextResponse functionality with plain Response
Currently, if we try to use TextResponse functionality like response.text or css()/xpath() methods with a plain Response (e.g. in case of binary content), we get an AttributeError:
```
>>> response.css
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-1-7d6e256164d4> in <module>()
----> 1 response.css
AttributeError: 'Response' object has no attribute 'css'
>>> response.xpath
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-2-4f61f6e9fc6e> in <module>()
----> 1 response.xpath
AttributeError: 'Response' object has no attribute 'xpath'
>>> response.text
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-3-be6a4a00df5e> in <module>()
----> 1 response.text
AttributeError: 'Response' object has no attribute 'text'
```
Would it make sense to add a few methods/properties to explain what's going on for new users?
I was thinking that instead of an AttributeError, a better behavior could be a ValueError with a message giving a bit more context.
So, in plain `Response`, we could have:
```
def css(self, *args, **kw):
raise ValueError('Response content is not text')
def xpath(self, *args, **kw):
raise ValueError('Response content is not text')
@property
def text(self, *args, **kw):
raise ValueError('Response content is not text')
```
This would be nice, because we'd have to explain fewer things when teaching people about responses and also about using `.css` and `.xpath` methods.
What do you think?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scrapy/http/response/__init__.py`
Content:
```
1 """
2 This module implements the Response class which is used to represent HTTP
3 responses in Scrapy.
4
5 See documentation in docs/topics/request-response.rst
6 """
7 from six.moves.urllib.parse import urljoin
8
9 from scrapy.http.headers import Headers
10 from scrapy.utils.trackref import object_ref
11 from scrapy.http.common import obsolete_setter
12
13 class Response(object_ref):
14
15 def __init__(self, url, status=200, headers=None, body=b'', flags=None, request=None):
16 self.headers = Headers(headers or {})
17 self.status = int(status)
18 self._set_body(body)
19 self._set_url(url)
20 self.request = request
21 self.flags = [] if flags is None else list(flags)
22
23 @property
24 def meta(self):
25 try:
26 return self.request.meta
27 except AttributeError:
28 raise AttributeError(
29 "Response.meta not available, this response "
30 "is not tied to any request"
31 )
32
33 def _get_url(self):
34 return self._url
35
36 def _set_url(self, url):
37 if isinstance(url, str):
38 self._url = url
39 else:
40 raise TypeError('%s url must be str, got %s:' % (type(self).__name__,
41 type(url).__name__))
42
43 url = property(_get_url, obsolete_setter(_set_url, 'url'))
44
45 def _get_body(self):
46 return self._body
47
48 def _set_body(self, body):
49 if body is None:
50 self._body = b''
51 elif not isinstance(body, bytes):
52 raise TypeError(
53 "Response body must be bytes. "
54 "If you want to pass unicode body use TextResponse "
55 "or HtmlResponse.")
56 else:
57 self._body = body
58
59 body = property(_get_body, obsolete_setter(_set_body, 'body'))
60
61 def __str__(self):
62 return "<%d %s>" % (self.status, self.url)
63
64 __repr__ = __str__
65
66 def copy(self):
67 """Return a copy of this Response"""
68 return self.replace()
69
70 def replace(self, *args, **kwargs):
71 """Create a new Response with the same attributes except for those
72 given new values.
73 """
74 for x in ['url', 'status', 'headers', 'body', 'request', 'flags']:
75 kwargs.setdefault(x, getattr(self, x))
76 cls = kwargs.pop('cls', self.__class__)
77 return cls(*args, **kwargs)
78
79 def urljoin(self, url):
80 """Join this Response's url with a possible relative url to form an
81 absolute interpretation of the latter."""
82 return urljoin(self.url, url)
83
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/scrapy/http/response/__init__.py b/scrapy/http/response/__init__.py
--- a/scrapy/http/response/__init__.py
+++ b/scrapy/http/response/__init__.py
@@ -9,6 +9,8 @@
from scrapy.http.headers import Headers
from scrapy.utils.trackref import object_ref
from scrapy.http.common import obsolete_setter
+from scrapy.exceptions import NotSupported
+
class Response(object_ref):
@@ -80,3 +82,22 @@
"""Join this Response's url with a possible relative url to form an
absolute interpretation of the latter."""
return urljoin(self.url, url)
+
+ @property
+ def text(self):
+ """For subclasses of TextResponse, this will return the body
+ as text (unicode object in Python 2 and str in Python 3)
+ """
+ raise AttributeError("Response content isn't text")
+
+ def css(self, *a, **kw):
+ """Shortcut method implemented only by responses whose content
+ is text (subclasses of TextResponse).
+ """
+ raise NotSupported("Response content isn't text")
+
+ def xpath(self, *a, **kw):
+ """Shortcut method implemented only by responses whose content
+ is text (subclasses of TextResponse).
+ """
+ raise NotSupported("Response content isn't text")
|
{"golden_diff": "diff --git a/scrapy/http/response/__init__.py b/scrapy/http/response/__init__.py\n--- a/scrapy/http/response/__init__.py\n+++ b/scrapy/http/response/__init__.py\n@@ -9,6 +9,8 @@\n from scrapy.http.headers import Headers\n from scrapy.utils.trackref import object_ref\n from scrapy.http.common import obsolete_setter\n+from scrapy.exceptions import NotSupported\n+\n \n class Response(object_ref):\n \n@@ -80,3 +82,22 @@\n \"\"\"Join this Response's url with a possible relative url to form an\n absolute interpretation of the latter.\"\"\"\n return urljoin(self.url, url)\n+\n+ @property\n+ def text(self):\n+ \"\"\"For subclasses of TextResponse, this will return the body\n+ as text (unicode object in Python 2 and str in Python 3)\n+ \"\"\"\n+ raise AttributeError(\"Response content isn't text\")\n+\n+ def css(self, *a, **kw):\n+ \"\"\"Shortcut method implemented only by responses whose content\n+ is text (subclasses of TextResponse).\n+ \"\"\"\n+ raise NotSupported(\"Response content isn't text\")\n+\n+ def xpath(self, *a, **kw):\n+ \"\"\"Shortcut method implemented only by responses whose content\n+ is text (subclasses of TextResponse).\n+ \"\"\"\n+ raise NotSupported(\"Response content isn't text\")\n", "issue": "Idea: warn users when trying to use TextResponse functionality with plain Response\nCurrently, if we try to use TextResponse functionality like response.text or css()/xpath() methods with a plain Response (e.g. in case of binary content), we get an AttributeError:\n\n```\n>>> response.css\n---------------------------------------------------------------------------\nAttributeError Traceback (most recent call last)\n<ipython-input-1-7d6e256164d4> in <module>()\n----> 1 response.css\n\nAttributeError: 'Response' object has no attribute 'css'\n>>> response.xpath\n---------------------------------------------------------------------------\nAttributeError Traceback (most recent call last)\n<ipython-input-2-4f61f6e9fc6e> in <module>()\n----> 1 response.xpath\n\nAttributeError: 'Response' object has no attribute 'xpath'\n>>> response.text\n---------------------------------------------------------------------------\nAttributeError Traceback (most recent call last)\n<ipython-input-3-be6a4a00df5e> in <module>()\n----> 1 response.text\n\nAttributeError: 'Response' object has no attribute 'text'\n```\n\nWould it make sense to add a few methods/properties to explain what's going on for new users?\n\nI was thinking instead of AttributeError, a better behavior could be a ValueError with a message giving a bit more context.\n\nSo, in plain `Response`, we could have:\n\n```\ndef css(self, *args, **kw):\n raise ValueError('Response content is not text')\n\ndef xpath(self, *args, **kw):\n raise ValueError('Response content is not text')\n\n@property\ndef text(self, *args, **kw):\n raise ValueError('Response content is not text')\n```\n\nThis would be nice, because we'd had to explain fewer things when teaching people about responses and also about using `.css` and `.xpath` methods.\n\nWhat do you think?\n\n", "before_files": [{"content": "\"\"\"\nThis module implements the Response class which is used to represent HTTP\nresponses in Scrapy.\n\nSee documentation in docs/topics/request-response.rst\n\"\"\"\nfrom six.moves.urllib.parse import urljoin\n\nfrom scrapy.http.headers import Headers\nfrom scrapy.utils.trackref import object_ref\nfrom scrapy.http.common import obsolete_setter\n\nclass Response(object_ref):\n\n def __init__(self, url, status=200, headers=None, body=b'', flags=None, 
request=None):\n self.headers = Headers(headers or {})\n self.status = int(status)\n self._set_body(body)\n self._set_url(url)\n self.request = request\n self.flags = [] if flags is None else list(flags)\n\n @property\n def meta(self):\n try:\n return self.request.meta\n except AttributeError:\n raise AttributeError(\n \"Response.meta not available, this response \"\n \"is not tied to any request\"\n )\n\n def _get_url(self):\n return self._url\n\n def _set_url(self, url):\n if isinstance(url, str):\n self._url = url\n else:\n raise TypeError('%s url must be str, got %s:' % (type(self).__name__,\n type(url).__name__))\n\n url = property(_get_url, obsolete_setter(_set_url, 'url'))\n\n def _get_body(self):\n return self._body\n\n def _set_body(self, body):\n if body is None:\n self._body = b''\n elif not isinstance(body, bytes):\n raise TypeError(\n \"Response body must be bytes. \"\n \"If you want to pass unicode body use TextResponse \"\n \"or HtmlResponse.\")\n else:\n self._body = body\n\n body = property(_get_body, obsolete_setter(_set_body, 'body'))\n\n def __str__(self):\n return \"<%d %s>\" % (self.status, self.url)\n\n __repr__ = __str__\n\n def copy(self):\n \"\"\"Return a copy of this Response\"\"\"\n return self.replace()\n\n def replace(self, *args, **kwargs):\n \"\"\"Create a new Response with the same attributes except for those\n given new values.\n \"\"\"\n for x in ['url', 'status', 'headers', 'body', 'request', 'flags']:\n kwargs.setdefault(x, getattr(self, x))\n cls = kwargs.pop('cls', self.__class__)\n return cls(*args, **kwargs)\n\n def urljoin(self, url):\n \"\"\"Join this Response's url with a possible relative url to form an\n absolute interpretation of the latter.\"\"\"\n return urljoin(self.url, url)\n", "path": "scrapy/http/response/__init__.py"}], "after_files": [{"content": "\"\"\"\nThis module implements the Response class which is used to represent HTTP\nresponses in Scrapy.\n\nSee documentation in docs/topics/request-response.rst\n\"\"\"\nfrom six.moves.urllib.parse import urljoin\n\nfrom scrapy.http.headers import Headers\nfrom scrapy.utils.trackref import object_ref\nfrom scrapy.http.common import obsolete_setter\nfrom scrapy.exceptions import NotSupported\n\n\nclass Response(object_ref):\n\n def __init__(self, url, status=200, headers=None, body=b'', flags=None, request=None):\n self.headers = Headers(headers or {})\n self.status = int(status)\n self._set_body(body)\n self._set_url(url)\n self.request = request\n self.flags = [] if flags is None else list(flags)\n\n @property\n def meta(self):\n try:\n return self.request.meta\n except AttributeError:\n raise AttributeError(\n \"Response.meta not available, this response \"\n \"is not tied to any request\"\n )\n\n def _get_url(self):\n return self._url\n\n def _set_url(self, url):\n if isinstance(url, str):\n self._url = url\n else:\n raise TypeError('%s url must be str, got %s:' % (type(self).__name__,\n type(url).__name__))\n\n url = property(_get_url, obsolete_setter(_set_url, 'url'))\n\n def _get_body(self):\n return self._body\n\n def _set_body(self, body):\n if body is None:\n self._body = b''\n elif not isinstance(body, bytes):\n raise TypeError(\n \"Response body must be bytes. 
\"\n \"If you want to pass unicode body use TextResponse \"\n \"or HtmlResponse.\")\n else:\n self._body = body\n\n body = property(_get_body, obsolete_setter(_set_body, 'body'))\n\n def __str__(self):\n return \"<%d %s>\" % (self.status, self.url)\n\n __repr__ = __str__\n\n def copy(self):\n \"\"\"Return a copy of this Response\"\"\"\n return self.replace()\n\n def replace(self, *args, **kwargs):\n \"\"\"Create a new Response with the same attributes except for those\n given new values.\n \"\"\"\n for x in ['url', 'status', 'headers', 'body', 'request', 'flags']:\n kwargs.setdefault(x, getattr(self, x))\n cls = kwargs.pop('cls', self.__class__)\n return cls(*args, **kwargs)\n\n def urljoin(self, url):\n \"\"\"Join this Response's url with a possible relative url to form an\n absolute interpretation of the latter.\"\"\"\n return urljoin(self.url, url)\n\n @property\n def text(self):\n \"\"\"For subclasses of TextResponse, this will return the body\n as text (unicode object in Python 2 and str in Python 3)\n \"\"\"\n raise AttributeError(\"Response content isn't text\")\n\n def css(self, *a, **kw):\n \"\"\"Shortcut method implemented only by responses whose content\n is text (subclasses of TextResponse).\n \"\"\"\n raise NotSupported(\"Response content isn't text\")\n\n def xpath(self, *a, **kw):\n \"\"\"Shortcut method implemented only by responses whose content\n is text (subclasses of TextResponse).\n \"\"\"\n raise NotSupported(\"Response content isn't text\")\n", "path": "scrapy/http/response/__init__.py"}]}
| 1,383 | 301 |
gh_patches_debug_36105
|
rasdani/github-patches
|
git_diff
|
dbt-labs__dbt-core-2056
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Partial parsing: duplicate resource error
### Describe the bug
When the `--partial-parse` flag is enabled, dbt can fail with an errant compilation error:
```
$ dbt --partial-parse compile
Running with dbt=0.15.0
Encountered an error:
Compilation Error
dbt found two resources with the name "adwords_ad_performance_adapter". Since these resources have the same name,
dbt will be unable to find the correct resource when ref("adwords_ad_performance_adapter") is used. To fix this,
change the name of one of these resources:
- model.adwords.adwords_ad_performance_adapter (models/router/adapter/criteria/adwords_ad_performance_adapter.sql)
- model.adwords.adwords_ad_performance_adapter (models/router/adapter/criteria/adwords_ad_performance_adapter.sql)
```
dbt is reporting the same model twice. Interestingly, there actually _are_ two instances of models named `adwords_ad_performance_adapter` in the [adwords package](https://github.com/fishtown-analytics/adwords):
- https://github.com/fishtown-analytics/adwords/blob/master/models/router/adapter/criteria/adwords_ad_performance_adapter.sql
- https://github.com/fishtown-analytics/adwords/blob/master/models/router/adapter/url/adwords_ad_performance_adapter.sql
These models are conditionally enabled using a variable, `adapter_value`, defined in the `dbt_project.yml` file.
It's not yet clear to me if dbt is failing because it's finding both of these models and reporting an errant error message, or if dbt really is picking up the same model twice somehow.
### Steps To Reproduce
Add the following to `packages.yml`:
```
packages:
- package: fishtown-analytics/adwords
version: 0.2.9
```
Add the following to `dbt_project.yml`:
```
models:
adwords:
vars:
adapter_value: criteria
```
Create a model called `adwords_criteria_performance.sql` with the contents:
```
select 1 as id
```
```
# Succeeds
$ dbt compile
# Fails with duplicate model error
$ dbt --partial-parse compile
```
### Expected behavior
dbt should compile and run this project successfully.
**The output of `dbt --version`:**
```
dbt v0.15.0
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/dbt/parser/results.py`
Content:
```
1 from dataclasses import dataclass, field
2 from typing import TypeVar, MutableMapping, Mapping, Union, List
3
4 from hologram import JsonSchemaMixin
5
6 from dbt.contracts.graph.manifest import SourceFile, RemoteFile, FileHash
7 from dbt.contracts.graph.parsed import (
8 ParsedNode, HasUniqueID, ParsedMacro, ParsedDocumentation, ParsedNodePatch,
9 ParsedSourceDefinition, ParsedAnalysisNode, ParsedHookNode, ParsedRPCNode,
10 ParsedModelNode, ParsedSeedNode, ParsedTestNode, ParsedSnapshotNode,
11 )
12 from dbt.contracts.util import Writable, Replaceable
13 from dbt.exceptions import (
14 raise_duplicate_resource_name, raise_duplicate_patch_name,
15 CompilationException, InternalException
16 )
17 from dbt.version import __version__
18
19
20 # Parsers can return anything as long as it's a unique ID
21 ParsedValueType = TypeVar('ParsedValueType', bound=HasUniqueID)
22
23
24 def _check_duplicates(
25 value: HasUniqueID, src: Mapping[str, HasUniqueID]
26 ):
27 if value.unique_id in src:
28 raise_duplicate_resource_name(value, src[value.unique_id])
29
30
31 ManifestNodes = Union[
32 ParsedAnalysisNode,
33 ParsedHookNode,
34 ParsedModelNode,
35 ParsedSeedNode,
36 ParsedTestNode,
37 ParsedSnapshotNode,
38 ParsedRPCNode,
39 ]
40
41
42 def dict_field():
43 return field(default_factory=dict)
44
45
46 @dataclass
47 class ParseResult(JsonSchemaMixin, Writable, Replaceable):
48 vars_hash: FileHash
49 profile_hash: FileHash
50 project_hashes: MutableMapping[str, FileHash]
51 nodes: MutableMapping[str, ManifestNodes] = dict_field()
52 sources: MutableMapping[str, ParsedSourceDefinition] = dict_field()
53 docs: MutableMapping[str, ParsedDocumentation] = dict_field()
54 macros: MutableMapping[str, ParsedMacro] = dict_field()
55 patches: MutableMapping[str, ParsedNodePatch] = dict_field()
56 files: MutableMapping[str, SourceFile] = dict_field()
57 disabled: MutableMapping[str, List[ParsedNode]] = dict_field()
58 dbt_version: str = __version__
59
60 def get_file(self, source_file: SourceFile) -> SourceFile:
61 key = source_file.search_key
62 if key is None:
63 return source_file
64 if key not in self.files:
65 self.files[key] = source_file
66 return self.files[key]
67
68 def add_source(
69 self, source_file: SourceFile, node: ParsedSourceDefinition
70 ):
71 # nodes can't be overwritten!
72 _check_duplicates(node, self.sources)
73 self.sources[node.unique_id] = node
74 self.get_file(source_file).sources.append(node.unique_id)
75
76 def add_node(self, source_file: SourceFile, node: ManifestNodes):
77 # nodes can't be overwritten!
78 _check_duplicates(node, self.nodes)
79 self.nodes[node.unique_id] = node
80 self.get_file(source_file).nodes.append(node.unique_id)
81
82 def add_disabled(self, source_file: SourceFile, node: ParsedNode):
83 if node.unique_id in self.disabled:
84 self.disabled[node.unique_id].append(node)
85 else:
86 self.disabled[node.unique_id] = [node]
87 self.get_file(source_file).nodes.append(node.unique_id)
88
89 def add_macro(self, source_file: SourceFile, macro: ParsedMacro):
90 # macros can be overwritten (should they be?)
91 self.macros[macro.unique_id] = macro
92 self.get_file(source_file).macros.append(macro.unique_id)
93
94 def add_doc(self, source_file: SourceFile, doc: ParsedDocumentation):
95 # Docs also can be overwritten (should they be?)
96 self.docs[doc.unique_id] = doc
97 self.get_file(source_file).docs.append(doc.unique_id)
98
99 def add_patch(self, source_file: SourceFile, patch: ParsedNodePatch):
100 # matches can't be overwritten
101 if patch.name in self.patches:
102 raise_duplicate_patch_name(patch.name, patch,
103 self.patches[patch.name])
104 self.patches[patch.name] = patch
105 self.get_file(source_file).patches.append(patch.name)
106
107 def _get_disabled(
108 self, unique_id: str, match_file: SourceFile
109 ) -> List[ParsedNode]:
110 if unique_id not in self.disabled:
111 raise InternalException(
112 'called _get_disabled with id={}, but it does not exist'
113 .format(unique_id)
114 )
115 return [
116 n for n in self.disabled[unique_id]
117 if n.original_file_path == match_file.path.original_file_path
118 ]
119
120 def sanitized_update(
121 self, source_file: SourceFile, old_result: 'ParseResult',
122 ) -> bool:
123 """Perform a santized update. If the file can't be updated, invalidate
124 it and return false.
125 """
126 if isinstance(source_file.path, RemoteFile):
127 return False
128
129 old_file = old_result.get_file(source_file)
130 for doc_id in old_file.docs:
131 doc = _expect_value(doc_id, old_result.docs, old_file, "docs")
132 self.add_doc(source_file, doc)
133
134 for macro_id in old_file.macros:
135 macro = _expect_value(
136 macro_id, old_result.macros, old_file, "macros"
137 )
138 self.add_macro(source_file, macro)
139
140 for source_id in old_file.sources:
141 source = _expect_value(
142 source_id, old_result.sources, old_file, "sources"
143 )
144 self.add_source(source_file, source)
145
146 # because we know this is how we _parsed_ the node, we can safely
147 # assume if it's disabled it was done by the project or file, and
148 # we can keep our old data
149 for node_id in old_file.nodes:
150 if node_id in old_result.nodes:
151 node = old_result.nodes[node_id]
152 self.add_node(source_file, node)
153 elif node_id in old_result.disabled:
154 matches = old_result._get_disabled(node_id, source_file)
155 for match in matches:
156 self.add_disabled(source_file, match)
157 else:
158 raise CompilationException(
159 'Expected to find "{}" in cached "manifest.nodes" or '
160 '"manifest.disabled" based on cached file information: {}!'
161 .format(node_id, old_file)
162 )
163
164 for name in old_file.patches:
165 patch = _expect_value(
166 name, old_result.patches, old_file, "patches"
167 )
168 self.add_patch(source_file, patch)
169
170 return True
171
172 def has_file(self, source_file: SourceFile) -> bool:
173 key = source_file.search_key
174 if key is None:
175 return False
176 if key not in self.files:
177 return False
178 my_checksum = self.files[key].checksum
179 return my_checksum == source_file.checksum
180
181 @classmethod
182 def rpc(cls):
183 # ugh!
184 return cls(FileHash.empty(), FileHash.empty(), {})
185
186
187 T = TypeVar('T')
188
189
190 def _expect_value(
191 key: str, src: Mapping[str, T], old_file: SourceFile, name: str
192 ) -> T:
193 if key not in src:
194 raise CompilationException(
195 'Expected to find "{}" in cached "result.{}" based '
196 'on cached file information: {}!'
197 .format(key, name, old_file)
198 )
199 return src[key]
200
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/core/dbt/parser/results.py b/core/dbt/parser/results.py
--- a/core/dbt/parser/results.py
+++ b/core/dbt/parser/results.py
@@ -117,6 +117,37 @@
if n.original_file_path == match_file.path.original_file_path
]
+ def _process_node(
+ self,
+ node_id: str,
+ source_file: SourceFile,
+ old_file: SourceFile,
+ old_result: 'ParseResult',
+ ) -> None:
+ """Nodes are a special kind of complicated - there can be multiple
+ with the same name, as long as all but one are disabled.
+ """
+ source_path = source_file.path.original_file_path
+ found: bool = False
+ if node_id in old_result.nodes:
+ old_node = old_result.nodes[node_id]
+ if old_node.original_file_path == source_path:
+ self.add_node(source_file, old_node)
+ found = True
+
+ if node_id in old_result.disabled:
+ matches = old_result._get_disabled(node_id, source_file)
+ for match in matches:
+ self.add_disabled(source_file, match)
+ found = True
+
+ if not found:
+ raise CompilationException(
+ 'Expected to find "{}" in cached "manifest.nodes" or '
+ '"manifest.disabled" based on cached file information: {}!'
+ .format(node_id, old_file)
+ )
+
def sanitized_update(
self, source_file: SourceFile, old_result: 'ParseResult',
) -> bool:
@@ -146,20 +177,10 @@
# because we know this is how we _parsed_ the node, we can safely
# assume if it's disabled it was done by the project or file, and
# we can keep our old data
+ # the node ID could be in old_result.disabled AND in old_result.nodes.
+ # In that case, we have to make sure the path also matches.
for node_id in old_file.nodes:
- if node_id in old_result.nodes:
- node = old_result.nodes[node_id]
- self.add_node(source_file, node)
- elif node_id in old_result.disabled:
- matches = old_result._get_disabled(node_id, source_file)
- for match in matches:
- self.add_disabled(source_file, match)
- else:
- raise CompilationException(
- 'Expected to find "{}" in cached "manifest.nodes" or '
- '"manifest.disabled" based on cached file information: {}!'
- .format(node_id, old_file)
- )
+ self._process_node(node_id, source_file, old_file, old_result)
for name in old_file.patches:
patch = _expect_value(
|
{"golden_diff": "diff --git a/core/dbt/parser/results.py b/core/dbt/parser/results.py\n--- a/core/dbt/parser/results.py\n+++ b/core/dbt/parser/results.py\n@@ -117,6 +117,37 @@\n if n.original_file_path == match_file.path.original_file_path\n ]\n \n+ def _process_node(\n+ self,\n+ node_id: str,\n+ source_file: SourceFile,\n+ old_file: SourceFile,\n+ old_result: 'ParseResult',\n+ ) -> None:\n+ \"\"\"Nodes are a special kind of complicated - there can be multiple\n+ with the same name, as long as all but one are disabled.\n+ \"\"\"\n+ source_path = source_file.path.original_file_path\n+ found: bool = False\n+ if node_id in old_result.nodes:\n+ old_node = old_result.nodes[node_id]\n+ if old_node.original_file_path == source_path:\n+ self.add_node(source_file, old_node)\n+ found = True\n+\n+ if node_id in old_result.disabled:\n+ matches = old_result._get_disabled(node_id, source_file)\n+ for match in matches:\n+ self.add_disabled(source_file, match)\n+ found = True\n+\n+ if not found:\n+ raise CompilationException(\n+ 'Expected to find \"{}\" in cached \"manifest.nodes\" or '\n+ '\"manifest.disabled\" based on cached file information: {}!'\n+ .format(node_id, old_file)\n+ )\n+\n def sanitized_update(\n self, source_file: SourceFile, old_result: 'ParseResult',\n ) -> bool:\n@@ -146,20 +177,10 @@\n # because we know this is how we _parsed_ the node, we can safely\n # assume if it's disabled it was done by the project or file, and\n # we can keep our old data\n+ # the node ID could be in old_result.disabled AND in old_result.nodes.\n+ # In that case, we have to make sure the path also matches.\n for node_id in old_file.nodes:\n- if node_id in old_result.nodes:\n- node = old_result.nodes[node_id]\n- self.add_node(source_file, node)\n- elif node_id in old_result.disabled:\n- matches = old_result._get_disabled(node_id, source_file)\n- for match in matches:\n- self.add_disabled(source_file, match)\n- else:\n- raise CompilationException(\n- 'Expected to find \"{}\" in cached \"manifest.nodes\" or '\n- '\"manifest.disabled\" based on cached file information: {}!'\n- .format(node_id, old_file)\n- )\n+ self._process_node(node_id, source_file, old_file, old_result)\n \n for name in old_file.patches:\n patch = _expect_value(\n", "issue": "Partial parsing: duplicate resource error\n### Describe the bug\r\nWhen the `--partial-parse` flag is enable, dbt can fail with an errant compilation error:\r\n\r\n```\r\n$ dbt --partial-parse compile\r\nRunning with dbt=0.15.0\r\nEncountered an error:\r\nCompilation Error\r\n dbt found two resources with the name \"adwords_ad_performance_adapter\". Since these resources have the same name,\r\n dbt will be unable to find the correct resource when ref(\"adwords_ad_performance_adapter\") is used. To fix this,\r\n change the name of one of these resources:\r\n - model.adwords.adwords_ad_performance_adapter (models/router/adapter/criteria/adwords_ad_performance_adapter.sql)\r\n - model.adwords.adwords_ad_performance_adapter (models/router/adapter/criteria/adwords_ad_performance_adapter.sql)\r\n```\r\n\r\ndbt is reporting the same model twice. 
Interestingly, there actually _are_ two instances of models named `adwords_ad_performance_adapter` in the [adwords package](https://github.com/fishtown-analytics/adwords):\r\n - https://github.com/fishtown-analytics/adwords/blob/master/models/router/adapter/criteria/adwords_ad_performance_adapter.sql\r\n - https://github.com/fishtown-analytics/adwords/blob/master/models/router/adapter/url/adwords_ad_performance_adapter.sql\r\n\r\nThese models are conditionally enabled using a variable, `adapter_value`, defined in the `dbt_project.yml` file.\r\n\r\nIt's not yet clear to me if dbt is failing because it's finding both of these models and reporting an errant error message, or if dbt really is picking up the same model twice somehow.\r\n\r\n### Steps To Reproduce\r\n\r\nAdd the following to `packages.yml`:\r\n```\r\npackages:\r\n - package: fishtown-analytics/adwords\r\n version: 0.2.9\r\n```\r\n\r\nAdd the following to `dbt_project.yml`:\r\n```\r\nmodels:\r\n adwords:\r\n vars:\r\n adapter_value: criteria\r\n```\r\n\r\nCreate a model called `adwords_criteria_performance.sql` with the contents:\r\n```\r\nselect 1 as id\r\n```\r\n\r\n```\r\n# Succeeds\r\n$ dbt compile\r\n\r\n# Fails with duplicate model error\r\n$ dbt --partial-parse compile\r\n```\r\n\r\n### Expected behavior\r\ndbt should compile and run this project successfully.\r\n\r\n**The output of `dbt --version`:**\r\n```\r\ndbt v0.15.0\r\n```\n", "before_files": [{"content": "from dataclasses import dataclass, field\nfrom typing import TypeVar, MutableMapping, Mapping, Union, List\n\nfrom hologram import JsonSchemaMixin\n\nfrom dbt.contracts.graph.manifest import SourceFile, RemoteFile, FileHash\nfrom dbt.contracts.graph.parsed import (\n ParsedNode, HasUniqueID, ParsedMacro, ParsedDocumentation, ParsedNodePatch,\n ParsedSourceDefinition, ParsedAnalysisNode, ParsedHookNode, ParsedRPCNode,\n ParsedModelNode, ParsedSeedNode, ParsedTestNode, ParsedSnapshotNode,\n)\nfrom dbt.contracts.util import Writable, Replaceable\nfrom dbt.exceptions import (\n raise_duplicate_resource_name, raise_duplicate_patch_name,\n CompilationException, InternalException\n)\nfrom dbt.version import __version__\n\n\n# Parsers can return anything as long as it's a unique ID\nParsedValueType = TypeVar('ParsedValueType', bound=HasUniqueID)\n\n\ndef _check_duplicates(\n value: HasUniqueID, src: Mapping[str, HasUniqueID]\n):\n if value.unique_id in src:\n raise_duplicate_resource_name(value, src[value.unique_id])\n\n\nManifestNodes = Union[\n ParsedAnalysisNode,\n ParsedHookNode,\n ParsedModelNode,\n ParsedSeedNode,\n ParsedTestNode,\n ParsedSnapshotNode,\n ParsedRPCNode,\n]\n\n\ndef dict_field():\n return field(default_factory=dict)\n\n\n@dataclass\nclass ParseResult(JsonSchemaMixin, Writable, Replaceable):\n vars_hash: FileHash\n profile_hash: FileHash\n project_hashes: MutableMapping[str, FileHash]\n nodes: MutableMapping[str, ManifestNodes] = dict_field()\n sources: MutableMapping[str, ParsedSourceDefinition] = dict_field()\n docs: MutableMapping[str, ParsedDocumentation] = dict_field()\n macros: MutableMapping[str, ParsedMacro] = dict_field()\n patches: MutableMapping[str, ParsedNodePatch] = dict_field()\n files: MutableMapping[str, SourceFile] = dict_field()\n disabled: MutableMapping[str, List[ParsedNode]] = dict_field()\n dbt_version: str = __version__\n\n def get_file(self, source_file: SourceFile) -> SourceFile:\n key = source_file.search_key\n if key is None:\n return source_file\n if key not in self.files:\n self.files[key] = source_file\n return 
self.files[key]\n\n def add_source(\n self, source_file: SourceFile, node: ParsedSourceDefinition\n ):\n # nodes can't be overwritten!\n _check_duplicates(node, self.sources)\n self.sources[node.unique_id] = node\n self.get_file(source_file).sources.append(node.unique_id)\n\n def add_node(self, source_file: SourceFile, node: ManifestNodes):\n # nodes can't be overwritten!\n _check_duplicates(node, self.nodes)\n self.nodes[node.unique_id] = node\n self.get_file(source_file).nodes.append(node.unique_id)\n\n def add_disabled(self, source_file: SourceFile, node: ParsedNode):\n if node.unique_id in self.disabled:\n self.disabled[node.unique_id].append(node)\n else:\n self.disabled[node.unique_id] = [node]\n self.get_file(source_file).nodes.append(node.unique_id)\n\n def add_macro(self, source_file: SourceFile, macro: ParsedMacro):\n # macros can be overwritten (should they be?)\n self.macros[macro.unique_id] = macro\n self.get_file(source_file).macros.append(macro.unique_id)\n\n def add_doc(self, source_file: SourceFile, doc: ParsedDocumentation):\n # Docs also can be overwritten (should they be?)\n self.docs[doc.unique_id] = doc\n self.get_file(source_file).docs.append(doc.unique_id)\n\n def add_patch(self, source_file: SourceFile, patch: ParsedNodePatch):\n # matches can't be overwritten\n if patch.name in self.patches:\n raise_duplicate_patch_name(patch.name, patch,\n self.patches[patch.name])\n self.patches[patch.name] = patch\n self.get_file(source_file).patches.append(patch.name)\n\n def _get_disabled(\n self, unique_id: str, match_file: SourceFile\n ) -> List[ParsedNode]:\n if unique_id not in self.disabled:\n raise InternalException(\n 'called _get_disabled with id={}, but it does not exist'\n .format(unique_id)\n )\n return [\n n for n in self.disabled[unique_id]\n if n.original_file_path == match_file.path.original_file_path\n ]\n\n def sanitized_update(\n self, source_file: SourceFile, old_result: 'ParseResult',\n ) -> bool:\n \"\"\"Perform a santized update. 
If the file can't be updated, invalidate\n it and return false.\n \"\"\"\n if isinstance(source_file.path, RemoteFile):\n return False\n\n old_file = old_result.get_file(source_file)\n for doc_id in old_file.docs:\n doc = _expect_value(doc_id, old_result.docs, old_file, \"docs\")\n self.add_doc(source_file, doc)\n\n for macro_id in old_file.macros:\n macro = _expect_value(\n macro_id, old_result.macros, old_file, \"macros\"\n )\n self.add_macro(source_file, macro)\n\n for source_id in old_file.sources:\n source = _expect_value(\n source_id, old_result.sources, old_file, \"sources\"\n )\n self.add_source(source_file, source)\n\n # because we know this is how we _parsed_ the node, we can safely\n # assume if it's disabled it was done by the project or file, and\n # we can keep our old data\n for node_id in old_file.nodes:\n if node_id in old_result.nodes:\n node = old_result.nodes[node_id]\n self.add_node(source_file, node)\n elif node_id in old_result.disabled:\n matches = old_result._get_disabled(node_id, source_file)\n for match in matches:\n self.add_disabled(source_file, match)\n else:\n raise CompilationException(\n 'Expected to find \"{}\" in cached \"manifest.nodes\" or '\n '\"manifest.disabled\" based on cached file information: {}!'\n .format(node_id, old_file)\n )\n\n for name in old_file.patches:\n patch = _expect_value(\n name, old_result.patches, old_file, \"patches\"\n )\n self.add_patch(source_file, patch)\n\n return True\n\n def has_file(self, source_file: SourceFile) -> bool:\n key = source_file.search_key\n if key is None:\n return False\n if key not in self.files:\n return False\n my_checksum = self.files[key].checksum\n return my_checksum == source_file.checksum\n\n @classmethod\n def rpc(cls):\n # ugh!\n return cls(FileHash.empty(), FileHash.empty(), {})\n\n\nT = TypeVar('T')\n\n\ndef _expect_value(\n key: str, src: Mapping[str, T], old_file: SourceFile, name: str\n) -> T:\n if key not in src:\n raise CompilationException(\n 'Expected to find \"{}\" in cached \"result.{}\" based '\n 'on cached file information: {}!'\n .format(key, name, old_file)\n )\n return src[key]\n", "path": "core/dbt/parser/results.py"}], "after_files": [{"content": "from dataclasses import dataclass, field\nfrom typing import TypeVar, MutableMapping, Mapping, Union, List\n\nfrom hologram import JsonSchemaMixin\n\nfrom dbt.contracts.graph.manifest import SourceFile, RemoteFile, FileHash\nfrom dbt.contracts.graph.parsed import (\n ParsedNode, HasUniqueID, ParsedMacro, ParsedDocumentation, ParsedNodePatch,\n ParsedSourceDefinition, ParsedAnalysisNode, ParsedHookNode, ParsedRPCNode,\n ParsedModelNode, ParsedSeedNode, ParsedTestNode, ParsedSnapshotNode,\n)\nfrom dbt.contracts.util import Writable, Replaceable\nfrom dbt.exceptions import (\n raise_duplicate_resource_name, raise_duplicate_patch_name,\n CompilationException, InternalException\n)\nfrom dbt.version import __version__\n\n\n# Parsers can return anything as long as it's a unique ID\nParsedValueType = TypeVar('ParsedValueType', bound=HasUniqueID)\n\n\ndef _check_duplicates(\n value: HasUniqueID, src: Mapping[str, HasUniqueID]\n):\n if value.unique_id in src:\n raise_duplicate_resource_name(value, src[value.unique_id])\n\n\nManifestNodes = Union[\n ParsedAnalysisNode,\n ParsedHookNode,\n ParsedModelNode,\n ParsedSeedNode,\n ParsedTestNode,\n ParsedSnapshotNode,\n ParsedRPCNode,\n]\n\n\ndef dict_field():\n return field(default_factory=dict)\n\n\n@dataclass\nclass ParseResult(JsonSchemaMixin, Writable, Replaceable):\n vars_hash: FileHash\n 
profile_hash: FileHash\n project_hashes: MutableMapping[str, FileHash]\n nodes: MutableMapping[str, ManifestNodes] = dict_field()\n sources: MutableMapping[str, ParsedSourceDefinition] = dict_field()\n docs: MutableMapping[str, ParsedDocumentation] = dict_field()\n macros: MutableMapping[str, ParsedMacro] = dict_field()\n patches: MutableMapping[str, ParsedNodePatch] = dict_field()\n files: MutableMapping[str, SourceFile] = dict_field()\n disabled: MutableMapping[str, List[ParsedNode]] = dict_field()\n dbt_version: str = __version__\n\n def get_file(self, source_file: SourceFile) -> SourceFile:\n key = source_file.search_key\n if key is None:\n return source_file\n if key not in self.files:\n self.files[key] = source_file\n return self.files[key]\n\n def add_source(\n self, source_file: SourceFile, node: ParsedSourceDefinition\n ):\n # nodes can't be overwritten!\n _check_duplicates(node, self.sources)\n self.sources[node.unique_id] = node\n self.get_file(source_file).sources.append(node.unique_id)\n\n def add_node(self, source_file: SourceFile, node: ManifestNodes):\n # nodes can't be overwritten!\n _check_duplicates(node, self.nodes)\n self.nodes[node.unique_id] = node\n self.get_file(source_file).nodes.append(node.unique_id)\n\n def add_disabled(self, source_file: SourceFile, node: ParsedNode):\n if node.unique_id in self.disabled:\n self.disabled[node.unique_id].append(node)\n else:\n self.disabled[node.unique_id] = [node]\n self.get_file(source_file).nodes.append(node.unique_id)\n\n def add_macro(self, source_file: SourceFile, macro: ParsedMacro):\n # macros can be overwritten (should they be?)\n self.macros[macro.unique_id] = macro\n self.get_file(source_file).macros.append(macro.unique_id)\n\n def add_doc(self, source_file: SourceFile, doc: ParsedDocumentation):\n # Docs also can be overwritten (should they be?)\n self.docs[doc.unique_id] = doc\n self.get_file(source_file).docs.append(doc.unique_id)\n\n def add_patch(self, source_file: SourceFile, patch: ParsedNodePatch):\n # matches can't be overwritten\n if patch.name in self.patches:\n raise_duplicate_patch_name(patch.name, patch,\n self.patches[patch.name])\n self.patches[patch.name] = patch\n self.get_file(source_file).patches.append(patch.name)\n\n def _get_disabled(\n self, unique_id: str, match_file: SourceFile\n ) -> List[ParsedNode]:\n if unique_id not in self.disabled:\n raise InternalException(\n 'called _get_disabled with id={}, but it does not exist'\n .format(unique_id)\n )\n return [\n n for n in self.disabled[unique_id]\n if n.original_file_path == match_file.path.original_file_path\n ]\n\n def _process_node(\n self,\n node_id: str,\n source_file: SourceFile,\n old_file: SourceFile,\n old_result: 'ParseResult',\n ) -> None:\n \"\"\"Nodes are a special kind of complicated - there can be multiple\n with the same name, as long as all but one are disabled.\n \"\"\"\n source_path = source_file.path.original_file_path\n found: bool = False\n if node_id in old_result.nodes:\n old_node = old_result.nodes[node_id]\n if old_node.original_file_path == source_path:\n self.add_node(source_file, old_node)\n found = True\n\n if node_id in old_result.disabled:\n matches = old_result._get_disabled(node_id, source_file)\n for match in matches:\n self.add_disabled(source_file, match)\n found = True\n\n if not found:\n raise CompilationException(\n 'Expected to find \"{}\" in cached \"manifest.nodes\" or '\n '\"manifest.disabled\" based on cached file information: {}!'\n .format(node_id, old_file)\n )\n\n def sanitized_update(\n self, 
source_file: SourceFile, old_result: 'ParseResult',\n ) -> bool:\n \"\"\"Perform a santized update. If the file can't be updated, invalidate\n it and return false.\n \"\"\"\n if isinstance(source_file.path, RemoteFile):\n return False\n\n old_file = old_result.get_file(source_file)\n for doc_id in old_file.docs:\n doc = _expect_value(doc_id, old_result.docs, old_file, \"docs\")\n self.add_doc(source_file, doc)\n\n for macro_id in old_file.macros:\n macro = _expect_value(\n macro_id, old_result.macros, old_file, \"macros\"\n )\n self.add_macro(source_file, macro)\n\n for source_id in old_file.sources:\n source = _expect_value(\n source_id, old_result.sources, old_file, \"sources\"\n )\n self.add_source(source_file, source)\n\n # because we know this is how we _parsed_ the node, we can safely\n # assume if it's disabled it was done by the project or file, and\n # we can keep our old data\n # the node ID could be in old_result.disabled AND in old_result.nodes.\n # In that case, we have to make sure the path also matches.\n for node_id in old_file.nodes:\n self._process_node(node_id, source_file, old_file, old_result)\n\n for name in old_file.patches:\n patch = _expect_value(\n name, old_result.patches, old_file, \"patches\"\n )\n self.add_patch(source_file, patch)\n\n return True\n\n def has_file(self, source_file: SourceFile) -> bool:\n key = source_file.search_key\n if key is None:\n return False\n if key not in self.files:\n return False\n my_checksum = self.files[key].checksum\n return my_checksum == source_file.checksum\n\n @classmethod\n def rpc(cls):\n # ugh!\n return cls(FileHash.empty(), FileHash.empty(), {})\n\n\nT = TypeVar('T')\n\n\ndef _expect_value(\n key: str, src: Mapping[str, T], old_file: SourceFile, name: str\n) -> T:\n if key not in src:\n raise CompilationException(\n 'Expected to find \"{}\" in cached \"result.{}\" based '\n 'on cached file information: {}!'\n .format(key, name, old_file)\n )\n return src[key]\n", "path": "core/dbt/parser/results.py"}]}
| 2,871 | 620 |
gh_patches_debug_66309
|
rasdani/github-patches
|
git_diff
|
sql-machine-learning__elasticdl-1463
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
No module named 'elasticdl.python.elasticdl.layers' on master
```
Traceback (most recent call last):
File "/usr/local/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/local/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/elasticdl/python/master/main.py", line 28, in <module>
from elasticdl.python.elasticdl.layers.embedding import Embedding
ModuleNotFoundError: No module named 'elasticdl.python.elasticdl.layers'
```
Seems `layers` directory is not installed to `/usr/local/lib/python3.7/site-packages/elasticdl-develop-py3.7.egg/elasticdl/python/elasticdl` after running `python setup.py install`
Steps to reproduce:
1. In a Python Docker container, clone ElasticDL and run `python setup.py install`
1. remove the cloned source
1. execute a demo job by: `elasticdl train ...`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `elasticdl/python/elasticdl/__init__.py`
Content:
```
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/elasticdl/python/elasticdl/__init__.py b/elasticdl/python/elasticdl/__init__.py
--- a/elasticdl/python/elasticdl/__init__.py
+++ b/elasticdl/python/elasticdl/__init__.py
@@ -0,0 +1 @@
+from elasticdl.python.elasticdl import layers # noqa: F401
|
{"golden_diff": "diff --git a/elasticdl/python/elasticdl/__init__.py b/elasticdl/python/elasticdl/__init__.py\n--- a/elasticdl/python/elasticdl/__init__.py\n+++ b/elasticdl/python/elasticdl/__init__.py\n@@ -0,0 +1 @@\n+from elasticdl.python.elasticdl import layers # noqa: F401\n", "issue": "No module named 'elasticdl.python.elasticdl.layers' on master\n```\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.7/runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"/usr/local/lib/python3.7/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/elasticdl/python/master/main.py\", line 28, in <module>\r\n from elasticdl.python.elasticdl.layers.embedding import Embedding\r\nModuleNotFoundError: No module named 'elasticdl.python.elasticdl.layers'\r\n```\r\n\r\nSeems `layers` directory is not installed to `/usr/local/lib/python3.7/site-packages/elasticdl-develop-py3.7.egg/elasticdl/python/elasticdl` after running `python setup.py install`\r\n\r\nSteps to reproduce:\r\n\r\n1. In a Python Docker container, clone ElasticDL and run `python setup.py install`\r\n1. remove the cloned source\r\n1. execute a demo job by: `elasticdl train ...`\n", "before_files": [{"content": "", "path": "elasticdl/python/elasticdl/__init__.py"}], "after_files": [{"content": "from elasticdl.python.elasticdl import layers # noqa: F401\n", "path": "elasticdl/python/elasticdl/__init__.py"}]}
| 496 | 83 |
gh_patches_debug_30901
|
rasdani/github-patches
|
git_diff
|
lk-geimfari__mimesis-677
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Auto add builtin provider to Generic based on passed locale
# Feature request
An idea is very simple:
```python
generic = Generic('ru', auto_add_builtin=True)
generic.russia_provider.inn()
```
Instead of this:
```python
from mimesis import Generic
from mimesis.builtins import RussiaSpecProvider
generic = Generic('ru')
generic.add_provider(RussiaSpecProvider)
generic.russia_provider.inn()
```
Optionally we can make builtin's name customizable:
```python
generic = Generic('ru', auto_add_builtin=True, builtin_custom_name='russia')
generic.russia.inn()
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mimesis/providers/generic.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 """Provides all at one."""
4
5 import inspect
6 from typing import Any, List, Type
7
8 from mimesis.providers.address import Address
9 from mimesis.providers.base import BaseDataProvider, BaseProvider
10 from mimesis.providers.business import Business
11 from mimesis.providers.choice import Choice
12 from mimesis.providers.clothing import Clothing
13 from mimesis.providers.code import Code
14 from mimesis.providers.cryptographic import Cryptographic
15 from mimesis.providers.date import Datetime
16 from mimesis.providers.development import Development
17 from mimesis.providers.file import File
18 from mimesis.providers.food import Food
19 from mimesis.providers.hardware import Hardware
20 from mimesis.providers.internet import Internet
21 from mimesis.providers.numbers import Numbers
22 from mimesis.providers.path import Path
23 from mimesis.providers.payment import Payment
24 from mimesis.providers.person import Person
25 from mimesis.providers.science import Science
26 from mimesis.providers.structure import Structure
27 from mimesis.providers.text import Text
28 from mimesis.providers.transport import Transport
29 from mimesis.providers.units import UnitSystem
30
31 __all__ = ['Generic']
32
33
34 class Generic(BaseDataProvider):
35 """Class which contain all providers at one."""
36
37 def __init__(self, *args, **kwargs) -> None:
38 """Initialize attributes lazily.
39
40 :param args: Arguments.
41 :param kwargs: Keyword arguments.
42 """
43 super().__init__(*args, **kwargs)
44 self._person = Person
45 self._address = Address
46 self._datetime = Datetime
47 self._business = Business
48 self._text = Text
49 self._food = Food
50 self._science = Science
51 self.transport = Transport(seed=self.seed)
52 self.code = Code(seed=self.seed)
53 self.unit_system = UnitSystem(seed=self.seed)
54 self.file = File(seed=self.seed)
55 self.numbers = Numbers(seed=self.seed)
56 self.development = Development(seed=self.seed)
57 self.hardware = Hardware(seed=self.seed)
58 self.clothing = Clothing(seed=self.seed)
59 self.internet = Internet(seed=self.seed)
60 self.path = Path(seed=self.seed)
61 self.payment = Payment(seed=self.seed)
62 self.cryptographic = Cryptographic(seed=self.seed)
63 self.structure = Structure(seed=self.seed)
64 self.choice = Choice(seed=self.seed)
65
66 class Meta:
67 """Class for metadata."""
68
69 name = 'generic'
70
71 def __getattr__(self, attrname: str) -> Any:
72 """Get attribute without underscore.
73
74 :param attrname: Attribute name.
75 :return: An attribute.
76 """
77 attribute = object.__getattribute__(
78 self, '_' + attrname)
79 if attribute and callable(attribute):
80 self.__dict__[attrname] = attribute(
81 self.locale,
82 self.seed,
83 )
84 return self.__dict__[attrname]
85
86 def __dir__(self) -> List[str]:
87 """Available data providers.
88
89 The list of result will be used in AbstractField to
90 determine method's class.
91
92 :return: List of attributes.
93 """
94 attributes = []
95 exclude = BaseDataProvider().__dict__.keys()
96
97 for a in self.__dict__:
98 if a not in exclude:
99 if a.startswith('_'):
100 attribute = a.replace('_', '', 1)
101 attributes.append(attribute)
102 else:
103 attributes.append(a)
104 return attributes
105
106 def add_provider(self, cls: Type[BaseProvider]) -> None:
107 """Add a custom provider to Generic() object.
108
109 :param cls: Custom provider.
110 :return: None
111 :raises TypeError: if cls is not class.
112 """
113 if inspect.isclass(cls):
114 if not issubclass(cls, BaseProvider):
115 raise TypeError('The provider must be a '
116 'subclass of BaseProvider')
117 try:
118 meta = getattr(cls, 'Meta')
119 name = getattr(meta, 'name')
120 except AttributeError:
121 name = cls.__name__.lower()
122 setattr(self, name, cls(seed=self.seed))
123 else:
124 raise TypeError('The provider must be a class')
125
126 def add_providers(self, *providers: Type[BaseProvider]) -> None:
127 """Add a lot of custom providers to Generic() object.
128
129 :param providers: Custom providers.
130 :return: None
131 """
132 for provider in providers:
133 self.add_provider(provider)
134
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mimesis/providers/generic.py b/mimesis/providers/generic.py
--- a/mimesis/providers/generic.py
+++ b/mimesis/providers/generic.py
@@ -5,6 +5,17 @@
import inspect
from typing import Any, List, Type
+from mimesis.builtins import (
+ BrazilSpecProvider,
+ DenmarkSpecProvider,
+ GermanySpecProvider,
+ ItalySpecProvider,
+ NetherlandsSpecProvider,
+ PolandSpecProvider,
+ RussiaSpecProvider,
+ UkraineSpecProvider,
+ USASpecProvider,
+)
from mimesis.providers.address import Address
from mimesis.providers.base import BaseDataProvider, BaseProvider
from mimesis.providers.business import Business
@@ -48,6 +59,21 @@
self._text = Text
self._food = Food
self._science = Science
+
+ _spec_providers = {
+ 'de': DenmarkSpecProvider,
+ 'ge': GermanySpecProvider,
+ 'en': USASpecProvider,
+ 'it': ItalySpecProvider,
+ 'nl': NetherlandsSpecProvider,
+ 'pl': PolandSpecProvider,
+ 'pt-br': BrazilSpecProvider,
+ 'ru': RussiaSpecProvider,
+ 'uk': UkraineSpecProvider,
+ }
+ if self.locale in _spec_providers:
+ self.add_provider(_spec_providers[self.locale])
+
self.transport = Transport(seed=self.seed)
self.code = Code(seed=self.seed)
self.unit_system = UnitSystem(seed=self.seed)
@@ -108,7 +134,8 @@
:param cls: Custom provider.
:return: None
- :raises TypeError: if cls is not class.
+ :raises TypeError: if cls is not class or is not a subclass
+ of BaseProvider.
"""
if inspect.isclass(cls):
if not issubclass(cls, BaseProvider):
|
{"golden_diff": "diff --git a/mimesis/providers/generic.py b/mimesis/providers/generic.py\n--- a/mimesis/providers/generic.py\n+++ b/mimesis/providers/generic.py\n@@ -5,6 +5,17 @@\n import inspect\n from typing import Any, List, Type\n \n+from mimesis.builtins import (\n+ BrazilSpecProvider,\n+ DenmarkSpecProvider,\n+ GermanySpecProvider,\n+ ItalySpecProvider,\n+ NetherlandsSpecProvider,\n+ PolandSpecProvider,\n+ RussiaSpecProvider,\n+ UkraineSpecProvider,\n+ USASpecProvider,\n+)\n from mimesis.providers.address import Address\n from mimesis.providers.base import BaseDataProvider, BaseProvider\n from mimesis.providers.business import Business\n@@ -48,6 +59,21 @@\n self._text = Text\n self._food = Food\n self._science = Science\n+\n+ _spec_providers = {\n+ 'de': DenmarkSpecProvider,\n+ 'ge': GermanySpecProvider,\n+ 'en': USASpecProvider,\n+ 'it': ItalySpecProvider,\n+ 'nl': NetherlandsSpecProvider,\n+ 'pl': PolandSpecProvider,\n+ 'pt-br': BrazilSpecProvider,\n+ 'ru': RussiaSpecProvider,\n+ 'uk': UkraineSpecProvider,\n+ }\n+ if self.locale in _spec_providers:\n+ self.add_provider(_spec_providers[self.locale])\n+\n self.transport = Transport(seed=self.seed)\n self.code = Code(seed=self.seed)\n self.unit_system = UnitSystem(seed=self.seed)\n@@ -108,7 +134,8 @@\n \n :param cls: Custom provider.\n :return: None\n- :raises TypeError: if cls is not class.\n+ :raises TypeError: if cls is not class or is not a subclass\n+ of BaseProvider.\n \"\"\"\n if inspect.isclass(cls):\n if not issubclass(cls, BaseProvider):\n", "issue": "Auto add builtin provider to Generic based on passed locale\n# Feature request\r\n\r\nAn idea is very simple:\r\n\r\n```python\r\ngeneric = Generic('ru', auto_add_builtin=True)\r\ngeneric.russia_provider.inn()\r\n```\r\n\r\nInstead of this:\r\n\r\n```python\r\nfrom mimesis import Generic\r\nfrom mimesis.builtins import RussiaSpecProvider\r\n\r\ngeneric = Generic('ru')\r\ngeneric.add_provider(RussiaSpecProvider)\r\ngeneric.russia_provider.inn()\r\n```\r\n\r\nOptionally we can make builtin's name customizable: \r\n\r\n```python\r\ngeneric = Generic('ru', auto_add_builtin=True, builtin_custom_name='russia')\r\ngeneric.russia.inn()\r\n```\r\n\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Provides all at one.\"\"\"\n\nimport inspect\nfrom typing import Any, List, Type\n\nfrom mimesis.providers.address import Address\nfrom mimesis.providers.base import BaseDataProvider, BaseProvider\nfrom mimesis.providers.business import Business\nfrom mimesis.providers.choice import Choice\nfrom mimesis.providers.clothing import Clothing\nfrom mimesis.providers.code import Code\nfrom mimesis.providers.cryptographic import Cryptographic\nfrom mimesis.providers.date import Datetime\nfrom mimesis.providers.development import Development\nfrom mimesis.providers.file import File\nfrom mimesis.providers.food import Food\nfrom mimesis.providers.hardware import Hardware\nfrom mimesis.providers.internet import Internet\nfrom mimesis.providers.numbers import Numbers\nfrom mimesis.providers.path import Path\nfrom mimesis.providers.payment import Payment\nfrom mimesis.providers.person import Person\nfrom mimesis.providers.science import Science\nfrom mimesis.providers.structure import Structure\nfrom mimesis.providers.text import Text\nfrom mimesis.providers.transport import Transport\nfrom mimesis.providers.units import UnitSystem\n\n__all__ = ['Generic']\n\n\nclass Generic(BaseDataProvider):\n \"\"\"Class which contain all providers at one.\"\"\"\n\n def __init__(self, *args, 
**kwargs) -> None:\n \"\"\"Initialize attributes lazily.\n\n :param args: Arguments.\n :param kwargs: Keyword arguments.\n \"\"\"\n super().__init__(*args, **kwargs)\n self._person = Person\n self._address = Address\n self._datetime = Datetime\n self._business = Business\n self._text = Text\n self._food = Food\n self._science = Science\n self.transport = Transport(seed=self.seed)\n self.code = Code(seed=self.seed)\n self.unit_system = UnitSystem(seed=self.seed)\n self.file = File(seed=self.seed)\n self.numbers = Numbers(seed=self.seed)\n self.development = Development(seed=self.seed)\n self.hardware = Hardware(seed=self.seed)\n self.clothing = Clothing(seed=self.seed)\n self.internet = Internet(seed=self.seed)\n self.path = Path(seed=self.seed)\n self.payment = Payment(seed=self.seed)\n self.cryptographic = Cryptographic(seed=self.seed)\n self.structure = Structure(seed=self.seed)\n self.choice = Choice(seed=self.seed)\n\n class Meta:\n \"\"\"Class for metadata.\"\"\"\n\n name = 'generic'\n\n def __getattr__(self, attrname: str) -> Any:\n \"\"\"Get attribute without underscore.\n\n :param attrname: Attribute name.\n :return: An attribute.\n \"\"\"\n attribute = object.__getattribute__(\n self, '_' + attrname)\n if attribute and callable(attribute):\n self.__dict__[attrname] = attribute(\n self.locale,\n self.seed,\n )\n return self.__dict__[attrname]\n\n def __dir__(self) -> List[str]:\n \"\"\"Available data providers.\n\n The list of result will be used in AbstractField to\n determine method's class.\n\n :return: List of attributes.\n \"\"\"\n attributes = []\n exclude = BaseDataProvider().__dict__.keys()\n\n for a in self.__dict__:\n if a not in exclude:\n if a.startswith('_'):\n attribute = a.replace('_', '', 1)\n attributes.append(attribute)\n else:\n attributes.append(a)\n return attributes\n\n def add_provider(self, cls: Type[BaseProvider]) -> None:\n \"\"\"Add a custom provider to Generic() object.\n\n :param cls: Custom provider.\n :return: None\n :raises TypeError: if cls is not class.\n \"\"\"\n if inspect.isclass(cls):\n if not issubclass(cls, BaseProvider):\n raise TypeError('The provider must be a '\n 'subclass of BaseProvider')\n try:\n meta = getattr(cls, 'Meta')\n name = getattr(meta, 'name')\n except AttributeError:\n name = cls.__name__.lower()\n setattr(self, name, cls(seed=self.seed))\n else:\n raise TypeError('The provider must be a class')\n\n def add_providers(self, *providers: Type[BaseProvider]) -> None:\n \"\"\"Add a lot of custom providers to Generic() object.\n\n :param providers: Custom providers.\n :return: None\n \"\"\"\n for provider in providers:\n self.add_provider(provider)\n", "path": "mimesis/providers/generic.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Provides all at one.\"\"\"\n\nimport inspect\nfrom typing import Any, List, Type\n\nfrom mimesis.builtins import (\n BrazilSpecProvider,\n DenmarkSpecProvider,\n GermanySpecProvider,\n ItalySpecProvider,\n NetherlandsSpecProvider,\n PolandSpecProvider,\n RussiaSpecProvider,\n UkraineSpecProvider,\n USASpecProvider,\n)\nfrom mimesis.providers.address import Address\nfrom mimesis.providers.base import BaseDataProvider, BaseProvider\nfrom mimesis.providers.business import Business\nfrom mimesis.providers.choice import Choice\nfrom mimesis.providers.clothing import Clothing\nfrom mimesis.providers.code import Code\nfrom mimesis.providers.cryptographic import Cryptographic\nfrom mimesis.providers.date import Datetime\nfrom mimesis.providers.development import Development\nfrom 
mimesis.providers.file import File\nfrom mimesis.providers.food import Food\nfrom mimesis.providers.hardware import Hardware\nfrom mimesis.providers.internet import Internet\nfrom mimesis.providers.numbers import Numbers\nfrom mimesis.providers.path import Path\nfrom mimesis.providers.payment import Payment\nfrom mimesis.providers.person import Person\nfrom mimesis.providers.science import Science\nfrom mimesis.providers.structure import Structure\nfrom mimesis.providers.text import Text\nfrom mimesis.providers.transport import Transport\nfrom mimesis.providers.units import UnitSystem\n\n__all__ = ['Generic']\n\n\nclass Generic(BaseDataProvider):\n \"\"\"Class which contain all providers at one.\"\"\"\n\n def __init__(self, *args, **kwargs) -> None:\n \"\"\"Initialize attributes lazily.\n\n :param args: Arguments.\n :param kwargs: Keyword arguments.\n \"\"\"\n super().__init__(*args, **kwargs)\n self._person = Person\n self._address = Address\n self._datetime = Datetime\n self._business = Business\n self._text = Text\n self._food = Food\n self._science = Science\n\n _spec_providers = {\n 'de': DenmarkSpecProvider,\n 'ge': GermanySpecProvider,\n 'en': USASpecProvider,\n 'it': ItalySpecProvider,\n 'nl': NetherlandsSpecProvider,\n 'pl': PolandSpecProvider,\n 'pt-br': BrazilSpecProvider,\n 'ru': RussiaSpecProvider,\n 'uk': UkraineSpecProvider,\n }\n if self.locale in _spec_providers:\n self.add_provider(_spec_providers[self.locale])\n\n self.transport = Transport(seed=self.seed)\n self.code = Code(seed=self.seed)\n self.unit_system = UnitSystem(seed=self.seed)\n self.file = File(seed=self.seed)\n self.numbers = Numbers(seed=self.seed)\n self.development = Development(seed=self.seed)\n self.hardware = Hardware(seed=self.seed)\n self.clothing = Clothing(seed=self.seed)\n self.internet = Internet(seed=self.seed)\n self.path = Path(seed=self.seed)\n self.payment = Payment(seed=self.seed)\n self.cryptographic = Cryptographic(seed=self.seed)\n self.structure = Structure(seed=self.seed)\n self.choice = Choice(seed=self.seed)\n\n class Meta:\n \"\"\"Class for metadata.\"\"\"\n\n name = 'generic'\n\n def __getattr__(self, attrname: str) -> Any:\n \"\"\"Get attribute without underscore.\n\n :param attrname: Attribute name.\n :return: An attribute.\n \"\"\"\n attribute = object.__getattribute__(\n self, '_' + attrname)\n if attribute and callable(attribute):\n self.__dict__[attrname] = attribute(\n self.locale,\n self.seed,\n )\n return self.__dict__[attrname]\n\n def __dir__(self) -> List[str]:\n \"\"\"Available data providers.\n\n The list of result will be used in AbstractField to\n determine method's class.\n\n :return: List of attributes.\n \"\"\"\n attributes = []\n exclude = BaseDataProvider().__dict__.keys()\n\n for a in self.__dict__:\n if a not in exclude:\n if a.startswith('_'):\n attribute = a.replace('_', '', 1)\n attributes.append(attribute)\n else:\n attributes.append(a)\n return attributes\n\n def add_provider(self, cls: Type[BaseProvider]) -> None:\n \"\"\"Add a custom provider to Generic() object.\n\n :param cls: Custom provider.\n :return: None\n :raises TypeError: if cls is not class or is not a subclass\n of BaseProvider.\n \"\"\"\n if inspect.isclass(cls):\n if not issubclass(cls, BaseProvider):\n raise TypeError('The provider must be a '\n 'subclass of BaseProvider')\n try:\n meta = getattr(cls, 'Meta')\n name = getattr(meta, 'name')\n except AttributeError:\n name = cls.__name__.lower()\n setattr(self, name, cls(seed=self.seed))\n else:\n raise TypeError('The provider must be a 
class')\n\n def add_providers(self, *providers: Type[BaseProvider]) -> None:\n \"\"\"Add a lot of custom providers to Generic() object.\n\n :param providers: Custom providers.\n :return: None\n \"\"\"\n for provider in providers:\n self.add_provider(provider)\n", "path": "mimesis/providers/generic.py"}]}
| 1,629 | 426 |
gh_patches_debug_10035
|
rasdani/github-patches
|
git_diff
|
comic__grand-challenge.org-1433
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Helptext for supported file formats is not up-to-date
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/grandchallenge/cases/forms.py`
Content:
```
1 from typing import List
2
3 from crispy_forms.helper import FormHelper
4 from crispy_forms.layout import Submit
5 from django import forms
6 from django.conf import settings
7 from django.core.exceptions import ValidationError
8
9 from grandchallenge.cases.models import RawImageFile, RawImageUploadSession
10 from grandchallenge.jqfileupload.widgets import uploader
11 from grandchallenge.jqfileupload.widgets.uploader import (
12 StagedAjaxFile,
13 UploadedAjaxFileList,
14 )
15
16
17 class UploadRawImagesForm(forms.ModelForm):
18 files = UploadedAjaxFileList(
19 widget=uploader.AjaxUploadWidget(multifile=True, auto_commit=False),
20 label="Image files",
21 help_text=(
22 "The total size of all files uploaded in a single session "
23 "cannot exceed 10 GB.<br>"
24 "The following file formats are supported: "
25 ".mha, .mhd, .raw, .zraw, .dcm, .tiff, .png, .jpeg and .jpg."
26 ),
27 )
28
29 def __init__(self, *args, user, linked_task=None, **kwargs):
30 super().__init__(*args, **kwargs)
31 self.helper = FormHelper()
32 self.helper.add_input(Submit("save", "Submit"))
33 self.fields["files"].widget.user = user
34 self._linked_task = linked_task
35
36 def clean_files(self):
37 files = self.cleaned_data["files"]
38
39 if len({f.name for f in files}) != len(files):
40 raise ValidationError("Filenames must be unique.")
41
42 if sum([f.size for f in files]) > settings.UPLOAD_SESSION_MAX_BYTES:
43 raise ValidationError(
44 "Total size of all files exceeds the upload limit."
45 )
46
47 return files
48
49 def save(self, commit=True):
50 instance = super().save(commit=False) # type: RawImageUploadSession
51
52 # Create links between the created session and all uploaded files
53 uploaded_files = self.cleaned_data[
54 "files"
55 ] # type: List[StagedAjaxFile]
56
57 raw_files = [
58 RawImageFile(
59 upload_session=instance,
60 filename=uploaded_file.name,
61 staged_file_id=uploaded_file.uuid,
62 )
63 for uploaded_file in uploaded_files
64 ]
65
66 if commit:
67 instance.save()
68 RawImageFile.objects.bulk_create(raw_files)
69 instance.process_images(linked_task=self._linked_task)
70
71 return instance
72
73 class Meta:
74 model = RawImageUploadSession
75 fields = ["files"]
76
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/app/grandchallenge/cases/forms.py b/app/grandchallenge/cases/forms.py
--- a/app/grandchallenge/cases/forms.py
+++ b/app/grandchallenge/cases/forms.py
@@ -22,7 +22,10 @@
"The total size of all files uploaded in a single session "
"cannot exceed 10 GB.<br>"
"The following file formats are supported: "
- ".mha, .mhd, .raw, .zraw, .dcm, .tiff, .png, .jpeg and .jpg."
+ ".mha, .mhd, .raw, .zraw, .dcm, .tiff, .png, .jpeg and .jpg.<br>"
+ "The following file formats can be uploaded and will be converted to "
+ "tif: Aperio(.svs), Hamamatsu(.vms, .vmu, .ndpi), Leica(.scn), MIRAX"
+ "(.mrxs) and Ventana(.bif)."
),
)
|
{"golden_diff": "diff --git a/app/grandchallenge/cases/forms.py b/app/grandchallenge/cases/forms.py\n--- a/app/grandchallenge/cases/forms.py\n+++ b/app/grandchallenge/cases/forms.py\n@@ -22,7 +22,10 @@\n \"The total size of all files uploaded in a single session \"\n \"cannot exceed 10 GB.<br>\"\n \"The following file formats are supported: \"\n- \".mha, .mhd, .raw, .zraw, .dcm, .tiff, .png, .jpeg and .jpg.\"\n+ \".mha, .mhd, .raw, .zraw, .dcm, .tiff, .png, .jpeg and .jpg.<br>\"\n+ \"The following file formats can be uploaded and will be converted to \"\n+ \"tif: Aperio(.svs), Hamamatsu(.vms, .vmu, .ndpi), Leica(.scn), MIRAX\"\n+ \"(.mrxs) and Ventana(.bif).\"\n ),\n )\n", "issue": "Helptext for supported file formats is not up-to-date\n\n", "before_files": [{"content": "from typing import List\n\nfrom crispy_forms.helper import FormHelper\nfrom crispy_forms.layout import Submit\nfrom django import forms\nfrom django.conf import settings\nfrom django.core.exceptions import ValidationError\n\nfrom grandchallenge.cases.models import RawImageFile, RawImageUploadSession\nfrom grandchallenge.jqfileupload.widgets import uploader\nfrom grandchallenge.jqfileupload.widgets.uploader import (\n StagedAjaxFile,\n UploadedAjaxFileList,\n)\n\n\nclass UploadRawImagesForm(forms.ModelForm):\n files = UploadedAjaxFileList(\n widget=uploader.AjaxUploadWidget(multifile=True, auto_commit=False),\n label=\"Image files\",\n help_text=(\n \"The total size of all files uploaded in a single session \"\n \"cannot exceed 10 GB.<br>\"\n \"The following file formats are supported: \"\n \".mha, .mhd, .raw, .zraw, .dcm, .tiff, .png, .jpeg and .jpg.\"\n ),\n )\n\n def __init__(self, *args, user, linked_task=None, **kwargs):\n super().__init__(*args, **kwargs)\n self.helper = FormHelper()\n self.helper.add_input(Submit(\"save\", \"Submit\"))\n self.fields[\"files\"].widget.user = user\n self._linked_task = linked_task\n\n def clean_files(self):\n files = self.cleaned_data[\"files\"]\n\n if len({f.name for f in files}) != len(files):\n raise ValidationError(\"Filenames must be unique.\")\n\n if sum([f.size for f in files]) > settings.UPLOAD_SESSION_MAX_BYTES:\n raise ValidationError(\n \"Total size of all files exceeds the upload limit.\"\n )\n\n return files\n\n def save(self, commit=True):\n instance = super().save(commit=False) # type: RawImageUploadSession\n\n # Create links between the created session and all uploaded files\n uploaded_files = self.cleaned_data[\n \"files\"\n ] # type: List[StagedAjaxFile]\n\n raw_files = [\n RawImageFile(\n upload_session=instance,\n filename=uploaded_file.name,\n staged_file_id=uploaded_file.uuid,\n )\n for uploaded_file in uploaded_files\n ]\n\n if commit:\n instance.save()\n RawImageFile.objects.bulk_create(raw_files)\n instance.process_images(linked_task=self._linked_task)\n\n return instance\n\n class Meta:\n model = RawImageUploadSession\n fields = [\"files\"]\n", "path": "app/grandchallenge/cases/forms.py"}], "after_files": [{"content": "from typing import List\n\nfrom crispy_forms.helper import FormHelper\nfrom crispy_forms.layout import Submit\nfrom django import forms\nfrom django.conf import settings\nfrom django.core.exceptions import ValidationError\n\nfrom grandchallenge.cases.models import RawImageFile, RawImageUploadSession\nfrom grandchallenge.jqfileupload.widgets import uploader\nfrom grandchallenge.jqfileupload.widgets.uploader import (\n StagedAjaxFile,\n UploadedAjaxFileList,\n)\n\n\nclass UploadRawImagesForm(forms.ModelForm):\n files = UploadedAjaxFileList(\n 
widget=uploader.AjaxUploadWidget(multifile=True, auto_commit=False),\n label=\"Image files\",\n help_text=(\n \"The total size of all files uploaded in a single session \"\n \"cannot exceed 10 GB.<br>\"\n \"The following file formats are supported: \"\n \".mha, .mhd, .raw, .zraw, .dcm, .tiff, .png, .jpeg and .jpg.<br>\"\n \"The following file formats can be uploaded and will be converted to \"\n \"tif: Aperio(.svs), Hamamatsu(.vms, .vmu, .ndpi), Leica(.scn), MIRAX\"\n \"(.mrxs) and Ventana(.bif).\"\n ),\n )\n\n def __init__(self, *args, user, **kwargs):\n super().__init__(*args, **kwargs)\n self.helper = FormHelper()\n self.helper.add_input(Submit(\"save\", \"Submit\"))\n self.fields[\"files\"].widget.user = user\n\n def clean_files(self):\n files = self.cleaned_data[\"files\"]\n\n if len({f.name for f in files}) != len(files):\n raise ValidationError(\"Filenames must be unique.\")\n\n if sum([f.size for f in files]) > settings.UPLOAD_SESSION_MAX_BYTES:\n raise ValidationError(\n \"Total size of all files exceeds the upload limit.\"\n )\n\n return files\n\n def save(self, commit=True):\n instance = super().save(commit=False) # type: RawImageUploadSession\n\n # Create links between the created session and all uploaded files\n uploaded_files = self.cleaned_data[\n \"files\"\n ] # type: List[StagedAjaxFile]\n\n raw_files = [\n RawImageFile(\n upload_session=instance,\n filename=uploaded_file.name,\n staged_file_id=uploaded_file.uuid,\n )\n for uploaded_file in uploaded_files\n ]\n\n if commit:\n instance.save()\n RawImageFile.objects.bulk_create(raw_files)\n instance.process_images()\n\n return instance\n\n class Meta:\n model = RawImageUploadSession\n fields = [\"files\"]\n", "path": "app/grandchallenge/cases/forms.py"}]}
| 949 | 235 |
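The help-text fix in the row above hard-codes both format lists into one long string. Below is a minimal sketch of the same idea built from a single list of supported extensions; the helper name and constant layout are illustrative assumptions, not grand-challenge.org's actual code.

```python
# Illustrative only: build the upload help text from one place so the UI string
# cannot drift out of date the way the issue above describes.
NATIVE_FORMATS = [".mha", ".mhd", ".raw", ".zraw", ".dcm", ".tiff", ".png", ".jpeg", ".jpg"]
CONVERTED_TO_TIF = {
    "Aperio": [".svs"],
    "Hamamatsu": [".vms", ".vmu", ".ndpi"],
    "Leica": [".scn"],
    "MIRAX": [".mrxs"],
    "Ventana": [".bif"],
}


def build_upload_help_text() -> str:
    """Render the two format lists into the help text shown next to the upload widget."""
    native = ", ".join(NATIVE_FORMATS)
    converted = ", ".join(
        f"{vendor}({', '.join(exts)})" for vendor, exts in CONVERTED_TO_TIF.items()
    )
    return (
        "The total size of all files uploaded in a single session cannot exceed 10 GB.<br>"
        f"The following file formats are supported: {native}.<br>"
        f"The following file formats can be uploaded and will be converted to tif: {converted}."
    )


if __name__ == "__main__":
    print(build_upload_help_text())
```

Generating the string this way keeps the widget text and the extension list the import pipeline consults from drifting apart, which is exactly the staleness the issue reports.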
gh_patches_debug_16908
|
rasdani/github-patches
|
git_diff
|
conda__conda-build-1548
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
patching issue with test_api_skeleton.py
Hi,
I am having issues with tests/test_api_skeleton.py.
On the first failure:
```bash
============================= test session starts ==============================
platform linux -- Python 3.5.2, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: <...>/conda-build, inifile: setup.cfg
plugins: capturelog-0.7, cov-2.3.1
collected 11 items
tests/test_api_skeleton.py F
generated xml file: <...>/conda-build/junit.xml
=================================== FAILURES ===================================
__________________________ test_repo[-pypi-pip-8.1.2] __________________________
Traceback (most recent call last):
File "<...>/conda-build/tests/test_api_skeleton.py", line 21, in test_repo
api.skeletonize(package, repo, version=version, output_dir=testing_workdir, config=test_config)
File "<...>/conda-build/conda_build/api.py", line 193, in skeletonize
recursive=recursive, config=config, **kwargs)
File "<...>/conda-build/conda_build/skeletons/pypi.py", line 406, in skeletonize
noprompt, packages, config=config, setup_options=setup_options)
File "<...>/conda-build/conda_build/skeletons/pypi.py", line 664, in get_package_metadata
config=config)
File "<...>/conda-build/conda_build/skeletons/pypi.py", line 924, in get_pkginfo
run_setuppy(src_dir, tempdir, python_version, config=config, setup_options=setup_options)
File "<...>/conda-build/conda_build/skeletons/pypi.py", line 983, in run_setuppy
apply_patch(join(stdlib_dir, 'distutils'), patch, config=config)
File "<...>/conda-build/conda_build/source.py", line 483, in apply_patch
check_call_env([patch] + patch_args, cwd=src_dir)
File "<...>/conda-build/conda_build/utils.py", line 552, in check_call_env
return _func_defaulting_env_to_os_environ(subprocess.check_call, *popenargs, **kwargs)
File "<...>/conda-build/conda_build/utils.py", line 548, in _func_defaulting_env_to_os_environ
return func(_args, **kwargs)
File "<...>/lib/python3.5/subprocess.py", line 581, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/usr/bin/patch', '-p0', '-i', '/tmp/tmp3rh8k2j4conda_skeleton_pip-8.1.2.tar.gz/pypi-distutils.patch']' returned non-zero exit status 1
----------------------------- Captured stdout call -----------------------------
Using url https://pypi.python.org/packages/e7/a8/7556133689add8d1a54c0b14aeff0acb03c64707ce100ecd53934da1aa13/pip-8.1.2.tar.gz (1.1 MB) for pip.
Downloading pip
Unpacking pip...
done
working in /tmp/tmp3rh8k2j4conda_skeleton_pip-8.1.2.tar.gz
updating index in: /tmp/pytest-of-<...>/pytest-7/test_repo__pypi_pip_8_1_2_0/linux-64
updating index in: /tmp/pytest-of-<...>/pytest-7/test_repo__pypi_pip_8_1_2_0/noarch
The following NEW packages will be INSTALLED:
openssl: 1.0.2j-0 (soft-link)
pip: 8.1.2-py35_0 (soft-link)
python: 3.5.2-0 (soft-link)
pyyaml: 3.12-py35_0 (soft-link)
readline: 6.2-2 (soft-link)
setuptools: 27.2.0-py35_0 (soft-link)
sqlite: 3.13.0-0 (soft-link)
tk: 8.5.18-0 (soft-link)
wheel: 0.29.0-py35_0 (soft-link)
xz: 5.2.2-0 (soft-link)
yaml: 0.1.6-0 (soft-link)
zlib: 1.2.8-3 (soft-link)
Applying patch: '/tmp/tmp3rh8k2j4conda_skeleton_pip-8.1.2.tar.gz/pypi-distutils.patch' in /tmp/pytest-of-<...>/pytest-7/test_repo__pypi_pip_8_1_2_0/skeleton_1478167437529/_b_env_placehold_<...>/lib/python3.5/distutils
File core.py is not a regular file -- refusing to patch
1 out of 1 hunk ignored -- saving rejects to file core.py.rej
```
patching fails because core.py is a symlink.
A temporary fix would be adding --follow-symlinks to the patch command, but this is ugly.
**source.py l.480 in apply_patch**
```python
patch_args = ['-p%d' % patch_strip_level, '-i', path, '--follow-symlinks']
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conda_build/conda_interface.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 from functools import partial
5 from pkg_resources import parse_version
6
7 import conda
8 from conda import compat, plan # NOQA
9 from conda.api import get_index # NOQA
10 from conda.cli.common import (Completer, InstalledPackages, add_parser_channels, add_parser_prefix, # NOQA
11 specs_from_args, spec_from_line, specs_from_url) # NOQA
12 from conda.cli.conda_argparse import ArgumentParser # NOQA
13 from conda.compat import (PY3, StringIO, configparser, input, iteritems, lchmod, string_types, # NOQA
14 text_type, TemporaryDirectory) # NOQA
15 from conda.connection import CondaSession # NOQA
16 from conda.fetch import TmpDownload, download, fetch_index, handle_proxy_407 # NOQA
17 from conda.install import (delete_trash, is_linked, linked, linked_data, prefix_placeholder, # NOQA
18 rm_rf, symlink_conda, rm_fetched, package_cache) # NOQA
19 from conda.lock import Locked # NOQA
20 from conda.misc import untracked, walk_prefix # NOQA
21 from conda.resolve import MatchSpec, NoPackagesFound, Resolve, Unsatisfiable, normalized_version # NOQA
22 from conda.signature import KEYS, KEYS_DIR, hash_file, verify # NOQA
23 from conda.utils import human_bytes, hashsum_file, md5_file, memoized, unix_path_to_win, win_path_to_unix, url_path # NOQA
24 import conda.config as cc # NOQA
25 from conda.config import rc_path # NOQA
26 from conda.version import VersionOrder # NOQA
27
28 if parse_version(conda.__version__) >= parse_version("4.2"):
29 # conda 4.2.x
30 import conda.base.context
31 import conda.exceptions
32 from conda.base.context import get_prefix as context_get_prefix, non_x86_linux_machines # NOQA
33
34 from conda.base.constants import DEFAULT_CHANNELS # NOQA
35 get_prefix = partial(context_get_prefix, conda.base.context.context)
36 get_default_urls = lambda: DEFAULT_CHANNELS
37
38 arch_name = conda.base.context.context.arch_name
39 binstar_upload = conda.base.context.context.binstar_upload
40 bits = conda.base.context.context.bits
41 default_prefix = conda.base.context.context.default_prefix
42 default_python = conda.base.context.context.default_python
43 envs_dirs = conda.base.context.context.envs_dirs
44 pkgs_dirs = conda.base.context.context.pkgs_dirs
45 platform = conda.base.context.context.platform
46 root_dir = conda.base.context.context.root_dir
47 root_writable = conda.base.context.context.root_writable
48 subdir = conda.base.context.context.subdir
49 from conda.models.channel import get_conda_build_local_url
50 get_rc_urls = lambda: list(conda.base.context.context.channels)
51 get_local_urls = lambda: list(get_conda_build_local_url()) or []
52 load_condarc = lambda fn: conda.base.context.reset_context([fn])
53 PaddingError = conda.exceptions.PaddingError
54 LinkError = conda.exceptions.LinkError
55 NoPackagesFoundError = conda.exceptions.NoPackagesFoundError
56 CondaValueError = conda.exceptions.CondaValueError
57
58 else:
59 from conda.config import get_default_urls, non_x86_linux_machines, load_condarc # NOQA
60 from conda.cli.common import get_prefix # NOQA
61
62 arch_name = cc.arch_name
63 binstar_upload = cc.binstar_upload
64 bits = cc.bits
65 default_prefix = cc.default_prefix
66 default_python = cc.default_python
67 envs_dirs = cc.envs_dirs
68 pkgs_dirs = cc.pkgs_dirs
69 platform = cc.platform
70 root_dir = cc.root_dir
71 root_writable = cc.root_writable
72 subdir = cc.subdir
73
74 get_rc_urls = cc.get_rc_urls
75 get_local_urls = cc.get_local_urls
76
77 class PaddingError(Exception):
78 pass
79
80 class LinkError(Exception):
81 pass
82
83 class NoPackagesFoundError(Exception):
84 pass
85
86 class CondaValueError(Exception):
87 pass
88
89
90 class SignatureError(Exception):
91 pass
92
93
94 def which_package(path):
95 """
96 given the path (of a (presumably) conda installed file) iterate over
97 the conda packages the file came from. Usually the iteration yields
98 only one package.
99 """
100 from os.path import abspath, join
101 path = abspath(path)
102 prefix = which_prefix(path)
103 if prefix is None:
104 raise RuntimeError("could not determine conda prefix from: %s" % path)
105 for dist in linked(prefix):
106 meta = is_linked(prefix, dist)
107 if any(abspath(join(prefix, f)) == path for f in meta['files']):
108 yield dist
109
110
111 def which_prefix(path):
112 """
113 given the path (to a (presumably) conda installed file) return the
114 environment prefix in which the file in located
115 """
116 from os.path import abspath, join, isdir, dirname
117 prefix = abspath(path)
118 while True:
119 if isdir(join(prefix, 'conda-meta')):
120 # we found the it, so let's return it
121 return prefix
122 if prefix == dirname(prefix):
123 # we cannot chop off any more directories, so we didn't find it
124 return None
125 prefix = dirname(prefix)
126
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/conda_build/conda_interface.py b/conda_build/conda_interface.py
--- a/conda_build/conda_interface.py
+++ b/conda_build/conda_interface.py
@@ -55,6 +55,9 @@
NoPackagesFoundError = conda.exceptions.NoPackagesFoundError
CondaValueError = conda.exceptions.CondaValueError
+ # disallow softlinks. This avoids a lot of dumb issues, at the potential cost of disk space.
+ conda.base.context.context.allow_softlinks = False
+
else:
from conda.config import get_default_urls, non_x86_linux_machines, load_condarc # NOQA
from conda.cli.common import get_prefix # NOQA
@@ -74,6 +77,8 @@
get_rc_urls = cc.get_rc_urls
get_local_urls = cc.get_local_urls
+ cc.allow_softlinks = False
+
class PaddingError(Exception):
pass
|
{"golden_diff": "diff --git a/conda_build/conda_interface.py b/conda_build/conda_interface.py\n--- a/conda_build/conda_interface.py\n+++ b/conda_build/conda_interface.py\n@@ -55,6 +55,9 @@\n NoPackagesFoundError = conda.exceptions.NoPackagesFoundError\n CondaValueError = conda.exceptions.CondaValueError\n \n+ # disallow softlinks. This avoids a lot of dumb issues, at the potential cost of disk space.\n+ conda.base.context.context.allow_softlinks = False\n+\n else:\n from conda.config import get_default_urls, non_x86_linux_machines, load_condarc # NOQA\n from conda.cli.common import get_prefix # NOQA\n@@ -74,6 +77,8 @@\n get_rc_urls = cc.get_rc_urls\n get_local_urls = cc.get_local_urls\n \n+ cc.allow_softlinks = False\n+\n class PaddingError(Exception):\n pass\n", "issue": "patching issue with test_api_skeleton.py\nHi, \r\n\r\nI am having issued with tests/test_api_skeleton.py.\r\n\r\nOn the first failure:\r\n\r\n```bash\r\n============================= test session starts ==============================\r\nplatform linux -- Python 3.5.2, pytest-2.9.2, py-1.4.31, pluggy-0.3.1\r\nrootdir: <...>/conda-build, inifile: setup.cfg\r\nplugins: capturelog-0.7, cov-2.3.1\r\ncollected 11 items\r\n\r\ntests/test_api_skeleton.py F\r\n\r\n generated xml file: <...>/conda-build/junit.xml \r\n=================================== FAILURES ===================================\r\n__________________________ test_repo[-pypi-pip-8.1.2] __________________________\r\nTraceback (most recent call last):\r\n File \"<...>/conda-build/tests/test_api_skeleton.py\", line 21, in test_repo\r\n api.skeletonize(package, repo, version=version, output_dir=testing_workdir, config=test_config)\r\n File \"<...>/conda-build/conda_build/api.py\", line 193, in skeletonize\r\n recursive=recursive, config=config, **kwargs)\r\n File \"<...>/conda-build/conda_build/skeletons/pypi.py\", line 406, in skeletonize\r\n noprompt, packages, config=config, setup_options=setup_options)\r\n File \"<...>/conda-build/conda_build/skeletons/pypi.py\", line 664, in get_package_metadata\r\n config=config)\r\n File \"<...>/conda-build/conda_build/skeletons/pypi.py\", line 924, in get_pkginfo\r\n run_setuppy(src_dir, tempdir, python_version, config=config, setup_options=setup_options)\r\n File \"<...>/conda-build/conda_build/skeletons/pypi.py\", line 983, in run_setuppy\r\n apply_patch(join(stdlib_dir, 'distutils'), patch, config=config)\r\n File \"<...>/conda-build/conda_build/source.py\", line 483, in apply_patch\r\n check_call_env([patch] + patch_args, cwd=src_dir)\r\n File \"<...>/conda-build/conda_build/utils.py\", line 552, in check_call_env\r\n return _func_defaulting_env_to_os_environ(subprocess.check_call, *popenargs, **kwargs)\r\n File \"<...>/conda-build/conda_build/utils.py\", line 548, in _func_defaulting_env_to_os_environ\r\n return func(_args, **kwargs)\r\n File \"<...>/lib/python3.5/subprocess.py\", line 581, in check_call\r\n raise CalledProcessError(retcode, cmd)\r\nsubprocess.CalledProcessError: Command '['/usr/bin/patch', '-p0', '-i', '/tmp/tmp3rh8k2j4conda_skeleton_pip-8.1.2.tar.gz/pypi-distutils.patch']' returned non-zero exit status 1\r\n----------------------------- Captured stdout call -----------------------------\r\nUsing url https://pypi.python.org/packages/e7/a8/7556133689add8d1a54c0b14aeff0acb03c64707ce100ecd53934da1aa13/pip-8.1.2.tar.gz (1.1 MB) for pip.\r\nDownloading pip\r\nUnpacking pip...\r\ndone\r\nworking in /tmp/tmp3rh8k2j4conda_skeleton_pip-8.1.2.tar.gz\r\nupdating index in: 
/tmp/pytest-of-<...>/pytest-7/test_repo__pypi_pip_8_1_2_0/linux-64\r\nupdating index in: /tmp/pytest-of-<...>/pytest-7/test_repo__pypi_pip_8_1_2_0/noarch\r\n\r\nThe following NEW packages will be INSTALLED:\r\n\r\n openssl: 1.0.2j-0 (soft-link)\r\n pip: 8.1.2-py35_0 (soft-link)\r\n python: 3.5.2-0 (soft-link)\r\n pyyaml: 3.12-py35_0 (soft-link)\r\n readline: 6.2-2 (soft-link)\r\n setuptools: 27.2.0-py35_0 (soft-link)\r\n sqlite: 3.13.0-0 (soft-link)\r\n tk: 8.5.18-0 (soft-link)\r\n wheel: 0.29.0-py35_0 (soft-link)\r\n xz: 5.2.2-0 (soft-link)\r\n yaml: 0.1.6-0 (soft-link)\r\n zlib: 1.2.8-3 (soft-link)\r\n\r\nApplying patch: '/tmp/tmp3rh8k2j4conda_skeleton_pip-8.1.2.tar.gz/pypi-distutils.patch' in /tmp/pytest-of-<...>/pytest-7/test_repo__pypi_pip_8_1_2_0/skeleton_1478167437529/_b_env_placehold_<...>/lib/python3.5/distutils\r\nFile core.py is not a regular file -- refusing to patch\r\n1 out of 1 hunk ignored -- saving rejects to file core.py.rej\r\n```\r\n\r\npatching fails because core.py is a symlink ...\r\n\r\nA temporary fix would be adding --follow-symlinks in the patch command, but this is ugly\r\n**source.py l.480 in apply_patch**\r\n```python\r\npatch_args = ['-p%d' % patch_strip_level, '-i', path, '--follow-symlinks']\r\n``` \n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom functools import partial\nfrom pkg_resources import parse_version\n\nimport conda\nfrom conda import compat, plan # NOQA\nfrom conda.api import get_index # NOQA\nfrom conda.cli.common import (Completer, InstalledPackages, add_parser_channels, add_parser_prefix, # NOQA\n specs_from_args, spec_from_line, specs_from_url) # NOQA\nfrom conda.cli.conda_argparse import ArgumentParser # NOQA\nfrom conda.compat import (PY3, StringIO, configparser, input, iteritems, lchmod, string_types, # NOQA\n text_type, TemporaryDirectory) # NOQA\nfrom conda.connection import CondaSession # NOQA\nfrom conda.fetch import TmpDownload, download, fetch_index, handle_proxy_407 # NOQA\nfrom conda.install import (delete_trash, is_linked, linked, linked_data, prefix_placeholder, # NOQA\n rm_rf, symlink_conda, rm_fetched, package_cache) # NOQA\nfrom conda.lock import Locked # NOQA\nfrom conda.misc import untracked, walk_prefix # NOQA\nfrom conda.resolve import MatchSpec, NoPackagesFound, Resolve, Unsatisfiable, normalized_version # NOQA\nfrom conda.signature import KEYS, KEYS_DIR, hash_file, verify # NOQA\nfrom conda.utils import human_bytes, hashsum_file, md5_file, memoized, unix_path_to_win, win_path_to_unix, url_path # NOQA\nimport conda.config as cc # NOQA\nfrom conda.config import rc_path # NOQA\nfrom conda.version import VersionOrder # NOQA\n\nif parse_version(conda.__version__) >= parse_version(\"4.2\"):\n # conda 4.2.x\n import conda.base.context\n import conda.exceptions\n from conda.base.context import get_prefix as context_get_prefix, non_x86_linux_machines # NOQA\n\n from conda.base.constants import DEFAULT_CHANNELS # NOQA\n get_prefix = partial(context_get_prefix, conda.base.context.context)\n get_default_urls = lambda: DEFAULT_CHANNELS\n\n arch_name = conda.base.context.context.arch_name\n binstar_upload = conda.base.context.context.binstar_upload\n bits = conda.base.context.context.bits\n default_prefix = conda.base.context.context.default_prefix\n default_python = conda.base.context.context.default_python\n envs_dirs = conda.base.context.context.envs_dirs\n pkgs_dirs = conda.base.context.context.pkgs_dirs\n platform = 
conda.base.context.context.platform\n root_dir = conda.base.context.context.root_dir\n root_writable = conda.base.context.context.root_writable\n subdir = conda.base.context.context.subdir\n from conda.models.channel import get_conda_build_local_url\n get_rc_urls = lambda: list(conda.base.context.context.channels)\n get_local_urls = lambda: list(get_conda_build_local_url()) or []\n load_condarc = lambda fn: conda.base.context.reset_context([fn])\n PaddingError = conda.exceptions.PaddingError\n LinkError = conda.exceptions.LinkError\n NoPackagesFoundError = conda.exceptions.NoPackagesFoundError\n CondaValueError = conda.exceptions.CondaValueError\n\nelse:\n from conda.config import get_default_urls, non_x86_linux_machines, load_condarc # NOQA\n from conda.cli.common import get_prefix # NOQA\n\n arch_name = cc.arch_name\n binstar_upload = cc.binstar_upload\n bits = cc.bits\n default_prefix = cc.default_prefix\n default_python = cc.default_python\n envs_dirs = cc.envs_dirs\n pkgs_dirs = cc.pkgs_dirs\n platform = cc.platform\n root_dir = cc.root_dir\n root_writable = cc.root_writable\n subdir = cc.subdir\n\n get_rc_urls = cc.get_rc_urls\n get_local_urls = cc.get_local_urls\n\n class PaddingError(Exception):\n pass\n\n class LinkError(Exception):\n pass\n\n class NoPackagesFoundError(Exception):\n pass\n\n class CondaValueError(Exception):\n pass\n\n\nclass SignatureError(Exception):\n pass\n\n\ndef which_package(path):\n \"\"\"\n given the path (of a (presumably) conda installed file) iterate over\n the conda packages the file came from. Usually the iteration yields\n only one package.\n \"\"\"\n from os.path import abspath, join\n path = abspath(path)\n prefix = which_prefix(path)\n if prefix is None:\n raise RuntimeError(\"could not determine conda prefix from: %s\" % path)\n for dist in linked(prefix):\n meta = is_linked(prefix, dist)\n if any(abspath(join(prefix, f)) == path for f in meta['files']):\n yield dist\n\n\ndef which_prefix(path):\n \"\"\"\n given the path (to a (presumably) conda installed file) return the\n environment prefix in which the file in located\n \"\"\"\n from os.path import abspath, join, isdir, dirname\n prefix = abspath(path)\n while True:\n if isdir(join(prefix, 'conda-meta')):\n # we found the it, so let's return it\n return prefix\n if prefix == dirname(prefix):\n # we cannot chop off any more directories, so we didn't find it\n return None\n prefix = dirname(prefix)\n", "path": "conda_build/conda_interface.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom functools import partial\nfrom pkg_resources import parse_version\n\nimport conda\nfrom conda import compat, plan # NOQA\nfrom conda.api import get_index # NOQA\nfrom conda.cli.common import (Completer, InstalledPackages, add_parser_channels, add_parser_prefix, # NOQA\n specs_from_args, spec_from_line, specs_from_url) # NOQA\nfrom conda.cli.conda_argparse import ArgumentParser # NOQA\nfrom conda.compat import (PY3, StringIO, configparser, input, iteritems, lchmod, string_types, # NOQA\n text_type, TemporaryDirectory) # NOQA\nfrom conda.connection import CondaSession # NOQA\nfrom conda.fetch import TmpDownload, download, fetch_index, handle_proxy_407 # NOQA\nfrom conda.install import (delete_trash, is_linked, linked, linked_data, prefix_placeholder, # NOQA\n rm_rf, symlink_conda, rm_fetched, package_cache) # NOQA\nfrom conda.lock import Locked # NOQA\nfrom conda.misc import untracked, walk_prefix # NOQA\nfrom 
conda.resolve import MatchSpec, NoPackagesFound, Resolve, Unsatisfiable, normalized_version # NOQA\nfrom conda.signature import KEYS, KEYS_DIR, hash_file, verify # NOQA\nfrom conda.utils import human_bytes, hashsum_file, md5_file, memoized, unix_path_to_win, win_path_to_unix, url_path # NOQA\nimport conda.config as cc # NOQA\nfrom conda.config import rc_path # NOQA\nfrom conda.version import VersionOrder # NOQA\n\nif parse_version(conda.__version__) >= parse_version(\"4.2\"):\n # conda 4.2.x\n import conda.base.context\n import conda.exceptions\n from conda.base.context import get_prefix as context_get_prefix, non_x86_linux_machines # NOQA\n\n from conda.base.constants import DEFAULT_CHANNELS # NOQA\n get_prefix = partial(context_get_prefix, conda.base.context.context)\n get_default_urls = lambda: DEFAULT_CHANNELS\n\n arch_name = conda.base.context.context.arch_name\n binstar_upload = conda.base.context.context.binstar_upload\n bits = conda.base.context.context.bits\n default_prefix = conda.base.context.context.default_prefix\n default_python = conda.base.context.context.default_python\n envs_dirs = conda.base.context.context.envs_dirs\n pkgs_dirs = conda.base.context.context.pkgs_dirs\n platform = conda.base.context.context.platform\n root_dir = conda.base.context.context.root_dir\n root_writable = conda.base.context.context.root_writable\n subdir = conda.base.context.context.subdir\n from conda.models.channel import get_conda_build_local_url\n get_rc_urls = lambda: list(conda.base.context.context.channels)\n get_local_urls = lambda: list(get_conda_build_local_url()) or []\n load_condarc = lambda fn: conda.base.context.reset_context([fn])\n PaddingError = conda.exceptions.PaddingError\n LinkError = conda.exceptions.LinkError\n NoPackagesFoundError = conda.exceptions.NoPackagesFoundError\n CondaValueError = conda.exceptions.CondaValueError\n\n # disallow softlinks. This avoids a lot of dumb issues, at the potential cost of disk space.\n conda.base.context.context.allow_softlinks = False\n\nelse:\n from conda.config import get_default_urls, non_x86_linux_machines, load_condarc # NOQA\n from conda.cli.common import get_prefix # NOQA\n\n arch_name = cc.arch_name\n binstar_upload = cc.binstar_upload\n bits = cc.bits\n default_prefix = cc.default_prefix\n default_python = cc.default_python\n envs_dirs = cc.envs_dirs\n pkgs_dirs = cc.pkgs_dirs\n platform = cc.platform\n root_dir = cc.root_dir\n root_writable = cc.root_writable\n subdir = cc.subdir\n\n get_rc_urls = cc.get_rc_urls\n get_local_urls = cc.get_local_urls\n\n cc.allow_softlinks = False\n\n class PaddingError(Exception):\n pass\n\n class LinkError(Exception):\n pass\n\n class NoPackagesFoundError(Exception):\n pass\n\n class CondaValueError(Exception):\n pass\n\n\nclass SignatureError(Exception):\n pass\n\n\ndef which_package(path):\n \"\"\"\n given the path (of a (presumably) conda installed file) iterate over\n the conda packages the file came from. 
Usually the iteration yields\n only one package.\n \"\"\"\n from os.path import abspath, join\n path = abspath(path)\n prefix = which_prefix(path)\n if prefix is None:\n raise RuntimeError(\"could not determine conda prefix from: %s\" % path)\n for dist in linked(prefix):\n meta = is_linked(prefix, dist)\n if any(abspath(join(prefix, f)) == path for f in meta['files']):\n yield dist\n\n\ndef which_prefix(path):\n \"\"\"\n given the path (to a (presumably) conda installed file) return the\n environment prefix in which the file in located\n \"\"\"\n from os.path import abspath, join, isdir, dirname\n prefix = abspath(path)\n while True:\n if isdir(join(prefix, 'conda-meta')):\n # we found the it, so let's return it\n return prefix\n if prefix == dirname(prefix):\n # we cannot chop off any more directories, so we didn't find it\n return None\n prefix = dirname(prefix)\n", "path": "conda_build/conda_interface.py"}]}
| 3,027 | 212 |
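The conda-build row above fixes the patch-on-symlink failure by turning off soft-linked installs. As a hedged illustration of the underlying failure mode, the sketch below shows a generic workaround — materialising symlinked files into real copies before patching. It is not the fix the project adopted; the function name and behaviour are assumptions for illustration only.

```python
# GNU patch refuses to modify files that are symlinks ("not a regular file"),
# and conda's soft-linked installs produce exactly such trees. One generic
# workaround is to replace the links with real copies before patching.
import os
import shutil


def materialize_symlinks(root: str) -> int:
    """Replace every symlinked regular file under *root* with a real copy.

    Returns the number of files that were copied in place.
    """
    replaced = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.islink(path):
                target = os.path.realpath(path)
                if os.path.isfile(target):
                    os.unlink(path)             # drop the link itself
                    shutil.copy2(target, path)  # put a regular file in its place
                    replaced += 1
    return replaced
```

Running such a helper on the distutils directory before invoking patch would avoid the "not a regular file" rejection, trading disk space for patchability — the same trade-off the `allow_softlinks = False` change in the diff above makes globally at install time.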
gh_patches_debug_58053
|
rasdani/github-patches
|
git_diff
|
alltheplaces__alltheplaces-3312
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Spider upsstore is broken
During the global build at 2021-10-13-14-42-23, spider **upsstore** failed with **5176 features** and **5 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-10-13-14-42-23/logs/upsstore.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-10-13-14-42-23/output/upsstore.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-10-13-14-42-23/output/upsstore.geojson))
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `locations/spiders/upsstore.py`
Content:
```
1 import scrapy
2 import json
3 import re
4 from locations.items import GeojsonPointItem
5 from locations.hours import OpeningHours
6
7 DAY_MAPPING = {
8 "MONDAY": "Mo",
9 "TUESDAY": "Tu",
10 "WEDNESDAY": "We",
11 "THURSDAY": "Th",
12 "FRIDAY": "Fr",
13 "SATURDAY": "Sa",
14 "SUNDAY": "Su"
15 }
16
17
18 class UpsStoreSpider(scrapy.Spider):
19 name = "upsstore"
20 item_attributes = { 'brand': "UPS Store" }
21 allowed_domains = ["theupsstore.com"]
22 download_delay = 0.1
23 start_urls = (
24 'https://locations.theupsstore.com/',
25 )
26
27 def parse_hours(self, hours):
28 """
29 :param hours:
30 :return:
31 """
32 hours = json.loads(hours)
33 o = OpeningHours()
34
35 for day in hours["hours"]["days"]:
36 if not day["isClosed"]:
37 interval = day["intervals"][0]
38
39 o.add_range(DAY_MAPPING[day["day"]],
40 open_time=str(interval["start"]),
41 close_time=str(interval["end"]),
42 time_format="%H%M")
43 return o.as_opening_hours()
44
45 def parse_store(self, response):
46 ref = response.xpath('//input[@id="store_id"]/@value').extract_first()
47 if not ref:
48 ref = re.search(r'store(\d+)@theupsstore.com',
49 response.xpath('//a[@itemprop="email"]/text()').extract_first()).groups()
50
51 properties = {
52 'name': response.xpath('//span[@class="LocationName-geo"]/text()').extract_first(),
53 'phone': response.xpath('//span[@itemprop="telephone"]/text()').extract_first(),
54 'addr_full': response.xpath('//meta[@itemprop="streetAddress"]/@content').extract_first(),
55 'city': response.xpath('//meta[@itemprop="addressLocality"]/@content').extract_first(),
56 'state': response.xpath('//abbr[@itemprop="addressRegion"]/text()').extract_first(),
57 'country': response.xpath('//abbr[@itemprop="addressCountry"]/text()').extract_first(),
58 'postcode': response.xpath('//span[@itemprop="postalCode"]/text()').extract_first(),
59 'ref': ref,
60 'website': response.url,
61 'lat': float(response.xpath('//meta[@itemprop="latitude"]/@content').extract_first()),
62 'lon': float(response.xpath('//meta[@itemprop="longitude"]/@content').extract_first()),
63 }
64
65 hours = response.xpath('//script[@id="location_info_hours"]/text()').extract_first()
66 try:
67 hours = self.parse_hours(hours)
68 if hours:
69 properties['opening_hours'] = hours
70 except:
71 pass
72
73 yield GeojsonPointItem(**properties)
74
75 def parse(self, response):
76 urls = response.xpath('//a[@class="Directory-listLink"]/@href').extract()
77
78 if urls:
79 for url in urls:
80 if len(url.split('/')) == 3:
81 callback = self.parse_store
82 else:
83 callback = self.parse
84
85 yield scrapy.Request(
86 response.urljoin(url),
87 callback=callback,
88 )
89
90 else:
91 urls = response.xpath('//a[@class="Link"]/@href').extract()
92 for url in urls:
93 yield scrapy.Request(
94 response.urljoin(url),
95 callback=self.parse_store,
96 )
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/locations/spiders/upsstore.py b/locations/spiders/upsstore.py
--- a/locations/spiders/upsstore.py
+++ b/locations/spiders/upsstore.py
@@ -43,6 +43,9 @@
return o.as_opening_hours()
def parse_store(self, response):
+ if "Permanently Closed" in response.text:
+ return
+
ref = response.xpath('//input[@id="store_id"]/@value').extract_first()
if not ref:
ref = re.search(r'store(\d+)@theupsstore.com',
|
{"golden_diff": "diff --git a/locations/spiders/upsstore.py b/locations/spiders/upsstore.py\n--- a/locations/spiders/upsstore.py\n+++ b/locations/spiders/upsstore.py\n@@ -43,6 +43,9 @@\n return o.as_opening_hours()\n \n def parse_store(self, response):\n+ if \"Permanently Closed\" in response.text:\n+ return\n+\n ref = response.xpath('//input[@id=\"store_id\"]/@value').extract_first()\n if not ref:\n ref = re.search(r'store(\\d+)@theupsstore.com',\n", "issue": "Spider upsstore is broken\nDuring the global build at 2021-10-13-14-42-23, spider **upsstore** failed with **5176 features** and **5 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-10-13-14-42-23/logs/upsstore.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-10-13-14-42-23/output/upsstore.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-10-13-14-42-23/output/upsstore.geojson))\n", "before_files": [{"content": "import scrapy\nimport json\nimport re\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\nDAY_MAPPING = {\n \"MONDAY\": \"Mo\",\n \"TUESDAY\": \"Tu\",\n \"WEDNESDAY\": \"We\",\n \"THURSDAY\": \"Th\",\n \"FRIDAY\": \"Fr\",\n \"SATURDAY\": \"Sa\",\n \"SUNDAY\": \"Su\"\n}\n\n\nclass UpsStoreSpider(scrapy.Spider):\n name = \"upsstore\"\n item_attributes = { 'brand': \"UPS Store\" }\n allowed_domains = [\"theupsstore.com\"]\n download_delay = 0.1\n start_urls = (\n 'https://locations.theupsstore.com/',\n )\n\n def parse_hours(self, hours):\n \"\"\"\n :param hours:\n :return:\n \"\"\"\n hours = json.loads(hours)\n o = OpeningHours()\n\n for day in hours[\"hours\"][\"days\"]:\n if not day[\"isClosed\"]:\n interval = day[\"intervals\"][0]\n\n o.add_range(DAY_MAPPING[day[\"day\"]],\n open_time=str(interval[\"start\"]),\n close_time=str(interval[\"end\"]),\n time_format=\"%H%M\")\n return o.as_opening_hours()\n\n def parse_store(self, response):\n ref = response.xpath('//input[@id=\"store_id\"]/@value').extract_first()\n if not ref:\n ref = re.search(r'store(\\d+)@theupsstore.com',\n response.xpath('//a[@itemprop=\"email\"]/text()').extract_first()).groups()\n\n properties = {\n 'name': response.xpath('//span[@class=\"LocationName-geo\"]/text()').extract_first(),\n 'phone': response.xpath('//span[@itemprop=\"telephone\"]/text()').extract_first(),\n 'addr_full': response.xpath('//meta[@itemprop=\"streetAddress\"]/@content').extract_first(),\n 'city': response.xpath('//meta[@itemprop=\"addressLocality\"]/@content').extract_first(),\n 'state': response.xpath('//abbr[@itemprop=\"addressRegion\"]/text()').extract_first(),\n 'country': response.xpath('//abbr[@itemprop=\"addressCountry\"]/text()').extract_first(),\n 'postcode': response.xpath('//span[@itemprop=\"postalCode\"]/text()').extract_first(),\n 'ref': ref,\n 'website': response.url,\n 'lat': float(response.xpath('//meta[@itemprop=\"latitude\"]/@content').extract_first()),\n 'lon': float(response.xpath('//meta[@itemprop=\"longitude\"]/@content').extract_first()),\n }\n\n hours = response.xpath('//script[@id=\"location_info_hours\"]/text()').extract_first()\n try:\n hours = self.parse_hours(hours)\n if hours:\n properties['opening_hours'] = hours\n except:\n pass\n\n yield GeojsonPointItem(**properties)\n\n def parse(self, response):\n urls = response.xpath('//a[@class=\"Directory-listLink\"]/@href').extract()\n\n if urls:\n for url in urls:\n if len(url.split('/')) == 3:\n callback = self.parse_store\n else:\n callback = self.parse\n\n yield 
scrapy.Request(\n response.urljoin(url),\n callback=callback,\n )\n\n else:\n urls = response.xpath('//a[@class=\"Link\"]/@href').extract()\n for url in urls:\n yield scrapy.Request(\n response.urljoin(url),\n callback=self.parse_store,\n )", "path": "locations/spiders/upsstore.py"}], "after_files": [{"content": "import scrapy\nimport json\nimport re\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\nDAY_MAPPING = {\n \"MONDAY\": \"Mo\",\n \"TUESDAY\": \"Tu\",\n \"WEDNESDAY\": \"We\",\n \"THURSDAY\": \"Th\",\n \"FRIDAY\": \"Fr\",\n \"SATURDAY\": \"Sa\",\n \"SUNDAY\": \"Su\"\n}\n\n\nclass UpsStoreSpider(scrapy.Spider):\n name = \"upsstore\"\n item_attributes = { 'brand': \"UPS Store\" }\n allowed_domains = [\"theupsstore.com\"]\n download_delay = 0.1\n start_urls = (\n 'https://locations.theupsstore.com/',\n )\n\n def parse_hours(self, hours):\n \"\"\"\n :param hours:\n :return:\n \"\"\"\n hours = json.loads(hours)\n o = OpeningHours()\n\n for day in hours[\"hours\"][\"days\"]:\n if not day[\"isClosed\"]:\n interval = day[\"intervals\"][0]\n\n o.add_range(DAY_MAPPING[day[\"day\"]],\n open_time=str(interval[\"start\"]),\n close_time=str(interval[\"end\"]),\n time_format=\"%H%M\")\n return o.as_opening_hours()\n\n def parse_store(self, response):\n if \"Permanently Closed\" in response.text:\n return\n\n ref = response.xpath('//input[@id=\"store_id\"]/@value').extract_first()\n if not ref:\n ref = re.search(r'store(\\d+)@theupsstore.com',\n response.xpath('//a[@itemprop=\"email\"]/text()').extract_first()).groups()\n\n properties = {\n 'name': response.xpath('//span[@class=\"LocationName-geo\"]/text()').extract_first(),\n 'phone': response.xpath('//span[@itemprop=\"telephone\"]/text()').extract_first(),\n 'addr_full': response.xpath('//meta[@itemprop=\"streetAddress\"]/@content').extract_first(),\n 'city': response.xpath('//meta[@itemprop=\"addressLocality\"]/@content').extract_first(),\n 'state': response.xpath('//abbr[@itemprop=\"addressRegion\"]/text()').extract_first(),\n 'country': response.xpath('//abbr[@itemprop=\"addressCountry\"]/text()').extract_first(),\n 'postcode': response.xpath('//span[@itemprop=\"postalCode\"]/text()').extract_first(),\n 'ref': ref,\n 'website': response.url,\n 'lat': float(response.xpath('//meta[@itemprop=\"latitude\"]/@content').extract_first()),\n 'lon': float(response.xpath('//meta[@itemprop=\"longitude\"]/@content').extract_first()),\n }\n\n hours = response.xpath('//script[@id=\"location_info_hours\"]/text()').extract_first()\n try:\n hours = self.parse_hours(hours)\n if hours:\n properties['opening_hours'] = hours\n except:\n pass\n\n yield GeojsonPointItem(**properties)\n\n def parse(self, response):\n urls = response.xpath('//a[@class=\"Directory-listLink\"]/@href').extract()\n\n if urls:\n for url in urls:\n if len(url.split('/')) == 3:\n callback = self.parse_store\n else:\n callback = self.parse\n\n yield scrapy.Request(\n response.urljoin(url),\n callback=callback,\n )\n\n else:\n urls = response.xpath('//a[@class=\"Link\"]/@href').extract()\n for url in urls:\n yield scrapy.Request(\n response.urljoin(url),\n callback=self.parse_store,\n )", "path": "locations/spiders/upsstore.py"}]}
| 1,371 | 133 |
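The upsstore fix above is a two-line guard: bail out of `parse_store` when the page says the location is closed. Below is a small stand-alone sketch of that early-return pattern; the helper names are illustrative assumptions, and the real spider simply does the substring check inline on scrapy's `response.text`.

```python
# Minimal sketch of the guard added in the diff above: skip location pages that
# are marked as closed before trying to scrape structured data from them.

def is_permanently_closed(page_text: str) -> bool:
    """Return True when a store page announces that the location has closed."""
    return "Permanently Closed" in page_text


def parse_store_guarded(page_text: str):
    """Mimic the early-return pattern used in UpsStoreSpider.parse_store."""
    if is_permanently_closed(page_text):
        return None  # nothing to yield; closed pages lack the fields the spider expects
    # ... normal field extraction would continue here ...
    return {"ref": "example"}


if __name__ == "__main__":
    print(parse_store_guarded("<html>Permanently Closed</html>"))  # -> None
    print(parse_store_guarded("<html>Open 9-5</html>"))            # -> {'ref': 'example'}
```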
gh_patches_debug_40290
|
rasdani/github-patches
|
git_diff
|
open-mmlab__mmengine-764
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unexpected weight initialization
Thanks for your error report and we appreciate it a lot.
**Checklist**
1. I have searched related issues but cannot get the expected help.
2. I have read the [FAQ documentation](https://mmengine.readthedocs.io/en/latest/faq.html) but cannot get the expected help.
3. The bug has not been fixed in the latest version.
**Describe the bug**
When using `Pretrained` init_cfg to init a model, the outside init_cfg will **disable all submodules' init_weights**.
If the outside model's pretrained weight does not strictly match, e.g. different num_classes, the mismatched submodule's weight will be initialized **by PyTorch's default init weight logic** instead of the submodule's init_cfg.
This leads to some unacceptable results, e.g. a huge classification loss due to the failed initialization of the classification head when finetuning a model.
**Reproduction**
In mmdet, finetuning rtmdet_l_8xb32-300e_coco
```python
_base_ = './rtmdet_l_8xb32-300e_coco.py'
checkpoint = 'work_dir/rtmdet/rtmdet_l_8xb32-300e_coco/epoch_300.pth'
model = dict(bbox_head=dict(num_classes=1), init_cfg=dict(type='Pretrained', checkpoint=checkpoint))
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mmengine/model/base_module.py`
Content:
```
1 # Copyright (c) OpenMMLab. All rights reserved.
2 import copy
3 import logging
4 import warnings
5 from abc import ABCMeta
6 from collections import defaultdict
7 from logging import FileHandler
8 from typing import Iterable, Optional
9
10 import torch.nn as nn
11
12 from mmengine.dist import master_only
13 from mmengine.logging import MMLogger, print_log
14 from .weight_init import initialize, update_init_info
15
16
17 class BaseModule(nn.Module, metaclass=ABCMeta):
18 """Base module for all modules in openmmlab. ``BaseModule`` is a wrapper of
19 ``torch.nn.Module`` with additional functionality of parameter
20 initialization. Compared with ``torch.nn.Module``, ``BaseModule`` mainly
21 adds three attributes.
22
23 - ``init_cfg``: the config to control the initialization.
24 - ``init_weights``: The function of parameter initialization and recording
25 initialization information.
26 - ``_params_init_info``: Used to track the parameter initialization
27 information. This attribute only exists during executing the
28 ``init_weights``.
29 Args:
30 init_cfg (dict, optional): Initialization config dict.
31 """
32
33 def __init__(self, init_cfg=None):
34 """Initialize BaseModule, inherited from `torch.nn.Module`"""
35
36 # NOTE init_cfg can be defined in different levels, but init_cfg
37 # in low levels has a higher priority.
38
39 super().__init__()
40 # define default value of init_cfg instead of hard code
41 # in init_weights() function
42 self._is_init = False
43
44 self.init_cfg = copy.deepcopy(init_cfg)
45
46 # Backward compatibility in derived classes
47 # if pretrained is not None:
48 # warnings.warn('DeprecationWarning: pretrained is a deprecated \
49 # key, please consider using init_cfg')
50 # self.init_cfg = dict(type='Pretrained', checkpoint=pretrained)
51
52 @property
53 def is_init(self):
54 return self._is_init
55
56 def init_weights(self):
57 """Initialize the weights."""
58
59 is_top_level_module = False
60 # check if it is top-level module
61 if not hasattr(self, '_params_init_info'):
62 # The `_params_init_info` is used to record the initialization
63 # information of the parameters
64 # the key should be the obj:`nn.Parameter` of model and the value
65 # should be a dict containing
66 # - init_info (str): The string that describes the initialization.
67 # - tmp_mean_value (FloatTensor): The mean of the parameter,
68 # which indicates whether the parameter has been modified.
69 # this attribute would be deleted after all parameters
70 # is initialized.
71 self._params_init_info = defaultdict(dict)
72 is_top_level_module = True
73
74 # Initialize the `_params_init_info`,
75 # When detecting the `tmp_mean_value` of
76 # the corresponding parameter is changed, update related
77 # initialization information
78 for name, param in self.named_parameters():
79 self._params_init_info[param][
80 'init_info'] = f'The value is the same before and ' \
81 f'after calling `init_weights` ' \
82 f'of {self.__class__.__name__} '
83 self._params_init_info[param][
84 'tmp_mean_value'] = param.data.mean().cpu()
85
86 # pass `params_init_info` to all submodules
87 # All submodules share the same `params_init_info`,
88 # so it will be updated when parameters are
89 # modified at any level of the model.
90 for sub_module in self.modules():
91 sub_module._params_init_info = self._params_init_info
92
93 logger = MMLogger.get_current_instance()
94 logger_name = logger.instance_name
95
96 module_name = self.__class__.__name__
97 if not self._is_init:
98 if self.init_cfg:
99 print_log(
100 f'initialize {module_name} with init_cfg {self.init_cfg}',
101 logger=logger_name,
102 level=logging.DEBUG)
103 initialize(self, self.init_cfg)
104 if isinstance(self.init_cfg, dict):
105 # prevent the parameters of
106 # the pre-trained model
107 # from being overwritten by
108 # the `init_weights`
109 if self.init_cfg['type'] == 'Pretrained':
110 return
111
112 for m in self.children():
113 if hasattr(m, 'init_weights'):
114 m.init_weights()
115 # users may overload the `init_weights`
116 update_init_info(
117 m,
118 init_info=f'Initialized by '
119 f'user-defined `init_weights`'
120 f' in {m.__class__.__name__} ')
121
122 self._is_init = True
123 else:
124 warnings.warn(f'init_weights of {self.__class__.__name__} has '
125 f'been called more than once.')
126
127 if is_top_level_module:
128 # self._dump_init_info(logger_name)
129 self._dump_init_info()
130
131 for sub_module in self.modules():
132 del sub_module._params_init_info
133
134 @master_only
135 def _dump_init_info(self):
136 """Dump the initialization information to a file named
137 `initialization.log.json` in workdir.
138
139 Args:
140 logger_name (str): The name of logger.
141 """
142
143 logger = MMLogger.get_current_instance()
144 logger_name = logger.instance_name
145 with_file_handler = False
146 # dump the information to the logger file if there is a `FileHandler`
147 for handler in logger.handlers:
148 if isinstance(handler, FileHandler):
149 handler.stream.write(
150 'Name of parameter - Initialization information\n')
151 for name, param in self.named_parameters():
152 handler.stream.write(
153 f'\n{name} - {param.shape}: '
154 f"\n{self._params_init_info[param]['init_info']} \n")
155 handler.stream.flush()
156 with_file_handler = True
157 if not with_file_handler:
158 for name, param in self.named_parameters():
159 print_log(
160 f'\n{name} - {param.shape}: '
161 f"\n{self._params_init_info[param]['init_info']} \n ",
162 logger=logger_name)
163
164 def __repr__(self):
165 s = super().__repr__()
166 if self.init_cfg:
167 s += f'\ninit_cfg={self.init_cfg}'
168 return s
169
170
171 class Sequential(BaseModule, nn.Sequential):
172 """Sequential module in openmmlab.
173
174 Ensures that all modules in ``Sequential`` have a different initialization
175 strategy than the outer model
176
177 Args:
178 init_cfg (dict, optional): Initialization config dict.
179 """
180
181 def __init__(self, *args, init_cfg: Optional[dict] = None):
182 BaseModule.__init__(self, init_cfg)
183 nn.Sequential.__init__(self, *args)
184
185
186 class ModuleList(BaseModule, nn.ModuleList):
187 """ModuleList in openmmlab.
188
189 Ensures that all modules in ``ModuleList`` have a different initialization
190 strategy than the outer model
191
192 Args:
193 modules (iterable, optional): An iterable of modules to add.
194 init_cfg (dict, optional): Initialization config dict.
195 """
196
197 def __init__(self,
198 modules: Optional[Iterable] = None,
199 init_cfg: Optional[dict] = None):
200 BaseModule.__init__(self, init_cfg)
201 nn.ModuleList.__init__(self, modules)
202
203
204 class ModuleDict(BaseModule, nn.ModuleDict):
205 """ModuleDict in openmmlab.
206
207 Ensures that all modules in ``ModuleDict`` have a different initialization
208 strategy than the outer model
209
210 Args:
211 modules (dict, optional): A mapping (dictionary) of (string: module)
212 or an iterable of key-value pairs of type (string, module).
213 init_cfg (dict, optional): Initialization config dict.
214 """
215
216 def __init__(self,
217 modules: Optional[dict] = None,
218 init_cfg: Optional[dict] = None):
219 BaseModule.__init__(self, init_cfg)
220 nn.ModuleDict.__init__(self, modules)
221
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mmengine/model/base_module.py b/mmengine/model/base_module.py
--- a/mmengine/model/base_module.py
+++ b/mmengine/model/base_module.py
@@ -5,7 +5,7 @@
from abc import ABCMeta
from collections import defaultdict
from logging import FileHandler
-from typing import Iterable, Optional
+from typing import Iterable, List, Optional, Union
import torch.nn as nn
@@ -26,11 +26,17 @@
- ``_params_init_info``: Used to track the parameter initialization
information. This attribute only exists during executing the
``init_weights``.
+
+ Note:
+ :obj:`PretrainedInit` has a higher priority than any other
+ initializer. The loaded pretrained weights will overwrite
+ the previous initialized weights.
+
Args:
- init_cfg (dict, optional): Initialization config dict.
+ init_cfg (dict or List[dict], optional): Initialization config dict.
"""
- def __init__(self, init_cfg=None):
+ def __init__(self, init_cfg: Union[dict, List[dict], None] = None):
"""Initialize BaseModule, inherited from `torch.nn.Module`"""
# NOTE init_cfg can be defined in different levels, but init_cfg
@@ -100,14 +106,25 @@
f'initialize {module_name} with init_cfg {self.init_cfg}',
logger=logger_name,
level=logging.DEBUG)
- initialize(self, self.init_cfg)
+
+ init_cfgs = self.init_cfg
if isinstance(self.init_cfg, dict):
- # prevent the parameters of
- # the pre-trained model
- # from being overwritten by
- # the `init_weights`
- if self.init_cfg['type'] == 'Pretrained':
- return
+ init_cfgs = [self.init_cfg]
+
+ # PretrainedInit has higher priority than any other init_cfg.
+ # Therefore we initialize `pretrained_cfg` last to overwrite
+ # the previous initialized weights.
+ # See details in https://github.com/open-mmlab/mmengine/issues/691 # noqa E501
+ other_cfgs = []
+ pretrained_cfg = []
+ for init_cfg in init_cfgs:
+ assert isinstance(init_cfg, dict)
+ if init_cfg['type'] == 'Pretrained':
+ pretrained_cfg.append(init_cfg)
+ else:
+ other_cfgs.append(init_cfg)
+
+ initialize(self, other_cfgs)
for m in self.children():
if hasattr(m, 'init_weights'):
@@ -118,7 +135,8 @@
init_info=f'Initialized by '
f'user-defined `init_weights`'
f' in {m.__class__.__name__} ')
-
+ if self.init_cfg and pretrained_cfg:
+ initialize(self, pretrained_cfg)
self._is_init = True
else:
warnings.warn(f'init_weights of {self.__class__.__name__} has '
|
{"golden_diff": "diff --git a/mmengine/model/base_module.py b/mmengine/model/base_module.py\n--- a/mmengine/model/base_module.py\n+++ b/mmengine/model/base_module.py\n@@ -5,7 +5,7 @@\n from abc import ABCMeta\n from collections import defaultdict\n from logging import FileHandler\n-from typing import Iterable, Optional\n+from typing import Iterable, List, Optional, Union\n \n import torch.nn as nn\n \n@@ -26,11 +26,17 @@\n - ``_params_init_info``: Used to track the parameter initialization\n information. This attribute only exists during executing the\n ``init_weights``.\n+\n+ Note:\n+ :obj:`PretrainedInit` has a higher priority than any other\n+ initializer. The loaded pretrained weights will overwrite\n+ the previous initialized weights.\n+\n Args:\n- init_cfg (dict, optional): Initialization config dict.\n+ init_cfg (dict or List[dict], optional): Initialization config dict.\n \"\"\"\n \n- def __init__(self, init_cfg=None):\n+ def __init__(self, init_cfg: Union[dict, List[dict], None] = None):\n \"\"\"Initialize BaseModule, inherited from `torch.nn.Module`\"\"\"\n \n # NOTE init_cfg can be defined in different levels, but init_cfg\n@@ -100,14 +106,25 @@\n f'initialize {module_name} with init_cfg {self.init_cfg}',\n logger=logger_name,\n level=logging.DEBUG)\n- initialize(self, self.init_cfg)\n+\n+ init_cfgs = self.init_cfg\n if isinstance(self.init_cfg, dict):\n- # prevent the parameters of\n- # the pre-trained model\n- # from being overwritten by\n- # the `init_weights`\n- if self.init_cfg['type'] == 'Pretrained':\n- return\n+ init_cfgs = [self.init_cfg]\n+\n+ # PretrainedInit has higher priority than any other init_cfg.\n+ # Therefore we initialize `pretrained_cfg` last to overwrite\n+ # the previous initialized weights.\n+ # See details in https://github.com/open-mmlab/mmengine/issues/691 # noqa E501\n+ other_cfgs = []\n+ pretrained_cfg = []\n+ for init_cfg in init_cfgs:\n+ assert isinstance(init_cfg, dict)\n+ if init_cfg['type'] == 'Pretrained':\n+ pretrained_cfg.append(init_cfg)\n+ else:\n+ other_cfgs.append(init_cfg)\n+\n+ initialize(self, other_cfgs)\n \n for m in self.children():\n if hasattr(m, 'init_weights'):\n@@ -118,7 +135,8 @@\n init_info=f'Initialized by '\n f'user-defined `init_weights`'\n f' in {m.__class__.__name__} ')\n-\n+ if self.init_cfg and pretrained_cfg:\n+ initialize(self, pretrained_cfg)\n self._is_init = True\n else:\n warnings.warn(f'init_weights of {self.__class__.__name__} has '\n", "issue": "Unexpected weight initialization\nThanks for your error report and we appreciate it a lot.\r\n\r\n**Checklist**\r\n\r\n1. I have searched related issues but cannot get the expected help.\r\n2. I have read the [FAQ documentation](https://mmengine.readthedocs.io/en/latest/faq.html) but cannot get the expected help.\r\n3. The bug has not been fixed in the latest version.\r\n\r\n**Describe the bug**\r\n\r\nWhen using `Pretrained` init_cfg to init a model, the outside init_cfg will **disable all submodules' init_weights**.\r\n\r\nIf the outside model's pretrained weight does not strictly match, e.g. different num_classes, the mismatched submodule's weight will initialize **by pytorch's default init weight logic**, instead of the submodule's init_cfg. \r\n\r\nThis lead's to some unacceptable results, e.g. 
a huge classification loss due to the failed initialization of the classification head when finetuning a model.\r\n\r\n\r\n**Reproduction**\r\n\r\nIn mmdet, finetuning rtmdet_l_8xb32-300e_coco\r\n\r\n```python\r\n_base_ = './rtmdet_l_8xb32-300e_coco.py'\r\n\r\ncheckpoint = 'work_dir/rtmdet/rtmdet_l_8xb32-300e_coco/epoch_300.pth'\r\nmodel = dict(bbox_head=dict(num_classes=1), init_cfg=dict(type='Pretrained', checkpoint=checkpoint))\r\n```\r\n\r\n\n", "before_files": [{"content": "# Copyright (c) OpenMMLab. All rights reserved.\nimport copy\nimport logging\nimport warnings\nfrom abc import ABCMeta\nfrom collections import defaultdict\nfrom logging import FileHandler\nfrom typing import Iterable, Optional\n\nimport torch.nn as nn\n\nfrom mmengine.dist import master_only\nfrom mmengine.logging import MMLogger, print_log\nfrom .weight_init import initialize, update_init_info\n\n\nclass BaseModule(nn.Module, metaclass=ABCMeta):\n \"\"\"Base module for all modules in openmmlab. ``BaseModule`` is a wrapper of\n ``torch.nn.Module`` with additional functionality of parameter\n initialization. Compared with ``torch.nn.Module``, ``BaseModule`` mainly\n adds three attributes.\n\n - ``init_cfg``: the config to control the initialization.\n - ``init_weights``: The function of parameter initialization and recording\n initialization information.\n - ``_params_init_info``: Used to track the parameter initialization\n information. This attribute only exists during executing the\n ``init_weights``.\n Args:\n init_cfg (dict, optional): Initialization config dict.\n \"\"\"\n\n def __init__(self, init_cfg=None):\n \"\"\"Initialize BaseModule, inherited from `torch.nn.Module`\"\"\"\n\n # NOTE init_cfg can be defined in different levels, but init_cfg\n # in low levels has a higher priority.\n\n super().__init__()\n # define default value of init_cfg instead of hard code\n # in init_weights() function\n self._is_init = False\n\n self.init_cfg = copy.deepcopy(init_cfg)\n\n # Backward compatibility in derived classes\n # if pretrained is not None:\n # warnings.warn('DeprecationWarning: pretrained is a deprecated \\\n # key, please consider using init_cfg')\n # self.init_cfg = dict(type='Pretrained', checkpoint=pretrained)\n\n @property\n def is_init(self):\n return self._is_init\n\n def init_weights(self):\n \"\"\"Initialize the weights.\"\"\"\n\n is_top_level_module = False\n # check if it is top-level module\n if not hasattr(self, '_params_init_info'):\n # The `_params_init_info` is used to record the initialization\n # information of the parameters\n # the key should be the obj:`nn.Parameter` of model and the value\n # should be a dict containing\n # - init_info (str): The string that describes the initialization.\n # - tmp_mean_value (FloatTensor): The mean of the parameter,\n # which indicates whether the parameter has been modified.\n # this attribute would be deleted after all parameters\n # is initialized.\n self._params_init_info = defaultdict(dict)\n is_top_level_module = True\n\n # Initialize the `_params_init_info`,\n # When detecting the `tmp_mean_value` of\n # the corresponding parameter is changed, update related\n # initialization information\n for name, param in self.named_parameters():\n self._params_init_info[param][\n 'init_info'] = f'The value is the same before and ' \\\n f'after calling `init_weights` ' \\\n f'of {self.__class__.__name__} '\n self._params_init_info[param][\n 'tmp_mean_value'] = param.data.mean().cpu()\n\n # pass `params_init_info` to all submodules\n # All submodules share the 
same `params_init_info`,\n # so it will be updated when parameters are\n # modified at any level of the model.\n for sub_module in self.modules():\n sub_module._params_init_info = self._params_init_info\n\n logger = MMLogger.get_current_instance()\n logger_name = logger.instance_name\n\n module_name = self.__class__.__name__\n if not self._is_init:\n if self.init_cfg:\n print_log(\n f'initialize {module_name} with init_cfg {self.init_cfg}',\n logger=logger_name,\n level=logging.DEBUG)\n initialize(self, self.init_cfg)\n if isinstance(self.init_cfg, dict):\n # prevent the parameters of\n # the pre-trained model\n # from being overwritten by\n # the `init_weights`\n if self.init_cfg['type'] == 'Pretrained':\n return\n\n for m in self.children():\n if hasattr(m, 'init_weights'):\n m.init_weights()\n # users may overload the `init_weights`\n update_init_info(\n m,\n init_info=f'Initialized by '\n f'user-defined `init_weights`'\n f' in {m.__class__.__name__} ')\n\n self._is_init = True\n else:\n warnings.warn(f'init_weights of {self.__class__.__name__} has '\n f'been called more than once.')\n\n if is_top_level_module:\n # self._dump_init_info(logger_name)\n self._dump_init_info()\n\n for sub_module in self.modules():\n del sub_module._params_init_info\n\n @master_only\n def _dump_init_info(self):\n \"\"\"Dump the initialization information to a file named\n `initialization.log.json` in workdir.\n\n Args:\n logger_name (str): The name of logger.\n \"\"\"\n\n logger = MMLogger.get_current_instance()\n logger_name = logger.instance_name\n with_file_handler = False\n # dump the information to the logger file if there is a `FileHandler`\n for handler in logger.handlers:\n if isinstance(handler, FileHandler):\n handler.stream.write(\n 'Name of parameter - Initialization information\\n')\n for name, param in self.named_parameters():\n handler.stream.write(\n f'\\n{name} - {param.shape}: '\n f\"\\n{self._params_init_info[param]['init_info']} \\n\")\n handler.stream.flush()\n with_file_handler = True\n if not with_file_handler:\n for name, param in self.named_parameters():\n print_log(\n f'\\n{name} - {param.shape}: '\n f\"\\n{self._params_init_info[param]['init_info']} \\n \",\n logger=logger_name)\n\n def __repr__(self):\n s = super().__repr__()\n if self.init_cfg:\n s += f'\\ninit_cfg={self.init_cfg}'\n return s\n\n\nclass Sequential(BaseModule, nn.Sequential):\n \"\"\"Sequential module in openmmlab.\n\n Ensures that all modules in ``Sequential`` have a different initialization\n strategy than the outer model\n\n Args:\n init_cfg (dict, optional): Initialization config dict.\n \"\"\"\n\n def __init__(self, *args, init_cfg: Optional[dict] = None):\n BaseModule.__init__(self, init_cfg)\n nn.Sequential.__init__(self, *args)\n\n\nclass ModuleList(BaseModule, nn.ModuleList):\n \"\"\"ModuleList in openmmlab.\n\n Ensures that all modules in ``ModuleList`` have a different initialization\n strategy than the outer model\n\n Args:\n modules (iterable, optional): An iterable of modules to add.\n init_cfg (dict, optional): Initialization config dict.\n \"\"\"\n\n def __init__(self,\n modules: Optional[Iterable] = None,\n init_cfg: Optional[dict] = None):\n BaseModule.__init__(self, init_cfg)\n nn.ModuleList.__init__(self, modules)\n\n\nclass ModuleDict(BaseModule, nn.ModuleDict):\n \"\"\"ModuleDict in openmmlab.\n\n Ensures that all modules in ``ModuleDict`` have a different initialization\n strategy than the outer model\n\n Args:\n modules (dict, optional): A mapping (dictionary) of (string: module)\n or an 
iterable of key-value pairs of type (string, module).\n init_cfg (dict, optional): Initialization config dict.\n \"\"\"\n\n def __init__(self,\n modules: Optional[dict] = None,\n init_cfg: Optional[dict] = None):\n BaseModule.__init__(self, init_cfg)\n nn.ModuleDict.__init__(self, modules)\n", "path": "mmengine/model/base_module.py"}], "after_files": [{"content": "# Copyright (c) OpenMMLab. All rights reserved.\nimport copy\nimport logging\nimport warnings\nfrom abc import ABCMeta\nfrom collections import defaultdict\nfrom logging import FileHandler\nfrom typing import Iterable, List, Optional, Union\n\nimport torch.nn as nn\n\nfrom mmengine.dist import master_only\nfrom mmengine.logging import MMLogger, print_log\nfrom .weight_init import initialize, update_init_info\n\n\nclass BaseModule(nn.Module, metaclass=ABCMeta):\n \"\"\"Base module for all modules in openmmlab. ``BaseModule`` is a wrapper of\n ``torch.nn.Module`` with additional functionality of parameter\n initialization. Compared with ``torch.nn.Module``, ``BaseModule`` mainly\n adds three attributes.\n\n - ``init_cfg``: the config to control the initialization.\n - ``init_weights``: The function of parameter initialization and recording\n initialization information.\n - ``_params_init_info``: Used to track the parameter initialization\n information. This attribute only exists during executing the\n ``init_weights``.\n\n Note:\n :obj:`PretrainedInit` has a higher priority than any other\n initializer. The loaded pretrained weights will overwrite\n the previous initialized weights.\n\n Args:\n init_cfg (dict or List[dict], optional): Initialization config dict.\n \"\"\"\n\n def __init__(self, init_cfg: Union[dict, List[dict], None] = None):\n \"\"\"Initialize BaseModule, inherited from `torch.nn.Module`\"\"\"\n\n # NOTE init_cfg can be defined in different levels, but init_cfg\n # in low levels has a higher priority.\n\n super().__init__()\n # define default value of init_cfg instead of hard code\n # in init_weights() function\n self._is_init = False\n\n self.init_cfg = copy.deepcopy(init_cfg)\n\n # Backward compatibility in derived classes\n # if pretrained is not None:\n # warnings.warn('DeprecationWarning: pretrained is a deprecated \\\n # key, please consider using init_cfg')\n # self.init_cfg = dict(type='Pretrained', checkpoint=pretrained)\n\n @property\n def is_init(self):\n return self._is_init\n\n def init_weights(self):\n \"\"\"Initialize the weights.\"\"\"\n\n is_top_level_module = False\n # check if it is top-level module\n if not hasattr(self, '_params_init_info'):\n # The `_params_init_info` is used to record the initialization\n # information of the parameters\n # the key should be the obj:`nn.Parameter` of model and the value\n # should be a dict containing\n # - init_info (str): The string that describes the initialization.\n # - tmp_mean_value (FloatTensor): The mean of the parameter,\n # which indicates whether the parameter has been modified.\n # this attribute would be deleted after all parameters\n # is initialized.\n self._params_init_info = defaultdict(dict)\n is_top_level_module = True\n\n # Initialize the `_params_init_info`,\n # When detecting the `tmp_mean_value` of\n # the corresponding parameter is changed, update related\n # initialization information\n for name, param in self.named_parameters():\n self._params_init_info[param][\n 'init_info'] = f'The value is the same before and ' \\\n f'after calling `init_weights` ' \\\n f'of {self.__class__.__name__} '\n self._params_init_info[param][\n 
'tmp_mean_value'] = param.data.mean().cpu()\n\n # pass `params_init_info` to all submodules\n # All submodules share the same `params_init_info`,\n # so it will be updated when parameters are\n # modified at any level of the model.\n for sub_module in self.modules():\n sub_module._params_init_info = self._params_init_info\n\n logger = MMLogger.get_current_instance()\n logger_name = logger.instance_name\n\n module_name = self.__class__.__name__\n if not self._is_init:\n if self.init_cfg:\n print_log(\n f'initialize {module_name} with init_cfg {self.init_cfg}',\n logger=logger_name,\n level=logging.DEBUG)\n\n init_cfgs = self.init_cfg\n if isinstance(self.init_cfg, dict):\n init_cfgs = [self.init_cfg]\n\n # PretrainedInit has higher priority than any other init_cfg.\n # Therefore we initialize `pretrained_cfg` last to overwrite\n # the previous initialized weights.\n # See details in https://github.com/open-mmlab/mmengine/issues/691 # noqa E501\n other_cfgs = []\n pretrained_cfg = []\n for init_cfg in init_cfgs:\n assert isinstance(init_cfg, dict)\n if init_cfg['type'] == 'Pretrained':\n pretrained_cfg.append(init_cfg)\n else:\n other_cfgs.append(init_cfg)\n\n initialize(self, other_cfgs)\n\n for m in self.children():\n if hasattr(m, 'init_weights'):\n m.init_weights()\n # users may overload the `init_weights`\n update_init_info(\n m,\n init_info=f'Initialized by '\n f'user-defined `init_weights`'\n f' in {m.__class__.__name__} ')\n if self.init_cfg and pretrained_cfg:\n initialize(self, pretrained_cfg)\n self._is_init = True\n else:\n warnings.warn(f'init_weights of {self.__class__.__name__} has '\n f'been called more than once.')\n\n if is_top_level_module:\n # self._dump_init_info(logger_name)\n self._dump_init_info()\n\n for sub_module in self.modules():\n del sub_module._params_init_info\n\n @master_only\n def _dump_init_info(self):\n \"\"\"Dump the initialization information to a file named\n `initialization.log.json` in workdir.\n\n Args:\n logger_name (str): The name of logger.\n \"\"\"\n\n logger = MMLogger.get_current_instance()\n logger_name = logger.instance_name\n with_file_handler = False\n # dump the information to the logger file if there is a `FileHandler`\n for handler in logger.handlers:\n if isinstance(handler, FileHandler):\n handler.stream.write(\n 'Name of parameter - Initialization information\\n')\n for name, param in self.named_parameters():\n handler.stream.write(\n f'\\n{name} - {param.shape}: '\n f\"\\n{self._params_init_info[param]['init_info']} \\n\")\n handler.stream.flush()\n with_file_handler = True\n if not with_file_handler:\n for name, param in self.named_parameters():\n print_log(\n f'\\n{name} - {param.shape}: '\n f\"\\n{self._params_init_info[param]['init_info']} \\n \",\n logger=logger_name)\n\n def __repr__(self):\n s = super().__repr__()\n if self.init_cfg:\n s += f'\\ninit_cfg={self.init_cfg}'\n return s\n\n\nclass Sequential(BaseModule, nn.Sequential):\n \"\"\"Sequential module in openmmlab.\n\n Ensures that all modules in ``Sequential`` have a different initialization\n strategy than the outer model\n\n Args:\n init_cfg (dict, optional): Initialization config dict.\n \"\"\"\n\n def __init__(self, *args, init_cfg: Optional[dict] = None):\n BaseModule.__init__(self, init_cfg)\n nn.Sequential.__init__(self, *args)\n\n\nclass ModuleList(BaseModule, nn.ModuleList):\n \"\"\"ModuleList in openmmlab.\n\n Ensures that all modules in ``ModuleList`` have a different initialization\n strategy than the outer model\n\n Args:\n modules (iterable, optional): An 
iterable of modules to add.\n init_cfg (dict, optional): Initialization config dict.\n \"\"\"\n\n def __init__(self,\n modules: Optional[Iterable] = None,\n init_cfg: Optional[dict] = None):\n BaseModule.__init__(self, init_cfg)\n nn.ModuleList.__init__(self, modules)\n\n\nclass ModuleDict(BaseModule, nn.ModuleDict):\n \"\"\"ModuleDict in openmmlab.\n\n Ensures that all modules in ``ModuleDict`` have a different initialization\n strategy than the outer model\n\n Args:\n modules (dict, optional): A mapping (dictionary) of (string: module)\n or an iterable of key-value pairs of type (string, module).\n init_cfg (dict, optional): Initialization config dict.\n \"\"\"\n\n def __init__(self,\n modules: Optional[dict] = None,\n init_cfg: Optional[dict] = None):\n BaseModule.__init__(self, init_cfg)\n nn.ModuleDict.__init__(self, modules)\n", "path": "mmengine/model/base_module.py"}]}
| 2,845 | 664 |
gh_patches_debug_55717 | rasdani/github-patches | git_diff | streamlink__streamlink-5622 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
plugins.dlive: Failed to fetch segment | 403 Client Error
### Checklist
- [X] This is a [plugin issue](https://streamlink.github.io/plugins.html) and not [a different kind of issue](https://github.com/streamlink/streamlink/issues/new/choose)
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)
- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)
### Streamlink version
[cli][info] Your Streamlink version (6.2.1) is up to date!
### Description
I navigate to the folder where streamlink.exe is located and enter the command "streamlink.exe https://dlive.tv/cryptokaprika best". It doesn't matter which channel is specified, the same error comes up for all of them as of late.
Here is the complete output that is shown to me in the command line:
C:\Program Files\Streamlink\bin>streamlink.exe https://dlive.tv/cryptokaprika best
[cli][info] Found matching plugin dlive for URL https://dlive.tv/cryptokaprika
[cli][info] Available streams: src (worst, best)
[cli][info] Opening stream: src (hls)
[cli][info] Starting player: C:\Program Files\VideoLAN\VLC\vlc.exe
[stream.hls][error] Failed to fetch segment 79790: Unable to open URL: https://videos.prd.dlivecdn.com/dlive/0000079790.ts (403 Client Error: Forbidden for url: https://videos.prd.dlivecdn.com/dlive/0000079790.ts)
[stream.hls][error] Failed to fetch segment 79791: Unable to open URL: https://videos.prd.dlivecdn.com/dlive/0000079791.ts (403 Client Error: Forbidden for url: https://videos.prd.dlivecdn.com/dlive/0000079791.ts)
[cli][info] Stream ended
[cli][info] Closing currently open stream...
The VLC Media Player also starts, but I only get the following picture and am referred to the homepage: [https://imgur.com/a/NpuAHQ3](https://imgur.com/a/NpuAHQ3)
### Debug log
```text
C:\Program Files\Streamlink\bin>streamlink.exe --loglevel=debug https://dlive.tv/cryptokaprika best
[cli][debug] OS: Windows 10
[cli][debug] Python: 3.11.5
[cli][debug] OpenSSL: OpenSSL 3.0.9 30 May 2023
[cli][debug] Streamlink: 6.2.1
[cli][debug] Dependencies:
[cli][debug] certifi: 2023.7.22
[cli][debug] isodate: 0.6.1
[cli][debug] lxml: 4.9.3
[cli][debug] pycountry: 22.3.5
[cli][debug] pycryptodome: 3.19.0
[cli][debug] PySocks: 1.7.1
[cli][debug] requests: 2.31.0
[cli][debug] trio: 0.22.2
[cli][debug] trio-websocket: 0.11.1
[cli][debug] typing-extensions: 4.8.0
[cli][debug] urllib3: 2.0.6
[cli][debug] websocket-client: 1.6.3
[cli][debug] Arguments:
[cli][debug] url=https://dlive.tv/cryptokaprika
[cli][debug] stream=['best']
[cli][debug] --loglevel=debug
[cli][debug] --ffmpeg-ffmpeg=C:\Program Files\Streamlink\ffmpeg\ffmpeg.exe
[cli][info] Found matching plugin dlive for URL https://dlive.tv/cryptokaprika
[plugins.dlive][debug] Getting live HLS streams for cryptokaprika
[utils.l10n][debug] Language code: en_US
[cli][info] Available streams: src (worst, best)
[cli][info] Opening stream: src (hls)
[cli][info] Starting player: C:\Program Files\VideoLAN\VLC\vlc.exe
[stream.hls][debug] Reloading playlist
[cli][debug] Pre-buffering 8192 bytes
[stream.hls][debug] First Sequence: 79786; Last Sequence: 79791
[stream.hls][debug] Start offset: 0; Duration: None; Start Sequence: 79786; End Sequence: 79791
[stream.hls][debug] Adding segment 79786 to queue
[stream.hls][debug] Adding segment 79787 to queue
[stream.hls][debug] Adding segment 79788 to queue
[stream.hls][debug] Adding segment 79789 to queue
[stream.hls][debug] Adding segment 79790 to queue
[stream.hls][debug] Adding segment 79791 to queue
[stream.segmented][debug] Closing worker thread
[stream.hls][debug] Writing segment 79786 to output
[stream.hls][debug] Segment 79786 complete
[cli.output][debug] Opening subprocess: ['C:\\Program Files\\VideoLAN\\VLC\\vlc.exe', '--input-title-format', 'https://dlive.tv/cryptokaprika', '-']
[stream.hls][debug] Writing segment 79787 to output
[stream.hls][debug] Segment 79787 complete
[stream.hls][debug] Writing segment 79788 to output
[stream.hls][debug] Segment 79788 complete
[stream.hls][debug] Writing segment 79789 to output
[stream.hls][debug] Segment 79789 complete
[cli][debug] Writing stream to output
[stream.hls][error] Failed to fetch segment 79790: Unable to open URL: https://videos.prd.dlivecdn.com/dlive/0000079790.ts (403 Client Error: Forbidden for url: https://videos.prd.dlivecdn.com/dlive/0000079790.ts)
[stream.hls][error] Failed to fetch segment 79791: Unable to open URL: https://videos.prd.dlivecdn.com/dlive/0000079791.ts (403 Client Error: Forbidden for url: https://videos.prd.dlivecdn.com/dlive/0000079791.ts)
[stream.segmented][debug] Closing writer thread
[cli][info] Stream ended
[cli][info] Closing currently open stream...
```
--- END ISSUE ---
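The 403 Forbidden responses on the segment URLs (while the playlist itself loads) suggest the CDN has started gating segment requests on the `Referer` header. A minimal diagnostic sketch, not part of the report: the segment URL is copied from the log above and will long since have expired, so a fresh one from a current playlist would need to be substituted, and the `Referer` value is an assumption consistent with the fix shown further below.

```python
# Hedged sketch: compare a bare segment request with one that sends a dlive Referer.
# The URL is the (now expired) segment from the log above; replace it with a live one.
import requests

SEGMENT_URL = "https://videos.prd.dlivecdn.com/dlive/0000079790.ts"

for headers in ({}, {"Referer": "https://dlive.tv/"}):
    resp = requests.get(SEGMENT_URL, headers=headers, timeout=10)
    print(headers or "no extra headers", "->", resp.status_code)
```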
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/streamlink/plugins/dlive.py`
Content:
```
1 """
2 $description Global live-streaming platform owned by BitTorrent, Inc.
3 $url dlive.tv
4 $type live, vod
5 $metadata author
6 $metadata title
7 """
8
9 import logging
10 import re
11 from urllib.parse import unquote_plus
12
13 from streamlink.plugin import Plugin, pluginmatcher
14 from streamlink.plugin.api import validate
15 from streamlink.stream.hls import HLSStream
16
17
18 log = logging.getLogger(__name__)
19
20
21 @pluginmatcher(re.compile(r"""
22 https?://(?:www\.)?dlive\.tv/
23 (?:
24 p/(?P<video>[^/]+)
25 |
26 (?P<channel>[^/]+)
27 )
28 """, re.VERBOSE))
29 class DLive(Plugin):
30 URL_LIVE = "https://live.prd.dlive.tv/hls/live/{username}.m3u8"
31
32 QUALITY_WEIGHTS = {
33 "src": 1080,
34 }
35
36 @classmethod
37 def stream_weight(cls, key):
38 weight = cls.QUALITY_WEIGHTS.get(key)
39 if weight:
40 return weight, "dlive"
41
42 return super().stream_weight(key)
43
44 def _get_streams_video(self, video):
45 log.debug(f"Getting video HLS streams for {video}")
46 hls_url = self.session.http.get(self.url, schema=validate.Schema(
47 validate.regex(re.compile(r'"playbackUrl"\s*:\s*"([^"]+\.m3u8)"')),
48 validate.get(1),
49 validate.transform(unquote_plus),
50 validate.transform(lambda url: bytes(url, "utf-8").decode("unicode_escape")),
51 validate.url(),
52 ))
53
54 return HLSStream.parse_variant_playlist(self.session, hls_url)
55
56 def _get_streams_live(self, channel):
57 log.debug(f"Getting live HLS streams for {channel}")
58 query = f"""query {{
59 userByDisplayName(displayname:"{channel}") {{
60 livestream {{
61 title
62 }}
63 username
64 }}
65 }}"""
66 livestream, username = self.session.http.post(
67 "https://graphigo.prd.dlive.tv/",
68 json={"query": query},
69 schema=validate.Schema(
70 validate.parse_json(),
71 {
72 "data": {
73 "userByDisplayName": {
74 "livestream": {
75 "title": str,
76 },
77 "username": str,
78 },
79 },
80 },
81 validate.get(("data", "userByDisplayName")),
82 validate.union_get("livestream", "username"),
83 ),
84 )
85
86 self.author = channel
87 self.title = livestream["title"]
88
89 return HLSStream.parse_variant_playlist(self.session, self.URL_LIVE.format(username=username))
90
91 def _get_streams(self):
92 video = self.match.group("video")
93 channel = self.match.group("channel")
94
95 if video:
96 return self._get_streams_video(video)
97 elif channel:
98 return self._get_streams_live(channel)
99
100
101 __plugin__ = DLive
102
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/streamlink/plugins/dlive.py b/src/streamlink/plugins/dlive.py
--- a/src/streamlink/plugins/dlive.py
+++ b/src/streamlink/plugins/dlive.py
@@ -86,7 +86,7 @@
self.author = channel
self.title = livestream["title"]
- return HLSStream.parse_variant_playlist(self.session, self.URL_LIVE.format(username=username))
+ return HLSStream.parse_variant_playlist(self.session, self.URL_LIVE.format(username=username), headers={"Referer": "https://dlive.tv/"})
def _get_streams(self):
video = self.match.group("video")
|
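The patch passes `headers` through `HLSStream.parse_variant_playlist`, whose extra keyword arguments are used for the HTTP requests the stream makes — which is presumably why adding the `Referer` there also clears the 403s on the media segments. A minimal sketch of the same call outside the plugin, assuming a placeholder username (in the plugin the real username comes from dlive's GraphQL API):

```python
# Hedged sketch: fetch the dlive HLS variant playlist with the Referer header,
# mirroring the patched parse_variant_playlist() call. "someuser" is a placeholder.
from streamlink import Streamlink
from streamlink.stream.hls import HLSStream

session = Streamlink()
url = "https://live.prd.dlive.tv/hls/live/someuser.m3u8"
streams = HLSStream.parse_variant_playlist(
    session, url, headers={"Referer": "https://dlive.tv/"}
)
print(sorted(streams))
```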
{"golden_diff": "diff --git a/src/streamlink/plugins/dlive.py b/src/streamlink/plugins/dlive.py\n--- a/src/streamlink/plugins/dlive.py\n+++ b/src/streamlink/plugins/dlive.py\n@@ -86,7 +86,7 @@\n self.author = channel\n self.title = livestream[\"title\"]\n \n- return HLSStream.parse_variant_playlist(self.session, self.URL_LIVE.format(username=username))\n+ return HLSStream.parse_variant_playlist(self.session, self.URL_LIVE.format(username=username), headers={\"Referer\": \"https://dlive.tv/\"})\n \n def _get_streams(self):\n video = self.match.group(\"video\")\n", "issue": "plugins.dlive: Failed to fetch segment | 403 Client Error\n### Checklist\n\n- [X] This is a [plugin issue](https://streamlink.github.io/plugins.html) and not [a different kind of issue](https://github.com/streamlink/streamlink/issues/new/choose)\n- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)\n- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)\n- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)\n\n### Streamlink version\n\n[cli][info] Your Streamlink version (6.2.1) is up to date!\n\n### Description\n\nI navigate to the folder where streamlink.exe is located and enter the command \"streamlink.exe https://dlive.tv/cryptokaprika best\". It doesn't matter which channel is specified, the same error comes up for all of them as of late.\r\n\r\n\r\nHere is the complete output that is shown to me in the command line:\r\n\r\n\r\nC:\\Program Files\\Streamlink\\bin>streamlink.exe https://dlive.tv/cryptokaprika best\r\n\r\n[cli][info] Found matching plugin dlive for URL https://dlive.tv/cryptokaprika\r\n\r\n[cli][info] Available streams: src (worst, best)\r\n\r\n[cli][info] Opening stream: src (hls)\r\n\r\n[cli][info] Starting player: C:\\Program Files\\VideoLAN\\VLC\\vlc.exe\r\n\r\n[stream.hls][error] Failed to fetch segment 79790: Unable to open URL: https://videos.prd.dlivecdn.com/dlive/0000079790.ts (403 Client Error: Forbidden for url: https://videos.prd.dlivecdn.com/dlive/0000079790.ts)\r\n\r\n[stream.hls][error] Failed to fetch segment 79791: Unable to open URL: https://videos.prd.dlivecdn.com/dlive/0000079791.ts (403 Client Error: Forbidden for url: https://videos.prd.dlivecdn.com/dlive/0000079791.ts)\r\n\r\n[cli][info] Stream ended\r\n\r\n[cli][info] Closing currently open stream...\r\n\r\n\r\nThe VLC Media Player also starts, but I only get the following picture and am referred to the homepage: [https://imgur.com/a/NpuAHQ3](https://imgur.com/a/NpuAHQ3)\n\n### Debug log\n\n```text\nC:\\Program Files\\Streamlink\\bin>streamlink.exe --loglevel=debug https://dlive.tv/cryptokaprika best\r\n[cli][debug] OS: Windows 10\r\n[cli][debug] Python: 3.11.5\r\n[cli][debug] OpenSSL: OpenSSL 3.0.9 30 May 2023\r\n[cli][debug] Streamlink: 6.2.1\r\n[cli][debug] Dependencies:\r\n[cli][debug] certifi: 2023.7.22\r\n[cli][debug] isodate: 0.6.1\r\n[cli][debug] lxml: 4.9.3\r\n[cli][debug] pycountry: 22.3.5\r\n[cli][debug] pycryptodome: 3.19.0\r\n[cli][debug] PySocks: 1.7.1\r\n[cli][debug] requests: 2.31.0\r\n[cli][debug] trio: 0.22.2\r\n[cli][debug] trio-websocket: 0.11.1\r\n[cli][debug] typing-extensions: 4.8.0\r\n[cli][debug] urllib3: 2.0.6\r\n[cli][debug] websocket-client: 1.6.3\r\n[cli][debug] Arguments:\r\n[cli][debug] url=https://dlive.tv/cryptokaprika\r\n[cli][debug] 
stream=['best']\r\n[cli][debug] --loglevel=debug\r\n[cli][debug] --ffmpeg-ffmpeg=C:\\Program Files\\Streamlink\\ffmpeg\\ffmpeg.exe\r\n[cli][info] Found matching plugin dlive for URL https://dlive.tv/cryptokaprika\r\n[plugins.dlive][debug] Getting live HLS streams for cryptokaprika\r\n[utils.l10n][debug] Language code: en_US\r\n[cli][info] Available streams: src (worst, best)\r\n[cli][info] Opening stream: src (hls)\r\n[cli][info] Starting player: C:\\Program Files\\VideoLAN\\VLC\\vlc.exe\r\n[stream.hls][debug] Reloading playlist\r\n[cli][debug] Pre-buffering 8192 bytes\r\n[stream.hls][debug] First Sequence: 79786; Last Sequence: 79791\r\n[stream.hls][debug] Start offset: 0; Duration: None; Start Sequence: 79786; End Sequence: 79791\r\n[stream.hls][debug] Adding segment 79786 to queue\r\n[stream.hls][debug] Adding segment 79787 to queue\r\n[stream.hls][debug] Adding segment 79788 to queue\r\n[stream.hls][debug] Adding segment 79789 to queue\r\n[stream.hls][debug] Adding segment 79790 to queue\r\n[stream.hls][debug] Adding segment 79791 to queue\r\n[stream.segmented][debug] Closing worker thread\r\n[stream.hls][debug] Writing segment 79786 to output\r\n[stream.hls][debug] Segment 79786 complete\r\n[cli.output][debug] Opening subprocess: ['C:\\\\Program Files\\\\VideoLAN\\\\VLC\\\\vlc.exe', '--input-title-format', 'https://dlive.tv/cryptokaprika', '-']\r\n[stream.hls][debug] Writing segment 79787 to output\r\n[stream.hls][debug] Segment 79787 complete\r\n[stream.hls][debug] Writing segment 79788 to output\r\n[stream.hls][debug] Segment 79788 complete\r\n[stream.hls][debug] Writing segment 79789 to output\r\n[stream.hls][debug] Segment 79789 complete\r\n[cli][debug] Writing stream to output\r\n[stream.hls][error] Failed to fetch segment 79790: Unable to open URL: https://videos.prd.dlivecdn.com/dlive/0000079790.ts (403 Client Error: Forbidden for url: https://videos.prd.dlivecdn.com/dlive/0000079790.ts)\r\n[stream.hls][error] Failed to fetch segment 79791: Unable to open URL: https://videos.prd.dlivecdn.com/dlive/0000079791.ts (403 Client Error: Forbidden for url: https://videos.prd.dlivecdn.com/dlive/0000079791.ts)\r\n[stream.segmented][debug] Closing writer thread\r\n[cli][info] Stream ended\r\n[cli][info] Closing currently open stream...\n```\n\n", "before_files": [{"content": "\"\"\"\n$description Global live-streaming platform owned by BitTorrent, Inc.\n$url dlive.tv\n$type live, vod\n$metadata author\n$metadata title\n\"\"\"\n\nimport logging\nimport re\nfrom urllib.parse import unquote_plus\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.hls import HLSStream\n\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(r\"\"\"\n https?://(?:www\\.)?dlive\\.tv/\n (?:\n p/(?P<video>[^/]+)\n |\n (?P<channel>[^/]+)\n )\n\"\"\", re.VERBOSE))\nclass DLive(Plugin):\n URL_LIVE = \"https://live.prd.dlive.tv/hls/live/{username}.m3u8\"\n\n QUALITY_WEIGHTS = {\n \"src\": 1080,\n }\n\n @classmethod\n def stream_weight(cls, key):\n weight = cls.QUALITY_WEIGHTS.get(key)\n if weight:\n return weight, \"dlive\"\n\n return super().stream_weight(key)\n\n def _get_streams_video(self, video):\n log.debug(f\"Getting video HLS streams for {video}\")\n hls_url = self.session.http.get(self.url, schema=validate.Schema(\n validate.regex(re.compile(r'\"playbackUrl\"\\s*:\\s*\"([^\"]+\\.m3u8)\"')),\n validate.get(1),\n validate.transform(unquote_plus),\n validate.transform(lambda url: bytes(url, 
\"utf-8\").decode(\"unicode_escape\")),\n validate.url(),\n ))\n\n return HLSStream.parse_variant_playlist(self.session, hls_url)\n\n def _get_streams_live(self, channel):\n log.debug(f\"Getting live HLS streams for {channel}\")\n query = f\"\"\"query {{\n userByDisplayName(displayname:\"{channel}\") {{\n livestream {{\n title\n }}\n username\n }}\n }}\"\"\"\n livestream, username = self.session.http.post(\n \"https://graphigo.prd.dlive.tv/\",\n json={\"query\": query},\n schema=validate.Schema(\n validate.parse_json(),\n {\n \"data\": {\n \"userByDisplayName\": {\n \"livestream\": {\n \"title\": str,\n },\n \"username\": str,\n },\n },\n },\n validate.get((\"data\", \"userByDisplayName\")),\n validate.union_get(\"livestream\", \"username\"),\n ),\n )\n\n self.author = channel\n self.title = livestream[\"title\"]\n\n return HLSStream.parse_variant_playlist(self.session, self.URL_LIVE.format(username=username))\n\n def _get_streams(self):\n video = self.match.group(\"video\")\n channel = self.match.group(\"channel\")\n\n if video:\n return self._get_streams_video(video)\n elif channel:\n return self._get_streams_live(channel)\n\n\n__plugin__ = DLive\n", "path": "src/streamlink/plugins/dlive.py"}], "after_files": [{"content": "\"\"\"\n$description Global live-streaming platform owned by BitTorrent, Inc.\n$url dlive.tv\n$type live, vod\n$metadata author\n$metadata title\n\"\"\"\n\nimport logging\nimport re\nfrom urllib.parse import unquote_plus\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.hls import HLSStream\n\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(r\"\"\"\n https?://(?:www\\.)?dlive\\.tv/\n (?:\n p/(?P<video>[^/]+)\n |\n (?P<channel>[^/]+)\n )\n\"\"\", re.VERBOSE))\nclass DLive(Plugin):\n URL_LIVE = \"https://live.prd.dlive.tv/hls/live/{username}.m3u8\"\n\n QUALITY_WEIGHTS = {\n \"src\": 1080,\n }\n\n @classmethod\n def stream_weight(cls, key):\n weight = cls.QUALITY_WEIGHTS.get(key)\n if weight:\n return weight, \"dlive\"\n\n return super().stream_weight(key)\n\n def _get_streams_video(self, video):\n log.debug(f\"Getting video HLS streams for {video}\")\n hls_url = self.session.http.get(self.url, schema=validate.Schema(\n validate.regex(re.compile(r'\"playbackUrl\"\\s*:\\s*\"([^\"]+\\.m3u8)\"')),\n validate.get(1),\n validate.transform(unquote_plus),\n validate.transform(lambda url: bytes(url, \"utf-8\").decode(\"unicode_escape\")),\n validate.url(),\n ))\n\n return HLSStream.parse_variant_playlist(self.session, hls_url)\n\n def _get_streams_live(self, channel):\n log.debug(f\"Getting live HLS streams for {channel}\")\n query = f\"\"\"query {{\n userByDisplayName(displayname:\"{channel}\") {{\n livestream {{\n title\n }}\n username\n }}\n }}\"\"\"\n livestream, username = self.session.http.post(\n \"https://graphigo.prd.dlive.tv/\",\n json={\"query\": query},\n schema=validate.Schema(\n validate.parse_json(),\n {\n \"data\": {\n \"userByDisplayName\": {\n \"livestream\": {\n \"title\": str,\n },\n \"username\": str,\n },\n },\n },\n validate.get((\"data\", \"userByDisplayName\")),\n validate.union_get(\"livestream\", \"username\"),\n ),\n )\n\n self.author = channel\n self.title = livestream[\"title\"]\n\n return HLSStream.parse_variant_playlist(self.session, self.URL_LIVE.format(username=username), headers={\"Referer\": \"https://dlive.tv/\"})\n\n def _get_streams(self):\n video = self.match.group(\"video\")\n channel = self.match.group(\"channel\")\n\n if video:\n return 
self._get_streams_video(video)\n elif channel:\n return self._get_streams_live(channel)\n\n\n__plugin__ = DLive\n", "path": "src/streamlink/plugins/dlive.py"}]}
| 2,768 | 136 |
gh_patches_debug_40324 | rasdani/github-patches | git_diff | pantsbuild__pants-15841 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix Scala using `add_dependencies_on_all_siblings=True`
After dependency inference was improved for Scala, `add_dependencies_on_all_siblings` was not removed:
https://github.com/pantsbuild/pants/blob/c2f6404c1ed5fd11a6a37eac8682a5d337bf22aa/src/python/pants/backend/scala/target_types.py#L233
This means that we are overly coarsening Scala (all BUILD targets end up compiled together).
We should fix this (or drive it via an option), but it will definitely impact compilation success rates, and might also impact performance. We should test out the impact in our testbed repositories.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/python/pants/backend/scala/target_types.py`
Content:
```
1 # Copyright 2020 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from __future__ import annotations
5
6 from dataclasses import dataclass
7
8 from pants.engine.rules import collect_rules, rule
9 from pants.engine.target import (
10 COMMON_TARGET_FIELDS,
11 AsyncFieldMixin,
12 Dependencies,
13 FieldSet,
14 MultipleSourcesField,
15 SingleSourceField,
16 StringField,
17 StringSequenceField,
18 Target,
19 TargetFilesGenerator,
20 TargetFilesGeneratorSettings,
21 TargetFilesGeneratorSettingsRequest,
22 generate_multiple_sources_field_help_message,
23 )
24 from pants.engine.unions import UnionRule
25 from pants.jvm.target_types import (
26 JunitTestSourceField,
27 JvmJdkField,
28 JvmProvidesTypesField,
29 JvmResolveField,
30 )
31 from pants.util.strutil import softwrap
32
33
34 class ScalaSettingsRequest(TargetFilesGeneratorSettingsRequest):
35 pass
36
37
38 @rule
39 def scala_settings_request(_: ScalaSettingsRequest) -> TargetFilesGeneratorSettings:
40 # TODO: See https://github.com/pantsbuild/pants/issues/14382.
41 return TargetFilesGeneratorSettings(add_dependencies_on_all_siblings=True)
42
43
44 class ScalaSourceField(SingleSourceField):
45 expected_file_extensions = (".scala",)
46
47
48 class ScalaGeneratorSourcesField(MultipleSourcesField):
49 expected_file_extensions = (".scala",)
50
51
52 class ScalaDependenciesField(Dependencies):
53 pass
54
55
56 class ScalaConsumedPluginNamesField(StringSequenceField):
57 help = softwrap(
58 """
59 The names of Scala plugins that this source file requires.
60
61 The plugin must be defined by a corresponding `scalac_plugin` AND `jvm_artifact` target,
62 and must be present in this target's resolve's lockfile.
63
64 If not specified, this will default to the plugins specified in
65 `[scalac].plugins_for_resolve` for this target's resolve.
66 """
67 )
68
69 alias = "scalac_plugins"
70 required = False
71
72
73 @dataclass(frozen=True)
74 class ScalaFieldSet(FieldSet):
75 required_fields = (ScalaSourceField,)
76
77 sources: ScalaSourceField
78
79
80 @dataclass(frozen=True)
81 class ScalaGeneratorFieldSet(FieldSet):
82 required_fields = (ScalaGeneratorSourcesField,)
83
84 sources: ScalaGeneratorSourcesField
85
86
87 # -----------------------------------------------------------------------------------------------
88 # `scalatest_tests`
89 # -----------------------------------------------------------------------------------------------
90
91
92 class ScalatestTestSourceField(ScalaSourceField):
93 pass
94
95
96 class ScalatestTestTarget(Target):
97 alias = "scalatest_test"
98 core_fields = (
99 *COMMON_TARGET_FIELDS,
100 ScalaDependenciesField,
101 ScalatestTestSourceField,
102 ScalaConsumedPluginNamesField,
103 JvmResolveField,
104 JvmProvidesTypesField,
105 JvmJdkField,
106 )
107 help = "A single Scala test, run with Scalatest."
108
109
110 class ScalatestTestsGeneratorSourcesField(ScalaGeneratorSourcesField):
111 default = ("*Spec.scala", "*Suite.scala")
112 help = generate_multiple_sources_field_help_message(
113 "Example: `sources=['*Spec.scala', '!SuiteIgnore.scala']`"
114 )
115
116
117 class ScalatestTestsGeneratorTarget(TargetFilesGenerator):
118 alias = "scalatest_tests"
119 core_fields = (
120 *COMMON_TARGET_FIELDS,
121 ScalatestTestsGeneratorSourcesField,
122 )
123 generated_target_cls = ScalatestTestTarget
124 copied_fields = COMMON_TARGET_FIELDS
125 moved_fields = (
126 ScalaDependenciesField,
127 ScalaConsumedPluginNamesField,
128 JvmJdkField,
129 JvmProvidesTypesField,
130 JvmResolveField,
131 )
132 settings_request_cls = ScalaSettingsRequest
133 help = softwrap(
134 f"""
135 Generate a `scalatest_test` target for each file in the `sources` field (defaults to
136 all files in the directory matching {ScalatestTestsGeneratorSourcesField.default}).
137 """
138 )
139
140
141 # -----------------------------------------------------------------------------------------------
142 # `scala_junit_tests`
143 # -----------------------------------------------------------------------------------------------
144
145
146 class ScalaJunitTestSourceField(ScalaSourceField, JunitTestSourceField):
147 pass
148
149
150 class ScalaJunitTestTarget(Target):
151 alias = "scala_junit_test"
152 core_fields = (
153 *COMMON_TARGET_FIELDS,
154 ScalaDependenciesField,
155 ScalaJunitTestSourceField,
156 ScalaConsumedPluginNamesField,
157 JvmResolveField,
158 JvmProvidesTypesField,
159 JvmJdkField,
160 )
161 help = "A single Scala test, run with JUnit."
162
163
164 class ScalaJunitTestsGeneratorSourcesField(ScalaGeneratorSourcesField):
165 default = ("*Test.scala",)
166 help = generate_multiple_sources_field_help_message(
167 "Example: `sources=['*Test.scala', '!TestIgnore.scala']`"
168 )
169
170
171 class ScalaJunitTestsGeneratorTarget(TargetFilesGenerator):
172 alias = "scala_junit_tests"
173 core_fields = (
174 *COMMON_TARGET_FIELDS,
175 ScalaJunitTestsGeneratorSourcesField,
176 )
177 generated_target_cls = ScalaJunitTestTarget
178 copied_fields = COMMON_TARGET_FIELDS
179 moved_fields = (
180 ScalaDependenciesField,
181 ScalaConsumedPluginNamesField,
182 JvmJdkField,
183 JvmProvidesTypesField,
184 JvmResolveField,
185 )
186 settings_request_cls = ScalaSettingsRequest
187 help = "Generate a `scala_junit_test` target for each file in the `sources` field."
188
189
190 # -----------------------------------------------------------------------------------------------
191 # `scala_source` target
192 # -----------------------------------------------------------------------------------------------
193
194
195 class ScalaSourceTarget(Target):
196 alias = "scala_source"
197 core_fields = (
198 *COMMON_TARGET_FIELDS,
199 ScalaDependenciesField,
200 ScalaSourceField,
201 ScalaConsumedPluginNamesField,
202 JvmResolveField,
203 JvmProvidesTypesField,
204 JvmJdkField,
205 )
206 help = "A single Scala source file containing application or library code."
207
208
209 # -----------------------------------------------------------------------------------------------
210 # `scala_sources` target generator
211 # -----------------------------------------------------------------------------------------------
212
213
214 class ScalaSourcesGeneratorSourcesField(ScalaGeneratorSourcesField):
215 default = (
216 "*.scala",
217 *(f"!{pat}" for pat in (ScalaJunitTestsGeneratorSourcesField.default)),
218 *(f"!{pat}" for pat in (ScalatestTestsGeneratorSourcesField.default)),
219 )
220 help = generate_multiple_sources_field_help_message(
221 "Example: `sources=['Example.scala', 'New*.scala', '!OldIgnore.scala']`"
222 )
223
224
225 class ScalaSourcesGeneratorTarget(TargetFilesGenerator):
226 alias = "scala_sources"
227 core_fields = (
228 *COMMON_TARGET_FIELDS,
229 ScalaSourcesGeneratorSourcesField,
230 )
231 generated_target_cls = ScalaSourceTarget
232 copied_fields = COMMON_TARGET_FIELDS
233 moved_fields = (
234 ScalaDependenciesField,
235 ScalaConsumedPluginNamesField,
236 JvmResolveField,
237 JvmJdkField,
238 JvmProvidesTypesField,
239 )
240 settings_request_cls = ScalaSettingsRequest
241 help = "Generate a `scala_source` target for each file in the `sources` field."
242
243
244 # -----------------------------------------------------------------------------------------------
245 # `scalac_plugin` target
246 # -----------------------------------------------------------------------------------------------
247
248
249 class ScalacPluginArtifactField(StringField, AsyncFieldMixin):
250 alias = "artifact"
251 required = True
252 value: str
253 help = "The address of a `jvm_artifact` that defines a plugin for `scalac`."
254
255
256 class ScalacPluginNameField(StringField):
257 alias = "plugin_name"
258 help = softwrap(
259 """
260 The name that `scalac` should use to load the plugin.
261
262 If not set, the plugin name defaults to the target name.
263 """
264 )
265
266
267 class ScalacPluginTarget(Target):
268 alias = "scalac_plugin"
269 core_fields = (
270 *COMMON_TARGET_FIELDS,
271 ScalacPluginArtifactField,
272 ScalacPluginNameField,
273 )
274 help = softwrap(
275 """
276 A plugin for `scalac`.
277
278 Currently only thirdparty plugins are supported. To enable a plugin, define this
279 target type, and set the `artifact=` field to the address of a `jvm_artifact` that
280 provides the plugin.
281
282 If the `scalac`-loaded name of the plugin does not match the target's name,
283 additionally set the `plugin_name=` field.
284 """
285 )
286
287
288 def rules():
289 return (
290 *collect_rules(),
291 UnionRule(TargetFilesGeneratorSettingsRequest, ScalaSettingsRequest),
292 )
293
```
Path: `src/python/pants/backend/scala/subsystems/scala_infer.py`
Content:
```
1 # Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from pants.option.option_types import BoolOption
5 from pants.option.subsystem import Subsystem
6
7
8 class ScalaInferSubsystem(Subsystem):
9 options_scope = "scala-infer"
10 help = "Options controlling which dependencies will be inferred for Scala targets."
11
12 imports = BoolOption(
13 "--imports",
14 default=True,
15 help="Infer a target's dependencies by parsing import statements from sources.",
16 )
17 consumed_types = BoolOption(
18 "--consumed-types",
19 default=True,
20 help="Infer a target's dependencies by parsing consumed types from sources.",
21 )
22
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/python/pants/backend/scala/subsystems/scala_infer.py b/src/python/pants/backend/scala/subsystems/scala_infer.py
--- a/src/python/pants/backend/scala/subsystems/scala_infer.py
+++ b/src/python/pants/backend/scala/subsystems/scala_infer.py
@@ -3,6 +3,7 @@
from pants.option.option_types import BoolOption
from pants.option.subsystem import Subsystem
+from pants.util.strutil import softwrap
class ScalaInferSubsystem(Subsystem):
@@ -19,3 +20,18 @@
default=True,
help="Infer a target's dependencies by parsing consumed types from sources.",
)
+ force_add_siblings_as_dependencies = BoolOption(
+ "--force-add-siblings-as-dependencies",
+ default=True,
+ help=softwrap(
+ """
+ If true, add a dependency on all scala_source targets generated by the same scala_sources target generator.
+ If false, the dependencies will only be added if [scala-infer].imports is also set to false.
+
+ Setting this option to true reduces the precision of dependency information.
+ That means that you may end up compiling more than is necessary for a particular task,
+ and that compilation will be invalidated more frequently than actually necessary.
+ However, setting to true may be helpful if compilation fails due to missing dependencies.
+ """
+ ),
+ )
diff --git a/src/python/pants/backend/scala/target_types.py b/src/python/pants/backend/scala/target_types.py
--- a/src/python/pants/backend/scala/target_types.py
+++ b/src/python/pants/backend/scala/target_types.py
@@ -5,6 +5,8 @@
from dataclasses import dataclass
+from pants.backend.scala.subsystems.scala_infer import ScalaInferSubsystem
+from pants.base.deprecated import warn_or_error
from pants.engine.rules import collect_rules, rule
from pants.engine.target import (
COMMON_TARGET_FIELDS,
@@ -36,9 +38,35 @@
@rule
-def scala_settings_request(_: ScalaSettingsRequest) -> TargetFilesGeneratorSettings:
- # TODO: See https://github.com/pantsbuild/pants/issues/14382.
- return TargetFilesGeneratorSettings(add_dependencies_on_all_siblings=True)
+def scala_settings_request(
+ scala_infer_subsystem: ScalaInferSubsystem, _: ScalaSettingsRequest
+) -> TargetFilesGeneratorSettings:
+ if scala_infer_subsystem.options.is_default("force_add_siblings_as_dependencies"):
+ warn_or_error(
+ removal_version="2.14.0.dev0",
+ entity="`force_add_siblings_as_dependencies` defaulting to True",
+ hint=softwrap(
+ """
+ Setting this option to true reduces the precision of dependency information.
+ That means that you may end up compiling more than is necessary for a particular task,
+ and that compilation will be invalidated more frequently than actually necessary.
+ However, setting to true may be helpful if compilation fails due to missing dependencies.
+
+ We have made several improvements to Pants's Scala dependency inference,
+ where we no longer think it's necessary to adding dependencies on sibling targets.
+ If you have compilation failures after disabling this option, please consider opening an issue at
+ https://github.com/pantsbuild/pants/issues/new so that we can continue to improve Pants's dependency inference.
+
+ To opt into the new default early, set `force_add_siblings_as_dependencies = false` in the `[scala_infer]`
+ section in `pants.toml`. Otherwise, set to `true` to silence this warning.
+ """
+ ),
+ )
+
+ return TargetFilesGeneratorSettings(
+ add_dependencies_on_all_siblings=scala_infer_subsystem.force_add_siblings_as_dependencies
+ or not scala_infer_subsystem.imports
+ )
class ScalaSourceField(SingleSourceField):
|
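For a repository that wants the finer-grained behaviour before the deprecation completes, the opt-in would be a single setting in `pants.toml`. A minimal sketch, assuming the option lands as written above — note that the subsystem's `options_scope` is `scala-infer`, so that spelling is used for the section name (the hint text in the patch writes `[scala_infer]` with an underscore):

```toml
# Hedged sketch of opting out of sibling coarsening once the option exists.
[scala-infer]
force_add_siblings_as_dependencies = false
```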
{"golden_diff": "diff --git a/src/python/pants/backend/scala/subsystems/scala_infer.py b/src/python/pants/backend/scala/subsystems/scala_infer.py\n--- a/src/python/pants/backend/scala/subsystems/scala_infer.py\n+++ b/src/python/pants/backend/scala/subsystems/scala_infer.py\n@@ -3,6 +3,7 @@\n \n from pants.option.option_types import BoolOption\n from pants.option.subsystem import Subsystem\n+from pants.util.strutil import softwrap\n \n \n class ScalaInferSubsystem(Subsystem):\n@@ -19,3 +20,18 @@\n default=True,\n help=\"Infer a target's dependencies by parsing consumed types from sources.\",\n )\n+ force_add_siblings_as_dependencies = BoolOption(\n+ \"--force-add-siblings-as-dependencies\",\n+ default=True,\n+ help=softwrap(\n+ \"\"\"\n+ If true, add a dependency on all scala_source targets generated by the same scala_sources target generator.\n+ If false, the dependencies will only be added if [scala-infer].imports is also set to false.\n+\n+ Setting this option to true reduces the precision of dependency information.\n+ That means that you may end up compiling more than is necessary for a particular task,\n+ and that compilation will be invalidated more frequently than actually necessary.\n+ However, setting to true may be helpful if compilation fails due to missing dependencies.\n+ \"\"\"\n+ ),\n+ )\ndiff --git a/src/python/pants/backend/scala/target_types.py b/src/python/pants/backend/scala/target_types.py\n--- a/src/python/pants/backend/scala/target_types.py\n+++ b/src/python/pants/backend/scala/target_types.py\n@@ -5,6 +5,8 @@\n \n from dataclasses import dataclass\n \n+from pants.backend.scala.subsystems.scala_infer import ScalaInferSubsystem\n+from pants.base.deprecated import warn_or_error\n from pants.engine.rules import collect_rules, rule\n from pants.engine.target import (\n COMMON_TARGET_FIELDS,\n@@ -36,9 +38,35 @@\n \n \n @rule\n-def scala_settings_request(_: ScalaSettingsRequest) -> TargetFilesGeneratorSettings:\n- # TODO: See https://github.com/pantsbuild/pants/issues/14382.\n- return TargetFilesGeneratorSettings(add_dependencies_on_all_siblings=True)\n+def scala_settings_request(\n+ scala_infer_subsystem: ScalaInferSubsystem, _: ScalaSettingsRequest\n+) -> TargetFilesGeneratorSettings:\n+ if scala_infer_subsystem.options.is_default(\"force_add_siblings_as_dependencies\"):\n+ warn_or_error(\n+ removal_version=\"2.14.0.dev0\",\n+ entity=\"`force_add_siblings_as_dependencies` defaulting to True\",\n+ hint=softwrap(\n+ \"\"\"\n+ Setting this option to true reduces the precision of dependency information.\n+ That means that you may end up compiling more than is necessary for a particular task,\n+ and that compilation will be invalidated more frequently than actually necessary.\n+ However, setting to true may be helpful if compilation fails due to missing dependencies.\n+\n+ We have made several improvements to Pants's Scala dependency inference,\n+ where we no longer think it's necessary to adding dependencies on sibling targets.\n+ If you have compilation failures after disabling this option, please consider opening an issue at\n+ https://github.com/pantsbuild/pants/issues/new so that we can continue to improve Pants's dependency inference.\n+\n+ To opt into the new default early, set `force_add_siblings_as_dependencies = false` in the `[scala_infer]`\n+ section in `pants.toml`. 
Otherwise, set to `true` to silence this warning.\n+ \"\"\"\n+ ),\n+ )\n+\n+ return TargetFilesGeneratorSettings(\n+ add_dependencies_on_all_siblings=scala_infer_subsystem.force_add_siblings_as_dependencies\n+ or not scala_infer_subsystem.imports\n+ )\n \n \n class ScalaSourceField(SingleSourceField):\n", "issue": "Fix Scala using `add_dependencies_on_all_siblings=True`\nAfter dependency inference was improved for Scala, `add_dependencies_on_all_siblings` was not removed:\r\nhttps://github.com/pantsbuild/pants/blob/c2f6404c1ed5fd11a6a37eac8682a5d337bf22aa/src/python/pants/backend/scala/target_types.py#L233 This means that we are overly coarsening Scala (all BUILD targets end up compiled together).\r\n\r\nWe should fix this (or drive it via an option), but it will definitely impact compilation success rates, and might also impact performance. We should test out the impact in our testbed repositories.\n", "before_files": [{"content": "# Copyright 2020 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import annotations\n\nfrom dataclasses import dataclass\n\nfrom pants.engine.rules import collect_rules, rule\nfrom pants.engine.target import (\n COMMON_TARGET_FIELDS,\n AsyncFieldMixin,\n Dependencies,\n FieldSet,\n MultipleSourcesField,\n SingleSourceField,\n StringField,\n StringSequenceField,\n Target,\n TargetFilesGenerator,\n TargetFilesGeneratorSettings,\n TargetFilesGeneratorSettingsRequest,\n generate_multiple_sources_field_help_message,\n)\nfrom pants.engine.unions import UnionRule\nfrom pants.jvm.target_types import (\n JunitTestSourceField,\n JvmJdkField,\n JvmProvidesTypesField,\n JvmResolveField,\n)\nfrom pants.util.strutil import softwrap\n\n\nclass ScalaSettingsRequest(TargetFilesGeneratorSettingsRequest):\n pass\n\n\n@rule\ndef scala_settings_request(_: ScalaSettingsRequest) -> TargetFilesGeneratorSettings:\n # TODO: See https://github.com/pantsbuild/pants/issues/14382.\n return TargetFilesGeneratorSettings(add_dependencies_on_all_siblings=True)\n\n\nclass ScalaSourceField(SingleSourceField):\n expected_file_extensions = (\".scala\",)\n\n\nclass ScalaGeneratorSourcesField(MultipleSourcesField):\n expected_file_extensions = (\".scala\",)\n\n\nclass ScalaDependenciesField(Dependencies):\n pass\n\n\nclass ScalaConsumedPluginNamesField(StringSequenceField):\n help = softwrap(\n \"\"\"\n The names of Scala plugins that this source file requires.\n\n The plugin must be defined by a corresponding `scalac_plugin` AND `jvm_artifact` target,\n and must be present in this target's resolve's lockfile.\n\n If not specified, this will default to the plugins specified in\n `[scalac].plugins_for_resolve` for this target's resolve.\n \"\"\"\n )\n\n alias = \"scalac_plugins\"\n required = False\n\n\n@dataclass(frozen=True)\nclass ScalaFieldSet(FieldSet):\n required_fields = (ScalaSourceField,)\n\n sources: ScalaSourceField\n\n\n@dataclass(frozen=True)\nclass ScalaGeneratorFieldSet(FieldSet):\n required_fields = (ScalaGeneratorSourcesField,)\n\n sources: ScalaGeneratorSourcesField\n\n\n# -----------------------------------------------------------------------------------------------\n# `scalatest_tests`\n# -----------------------------------------------------------------------------------------------\n\n\nclass ScalatestTestSourceField(ScalaSourceField):\n pass\n\n\nclass ScalatestTestTarget(Target):\n alias = \"scalatest_test\"\n core_fields = (\n *COMMON_TARGET_FIELDS,\n ScalaDependenciesField,\n 
ScalatestTestSourceField,\n ScalaConsumedPluginNamesField,\n JvmResolveField,\n JvmProvidesTypesField,\n JvmJdkField,\n )\n help = \"A single Scala test, run with Scalatest.\"\n\n\nclass ScalatestTestsGeneratorSourcesField(ScalaGeneratorSourcesField):\n default = (\"*Spec.scala\", \"*Suite.scala\")\n help = generate_multiple_sources_field_help_message(\n \"Example: `sources=['*Spec.scala', '!SuiteIgnore.scala']`\"\n )\n\n\nclass ScalatestTestsGeneratorTarget(TargetFilesGenerator):\n alias = \"scalatest_tests\"\n core_fields = (\n *COMMON_TARGET_FIELDS,\n ScalatestTestsGeneratorSourcesField,\n )\n generated_target_cls = ScalatestTestTarget\n copied_fields = COMMON_TARGET_FIELDS\n moved_fields = (\n ScalaDependenciesField,\n ScalaConsumedPluginNamesField,\n JvmJdkField,\n JvmProvidesTypesField,\n JvmResolveField,\n )\n settings_request_cls = ScalaSettingsRequest\n help = softwrap(\n f\"\"\"\n Generate a `scalatest_test` target for each file in the `sources` field (defaults to\n all files in the directory matching {ScalatestTestsGeneratorSourcesField.default}).\n \"\"\"\n )\n\n\n# -----------------------------------------------------------------------------------------------\n# `scala_junit_tests`\n# -----------------------------------------------------------------------------------------------\n\n\nclass ScalaJunitTestSourceField(ScalaSourceField, JunitTestSourceField):\n pass\n\n\nclass ScalaJunitTestTarget(Target):\n alias = \"scala_junit_test\"\n core_fields = (\n *COMMON_TARGET_FIELDS,\n ScalaDependenciesField,\n ScalaJunitTestSourceField,\n ScalaConsumedPluginNamesField,\n JvmResolveField,\n JvmProvidesTypesField,\n JvmJdkField,\n )\n help = \"A single Scala test, run with JUnit.\"\n\n\nclass ScalaJunitTestsGeneratorSourcesField(ScalaGeneratorSourcesField):\n default = (\"*Test.scala\",)\n help = generate_multiple_sources_field_help_message(\n \"Example: `sources=['*Test.scala', '!TestIgnore.scala']`\"\n )\n\n\nclass ScalaJunitTestsGeneratorTarget(TargetFilesGenerator):\n alias = \"scala_junit_tests\"\n core_fields = (\n *COMMON_TARGET_FIELDS,\n ScalaJunitTestsGeneratorSourcesField,\n )\n generated_target_cls = ScalaJunitTestTarget\n copied_fields = COMMON_TARGET_FIELDS\n moved_fields = (\n ScalaDependenciesField,\n ScalaConsumedPluginNamesField,\n JvmJdkField,\n JvmProvidesTypesField,\n JvmResolveField,\n )\n settings_request_cls = ScalaSettingsRequest\n help = \"Generate a `scala_junit_test` target for each file in the `sources` field.\"\n\n\n# -----------------------------------------------------------------------------------------------\n# `scala_source` target\n# -----------------------------------------------------------------------------------------------\n\n\nclass ScalaSourceTarget(Target):\n alias = \"scala_source\"\n core_fields = (\n *COMMON_TARGET_FIELDS,\n ScalaDependenciesField,\n ScalaSourceField,\n ScalaConsumedPluginNamesField,\n JvmResolveField,\n JvmProvidesTypesField,\n JvmJdkField,\n )\n help = \"A single Scala source file containing application or library code.\"\n\n\n# -----------------------------------------------------------------------------------------------\n# `scala_sources` target generator\n# -----------------------------------------------------------------------------------------------\n\n\nclass ScalaSourcesGeneratorSourcesField(ScalaGeneratorSourcesField):\n default = (\n \"*.scala\",\n *(f\"!{pat}\" for pat in (ScalaJunitTestsGeneratorSourcesField.default)),\n *(f\"!{pat}\" for pat in (ScalatestTestsGeneratorSourcesField.default)),\n )\n help = 
generate_multiple_sources_field_help_message(\n \"Example: `sources=['Example.scala', 'New*.scala', '!OldIgnore.scala']`\"\n )\n\n\nclass ScalaSourcesGeneratorTarget(TargetFilesGenerator):\n alias = \"scala_sources\"\n core_fields = (\n *COMMON_TARGET_FIELDS,\n ScalaSourcesGeneratorSourcesField,\n )\n generated_target_cls = ScalaSourceTarget\n copied_fields = COMMON_TARGET_FIELDS\n moved_fields = (\n ScalaDependenciesField,\n ScalaConsumedPluginNamesField,\n JvmResolveField,\n JvmJdkField,\n JvmProvidesTypesField,\n )\n settings_request_cls = ScalaSettingsRequest\n help = \"Generate a `scala_source` target for each file in the `sources` field.\"\n\n\n# -----------------------------------------------------------------------------------------------\n# `scalac_plugin` target\n# -----------------------------------------------------------------------------------------------\n\n\nclass ScalacPluginArtifactField(StringField, AsyncFieldMixin):\n alias = \"artifact\"\n required = True\n value: str\n help = \"The address of a `jvm_artifact` that defines a plugin for `scalac`.\"\n\n\nclass ScalacPluginNameField(StringField):\n alias = \"plugin_name\"\n help = softwrap(\n \"\"\"\n The name that `scalac` should use to load the plugin.\n\n If not set, the plugin name defaults to the target name.\n \"\"\"\n )\n\n\nclass ScalacPluginTarget(Target):\n alias = \"scalac_plugin\"\n core_fields = (\n *COMMON_TARGET_FIELDS,\n ScalacPluginArtifactField,\n ScalacPluginNameField,\n )\n help = softwrap(\n \"\"\"\n A plugin for `scalac`.\n\n Currently only thirdparty plugins are supported. To enable a plugin, define this\n target type, and set the `artifact=` field to the address of a `jvm_artifact` that\n provides the plugin.\n\n If the `scalac`-loaded name of the plugin does not match the target's name,\n additionally set the `plugin_name=` field.\n \"\"\"\n )\n\n\ndef rules():\n return (\n *collect_rules(),\n UnionRule(TargetFilesGeneratorSettingsRequest, ScalaSettingsRequest),\n )\n", "path": "src/python/pants/backend/scala/target_types.py"}, {"content": "# Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom pants.option.option_types import BoolOption\nfrom pants.option.subsystem import Subsystem\n\n\nclass ScalaInferSubsystem(Subsystem):\n options_scope = \"scala-infer\"\n help = \"Options controlling which dependencies will be inferred for Scala targets.\"\n\n imports = BoolOption(\n \"--imports\",\n default=True,\n help=\"Infer a target's dependencies by parsing import statements from sources.\",\n )\n consumed_types = BoolOption(\n \"--consumed-types\",\n default=True,\n help=\"Infer a target's dependencies by parsing consumed types from sources.\",\n )\n", "path": "src/python/pants/backend/scala/subsystems/scala_infer.py"}], "after_files": [{"content": "# Copyright 2020 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import annotations\n\nfrom dataclasses import dataclass\n\nfrom pants.backend.scala.subsystems.scala_infer import ScalaInferSubsystem\nfrom pants.base.deprecated import warn_or_error\nfrom pants.engine.rules import collect_rules, rule\nfrom pants.engine.target import (\n COMMON_TARGET_FIELDS,\n AsyncFieldMixin,\n Dependencies,\n FieldSet,\n MultipleSourcesField,\n SingleSourceField,\n StringField,\n StringSequenceField,\n Target,\n TargetFilesGenerator,\n TargetFilesGeneratorSettings,\n TargetFilesGeneratorSettingsRequest,\n 
generate_multiple_sources_field_help_message,\n)\nfrom pants.engine.unions import UnionRule\nfrom pants.jvm.target_types import (\n JunitTestSourceField,\n JvmJdkField,\n JvmProvidesTypesField,\n JvmResolveField,\n)\nfrom pants.util.strutil import softwrap\n\n\nclass ScalaSettingsRequest(TargetFilesGeneratorSettingsRequest):\n pass\n\n\n@rule\ndef scala_settings_request(\n scala_infer_subsystem: ScalaInferSubsystem, _: ScalaSettingsRequest\n) -> TargetFilesGeneratorSettings:\n if scala_infer_subsystem.options.is_default(\"force_add_siblings_as_dependencies\"):\n warn_or_error(\n removal_version=\"2.14.0.dev0\",\n entity=\"`force_add_siblings_as_dependencies` defaulting to True\",\n hint=softwrap(\n \"\"\"\n Setting this option to true reduces the precision of dependency information.\n That means that you may end up compiling more than is necessary for a particular task,\n and that compilation will be invalidated more frequently than actually necessary.\n However, setting to true may be helpful if compilation fails due to missing dependencies.\n\n We have made several improvements to Pants's Scala dependency inference,\n where we no longer think it's necessary to adding dependencies on sibling targets.\n If you have compilation failures after disabling this option, please consider opening an issue at\n https://github.com/pantsbuild/pants/issues/new so that we can continue to improve Pants's dependency inference.\n\n To opt into the new default early, set `force_add_siblings_as_dependencies = false` in the `[scala_infer]`\n section in `pants.toml`. Otherwise, set to `true` to silence this warning.\n \"\"\"\n ),\n )\n\n return TargetFilesGeneratorSettings(\n add_dependencies_on_all_siblings=scala_infer_subsystem.force_add_siblings_as_dependencies\n or not scala_infer_subsystem.imports\n )\n\n\nclass ScalaSourceField(SingleSourceField):\n expected_file_extensions = (\".scala\",)\n\n\nclass ScalaGeneratorSourcesField(MultipleSourcesField):\n expected_file_extensions = (\".scala\",)\n\n\nclass ScalaDependenciesField(Dependencies):\n pass\n\n\nclass ScalaConsumedPluginNamesField(StringSequenceField):\n help = softwrap(\n \"\"\"\n The names of Scala plugins that this source file requires.\n\n The plugin must be defined by a corresponding `scalac_plugin` AND `jvm_artifact` target,\n and must be present in this target's resolve's lockfile.\n\n If not specified, this will default to the plugins specified in\n `[scalac].plugins_for_resolve` for this target's resolve.\n \"\"\"\n )\n\n alias = \"scalac_plugins\"\n required = False\n\n\n@dataclass(frozen=True)\nclass ScalaFieldSet(FieldSet):\n required_fields = (ScalaSourceField,)\n\n sources: ScalaSourceField\n\n\n@dataclass(frozen=True)\nclass ScalaGeneratorFieldSet(FieldSet):\n required_fields = (ScalaGeneratorSourcesField,)\n\n sources: ScalaGeneratorSourcesField\n\n\n# -----------------------------------------------------------------------------------------------\n# `scalatest_tests`\n# -----------------------------------------------------------------------------------------------\n\n\nclass ScalatestTestSourceField(ScalaSourceField):\n pass\n\n\nclass ScalatestTestTarget(Target):\n alias = \"scalatest_test\"\n core_fields = (\n *COMMON_TARGET_FIELDS,\n ScalaDependenciesField,\n ScalatestTestSourceField,\n ScalaConsumedPluginNamesField,\n JvmResolveField,\n JvmProvidesTypesField,\n JvmJdkField,\n )\n help = \"A single Scala test, run with Scalatest.\"\n\n\nclass ScalatestTestsGeneratorSourcesField(ScalaGeneratorSourcesField):\n default = 
(\"*Spec.scala\", \"*Suite.scala\")\n help = generate_multiple_sources_field_help_message(\n \"Example: `sources=['*Spec.scala', '!SuiteIgnore.scala']`\"\n )\n\n\nclass ScalatestTestsGeneratorTarget(TargetFilesGenerator):\n alias = \"scalatest_tests\"\n core_fields = (\n *COMMON_TARGET_FIELDS,\n ScalatestTestsGeneratorSourcesField,\n )\n generated_target_cls = ScalatestTestTarget\n copied_fields = COMMON_TARGET_FIELDS\n moved_fields = (\n ScalaDependenciesField,\n ScalaConsumedPluginNamesField,\n JvmJdkField,\n JvmProvidesTypesField,\n JvmResolveField,\n )\n settings_request_cls = ScalaSettingsRequest\n help = softwrap(\n f\"\"\"\n Generate a `scalatest_test` target for each file in the `sources` field (defaults to\n all files in the directory matching {ScalatestTestsGeneratorSourcesField.default}).\n \"\"\"\n )\n\n\n# -----------------------------------------------------------------------------------------------\n# `scala_junit_tests`\n# -----------------------------------------------------------------------------------------------\n\n\nclass ScalaJunitTestSourceField(ScalaSourceField, JunitTestSourceField):\n pass\n\n\nclass ScalaJunitTestTarget(Target):\n alias = \"scala_junit_test\"\n core_fields = (\n *COMMON_TARGET_FIELDS,\n ScalaDependenciesField,\n ScalaJunitTestSourceField,\n ScalaConsumedPluginNamesField,\n JvmResolveField,\n JvmProvidesTypesField,\n JvmJdkField,\n )\n help = \"A single Scala test, run with JUnit.\"\n\n\nclass ScalaJunitTestsGeneratorSourcesField(ScalaGeneratorSourcesField):\n default = (\"*Test.scala\",)\n help = generate_multiple_sources_field_help_message(\n \"Example: `sources=['*Test.scala', '!TestIgnore.scala']`\"\n )\n\n\nclass ScalaJunitTestsGeneratorTarget(TargetFilesGenerator):\n alias = \"scala_junit_tests\"\n core_fields = (\n *COMMON_TARGET_FIELDS,\n ScalaJunitTestsGeneratorSourcesField,\n )\n generated_target_cls = ScalaJunitTestTarget\n copied_fields = COMMON_TARGET_FIELDS\n moved_fields = (\n ScalaDependenciesField,\n ScalaConsumedPluginNamesField,\n JvmJdkField,\n JvmProvidesTypesField,\n JvmResolveField,\n )\n settings_request_cls = ScalaSettingsRequest\n help = \"Generate a `scala_junit_test` target for each file in the `sources` field.\"\n\n\n# -----------------------------------------------------------------------------------------------\n# `scala_source` target\n# -----------------------------------------------------------------------------------------------\n\n\nclass ScalaSourceTarget(Target):\n alias = \"scala_source\"\n core_fields = (\n *COMMON_TARGET_FIELDS,\n ScalaDependenciesField,\n ScalaSourceField,\n ScalaConsumedPluginNamesField,\n JvmResolveField,\n JvmProvidesTypesField,\n JvmJdkField,\n )\n help = \"A single Scala source file containing application or library code.\"\n\n\n# -----------------------------------------------------------------------------------------------\n# `scala_sources` target generator\n# -----------------------------------------------------------------------------------------------\n\n\nclass ScalaSourcesGeneratorSourcesField(ScalaGeneratorSourcesField):\n default = (\n \"*.scala\",\n *(f\"!{pat}\" for pat in (ScalaJunitTestsGeneratorSourcesField.default)),\n *(f\"!{pat}\" for pat in (ScalatestTestsGeneratorSourcesField.default)),\n )\n help = generate_multiple_sources_field_help_message(\n \"Example: `sources=['Example.scala', 'New*.scala', '!OldIgnore.scala']`\"\n )\n\n\nclass ScalaSourcesGeneratorTarget(TargetFilesGenerator):\n alias = \"scala_sources\"\n core_fields = (\n *COMMON_TARGET_FIELDS,\n 
ScalaSourcesGeneratorSourcesField,\n )\n generated_target_cls = ScalaSourceTarget\n copied_fields = COMMON_TARGET_FIELDS\n moved_fields = (\n ScalaDependenciesField,\n ScalaConsumedPluginNamesField,\n JvmResolveField,\n JvmJdkField,\n JvmProvidesTypesField,\n )\n settings_request_cls = ScalaSettingsRequest\n help = \"Generate a `scala_source` target for each file in the `sources` field.\"\n\n\n# -----------------------------------------------------------------------------------------------\n# `scalac_plugin` target\n# -----------------------------------------------------------------------------------------------\n\n\nclass ScalacPluginArtifactField(StringField, AsyncFieldMixin):\n alias = \"artifact\"\n required = True\n value: str\n help = \"The address of a `jvm_artifact` that defines a plugin for `scalac`.\"\n\n\nclass ScalacPluginNameField(StringField):\n alias = \"plugin_name\"\n help = softwrap(\n \"\"\"\n The name that `scalac` should use to load the plugin.\n\n If not set, the plugin name defaults to the target name.\n \"\"\"\n )\n\n\nclass ScalacPluginTarget(Target):\n alias = \"scalac_plugin\"\n core_fields = (\n *COMMON_TARGET_FIELDS,\n ScalacPluginArtifactField,\n ScalacPluginNameField,\n )\n help = softwrap(\n \"\"\"\n A plugin for `scalac`.\n\n Currently only thirdparty plugins are supported. To enable a plugin, define this\n target type, and set the `artifact=` field to the address of a `jvm_artifact` that\n provides the plugin.\n\n If the `scalac`-loaded name of the plugin does not match the target's name,\n additionally set the `plugin_name=` field.\n \"\"\"\n )\n\n\ndef rules():\n return (\n *collect_rules(),\n UnionRule(TargetFilesGeneratorSettingsRequest, ScalaSettingsRequest),\n )\n", "path": "src/python/pants/backend/scala/target_types.py"}, {"content": "# Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom pants.option.option_types import BoolOption\nfrom pants.option.subsystem import Subsystem\nfrom pants.util.strutil import softwrap\n\n\nclass ScalaInferSubsystem(Subsystem):\n options_scope = \"scala-infer\"\n help = \"Options controlling which dependencies will be inferred for Scala targets.\"\n\n imports = BoolOption(\n \"--imports\",\n default=True,\n help=\"Infer a target's dependencies by parsing import statements from sources.\",\n )\n consumed_types = BoolOption(\n \"--consumed-types\",\n default=True,\n help=\"Infer a target's dependencies by parsing consumed types from sources.\",\n )\n force_add_siblings_as_dependencies = BoolOption(\n \"--force-add-siblings-as-dependencies\",\n default=True,\n help=softwrap(\n \"\"\"\n If true, add a dependency on all scala_source targets generated by the same scala_sources target generator.\n If false, the dependencies will only be added if [scala-infer].imports is also set to false.\n\n Setting this option to true reduces the precision of dependency information.\n That means that you may end up compiling more than is necessary for a particular task,\n and that compilation will be invalidated more frequently than actually necessary.\n However, setting to true may be helpful if compilation fails due to missing dependencies.\n \"\"\"\n ),\n )\n", "path": "src/python/pants/backend/scala/subsystems/scala_infer.py"}]}
| 3,192 | 851 |
gh_patches_debug_309
|
rasdani/github-patches
|
git_diff
|
wemake-services__wemake-python-styleguide-195
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix documentation main page's header
The header is gone:
<img width="1032" alt="2018-10-03 0 18 01" src="https://user-images.githubusercontent.com/4660275/46377643-d0ce1080-c6a1-11e8-950b-d2d0c515dee1.png">
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wemake_python_styleguide/visitors/ast/numbers.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 import ast
4 from typing import Optional
5
6 from wemake_python_styleguide.constants import MAGIC_NUMBERS_WHITELIST
7 from wemake_python_styleguide.violations.best_practices import (
8 MagicNumberViolation,
9 )
10 from wemake_python_styleguide.visitors.base import BaseNodeVisitor
11
12
13 class MagicNumberVisitor(BaseNodeVisitor):
14 """Checks magic numbers used in the code."""
15
16 _ALLOWED_PARENTS = (
17 ast.Assign,
18
19 # Constructor usages:
20 ast.FunctionDef,
21 ast.arguments,
22
23 # Primitives:
24 ast.List,
25 ast.Dict,
26 ast.Set,
27 ast.Tuple,
28 )
29
30 _PROXY_PARENTS = (
31 ast.UnaryOp,
32 )
33
34 def _get_real_parent(self, node: Optional[ast.AST]) -> Optional[ast.AST]:
35 """
36 Returns real number's parent.
37
38 What can go wrong?
39
40 1. Number can be negative: ``x = -1``,
41 so ``1`` has ``UnaryOp`` as parent, but should return ``Assign``
42
43 """
44 parent = getattr(node, 'parent', None)
45 if isinstance(parent, self._PROXY_PARENTS):
46 return self._get_real_parent(parent)
47 return parent
48
49 def _check_is_magic(self, node: ast.Num) -> None:
50 parent = self._get_real_parent(node)
51 if isinstance(parent, self._ALLOWED_PARENTS):
52 return
53
54 if node.n in MAGIC_NUMBERS_WHITELIST:
55 return
56
57 if isinstance(node.n, int) and node.n <= 10:
58 return
59
60 self.add_violation(MagicNumberViolation(node, text=str(node.n)))
61
62 def visit_Num(self, node: ast.Num) -> None:
63 """
64 Checks numbers not to be magic constants inside the code.
65
66 Raises:
67 MagicNumberViolation
68
69 """
70 self._check_is_magic(node)
71 self.generic_visit(node)
72
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/wemake_python_styleguide/visitors/ast/numbers.py b/wemake_python_styleguide/visitors/ast/numbers.py
--- a/wemake_python_styleguide/visitors/ast/numbers.py
+++ b/wemake_python_styleguide/visitors/ast/numbers.py
@@ -27,6 +27,7 @@
ast.Tuple,
)
+ # TODO: make consistent naming rules for class attributes:
_PROXY_PARENTS = (
ast.UnaryOp,
)
|
{"golden_diff": "diff --git a/wemake_python_styleguide/visitors/ast/numbers.py b/wemake_python_styleguide/visitors/ast/numbers.py\n--- a/wemake_python_styleguide/visitors/ast/numbers.py\n+++ b/wemake_python_styleguide/visitors/ast/numbers.py\n@@ -27,6 +27,7 @@\n ast.Tuple,\n )\n \n+ # TODO: make consistent naming rules for class attributes:\n _PROXY_PARENTS = (\n ast.UnaryOp,\n )\n", "issue": "Fix documentation main page's header\nThe header is gone:\r\n<img width=\"1032\" alt=\"2018-10-03 0 18 01\" src=\"https://user-images.githubusercontent.com/4660275/46377643-d0ce1080-c6a1-11e8-950b-d2d0c515dee1.png\">\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport ast\nfrom typing import Optional\n\nfrom wemake_python_styleguide.constants import MAGIC_NUMBERS_WHITELIST\nfrom wemake_python_styleguide.violations.best_practices import (\n MagicNumberViolation,\n)\nfrom wemake_python_styleguide.visitors.base import BaseNodeVisitor\n\n\nclass MagicNumberVisitor(BaseNodeVisitor):\n \"\"\"Checks magic numbers used in the code.\"\"\"\n\n _ALLOWED_PARENTS = (\n ast.Assign,\n\n # Constructor usages:\n ast.FunctionDef,\n ast.arguments,\n\n # Primitives:\n ast.List,\n ast.Dict,\n ast.Set,\n ast.Tuple,\n )\n\n _PROXY_PARENTS = (\n ast.UnaryOp,\n )\n\n def _get_real_parent(self, node: Optional[ast.AST]) -> Optional[ast.AST]:\n \"\"\"\n Returns real number's parent.\n\n What can go wrong?\n\n 1. Number can be negative: ``x = -1``,\n so ``1`` has ``UnaryOp`` as parent, but should return ``Assign``\n\n \"\"\"\n parent = getattr(node, 'parent', None)\n if isinstance(parent, self._PROXY_PARENTS):\n return self._get_real_parent(parent)\n return parent\n\n def _check_is_magic(self, node: ast.Num) -> None:\n parent = self._get_real_parent(node)\n if isinstance(parent, self._ALLOWED_PARENTS):\n return\n\n if node.n in MAGIC_NUMBERS_WHITELIST:\n return\n\n if isinstance(node.n, int) and node.n <= 10:\n return\n\n self.add_violation(MagicNumberViolation(node, text=str(node.n)))\n\n def visit_Num(self, node: ast.Num) -> None:\n \"\"\"\n Checks numbers not to be magic constants inside the code.\n\n Raises:\n MagicNumberViolation\n\n \"\"\"\n self._check_is_magic(node)\n self.generic_visit(node)\n", "path": "wemake_python_styleguide/visitors/ast/numbers.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport ast\nfrom typing import Optional\n\nfrom wemake_python_styleguide.constants import MAGIC_NUMBERS_WHITELIST\nfrom wemake_python_styleguide.violations.best_practices import (\n MagicNumberViolation,\n)\nfrom wemake_python_styleguide.visitors.base import BaseNodeVisitor\n\n\nclass MagicNumberVisitor(BaseNodeVisitor):\n \"\"\"Checks magic numbers used in the code.\"\"\"\n\n _ALLOWED_PARENTS = (\n ast.Assign,\n\n # Constructor usages:\n ast.FunctionDef,\n ast.arguments,\n\n # Primitives:\n ast.List,\n ast.Dict,\n ast.Set,\n ast.Tuple,\n )\n\n # TODO: make consistent naming rules for class attributes:\n _PROXY_PARENTS = (\n ast.UnaryOp,\n )\n\n def _get_real_parent(self, node: Optional[ast.AST]) -> Optional[ast.AST]:\n \"\"\"\n Returns real number's parent.\n\n What can go wrong?\n\n 1. 
Number can be negative: ``x = -1``,\n so ``1`` has ``UnaryOp`` as parent, but should return ``Assign``\n\n \"\"\"\n parent = getattr(node, 'parent', None)\n if isinstance(parent, self._PROXY_PARENTS):\n return self._get_real_parent(parent)\n return parent\n\n def _check_is_magic(self, node: ast.Num) -> None:\n parent = self._get_real_parent(node)\n if isinstance(parent, self._ALLOWED_PARENTS):\n return\n\n if node.n in MAGIC_NUMBERS_WHITELIST:\n return\n\n if isinstance(node.n, int) and node.n <= 10:\n return\n\n self.add_violation(MagicNumberViolation(node, text=str(node.n)))\n\n def visit_Num(self, node: ast.Num) -> None:\n \"\"\"\n Checks numbers not to be magic constants inside the code.\n\n Raises:\n MagicNumberViolation\n\n \"\"\"\n self._check_is_magic(node)\n self.generic_visit(node)\n", "path": "wemake_python_styleguide/visitors/ast/numbers.py"}]}
| 933 | 117 |
gh_patches_debug_37319
|
rasdani/github-patches
|
git_diff
|
pytorch__audio-1066
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Save signal with AMR-NB format
Conversion with `sox input.wav -r 8k ouptut.amr-nb` command works fine, `libsox-fmt-all` is installed.
```python
# saving to GSM works:
torchaudio.save('output.gsm', signal, 8000)
# saving to AMR-NB does not:
torchaudio.save('output.amr-nb', signal, 8000)
#formats: no handler for given file type `amr-nb'
#Traceback (most recent call last):
# File "<stdin>", line 1, in <module>
# File "/miniconda/lib/python3.7/site-packages/torchaudio/__init__.py", line 133, in save
# return save_encinfo(filepath, src, channels_first, si)
# File "/miniconda/lib/python3.7/site-packages/torchaudio/__init__.py", line 202, in save_encinfo
# _torch_sox.write_audio_file(filepath, src, signalinfo, encodinginfo, filetype)
#RuntimeError: Error writing audio file: could not open file for writing
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `build_tools/setup_helpers/extension.py`
Content:
```
1 import os
2 import platform
3 import subprocess
4 from pathlib import Path
5
6 from torch.utils.cpp_extension import (
7 CppExtension,
8 BuildExtension as TorchBuildExtension
9 )
10
11 __all__ = [
12 'get_ext_modules',
13 'BuildExtension',
14 ]
15
16 _THIS_DIR = Path(__file__).parent.resolve()
17 _ROOT_DIR = _THIS_DIR.parent.parent.resolve()
18 _CSRC_DIR = _ROOT_DIR / 'torchaudio' / 'csrc'
19 _TP_BASE_DIR = _ROOT_DIR / 'third_party'
20 _TP_INSTALL_DIR = _TP_BASE_DIR / 'install'
21
22
23 def _get_build_sox():
24 val = os.environ.get('BUILD_SOX', '0')
25 trues = ['1', 'true', 'TRUE', 'on', 'ON', 'yes', 'YES']
26 falses = ['0', 'false', 'FALSE', 'off', 'OFF', 'no', 'NO']
27 if val in trues:
28 return True
29 if val not in falses:
30 print(
31 f'WARNING: Unexpected environment variable value `BUILD_SOX={val}`. '
32 f'Expected one of {trues + falses}')
33 return False
34
35
36 _BUILD_SOX = _get_build_sox()
37
38
39 def _get_eca(debug):
40 eca = []
41 if debug:
42 eca += ["-O0", "-g"]
43 else:
44 eca += ["-O3"]
45 return eca
46
47
48 def _get_ela(debug):
49 ela = []
50 if debug:
51 if platform.system() == "Windows":
52 ela += ["/DEBUG:FULL"]
53 else:
54 ela += ["-O0", "-g"]
55 else:
56 ela += ["-O3"]
57 return ela
58
59
60 def _get_srcs():
61 return [str(p) for p in _CSRC_DIR.glob('**/*.cpp')]
62
63
64 def _get_include_dirs():
65 dirs = [
66 str(_ROOT_DIR),
67 ]
68 if _BUILD_SOX:
69 dirs.append(str(_TP_INSTALL_DIR / 'include'))
70 return dirs
71
72
73 def _get_extra_objects():
74 objs = []
75 if _BUILD_SOX:
76 # NOTE: The order of the library listed bellow matters.
77 #
78 # (the most important thing is that dependencies come after a library
79 # e.g., sox comes first, flac/vorbis comes before ogg, and
80 # vorbisenc/vorbisfile comes before vorbis
81 libs = [
82 'libsox.a',
83 'libmad.a',
84 'libFLAC.a',
85 'libmp3lame.a',
86 'libopusfile.a',
87 'libopus.a',
88 'libvorbisenc.a',
89 'libvorbisfile.a',
90 'libvorbis.a',
91 'libogg.a',
92 ]
93 for lib in libs:
94 objs.append(str(_TP_INSTALL_DIR / 'lib' / lib))
95 return objs
96
97
98 def _get_libraries():
99 return [] if _BUILD_SOX else ['sox']
100
101
102 def _build_third_party():
103 build_dir = str(_TP_BASE_DIR / 'build')
104 os.makedirs(build_dir, exist_ok=True)
105 subprocess.run(
106 args=['cmake', '..'],
107 cwd=build_dir,
108 check=True,
109 )
110 subprocess.run(
111 args=['cmake', '--build', '.'],
112 cwd=build_dir,
113 check=True,
114 )
115
116
117 _EXT_NAME = 'torchaudio._torchaudio'
118
119
120 def get_ext_modules(debug=False):
121 if platform.system() == 'Windows':
122 return None
123 return [
124 CppExtension(
125 _EXT_NAME,
126 _get_srcs(),
127 libraries=_get_libraries(),
128 include_dirs=_get_include_dirs(),
129 extra_compile_args=_get_eca(debug),
130 extra_objects=_get_extra_objects(),
131 extra_link_args=_get_ela(debug),
132 ),
133 ]
134
135
136 class BuildExtension(TorchBuildExtension):
137 def build_extension(self, ext):
138 if ext.name == _EXT_NAME and _BUILD_SOX:
139 _build_third_party()
140 super().build_extension(ext)
141
```
Path: `torchaudio/backend/sox_io_backend.py`
Content:
```
1 from typing import Tuple, Optional
2
3 import torch
4 from torchaudio._internal import (
5 module_utils as _mod_utils,
6 )
7
8 from .common import AudioMetaData
9
10
11 @_mod_utils.requires_module('torchaudio._torchaudio')
12 def info(filepath: str) -> AudioMetaData:
13 """Get signal information of an audio file.
14
15 Args:
16 filepath (str or pathlib.Path):
17 Path to audio file. This function also handles ``pathlib.Path`` objects,
18 but is annotated as ``str`` for TorchScript compatibility.
19
20 Returns:
21 AudioMetaData: Metadata of the given audio.
22 """
23 # Cast to str in case type is `pathlib.Path`
24 filepath = str(filepath)
25 sinfo = torch.ops.torchaudio.sox_io_get_info(filepath)
26 return AudioMetaData(sinfo.get_sample_rate(), sinfo.get_num_frames(), sinfo.get_num_channels())
27
28
29 @_mod_utils.requires_module('torchaudio._torchaudio')
30 def load(
31 filepath: str,
32 frame_offset: int = 0,
33 num_frames: int = -1,
34 normalize: bool = True,
35 channels_first: bool = True,
36 ) -> Tuple[torch.Tensor, int]:
37 """Load audio data from file.
38
39 Note:
40 This function can handle all the codecs that underlying libsox can handle,
41 however it is tested on the following formats;
42
43 * WAV
44
45 * 32-bit floating-point
46 * 32-bit signed integer
47 * 16-bit signed integer
48 * 8-bit unsigned integer
49
50 * MP3
51 * FLAC
52 * OGG/VORBIS
53 * OPUS
54 * SPHERE
55
56 To load ``MP3``, ``FLAC``, ``OGG/VORBIS``, ``OPUS`` and other codecs ``libsox`` does not
57 handle natively, your installation of ``torchaudio`` has to be linked to ``libsox``
58 and corresponding codec libraries such as ``libmad`` or ``libmp3lame`` etc.
59
60 By default (``normalize=True``, ``channels_first=True``), this function returns Tensor with
61 ``float32`` dtype and the shape of ``[channel, time]``.
62 The samples are normalized to fit in the range of ``[-1.0, 1.0]``.
63
64 When the input format is WAV with integer type, such as 32-bit signed integer, 16-bit
65 signed integer and 8-bit unsigned integer (24-bit signed integer is not supported),
66 by providing ``normalize=False``, this function can return integer Tensor, where the samples
67 are expressed within the whole range of the corresponding dtype, that is, ``int32`` tensor
68 for 32-bit signed PCM, ``int16`` for 16-bit signed PCM and ``uint8`` for 8-bit unsigned PCM.
69
70 ``normalize`` parameter has no effect on 32-bit floating-point WAV and other formats, such as
71 ``flac`` and ``mp3``.
72 For these formats, this function always returns ``float32`` Tensor with values normalized to
73 ``[-1.0, 1.0]``.
74
75 Args:
76 filepath (str or pathlib.Path):
77 Path to audio file. This function also handles ``pathlib.Path`` objects, but is
78 annotated as ``str`` for TorchScript compiler compatibility.
79 frame_offset (int):
80 Number of frames to skip before start reading data.
81 num_frames (int):
82 Maximum number of frames to read. ``-1`` reads all the remaining samples,
83 starting from ``frame_offset``.
84 This function may return the less number of frames if there is not enough
85 frames in the given file.
86 normalize (bool):
87 When ``True``, this function always return ``float32``, and sample values are
88 normalized to ``[-1.0, 1.0]``.
89 If input file is integer WAV, giving ``False`` will change the resulting Tensor type to
90 integer type.
91 This argument has no effect for formats other than integer WAV type.
92 channels_first (bool):
93 When True, the returned Tensor has dimension ``[channel, time]``.
94 Otherwise, the returned Tensor's dimension is ``[time, channel]``.
95
96 Returns:
97 torch.Tensor:
98 If the input file has integer wav format and normalization is off, then it has
99 integer type, else ``float32`` type. If ``channels_first=True``, it has
100 ``[channel, time]`` else ``[time, channel]``.
101 """
102 # Cast to str in case type is `pathlib.Path`
103 filepath = str(filepath)
104 signal = torch.ops.torchaudio.sox_io_load_audio_file(
105 filepath, frame_offset, num_frames, normalize, channels_first)
106 return signal.get_tensor(), signal.get_sample_rate()
107
108
109 @_mod_utils.requires_module('torchaudio._torchaudio')
110 def save(
111 filepath: str,
112 src: torch.Tensor,
113 sample_rate: int,
114 channels_first: bool = True,
115 compression: Optional[float] = None,
116 ):
117 """Save audio data to file.
118
119 Note:
120 Supported formats are;
121
122 * WAV
123
124 * 32-bit floating-point
125 * 32-bit signed integer
126 * 16-bit signed integer
127 * 8-bit unsigned integer
128
129 * MP3
130 * FLAC
131 * OGG/VORBIS
132 * SPHERE
133
134 To save ``MP3``, ``FLAC``, ``OGG/VORBIS``, and other codecs ``libsox`` does not
135 handle natively, your installation of ``torchaudio`` has to be linked to ``libsox``
136 and corresponding codec libraries such as ``libmad`` or ``libmp3lame`` etc.
137
138 Args:
139 filepath (str or pathlib.Path):
140 Path to save file. This function also handles ``pathlib.Path`` objects, but is annotated
141 as ``str`` for TorchScript compiler compatibility.
142 tensor (torch.Tensor): Audio data to save. must be 2D tensor.
143 sample_rate (int): sampling rate
144 channels_first (bool):
145 If ``True``, the given tensor is interpreted as ``[channel, time]``,
146 otherwise ``[time, channel]``.
147 compression (Optional[float]):
148 Used for formats other than WAV. This corresponds to ``-C`` option of ``sox`` command.
149
150 * | ``MP3``: Either bitrate (in ``kbps``) with quality factor, such as ``128.2``, or
151 | VBR encoding with quality factor such as ``-4.2``. Default: ``-4.5``.
152 * | ``FLAC``: compression level. Whole number from ``0`` to ``8``.
153 | ``8`` is default and highest compression.
154 * | ``OGG/VORBIS``: number from ``-1`` to ``10``; ``-1`` is the highest compression
155 | and lowest quality. Default: ``3``.
156
157 See the detail at http://sox.sourceforge.net/soxformat.html.
158 """
159 # Cast to str in case type is `pathlib.Path`
160 filepath = str(filepath)
161 if compression is None:
162 ext = str(filepath).split('.')[-1].lower()
163 if ext in ['wav', 'sph']:
164 compression = 0.
165 elif ext == 'mp3':
166 compression = -4.5
167 elif ext == 'flac':
168 compression = 8.
169 elif ext in ['ogg', 'vorbis']:
170 compression = 3.
171 else:
172 raise RuntimeError(f'Unsupported file type: "{ext}"')
173 signal = torch.classes.torchaudio.TensorSignal(src, sample_rate, channels_first)
174 torch.ops.torchaudio.sox_io_save_audio_file(filepath, signal, compression)
175
176
177 @_mod_utils.requires_module('torchaudio._torchaudio')
178 @_mod_utils.deprecated('Please use "torchaudio.load".', '0.9.0')
179 def load_wav(
180 filepath: str,
181 frame_offset: int = 0,
182 num_frames: int = -1,
183 channels_first: bool = True,
184 ) -> Tuple[torch.Tensor, int]:
185 """Load wave file.
186
187 This function is defined only for the purpose of compatibility against other backend
188 for simple usecases, such as ``torchaudio.load_wav(filepath)``.
189 The implementation is same as :py:func:`load`.
190 """
191 return load(filepath, frame_offset, num_frames, normalize=False, channels_first=channels_first)
192
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/build_tools/setup_helpers/extension.py b/build_tools/setup_helpers/extension.py
--- a/build_tools/setup_helpers/extension.py
+++ b/build_tools/setup_helpers/extension.py
@@ -89,6 +89,8 @@
'libvorbisfile.a',
'libvorbis.a',
'libogg.a',
+ 'libopencore-amrnb.a',
+ 'libopencore-amrwb.a',
]
for lib in libs:
objs.append(str(_TP_INSTALL_DIR / 'lib' / lib))
diff --git a/torchaudio/backend/sox_io_backend.py b/torchaudio/backend/sox_io_backend.py
--- a/torchaudio/backend/sox_io_backend.py
+++ b/torchaudio/backend/sox_io_backend.py
@@ -40,18 +40,19 @@
This function can handle all the codecs that underlying libsox can handle,
however it is tested on the following formats;
- * WAV
+ * WAV, AMB
* 32-bit floating-point
* 32-bit signed integer
* 16-bit signed integer
- * 8-bit unsigned integer
+ * 8-bit unsigned integer (WAV only)
* MP3
* FLAC
* OGG/VORBIS
* OPUS
* SPHERE
+ * AMR-NB
To load ``MP3``, ``FLAC``, ``OGG/VORBIS``, ``OPUS`` and other codecs ``libsox`` does not
handle natively, your installation of ``torchaudio`` has to be linked to ``libsox``
@@ -119,7 +120,7 @@
Note:
Supported formats are;
- * WAV
+ * WAV, AMB
* 32-bit floating-point
* 32-bit signed integer
@@ -130,6 +131,7 @@
* FLAC
* OGG/VORBIS
* SPHERE
+ * AMR-NB
To save ``MP3``, ``FLAC``, ``OGG/VORBIS``, and other codecs ``libsox`` does not
handle natively, your installation of ``torchaudio`` has to be linked to ``libsox``
@@ -160,7 +162,7 @@
filepath = str(filepath)
if compression is None:
ext = str(filepath).split('.')[-1].lower()
- if ext in ['wav', 'sph']:
+ if ext in ['wav', 'sph', 'amb', 'amr-nb']:
compression = 0.
elif ext == 'mp3':
compression = -4.5
|
{"golden_diff": "diff --git a/build_tools/setup_helpers/extension.py b/build_tools/setup_helpers/extension.py\n--- a/build_tools/setup_helpers/extension.py\n+++ b/build_tools/setup_helpers/extension.py\n@@ -89,6 +89,8 @@\n 'libvorbisfile.a',\n 'libvorbis.a',\n 'libogg.a',\n+ 'libopencore-amrnb.a',\n+ 'libopencore-amrwb.a',\n ]\n for lib in libs:\n objs.append(str(_TP_INSTALL_DIR / 'lib' / lib))\ndiff --git a/torchaudio/backend/sox_io_backend.py b/torchaudio/backend/sox_io_backend.py\n--- a/torchaudio/backend/sox_io_backend.py\n+++ b/torchaudio/backend/sox_io_backend.py\n@@ -40,18 +40,19 @@\n This function can handle all the codecs that underlying libsox can handle,\n however it is tested on the following formats;\n \n- * WAV\n+ * WAV, AMB\n \n * 32-bit floating-point\n * 32-bit signed integer\n * 16-bit signed integer\n- * 8-bit unsigned integer\n+ * 8-bit unsigned integer (WAV only)\n \n * MP3\n * FLAC\n * OGG/VORBIS\n * OPUS\n * SPHERE\n+ * AMR-NB\n \n To load ``MP3``, ``FLAC``, ``OGG/VORBIS``, ``OPUS`` and other codecs ``libsox`` does not\n handle natively, your installation of ``torchaudio`` has to be linked to ``libsox``\n@@ -119,7 +120,7 @@\n Note:\n Supported formats are;\n \n- * WAV\n+ * WAV, AMB\n \n * 32-bit floating-point\n * 32-bit signed integer\n@@ -130,6 +131,7 @@\n * FLAC\n * OGG/VORBIS\n * SPHERE\n+ * AMR-NB\n \n To save ``MP3``, ``FLAC``, ``OGG/VORBIS``, and other codecs ``libsox`` does not\n handle natively, your installation of ``torchaudio`` has to be linked to ``libsox``\n@@ -160,7 +162,7 @@\n filepath = str(filepath)\n if compression is None:\n ext = str(filepath).split('.')[-1].lower()\n- if ext in ['wav', 'sph']:\n+ if ext in ['wav', 'sph', 'amb', 'amr-nb']:\n compression = 0.\n elif ext == 'mp3':\n compression = -4.5\n", "issue": "Save signal with AMR-NB format\nConversion with `sox input.wav -r 8k ouptut.amr-nb` command works fine, `libsox-fmt-all` is installed.\r\n\r\n```python\r\n# saving to GSM works:\r\ntorchaudio.save('output.gsm', signal, 8000)\r\n\r\n# saving to AMR-NB does not:\r\ntorchaudio.save('output.amr-nb', signal, 8000)\r\n#formats: no handler for given file type `amr-nb'\r\n#Traceback (most recent call last):\r\n# File \"<stdin>\", line 1, in <module>\r\n# File \"/miniconda/lib/python3.7/site-packages/torchaudio/__init__.py\", line 133, in save\r\n# return save_encinfo(filepath, src, channels_first, si)\r\n# File \"/miniconda/lib/python3.7/site-packages/torchaudio/__init__.py\", line 202, in save_encinfo\r\n# _torch_sox.write_audio_file(filepath, src, signalinfo, encodinginfo, filetype)\r\n#RuntimeError: Error writing audio file: could not open file for writing\r\n```\n", "before_files": [{"content": "import os\nimport platform\nimport subprocess\nfrom pathlib import Path\n\nfrom torch.utils.cpp_extension import (\n CppExtension,\n BuildExtension as TorchBuildExtension\n)\n\n__all__ = [\n 'get_ext_modules',\n 'BuildExtension',\n]\n\n_THIS_DIR = Path(__file__).parent.resolve()\n_ROOT_DIR = _THIS_DIR.parent.parent.resolve()\n_CSRC_DIR = _ROOT_DIR / 'torchaudio' / 'csrc'\n_TP_BASE_DIR = _ROOT_DIR / 'third_party'\n_TP_INSTALL_DIR = _TP_BASE_DIR / 'install'\n\n\ndef _get_build_sox():\n val = os.environ.get('BUILD_SOX', '0')\n trues = ['1', 'true', 'TRUE', 'on', 'ON', 'yes', 'YES']\n falses = ['0', 'false', 'FALSE', 'off', 'OFF', 'no', 'NO']\n if val in trues:\n return True\n if val not in falses:\n print(\n f'WARNING: Unexpected environment variable value `BUILD_SOX={val}`. 
'\n f'Expected one of {trues + falses}')\n return False\n\n\n_BUILD_SOX = _get_build_sox()\n\n\ndef _get_eca(debug):\n eca = []\n if debug:\n eca += [\"-O0\", \"-g\"]\n else:\n eca += [\"-O3\"]\n return eca\n\n\ndef _get_ela(debug):\n ela = []\n if debug:\n if platform.system() == \"Windows\":\n ela += [\"/DEBUG:FULL\"]\n else:\n ela += [\"-O0\", \"-g\"]\n else:\n ela += [\"-O3\"]\n return ela\n\n\ndef _get_srcs():\n return [str(p) for p in _CSRC_DIR.glob('**/*.cpp')]\n\n\ndef _get_include_dirs():\n dirs = [\n str(_ROOT_DIR),\n ]\n if _BUILD_SOX:\n dirs.append(str(_TP_INSTALL_DIR / 'include'))\n return dirs\n\n\ndef _get_extra_objects():\n objs = []\n if _BUILD_SOX:\n # NOTE: The order of the library listed bellow matters.\n #\n # (the most important thing is that dependencies come after a library\n # e.g., sox comes first, flac/vorbis comes before ogg, and\n # vorbisenc/vorbisfile comes before vorbis\n libs = [\n 'libsox.a',\n 'libmad.a',\n 'libFLAC.a',\n 'libmp3lame.a',\n 'libopusfile.a',\n 'libopus.a',\n 'libvorbisenc.a',\n 'libvorbisfile.a',\n 'libvorbis.a',\n 'libogg.a',\n ]\n for lib in libs:\n objs.append(str(_TP_INSTALL_DIR / 'lib' / lib))\n return objs\n\n\ndef _get_libraries():\n return [] if _BUILD_SOX else ['sox']\n\n\ndef _build_third_party():\n build_dir = str(_TP_BASE_DIR / 'build')\n os.makedirs(build_dir, exist_ok=True)\n subprocess.run(\n args=['cmake', '..'],\n cwd=build_dir,\n check=True,\n )\n subprocess.run(\n args=['cmake', '--build', '.'],\n cwd=build_dir,\n check=True,\n )\n\n\n_EXT_NAME = 'torchaudio._torchaudio'\n\n\ndef get_ext_modules(debug=False):\n if platform.system() == 'Windows':\n return None\n return [\n CppExtension(\n _EXT_NAME,\n _get_srcs(),\n libraries=_get_libraries(),\n include_dirs=_get_include_dirs(),\n extra_compile_args=_get_eca(debug),\n extra_objects=_get_extra_objects(),\n extra_link_args=_get_ela(debug),\n ),\n ]\n\n\nclass BuildExtension(TorchBuildExtension):\n def build_extension(self, ext):\n if ext.name == _EXT_NAME and _BUILD_SOX:\n _build_third_party()\n super().build_extension(ext)\n", "path": "build_tools/setup_helpers/extension.py"}, {"content": "from typing import Tuple, Optional\n\nimport torch\nfrom torchaudio._internal import (\n module_utils as _mod_utils,\n)\n\nfrom .common import AudioMetaData\n\n\n@_mod_utils.requires_module('torchaudio._torchaudio')\ndef info(filepath: str) -> AudioMetaData:\n \"\"\"Get signal information of an audio file.\n\n Args:\n filepath (str or pathlib.Path):\n Path to audio file. 
This function also handles ``pathlib.Path`` objects,\n but is annotated as ``str`` for TorchScript compatibility.\n\n Returns:\n AudioMetaData: Metadata of the given audio.\n \"\"\"\n # Cast to str in case type is `pathlib.Path`\n filepath = str(filepath)\n sinfo = torch.ops.torchaudio.sox_io_get_info(filepath)\n return AudioMetaData(sinfo.get_sample_rate(), sinfo.get_num_frames(), sinfo.get_num_channels())\n\n\n@_mod_utils.requires_module('torchaudio._torchaudio')\ndef load(\n filepath: str,\n frame_offset: int = 0,\n num_frames: int = -1,\n normalize: bool = True,\n channels_first: bool = True,\n) -> Tuple[torch.Tensor, int]:\n \"\"\"Load audio data from file.\n\n Note:\n This function can handle all the codecs that underlying libsox can handle,\n however it is tested on the following formats;\n\n * WAV\n\n * 32-bit floating-point\n * 32-bit signed integer\n * 16-bit signed integer\n * 8-bit unsigned integer\n\n * MP3\n * FLAC\n * OGG/VORBIS\n * OPUS\n * SPHERE\n\n To load ``MP3``, ``FLAC``, ``OGG/VORBIS``, ``OPUS`` and other codecs ``libsox`` does not\n handle natively, your installation of ``torchaudio`` has to be linked to ``libsox``\n and corresponding codec libraries such as ``libmad`` or ``libmp3lame`` etc.\n\n By default (``normalize=True``, ``channels_first=True``), this function returns Tensor with\n ``float32`` dtype and the shape of ``[channel, time]``.\n The samples are normalized to fit in the range of ``[-1.0, 1.0]``.\n\n When the input format is WAV with integer type, such as 32-bit signed integer, 16-bit\n signed integer and 8-bit unsigned integer (24-bit signed integer is not supported),\n by providing ``normalize=False``, this function can return integer Tensor, where the samples\n are expressed within the whole range of the corresponding dtype, that is, ``int32`` tensor\n for 32-bit signed PCM, ``int16`` for 16-bit signed PCM and ``uint8`` for 8-bit unsigned PCM.\n\n ``normalize`` parameter has no effect on 32-bit floating-point WAV and other formats, such as\n ``flac`` and ``mp3``.\n For these formats, this function always returns ``float32`` Tensor with values normalized to\n ``[-1.0, 1.0]``.\n\n Args:\n filepath (str or pathlib.Path):\n Path to audio file. This function also handles ``pathlib.Path`` objects, but is\n annotated as ``str`` for TorchScript compiler compatibility.\n frame_offset (int):\n Number of frames to skip before start reading data.\n num_frames (int):\n Maximum number of frames to read. ``-1`` reads all the remaining samples,\n starting from ``frame_offset``.\n This function may return the less number of frames if there is not enough\n frames in the given file.\n normalize (bool):\n When ``True``, this function always return ``float32``, and sample values are\n normalized to ``[-1.0, 1.0]``.\n If input file is integer WAV, giving ``False`` will change the resulting Tensor type to\n integer type.\n This argument has no effect for formats other than integer WAV type.\n channels_first (bool):\n When True, the returned Tensor has dimension ``[channel, time]``.\n Otherwise, the returned Tensor's dimension is ``[time, channel]``.\n\n Returns:\n torch.Tensor:\n If the input file has integer wav format and normalization is off, then it has\n integer type, else ``float32`` type. 
If ``channels_first=True``, it has\n ``[channel, time]`` else ``[time, channel]``.\n \"\"\"\n # Cast to str in case type is `pathlib.Path`\n filepath = str(filepath)\n signal = torch.ops.torchaudio.sox_io_load_audio_file(\n filepath, frame_offset, num_frames, normalize, channels_first)\n return signal.get_tensor(), signal.get_sample_rate()\n\n\n@_mod_utils.requires_module('torchaudio._torchaudio')\ndef save(\n filepath: str,\n src: torch.Tensor,\n sample_rate: int,\n channels_first: bool = True,\n compression: Optional[float] = None,\n):\n \"\"\"Save audio data to file.\n\n Note:\n Supported formats are;\n\n * WAV\n\n * 32-bit floating-point\n * 32-bit signed integer\n * 16-bit signed integer\n * 8-bit unsigned integer\n\n * MP3\n * FLAC\n * OGG/VORBIS\n * SPHERE\n\n To save ``MP3``, ``FLAC``, ``OGG/VORBIS``, and other codecs ``libsox`` does not\n handle natively, your installation of ``torchaudio`` has to be linked to ``libsox``\n and corresponding codec libraries such as ``libmad`` or ``libmp3lame`` etc.\n\n Args:\n filepath (str or pathlib.Path):\n Path to save file. This function also handles ``pathlib.Path`` objects, but is annotated\n as ``str`` for TorchScript compiler compatibility.\n tensor (torch.Tensor): Audio data to save. must be 2D tensor.\n sample_rate (int): sampling rate\n channels_first (bool):\n If ``True``, the given tensor is interpreted as ``[channel, time]``,\n otherwise ``[time, channel]``.\n compression (Optional[float]):\n Used for formats other than WAV. This corresponds to ``-C`` option of ``sox`` command.\n\n * | ``MP3``: Either bitrate (in ``kbps``) with quality factor, such as ``128.2``, or\n | VBR encoding with quality factor such as ``-4.2``. Default: ``-4.5``.\n * | ``FLAC``: compression level. Whole number from ``0`` to ``8``.\n | ``8`` is default and highest compression.\n * | ``OGG/VORBIS``: number from ``-1`` to ``10``; ``-1`` is the highest compression\n | and lowest quality. 
Default: ``3``.\n\n See the detail at http://sox.sourceforge.net/soxformat.html.\n \"\"\"\n # Cast to str in case type is `pathlib.Path`\n filepath = str(filepath)\n if compression is None:\n ext = str(filepath).split('.')[-1].lower()\n if ext in ['wav', 'sph']:\n compression = 0.\n elif ext == 'mp3':\n compression = -4.5\n elif ext == 'flac':\n compression = 8.\n elif ext in ['ogg', 'vorbis']:\n compression = 3.\n else:\n raise RuntimeError(f'Unsupported file type: \"{ext}\"')\n signal = torch.classes.torchaudio.TensorSignal(src, sample_rate, channels_first)\n torch.ops.torchaudio.sox_io_save_audio_file(filepath, signal, compression)\n\n\n@_mod_utils.requires_module('torchaudio._torchaudio')\n@_mod_utils.deprecated('Please use \"torchaudio.load\".', '0.9.0')\ndef load_wav(\n filepath: str,\n frame_offset: int = 0,\n num_frames: int = -1,\n channels_first: bool = True,\n) -> Tuple[torch.Tensor, int]:\n \"\"\"Load wave file.\n\n This function is defined only for the purpose of compatibility against other backend\n for simple usecases, such as ``torchaudio.load_wav(filepath)``.\n The implementation is same as :py:func:`load`.\n \"\"\"\n return load(filepath, frame_offset, num_frames, normalize=False, channels_first=channels_first)\n", "path": "torchaudio/backend/sox_io_backend.py"}], "after_files": [{"content": "import os\nimport platform\nimport subprocess\nfrom pathlib import Path\n\nfrom torch.utils.cpp_extension import (\n CppExtension,\n BuildExtension as TorchBuildExtension\n)\n\n__all__ = [\n 'get_ext_modules',\n 'BuildExtension',\n]\n\n_THIS_DIR = Path(__file__).parent.resolve()\n_ROOT_DIR = _THIS_DIR.parent.parent.resolve()\n_CSRC_DIR = _ROOT_DIR / 'torchaudio' / 'csrc'\n_TP_BASE_DIR = _ROOT_DIR / 'third_party'\n_TP_INSTALL_DIR = _TP_BASE_DIR / 'install'\n\n\ndef _get_build_sox():\n val = os.environ.get('BUILD_SOX', '0')\n trues = ['1', 'true', 'TRUE', 'on', 'ON', 'yes', 'YES']\n falses = ['0', 'false', 'FALSE', 'off', 'OFF', 'no', 'NO']\n if val in trues:\n return True\n if val not in falses:\n print(\n f'WARNING: Unexpected environment variable value `BUILD_SOX={val}`. 
'\n f'Expected one of {trues + falses}')\n return False\n\n\n_BUILD_SOX = _get_build_sox()\n\n\ndef _get_eca(debug):\n eca = []\n if debug:\n eca += [\"-O0\", \"-g\"]\n else:\n eca += [\"-O3\"]\n return eca\n\n\ndef _get_ela(debug):\n ela = []\n if debug:\n if platform.system() == \"Windows\":\n ela += [\"/DEBUG:FULL\"]\n else:\n ela += [\"-O0\", \"-g\"]\n else:\n ela += [\"-O3\"]\n return ela\n\n\ndef _get_srcs():\n return [str(p) for p in _CSRC_DIR.glob('**/*.cpp')]\n\n\ndef _get_include_dirs():\n dirs = [\n str(_ROOT_DIR),\n ]\n if _BUILD_SOX:\n dirs.append(str(_TP_INSTALL_DIR / 'include'))\n return dirs\n\n\ndef _get_extra_objects():\n objs = []\n if _BUILD_SOX:\n # NOTE: The order of the library listed bellow matters.\n #\n # (the most important thing is that dependencies come after a library\n # e.g., sox comes first, flac/vorbis comes before ogg, and\n # vorbisenc/vorbisfile comes before vorbis\n libs = [\n 'libsox.a',\n 'libmad.a',\n 'libFLAC.a',\n 'libmp3lame.a',\n 'libopusfile.a',\n 'libopus.a',\n 'libvorbisenc.a',\n 'libvorbisfile.a',\n 'libvorbis.a',\n 'libogg.a',\n 'libopencore-amrnb.a',\n 'libopencore-amrwb.a',\n ]\n for lib in libs:\n objs.append(str(_TP_INSTALL_DIR / 'lib' / lib))\n return objs\n\n\ndef _get_libraries():\n return [] if _BUILD_SOX else ['sox']\n\n\ndef _build_third_party():\n build_dir = str(_TP_BASE_DIR / 'build')\n os.makedirs(build_dir, exist_ok=True)\n subprocess.run(\n args=['cmake', '..'],\n cwd=build_dir,\n check=True,\n )\n subprocess.run(\n args=['cmake', '--build', '.'],\n cwd=build_dir,\n check=True,\n )\n\n\n_EXT_NAME = 'torchaudio._torchaudio'\n\n\ndef get_ext_modules(debug=False):\n if platform.system() == 'Windows':\n return None\n return [\n CppExtension(\n _EXT_NAME,\n _get_srcs(),\n libraries=_get_libraries(),\n include_dirs=_get_include_dirs(),\n extra_compile_args=_get_eca(debug),\n extra_objects=_get_extra_objects(),\n extra_link_args=_get_ela(debug),\n ),\n ]\n\n\nclass BuildExtension(TorchBuildExtension):\n def build_extension(self, ext):\n if ext.name == _EXT_NAME and _BUILD_SOX:\n _build_third_party()\n super().build_extension(ext)\n", "path": "build_tools/setup_helpers/extension.py"}, {"content": "from typing import Tuple, Optional\n\nimport torch\nfrom torchaudio._internal import (\n module_utils as _mod_utils,\n)\n\nfrom .common import AudioMetaData\n\n\n@_mod_utils.requires_module('torchaudio._torchaudio')\ndef info(filepath: str) -> AudioMetaData:\n \"\"\"Get signal information of an audio file.\n\n Args:\n filepath (str or pathlib.Path):\n Path to audio file. 
This function also handles ``pathlib.Path`` objects,\n but is annotated as ``str`` for TorchScript compatibility.\n\n Returns:\n AudioMetaData: Metadata of the given audio.\n \"\"\"\n # Cast to str in case type is `pathlib.Path`\n filepath = str(filepath)\n sinfo = torch.ops.torchaudio.sox_io_get_info(filepath)\n return AudioMetaData(sinfo.get_sample_rate(), sinfo.get_num_frames(), sinfo.get_num_channels())\n\n\n@_mod_utils.requires_module('torchaudio._torchaudio')\ndef load(\n filepath: str,\n frame_offset: int = 0,\n num_frames: int = -1,\n normalize: bool = True,\n channels_first: bool = True,\n) -> Tuple[torch.Tensor, int]:\n \"\"\"Load audio data from file.\n\n Note:\n This function can handle all the codecs that underlying libsox can handle,\n however it is tested on the following formats;\n\n * WAV, AMB\n\n * 32-bit floating-point\n * 32-bit signed integer\n * 16-bit signed integer\n * 8-bit unsigned integer (WAV only)\n\n * MP3\n * FLAC\n * OGG/VORBIS\n * OPUS\n * SPHERE\n * AMR-NB\n\n To load ``MP3``, ``FLAC``, ``OGG/VORBIS``, ``OPUS`` and other codecs ``libsox`` does not\n handle natively, your installation of ``torchaudio`` has to be linked to ``libsox``\n and corresponding codec libraries such as ``libmad`` or ``libmp3lame`` etc.\n\n By default (``normalize=True``, ``channels_first=True``), this function returns Tensor with\n ``float32`` dtype and the shape of ``[channel, time]``.\n The samples are normalized to fit in the range of ``[-1.0, 1.0]``.\n\n When the input format is WAV with integer type, such as 32-bit signed integer, 16-bit\n signed integer and 8-bit unsigned integer (24-bit signed integer is not supported),\n by providing ``normalize=False``, this function can return integer Tensor, where the samples\n are expressed within the whole range of the corresponding dtype, that is, ``int32`` tensor\n for 32-bit signed PCM, ``int16`` for 16-bit signed PCM and ``uint8`` for 8-bit unsigned PCM.\n\n ``normalize`` parameter has no effect on 32-bit floating-point WAV and other formats, such as\n ``flac`` and ``mp3``.\n For these formats, this function always returns ``float32`` Tensor with values normalized to\n ``[-1.0, 1.0]``.\n\n Args:\n filepath (str or pathlib.Path):\n Path to audio file. This function also handles ``pathlib.Path`` objects, but is\n annotated as ``str`` for TorchScript compiler compatibility.\n frame_offset (int):\n Number of frames to skip before start reading data.\n num_frames (int):\n Maximum number of frames to read. ``-1`` reads all the remaining samples,\n starting from ``frame_offset``.\n This function may return the less number of frames if there is not enough\n frames in the given file.\n normalize (bool):\n When ``True``, this function always return ``float32``, and sample values are\n normalized to ``[-1.0, 1.0]``.\n If input file is integer WAV, giving ``False`` will change the resulting Tensor type to\n integer type.\n This argument has no effect for formats other than integer WAV type.\n channels_first (bool):\n When True, the returned Tensor has dimension ``[channel, time]``.\n Otherwise, the returned Tensor's dimension is ``[time, channel]``.\n\n Returns:\n torch.Tensor:\n If the input file has integer wav format and normalization is off, then it has\n integer type, else ``float32`` type. 
If ``channels_first=True``, it has\n ``[channel, time]`` else ``[time, channel]``.\n \"\"\"\n # Cast to str in case type is `pathlib.Path`\n filepath = str(filepath)\n signal = torch.ops.torchaudio.sox_io_load_audio_file(\n filepath, frame_offset, num_frames, normalize, channels_first)\n return signal.get_tensor(), signal.get_sample_rate()\n\n\n@_mod_utils.requires_module('torchaudio._torchaudio')\ndef save(\n filepath: str,\n src: torch.Tensor,\n sample_rate: int,\n channels_first: bool = True,\n compression: Optional[float] = None,\n):\n \"\"\"Save audio data to file.\n\n Note:\n Supported formats are;\n\n * WAV, AMB\n\n * 32-bit floating-point\n * 32-bit signed integer\n * 16-bit signed integer\n * 8-bit unsigned integer\n\n * MP3\n * FLAC\n * OGG/VORBIS\n * SPHERE\n * AMR-NB\n\n To save ``MP3``, ``FLAC``, ``OGG/VORBIS``, and other codecs ``libsox`` does not\n handle natively, your installation of ``torchaudio`` has to be linked to ``libsox``\n and corresponding codec libraries such as ``libmad`` or ``libmp3lame`` etc.\n\n Args:\n filepath (str or pathlib.Path):\n Path to save file. This function also handles ``pathlib.Path`` objects, but is annotated\n as ``str`` for TorchScript compiler compatibility.\n tensor (torch.Tensor): Audio data to save. must be 2D tensor.\n sample_rate (int): sampling rate\n channels_first (bool):\n If ``True``, the given tensor is interpreted as ``[channel, time]``,\n otherwise ``[time, channel]``.\n compression (Optional[float]):\n Used for formats other than WAV. This corresponds to ``-C`` option of ``sox`` command.\n\n * | ``MP3``: Either bitrate (in ``kbps``) with quality factor, such as ``128.2``, or\n | VBR encoding with quality factor such as ``-4.2``. Default: ``-4.5``.\n * | ``FLAC``: compression level. Whole number from ``0`` to ``8``.\n | ``8`` is default and highest compression.\n * | ``OGG/VORBIS``: number from ``-1`` to ``10``; ``-1`` is the highest compression\n | and lowest quality. Default: ``3``.\n\n See the detail at http://sox.sourceforge.net/soxformat.html.\n \"\"\"\n # Cast to str in case type is `pathlib.Path`\n filepath = str(filepath)\n if compression is None:\n ext = str(filepath).split('.')[-1].lower()\n if ext in ['wav', 'sph', 'amb', 'amr-nb']:\n compression = 0.\n elif ext == 'mp3':\n compression = -4.5\n elif ext == 'flac':\n compression = 8.\n elif ext in ['ogg', 'vorbis']:\n compression = 3.\n else:\n raise RuntimeError(f'Unsupported file type: \"{ext}\"')\n signal = torch.classes.torchaudio.TensorSignal(src, sample_rate, channels_first)\n torch.ops.torchaudio.sox_io_save_audio_file(filepath, signal, compression)\n\n\n@_mod_utils.requires_module('torchaudio._torchaudio')\n@_mod_utils.deprecated('Please use \"torchaudio.load\".', '0.9.0')\ndef load_wav(\n filepath: str,\n frame_offset: int = 0,\n num_frames: int = -1,\n channels_first: bool = True,\n) -> Tuple[torch.Tensor, int]:\n \"\"\"Load wave file.\n\n This function is defined only for the purpose of compatibility against other backend\n for simple usecases, such as ``torchaudio.load_wav(filepath)``.\n The implementation is same as :py:func:`load`.\n \"\"\"\n return load(filepath, frame_offset, num_frames, normalize=False, channels_first=channels_first)\n", "path": "torchaudio/backend/sox_io_backend.py"}]}
| 4,079 | 612 |
gh_patches_debug_20210
|
rasdani/github-patches
|
git_diff
|
learningequality__kolibri-11001
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add custom user-agent to `NetworkClient` and external API calls
<!--
Instructions:
* Fill out the sections below, replace …'s with information about your issue
* Use the 'preview' function above this text box to verify formatting before submitting
-->
## Observed behavior
<!--
Description of the behavior that was observed, including screenshots or other references when applicable
-->
The user agent for API calls made with Python `requests` does not clearly identify the device as Kolibri.
## Errors and logs
<!--
Relevant logs from:
* the command line
* ~/.kolibri/logs/kolibri.txt
* the browser console
Please wrap errors in triple backticks for clean formatting like this:
```
01:10 info: something happened
01:12 error: something bad happened
```
-->
Perhaps connected with errors like the following, although these are not the primary reason to customize the user-agent:
```
http.client.RemoteDisconnected: Remote end closed connection without response
```
## Expected behavior
<!--
Description of what behavior was expected but did not occur
-->
It's recommended that we clearly identify the application making API calls, to differentiate it from potential scripting. According to MDN, the `User-Agent` header should have the format:
```
User-Agent: <product> / <product-version> <comment>
```
So for Kolibri 0.16.0:
```
User-Agent: kolibri/0.16.0 python-requests/2.28.2
```
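
One minimal sketch of how this could be wired up (not the project's actual implementation; the `kolibri.__version__` attribute and the `IdentifiedSession` name are assumptions for illustration):

```python
import requests

import kolibri  # assumed to expose __version__


def build_user_agent():
    # Keep the default python-requests token as the trailing comment,
    # following the "<product>/<product-version> <comment>" form.
    default_agent = requests.utils.default_user_agent()  # e.g. "python-requests/2.28.2"
    return "kolibri/{} {}".format(kolibri.__version__, default_agent)


class IdentifiedSession(requests.Session):
    """A requests.Session that always sends a Kolibri-identifying User-Agent."""

    def __init__(self):
        super(IdentifiedSession, self).__init__()
        self.headers["User-Agent"] = build_user_agent()
```

Any session-based client (such as `NetworkClient`, which already subclasses `requests.Session`) could set the header the same way in its `__init__`.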
## User-facing consequences
<!--
Implications and real-world consequences for learners, coaches, admins, and other users of the application
-->
Web application firewalls may take more aggressive action against clients making many requests if the traffic appears to be purely scripted.
## Context
<!--
Tell us about your environment, including:
* Kolibri version
* Operating system
* Browser
-->
Kolibri 0.15+
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kolibri/core/discovery/utils/network/client.py`
Content:
```
1 import logging
2
3 import requests
4 from six import raise_from
5 from six.moves.urllib.parse import urlparse
6
7 from . import errors
8 from .urls import get_normalized_url_variations
9 from .urls import HTTP_PORTS
10 from .urls import HTTPS_PORTS
11 from kolibri.core.discovery.models import ConnectionStatus
12 from kolibri.core.tasks.utils import get_current_job
13 from kolibri.core.utils.urls import join_url
14 from kolibri.utils.server import get_urls
15
16 logger = logging.getLogger(__name__)
17
18 device_info_defaults = {
19 "subset_of_users_device": False,
20 }
21
22 DEFAULT_CONNECT_TIMEOUT = 5
23 DEFAULT_READ_TIMEOUT = 60
24 # default read timeout when within a job
25 DEFAULT_ASYNC_READ_TIMEOUT = 30
26 # when the network client tries variations of a url, that means the overall length of time it takes
27 # is multiplied by the number of variations, so for synchronous operations (in a HTTP request) we
28 # make the overall timeout ~= the DEFAULT_READ_TIMEOUT
29 DEFAULT_SYNC_READ_TIMEOUT = DEFAULT_READ_TIMEOUT / (len(HTTP_PORTS) + len(HTTPS_PORTS))
30
31
32 class NetworkClient(requests.Session):
33 __slots__ = ("base_url", "timeout", "session", "device_info", "remote_ip")
34
35 def __init__(self, base_url, timeout=None):
36 """
37 If an explicit base_url is already known, provide that. If only a vague address is known,
38 `build_from_address` can build a client to determine the actual `base_url`
39 :param base_url: The fully composed URL for a network location, without path
40 :param timeout: A timeout value in seconds or tuple for (connect, read)
41 :type timeout: float|tuple
42 """
43 super(NetworkClient, self).__init__()
44
45 self.base_url = base_url
46 self.timeout = timeout or (DEFAULT_CONNECT_TIMEOUT, DEFAULT_READ_TIMEOUT)
47 self.session = None
48 self.device_info = None
49 self.remote_ip = None
50
51 @classmethod
52 def build_for_address(cls, address, timeout=None):
53 """
54 Normalizes the address URL and tries a number of variations until we find one
55 that's able to connect
56
57 :param address: The address of which to try variations of
58 :param timeout: A timeout value in seconds or tuple for (connect, read)
59 :return: A NetworkClient with a verified connection
60 :rtype: NetworkClient
61 """
62 logger.info(
63 "Attempting connections to variations of the URL: {}".format(address)
64 )
65 if timeout is None:
66 if get_current_job() is not None:
67 # when we're within a job, then we can use longer timeouts
68 timeout = (DEFAULT_CONNECT_TIMEOUT, DEFAULT_ASYNC_READ_TIMEOUT)
69 else:
70 # if we're within a request thread, then we limit it for an overall time
71 timeout = (DEFAULT_CONNECT_TIMEOUT, DEFAULT_SYNC_READ_TIMEOUT)
72 _, self_urls = get_urls()
73 for url in get_normalized_url_variations(address):
74 if url in self_urls:
75 continue # exclude our own URLs
76 with NetworkClient(url, timeout=timeout) as client:
77 if client.connect(raise_if_unavailable=False):
78 return client
79 # we weren't able to connect to any of the URL variations, so all we can do is throw
80 raise errors.NetworkLocationNotFound()
81
82 @classmethod
83 def build_from_network_location(cls, network_location, timeout=None):
84 """
85 Creates a NetworkClient for a NetworkLocation, and validates the connection if the status
86 isn't already 'Okay'
87 :param network_location: The network location model
88 :type network_location: kolibri.core.discovery.models.NetworkLocation
89 :param timeout: A timeout value in seconds or tuple for (connect, read)
90 :return: A NetworkClient with a verified connection
91 :rtype: NetworkClient
92 """
93 # expect that static network locations have an exact base_url, and only try different
94 # variations if we haven't already
95 if (
96 network_location.dynamic
97 and network_location.connection_status == ConnectionStatus.Unknown
98 ):
99 return NetworkClient.build_for_address(
100 network_location.base_url, timeout=timeout
101 )
102 return NetworkClient(network_location.base_url, timeout=timeout)
103
104 def head(self, path, **kwargs):
105 return self.request("HEAD", path, **kwargs)
106
107 def get(self, path, **kwargs):
108 return self.request("GET", path, **kwargs)
109
110 def post(self, path, **kwargs):
111 return self.request("POST", path, **kwargs)
112
113 def request(self, method, path, **kwargs):
114 response = None
115 if "timeout" not in kwargs:
116 kwargs.update(timeout=self.timeout)
117
118 url = join_url(self.base_url, path)
119 try:
120 with super(NetworkClient, self).request(
121 method, url, stream=True, **kwargs
122 ) as response:
123 if response.raw._connection.sock is None:
124 raise requests.exceptions.ConnectionError("No socket available")
125
126 # capture the remote IP address, which requires `stream=True` and before consumed
127 self.remote_ip = response.raw._connection.sock.getpeername()[0]
128 # now consume content, see how `Session.send` does this when `stream=False`
129 response.content
130
131 response.raise_for_status()
132 return response
133 except (
134 requests.exceptions.ConnectionError,
135 requests.exceptions.SSLError,
136 requests.exceptions.ConnectTimeout,
137 requests.exceptions.URLRequired,
138 requests.exceptions.MissingSchema,
139 requests.exceptions.InvalidSchema,
140 requests.exceptions.InvalidURL,
141 requests.exceptions.InvalidHeader,
142 requests.exceptions.InvalidJSONError,
143 ) as e:
144 raise_from(
145 errors.NetworkLocationConnectionFailure(
146 "Unable to connect: {}".format(url)
147 ),
148 e,
149 )
150 except (
151 requests.exceptions.ReadTimeout,
152 requests.exceptions.TooManyRedirects,
153 ) as e:
154 raise_from(
155 errors.NetworkLocationResponseTimeout(
156 "Response timeout: {}".format(url)
157 ),
158 e,
159 )
160 except (
161 requests.exceptions.HTTPError,
162 requests.exceptions.ContentDecodingError,
163 requests.exceptions.ChunkedEncodingError,
164 requests.exceptions.RequestException,
165 ) as e:
166 raise_from(
167 errors.NetworkLocationResponseFailure(
168 "Response failure: {}".format(url), response=response
169 ),
170 e,
171 )
172
173 def connect(self, raise_if_unavailable=True): # noqa: C901
174 """
175 Attempts a connection to the instance and caches its device information if successful
176 :param raise_if_unavailable: Raises an error if connection fails and this value is True
177 :return: A boolean determining success, never False if `raise_if_unavailable=True`
178 """
179
180 from kolibri.core.device.utils import DEVICE_INFO_VERSION
181 from kolibri.core.device.utils import device_info_keys
182
183 # don't reconnect if client has already done so
184 if self.device_info is not None:
185 return True
186
187 try:
188 logger.info("Attempting connection to: {}".format(self.base_url))
189 response = self.get(
190 "api/public/info/",
191 allow_redirects=True,
192 params={"v": DEVICE_INFO_VERSION},
193 timeout=(DEFAULT_CONNECT_TIMEOUT, 5),
194 )
195 except errors.NetworkClientError as e:
196 logger.info(e)
197 if raise_if_unavailable:
198 raise e
199 return False
200
201 # check that we successfully connected, and if we were redirected that it's still
202 # the right endpoint
203 parsed_url = urlparse(response.url)
204 if response.status_code != 200:
205 if raise_if_unavailable:
206 raise errors.NetworkLocationInvalidResponse(
207 "Response status {}".format(response.status_code)
208 )
209 return False
210 if not parsed_url.path.rstrip("/").endswith("/api/public/info"):
211 if raise_if_unavailable:
212 raise errors.NetworkLocationInvalidResponse(
213 "Request redirected to {}".format(parsed_url.path)
214 )
215 return False
216
217 try:
218 info = response.json()
219 self.device_info = {}
220 for key in device_info_keys.get(DEVICE_INFO_VERSION, []):
221 self.device_info[key] = info.get(key, device_info_defaults.get(key))
222 if self.device_info["application"] not in ["studio", "kolibri"]:
223 raise errors.NetworkLocationInvalidResponse(
224 "Server is not running Kolibri or Studio"
225 )
226 logger.info("Success! We connected to: {}".format(response.url))
227
228 self.base_url = "{}://{}{}".format(
229 parsed_url.scheme,
230 parsed_url.netloc,
231 parsed_url.path.rstrip("/").replace("api/public/info", ""),
232 )
233 except (requests.exceptions.JSONDecodeError, ValueError) as e:
234 logger.info(
235 "Invalid JSON returned when attempting to connect to a remote server"
236 )
237 if raise_if_unavailable:
238 raise_from(
239 errors.NetworkLocationInvalidResponse("Invalid JSON returned"), e
240 )
241 return False
242
243 return True
244
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kolibri/core/discovery/utils/network/client.py b/kolibri/core/discovery/utils/network/client.py
--- a/kolibri/core/discovery/utils/network/client.py
+++ b/kolibri/core/discovery/utils/network/client.py
@@ -4,6 +4,7 @@
from six import raise_from
from six.moves.urllib.parse import urlparse
+import kolibri
from . import errors
from .urls import get_normalized_url_variations
from .urls import HTTP_PORTS
@@ -47,6 +48,11 @@
self.session = None
self.device_info = None
self.remote_ip = None
+ self.headers.update(
+ {
+ "User-Agent": get_user_agent(),
+ }
+ )
@classmethod
def build_for_address(cls, address, timeout=None):
@@ -241,3 +247,9 @@
return False
return True
+
+
+def get_user_agent():
+ return "Kolibri/{0} python-requests/{1}".format(
+ kolibri.__version__, requests.__version__
+ )
|
{"golden_diff": "diff --git a/kolibri/core/discovery/utils/network/client.py b/kolibri/core/discovery/utils/network/client.py\n--- a/kolibri/core/discovery/utils/network/client.py\n+++ b/kolibri/core/discovery/utils/network/client.py\n@@ -4,6 +4,7 @@\n from six import raise_from\n from six.moves.urllib.parse import urlparse\n \n+import kolibri\n from . import errors\n from .urls import get_normalized_url_variations\n from .urls import HTTP_PORTS\n@@ -47,6 +48,11 @@\n self.session = None\n self.device_info = None\n self.remote_ip = None\n+ self.headers.update(\n+ {\n+ \"User-Agent\": get_user_agent(),\n+ }\n+ )\n \n @classmethod\n def build_for_address(cls, address, timeout=None):\n@@ -241,3 +247,9 @@\n return False\n \n return True\n+\n+\n+def get_user_agent():\n+ return \"Kolibri/{0} python-requests/{1}\".format(\n+ kolibri.__version__, requests.__version__\n+ )\n", "issue": "Add custom user-agent to `NetworkClient` and external API calls\n<!--\r\nInstructions:\r\n * Fill out the sections below, replace \u2026's with information about your issue\r\n * Use the 'preview' function above this text box to verify formatting before submitting\r\n-->\r\n\r\n## Observed behavior\r\n<!--\r\nDescription of the behavior that was observed, including screenshots or other references when applicable\r\n-->\r\nThe user agent for API calls made with python `requests` do not clearly identify the device as Kolibri\r\n\r\n## Errors and logs\r\n<!--\r\nRelevant logs from:\r\n * the command line\r\n * ~/.kolibri/logs/kolibri.txt\r\n * the browser console\r\n\r\nPlease wrap errors in triple backticks for clean formatting like this:\r\n```\r\n01:10 info: something happened\r\n01:12 error: something bad happened\r\n```\r\n-->\r\nPerhaps connected with errors like the following although these are not the primary reason to customize the user-agent:\r\n```\r\nhttp.client.RemoteDisconnected: Remote end closed connection without response\r\n```\r\n\r\n## Expected behavior\r\n<!--\r\nDescription of what behavior was expected but did not occur\r\n-->\r\nIt's recommended that we clearly identify the application making API calls, to differentiate it from potential scripting. According to MDN, it should have the format:\r\n```\r\nUser-Agent: <product> / <product-version> <comment>\r\n```\r\nSo for Kolibri 0.16.0:\r\n```\r\nUser-Agent: kolibri/0.16.0 python-requests/2.28.2\r\n```\r\n\r\n## User-facing consequences\r\n<!--\r\nImplications and real-world consequences for learners, coaches, admins, and other users of the application\r\n-->\r\nWeb application firewalls may take more aggressive action against clients making many requests if it appears to be purely something scripted.\r\n\r\n\r\n## Context\r\n<!--\r\nTell us about your environment, including:\r\n * Kolibri version\r\n * Operating system\r\n * Browser\r\n-->\r\nKolibri 0.15+\r\n\n", "before_files": [{"content": "import logging\n\nimport requests\nfrom six import raise_from\nfrom six.moves.urllib.parse import urlparse\n\nfrom . 
import errors\nfrom .urls import get_normalized_url_variations\nfrom .urls import HTTP_PORTS\nfrom .urls import HTTPS_PORTS\nfrom kolibri.core.discovery.models import ConnectionStatus\nfrom kolibri.core.tasks.utils import get_current_job\nfrom kolibri.core.utils.urls import join_url\nfrom kolibri.utils.server import get_urls\n\nlogger = logging.getLogger(__name__)\n\ndevice_info_defaults = {\n \"subset_of_users_device\": False,\n}\n\nDEFAULT_CONNECT_TIMEOUT = 5\nDEFAULT_READ_TIMEOUT = 60\n# default read timeout when within a job\nDEFAULT_ASYNC_READ_TIMEOUT = 30\n# when the network client tries variations of a url, that means the overall length of time it takes\n# is multiplied by the number of variations, so for synchronous operations (in a HTTP request) we\n# make the overall timeout ~= the DEFAULT_READ_TIMEOUT\nDEFAULT_SYNC_READ_TIMEOUT = DEFAULT_READ_TIMEOUT / (len(HTTP_PORTS) + len(HTTPS_PORTS))\n\n\nclass NetworkClient(requests.Session):\n __slots__ = (\"base_url\", \"timeout\", \"session\", \"device_info\", \"remote_ip\")\n\n def __init__(self, base_url, timeout=None):\n \"\"\"\n If an explicit base_url is already known, provide that. If only a vague address is known,\n `build_from_address` can build a client to determine the actual `base_url`\n :param base_url: The fully composed URL for a network location, without path\n :param timeout: A timeout value in seconds or tuple for (connect, read)\n :type timeout: float|tuple\n \"\"\"\n super(NetworkClient, self).__init__()\n\n self.base_url = base_url\n self.timeout = timeout or (DEFAULT_CONNECT_TIMEOUT, DEFAULT_READ_TIMEOUT)\n self.session = None\n self.device_info = None\n self.remote_ip = None\n\n @classmethod\n def build_for_address(cls, address, timeout=None):\n \"\"\"\n Normalizes the address URL and tries a number of variations until we find one\n that's able to connect\n\n :param address: The address of which to try variations of\n :param timeout: A timeout value in seconds or tuple for (connect, read)\n :return: A NetworkClient with a verified connection\n :rtype: NetworkClient\n \"\"\"\n logger.info(\n \"Attempting connections to variations of the URL: {}\".format(address)\n )\n if timeout is None:\n if get_current_job() is not None:\n # when we're within a job, then we can use longer timeouts\n timeout = (DEFAULT_CONNECT_TIMEOUT, DEFAULT_ASYNC_READ_TIMEOUT)\n else:\n # if we're within a request thread, then we limit it for an overall time\n timeout = (DEFAULT_CONNECT_TIMEOUT, DEFAULT_SYNC_READ_TIMEOUT)\n _, self_urls = get_urls()\n for url in get_normalized_url_variations(address):\n if url in self_urls:\n continue # exclude our own URLs\n with NetworkClient(url, timeout=timeout) as client:\n if client.connect(raise_if_unavailable=False):\n return client\n # we weren't able to connect to any of the URL variations, so all we can do is throw\n raise errors.NetworkLocationNotFound()\n\n @classmethod\n def build_from_network_location(cls, network_location, timeout=None):\n \"\"\"\n Creates a NetworkClient for a NetworkLocation, and validates the connection if the status\n isn't already 'Okay'\n :param network_location: The network location model\n :type network_location: kolibri.core.discovery.models.NetworkLocation\n :param timeout: A timeout value in seconds or tuple for (connect, read)\n :return: A NetworkClient with a verified connection\n :rtype: NetworkClient\n \"\"\"\n # expect that static network locations have an exact base_url, and only try different\n # variations if we haven't already\n if (\n 
network_location.dynamic\n and network_location.connection_status == ConnectionStatus.Unknown\n ):\n return NetworkClient.build_for_address(\n network_location.base_url, timeout=timeout\n )\n return NetworkClient(network_location.base_url, timeout=timeout)\n\n def head(self, path, **kwargs):\n return self.request(\"HEAD\", path, **kwargs)\n\n def get(self, path, **kwargs):\n return self.request(\"GET\", path, **kwargs)\n\n def post(self, path, **kwargs):\n return self.request(\"POST\", path, **kwargs)\n\n def request(self, method, path, **kwargs):\n response = None\n if \"timeout\" not in kwargs:\n kwargs.update(timeout=self.timeout)\n\n url = join_url(self.base_url, path)\n try:\n with super(NetworkClient, self).request(\n method, url, stream=True, **kwargs\n ) as response:\n if response.raw._connection.sock is None:\n raise requests.exceptions.ConnectionError(\"No socket available\")\n\n # capture the remote IP address, which requires `stream=True` and before consumed\n self.remote_ip = response.raw._connection.sock.getpeername()[0]\n # now consume content, see how `Session.send` does this when `stream=False`\n response.content\n\n response.raise_for_status()\n return response\n except (\n requests.exceptions.ConnectionError,\n requests.exceptions.SSLError,\n requests.exceptions.ConnectTimeout,\n requests.exceptions.URLRequired,\n requests.exceptions.MissingSchema,\n requests.exceptions.InvalidSchema,\n requests.exceptions.InvalidURL,\n requests.exceptions.InvalidHeader,\n requests.exceptions.InvalidJSONError,\n ) as e:\n raise_from(\n errors.NetworkLocationConnectionFailure(\n \"Unable to connect: {}\".format(url)\n ),\n e,\n )\n except (\n requests.exceptions.ReadTimeout,\n requests.exceptions.TooManyRedirects,\n ) as e:\n raise_from(\n errors.NetworkLocationResponseTimeout(\n \"Response timeout: {}\".format(url)\n ),\n e,\n )\n except (\n requests.exceptions.HTTPError,\n requests.exceptions.ContentDecodingError,\n requests.exceptions.ChunkedEncodingError,\n requests.exceptions.RequestException,\n ) as e:\n raise_from(\n errors.NetworkLocationResponseFailure(\n \"Response failure: {}\".format(url), response=response\n ),\n e,\n )\n\n def connect(self, raise_if_unavailable=True): # noqa: C901\n \"\"\"\n Attempts a connection to the instance and caches its device information if successful\n :param raise_if_unavailable: Raises an error if connection fails and this value is True\n :return: A boolean determining success, never False if `raise_if_unavailable=True`\n \"\"\"\n\n from kolibri.core.device.utils import DEVICE_INFO_VERSION\n from kolibri.core.device.utils import device_info_keys\n\n # don't reconnect if client has already done so\n if self.device_info is not None:\n return True\n\n try:\n logger.info(\"Attempting connection to: {}\".format(self.base_url))\n response = self.get(\n \"api/public/info/\",\n allow_redirects=True,\n params={\"v\": DEVICE_INFO_VERSION},\n timeout=(DEFAULT_CONNECT_TIMEOUT, 5),\n )\n except errors.NetworkClientError as e:\n logger.info(e)\n if raise_if_unavailable:\n raise e\n return False\n\n # check that we successfully connected, and if we were redirected that it's still\n # the right endpoint\n parsed_url = urlparse(response.url)\n if response.status_code != 200:\n if raise_if_unavailable:\n raise errors.NetworkLocationInvalidResponse(\n \"Response status {}\".format(response.status_code)\n )\n return False\n if not parsed_url.path.rstrip(\"/\").endswith(\"/api/public/info\"):\n if raise_if_unavailable:\n raise errors.NetworkLocationInvalidResponse(\n 
\"Request redirected to {}\".format(parsed_url.path)\n )\n return False\n\n try:\n info = response.json()\n self.device_info = {}\n for key in device_info_keys.get(DEVICE_INFO_VERSION, []):\n self.device_info[key] = info.get(key, device_info_defaults.get(key))\n if self.device_info[\"application\"] not in [\"studio\", \"kolibri\"]:\n raise errors.NetworkLocationInvalidResponse(\n \"Server is not running Kolibri or Studio\"\n )\n logger.info(\"Success! We connected to: {}\".format(response.url))\n\n self.base_url = \"{}://{}{}\".format(\n parsed_url.scheme,\n parsed_url.netloc,\n parsed_url.path.rstrip(\"/\").replace(\"api/public/info\", \"\"),\n )\n except (requests.exceptions.JSONDecodeError, ValueError) as e:\n logger.info(\n \"Invalid JSON returned when attempting to connect to a remote server\"\n )\n if raise_if_unavailable:\n raise_from(\n errors.NetworkLocationInvalidResponse(\"Invalid JSON returned\"), e\n )\n return False\n\n return True\n", "path": "kolibri/core/discovery/utils/network/client.py"}], "after_files": [{"content": "import logging\n\nimport requests\nfrom six import raise_from\nfrom six.moves.urllib.parse import urlparse\n\nimport kolibri\nfrom . import errors\nfrom .urls import get_normalized_url_variations\nfrom .urls import HTTP_PORTS\nfrom .urls import HTTPS_PORTS\nfrom kolibri.core.discovery.models import ConnectionStatus\nfrom kolibri.core.tasks.utils import get_current_job\nfrom kolibri.core.utils.urls import join_url\nfrom kolibri.utils.server import get_urls\n\nlogger = logging.getLogger(__name__)\n\ndevice_info_defaults = {\n \"subset_of_users_device\": False,\n}\n\nDEFAULT_CONNECT_TIMEOUT = 5\nDEFAULT_READ_TIMEOUT = 60\n# default read timeout when within a job\nDEFAULT_ASYNC_READ_TIMEOUT = 30\n# when the network client tries variations of a url, that means the overall length of time it takes\n# is multiplied by the number of variations, so for synchronous operations (in a HTTP request) we\n# make the overall timeout ~= the DEFAULT_READ_TIMEOUT\nDEFAULT_SYNC_READ_TIMEOUT = DEFAULT_READ_TIMEOUT / (len(HTTP_PORTS) + len(HTTPS_PORTS))\n\n\nclass NetworkClient(requests.Session):\n __slots__ = (\"base_url\", \"timeout\", \"session\", \"device_info\", \"remote_ip\")\n\n def __init__(self, base_url, timeout=None):\n \"\"\"\n If an explicit base_url is already known, provide that. 
If only a vague address is known,\n `build_from_address` can build a client to determine the actual `base_url`\n :param base_url: The fully composed URL for a network location, without path\n :param timeout: A timeout value in seconds or tuple for (connect, read)\n :type timeout: float|tuple\n \"\"\"\n super(NetworkClient, self).__init__()\n\n self.base_url = base_url\n self.timeout = timeout or (DEFAULT_CONNECT_TIMEOUT, DEFAULT_READ_TIMEOUT)\n self.session = None\n self.device_info = None\n self.remote_ip = None\n self.headers.update(\n {\n \"User-Agent\": get_user_agent(),\n }\n )\n\n @classmethod\n def build_for_address(cls, address, timeout=None):\n \"\"\"\n Normalizes the address URL and tries a number of variations until we find one\n that's able to connect\n\n :param address: The address of which to try variations of\n :param timeout: A timeout value in seconds or tuple for (connect, read)\n :return: A NetworkClient with a verified connection\n :rtype: NetworkClient\n \"\"\"\n logger.info(\n \"Attempting connections to variations of the URL: {}\".format(address)\n )\n if timeout is None:\n if get_current_job() is not None:\n # when we're within a job, then we can use longer timeouts\n timeout = (DEFAULT_CONNECT_TIMEOUT, DEFAULT_ASYNC_READ_TIMEOUT)\n else:\n # if we're within a request thread, then we limit it for an overall time\n timeout = (DEFAULT_CONNECT_TIMEOUT, DEFAULT_SYNC_READ_TIMEOUT)\n _, self_urls = get_urls()\n for url in get_normalized_url_variations(address):\n if url in self_urls:\n continue # exclude our own URLs\n with NetworkClient(url, timeout=timeout) as client:\n if client.connect(raise_if_unavailable=False):\n return client\n # we weren't able to connect to any of the URL variations, so all we can do is throw\n raise errors.NetworkLocationNotFound()\n\n @classmethod\n def build_from_network_location(cls, network_location, timeout=None):\n \"\"\"\n Creates a NetworkClient for a NetworkLocation, and validates the connection if the status\n isn't already 'Okay'\n :param network_location: The network location model\n :type network_location: kolibri.core.discovery.models.NetworkLocation\n :param timeout: A timeout value in seconds or tuple for (connect, read)\n :return: A NetworkClient with a verified connection\n :rtype: NetworkClient\n \"\"\"\n # expect that static network locations have an exact base_url, and only try different\n # variations if we haven't already\n if (\n network_location.dynamic\n and network_location.connection_status == ConnectionStatus.Unknown\n ):\n return NetworkClient.build_for_address(\n network_location.base_url, timeout=timeout\n )\n return NetworkClient(network_location.base_url, timeout=timeout)\n\n def head(self, path, **kwargs):\n return self.request(\"HEAD\", path, **kwargs)\n\n def get(self, path, **kwargs):\n return self.request(\"GET\", path, **kwargs)\n\n def post(self, path, **kwargs):\n return self.request(\"POST\", path, **kwargs)\n\n def request(self, method, path, **kwargs):\n response = None\n if \"timeout\" not in kwargs:\n kwargs.update(timeout=self.timeout)\n\n url = join_url(self.base_url, path)\n try:\n with super(NetworkClient, self).request(\n method, url, stream=True, **kwargs\n ) as response:\n if response.raw._connection.sock is None:\n raise requests.exceptions.ConnectionError(\"No socket available\")\n\n # capture the remote IP address, which requires `stream=True` and before consumed\n self.remote_ip = response.raw._connection.sock.getpeername()[0]\n # now consume content, see how `Session.send` does this 
when `stream=False`\n response.content\n\n response.raise_for_status()\n return response\n except (\n requests.exceptions.ConnectionError,\n requests.exceptions.SSLError,\n requests.exceptions.ConnectTimeout,\n requests.exceptions.URLRequired,\n requests.exceptions.MissingSchema,\n requests.exceptions.InvalidSchema,\n requests.exceptions.InvalidURL,\n requests.exceptions.InvalidHeader,\n requests.exceptions.InvalidJSONError,\n ) as e:\n raise_from(\n errors.NetworkLocationConnectionFailure(\n \"Unable to connect: {}\".format(url)\n ),\n e,\n )\n except (\n requests.exceptions.ReadTimeout,\n requests.exceptions.TooManyRedirects,\n ) as e:\n raise_from(\n errors.NetworkLocationResponseTimeout(\n \"Response timeout: {}\".format(url)\n ),\n e,\n )\n except (\n requests.exceptions.HTTPError,\n requests.exceptions.ContentDecodingError,\n requests.exceptions.ChunkedEncodingError,\n requests.exceptions.RequestException,\n ) as e:\n raise_from(\n errors.NetworkLocationResponseFailure(\n \"Response failure: {}\".format(url), response=response\n ),\n e,\n )\n\n def connect(self, raise_if_unavailable=True): # noqa: C901\n \"\"\"\n Attempts a connection to the instance and caches its device information if successful\n :param raise_if_unavailable: Raises an error if connection fails and this value is True\n :return: A boolean determining success, never False if `raise_if_unavailable=True`\n \"\"\"\n\n from kolibri.core.device.utils import DEVICE_INFO_VERSION\n from kolibri.core.device.utils import device_info_keys\n\n # don't reconnect if client has already done so\n if self.device_info is not None:\n return True\n\n try:\n logger.info(\"Attempting connection to: {}\".format(self.base_url))\n response = self.get(\n \"api/public/info/\",\n allow_redirects=True,\n params={\"v\": DEVICE_INFO_VERSION},\n timeout=(DEFAULT_CONNECT_TIMEOUT, 5),\n )\n except errors.NetworkClientError as e:\n logger.info(e)\n if raise_if_unavailable:\n raise e\n return False\n\n # check that we successfully connected, and if we were redirected that it's still\n # the right endpoint\n parsed_url = urlparse(response.url)\n if response.status_code != 200:\n if raise_if_unavailable:\n raise errors.NetworkLocationInvalidResponse(\n \"Response status {}\".format(response.status_code)\n )\n return False\n if not parsed_url.path.rstrip(\"/\").endswith(\"/api/public/info\"):\n if raise_if_unavailable:\n raise errors.NetworkLocationInvalidResponse(\n \"Request redirected to {}\".format(parsed_url.path)\n )\n return False\n\n try:\n info = response.json()\n self.device_info = {}\n for key in device_info_keys.get(DEVICE_INFO_VERSION, []):\n self.device_info[key] = info.get(key, device_info_defaults.get(key))\n if self.device_info[\"application\"] not in [\"studio\", \"kolibri\"]:\n raise errors.NetworkLocationInvalidResponse(\n \"Server is not running Kolibri or Studio\"\n )\n logger.info(\"Success! 
We connected to: {}\".format(response.url))\n\n self.base_url = \"{}://{}{}\".format(\n parsed_url.scheme,\n parsed_url.netloc,\n parsed_url.path.rstrip(\"/\").replace(\"api/public/info\", \"\"),\n )\n except (requests.exceptions.JSONDecodeError, ValueError) as e:\n logger.info(\n \"Invalid JSON returned when attempting to connect to a remote server\"\n )\n if raise_if_unavailable:\n raise_from(\n errors.NetworkLocationInvalidResponse(\"Invalid JSON returned\"), e\n )\n return False\n\n return True\n\n\ndef get_user_agent():\n return \"Kolibri/{0} python-requests/{1}\".format(\n kolibri.__version__, requests.__version__\n )\n", "path": "kolibri/core/discovery/utils/network/client.py"}]}
| 3,174 | 241 |
gh_patches_debug_9631
|
rasdani/github-patches
|
git_diff
|
larq__larq-446
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CaseOptimizer should throw if optimizer matches no variable
### Feature motivation
We are changing the default behaviour of Bop in #442. If users are not careful, this could lead to hard-to-debug errors when no variable matches any optimizer.
### Feature description
We should throw if an optimizer used by `CaseOptimizer` doesn't receive any variables.
### Feature implementation
We should throw if any of the per-optimizer gradient lists is empty, i.e. if [`len(opt_grads_and_vars) == 0`](https://github.com/larq/larq/blob/c44ed976812edc2cc7d60c98155513942fc3af98/larq/optimizers.py#L140).
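A minimal sketch of that check as a standalone helper, for illustration only (the helper name and error message are assumptions; the actual patch may place the check differently inside `CaseOptimizer.apply_gradients`):
```python
from typing import List, Sequence, Tuple

import tensorflow as tf


def check_all_optimizers_claimed(
    optimizers: Sequence[tf.keras.optimizers.Optimizer],
    grad_var_lists: Sequence[List[Tuple[tf.Tensor, tf.Variable]]],
) -> None:
    """Raise if any optimizer ended up with an empty gradient/variable list."""
    for optimizer, opt_grads_and_vars in zip(optimizers, grad_var_lists):
        if len(opt_grads_and_vars) == 0:
            raise ValueError(f"Optimizer `{optimizer}` did not claim any variables.")
```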
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `larq/optimizers.py`
Content:
```
1 """Neural networks with extremely low-precision weights and activations, such as
2 Binarized Neural Networks (BNNs), usually contain a mix of low-precision weights (e.g.
3 1-bit) and higher-precision weights (e.g. 8-bit, 16-bit, or 32-bit). Examples of this
4 include the first and last layers of image classification models, which have
5 higher-precision weights in most BNN architectures from the literature.
6
7 Training a BNN, then, consists of optimizing both low-precision and higher-precision
8 weights. In `larq`, we provide a mechanism to target different bit-precision variables
9 with different optimizers using the `CaseOptimizer` class. Modeled after the
10 [`tf.case`](https://www.tensorflow.org/api_docs/python/tf/case) signature,
11 `CaseOptimizer` accepts pairs of predicates and optimizers. A predicate, given a
12 variable, decides whether its optimizer should train that variable.
13
14 A `CaseOptimizer` behaves much like any other
15 [Keras optimizer](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers), and
16 once you instantiate it you can pass it to your `model.compile()` as usual. To
17 instantiate a `CaseOptimizer`, pass one or a list of `(predicate, optimizer)` tuples,
18 along with a `default` optimizer which trains any variables not claimed by another
19 optimizer. A variable may not be claimed by more than one optimizer's predicate.
20
21 !!! example
22 ```python
23 no_op_quantizer = lq.quantizers.NoOpQuantizer(precision=1)
24 layer = lq.layers.QuantDense(16, kernel_quantizer=no_op_quantizer)
25
26 case_optimizer = lq.optimizers.CaseOptimizer(
27 (
28 lq.optimizers.Bop.is_binary_variable, # predicate
29 lq.optimizers.Bop(threshold=1e-6, gamma=1e-3), # optimizer
30 ),
31 default_optimizer=tf.keras.optimizers.Adam(0.01),
32 )
33 ```
34 """
35
36
37 import warnings
38 from copy import deepcopy
39 from typing import Callable, Optional, Tuple
40
41 import tensorflow as tf
42
43 import larq as lq
44 from larq import utils
45
46 __all__ = ["Bop", "CaseOptimizer"]
47
48
49 @utils.register_keras_custom_object
50 class CaseOptimizer(tf.keras.optimizers.Optimizer):
51 """An optimizer wrapper that applies different optimizers to a subset of variables.
52
53 An optimizer is used to train a variable iff its accompanying predicate evaluates to
54 `True`.
55
56 For each variable, at most one optimizer's predicate may evaluate to `True`. If no
57 optimizer's predicate evaluates to `True` for a variable, it is trained with the
58 `default_optimizer`. If a variable is claimed by no optimizers and
59 `default_optimizer == None`, the variable is not trained.
60
61 # Arguments
62 predicate_optimizer_pairs: One or more `(pred, tf.keras.optimizers.Optimizer)` pairs,
63 where `pred` takes one `tf.Variable` as argument and returns `True` if the
64 optimizer should be used for that variable, e.g. `pred(var) == True`.
65 default_optimizer: A `tf.keras.optimizers.Optimizer` to be applied to any variable
66 not claimed by any other optimizer. (Must be passed as keyword argument.)
67 """
68
69 def __init__(
70 self,
71 *predicate_optimizer_pairs: Tuple[
72 Callable[[tf.Variable], bool], tf.keras.optimizers.Optimizer
73 ],
74 default_optimizer: Optional[tf.keras.optimizers.Optimizer] = None,
75 name: str = "optimizer_case",
76 ):
77 super().__init__(name=name)
78
79 # Type checks for (predicate, optimizer) pairs
80 for i, (predicate, optimizer) in enumerate(predicate_optimizer_pairs):
81 if not callable(predicate):
82 raise TypeError(
83 f"Expected callable predicate at `predicate_optimizer_pairs[{i}][0]` but got `{type(predicate)}`."
84 )
85 if not isinstance(optimizer, tf.keras.optimizers.Optimizer):
86 raise TypeError(
87 f"Expected `tf.keras.optimizers.Optimizer` at `predicate_optimizer_pairs[{i}][1]` but got `{type(optimizer)}`."
88 )
89
90 # Type check for default optimizers
91 if default_optimizer is not None and not isinstance(
92 default_optimizer, tf.keras.optimizers.Optimizer
93 ):
94 raise TypeError(
95 f"Expected `tf.keras.optimizers.Optimizer` for `default_optimizer` but got `{type(default_optimizer)}`."
96 )
97
98 self.pred_opt_pairs = predicate_optimizer_pairs
99 self.default = default_optimizer
100
101 self.var_opt_mapping = None
102
103 # List of optimizers ending in `default_optimizer`, for easier internal access
104 self.optimizers = [opt for (_, opt) in self.pred_opt_pairs]
105
106 if self.default:
107 self.optimizers.append(self.default)
108 self.DEFAULT_OPT_INDEX = len(self.pred_opt_pairs)
109
110 # Track optimizers to support reloading via tf.train.Checkpoint
111 for i, optimizer in enumerate(self.optimizers):
112 self._track_trackable(optimizer, name=f"optimizer_{i}")
113
114 @property
115 def weights(self):
116 weights = []
117 for optimizer in self.optimizers:
118 weights.extend(optimizer.weights)
119 return weights
120
121 def apply_gradients(self, grads_and_vars, name: Optional[str] = None):
122 """Apply gradients to variables for each optimizer.
123
124 On the first call to `apply_gradients()`, compute the mapping from variables to
125 optimizers and cache it in the `self.var_opt_mapping` dict for serialization and
126 faster access.
127 """
128
129 if self.var_opt_mapping is None:
130 # Convert `grads_and_vars` to list so we can iterate multiple times over it
131 grads_and_vars = list(grads_and_vars)
132 self._compute_var_opt_mapping(grads_and_vars)
133
134 # Split gradients and variables into a separate list for each optimizer
135 grad_var_lists = [[] for _ in range(len(self.pred_opt_pairs) + 1)]
136 for grad, var in grads_and_vars:
137 if var.name in self.var_opt_mapping:
138 grad_var_lists[self.var_opt_mapping[var.name]].append((grad, var))
139
140 # Apply gradients to each optimizer
141 train_ops = [
142 optimizer.apply_gradients(opt_grads_and_vars)
143 for optimizer, opt_grads_and_vars in zip(self.optimizers, grad_var_lists)
144 ]
145
146 return tf.group(*train_ops, name="train_with_group")
147
148 def get_config(self):
149 optimizer_configs = [opt.get_config() for (_, opt) in self.pred_opt_pairs]
150 default_config = self.default.get_config()
151
152 config = {
153 "optimizer_configs": [
154 {"class_name": optimizer_config["name"], "config": optimizer_config}
155 for optimizer_config in optimizer_configs
156 ],
157 "default_config": {
158 "class_name": default_config["name"],
159 "config": default_config,
160 },
161 "var_opt_mapping": self.var_opt_mapping, # serialized instead of `pred`s
162 }
163 return {**super().get_config(), **config}
164
165 @classmethod
166 def from_config(cls, original_config, custom_objects=None):
167 config = deepcopy(original_config)
168
169 case_optimizer = cls(
170 *[ # `(pred, opt)` tuples
171 (
172 lambda _: False, # placeholder callable (`pred` is not serialized)
173 tf.keras.optimizers.deserialize( # optimizer `opt`
174 opt_config, custom_objects=custom_objects
175 ),
176 )
177 for opt_config in config["optimizer_configs"]
178 ],
179 default_optimizer=tf.keras.optimizers.deserialize(
180 config["default_config"], custom_objects=custom_objects
181 ),
182 )
183
184 # Since we no longer have the `pred`s, we set the mapping explicitly
185 case_optimizer.var_opt_mapping = config["var_opt_mapping"]
186
187 return case_optimizer
188
189 def _compute_var_opt_mapping(self, grads_and_vars):
190 """Compute a unique mapping from variables to optimizer indices."""
191
192 self.var_opt_mapping = {}
193
194 for grad, var in grads_and_vars:
195 num_optimizers = 0
196
197 # Find the optimizer(s) that want to claim this variable
198 for optimizer_index, (predicate, _) in enumerate(self.pred_opt_pairs):
199 if predicate(var):
200 self.var_opt_mapping[var.name] = optimizer_index
201 num_optimizers += 1
202
203 if num_optimizers > 1:
204 raise ValueError(f"Variable `{var}` claimed by multiple optimizers.")
205 if num_optimizers == 0:
206 if self.default is not None:
207 self.var_opt_mapping[var.name] = self.DEFAULT_OPT_INDEX
208 else:
209 warnings.warn(
210 f"No `default_optimizer` provided to train variable `{var}`."
211 )
212
213
214 @utils.register_keras_custom_object
215 class Bop(tf.keras.optimizers.Optimizer):
216 """Binary optimizer (Bop).
217
218 Bop is a latent-free optimizer for Binarized Neural Networks (BNNs) and
219 Binary Weight Networks (BWN).
220
221 Bop maintains an exponential moving average of the gradients controlled by
222 `gamma`. If this average exceeds the `threshold`, a weight is flipped.
223
224 The hyperparameter `gamma` is somewhat analogous to the learning rate in
225 SGD methods: a high `gamma` results in rapid convergence but also makes
226 training more noisy.
227
228 Note that the default `threshold` is not optimal for all situations.
229 Setting the threshold too high results in little learning, while setting it
230 too low results in overly noisy behaviour.
231
232 !!! warning
233 The `is_binary_variable` check of this optimizer will only target variables that
234 have been explicitly marked as being binary using `NoOpQuantizer(precision=1)`.
235
236 !!! example
237 ```python
238 no_op_quantizer = lq.quantizers.NoOpQuantizer(precision=1)
239 layer = lq.layers.QuantDense(16, kernel_quantizer=no_op_quantizer)
240
241 optimizer = lq.optimizers.CaseOptimizer(
242 (
243 lq.optimizers.Bop.is_binary_variable,
244 lq.optimizers.Bop(),
245 ),
246 default_optimizer=tf.keras.optimizers.Adam(0.01), # for FP weights
247 )
248 ```
249
250 # Arguments
251 threshold: magnitude of average gradient signal required to flip a weight.
252 gamma: the adaptivity rate.
253 name: name of the optimizer.
254
255 # References
256 - [Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization](https://papers.nips.cc/paper/8971-latent-weights-do-not-exist-rethinking-binarized-neural-network-optimization)
257 """
258
259 def __init__(
260 self, threshold: float = 1e-8, gamma: float = 1e-4, name: str = "Bop", **kwargs
261 ):
262 super().__init__(name=name, **kwargs)
263
264 self._set_hyper("threshold", threshold)
265 self._set_hyper("gamma", gamma)
266
267 def _create_slots(self, var_list):
268 for var in var_list:
269 self.add_slot(var, "m")
270
271 def _get_decayed_hyper(self, name: str, var_dtype):
272 hyper = self._get_hyper(name, var_dtype)
273 if isinstance(hyper, tf.keras.optimizers.schedules.LearningRateSchedule):
274 local_step = tf.cast(self.iterations, var_dtype)
275 hyper = tf.cast(hyper(local_step), var_dtype)
276 return hyper
277
278 def _resource_apply_dense(self, grad, var):
279 var_dtype = var.dtype.base_dtype
280 gamma = self._get_decayed_hyper("gamma", var_dtype)
281 threshold = self._get_decayed_hyper("threshold", var_dtype)
282 m = self.get_slot(var, "m")
283
284 m_t = m.assign_add(gamma * (grad - m))
285 var_t = lq.math.sign(-tf.sign(var * m_t - threshold) * var)
286 return var.assign(var_t).op
287
288 def _resource_apply_sparse(self, grad, var, indices):
289 raise NotImplementedError()
290
291 def get_config(self):
292 config = {
293 "threshold": self._serialize_hyperparameter("threshold"),
294 "gamma": self._serialize_hyperparameter("gamma"),
295 }
296 return {**super().get_config(), **config}
297
298 @classmethod
299 def from_config(cls, config, custom_objects=None):
300 for hyper in ("gamma", "threshold"):
301 if hyper in config and isinstance(config[hyper], dict):
302 config[hyper] = tf.keras.optimizers.schedules.deserialize(
303 config[hyper], custom_objects=custom_objects
304 )
305 return cls(**config)
306
307 @staticmethod
308 def is_binary_variable(var: tf.Variable) -> bool:
309 """Returns `True` for variables with `var.precision == 1`.
310
311 This is an example of a predicate that can be used by the `CaseOptimizer`.
312
313 # Arguments
314 var: a `tf.Variable`.
315 """
316 return getattr(var, "precision", 32) == 1
317
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/larq/optimizers.py b/larq/optimizers.py
--- a/larq/optimizers.py
+++ b/larq/optimizers.py
@@ -210,6 +210,13 @@
f"No `default_optimizer` provided to train variable `{var}`."
)
+ # Make sure that each optimizer touches at least one variable
+ for optimizer_index, (_, optimizer) in enumerate(self.pred_opt_pairs):
+ if optimizer_index not in self.var_opt_mapping.values():
+ raise ValueError(
+ f"Optimizer `{optimizer}` did not claim any variables."
+ )
+
@utils.register_keras_custom_object
class Bop(tf.keras.optimizers.Optimizer):
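For reference, a small illustration of the behaviour this patch introduces (a sketch assuming a larq/TensorFlow combination compatible with the module above; the variable is plain float32, so Bop's predicate claims nothing and the patched optimizer raises):
```python
import tensorflow as tf
import larq as lq

w = tf.Variable(tf.zeros((3, 3)), name="kernel")  # not marked binary, so Bop will not claim it
grads_and_vars = [(tf.zeros_like(w), w)]

opt = lq.optimizers.CaseOptimizer(
    (lq.optimizers.Bop.is_binary_variable, lq.optimizers.Bop()),
    default_optimizer=tf.keras.optimizers.Adam(0.01),
)

try:
    opt.apply_gradients(grads_and_vars)
except ValueError as err:
    print(err)  # e.g. "Optimizer `<...Bop...>` did not claim any variables."
```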
|
{"golden_diff": "diff --git a/larq/optimizers.py b/larq/optimizers.py\n--- a/larq/optimizers.py\n+++ b/larq/optimizers.py\n@@ -210,6 +210,13 @@\n f\"No `default_optimizer` provided to train variable `{var}`.\"\n )\n \n+ # Make sure that each optimizer touches at least one variable\n+ for optimizer_index, (_, optimizer) in enumerate(self.pred_opt_pairs):\n+ if optimizer_index not in self.var_opt_mapping.values():\n+ raise ValueError(\n+ f\"Optimizer `{optimizer}` did not claim any variables.\"\n+ )\n+\n \n @utils.register_keras_custom_object\n class Bop(tf.keras.optimizers.Optimizer):\n", "issue": "CaseOptimizer should throw if optimizer matches no variable\n### Feature motivation\r\nWe are changing the default behaviour of Bop in #442. If users are not careful this could lead to hard to debug errors when no variable matches any optimizer.\r\n\r\n### Feature description\r\nWe should throw if one optimizer in used by `CaseOptimizer` doesn't receive any variables.\r\n\r\n### Feature implementation\r\nWe should throw if any of the [`len(opt_grads_and_vars) == 0`](https://github.com/larq/larq/blob/c44ed976812edc2cc7d60c98155513942fc3af98/larq/optimizers.py#L140).\n", "before_files": [{"content": "\"\"\"Neural networks with extremely low-precision weights and activations, such as\nBinarized Neural Networks (BNNs), usually contain a mix of low-precision weights (e.g.\n1-bit) and higher-precision weights (e.g. 8-bit, 16-bit, or 32-bit). Examples of this\ninclude the first and last layers of image classificiation models, which have\nhigher-precision weights in most BNN architectures from the literature.\n\nTraining a BNN, then, consists of optimizing both low-precision and higher-precision\nweights. In `larq`, we provide a mechanism to target different bit-precision variables\nwith different optimizers using the `CaseOptimizer` class. Modeled after the\n[`tf.case`](https://www.tensorflow.org/api_docs/python/tf/case) signature,\n`CaseOptimizer` accepts pairs of predicates and optimizers. A predicate, given a\nvariable, decides whether its optimizer should train that variable.\n\nA `CaseOptimizer` behaves much like any other\n[Keras optimizer](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers), and\nonce you instantiate it you can pass it to your `model.compile()` as usual. To\ninstantiate a `CaseOptimzer`, pass one or a list of `(predicate, optimizer)` tuples,\nalong with a `default` optimizer which trains any variables not claimed by another\noptimizer. A variable may not be claimed by more than one optimizer's predicate.\n\n!!! example\n ```python\n no_op_quantizer = lq.quantizers.NoOpQuantizer(precision=1)\n layer = lq.layers.QuantDense(16, kernel_quantizer=no_op_quantizer)\n\n case_optimizer = lq.optimizers.CaseOptimizer(\n (\n lq.optimizers.Bop.is_binary_variable, # predicate\n lq.optimizers.Bop(threshold=1e-6, gamma=1e-3), # optimizer\n ),\n default_optimizer=tf.keras.optimizers.Adam(0.01),\n )\n ```\n\"\"\"\n\n\nimport warnings\nfrom copy import deepcopy\nfrom typing import Callable, Optional, Tuple\n\nimport tensorflow as tf\n\nimport larq as lq\nfrom larq import utils\n\n__all__ = [\"Bop\", \"CaseOptimizer\"]\n\n\[email protected]_keras_custom_object\nclass CaseOptimizer(tf.keras.optimizers.Optimizer):\n \"\"\"An optmizer wrapper that applies different optimizers to a subset of variables.\n\n An optimizer is used to train a variable iff its accompanying predicate evaluates to\n `True`.\n\n For each variable, at most one optimizer's predicate may evaluate to `True`. 
If no\n optimizer's predicate evaluates to `True` for a variable, it is trained with the\n `default_optimizer`. If a variable is claimed by no optimizers and\n `default_optimizer == None`, the variable is not trained.\n\n # Arguments\n predicate_optimizer_pairs: One or more `(pred, tf.keras.optimizers.Optimizer)` pairs,\n where `pred` takes one `tf.Variable` as argument and returns `True` if the\n optimizer should be used for that variable, e.g. `pred(var) == True`.\n default_optimizer: A `tf.keras.optimizers.Optimizer` to be applied to any variable\n not claimed by any other optimizer. (Must be passed as keyword argument.)\n \"\"\"\n\n def __init__(\n self,\n *predicate_optimizer_pairs: Tuple[\n Callable[[tf.Variable], bool], tf.keras.optimizers.Optimizer\n ],\n default_optimizer: Optional[tf.keras.optimizers.Optimizer] = None,\n name: str = \"optimizer_case\",\n ):\n super().__init__(name=name)\n\n # Type checks for (predicate, optimizer) pairs\n for i, (predicate, optimizer) in enumerate(predicate_optimizer_pairs):\n if not callable(predicate):\n raise TypeError(\n f\"Expected callable predicate at `predicate_optimizer_pairs[{i}][0]` but got `{type(predicate)}`.\"\n )\n if not isinstance(optimizer, tf.keras.optimizers.Optimizer):\n raise TypeError(\n f\"Expected `tf.keras.optimizers.Optimizer` at `predicate_optimizer_pairs[{i}][1]` but got `{type(optimizer)}`.\"\n )\n\n # Type check for default optimizers\n if default_optimizer is not None and not isinstance(\n default_optimizer, tf.keras.optimizers.Optimizer\n ):\n raise TypeError(\n f\"Expected `tf.keras.optimizers.Optimizer` for `default_optimizer` but got `{type(default_optimizer)}`.\"\n )\n\n self.pred_opt_pairs = predicate_optimizer_pairs\n self.default = default_optimizer\n\n self.var_opt_mapping = None\n\n # List of optimizers ending in `default_optimizer`, for easier internal access\n self.optimizers = [opt for (_, opt) in self.pred_opt_pairs]\n\n if self.default:\n self.optimizers.append(self.default)\n self.DEFAULT_OPT_INDEX = len(self.pred_opt_pairs)\n\n # Track optimizers to support reloading via tf.train.Checkpoint\n for i, optimizer in enumerate(self.optimizers):\n self._track_trackable(optimizer, name=f\"optimizer_{i}\")\n\n @property\n def weights(self):\n weights = []\n for optimizer in self.optimizers:\n weights.extend(optimizer.weights)\n return weights\n\n def apply_gradients(self, grads_and_vars, name: Optional[str] = None):\n \"\"\"Apply gradients to variables for each optimizer.\n\n On the first call to `apply_gradients()`, compute the mapping from variables to\n optimizers and cache it in the `self.var_opt_mapping` dict for serialization and\n faster access.\n \"\"\"\n\n if self.var_opt_mapping is None:\n # Convert `grads_and_vars` to list so we can iterate multiple times over it\n grads_and_vars = list(grads_and_vars)\n self._compute_var_opt_mapping(grads_and_vars)\n\n # Split gradients and variables into a separate list for each optimizer\n grad_var_lists = [[] for _ in range(len(self.pred_opt_pairs) + 1)]\n for grad, var in grads_and_vars:\n if var.name in self.var_opt_mapping:\n grad_var_lists[self.var_opt_mapping[var.name]].append((grad, var))\n\n # Apply gradients to each optimizer\n train_ops = [\n optimizer.apply_gradients(opt_grads_and_vars)\n for optimizer, opt_grads_and_vars in zip(self.optimizers, grad_var_lists)\n ]\n\n return tf.group(*train_ops, name=\"train_with_group\")\n\n def get_config(self):\n optimizer_configs = [opt.get_config() for (_, opt) in self.pred_opt_pairs]\n default_config = 
self.default.get_config()\n\n config = {\n \"optimizer_configs\": [\n {\"class_name\": optimizer_config[\"name\"], \"config\": optimizer_config}\n for optimizer_config in optimizer_configs\n ],\n \"default_config\": {\n \"class_name\": default_config[\"name\"],\n \"config\": default_config,\n },\n \"var_opt_mapping\": self.var_opt_mapping, # serialized instead of `pred`s\n }\n return {**super().get_config(), **config}\n\n @classmethod\n def from_config(cls, original_config, custom_objects=None):\n config = deepcopy(original_config)\n\n case_optimizer = cls(\n *[ # `(pred, opt)` tuples\n (\n lambda _: False, # placeholder callable (`pred` is not serialized)\n tf.keras.optimizers.deserialize( # optimizer `opt`\n opt_config, custom_objects=custom_objects\n ),\n )\n for opt_config in config[\"optimizer_configs\"]\n ],\n default_optimizer=tf.keras.optimizers.deserialize(\n config[\"default_config\"], custom_objects=custom_objects\n ),\n )\n\n # Since we no longer have the `pred`s, we set the mapping explicitly\n case_optimizer.var_opt_mapping = config[\"var_opt_mapping\"]\n\n return case_optimizer\n\n def _compute_var_opt_mapping(self, grads_and_vars):\n \"\"\"Compute a unique mapping from variables to optimizer indices.\"\"\"\n\n self.var_opt_mapping = {}\n\n for grad, var in grads_and_vars:\n num_optimizers = 0\n\n # Find the optimizer(s) that want to claim this variable\n for optimizer_index, (predicate, _) in enumerate(self.pred_opt_pairs):\n if predicate(var):\n self.var_opt_mapping[var.name] = optimizer_index\n num_optimizers += 1\n\n if num_optimizers > 1:\n raise ValueError(f\"Variable `{var}` claimed by multiple optimizers.\")\n if num_optimizers == 0:\n if self.default is not None:\n self.var_opt_mapping[var.name] = self.DEFAULT_OPT_INDEX\n else:\n warnings.warn(\n f\"No `default_optimizer` provided to train variable `{var}`.\"\n )\n\n\[email protected]_keras_custom_object\nclass Bop(tf.keras.optimizers.Optimizer):\n \"\"\"Binary optimizer (Bop).\n\n Bop is a latent-free optimizer for Binarized Neural Networks (BNNs) and\n Binary Weight Networks (BWN).\n\n Bop maintains an exponential moving average of the gradients controlled by\n `gamma`. If this average exceeds the `threshold`, a weight is flipped.\n\n The hyperparameter `gamma` is somewhat analogues to the learning rate in\n SGD methods: a high `gamma` results in rapid convergence but also makes\n training more noisy.\n\n Note that the default `threshold` is not optimal for all situations.\n Setting the threshold too high results in little learning, while setting it\n too low results in overly noisy behaviour.\n\n !!! warning\n The `is_binary_variable` check of this optimizer will only target variables that\n have been explicitly marked as being binary using `NoOpQuantizer(precision=1)`.\n\n !!! 
example\n ```python\n no_op_quantizer = lq.quantizers.NoOpQuantizer(precision=1)\n layer = lq.layers.QuantDense(16, kernel_quantizer=no_op_quantizer)\n\n optimizer = lq.optimizers.CaseOptimizer(\n (\n lq.optimizers.Bop.is_binary_variable,\n lq.optimizers.Bop(),\n ),\n default_optimizer=tf.keras.optimizers.Adam(0.01), # for FP weights\n )\n ```\n\n # Arguments\n threshold: magnitude of average gradient signal required to flip a weight.\n gamma: the adaptivity rate.\n name: name of the optimizer.\n\n # References\n - [Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization](https://papers.nips.cc/paper/8971-latent-weights-do-not-exist-rethinking-binarized-neural-network-optimization)\n \"\"\"\n\n def __init__(\n self, threshold: float = 1e-8, gamma: float = 1e-4, name: str = \"Bop\", **kwargs\n ):\n super().__init__(name=name, **kwargs)\n\n self._set_hyper(\"threshold\", threshold)\n self._set_hyper(\"gamma\", gamma)\n\n def _create_slots(self, var_list):\n for var in var_list:\n self.add_slot(var, \"m\")\n\n def _get_decayed_hyper(self, name: str, var_dtype):\n hyper = self._get_hyper(name, var_dtype)\n if isinstance(hyper, tf.keras.optimizers.schedules.LearningRateSchedule):\n local_step = tf.cast(self.iterations, var_dtype)\n hyper = tf.cast(hyper(local_step), var_dtype)\n return hyper\n\n def _resource_apply_dense(self, grad, var):\n var_dtype = var.dtype.base_dtype\n gamma = self._get_decayed_hyper(\"gamma\", var_dtype)\n threshold = self._get_decayed_hyper(\"threshold\", var_dtype)\n m = self.get_slot(var, \"m\")\n\n m_t = m.assign_add(gamma * (grad - m))\n var_t = lq.math.sign(-tf.sign(var * m_t - threshold) * var)\n return var.assign(var_t).op\n\n def _resource_apply_sparse(self, grad, var, indices):\n raise NotImplementedError()\n\n def get_config(self):\n config = {\n \"threshold\": self._serialize_hyperparameter(\"threshold\"),\n \"gamma\": self._serialize_hyperparameter(\"gamma\"),\n }\n return {**super().get_config(), **config}\n\n @classmethod\n def from_config(cls, config, custom_objects=None):\n for hyper in (\"gamma\", \"threshold\"):\n if hyper in config and isinstance(config[hyper], dict):\n config[hyper] = tf.keras.optimizers.schedules.deserialize(\n config[hyper], custom_objects=custom_objects\n )\n return cls(**config)\n\n @staticmethod\n def is_binary_variable(var: tf.Variable) -> bool:\n \"\"\"Returns `True` for variables with `var.precision == 1`.\n\n This is an example of a predictate that can be used by the `CaseOptimizer`.\n\n # Arguments\n var: a `tf.Variable`.\n \"\"\"\n return getattr(var, \"precision\", 32) == 1\n", "path": "larq/optimizers.py"}], "after_files": [{"content": "\"\"\"Neural networks with extremely low-precision weights and activations, such as\nBinarized Neural Networks (BNNs), usually contain a mix of low-precision weights (e.g.\n1-bit) and higher-precision weights (e.g. 8-bit, 16-bit, or 32-bit). Examples of this\ninclude the first and last layers of image classificiation models, which have\nhigher-precision weights in most BNN architectures from the literature.\n\nTraining a BNN, then, consists of optimizing both low-precision and higher-precision\nweights. In `larq`, we provide a mechanism to target different bit-precision variables\nwith different optimizers using the `CaseOptimizer` class. Modeled after the\n[`tf.case`](https://www.tensorflow.org/api_docs/python/tf/case) signature,\n`CaseOptimizer` accepts pairs of predicates and optimizers. 
A predicate, given a\nvariable, decides whether its optimizer should train that variable.\n\nA `CaseOptimizer` behaves much like any other\n[Keras optimizer](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers), and\nonce you instantiate it you can pass it to your `model.compile()` as usual. To\ninstantiate a `CaseOptimzer`, pass one or a list of `(predicate, optimizer)` tuples,\nalong with a `default` optimizer which trains any variables not claimed by another\noptimizer. A variable may not be claimed by more than one optimizer's predicate.\n\n!!! example\n ```python\n no_op_quantizer = lq.quantizers.NoOpQuantizer(precision=1)\n layer = lq.layers.QuantDense(16, kernel_quantizer=no_op_quantizer)\n\n case_optimizer = lq.optimizers.CaseOptimizer(\n (\n lq.optimizers.Bop.is_binary_variable, # predicate\n lq.optimizers.Bop(threshold=1e-6, gamma=1e-3), # optimizer\n ),\n default_optimizer=tf.keras.optimizers.Adam(0.01),\n )\n ```\n\"\"\"\n\n\nimport warnings\nfrom copy import deepcopy\nfrom typing import Callable, Optional, Tuple\n\nimport tensorflow as tf\n\nimport larq as lq\nfrom larq import utils\n\n__all__ = [\"Bop\", \"CaseOptimizer\"]\n\n\[email protected]_keras_custom_object\nclass CaseOptimizer(tf.keras.optimizers.Optimizer):\n \"\"\"An optmizer wrapper that applies different optimizers to a subset of variables.\n\n An optimizer is used to train a variable iff its accompanying predicate evaluates to\n `True`.\n\n For each variable, at most one optimizer's predicate may evaluate to `True`. If no\n optimizer's predicate evaluates to `True` for a variable, it is trained with the\n `default_optimizer`. If a variable is claimed by no optimizers and\n `default_optimizer == None`, the variable is not trained.\n\n # Arguments\n predicate_optimizer_pairs: One or more `(pred, tf.keras.optimizers.Optimizer)` pairs,\n where `pred` takes one `tf.Variable` as argument and returns `True` if the\n optimizer should be used for that variable, e.g. `pred(var) == True`.\n default_optimizer: A `tf.keras.optimizers.Optimizer` to be applied to any variable\n not claimed by any other optimizer. 
(Must be passed as keyword argument.)\n \"\"\"\n\n def __init__(\n self,\n *predicate_optimizer_pairs: Tuple[\n Callable[[tf.Variable], bool], tf.keras.optimizers.Optimizer\n ],\n default_optimizer: Optional[tf.keras.optimizers.Optimizer] = None,\n name: str = \"optimizer_case\",\n ):\n super().__init__(name=name)\n\n # Type checks for (predicate, optimizer) pairs\n for i, (predicate, optimizer) in enumerate(predicate_optimizer_pairs):\n if not callable(predicate):\n raise TypeError(\n f\"Expected callable predicate at `predicate_optimizer_pairs[{i}][0]` but got `{type(predicate)}`.\"\n )\n if not isinstance(optimizer, tf.keras.optimizers.Optimizer):\n raise TypeError(\n f\"Expected `tf.keras.optimizers.Optimizer` at `predicate_optimizer_pairs[{i}][1]` but got `{type(optimizer)}`.\"\n )\n\n # Type check for default optimizers\n if default_optimizer is not None and not isinstance(\n default_optimizer, tf.keras.optimizers.Optimizer\n ):\n raise TypeError(\n f\"Expected `tf.keras.optimizers.Optimizer` for `default_optimizer` but got `{type(default_optimizer)}`.\"\n )\n\n self.pred_opt_pairs = predicate_optimizer_pairs\n self.default = default_optimizer\n\n self.var_opt_mapping = None\n\n # List of optimizers ending in `default_optimizer`, for easier internal access\n self.optimizers = [opt for (_, opt) in self.pred_opt_pairs]\n\n if self.default:\n self.optimizers.append(self.default)\n self.DEFAULT_OPT_INDEX = len(self.pred_opt_pairs)\n\n # Track optimizers to support reloading via tf.train.Checkpoint\n for i, optimizer in enumerate(self.optimizers):\n self._track_trackable(optimizer, name=f\"optimizer_{i}\")\n\n @property\n def weights(self):\n weights = []\n for optimizer in self.optimizers:\n weights.extend(optimizer.weights)\n return weights\n\n def apply_gradients(self, grads_and_vars, name: Optional[str] = None):\n \"\"\"Apply gradients to variables for each optimizer.\n\n On the first call to `apply_gradients()`, compute the mapping from variables to\n optimizers and cache it in the `self.var_opt_mapping` dict for serialization and\n faster access.\n \"\"\"\n\n if self.var_opt_mapping is None:\n # Convert `grads_and_vars` to list so we can iterate multiple times over it\n grads_and_vars = list(grads_and_vars)\n self._compute_var_opt_mapping(grads_and_vars)\n\n # Split gradients and variables into a separate list for each optimizer\n grad_var_lists = [[] for _ in range(len(self.pred_opt_pairs) + 1)]\n for grad, var in grads_and_vars:\n if var.name in self.var_opt_mapping:\n grad_var_lists[self.var_opt_mapping[var.name]].append((grad, var))\n\n # Apply gradients to each optimizer\n train_ops = [\n optimizer.apply_gradients(opt_grads_and_vars)\n for optimizer, opt_grads_and_vars in zip(self.optimizers, grad_var_lists)\n ]\n\n return tf.group(*train_ops, name=\"train_with_group\")\n\n def get_config(self):\n optimizer_configs = [opt.get_config() for (_, opt) in self.pred_opt_pairs]\n default_config = self.default.get_config()\n\n config = {\n \"optimizer_configs\": [\n {\"class_name\": optimizer_config[\"name\"], \"config\": optimizer_config}\n for optimizer_config in optimizer_configs\n ],\n \"default_config\": {\n \"class_name\": default_config[\"name\"],\n \"config\": default_config,\n },\n \"var_opt_mapping\": self.var_opt_mapping, # serialized instead of `pred`s\n }\n return {**super().get_config(), **config}\n\n @classmethod\n def from_config(cls, original_config, custom_objects=None):\n config = deepcopy(original_config)\n\n case_optimizer = cls(\n *[ # `(pred, opt)` tuples\n (\n 
lambda _: False, # placeholder callable (`pred` is not serialized)\n tf.keras.optimizers.deserialize( # optimizer `opt`\n opt_config, custom_objects=custom_objects\n ),\n )\n for opt_config in config[\"optimizer_configs\"]\n ],\n default_optimizer=tf.keras.optimizers.deserialize(\n config[\"default_config\"], custom_objects=custom_objects\n ),\n )\n\n # Since we no longer have the `pred`s, we set the mapping explicitly\n case_optimizer.var_opt_mapping = config[\"var_opt_mapping\"]\n\n return case_optimizer\n\n def _compute_var_opt_mapping(self, grads_and_vars):\n \"\"\"Compute a unique mapping from variables to optimizer indices.\"\"\"\n\n self.var_opt_mapping = {}\n\n for grad, var in grads_and_vars:\n num_optimizers = 0\n\n # Find the optimizer(s) that want to claim this variable\n for optimizer_index, (predicate, _) in enumerate(self.pred_opt_pairs):\n if predicate(var):\n self.var_opt_mapping[var.name] = optimizer_index\n num_optimizers += 1\n\n if num_optimizers > 1:\n raise ValueError(f\"Variable `{var}` claimed by multiple optimizers.\")\n if num_optimizers == 0:\n if self.default is not None:\n self.var_opt_mapping[var.name] = self.DEFAULT_OPT_INDEX\n else:\n warnings.warn(\n f\"No `default_optimizer` provided to train variable `{var}`.\"\n )\n\n # Make sure that each optimizer touches at least one variable\n for optimizer_index, (_, optimizer) in enumerate(self.pred_opt_pairs):\n if optimizer_index not in self.var_opt_mapping.values():\n raise ValueError(\n f\"Optimizer `{optimizer}` did not claim any variables.\"\n )\n\n\[email protected]_keras_custom_object\nclass Bop(tf.keras.optimizers.Optimizer):\n \"\"\"Binary optimizer (Bop).\n\n Bop is a latent-free optimizer for Binarized Neural Networks (BNNs) and\n Binary Weight Networks (BWN).\n\n Bop maintains an exponential moving average of the gradients controlled by\n `gamma`. If this average exceeds the `threshold`, a weight is flipped.\n\n The hyperparameter `gamma` is somewhat analogues to the learning rate in\n SGD methods: a high `gamma` results in rapid convergence but also makes\n training more noisy.\n\n Note that the default `threshold` is not optimal for all situations.\n Setting the threshold too high results in little learning, while setting it\n too low results in overly noisy behaviour.\n\n !!! warning\n The `is_binary_variable` check of this optimizer will only target variables that\n have been explicitly marked as being binary using `NoOpQuantizer(precision=1)`.\n\n !!! 
example\n ```python\n no_op_quantizer = lq.quantizers.NoOpQuantizer(precision=1)\n layer = lq.layers.QuantDense(16, kernel_quantizer=no_op_quantizer)\n\n optimizer = lq.optimizers.CaseOptimizer(\n (\n lq.optimizers.Bop.is_binary_variable,\n lq.optimizers.Bop(),\n ),\n default_optimizer=tf.keras.optimizers.Adam(0.01), # for FP weights\n )\n ```\n\n # Arguments\n threshold: magnitude of average gradient signal required to flip a weight.\n gamma: the adaptivity rate.\n name: name of the optimizer.\n\n # References\n - [Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization](https://papers.nips.cc/paper/8971-latent-weights-do-not-exist-rethinking-binarized-neural-network-optimization)\n \"\"\"\n\n def __init__(\n self, threshold: float = 1e-8, gamma: float = 1e-4, name: str = \"Bop\", **kwargs\n ):\n super().__init__(name=name, **kwargs)\n\n self._set_hyper(\"threshold\", threshold)\n self._set_hyper(\"gamma\", gamma)\n\n def _create_slots(self, var_list):\n for var in var_list:\n self.add_slot(var, \"m\")\n\n def _get_decayed_hyper(self, name: str, var_dtype):\n hyper = self._get_hyper(name, var_dtype)\n if isinstance(hyper, tf.keras.optimizers.schedules.LearningRateSchedule):\n local_step = tf.cast(self.iterations, var_dtype)\n hyper = tf.cast(hyper(local_step), var_dtype)\n return hyper\n\n def _resource_apply_dense(self, grad, var):\n var_dtype = var.dtype.base_dtype\n gamma = self._get_decayed_hyper(\"gamma\", var_dtype)\n threshold = self._get_decayed_hyper(\"threshold\", var_dtype)\n m = self.get_slot(var, \"m\")\n\n m_t = m.assign_add(gamma * (grad - m))\n var_t = lq.math.sign(-tf.sign(var * m_t - threshold) * var)\n return var.assign(var_t).op\n\n def _resource_apply_sparse(self, grad, var, indices):\n raise NotImplementedError()\n\n def get_config(self):\n config = {\n \"threshold\": self._serialize_hyperparameter(\"threshold\"),\n \"gamma\": self._serialize_hyperparameter(\"gamma\"),\n }\n return {**super().get_config(), **config}\n\n @classmethod\n def from_config(cls, config, custom_objects=None):\n for hyper in (\"gamma\", \"threshold\"):\n if hyper in config and isinstance(config[hyper], dict):\n config[hyper] = tf.keras.optimizers.schedules.deserialize(\n config[hyper], custom_objects=custom_objects\n )\n return cls(**config)\n\n @staticmethod\n def is_binary_variable(var: tf.Variable) -> bool:\n \"\"\"Returns `True` for variables with `var.precision == 1`.\n\n This is an example of a predictate that can be used by the `CaseOptimizer`.\n\n # Arguments\n var: a `tf.Variable`.\n \"\"\"\n return getattr(var, \"precision\", 32) == 1\n", "path": "larq/optimizers.py"}]}
| 4,079 | 159 |
gh_patches_debug_32176 | rasdani/github-patches | git_diff | Qiskit__qiskit-2288 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error when number of qubits is of type numpy.int64
### What is the expected enhancement?
In `qiskit/validation/base.py`, function `check_types`: currently, if `n_qubits` or `memory_slots` are of type `numpy.int64`, then an error is triggered, because type `int` is expected.
I find it too strict. Especially considering that if the number of qubits is originated in a `numpy` array, then its default type is `numpy.int64`. Terra can allow additional types, or convert the type internally.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `qiskit/circuit/register.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2017.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15 """
16 Base register reference object.
17 """
18 import re
19 import logging
20 import itertools
21
22 from qiskit.exceptions import QiskitError, QiskitIndexError
23
24 logger = logging.getLogger(__name__)
25
26
27 class Register:
28 """Implement a generic register."""
29
30 # Counter for the number of instances in this class.
31 instances_counter = itertools.count()
32 # Prefix to use for auto naming.
33 prefix = 'reg'
34
35 def __init__(self, size, name=None):
36 """Create a new generic register.
37 """
38
39 if name is None:
40 name = '%s%i' % (self.prefix, next(self.instances_counter))
41
42 if not isinstance(name, str):
43 raise QiskitError("The circuit name should be a string "
44 "(or None for autogenerate a name).")
45
46 test = re.compile('[a-z][a-zA-Z0-9_]*')
47 if test.match(name) is None:
48 raise QiskitError("%s is an invalid OPENQASM register name." % name)
49
50 self.name = name
51 self.size = size
52 if size <= 0:
53 raise QiskitError("register size must be positive")
54
55 def __repr__(self):
56 """Return the official string representing the register."""
57 return "%s(%d, '%s')" % (self.__class__.__qualname__,
58 self.size, self.name)
59
60 def __len__(self):
61 """Return register size"""
62 return self.size
63
64 def check_range(self, j):
65 """Check that j is a valid index into self."""
66 if isinstance(j, int):
67 if j < 0 or j >= self.size:
68 raise QiskitIndexError("register index out of range")
69 elif isinstance(j, slice):
70 if j.start < 0 or j.stop >= self.size or (j.step is not None and
71 j.step <= 0):
72 raise QiskitIndexError("register index slice out of range")
73
74 def __getitem__(self, key):
75 """
76 Arg:
77 key (int|slice|list): index of the bit/qubit to be retrieved.
78
79 Returns:
80 tuple[Register, int]: a tuple in the form `(self, key)` if key is int.
81 If key is a slice, return a `list((self,key))`.
82
83 Raises:
84 QiskitError: if the `key` is not an integer.
85 QiskitIndexError: if the `key` is not in the range
86 `(0, self.size)`.
87 """
88 if not isinstance(key, (int, slice, list)):
89 raise QiskitError("expected integer or slice index into register")
90 if isinstance(key, int) and key < 0:
91 key = self.size + key
92 self.check_range(key)
93 if isinstance(key, slice):
94 return [(self, ind) for ind in range(*key.indices(len(self)))]
95 elif isinstance(key, list): # list of qubit indices
96 if max(key) < len(self):
97 return [(self, ind) for ind in key]
98 else:
99 raise QiskitError('register index out of range')
100 else:
101 return self, key
102
103 def __iter__(self):
104 """
105 Returns:
106 iterator: an iterator over the bits/qubits of the register, in the
107 form `tuple (Register, int)`.
108 """
109 return zip([self]*self.size, range(self.size))
110
111 def __eq__(self, other):
112 """Two Registers are the same if they are of the same type
113 (i.e. quantum/classical), and have the same name and size.
114
115 Args:
116 other (Register): other Register
117
118 Returns:
119 bool: are self and other equal.
120 """
121 res = False
122 if type(self) is type(other) and \
123 self.name == other.name and \
124 self.size == other.size:
125 res = True
126 return res
127
128 def __hash__(self):
129 """Make object hashable, based on the name and size to hash."""
130 return hash((type(self), self.name, self.size))
131
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/qiskit/circuit/register.py b/qiskit/circuit/register.py
--- a/qiskit/circuit/register.py
+++ b/qiskit/circuit/register.py
@@ -36,21 +36,29 @@
"""Create a new generic register.
"""
+ # validate (or cast) size
+ try:
+ size = int(size)
+ except Exception:
+ raise QiskitError("size needs to be castable to an int")
+ if size <= 0:
+ raise QiskitError("register size must be positive")
+
+ # validate (or cast) name
if name is None:
name = '%s%i' % (self.prefix, next(self.instances_counter))
-
- if not isinstance(name, str):
- raise QiskitError("The circuit name should be a string "
- "(or None for autogenerate a name).")
-
- test = re.compile('[a-z][a-zA-Z0-9_]*')
- if test.match(name) is None:
- raise QiskitError("%s is an invalid OPENQASM register name." % name)
+ else:
+ try:
+ name = str(name)
+ except Exception:
+ raise QiskitError("The circuit name should be castable to a string "
+ "(or None for autogenerate a name).")
+ name_format = re.compile('[a-z][a-zA-Z0-9_]*')
+ if name_format.match(name) is None:
+ raise QiskitError("%s is an invalid OPENQASM register name." % name)
self.name = name
self.size = size
- if size <= 0:
- raise QiskitError("register size must be positive")
def __repr__(self):
"""Return the official string representing the register."""
@@ -106,7 +114,7 @@
iterator: an iterator over the bits/qubits of the register, in the
form `tuple (Register, int)`.
"""
- return zip([self]*self.size, range(self.size))
+ return zip([self] * self.size, range(self.size))
def __eq__(self, other):
"""Two Registers are the same if they are of the same type
|
{"golden_diff": "diff --git a/qiskit/circuit/register.py b/qiskit/circuit/register.py\n--- a/qiskit/circuit/register.py\n+++ b/qiskit/circuit/register.py\n@@ -36,21 +36,29 @@\n \"\"\"Create a new generic register.\n \"\"\"\n \n+ # validate (or cast) size\n+ try:\n+ size = int(size)\n+ except Exception:\n+ raise QiskitError(\"size needs to be castable to an int\")\n+ if size <= 0:\n+ raise QiskitError(\"register size must be positive\")\n+\n+ # validate (or cast) name\n if name is None:\n name = '%s%i' % (self.prefix, next(self.instances_counter))\n-\n- if not isinstance(name, str):\n- raise QiskitError(\"The circuit name should be a string \"\n- \"(or None for autogenerate a name).\")\n-\n- test = re.compile('[a-z][a-zA-Z0-9_]*')\n- if test.match(name) is None:\n- raise QiskitError(\"%s is an invalid OPENQASM register name.\" % name)\n+ else:\n+ try:\n+ name = str(name)\n+ except Exception:\n+ raise QiskitError(\"The circuit name should be castable to a string \"\n+ \"(or None for autogenerate a name).\")\n+ name_format = re.compile('[a-z][a-zA-Z0-9_]*')\n+ if name_format.match(name) is None:\n+ raise QiskitError(\"%s is an invalid OPENQASM register name.\" % name)\n \n self.name = name\n self.size = size\n- if size <= 0:\n- raise QiskitError(\"register size must be positive\")\n \n def __repr__(self):\n \"\"\"Return the official string representing the register.\"\"\"\n@@ -106,7 +114,7 @@\n iterator: an iterator over the bits/qubits of the register, in the\n form `tuple (Register, int)`.\n \"\"\"\n- return zip([self]*self.size, range(self.size))\n+ return zip([self] * self.size, range(self.size))\n \n def __eq__(self, other):\n \"\"\"Two Registers are the same if they are of the same type\n", "issue": "Error when number of qubits is of type numpy.int64\n<!-- \u26a0\ufe0f If you do not respect this template, your issue will be closed -->\r\n<!-- \u26a0\ufe0f Make sure to browse the opened and closed issues to confirm this idea does not exist. -->\r\n\r\n### What is the expected enhancement?\r\n\r\nIn `qiskit/validation/base.py`, function `check_types`: currently, if `n_qubits` or `memory_slots` are of type `numpy.int64`, then an error is triggered, because type `int` is expected.\r\n\r\nI find it too strict. Especially considering that if the number of qubits is originated in a `numpy` array, then its default type is `numpy.int64`. Terra can allow additional types, or convert the type internally.\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2017.\n#\n# This code is licensed under the Apache License, Version 2.0. 
You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\n\"\"\"\nBase register reference object.\n\"\"\"\nimport re\nimport logging\nimport itertools\n\nfrom qiskit.exceptions import QiskitError, QiskitIndexError\n\nlogger = logging.getLogger(__name__)\n\n\nclass Register:\n \"\"\"Implement a generic register.\"\"\"\n\n # Counter for the number of instances in this class.\n instances_counter = itertools.count()\n # Prefix to use for auto naming.\n prefix = 'reg'\n\n def __init__(self, size, name=None):\n \"\"\"Create a new generic register.\n \"\"\"\n\n if name is None:\n name = '%s%i' % (self.prefix, next(self.instances_counter))\n\n if not isinstance(name, str):\n raise QiskitError(\"The circuit name should be a string \"\n \"(or None for autogenerate a name).\")\n\n test = re.compile('[a-z][a-zA-Z0-9_]*')\n if test.match(name) is None:\n raise QiskitError(\"%s is an invalid OPENQASM register name.\" % name)\n\n self.name = name\n self.size = size\n if size <= 0:\n raise QiskitError(\"register size must be positive\")\n\n def __repr__(self):\n \"\"\"Return the official string representing the register.\"\"\"\n return \"%s(%d, '%s')\" % (self.__class__.__qualname__,\n self.size, self.name)\n\n def __len__(self):\n \"\"\"Return register size\"\"\"\n return self.size\n\n def check_range(self, j):\n \"\"\"Check that j is a valid index into self.\"\"\"\n if isinstance(j, int):\n if j < 0 or j >= self.size:\n raise QiskitIndexError(\"register index out of range\")\n elif isinstance(j, slice):\n if j.start < 0 or j.stop >= self.size or (j.step is not None and\n j.step <= 0):\n raise QiskitIndexError(\"register index slice out of range\")\n\n def __getitem__(self, key):\n \"\"\"\n Arg:\n key (int|slice|list): index of the bit/qubit to be retrieved.\n\n Returns:\n tuple[Register, int]: a tuple in the form `(self, key)` if key is int.\n If key is a slice, return a `list((self,key))`.\n\n Raises:\n QiskitError: if the `key` is not an integer.\n QiskitIndexError: if the `key` is not in the range\n `(0, self.size)`.\n \"\"\"\n if not isinstance(key, (int, slice, list)):\n raise QiskitError(\"expected integer or slice index into register\")\n if isinstance(key, int) and key < 0:\n key = self.size + key\n self.check_range(key)\n if isinstance(key, slice):\n return [(self, ind) for ind in range(*key.indices(len(self)))]\n elif isinstance(key, list): # list of qubit indices\n if max(key) < len(self):\n return [(self, ind) for ind in key]\n else:\n raise QiskitError('register index out of range')\n else:\n return self, key\n\n def __iter__(self):\n \"\"\"\n Returns:\n iterator: an iterator over the bits/qubits of the register, in the\n form `tuple (Register, int)`.\n \"\"\"\n return zip([self]*self.size, range(self.size))\n\n def __eq__(self, other):\n \"\"\"Two Registers are the same if they are of the same type\n (i.e. 
quantum/classical), and have the same name and size.\n\n Args:\n other (Register): other Register\n\n Returns:\n bool: are self and other equal.\n \"\"\"\n res = False\n if type(self) is type(other) and \\\n self.name == other.name and \\\n self.size == other.size:\n res = True\n return res\n\n def __hash__(self):\n \"\"\"Make object hashable, based on the name and size to hash.\"\"\"\n return hash((type(self), self.name, self.size))\n", "path": "qiskit/circuit/register.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2017.\n#\n# This code is licensed under the Apache License, Version 2.0. You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\n\"\"\"\nBase register reference object.\n\"\"\"\nimport re\nimport logging\nimport itertools\n\nfrom qiskit.exceptions import QiskitError, QiskitIndexError\n\nlogger = logging.getLogger(__name__)\n\n\nclass Register:\n \"\"\"Implement a generic register.\"\"\"\n\n # Counter for the number of instances in this class.\n instances_counter = itertools.count()\n # Prefix to use for auto naming.\n prefix = 'reg'\n\n def __init__(self, size, name=None):\n \"\"\"Create a new generic register.\n \"\"\"\n\n # validate (or cast) size\n try:\n size = int(size)\n except Exception:\n raise QiskitError(\"size needs to be castable to an int\")\n if size <= 0:\n raise QiskitError(\"register size must be positive\")\n\n # validate (or cast) name\n if name is None:\n name = '%s%i' % (self.prefix, next(self.instances_counter))\n else:\n try:\n name = str(name)\n except Exception:\n raise QiskitError(\"The circuit name should be castable to a string \"\n \"(or None for autogenerate a name).\")\n name_format = re.compile('[a-z][a-zA-Z0-9_]*')\n if name_format.match(name) is None:\n raise QiskitError(\"%s is an invalid OPENQASM register name.\" % name)\n\n self.name = name\n self.size = size\n\n def __repr__(self):\n \"\"\"Return the official string representing the register.\"\"\"\n return \"%s(%d, '%s')\" % (self.__class__.__qualname__,\n self.size, self.name)\n\n def __len__(self):\n \"\"\"Return register size\"\"\"\n return self.size\n\n def check_range(self, j):\n \"\"\"Check that j is a valid index into self.\"\"\"\n if isinstance(j, int):\n if j < 0 or j >= self.size:\n raise QiskitIndexError(\"register index out of range\")\n elif isinstance(j, slice):\n if j.start < 0 or j.stop >= self.size or (j.step is not None and\n j.step <= 0):\n raise QiskitIndexError(\"register index slice out of range\")\n\n def __getitem__(self, key):\n \"\"\"\n Arg:\n key (int|slice|list): index of the bit/qubit to be retrieved.\n\n Returns:\n tuple[Register, int]: a tuple in the form `(self, key)` if key is int.\n If key is a slice, return a `list((self,key))`.\n\n Raises:\n QiskitError: if the `key` is not an integer.\n QiskitIndexError: if the `key` is not in the range\n `(0, self.size)`.\n \"\"\"\n if not isinstance(key, (int, slice, list)):\n raise QiskitError(\"expected integer or slice index into register\")\n if isinstance(key, int) and key < 0:\n key = self.size + key\n self.check_range(key)\n if isinstance(key, slice):\n return [(self, ind) for ind in range(*key.indices(len(self)))]\n elif isinstance(key, 
list): # list of qubit indices\n if max(key) < len(self):\n return [(self, ind) for ind in key]\n else:\n raise QiskitError('register index out of range')\n else:\n return self, key\n\n def __iter__(self):\n \"\"\"\n Returns:\n iterator: an iterator over the bits/qubits of the register, in the\n form `tuple (Register, int)`.\n \"\"\"\n return zip([self] * self.size, range(self.size))\n\n def __eq__(self, other):\n \"\"\"Two Registers are the same if they are of the same type\n (i.e. quantum/classical), and have the same name and size.\n\n Args:\n other (Register): other Register\n\n Returns:\n bool: are self and other equal.\n \"\"\"\n res = False\n if type(self) is type(other) and \\\n self.name == other.name and \\\n self.size == other.size:\n res = True\n return res\n\n def __hash__(self):\n \"\"\"Make object hashable, based on the name and size to hash.\"\"\"\n return hash((type(self), self.name, self.size))\n", "path": "qiskit/circuit/register.py"}]}
| 1,723 | 509 |
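For readers skimming the record above: the accepted patch fixes the issue by casting `size` with `int()` before validating it, so numpy integer sizes such as `numpy.int64` pass. Below is a standalone sketch of that cast-then-validate pattern; the function name and the plain `ValueError` are illustrative stand-ins rather than Qiskit API, and `numpy` is assumed to be installed.

```python
import numpy as np


def validate_register_size(size):
    # Cast first, mirroring the `size = int(size)` step in the patch above,
    # so numpy integer types (e.g. numpy.int64) are accepted.
    try:
        size = int(size)
    except Exception:
        raise ValueError("size needs to be castable to an int")
    if size <= 0:
        raise ValueError("register size must be positive")
    return size


print(validate_register_size(np.int64(5)))  # 5
print(validate_register_size(3))            # 3
```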
gh_patches_debug_4874 | rasdani/github-patches | git_diff | facebookresearch__CompilerGym-418 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use round-robin ordering for leaderboard experiment execution
## 🚀 Feature
Don't repeat all `--n` runs of each benchmark in order, perform 1 run of each benchmark, then proceed to the 2nd run, etc, until all `n` runs have been completed.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `compiler_gym/leaderboard/llvm_instcount.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates.
2 #
3 # This source code is licensed under the MIT license found in the
4 # LICENSE file in the root directory of this source tree.
5 """LLVM is a popular open source compiler used widely in industry and research.
6 The :code:`llvm-ic-v0` environment exposes LLVM's optimizing passes as a set of
7 actions that can be applied to a particular program. The goal of the agent is to
8 select the sequence of optimizations that lead to the greatest reduction in
9 instruction count in the program being compiled. Reward is the reduction in
10 instruction count achieved scaled to the reduction achieved by LLVM's builtin
11 :code:`-Oz` pipeline.
12
13 +--------------------+------------------------------------------------------+
14 | Property | Value |
15 +====================+======================================================+
16 | Environment | :class:`LlvmEnv <compiler_gym.envs.LlvmEnv>`. |
17 +--------------------+------------------------------------------------------+
18 | Observation Space | Any. |
19 +--------------------+------------------------------------------------------+
20 | Reward Space | Instruction count reduction relative to :code:`-Oz`. |
21 +--------------------+------------------------------------------------------+
22 | Test Dataset | The 23 cBench benchmarks. |
23 +--------------------+------------------------------------------------------+
24
25 Users who wish to create a submission for this leaderboard may use
26 :func:`eval_llvm_instcount_policy()
27 <compiler_gym.leaderboard.llvm_instcount.eval_llvm_instcount_policy>` to
28 automatically evaluate their agent on the test set.
29 """
30 import logging
31 import os
32 from itertools import islice
33 from pathlib import Path
34 from threading import Thread
35 from time import sleep
36 from typing import Callable, List
37
38 import gym
39 import humanize
40 from absl import app, flags
41
42 import compiler_gym.envs # noqa Register environments.
43 from compiler_gym.bin.validate import main as validate
44 from compiler_gym.compiler_env_state import (
45 CompilerEnvState,
46 CompilerEnvStateReader,
47 CompilerEnvStateWriter,
48 )
49 from compiler_gym.envs import LlvmEnv
50 from compiler_gym.util.statistics import arithmetic_mean, geometric_mean
51 from compiler_gym.util.timer import Timer, humanize_duration_hms
52
53 flags.DEFINE_string(
54 "leaderboard_results",
55 "llvm_instcount-results.csv",
56 "The path of the file to write results to.",
57 )
58 flags.DEFINE_string(
59 "leaderboard_logfile",
60 "llvm_instcount-results.log",
61 "The path of a file to stream CompilerGym logs to.",
62 )
63 flags.DEFINE_integer(
64 "max_benchmarks",
65 0,
66 "If > 0, use only the the first --max_benchmarks benchmarks from the "
67 "dataset, as determined by alphabetical sort. If not set, all benchmarks "
68 "from the dataset are used.",
69 )
70 flags.DEFINE_integer(
71 "n", 10, "The number of repetitions of the search to run for each benchmark."
72 )
73 flags.DEFINE_string("test_dataset", "cbench-v1", "The dataset to use for the search.")
74 flags.DEFINE_boolean("validate", True, "Run validation on the results.")
75 flags.DEFINE_boolean(
76 "resume",
77 False,
78 "If true, read the --leaderboard_results file first and run only the "
79 "evaluations not already in the results file.",
80 )
81 FLAGS = flags.FLAGS
82
83 # A policy is a function that accepts as input an LLVM environment, and
84 # interacts with that environment with the goal of maximising cumulative reward.
85 Policy = Callable[[LlvmEnv], None]
86
87
88 class _EvalPolicyWorker(Thread):
89 """Worker thread to evaluate a policy."""
90
91 def __init__(
92 self,
93 env: LlvmEnv,
94 benchmarks: List[str],
95 policy: Policy,
96 init_states: List[CompilerEnvState],
97 ):
98 super().__init__()
99 self.env = env
100 self.benchmarks = benchmarks
101 self.policy = policy
102 self.states: List[CompilerEnvState] = init_states
103 self.alive = True
104
105 def run(self):
106 # Determine if we need to print a header.
107 header = (
108 not Path(FLAGS.leaderboard_results).is_file()
109 or os.stat(FLAGS.leaderboard_results).st_size == 0
110 )
111 with CompilerEnvStateWriter(
112 open(FLAGS.leaderboard_results, "a"), header=header
113 ) as writer:
114 for benchmark in self.benchmarks:
115 self.env.reset(benchmark=benchmark)
116 with Timer() as timer:
117 self.policy(self.env)
118
119 # Sanity check that the policy didn't change the expected
120 # experimental setup.
121 assert self.env.in_episode, "Environment is no longer in an episode"
122 assert self.env.benchmark and (
123 self.env.benchmark == benchmark
124 ), "Policy changed environment benchmark"
125 assert self.env.reward_space, "Policy unset environment reward space"
126 assert (
127 self.env.reward_space.id == "IrInstructionCountOz"
128 ), "Policy changed environment reward space"
129
130 # Override walltime in the generated state.
131 state = self.env.state.copy()
132 state.walltime = timer.time
133
134 writer.write_state(state, flush=True)
135 self.states.append(state)
136
137 if not self.alive:
138 return
139
140
141 def eval_llvm_instcount_policy(policy: Policy) -> None:
142 """Evaluate an LLVM codesize policy and generate results for a leaderboard
143 submission.
144
145 To use it, you define your policy as a function that takes an
146 :class:`LlvmEnv <compiler_gym.envs.LlvmEnv>` instance as input and modifies
147 it in place. For example, for a trivial random policy:
148
149 >>> from compiler_gym.envs import LlvmEnv
150 >>> def my_policy(env: LlvmEnv) -> None:
151 .... # Defines a policy that takes 10 random steps.
152 ... for _ in range(10):
153 ... _, _, done, _ = env.step(env.action_space.sample())
154 ... if done: break
155
156 If your policy is stateful, you can use a class and override the
157 :code:`__call__()` method:
158
159 >>> class MyPolicy:
160 ... def __init__(self):
161 ... self.my_stateful_vars = {} # or similar
162 ... def __call__(self, env: LlvmEnv) -> None:
163 ... pass # ... do fun stuff!
164 >>> my_policy = MyPolicy()
165
166 The role of your policy is to perform a sequence of actions on the supplied
167 environment so as to maximize cumulative reward. By default, no observation
168 space is set on the environment, so :meth:`env.step()
169 <compiler_gym.envs.CompilerEnv.step>` will return :code:`None` for the
170 observation. You may set a new observation space:
171
172 >>> env.observation_space = "InstCount" # Set a new space for env.step()
173 >>> env.observation["InstCount"] # Calculate a one-off observation.
174
175 However, the policy may not change the reward space of the environment, or
176 the benchmark.
177
178 Once you have defined your policy, call the
179 :func:`eval_llvm_instcount_policy()
180 <compiler_gym.leaderboard.llvm_instcount.eval_llvm_instcount_policy>` helper
181 function, passing it your policy as its only argument:
182
183 >>> eval_llvm_instcount_policy(my_policy)
184
185 Put together as a complete example, a leaderboard submission script may look
186 like:
187
188 .. code-block:: python
189
190 # my_policy.py
191 from compiler_gym.leaderboard.llvm_instcount import eval_llvm_instcount_policy
192 from compiler_gym.envs import LlvmEnv
193
194 def my_policy(env: LlvmEnv) -> None:
195 env.observation_space = "InstCount" # we're going to use instcount space
196 pass # ... do fun stuff!
197
198 if __name__ == "__main__":
199 eval_llvm_instcount_policy(my_policy)
200
201 The :func:`eval_llvm_instcount_policy()
202 <compiler_gym.leaderboard.llvm_instcount.eval_llvm_instcount_policy>` helper
203 defines a number of commandline flags that can be overriden to control the
204 behavior of the evaluation. For example the flag :code:`--n` determines the
205 number of times the policy is run on each benchmark (default is 10), and
206 :code:`--leaderboard_results` determines the path of the generated results file:
207
208 .. code-block::
209
210 $ python my_policy.py --n=5 --leaderboard_results=my_policy_results.csv
211
212 You can use :code:`--helpfull` flag to list all of the flags that are
213 defined:
214
215 .. code-block::
216
217 $ python my_policy.py --helpfull
218
219 Once you are happy with your approach, see the `contributing guide
220 <https://github.com/facebookresearch/CompilerGym/blob/development/CONTRIBUTING.md#leaderboard-submissions>`_
221 for instructions on preparing a submission to the leaderboard.
222 """
223
224 def main(argv):
225 assert len(argv) == 1, f"Unknown args: {argv[:1]}"
226 assert FLAGS.n > 0, "n must be > 0"
227
228 with gym.make("llvm-ic-v0") as env:
229
230 # Stream verbose CompilerGym logs to file.
231 logger = logging.getLogger("compiler_gym")
232 logger.setLevel(logging.DEBUG)
233 env.logger.setLevel(logging.DEBUG)
234 log_handler = logging.FileHandler(FLAGS.leaderboard_logfile)
235 logger.addHandler(log_handler)
236 logger.propagate = False
237
238 print(f"Writing results to {FLAGS.leaderboard_results}")
239 print(f"Writing logs to {FLAGS.leaderboard_logfile}")
240
241 # Build the list of benchmarks to evaluate.
242 benchmarks = env.datasets[FLAGS.test_dataset].benchmark_uris()
243 if FLAGS.max_benchmarks:
244 benchmarks = islice(benchmarks, FLAGS.max_benchmarks)
245 benchmarks = list(benchmarks)
246
247 # Repeat the searches for the requested number of iterations.
248 benchmarks *= FLAGS.n
249 benchmarks = sorted(benchmarks)
250 total_count = len(benchmarks)
251
252 # If we are resuming from a previous job, read the states that have
253 # already been proccessed and remove those benchmarks from the list
254 # of benchmarks to evaluate.
255 init_states = []
256 if FLAGS.resume and Path(FLAGS.leaderboard_results).is_file():
257 with CompilerEnvStateReader(open(FLAGS.leaderboard_results)) as reader:
258 for state in reader:
259 init_states.append(state)
260 if state.benchmark in benchmarks:
261 benchmarks.remove(state.benchmark)
262
263 # Run the benchmark loop in background so that we can asynchronously
264 # log progress.
265 worker = _EvalPolicyWorker(env, benchmarks, policy, init_states)
266 worker.start()
267 timer = Timer().reset()
268 try:
269 print(
270 f"=== Evaluating policy on "
271 f"{humanize.intcomma(total_count)} "
272 f"{FLAGS.test_dataset} benchmarks ==="
273 "\n\n" # Blank lines will be filled below
274 )
275 while worker.is_alive():
276 done_count = len(worker.states)
277 remaining_count = total_count - done_count
278 time = timer.time
279 gmean_reward = geometric_mean([s.reward for s in worker.states])
280 mean_walltime = (
281 arithmetic_mean([s.walltime for s in worker.states]) or time
282 )
283 print(
284 "\r\033[2A"
285 "\033[K"
286 f"Runtime: {humanize_duration_hms(time)}. "
287 f"Estimated completion: {humanize_duration_hms(time + mean_walltime * remaining_count)}. "
288 f"Completed: {humanize.intcomma(done_count)} / {humanize.intcomma(total_count)} "
289 f"({done_count / total_count:.1%})."
290 "\n\033[K"
291 f"Current mean walltime: {mean_walltime:.3f}s / benchmark."
292 "\n\033[K"
293 f"Current geomean reward: {gmean_reward:.4f}.",
294 flush=True,
295 end="",
296 )
297 sleep(1)
298 except KeyboardInterrupt:
299 print("\nkeyboard interrupt", flush=True)
300 worker.alive = False
301 # User interrupt, don't validate.
302 FLAGS.validate = False
303
304 if FLAGS.validate:
305 FLAGS.env = "llvm-ic-v0"
306 validate(["argv0", FLAGS.leaderboard_results])
307
308 app.run(main)
309
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/compiler_gym/leaderboard/llvm_instcount.py b/compiler_gym/leaderboard/llvm_instcount.py
--- a/compiler_gym/leaderboard/llvm_instcount.py
+++ b/compiler_gym/leaderboard/llvm_instcount.py
@@ -246,7 +246,6 @@
# Repeat the searches for the requested number of iterations.
benchmarks *= FLAGS.n
- benchmarks = sorted(benchmarks)
total_count = len(benchmarks)
# If we are resuming from a previous job, read the states that have
|
{"golden_diff": "diff --git a/compiler_gym/leaderboard/llvm_instcount.py b/compiler_gym/leaderboard/llvm_instcount.py\n--- a/compiler_gym/leaderboard/llvm_instcount.py\n+++ b/compiler_gym/leaderboard/llvm_instcount.py\n@@ -246,7 +246,6 @@\n \n # Repeat the searches for the requested number of iterations.\n benchmarks *= FLAGS.n\n- benchmarks = sorted(benchmarks)\n total_count = len(benchmarks)\n \n # If we are resuming from a previous job, read the states that have\n", "issue": "Use round-robin ordering for leaderboard experiment execution\n## \ud83d\ude80 Feature\r\n\r\nDon't repeat all `--n` runs of each benchmark in order, perform 1 run of each benchmark, then proceed to the 2nd run, etc, until all `n` runs have been completed.\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates.\n#\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\"\"\"LLVM is a popular open source compiler used widely in industry and research.\nThe :code:`llvm-ic-v0` environment exposes LLVM's optimizing passes as a set of\nactions that can be applied to a particular program. The goal of the agent is to\nselect the sequence of optimizations that lead to the greatest reduction in\ninstruction count in the program being compiled. Reward is the reduction in\ninstruction count achieved scaled to the reduction achieved by LLVM's builtin\n:code:`-Oz` pipeline.\n\n+--------------------+------------------------------------------------------+\n| Property | Value |\n+====================+======================================================+\n| Environment | :class:`LlvmEnv <compiler_gym.envs.LlvmEnv>`. |\n+--------------------+------------------------------------------------------+\n| Observation Space | Any. |\n+--------------------+------------------------------------------------------+\n| Reward Space | Instruction count reduction relative to :code:`-Oz`. |\n+--------------------+------------------------------------------------------+\n| Test Dataset | The 23 cBench benchmarks. |\n+--------------------+------------------------------------------------------+\n\nUsers who wish to create a submission for this leaderboard may use\n:func:`eval_llvm_instcount_policy()\n<compiler_gym.leaderboard.llvm_instcount.eval_llvm_instcount_policy>` to\nautomatically evaluate their agent on the test set.\n\"\"\"\nimport logging\nimport os\nfrom itertools import islice\nfrom pathlib import Path\nfrom threading import Thread\nfrom time import sleep\nfrom typing import Callable, List\n\nimport gym\nimport humanize\nfrom absl import app, flags\n\nimport compiler_gym.envs # noqa Register environments.\nfrom compiler_gym.bin.validate import main as validate\nfrom compiler_gym.compiler_env_state import (\n CompilerEnvState,\n CompilerEnvStateReader,\n CompilerEnvStateWriter,\n)\nfrom compiler_gym.envs import LlvmEnv\nfrom compiler_gym.util.statistics import arithmetic_mean, geometric_mean\nfrom compiler_gym.util.timer import Timer, humanize_duration_hms\n\nflags.DEFINE_string(\n \"leaderboard_results\",\n \"llvm_instcount-results.csv\",\n \"The path of the file to write results to.\",\n)\nflags.DEFINE_string(\n \"leaderboard_logfile\",\n \"llvm_instcount-results.log\",\n \"The path of a file to stream CompilerGym logs to.\",\n)\nflags.DEFINE_integer(\n \"max_benchmarks\",\n 0,\n \"If > 0, use only the the first --max_benchmarks benchmarks from the \"\n \"dataset, as determined by alphabetical sort. 
If not set, all benchmarks \"\n \"from the dataset are used.\",\n)\nflags.DEFINE_integer(\n \"n\", 10, \"The number of repetitions of the search to run for each benchmark.\"\n)\nflags.DEFINE_string(\"test_dataset\", \"cbench-v1\", \"The dataset to use for the search.\")\nflags.DEFINE_boolean(\"validate\", True, \"Run validation on the results.\")\nflags.DEFINE_boolean(\n \"resume\",\n False,\n \"If true, read the --leaderboard_results file first and run only the \"\n \"evaluations not already in the results file.\",\n)\nFLAGS = flags.FLAGS\n\n# A policy is a function that accepts as input an LLVM environment, and\n# interacts with that environment with the goal of maximising cumulative reward.\nPolicy = Callable[[LlvmEnv], None]\n\n\nclass _EvalPolicyWorker(Thread):\n \"\"\"Worker thread to evaluate a policy.\"\"\"\n\n def __init__(\n self,\n env: LlvmEnv,\n benchmarks: List[str],\n policy: Policy,\n init_states: List[CompilerEnvState],\n ):\n super().__init__()\n self.env = env\n self.benchmarks = benchmarks\n self.policy = policy\n self.states: List[CompilerEnvState] = init_states\n self.alive = True\n\n def run(self):\n # Determine if we need to print a header.\n header = (\n not Path(FLAGS.leaderboard_results).is_file()\n or os.stat(FLAGS.leaderboard_results).st_size == 0\n )\n with CompilerEnvStateWriter(\n open(FLAGS.leaderboard_results, \"a\"), header=header\n ) as writer:\n for benchmark in self.benchmarks:\n self.env.reset(benchmark=benchmark)\n with Timer() as timer:\n self.policy(self.env)\n\n # Sanity check that the policy didn't change the expected\n # experimental setup.\n assert self.env.in_episode, \"Environment is no longer in an episode\"\n assert self.env.benchmark and (\n self.env.benchmark == benchmark\n ), \"Policy changed environment benchmark\"\n assert self.env.reward_space, \"Policy unset environment reward space\"\n assert (\n self.env.reward_space.id == \"IrInstructionCountOz\"\n ), \"Policy changed environment reward space\"\n\n # Override walltime in the generated state.\n state = self.env.state.copy()\n state.walltime = timer.time\n\n writer.write_state(state, flush=True)\n self.states.append(state)\n\n if not self.alive:\n return\n\n\ndef eval_llvm_instcount_policy(policy: Policy) -> None:\n \"\"\"Evaluate an LLVM codesize policy and generate results for a leaderboard\n submission.\n\n To use it, you define your policy as a function that takes an\n :class:`LlvmEnv <compiler_gym.envs.LlvmEnv>` instance as input and modifies\n it in place. For example, for a trivial random policy:\n\n >>> from compiler_gym.envs import LlvmEnv\n >>> def my_policy(env: LlvmEnv) -> None:\n .... # Defines a policy that takes 10 random steps.\n ... for _ in range(10):\n ... _, _, done, _ = env.step(env.action_space.sample())\n ... if done: break\n\n If your policy is stateful, you can use a class and override the\n :code:`__call__()` method:\n\n >>> class MyPolicy:\n ... def __init__(self):\n ... self.my_stateful_vars = {} # or similar\n ... def __call__(self, env: LlvmEnv) -> None:\n ... pass # ... do fun stuff!\n >>> my_policy = MyPolicy()\n\n The role of your policy is to perform a sequence of actions on the supplied\n environment so as to maximize cumulative reward. By default, no observation\n space is set on the environment, so :meth:`env.step()\n <compiler_gym.envs.CompilerEnv.step>` will return :code:`None` for the\n observation. 
You may set a new observation space:\n\n >>> env.observation_space = \"InstCount\" # Set a new space for env.step()\n >>> env.observation[\"InstCount\"] # Calculate a one-off observation.\n\n However, the policy may not change the reward space of the environment, or\n the benchmark.\n\n Once you have defined your policy, call the\n :func:`eval_llvm_instcount_policy()\n <compiler_gym.leaderboard.llvm_instcount.eval_llvm_instcount_policy>` helper\n function, passing it your policy as its only argument:\n\n >>> eval_llvm_instcount_policy(my_policy)\n\n Put together as a complete example, a leaderboard submission script may look\n like:\n\n .. code-block:: python\n\n # my_policy.py\n from compiler_gym.leaderboard.llvm_instcount import eval_llvm_instcount_policy\n from compiler_gym.envs import LlvmEnv\n\n def my_policy(env: LlvmEnv) -> None:\n env.observation_space = \"InstCount\" # we're going to use instcount space\n pass # ... do fun stuff!\n\n if __name__ == \"__main__\":\n eval_llvm_instcount_policy(my_policy)\n\n The :func:`eval_llvm_instcount_policy()\n <compiler_gym.leaderboard.llvm_instcount.eval_llvm_instcount_policy>` helper\n defines a number of commandline flags that can be overriden to control the\n behavior of the evaluation. For example the flag :code:`--n` determines the\n number of times the policy is run on each benchmark (default is 10), and\n :code:`--leaderboard_results` determines the path of the generated results file:\n\n .. code-block::\n\n $ python my_policy.py --n=5 --leaderboard_results=my_policy_results.csv\n\n You can use :code:`--helpfull` flag to list all of the flags that are\n defined:\n\n .. code-block::\n\n $ python my_policy.py --helpfull\n\n Once you are happy with your approach, see the `contributing guide\n <https://github.com/facebookresearch/CompilerGym/blob/development/CONTRIBUTING.md#leaderboard-submissions>`_\n for instructions on preparing a submission to the leaderboard.\n \"\"\"\n\n def main(argv):\n assert len(argv) == 1, f\"Unknown args: {argv[:1]}\"\n assert FLAGS.n > 0, \"n must be > 0\"\n\n with gym.make(\"llvm-ic-v0\") as env:\n\n # Stream verbose CompilerGym logs to file.\n logger = logging.getLogger(\"compiler_gym\")\n logger.setLevel(logging.DEBUG)\n env.logger.setLevel(logging.DEBUG)\n log_handler = logging.FileHandler(FLAGS.leaderboard_logfile)\n logger.addHandler(log_handler)\n logger.propagate = False\n\n print(f\"Writing results to {FLAGS.leaderboard_results}\")\n print(f\"Writing logs to {FLAGS.leaderboard_logfile}\")\n\n # Build the list of benchmarks to evaluate.\n benchmarks = env.datasets[FLAGS.test_dataset].benchmark_uris()\n if FLAGS.max_benchmarks:\n benchmarks = islice(benchmarks, FLAGS.max_benchmarks)\n benchmarks = list(benchmarks)\n\n # Repeat the searches for the requested number of iterations.\n benchmarks *= FLAGS.n\n benchmarks = sorted(benchmarks)\n total_count = len(benchmarks)\n\n # If we are resuming from a previous job, read the states that have\n # already been proccessed and remove those benchmarks from the list\n # of benchmarks to evaluate.\n init_states = []\n if FLAGS.resume and Path(FLAGS.leaderboard_results).is_file():\n with CompilerEnvStateReader(open(FLAGS.leaderboard_results)) as reader:\n for state in reader:\n init_states.append(state)\n if state.benchmark in benchmarks:\n benchmarks.remove(state.benchmark)\n\n # Run the benchmark loop in background so that we can asynchronously\n # log progress.\n worker = _EvalPolicyWorker(env, benchmarks, policy, init_states)\n worker.start()\n timer = 
Timer().reset()\n try:\n print(\n f\"=== Evaluating policy on \"\n f\"{humanize.intcomma(total_count)} \"\n f\"{FLAGS.test_dataset} benchmarks ===\"\n \"\\n\\n\" # Blank lines will be filled below\n )\n while worker.is_alive():\n done_count = len(worker.states)\n remaining_count = total_count - done_count\n time = timer.time\n gmean_reward = geometric_mean([s.reward for s in worker.states])\n mean_walltime = (\n arithmetic_mean([s.walltime for s in worker.states]) or time\n )\n print(\n \"\\r\\033[2A\"\n \"\\033[K\"\n f\"Runtime: {humanize_duration_hms(time)}. \"\n f\"Estimated completion: {humanize_duration_hms(time + mean_walltime * remaining_count)}. \"\n f\"Completed: {humanize.intcomma(done_count)} / {humanize.intcomma(total_count)} \"\n f\"({done_count / total_count:.1%}).\"\n \"\\n\\033[K\"\n f\"Current mean walltime: {mean_walltime:.3f}s / benchmark.\"\n \"\\n\\033[K\"\n f\"Current geomean reward: {gmean_reward:.4f}.\",\n flush=True,\n end=\"\",\n )\n sleep(1)\n except KeyboardInterrupt:\n print(\"\\nkeyboard interrupt\", flush=True)\n worker.alive = False\n # User interrupt, don't validate.\n FLAGS.validate = False\n\n if FLAGS.validate:\n FLAGS.env = \"llvm-ic-v0\"\n validate([\"argv0\", FLAGS.leaderboard_results])\n\n app.run(main)\n", "path": "compiler_gym/leaderboard/llvm_instcount.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates.\n#\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\"\"\"LLVM is a popular open source compiler used widely in industry and research.\nThe :code:`llvm-ic-v0` environment exposes LLVM's optimizing passes as a set of\nactions that can be applied to a particular program. The goal of the agent is to\nselect the sequence of optimizations that lead to the greatest reduction in\ninstruction count in the program being compiled. Reward is the reduction in\ninstruction count achieved scaled to the reduction achieved by LLVM's builtin\n:code:`-Oz` pipeline.\n\n+--------------------+------------------------------------------------------+\n| Property | Value |\n+====================+======================================================+\n| Environment | :class:`LlvmEnv <compiler_gym.envs.LlvmEnv>`. |\n+--------------------+------------------------------------------------------+\n| Observation Space | Any. |\n+--------------------+------------------------------------------------------+\n| Reward Space | Instruction count reduction relative to :code:`-Oz`. |\n+--------------------+------------------------------------------------------+\n| Test Dataset | The 23 cBench benchmarks. 
|\n+--------------------+------------------------------------------------------+\n\nUsers who wish to create a submission for this leaderboard may use\n:func:`eval_llvm_instcount_policy()\n<compiler_gym.leaderboard.llvm_instcount.eval_llvm_instcount_policy>` to\nautomatically evaluate their agent on the test set.\n\"\"\"\nimport logging\nimport os\nfrom itertools import islice\nfrom pathlib import Path\nfrom threading import Thread\nfrom time import sleep\nfrom typing import Callable, List\n\nimport gym\nimport humanize\nfrom absl import app, flags\n\nimport compiler_gym.envs # noqa Register environments.\nfrom compiler_gym.bin.validate import main as validate\nfrom compiler_gym.compiler_env_state import (\n CompilerEnvState,\n CompilerEnvStateReader,\n CompilerEnvStateWriter,\n)\nfrom compiler_gym.envs import LlvmEnv\nfrom compiler_gym.util.statistics import arithmetic_mean, geometric_mean\nfrom compiler_gym.util.timer import Timer, humanize_duration_hms\n\nflags.DEFINE_string(\n \"leaderboard_results\",\n \"llvm_instcount-results.csv\",\n \"The path of the file to write results to.\",\n)\nflags.DEFINE_string(\n \"leaderboard_logfile\",\n \"llvm_instcount-results.log\",\n \"The path of a file to stream CompilerGym logs to.\",\n)\nflags.DEFINE_integer(\n \"max_benchmarks\",\n 0,\n \"If > 0, use only the the first --max_benchmarks benchmarks from the \"\n \"dataset, as determined by alphabetical sort. If not set, all benchmarks \"\n \"from the dataset are used.\",\n)\nflags.DEFINE_integer(\n \"n\", 10, \"The number of repetitions of the search to run for each benchmark.\"\n)\nflags.DEFINE_string(\"test_dataset\", \"cbench-v1\", \"The dataset to use for the search.\")\nflags.DEFINE_boolean(\"validate\", True, \"Run validation on the results.\")\nflags.DEFINE_boolean(\n \"resume\",\n False,\n \"If true, read the --leaderboard_results file first and run only the \"\n \"evaluations not already in the results file.\",\n)\nFLAGS = flags.FLAGS\n\n# A policy is a function that accepts as input an LLVM environment, and\n# interacts with that environment with the goal of maximising cumulative reward.\nPolicy = Callable[[LlvmEnv], None]\n\n\nclass _EvalPolicyWorker(Thread):\n \"\"\"Worker thread to evaluate a policy.\"\"\"\n\n def __init__(\n self,\n env: LlvmEnv,\n benchmarks: List[str],\n policy: Policy,\n init_states: List[CompilerEnvState],\n ):\n super().__init__()\n self.env = env\n self.benchmarks = benchmarks\n self.policy = policy\n self.states: List[CompilerEnvState] = init_states\n self.alive = True\n\n def run(self):\n # Determine if we need to print a header.\n header = (\n not Path(FLAGS.leaderboard_results).is_file()\n or os.stat(FLAGS.leaderboard_results).st_size == 0\n )\n with CompilerEnvStateWriter(\n open(FLAGS.leaderboard_results, \"a\"), header=header\n ) as writer:\n for benchmark in self.benchmarks:\n self.env.reset(benchmark=benchmark)\n with Timer() as timer:\n self.policy(self.env)\n\n # Sanity check that the policy didn't change the expected\n # experimental setup.\n assert self.env.in_episode, \"Environment is no longer in an episode\"\n assert self.env.benchmark and (\n self.env.benchmark == benchmark\n ), \"Policy changed environment benchmark\"\n assert self.env.reward_space, \"Policy unset environment reward space\"\n assert (\n self.env.reward_space.id == \"IrInstructionCountOz\"\n ), \"Policy changed environment reward space\"\n\n # Override walltime in the generated state.\n state = self.env.state.copy()\n state.walltime = timer.time\n\n writer.write_state(state, 
flush=True)\n self.states.append(state)\n\n if not self.alive:\n return\n\n\ndef eval_llvm_instcount_policy(policy: Policy) -> None:\n \"\"\"Evaluate an LLVM codesize policy and generate results for a leaderboard\n submission.\n\n To use it, you define your policy as a function that takes an\n :class:`LlvmEnv <compiler_gym.envs.LlvmEnv>` instance as input and modifies\n it in place. For example, for a trivial random policy:\n\n >>> from compiler_gym.envs import LlvmEnv\n >>> def my_policy(env: LlvmEnv) -> None:\n .... # Defines a policy that takes 10 random steps.\n ... for _ in range(10):\n ... _, _, done, _ = env.step(env.action_space.sample())\n ... if done: break\n\n If your policy is stateful, you can use a class and override the\n :code:`__call__()` method:\n\n >>> class MyPolicy:\n ... def __init__(self):\n ... self.my_stateful_vars = {} # or similar\n ... def __call__(self, env: LlvmEnv) -> None:\n ... pass # ... do fun stuff!\n >>> my_policy = MyPolicy()\n\n The role of your policy is to perform a sequence of actions on the supplied\n environment so as to maximize cumulative reward. By default, no observation\n space is set on the environment, so :meth:`env.step()\n <compiler_gym.envs.CompilerEnv.step>` will return :code:`None` for the\n observation. You may set a new observation space:\n\n >>> env.observation_space = \"InstCount\" # Set a new space for env.step()\n >>> env.observation[\"InstCount\"] # Calculate a one-off observation.\n\n However, the policy may not change the reward space of the environment, or\n the benchmark.\n\n Once you have defined your policy, call the\n :func:`eval_llvm_instcount_policy()\n <compiler_gym.leaderboard.llvm_instcount.eval_llvm_instcount_policy>` helper\n function, passing it your policy as its only argument:\n\n >>> eval_llvm_instcount_policy(my_policy)\n\n Put together as a complete example, a leaderboard submission script may look\n like:\n\n .. code-block:: python\n\n # my_policy.py\n from compiler_gym.leaderboard.llvm_instcount import eval_llvm_instcount_policy\n from compiler_gym.envs import LlvmEnv\n\n def my_policy(env: LlvmEnv) -> None:\n env.observation_space = \"InstCount\" # we're going to use instcount space\n pass # ... do fun stuff!\n\n if __name__ == \"__main__\":\n eval_llvm_instcount_policy(my_policy)\n\n The :func:`eval_llvm_instcount_policy()\n <compiler_gym.leaderboard.llvm_instcount.eval_llvm_instcount_policy>` helper\n defines a number of commandline flags that can be overriden to control the\n behavior of the evaluation. For example the flag :code:`--n` determines the\n number of times the policy is run on each benchmark (default is 10), and\n :code:`--leaderboard_results` determines the path of the generated results file:\n\n .. code-block::\n\n $ python my_policy.py --n=5 --leaderboard_results=my_policy_results.csv\n\n You can use :code:`--helpfull` flag to list all of the flags that are\n defined:\n\n .. 
code-block::\n\n $ python my_policy.py --helpfull\n\n Once you are happy with your approach, see the `contributing guide\n <https://github.com/facebookresearch/CompilerGym/blob/development/CONTRIBUTING.md#leaderboard-submissions>`_\n for instructions on preparing a submission to the leaderboard.\n \"\"\"\n\n def main(argv):\n assert len(argv) == 1, f\"Unknown args: {argv[:1]}\"\n assert FLAGS.n > 0, \"n must be > 0\"\n\n with gym.make(\"llvm-ic-v0\") as env:\n\n # Stream verbose CompilerGym logs to file.\n logger = logging.getLogger(\"compiler_gym\")\n logger.setLevel(logging.DEBUG)\n env.logger.setLevel(logging.DEBUG)\n log_handler = logging.FileHandler(FLAGS.leaderboard_logfile)\n logger.addHandler(log_handler)\n logger.propagate = False\n\n print(f\"Writing results to {FLAGS.leaderboard_results}\")\n print(f\"Writing logs to {FLAGS.leaderboard_logfile}\")\n\n # Build the list of benchmarks to evaluate.\n benchmarks = env.datasets[FLAGS.test_dataset].benchmark_uris()\n if FLAGS.max_benchmarks:\n benchmarks = islice(benchmarks, FLAGS.max_benchmarks)\n benchmarks = list(benchmarks)\n\n # Repeat the searches for the requested number of iterations.\n benchmarks *= FLAGS.n\n total_count = len(benchmarks)\n\n # If we are resuming from a previous job, read the states that have\n # already been proccessed and remove those benchmarks from the list\n # of benchmarks to evaluate.\n init_states = []\n if FLAGS.resume and Path(FLAGS.leaderboard_results).is_file():\n with CompilerEnvStateReader(open(FLAGS.leaderboard_results)) as reader:\n for state in reader:\n init_states.append(state)\n if state.benchmark in benchmarks:\n benchmarks.remove(state.benchmark)\n\n # Run the benchmark loop in background so that we can asynchronously\n # log progress.\n worker = _EvalPolicyWorker(env, benchmarks, policy, init_states)\n worker.start()\n timer = Timer().reset()\n try:\n print(\n f\"=== Evaluating policy on \"\n f\"{humanize.intcomma(total_count)} \"\n f\"{FLAGS.test_dataset} benchmarks ===\"\n \"\\n\\n\" # Blank lines will be filled below\n )\n while worker.is_alive():\n done_count = len(worker.states)\n remaining_count = total_count - done_count\n time = timer.time\n gmean_reward = geometric_mean([s.reward for s in worker.states])\n mean_walltime = (\n arithmetic_mean([s.walltime for s in worker.states]) or time\n )\n print(\n \"\\r\\033[2A\"\n \"\\033[K\"\n f\"Runtime: {humanize_duration_hms(time)}. \"\n f\"Estimated completion: {humanize_duration_hms(time + mean_walltime * remaining_count)}. \"\n f\"Completed: {humanize.intcomma(done_count)} / {humanize.intcomma(total_count)} \"\n f\"({done_count / total_count:.1%}).\"\n \"\\n\\033[K\"\n f\"Current mean walltime: {mean_walltime:.3f}s / benchmark.\"\n \"\\n\\033[K\"\n f\"Current geomean reward: {gmean_reward:.4f}.\",\n flush=True,\n end=\"\",\n )\n sleep(1)\n except KeyboardInterrupt:\n print(\"\\nkeyboard interrupt\", flush=True)\n worker.alive = False\n # User interrupt, don't validate.\n FLAGS.validate = False\n\n if FLAGS.validate:\n FLAGS.env = \"llvm-ic-v0\"\n validate([\"argv0\", FLAGS.leaderboard_results])\n\n app.run(main)\n", "path": "compiler_gym/leaderboard/llvm_instcount.py"}]}
| 3,832 | 126 |
gh_patches_debug_40866
|
rasdani/github-patches
|
git_diff
|
hpcaitech__ColossalAI-5273
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[tensor] fix some unittests
[tensor] fix some unittests
[tensor] fix some unittests
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `colossalai/kernel/triton/kvcache_copy.py`
Content:
```
1 import torch
2 import triton
3 import triton.language as tl
4
5
6 # Triton 2.1.0
7 @triton.jit
8 def _copy_to_kvcache_seqlen1_kernel(
9 KV, # K or V
10 KVCache, # KCache or VCache
11 BLOCK_TABLES,
12 context_lengths,
13 stride_kt,
14 stride_kh,
15 stride_kd,
16 stride_cacheb,
17 stride_cacheh,
18 stride_cached,
19 stride_cachebs,
20 stride_bts,
21 stride_btb,
22 block_size,
23 HEAD_DIM: tl.constexpr,
24 ):
25 cur_seq_idx = tl.program_id(0)
26 cur_kv_head_idx = tl.program_id(1)
27
28 cur_kv_seq_len = tl.load(context_lengths + cur_seq_idx)
29 last_bt_block_idx = cur_kv_seq_len // block_size
30 block_table_ptr = BLOCK_TABLES + cur_seq_idx * stride_bts
31 block_id = tl.load(block_table_ptr + last_bt_block_idx * stride_btb)
32 offsets_in_last_block = (cur_kv_seq_len % block_size) * stride_cachebs
33 offsets_dmodel = tl.arange(0, HEAD_DIM)
34 offsets_kv = cur_seq_idx * stride_kt + cur_kv_head_idx * stride_kh + offsets_dmodel * stride_kd
35 kv = tl.load(KV + offsets_kv)
36 offsets_kvcache = (
37 block_id * stride_cacheb
38 + cur_kv_head_idx * stride_cacheh
39 + offsets_dmodel * stride_cached
40 + offsets_in_last_block
41 )
42 tl.store(KVCache + offsets_kvcache, kv)
43 return
44
45
46 # Used with blocked kv cache.
47 # Copy k or v to block k/v cache during decoding stage
48 def copy_kv_to_blocked_cache(
49 k: torch.Tensor, # [bsz, 1, num_kv_heads, head_dim], k or v during decoding stage
50 k_cache: torch.Tensor, # [num_blocks, num_kv_heads, head_dim, block_size], blocked k or v cache (for now, the shapes of them are the same)
51 context_lengths: torch.Tensor, # [bsz], past kv seq len (not incorporating the current kv of length 1)
52 block_tables: torch.Tensor, # [bsz, max_blocks_per_sequence]
53 ):
54 assert k.dim() == 4, "Unsupported shape of k (supposed to be used for decoding stage)"
55 assert k.size(1) == 1, "Unsupported kv seq len (supposed to be used for decoding stage)"
56 assert k.size(-1) == k_cache.size(-2), "Incompatible head dim"
57 assert k.dtype == k_cache.dtype, "Expected consistent dtype for tensor and cache."
58 bsz, _, num_kv_heads, head_dim = k.shape
59 assert context_lengths.shape[0] == block_tables.shape[0] == bsz, (
60 f"Got incompatible batch size (number of seqs):\n"
61 f" Conext lengths bsz {context_lengths.shape[0]}, Block tables bsz {block_tables.shape[0]}, "
62 f"batch size {bsz}"
63 )
64
65 # Modify if the shape of kv cahce is changed.
66 block_size = k_cache.size(-1)
67 # [bsz, 1, num_kv_heads, head_dim] -> [bsz, num_kv_heads, head_dim]
68 k = k.squeeze(dim=1)
69
70 num_warps = 8 if head_dim > 128 else 4
71
72 grid = (bsz, num_kv_heads)
73 _copy_to_kvcache_seqlen1_kernel[grid](
74 k,
75 k_cache,
76 block_tables,
77 context_lengths,
78 k.stride(0),
79 k.stride(1),
80 k.stride(2),
81 k_cache.stride(0),
82 k_cache.stride(1),
83 k_cache.stride(2),
84 k_cache.stride(3),
85 block_tables.stride(0),
86 block_tables.stride(1),
87 block_size,
88 HEAD_DIM=head_dim,
89 num_warps=num_warps,
90 )
91
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/colossalai/kernel/triton/kvcache_copy.py b/colossalai/kernel/triton/kvcache_copy.py
--- a/colossalai/kernel/triton/kvcache_copy.py
+++ b/colossalai/kernel/triton/kvcache_copy.py
@@ -25,11 +25,11 @@
cur_seq_idx = tl.program_id(0)
cur_kv_head_idx = tl.program_id(1)
- cur_kv_seq_len = tl.load(context_lengths + cur_seq_idx)
- last_bt_block_idx = cur_kv_seq_len // block_size
+ past_kv_seq_len = tl.load(context_lengths + cur_seq_idx) - 1
+ last_bt_block_idx = past_kv_seq_len // block_size
block_table_ptr = BLOCK_TABLES + cur_seq_idx * stride_bts
block_id = tl.load(block_table_ptr + last_bt_block_idx * stride_btb)
- offsets_in_last_block = (cur_kv_seq_len % block_size) * stride_cachebs
+ offsets_in_last_block = (past_kv_seq_len % block_size) * stride_cachebs
offsets_dmodel = tl.arange(0, HEAD_DIM)
offsets_kv = cur_seq_idx * stride_kt + cur_kv_head_idx * stride_kh + offsets_dmodel * stride_kd
kv = tl.load(KV + offsets_kv)
@@ -43,23 +43,30 @@
return
-# Used with blocked kv cache.
-# Copy k or v to block k/v cache during decoding stage
def copy_kv_to_blocked_cache(
- k: torch.Tensor, # [bsz, 1, num_kv_heads, head_dim], k or v during decoding stage
- k_cache: torch.Tensor, # [num_blocks, num_kv_heads, head_dim, block_size], blocked k or v cache (for now, the shapes of them are the same)
- context_lengths: torch.Tensor, # [bsz], past kv seq len (not incorporating the current kv of length 1)
- block_tables: torch.Tensor, # [bsz, max_blocks_per_sequence]
+ k: torch.Tensor,
+ k_cache: torch.Tensor,
+ kv_lengths: torch.Tensor,
+ block_tables: torch.Tensor,
):
+ """
+ Copy keys or values to the blocked key/value cache during decoding stage.
+
+ Parameters:
+ - k (torch.Tensor): [bsz, 1, num_kv_heads, head_dim] - Keys or values during decoding with seq len 1.
+ - k_cache (torch.Tensor): [num_blocks, num_kv_heads, head_dim, block_size] - Blocked key or value cache.
+ - kv_lengths (torch.Tensor): [bsz] - Past key/value sequence lengths plus current sequence length for each sequence.
+ - block_tables (torch.Tensor): [bsz, max_blocks_per_sequence] - Block tables for each sequence.
+ """
assert k.dim() == 4, "Unsupported shape of k (supposed to be used for decoding stage)"
assert k.size(1) == 1, "Unsupported kv seq len (supposed to be used for decoding stage)"
assert k.size(-1) == k_cache.size(-2), "Incompatible head dim"
assert k.dtype == k_cache.dtype, "Expected consistent dtype for tensor and cache."
bsz, _, num_kv_heads, head_dim = k.shape
- assert context_lengths.shape[0] == block_tables.shape[0] == bsz, (
+ assert kv_lengths.shape[0] == block_tables.shape[0] == bsz, (
f"Got incompatible batch size (number of seqs):\n"
- f" Conext lengths bsz {context_lengths.shape[0]}, Block tables bsz {block_tables.shape[0]}, "
- f"batch size {bsz}"
+ f" Past kv sequence lengths bsz {kv_lengths.shape[0]}; "
+ f" block tables bsz {block_tables.shape[0]}, input k batch size {bsz}"
)
# Modify if the shape of kv cahce is changed.
@@ -74,7 +81,7 @@
k,
k_cache,
block_tables,
- context_lengths,
+ kv_lengths,
k.stride(0),
k.stride(1),
k.stride(2),
|
{"golden_diff": "diff --git a/colossalai/kernel/triton/kvcache_copy.py b/colossalai/kernel/triton/kvcache_copy.py\n--- a/colossalai/kernel/triton/kvcache_copy.py\n+++ b/colossalai/kernel/triton/kvcache_copy.py\n@@ -25,11 +25,11 @@\n cur_seq_idx = tl.program_id(0)\n cur_kv_head_idx = tl.program_id(1)\n \n- cur_kv_seq_len = tl.load(context_lengths + cur_seq_idx)\n- last_bt_block_idx = cur_kv_seq_len // block_size\n+ past_kv_seq_len = tl.load(context_lengths + cur_seq_idx) - 1\n+ last_bt_block_idx = past_kv_seq_len // block_size\n block_table_ptr = BLOCK_TABLES + cur_seq_idx * stride_bts\n block_id = tl.load(block_table_ptr + last_bt_block_idx * stride_btb)\n- offsets_in_last_block = (cur_kv_seq_len % block_size) * stride_cachebs\n+ offsets_in_last_block = (past_kv_seq_len % block_size) * stride_cachebs\n offsets_dmodel = tl.arange(0, HEAD_DIM)\n offsets_kv = cur_seq_idx * stride_kt + cur_kv_head_idx * stride_kh + offsets_dmodel * stride_kd\n kv = tl.load(KV + offsets_kv)\n@@ -43,23 +43,30 @@\n return\n \n \n-# Used with blocked kv cache.\n-# Copy k or v to block k/v cache during decoding stage\n def copy_kv_to_blocked_cache(\n- k: torch.Tensor, # [bsz, 1, num_kv_heads, head_dim], k or v during decoding stage\n- k_cache: torch.Tensor, # [num_blocks, num_kv_heads, head_dim, block_size], blocked k or v cache (for now, the shapes of them are the same)\n- context_lengths: torch.Tensor, # [bsz], past kv seq len (not incorporating the current kv of length 1)\n- block_tables: torch.Tensor, # [bsz, max_blocks_per_sequence]\n+ k: torch.Tensor,\n+ k_cache: torch.Tensor,\n+ kv_lengths: torch.Tensor,\n+ block_tables: torch.Tensor,\n ):\n+ \"\"\"\n+ Copy keys or values to the blocked key/value cache during decoding stage.\n+\n+ Parameters:\n+ - k (torch.Tensor): [bsz, 1, num_kv_heads, head_dim] - Keys or values during decoding with seq len 1.\n+ - k_cache (torch.Tensor): [num_blocks, num_kv_heads, head_dim, block_size] - Blocked key or value cache.\n+ - kv_lengths (torch.Tensor): [bsz] - Past key/value sequence lengths plus current sequence length for each sequence.\n+ - block_tables (torch.Tensor): [bsz, max_blocks_per_sequence] - Block tables for each sequence.\n+ \"\"\"\n assert k.dim() == 4, \"Unsupported shape of k (supposed to be used for decoding stage)\"\n assert k.size(1) == 1, \"Unsupported kv seq len (supposed to be used for decoding stage)\"\n assert k.size(-1) == k_cache.size(-2), \"Incompatible head dim\"\n assert k.dtype == k_cache.dtype, \"Expected consistent dtype for tensor and cache.\"\n bsz, _, num_kv_heads, head_dim = k.shape\n- assert context_lengths.shape[0] == block_tables.shape[0] == bsz, (\n+ assert kv_lengths.shape[0] == block_tables.shape[0] == bsz, (\n f\"Got incompatible batch size (number of seqs):\\n\"\n- f\" Conext lengths bsz {context_lengths.shape[0]}, Block tables bsz {block_tables.shape[0]}, \"\n- f\"batch size {bsz}\"\n+ f\" Past kv sequence lengths bsz {kv_lengths.shape[0]}; \"\n+ f\" block tables bsz {block_tables.shape[0]}, input k batch size {bsz}\"\n )\n \n # Modify if the shape of kv cahce is changed.\n@@ -74,7 +81,7 @@\n k,\n k_cache,\n block_tables,\n- context_lengths,\n+ kv_lengths,\n k.stride(0),\n k.stride(1),\n k.stride(2),\n", "issue": "[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n", "before_files": [{"content": "import torch\nimport triton\nimport triton.language as tl\n\n\n# Triton 2.1.0\[email protected]\ndef _copy_to_kvcache_seqlen1_kernel(\n KV, # K or V\n KVCache, # KCache or VCache\n BLOCK_TABLES,\n 
context_lengths,\n stride_kt,\n stride_kh,\n stride_kd,\n stride_cacheb,\n stride_cacheh,\n stride_cached,\n stride_cachebs,\n stride_bts,\n stride_btb,\n block_size,\n HEAD_DIM: tl.constexpr,\n):\n cur_seq_idx = tl.program_id(0)\n cur_kv_head_idx = tl.program_id(1)\n\n cur_kv_seq_len = tl.load(context_lengths + cur_seq_idx)\n last_bt_block_idx = cur_kv_seq_len // block_size\n block_table_ptr = BLOCK_TABLES + cur_seq_idx * stride_bts\n block_id = tl.load(block_table_ptr + last_bt_block_idx * stride_btb)\n offsets_in_last_block = (cur_kv_seq_len % block_size) * stride_cachebs\n offsets_dmodel = tl.arange(0, HEAD_DIM)\n offsets_kv = cur_seq_idx * stride_kt + cur_kv_head_idx * stride_kh + offsets_dmodel * stride_kd\n kv = tl.load(KV + offsets_kv)\n offsets_kvcache = (\n block_id * stride_cacheb\n + cur_kv_head_idx * stride_cacheh\n + offsets_dmodel * stride_cached\n + offsets_in_last_block\n )\n tl.store(KVCache + offsets_kvcache, kv)\n return\n\n\n# Used with blocked kv cache.\n# Copy k or v to block k/v cache during decoding stage\ndef copy_kv_to_blocked_cache(\n k: torch.Tensor, # [bsz, 1, num_kv_heads, head_dim], k or v during decoding stage\n k_cache: torch.Tensor, # [num_blocks, num_kv_heads, head_dim, block_size], blocked k or v cache (for now, the shapes of them are the same)\n context_lengths: torch.Tensor, # [bsz], past kv seq len (not incorporating the current kv of length 1)\n block_tables: torch.Tensor, # [bsz, max_blocks_per_sequence]\n):\n assert k.dim() == 4, \"Unsupported shape of k (supposed to be used for decoding stage)\"\n assert k.size(1) == 1, \"Unsupported kv seq len (supposed to be used for decoding stage)\"\n assert k.size(-1) == k_cache.size(-2), \"Incompatible head dim\"\n assert k.dtype == k_cache.dtype, \"Expected consistent dtype for tensor and cache.\"\n bsz, _, num_kv_heads, head_dim = k.shape\n assert context_lengths.shape[0] == block_tables.shape[0] == bsz, (\n f\"Got incompatible batch size (number of seqs):\\n\"\n f\" Conext lengths bsz {context_lengths.shape[0]}, Block tables bsz {block_tables.shape[0]}, \"\n f\"batch size {bsz}\"\n )\n\n # Modify if the shape of kv cahce is changed.\n block_size = k_cache.size(-1)\n # [bsz, 1, num_kv_heads, head_dim] -> [bsz, num_kv_heads, head_dim]\n k = k.squeeze(dim=1)\n\n num_warps = 8 if head_dim > 128 else 4\n\n grid = (bsz, num_kv_heads)\n _copy_to_kvcache_seqlen1_kernel[grid](\n k,\n k_cache,\n block_tables,\n context_lengths,\n k.stride(0),\n k.stride(1),\n k.stride(2),\n k_cache.stride(0),\n k_cache.stride(1),\n k_cache.stride(2),\n k_cache.stride(3),\n block_tables.stride(0),\n block_tables.stride(1),\n block_size,\n HEAD_DIM=head_dim,\n num_warps=num_warps,\n )\n", "path": "colossalai/kernel/triton/kvcache_copy.py"}], "after_files": [{"content": "import torch\nimport triton\nimport triton.language as tl\n\n\n# Triton 2.1.0\[email protected]\ndef _copy_to_kvcache_seqlen1_kernel(\n KV, # K or V\n KVCache, # KCache or VCache\n BLOCK_TABLES,\n context_lengths,\n stride_kt,\n stride_kh,\n stride_kd,\n stride_cacheb,\n stride_cacheh,\n stride_cached,\n stride_cachebs,\n stride_bts,\n stride_btb,\n block_size,\n HEAD_DIM: tl.constexpr,\n):\n cur_seq_idx = tl.program_id(0)\n cur_kv_head_idx = tl.program_id(1)\n\n past_kv_seq_len = tl.load(context_lengths + cur_seq_idx) - 1\n last_bt_block_idx = past_kv_seq_len // block_size\n block_table_ptr = BLOCK_TABLES + cur_seq_idx * stride_bts\n block_id = tl.load(block_table_ptr + last_bt_block_idx * stride_btb)\n offsets_in_last_block = (past_kv_seq_len % block_size) * 
stride_cachebs\n offsets_dmodel = tl.arange(0, HEAD_DIM)\n offsets_kv = cur_seq_idx * stride_kt + cur_kv_head_idx * stride_kh + offsets_dmodel * stride_kd\n kv = tl.load(KV + offsets_kv)\n offsets_kvcache = (\n block_id * stride_cacheb\n + cur_kv_head_idx * stride_cacheh\n + offsets_dmodel * stride_cached\n + offsets_in_last_block\n )\n tl.store(KVCache + offsets_kvcache, kv)\n return\n\n\ndef copy_kv_to_blocked_cache(\n k: torch.Tensor,\n k_cache: torch.Tensor,\n kv_lengths: torch.Tensor,\n block_tables: torch.Tensor,\n):\n \"\"\"\n Copy keys or values to the blocked key/value cache during decoding stage.\n\n Parameters:\n - k (torch.Tensor): [bsz, 1, num_kv_heads, head_dim] - Keys or values during decoding with seq len 1.\n - k_cache (torch.Tensor): [num_blocks, num_kv_heads, head_dim, block_size] - Blocked key or value cache.\n - kv_lengths (torch.Tensor): [bsz] - Past key/value sequence lengths plus current sequence length for each sequence.\n - block_tables (torch.Tensor): [bsz, max_blocks_per_sequence] - Block tables for each sequence.\n \"\"\"\n assert k.dim() == 4, \"Unsupported shape of k (supposed to be used for decoding stage)\"\n assert k.size(1) == 1, \"Unsupported kv seq len (supposed to be used for decoding stage)\"\n assert k.size(-1) == k_cache.size(-2), \"Incompatible head dim\"\n assert k.dtype == k_cache.dtype, \"Expected consistent dtype for tensor and cache.\"\n bsz, _, num_kv_heads, head_dim = k.shape\n assert kv_lengths.shape[0] == block_tables.shape[0] == bsz, (\n f\"Got incompatible batch size (number of seqs):\\n\"\n f\" Past kv sequence lengths bsz {kv_lengths.shape[0]}; \"\n f\" block tables bsz {block_tables.shape[0]}, input k batch size {bsz}\"\n )\n\n # Modify if the shape of kv cahce is changed.\n block_size = k_cache.size(-1)\n # [bsz, 1, num_kv_heads, head_dim] -> [bsz, num_kv_heads, head_dim]\n k = k.squeeze(dim=1)\n\n num_warps = 8 if head_dim > 128 else 4\n\n grid = (bsz, num_kv_heads)\n _copy_to_kvcache_seqlen1_kernel[grid](\n k,\n k_cache,\n block_tables,\n kv_lengths,\n k.stride(0),\n k.stride(1),\n k.stride(2),\n k_cache.stride(0),\n k_cache.stride(1),\n k_cache.stride(2),\n k_cache.stride(3),\n block_tables.stride(0),\n block_tables.stride(1),\n block_size,\n HEAD_DIM=head_dim,\n num_warps=num_warps,\n )\n", "path": "colossalai/kernel/triton/kvcache_copy.py"}]}
| 1,345 | 948 |
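The indexing fix in the diff above — loading the sequence length and subtracting one before computing the block index and in-block offset — can be illustrated with a small, framework-free sketch. The function name `locate_cache_slot` and the list-based block table below are illustrative assumptions, not part of the Triton kernel; the sketch only mirrors the arithmetic, assuming (as the patched docstring states) that `kv_len` already includes the token being written.

```python
# Minimal pure-Python model of the blocked-KV-cache addressing in the patch.
# The slot for the token being written sits at index kv_len - 1.
def locate_cache_slot(block_table, kv_len, block_size):
    past_kv_seq_len = kv_len - 1                 # index of the token being written
    last_block_idx = past_kv_seq_len // block_size
    block_id = block_table[last_block_idx]       # physical block holding that slot
    offset_in_block = past_kv_seq_len % block_size
    return block_id, offset_in_block

# Example: block_size=4, four tokens already cached, appending the fifth.
# Using kv_len directly (5 // 4 = 1, 5 % 4 = 1) would skip the first slot of
# block 7; subtracting one places the fifth token at block 7, offset 0.
print(locate_cache_slot(block_table=[3, 7], kv_len=5, block_size=4))  # (7, 0)
```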
gh_patches_debug_20577
|
rasdani/github-patches
|
git_diff
|
great-expectations__great_expectations-3194
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use cleaner solution for non-truncating division in python 2
Prefer `from __future__ import division` to `1.*x/y`
--- END ISSUE ---
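A short, standalone illustration of the two styles the issue contrasts (plain Python, no project code involved; the future import must sit at the top of the module):

```python
# Preferred: make "/" mean true division everywhere in the module.
# This is a no-op under Python 3, where true division is already the default.
from __future__ import division

print(3 / 2)       # 1.5 under both Python 2 and Python 3
print(1. * 3 / 2)  # 1.5 -- the older workaround the issue asks to avoid
print(3 // 2)      # 1   -- explicit floor division when truncation is intended
```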
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `great_expectations/datasource/data_connector/configured_asset_s3_data_connector.py`
Content:
```
1 import logging
2 import os
3 from typing import List, Optional
4
5 try:
6 import boto3
7 except ImportError:
8 boto3 = None
9
10 from great_expectations.core.batch import BatchDefinition
11 from great_expectations.core.batch_spec import PathBatchSpec, S3BatchSpec
12 from great_expectations.datasource.data_connector import (
13 ConfiguredAssetFilePathDataConnector,
14 )
15 from great_expectations.datasource.data_connector.asset import Asset
16 from great_expectations.datasource.data_connector.util import list_s3_keys
17 from great_expectations.execution_engine import ExecutionEngine
18
19 logger = logging.getLogger(__name__)
20
21
22 class ConfiguredAssetS3DataConnector(ConfiguredAssetFilePathDataConnector):
23 """
24 Extension of ConfiguredAssetFilePathDataConnector used to connect to S3
25
26 DataConnectors produce identifying information, called "batch_spec" that ExecutionEngines
27 can use to get individual batches of data. They add flexibility in how to obtain data
28 such as with time-based partitioning, downsampling, or other techniques appropriate
29 for the Datasource.
30
31 The ConfiguredAssetS3DataConnector is one of two classes (InferredAssetS3DataConnector being the
32 other one) designed for connecting to data on S3.
33
34 A ConfiguredAssetS3DataConnector requires an explicit listing of each DataAsset you want to connect to.
35 This allows more fine-tuning, but also requires more setup.
36 """
37
38 def __init__(
39 self,
40 name: str,
41 datasource_name: str,
42 bucket: str,
43 assets: dict,
44 execution_engine: Optional[ExecutionEngine] = None,
45 default_regex: Optional[dict] = None,
46 sorters: Optional[list] = None,
47 prefix: Optional[str] = "",
48 delimiter: Optional[str] = "/",
49 max_keys: Optional[int] = 1000,
50 boto3_options: Optional[dict] = None,
51 batch_spec_passthrough: Optional[dict] = None,
52 ):
53 """
54 ConfiguredAssetDataConnector for connecting to S3.
55
56 Args:
57 name (str): required name for DataConnector
58 datasource_name (str): required name for datasource
59 bucket (str): bucket for S3
60 assets (dict): dict of asset configuration (required for ConfiguredAssetDataConnector)
61 execution_engine (ExecutionEngine): optional reference to ExecutionEngine
62 default_regex (dict): optional regex configuration for filtering data_references
63 sorters (list): optional list of sorters for sorting data_references
64 prefix (str): S3 prefix
65 delimiter (str): S3 delimiter
66 max_keys (int): S3 max_keys (default is 1000)
67 boto3_options (dict): optional boto3 options
68 batch_spec_passthrough (dict): dictionary with keys that will be added directly to batch_spec
69 """
70 logger.debug(f'Constructing ConfiguredAssetS3DataConnector "{name}".')
71
72 super().__init__(
73 name=name,
74 datasource_name=datasource_name,
75 execution_engine=execution_engine,
76 assets=assets,
77 default_regex=default_regex,
78 sorters=sorters,
79 batch_spec_passthrough=batch_spec_passthrough,
80 )
81 self._bucket = bucket
82 self._prefix = os.path.join(prefix, "")
83 self._delimiter = delimiter
84 self._max_keys = max_keys
85
86 if boto3_options is None:
87 boto3_options = {}
88
89 try:
90 self._s3 = boto3.client("s3", **boto3_options)
91 except (TypeError, AttributeError):
92 raise ImportError(
93 "Unable to load boto3 (it is required for ConfiguredAssetS3DataConnector)."
94 )
95
96 def build_batch_spec(self, batch_definition: BatchDefinition) -> S3BatchSpec:
97 """
98 Build BatchSpec from batch_definition by calling DataConnector's build_batch_spec function.
99
100 Args:
101 batch_definition (BatchDefinition): to be used to build batch_spec
102
103 Returns:
104 BatchSpec built from batch_definition
105 """
106 batch_spec: PathBatchSpec = super().build_batch_spec(
107 batch_definition=batch_definition
108 )
109 return S3BatchSpec(batch_spec)
110
111 def _get_data_reference_list_for_asset(self, asset: Optional[Asset]) -> List[str]:
112 query_options: dict = {
113 "Bucket": self._bucket,
114 "Prefix": self._prefix,
115 "Delimiter": self._delimiter,
116 "MaxKeys": self._max_keys,
117 }
118 if asset is not None:
119 if asset.bucket:
120 query_options["Bucket"] = asset.bucket
121 if asset.prefix:
122 query_options["Prefix"] = asset.prefix
123 if asset.delimiter:
124 query_options["Delimiter"] = asset.delimiter
125 if asset.max_keys:
126 query_options["MaxKeys"] = asset.max_keys
127
128 path_list: List[str] = [
129 key
130 for key in list_s3_keys(
131 s3=self._s3,
132 query_options=query_options,
133 iterator_dict={},
134 recursive=False,
135 )
136 ]
137 return path_list
138
139 def _get_full_file_path(
140 self,
141 path: str,
142 data_asset_name: Optional[str] = None,
143 ) -> str:
144 # data_assert_name isn't used in this method.
145 # It's only kept for compatibility with parent methods.
146 return f"s3a://{os.path.join(self._bucket, path)}"
147
```
Path: `great_expectations/datasource/data_connector/inferred_asset_s3_data_connector.py`
Content:
```
1 import logging
2 import os
3 from typing import List, Optional
4
5 from great_expectations.core.batch import BatchDefinition
6 from great_expectations.core.batch_spec import PathBatchSpec, S3BatchSpec
7 from great_expectations.exceptions.exceptions import ParserError
8
9 try:
10 import boto3
11 except ImportError:
12 boto3 = None
13
14 from great_expectations.datasource.data_connector import (
15 InferredAssetFilePathDataConnector,
16 )
17 from great_expectations.datasource.data_connector.util import list_s3_keys
18 from great_expectations.execution_engine import ExecutionEngine
19
20 logger = logging.getLogger(__name__)
21
22 INVALID_S3_CHARS = ["*"]
23
24
25 class InferredAssetS3DataConnector(InferredAssetFilePathDataConnector):
26 """
27 Extension of InferredAssetFilePathDataConnector used to connect to S3
28
29 The InferredAssetS3DataConnector is one of two classes (ConfiguredAssetS3DataConnector being the
30 other one) designed for connecting to filesystem-like data, more specifically files on S3. It connects to assets
31 inferred from bucket, prefix, and file name by default_regex.
32
33 InferredAssetS3DataConnector that operates on S3 buckets and determines
34 the data_asset_name implicitly (e.g., through the combination of the regular expressions pattern and group names)
35
36 """
37
38 def __init__(
39 self,
40 name: str,
41 datasource_name: str,
42 bucket: str,
43 execution_engine: Optional[ExecutionEngine] = None,
44 default_regex: Optional[dict] = None,
45 sorters: Optional[list] = None,
46 prefix: Optional[str] = "",
47 delimiter: Optional[str] = "/",
48 max_keys: Optional[int] = 1000,
49 boto3_options: Optional[dict] = None,
50 batch_spec_passthrough: Optional[dict] = None,
51 ):
52 """
53 InferredAssetS3DataConnector for connecting to S3.
54
55 Args:
56 name (str): required name for data_connector
57 datasource_name (str): required name for datasource
58 bucket (str): bucket for S3
59 execution_engine (ExecutionEngine): optional reference to ExecutionEngine
60 default_regex (dict): optional regex configuration for filtering data_references
61 sorters (list): optional list of sorters for sorting data_references
62 prefix (str): S3 prefix
63 delimiter (str): S3 delimiter
64 max_keys (int): S3 max_keys (default is 1000)
65 boto3_options (dict): optional boto3 options
66 batch_spec_passthrough (dict): dictionary with keys that will be added directly to batch_spec
67 """
68 logger.debug(f'Constructing InferredAssetS3DataConnector "{name}".')
69
70 super().__init__(
71 name=name,
72 datasource_name=datasource_name,
73 execution_engine=execution_engine,
74 default_regex=default_regex,
75 sorters=sorters,
76 batch_spec_passthrough=batch_spec_passthrough,
77 )
78
79 self._bucket = bucket
80 self._prefix = os.path.join(prefix, "")
81 self._delimiter = delimiter
82 self._max_keys = max_keys
83
84 if boto3_options is None:
85 boto3_options = {}
86
87 try:
88 self._s3 = boto3.client("s3", **boto3_options)
89 except (TypeError, AttributeError):
90 raise ImportError(
91 "Unable to load boto3 (it is required for InferredAssetS3DataConnector)."
92 )
93
94 def build_batch_spec(self, batch_definition: BatchDefinition) -> S3BatchSpec:
95 """
96 Build BatchSpec from batch_definition by calling DataConnector's build_batch_spec function.
97
98 Args:
99 batch_definition (BatchDefinition): to be used to build batch_spec
100
101 Returns:
102 BatchSpec built from batch_definition
103 """
104 batch_spec: PathBatchSpec = super().build_batch_spec(
105 batch_definition=batch_definition
106 )
107 return S3BatchSpec(batch_spec)
108
109 def _get_data_reference_list(
110 self, data_asset_name: Optional[str] = None
111 ) -> List[str]:
112 """
113 List objects in the underlying data store to create a list of data_references.
114
115 This method is used to refresh the cache.
116 """
117 query_options: dict = {
118 "Bucket": self._bucket,
119 "Prefix": self._prefix,
120 "Delimiter": self._delimiter,
121 "MaxKeys": self._max_keys,
122 }
123
124 path_list: List[str] = [
125 key
126 for key in list_s3_keys(
127 s3=self._s3,
128 query_options=query_options,
129 iterator_dict={},
130 recursive=True,
131 )
132 ]
133 return path_list
134
135 def _get_full_file_path(
136 self,
137 path: str,
138 data_asset_name: Optional[str] = None,
139 ) -> str:
140 # data_assert_name isn't used in this method.
141 # It's only kept for compatibility with parent methods.
142 _check_valid_s3_path(path)
143 return f"s3a://{os.path.join(self._bucket, path)}"
144
145
146 def _check_valid_s3_path(
147 path: str,
148 ) -> None:
149 """Performs a basic check for validity of the S3 path"""
150 bad_chars = [c for c in INVALID_S3_CHARS if c in path]
151 if len(bad_chars) > 0:
152 msg = (
153 f"The parsed S3 path={path} contains the invalid characters {bad_chars}."
154 "Please make sure your regex is correct and characters are escaped."
155 )
156 if "*" in bad_chars:
157 msg += "Note: `*` is internally used to replace the regex for `.`."
158 raise ParserError(msg)
159
```
--- END FILES ---
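Two details of the listings above — the trailing-slash prefix normalisation and the `s3a://` path construction — can be checked with a few lines of standalone Python. The snippet only restates those expressions outside the connector classes (no Great Expectations import) and assumes POSIX-style path separators:

```python
import os

# os.path.join(prefix, "") guarantees a trailing "/" so the S3 listing treats
# the prefix as a "directory"; it is idempotent if the slash is already there.
print(os.path.join("my/prefix", ""))         # 'my/prefix/'
print(os.path.join("my/prefix/", ""))        # 'my/prefix/'

# _get_full_file_path simply prefixes the bucket and the s3a:// scheme.
bucket, key = "my-bucket", "data/2021/file.csv"
print(f"s3a://{os.path.join(bucket, key)}")  # 's3a://my-bucket/data/2021/file.csv'

# _check_valid_s3_path rejects keys containing "*", which the connector's regex
# machinery uses internally as a stand-in for ".".
bad_chars = [c for c in ["*"] if c in "data/*.csv"]
print(bad_chars)                             # ['*']
```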
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/great_expectations/datasource/data_connector/configured_asset_s3_data_connector.py b/great_expectations/datasource/data_connector/configured_asset_s3_data_connector.py
--- a/great_expectations/datasource/data_connector/configured_asset_s3_data_connector.py
+++ b/great_expectations/datasource/data_connector/configured_asset_s3_data_connector.py
@@ -141,6 +141,6 @@
path: str,
data_asset_name: Optional[str] = None,
) -> str:
- # data_assert_name isn't used in this method.
+ # data_asset_name isn't used in this method.
# It's only kept for compatibility with parent methods.
return f"s3a://{os.path.join(self._bucket, path)}"
diff --git a/great_expectations/datasource/data_connector/inferred_asset_s3_data_connector.py b/great_expectations/datasource/data_connector/inferred_asset_s3_data_connector.py
--- a/great_expectations/datasource/data_connector/inferred_asset_s3_data_connector.py
+++ b/great_expectations/datasource/data_connector/inferred_asset_s3_data_connector.py
@@ -137,7 +137,7 @@
path: str,
data_asset_name: Optional[str] = None,
) -> str:
- # data_assert_name isn't used in this method.
+ # data_asset_name isn't used in this method.
# It's only kept for compatibility with parent methods.
_check_valid_s3_path(path)
return f"s3a://{os.path.join(self._bucket, path)}"
|
{"golden_diff": "diff --git a/great_expectations/datasource/data_connector/configured_asset_s3_data_connector.py b/great_expectations/datasource/data_connector/configured_asset_s3_data_connector.py\n--- a/great_expectations/datasource/data_connector/configured_asset_s3_data_connector.py\n+++ b/great_expectations/datasource/data_connector/configured_asset_s3_data_connector.py\n@@ -141,6 +141,6 @@\n path: str,\n data_asset_name: Optional[str] = None,\n ) -> str:\n- # data_assert_name isn't used in this method.\n+ # data_asset_name isn't used in this method.\n # It's only kept for compatibility with parent methods.\n return f\"s3a://{os.path.join(self._bucket, path)}\"\ndiff --git a/great_expectations/datasource/data_connector/inferred_asset_s3_data_connector.py b/great_expectations/datasource/data_connector/inferred_asset_s3_data_connector.py\n--- a/great_expectations/datasource/data_connector/inferred_asset_s3_data_connector.py\n+++ b/great_expectations/datasource/data_connector/inferred_asset_s3_data_connector.py\n@@ -137,7 +137,7 @@\n path: str,\n data_asset_name: Optional[str] = None,\n ) -> str:\n- # data_assert_name isn't used in this method.\n+ # data_asset_name isn't used in this method.\n # It's only kept for compatibility with parent methods.\n _check_valid_s3_path(path)\n return f\"s3a://{os.path.join(self._bucket, path)}\"\n", "issue": "Use cleaner solution for non-truncating division in python 2\nPrefer `from __future__ import division` to `1.*x/y`\n", "before_files": [{"content": "import logging\nimport os\nfrom typing import List, Optional\n\ntry:\n import boto3\nexcept ImportError:\n boto3 = None\n\nfrom great_expectations.core.batch import BatchDefinition\nfrom great_expectations.core.batch_spec import PathBatchSpec, S3BatchSpec\nfrom great_expectations.datasource.data_connector import (\n ConfiguredAssetFilePathDataConnector,\n)\nfrom great_expectations.datasource.data_connector.asset import Asset\nfrom great_expectations.datasource.data_connector.util import list_s3_keys\nfrom great_expectations.execution_engine import ExecutionEngine\n\nlogger = logging.getLogger(__name__)\n\n\nclass ConfiguredAssetS3DataConnector(ConfiguredAssetFilePathDataConnector):\n \"\"\"\n Extension of ConfiguredAssetFilePathDataConnector used to connect to S3\n\n DataConnectors produce identifying information, called \"batch_spec\" that ExecutionEngines\n can use to get individual batches of data. 
They add flexibility in how to obtain data\n such as with time-based partitioning, downsampling, or other techniques appropriate\n for the Datasource.\n\n The ConfiguredAssetS3DataConnector is one of two classes (InferredAssetS3DataConnector being the\n other one) designed for connecting to data on S3.\n\n A ConfiguredAssetS3DataConnector requires an explicit listing of each DataAsset you want to connect to.\n This allows more fine-tuning, but also requires more setup.\n \"\"\"\n\n def __init__(\n self,\n name: str,\n datasource_name: str,\n bucket: str,\n assets: dict,\n execution_engine: Optional[ExecutionEngine] = None,\n default_regex: Optional[dict] = None,\n sorters: Optional[list] = None,\n prefix: Optional[str] = \"\",\n delimiter: Optional[str] = \"/\",\n max_keys: Optional[int] = 1000,\n boto3_options: Optional[dict] = None,\n batch_spec_passthrough: Optional[dict] = None,\n ):\n \"\"\"\n ConfiguredAssetDataConnector for connecting to S3.\n\n Args:\n name (str): required name for DataConnector\n datasource_name (str): required name for datasource\n bucket (str): bucket for S3\n assets (dict): dict of asset configuration (required for ConfiguredAssetDataConnector)\n execution_engine (ExecutionEngine): optional reference to ExecutionEngine\n default_regex (dict): optional regex configuration for filtering data_references\n sorters (list): optional list of sorters for sorting data_references\n prefix (str): S3 prefix\n delimiter (str): S3 delimiter\n max_keys (int): S3 max_keys (default is 1000)\n boto3_options (dict): optional boto3 options\n batch_spec_passthrough (dict): dictionary with keys that will be added directly to batch_spec\n \"\"\"\n logger.debug(f'Constructing ConfiguredAssetS3DataConnector \"{name}\".')\n\n super().__init__(\n name=name,\n datasource_name=datasource_name,\n execution_engine=execution_engine,\n assets=assets,\n default_regex=default_regex,\n sorters=sorters,\n batch_spec_passthrough=batch_spec_passthrough,\n )\n self._bucket = bucket\n self._prefix = os.path.join(prefix, \"\")\n self._delimiter = delimiter\n self._max_keys = max_keys\n\n if boto3_options is None:\n boto3_options = {}\n\n try:\n self._s3 = boto3.client(\"s3\", **boto3_options)\n except (TypeError, AttributeError):\n raise ImportError(\n \"Unable to load boto3 (it is required for ConfiguredAssetS3DataConnector).\"\n )\n\n def build_batch_spec(self, batch_definition: BatchDefinition) -> S3BatchSpec:\n \"\"\"\n Build BatchSpec from batch_definition by calling DataConnector's build_batch_spec function.\n\n Args:\n batch_definition (BatchDefinition): to be used to build batch_spec\n\n Returns:\n BatchSpec built from batch_definition\n \"\"\"\n batch_spec: PathBatchSpec = super().build_batch_spec(\n batch_definition=batch_definition\n )\n return S3BatchSpec(batch_spec)\n\n def _get_data_reference_list_for_asset(self, asset: Optional[Asset]) -> List[str]:\n query_options: dict = {\n \"Bucket\": self._bucket,\n \"Prefix\": self._prefix,\n \"Delimiter\": self._delimiter,\n \"MaxKeys\": self._max_keys,\n }\n if asset is not None:\n if asset.bucket:\n query_options[\"Bucket\"] = asset.bucket\n if asset.prefix:\n query_options[\"Prefix\"] = asset.prefix\n if asset.delimiter:\n query_options[\"Delimiter\"] = asset.delimiter\n if asset.max_keys:\n query_options[\"MaxKeys\"] = asset.max_keys\n\n path_list: List[str] = [\n key\n for key in list_s3_keys(\n s3=self._s3,\n query_options=query_options,\n iterator_dict={},\n recursive=False,\n )\n ]\n return path_list\n\n def _get_full_file_path(\n self,\n 
path: str,\n data_asset_name: Optional[str] = None,\n ) -> str:\n # data_assert_name isn't used in this method.\n # It's only kept for compatibility with parent methods.\n return f\"s3a://{os.path.join(self._bucket, path)}\"\n", "path": "great_expectations/datasource/data_connector/configured_asset_s3_data_connector.py"}, {"content": "import logging\nimport os\nfrom typing import List, Optional\n\nfrom great_expectations.core.batch import BatchDefinition\nfrom great_expectations.core.batch_spec import PathBatchSpec, S3BatchSpec\nfrom great_expectations.exceptions.exceptions import ParserError\n\ntry:\n import boto3\nexcept ImportError:\n boto3 = None\n\nfrom great_expectations.datasource.data_connector import (\n InferredAssetFilePathDataConnector,\n)\nfrom great_expectations.datasource.data_connector.util import list_s3_keys\nfrom great_expectations.execution_engine import ExecutionEngine\n\nlogger = logging.getLogger(__name__)\n\nINVALID_S3_CHARS = [\"*\"]\n\n\nclass InferredAssetS3DataConnector(InferredAssetFilePathDataConnector):\n \"\"\"\n Extension of InferredAssetFilePathDataConnector used to connect to S3\n\n The InferredAssetS3DataConnector is one of two classes (ConfiguredAssetS3DataConnector being the\n other one) designed for connecting to filesystem-like data, more specifically files on S3. It connects to assets\n inferred from bucket, prefix, and file name by default_regex.\n\n InferredAssetS3DataConnector that operates on S3 buckets and determines\n the data_asset_name implicitly (e.g., through the combination of the regular expressions pattern and group names)\n\n \"\"\"\n\n def __init__(\n self,\n name: str,\n datasource_name: str,\n bucket: str,\n execution_engine: Optional[ExecutionEngine] = None,\n default_regex: Optional[dict] = None,\n sorters: Optional[list] = None,\n prefix: Optional[str] = \"\",\n delimiter: Optional[str] = \"/\",\n max_keys: Optional[int] = 1000,\n boto3_options: Optional[dict] = None,\n batch_spec_passthrough: Optional[dict] = None,\n ):\n \"\"\"\n InferredAssetS3DataConnector for connecting to S3.\n\n Args:\n name (str): required name for data_connector\n datasource_name (str): required name for datasource\n bucket (str): bucket for S3\n execution_engine (ExecutionEngine): optional reference to ExecutionEngine\n default_regex (dict): optional regex configuration for filtering data_references\n sorters (list): optional list of sorters for sorting data_references\n prefix (str): S3 prefix\n delimiter (str): S3 delimiter\n max_keys (int): S3 max_keys (default is 1000)\n boto3_options (dict): optional boto3 options\n batch_spec_passthrough (dict): dictionary with keys that will be added directly to batch_spec\n \"\"\"\n logger.debug(f'Constructing InferredAssetS3DataConnector \"{name}\".')\n\n super().__init__(\n name=name,\n datasource_name=datasource_name,\n execution_engine=execution_engine,\n default_regex=default_regex,\n sorters=sorters,\n batch_spec_passthrough=batch_spec_passthrough,\n )\n\n self._bucket = bucket\n self._prefix = os.path.join(prefix, \"\")\n self._delimiter = delimiter\n self._max_keys = max_keys\n\n if boto3_options is None:\n boto3_options = {}\n\n try:\n self._s3 = boto3.client(\"s3\", **boto3_options)\n except (TypeError, AttributeError):\n raise ImportError(\n \"Unable to load boto3 (it is required for InferredAssetS3DataConnector).\"\n )\n\n def build_batch_spec(self, batch_definition: BatchDefinition) -> S3BatchSpec:\n \"\"\"\n Build BatchSpec from batch_definition by calling DataConnector's build_batch_spec 
function.\n\n Args:\n batch_definition (BatchDefinition): to be used to build batch_spec\n\n Returns:\n BatchSpec built from batch_definition\n \"\"\"\n batch_spec: PathBatchSpec = super().build_batch_spec(\n batch_definition=batch_definition\n )\n return S3BatchSpec(batch_spec)\n\n def _get_data_reference_list(\n self, data_asset_name: Optional[str] = None\n ) -> List[str]:\n \"\"\"\n List objects in the underlying data store to create a list of data_references.\n\n This method is used to refresh the cache.\n \"\"\"\n query_options: dict = {\n \"Bucket\": self._bucket,\n \"Prefix\": self._prefix,\n \"Delimiter\": self._delimiter,\n \"MaxKeys\": self._max_keys,\n }\n\n path_list: List[str] = [\n key\n for key in list_s3_keys(\n s3=self._s3,\n query_options=query_options,\n iterator_dict={},\n recursive=True,\n )\n ]\n return path_list\n\n def _get_full_file_path(\n self,\n path: str,\n data_asset_name: Optional[str] = None,\n ) -> str:\n # data_assert_name isn't used in this method.\n # It's only kept for compatibility with parent methods.\n _check_valid_s3_path(path)\n return f\"s3a://{os.path.join(self._bucket, path)}\"\n\n\ndef _check_valid_s3_path(\n path: str,\n) -> None:\n \"\"\"Performs a basic check for validity of the S3 path\"\"\"\n bad_chars = [c for c in INVALID_S3_CHARS if c in path]\n if len(bad_chars) > 0:\n msg = (\n f\"The parsed S3 path={path} contains the invalid characters {bad_chars}.\"\n \"Please make sure your regex is correct and characters are escaped.\"\n )\n if \"*\" in bad_chars:\n msg += \"Note: `*` is internally used to replace the regex for `.`.\"\n raise ParserError(msg)\n", "path": "great_expectations/datasource/data_connector/inferred_asset_s3_data_connector.py"}], "after_files": [{"content": "import logging\nimport os\nfrom typing import List, Optional\n\ntry:\n import boto3\nexcept ImportError:\n boto3 = None\n\nfrom great_expectations.core.batch import BatchDefinition\nfrom great_expectations.core.batch_spec import PathBatchSpec, S3BatchSpec\nfrom great_expectations.datasource.data_connector import (\n ConfiguredAssetFilePathDataConnector,\n)\nfrom great_expectations.datasource.data_connector.asset import Asset\nfrom great_expectations.datasource.data_connector.util import list_s3_keys\nfrom great_expectations.execution_engine import ExecutionEngine\n\nlogger = logging.getLogger(__name__)\n\n\nclass ConfiguredAssetS3DataConnector(ConfiguredAssetFilePathDataConnector):\n \"\"\"\n Extension of ConfiguredAssetFilePathDataConnector used to connect to S3\n\n DataConnectors produce identifying information, called \"batch_spec\" that ExecutionEngines\n can use to get individual batches of data. 
They add flexibility in how to obtain data\n such as with time-based partitioning, downsampling, or other techniques appropriate\n for the Datasource.\n\n The ConfiguredAssetS3DataConnector is one of two classes (InferredAssetS3DataConnector being the\n other one) designed for connecting to data on S3.\n\n A ConfiguredAssetS3DataConnector requires an explicit listing of each DataAsset you want to connect to.\n This allows more fine-tuning, but also requires more setup.\n \"\"\"\n\n def __init__(\n self,\n name: str,\n datasource_name: str,\n bucket: str,\n assets: dict,\n execution_engine: Optional[ExecutionEngine] = None,\n default_regex: Optional[dict] = None,\n sorters: Optional[list] = None,\n prefix: Optional[str] = \"\",\n delimiter: Optional[str] = \"/\",\n max_keys: Optional[int] = 1000,\n boto3_options: Optional[dict] = None,\n batch_spec_passthrough: Optional[dict] = None,\n ):\n \"\"\"\n ConfiguredAssetDataConnector for connecting to S3.\n\n Args:\n name (str): required name for DataConnector\n datasource_name (str): required name for datasource\n bucket (str): bucket for S3\n assets (dict): dict of asset configuration (required for ConfiguredAssetDataConnector)\n execution_engine (ExecutionEngine): optional reference to ExecutionEngine\n default_regex (dict): optional regex configuration for filtering data_references\n sorters (list): optional list of sorters for sorting data_references\n prefix (str): S3 prefix\n delimiter (str): S3 delimiter\n max_keys (int): S3 max_keys (default is 1000)\n boto3_options (dict): optional boto3 options\n batch_spec_passthrough (dict): dictionary with keys that will be added directly to batch_spec\n \"\"\"\n logger.debug(f'Constructing ConfiguredAssetS3DataConnector \"{name}\".')\n\n super().__init__(\n name=name,\n datasource_name=datasource_name,\n execution_engine=execution_engine,\n assets=assets,\n default_regex=default_regex,\n sorters=sorters,\n batch_spec_passthrough=batch_spec_passthrough,\n )\n self._bucket = bucket\n self._prefix = os.path.join(prefix, \"\")\n self._delimiter = delimiter\n self._max_keys = max_keys\n\n if boto3_options is None:\n boto3_options = {}\n\n try:\n self._s3 = boto3.client(\"s3\", **boto3_options)\n except (TypeError, AttributeError):\n raise ImportError(\n \"Unable to load boto3 (it is required for ConfiguredAssetS3DataConnector).\"\n )\n\n def build_batch_spec(self, batch_definition: BatchDefinition) -> S3BatchSpec:\n \"\"\"\n Build BatchSpec from batch_definition by calling DataConnector's build_batch_spec function.\n\n Args:\n batch_definition (BatchDefinition): to be used to build batch_spec\n\n Returns:\n BatchSpec built from batch_definition\n \"\"\"\n batch_spec: PathBatchSpec = super().build_batch_spec(\n batch_definition=batch_definition\n )\n return S3BatchSpec(batch_spec)\n\n def _get_data_reference_list_for_asset(self, asset: Optional[Asset]) -> List[str]:\n query_options: dict = {\n \"Bucket\": self._bucket,\n \"Prefix\": self._prefix,\n \"Delimiter\": self._delimiter,\n \"MaxKeys\": self._max_keys,\n }\n if asset is not None:\n if asset.bucket:\n query_options[\"Bucket\"] = asset.bucket\n if asset.prefix:\n query_options[\"Prefix\"] = asset.prefix\n if asset.delimiter:\n query_options[\"Delimiter\"] = asset.delimiter\n if asset.max_keys:\n query_options[\"MaxKeys\"] = asset.max_keys\n\n path_list: List[str] = [\n key\n for key in list_s3_keys(\n s3=self._s3,\n query_options=query_options,\n iterator_dict={},\n recursive=False,\n )\n ]\n return path_list\n\n def _get_full_file_path(\n self,\n 
path: str,\n data_asset_name: Optional[str] = None,\n ) -> str:\n # data_asset_name isn't used in this method.\n # It's only kept for compatibility with parent methods.\n return f\"s3a://{os.path.join(self._bucket, path)}\"\n", "path": "great_expectations/datasource/data_connector/configured_asset_s3_data_connector.py"}, {"content": "import logging\nimport os\nfrom typing import List, Optional\n\nfrom great_expectations.core.batch import BatchDefinition\nfrom great_expectations.core.batch_spec import PathBatchSpec, S3BatchSpec\nfrom great_expectations.exceptions.exceptions import ParserError\n\ntry:\n import boto3\nexcept ImportError:\n boto3 = None\n\nfrom great_expectations.datasource.data_connector import (\n InferredAssetFilePathDataConnector,\n)\nfrom great_expectations.datasource.data_connector.util import list_s3_keys\nfrom great_expectations.execution_engine import ExecutionEngine\n\nlogger = logging.getLogger(__name__)\n\nINVALID_S3_CHARS = [\"*\"]\n\n\nclass InferredAssetS3DataConnector(InferredAssetFilePathDataConnector):\n \"\"\"\n Extension of InferredAssetFilePathDataConnector used to connect to S3\n\n The InferredAssetS3DataConnector is one of two classes (ConfiguredAssetS3DataConnector being the\n other one) designed for connecting to filesystem-like data, more specifically files on S3. It connects to assets\n inferred from bucket, prefix, and file name by default_regex.\n\n InferredAssetS3DataConnector that operates on S3 buckets and determines\n the data_asset_name implicitly (e.g., through the combination of the regular expressions pattern and group names)\n\n \"\"\"\n\n def __init__(\n self,\n name: str,\n datasource_name: str,\n bucket: str,\n execution_engine: Optional[ExecutionEngine] = None,\n default_regex: Optional[dict] = None,\n sorters: Optional[list] = None,\n prefix: Optional[str] = \"\",\n delimiter: Optional[str] = \"/\",\n max_keys: Optional[int] = 1000,\n boto3_options: Optional[dict] = None,\n batch_spec_passthrough: Optional[dict] = None,\n ):\n \"\"\"\n InferredAssetS3DataConnector for connecting to S3.\n\n Args:\n name (str): required name for data_connector\n datasource_name (str): required name for datasource\n bucket (str): bucket for S3\n execution_engine (ExecutionEngine): optional reference to ExecutionEngine\n default_regex (dict): optional regex configuration for filtering data_references\n sorters (list): optional list of sorters for sorting data_references\n prefix (str): S3 prefix\n delimiter (str): S3 delimiter\n max_keys (int): S3 max_keys (default is 1000)\n boto3_options (dict): optional boto3 options\n batch_spec_passthrough (dict): dictionary with keys that will be added directly to batch_spec\n \"\"\"\n logger.debug(f'Constructing InferredAssetS3DataConnector \"{name}\".')\n\n super().__init__(\n name=name,\n datasource_name=datasource_name,\n execution_engine=execution_engine,\n default_regex=default_regex,\n sorters=sorters,\n batch_spec_passthrough=batch_spec_passthrough,\n )\n\n self._bucket = bucket\n self._prefix = os.path.join(prefix, \"\")\n self._delimiter = delimiter\n self._max_keys = max_keys\n\n if boto3_options is None:\n boto3_options = {}\n\n try:\n self._s3 = boto3.client(\"s3\", **boto3_options)\n except (TypeError, AttributeError):\n raise ImportError(\n \"Unable to load boto3 (it is required for InferredAssetS3DataConnector).\"\n )\n\n def build_batch_spec(self, batch_definition: BatchDefinition) -> S3BatchSpec:\n \"\"\"\n Build BatchSpec from batch_definition by calling DataConnector's build_batch_spec 
function.\n\n Args:\n batch_definition (BatchDefinition): to be used to build batch_spec\n\n Returns:\n BatchSpec built from batch_definition\n \"\"\"\n batch_spec: PathBatchSpec = super().build_batch_spec(\n batch_definition=batch_definition\n )\n return S3BatchSpec(batch_spec)\n\n def _get_data_reference_list(\n self, data_asset_name: Optional[str] = None\n ) -> List[str]:\n \"\"\"\n List objects in the underlying data store to create a list of data_references.\n\n This method is used to refresh the cache.\n \"\"\"\n query_options: dict = {\n \"Bucket\": self._bucket,\n \"Prefix\": self._prefix,\n \"Delimiter\": self._delimiter,\n \"MaxKeys\": self._max_keys,\n }\n\n path_list: List[str] = [\n key\n for key in list_s3_keys(\n s3=self._s3,\n query_options=query_options,\n iterator_dict={},\n recursive=True,\n )\n ]\n return path_list\n\n def _get_full_file_path(\n self,\n path: str,\n data_asset_name: Optional[str] = None,\n ) -> str:\n # data_asset_name isn't used in this method.\n # It's only kept for compatibility with parent methods.\n _check_valid_s3_path(path)\n return f\"s3a://{os.path.join(self._bucket, path)}\"\n\n\ndef _check_valid_s3_path(\n path: str,\n) -> None:\n \"\"\"Performs a basic check for validity of the S3 path\"\"\"\n bad_chars = [c for c in INVALID_S3_CHARS if c in path]\n if len(bad_chars) > 0:\n msg = (\n f\"The parsed S3 path={path} contains the invalid characters {bad_chars}.\"\n \"Please make sure your regex is correct and characters are escaped.\"\n )\n if \"*\" in bad_chars:\n msg += \"Note: `*` is internally used to replace the regex for `.`.\"\n raise ParserError(msg)\n", "path": "great_expectations/datasource/data_connector/inferred_asset_s3_data_connector.py"}]}
| 3,377 | 347 |
gh_patches_debug_14637
|
rasdani/github-patches
|
git_diff
|
googleapis__google-auth-library-python-124
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add public properties to google.oauth2.credentials.Credentials
Resolves #124
--- END ISSUE ---
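The issue amounts to exposing read-only views of private attributes. A generic, hypothetical sketch of that pattern follows (the class and field names are invented for illustration; the actual google-auth fields appear in the file listing below):

```python
class Example:
    def __init__(self, client_id=None):
        self._client_id = client_id      # stored privately

    @property
    def client_id(self):
        """Optional[str]: read-only public accessor for the private field."""
        return self._client_id


creds = Example(client_id="abc123")
print(creds.client_id)    # 'abc123'
# creds.client_id = "x"   # would raise AttributeError: the property has no setter
```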
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `google/oauth2/credentials.py`
Content:
```
1 # Copyright 2016 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """OAuth 2.0 Credentials.
16
17 This module provides credentials based on OAuth 2.0 access and refresh tokens.
18 These credentials usually access resources on behalf of a user (resource
19 owner).
20
21 Specifically, this is intended to use access tokens acquired using the
22 `Authorization Code grant`_ and can refresh those tokens using a
23 optional `refresh token`_.
24
25 Obtaining the initial access and refresh token is outside of the scope of this
26 module. Consult `rfc6749 section 4.1`_ for complete details on the
27 Authorization Code grant flow.
28
29 .. _Authorization Code grant: https://tools.ietf.org/html/rfc6749#section-1.3.1
30 .. _refresh token: https://tools.ietf.org/html/rfc6749#section-6
31 .. _rfc6749 section 4.1: https://tools.ietf.org/html/rfc6749#section-4.1
32 """
33
34 from google.auth import _helpers
35 from google.auth import credentials
36 from google.oauth2 import _client
37
38
39 class Credentials(credentials.Scoped, credentials.Credentials):
40 """Credentials using OAuth 2.0 access and refresh tokens."""
41
42 def __init__(self, token, refresh_token=None, token_uri=None,
43 client_id=None, client_secret=None, scopes=None):
44 """
45 Args:
46 token (Optional(str)): The OAuth 2.0 access token. Can be None
47 if refresh information is provided.
48 refresh_token (str): The OAuth 2.0 refresh token. If specified,
49 credentials can be refreshed.
50 token_uri (str): The OAuth 2.0 authorization server's token
51 endpoint URI. Must be specified for refresh, can be left as
52 None if the token can not be refreshed.
53 client_id (str): The OAuth 2.0 client ID. Must be specified for
54 refresh, can be left as None if the token can not be refreshed.
55 client_secret(str): The OAuth 2.0 client secret. Must be specified
56 for refresh, can be left as None if the token can not be
57 refreshed.
58 scopes (Sequence[str]): The scopes that were originally used
59 to obtain authorization. This is a purely informative parameter
60 that can be used by :meth:`has_scopes`. OAuth 2.0 credentials
61 can not request additional scopes after authorization.
62 """
63 super(Credentials, self).__init__()
64 self.token = token
65 self._refresh_token = refresh_token
66 self._scopes = scopes
67 self._token_uri = token_uri
68 self._client_id = client_id
69 self._client_secret = client_secret
70
71 @property
72 def requires_scopes(self):
73 """False: OAuth 2.0 credentials have their scopes set when
74 the initial token is requested and can not be changed."""
75 return False
76
77 def with_scopes(self, scopes):
78 """Unavailable, OAuth 2.0 credentials can not be re-scoped.
79
80 OAuth 2.0 credentials have their scopes set when the initial token is
81 requested and can not be changed.
82 """
83 raise NotImplementedError(
84 'OAuth 2.0 Credentials can not modify their scopes.')
85
86 @_helpers.copy_docstring(credentials.Credentials)
87 def refresh(self, request):
88 access_token, refresh_token, expiry, _ = _client.refresh_grant(
89 request, self._token_uri, self._refresh_token, self._client_id,
90 self._client_secret)
91
92 self.token = access_token
93 self.expiry = expiry
94 self._refresh_token = refresh_token
95
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/google/oauth2/credentials.py b/google/oauth2/credentials.py
--- a/google/oauth2/credentials.py
+++ b/google/oauth2/credentials.py
@@ -68,6 +68,27 @@
self._client_id = client_id
self._client_secret = client_secret
+ @property
+ def refresh_token(self):
+ """Optional[str]: The OAuth 2.0 refresh token."""
+ return self._refresh_token
+
+ @property
+ def token_uri(self):
+ """Optional[str]: The OAuth 2.0 authorization server's token endpoint
+ URI."""
+ return self._token_uri
+
+ @property
+ def client_id(self):
+ """Optional[str]: The OAuth 2.0 client ID."""
+ return self._client_id
+
+ @property
+ def client_secret(self):
+ """Optional[str]: The OAuth 2.0 client secret."""
+ return self._client_secret
+
@property
def requires_scopes(self):
"""False: OAuth 2.0 credentials have their scopes set when
|
{"golden_diff": "diff --git a/google/oauth2/credentials.py b/google/oauth2/credentials.py\n--- a/google/oauth2/credentials.py\n+++ b/google/oauth2/credentials.py\n@@ -68,6 +68,27 @@\n self._client_id = client_id\n self._client_secret = client_secret\n \n+ @property\n+ def refresh_token(self):\n+ \"\"\"Optional[str]: The OAuth 2.0 refresh token.\"\"\"\n+ return self._refresh_token\n+\n+ @property\n+ def token_uri(self):\n+ \"\"\"Optional[str]: The OAuth 2.0 authorization server's token endpoint\n+ URI.\"\"\"\n+ return self._token_uri\n+\n+ @property\n+ def client_id(self):\n+ \"\"\"Optional[str]: The OAuth 2.0 client ID.\"\"\"\n+ return self._client_id\n+\n+ @property\n+ def client_secret(self):\n+ \"\"\"Optional[str]: The OAuth 2.0 client secret.\"\"\"\n+ return self._client_secret\n+\n @property\n def requires_scopes(self):\n \"\"\"False: OAuth 2.0 credentials have their scopes set when\n", "issue": "Add public properties to google.oauth2.credentials.Credentials\nResolves #124 \n", "before_files": [{"content": "# Copyright 2016 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"OAuth 2.0 Credentials.\n\nThis module provides credentials based on OAuth 2.0 access and refresh tokens.\nThese credentials usually access resources on behalf of a user (resource\nowner).\n\nSpecifically, this is intended to use access tokens acquired using the\n`Authorization Code grant`_ and can refresh those tokens using a\noptional `refresh token`_.\n\nObtaining the initial access and refresh token is outside of the scope of this\nmodule. Consult `rfc6749 section 4.1`_ for complete details on the\nAuthorization Code grant flow.\n\n.. _Authorization Code grant: https://tools.ietf.org/html/rfc6749#section-1.3.1\n.. _refresh token: https://tools.ietf.org/html/rfc6749#section-6\n.. _rfc6749 section 4.1: https://tools.ietf.org/html/rfc6749#section-4.1\n\"\"\"\n\nfrom google.auth import _helpers\nfrom google.auth import credentials\nfrom google.oauth2 import _client\n\n\nclass Credentials(credentials.Scoped, credentials.Credentials):\n \"\"\"Credentials using OAuth 2.0 access and refresh tokens.\"\"\"\n\n def __init__(self, token, refresh_token=None, token_uri=None,\n client_id=None, client_secret=None, scopes=None):\n \"\"\"\n Args:\n token (Optional(str)): The OAuth 2.0 access token. Can be None\n if refresh information is provided.\n refresh_token (str): The OAuth 2.0 refresh token. If specified,\n credentials can be refreshed.\n token_uri (str): The OAuth 2.0 authorization server's token\n endpoint URI. Must be specified for refresh, can be left as\n None if the token can not be refreshed.\n client_id (str): The OAuth 2.0 client ID. Must be specified for\n refresh, can be left as None if the token can not be refreshed.\n client_secret(str): The OAuth 2.0 client secret. Must be specified\n for refresh, can be left as None if the token can not be\n refreshed.\n scopes (Sequence[str]): The scopes that were originally used\n to obtain authorization. 
This is a purely informative parameter\n that can be used by :meth:`has_scopes`. OAuth 2.0 credentials\n can not request additional scopes after authorization.\n \"\"\"\n super(Credentials, self).__init__()\n self.token = token\n self._refresh_token = refresh_token\n self._scopes = scopes\n self._token_uri = token_uri\n self._client_id = client_id\n self._client_secret = client_secret\n\n @property\n def requires_scopes(self):\n \"\"\"False: OAuth 2.0 credentials have their scopes set when\n the initial token is requested and can not be changed.\"\"\"\n return False\n\n def with_scopes(self, scopes):\n \"\"\"Unavailable, OAuth 2.0 credentials can not be re-scoped.\n\n OAuth 2.0 credentials have their scopes set when the initial token is\n requested and can not be changed.\n \"\"\"\n raise NotImplementedError(\n 'OAuth 2.0 Credentials can not modify their scopes.')\n\n @_helpers.copy_docstring(credentials.Credentials)\n def refresh(self, request):\n access_token, refresh_token, expiry, _ = _client.refresh_grant(\n request, self._token_uri, self._refresh_token, self._client_id,\n self._client_secret)\n\n self.token = access_token\n self.expiry = expiry\n self._refresh_token = refresh_token\n", "path": "google/oauth2/credentials.py"}], "after_files": [{"content": "# Copyright 2016 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"OAuth 2.0 Credentials.\n\nThis module provides credentials based on OAuth 2.0 access and refresh tokens.\nThese credentials usually access resources on behalf of a user (resource\nowner).\n\nSpecifically, this is intended to use access tokens acquired using the\n`Authorization Code grant`_ and can refresh those tokens using a\noptional `refresh token`_.\n\nObtaining the initial access and refresh token is outside of the scope of this\nmodule. Consult `rfc6749 section 4.1`_ for complete details on the\nAuthorization Code grant flow.\n\n.. _Authorization Code grant: https://tools.ietf.org/html/rfc6749#section-1.3.1\n.. _refresh token: https://tools.ietf.org/html/rfc6749#section-6\n.. _rfc6749 section 4.1: https://tools.ietf.org/html/rfc6749#section-4.1\n\"\"\"\n\nfrom google.auth import _helpers\nfrom google.auth import credentials\nfrom google.oauth2 import _client\n\n\nclass Credentials(credentials.Scoped, credentials.Credentials):\n \"\"\"Credentials using OAuth 2.0 access and refresh tokens.\"\"\"\n\n def __init__(self, token, refresh_token=None, token_uri=None,\n client_id=None, client_secret=None, scopes=None):\n \"\"\"\n Args:\n token (Optional(str)): The OAuth 2.0 access token. Can be None\n if refresh information is provided.\n refresh_token (str): The OAuth 2.0 refresh token. If specified,\n credentials can be refreshed.\n token_uri (str): The OAuth 2.0 authorization server's token\n endpoint URI. Must be specified for refresh, can be left as\n None if the token can not be refreshed.\n client_id (str): The OAuth 2.0 client ID. 
Must be specified for\n refresh, can be left as None if the token can not be refreshed.\n client_secret(str): The OAuth 2.0 client secret. Must be specified\n for refresh, can be left as None if the token can not be\n refreshed.\n scopes (Sequence[str]): The scopes that were originally used\n to obtain authorization. This is a purely informative parameter\n that can be used by :meth:`has_scopes`. OAuth 2.0 credentials\n can not request additional scopes after authorization.\n \"\"\"\n super(Credentials, self).__init__()\n self.token = token\n self._refresh_token = refresh_token\n self._scopes = scopes\n self._token_uri = token_uri\n self._client_id = client_id\n self._client_secret = client_secret\n\n @property\n def refresh_token(self):\n \"\"\"Optional[str]: The OAuth 2.0 refresh token.\"\"\"\n return self._refresh_token\n\n @property\n def token_uri(self):\n \"\"\"Optional[str]: The OAuth 2.0 authorization server's token endpoint\n URI.\"\"\"\n return self._token_uri\n\n @property\n def client_id(self):\n \"\"\"Optional[str]: The OAuth 2.0 client ID.\"\"\"\n return self._client_id\n\n @property\n def client_secret(self):\n \"\"\"Optional[str]: The OAuth 2.0 client secret.\"\"\"\n return self._client_secret\n\n @property\n def requires_scopes(self):\n \"\"\"False: OAuth 2.0 credentials have their scopes set when\n the initial token is requested and can not be changed.\"\"\"\n return False\n\n def with_scopes(self, scopes):\n \"\"\"Unavailable, OAuth 2.0 credentials can not be re-scoped.\n\n OAuth 2.0 credentials have their scopes set when the initial token is\n requested and can not be changed.\n \"\"\"\n raise NotImplementedError(\n 'OAuth 2.0 Credentials can not modify their scopes.')\n\n @_helpers.copy_docstring(credentials.Credentials)\n def refresh(self, request):\n access_token, refresh_token, expiry, _ = _client.refresh_grant(\n request, self._token_uri, self._refresh_token, self._client_id,\n self._client_secret)\n\n self.token = access_token\n self.expiry = expiry\n self._refresh_token = refresh_token\n", "path": "google/oauth2/credentials.py"}]}
| 1,356 | 245 |
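A minimal usage sketch of the patched `google.oauth2.credentials.Credentials` from the record above. All constructor values are placeholders; the point is only that, once the golden diff is applied, the four private attributes become readable through the new properties.

```python
from google.oauth2.credentials import Credentials

# Placeholder values for illustration only.
creds = Credentials(
    token="ya29.placeholder-access-token",
    refresh_token="1/placeholder-refresh-token",
    token_uri="https://accounts.google.com/o/oauth2/token",
    client_id="placeholder-client-id.apps.googleusercontent.com",
    client_secret="placeholder-client-secret",
    scopes=["email"],
)

# With the patch applied, the private attributes are exposed as read-only properties.
print(creds.refresh_token)   # 1/placeholder-refresh-token
print(creds.token_uri)       # https://accounts.google.com/o/oauth2/token
print(creds.client_id)       # placeholder-client-id.apps.googleusercontent.com
print(creds.client_secret)   # placeholder-client-secret
```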
gh_patches_debug_5344
|
rasdani/github-patches
|
git_diff
|
nilearn__nilearn-2822
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use plot_event in a example
The function `plot_event` has currently no example linked to its [doc](https://nilearn.github.io/modules/generated/nilearn.plotting.plot_event.html#nilearn.plotting.plot_event).
It wouldn't be too costly to use it in one example somewhere.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `examples/04_glm_first_level/write_events_file.py`
Content:
```
1 """Example of a events.tsv file generation: the neurospin/localizer events.
2 =============================================================================
3
4 The protocol described is the so-called "archi standard" localizer
5 event sequence. See Pinel et al., BMC neuroscience 2007 for reference.
6 """
7
8 print(__doc__)
9
10 #########################################################################
11 # Define the onset times in seconds. Those are typically extracted
12 # from the stimulation software used.
13 import numpy as np
14 onset = np.array([
15 0., 2.4, 8.7, 11.4, 15., 18., 20.7, 23.7, 26.7, 29.7, 33., 35.4, 39.,
16 41.7, 44.7, 48., 56.4, 59.7, 62.4, 69., 71.4, 75., 83.4, 87., 89.7,
17 96., 108., 116.7, 119.4, 122.7, 125.4, 131.4, 135., 137.7, 140.4,
18 143.4, 146.7, 149.4, 153., 156., 159., 162., 164.4, 167.7, 170.4,
19 173.7, 176.7, 188.4, 191.7, 195., 198., 201., 203.7, 207., 210.,
20 212.7, 215.7, 218.7, 221.4, 224.7, 227.7, 230.7, 234., 236.7, 246.,
21 248.4, 251.7, 254.7, 257.4, 260.4, 264., 266.7, 269.7, 275.4, 278.4,
22 284.4, 288., 291., 293.4, 296.7])
23
24 #########################################################################
25 # Associated trial types: these are numbered between 0 and 9, hence
26 # correspond to 10 different conditions.
27 trial_idx = np.array(
28 [7, 7, 0, 2, 9, 4, 9, 3, 5, 9, 1, 6, 8, 8, 6, 6, 8, 0, 3, 4, 5, 8, 6,
29 2, 9, 1, 6, 5, 9, 1, 7, 8, 6, 6, 1, 2, 9, 0, 7, 1, 8, 2, 7, 8, 3, 6,
30 0, 0, 6, 8, 7, 7, 1, 1, 1, 5, 5, 0, 7, 0, 4, 2, 7, 9, 8, 0, 6, 3, 3,
31 7, 1, 0, 0, 4, 1, 9, 8, 4, 9, 9])
32
33 #########################################################################
34 # We may want to map these indices to explicit condition names.
35 # For that, we define a list of 10 strings.
36 condition_ids = ['horizontal checkerboard',
37 'vertical checkerboard',
38 'right button press, auditory instructions',
39 'left button press, auditory instructions',
40 'right button press, visual instructions',
41 'left button press, visual instructions',
42 'mental computation, auditory instructions',
43 'mental computation, visual instructions',
44 'visual sentence',
45 'auditory sentence']
46
47 trial_type = np.array([condition_ids[i] for i in trial_idx])
48
49 #########################################################################
50 # We also define a duration (required by BIDS conventions).
51 duration = np.ones_like(onset)
52
53
54 #########################################################################
55 # Form an event dataframe from these information.
56 import pandas as pd
57 events = pd.DataFrame({'trial_type': trial_type,
58 'onset': onset,
59 'duration': duration})
60
61 #########################################################################
62 # Export them to a tsv file.
63 tsvfile = 'localizer_events.tsv'
64 events.to_csv(tsvfile, sep='\t', index=False)
65 print("Created the events file in %s " % tsvfile)
66
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/examples/04_glm_first_level/write_events_file.py b/examples/04_glm_first_level/write_events_file.py
--- a/examples/04_glm_first_level/write_events_file.py
+++ b/examples/04_glm_first_level/write_events_file.py
@@ -63,3 +63,10 @@
tsvfile = 'localizer_events.tsv'
events.to_csv(tsvfile, sep='\t', index=False)
print("Created the events file in %s " % tsvfile)
+
+#########################################################################
+# Optionally, the events can be visualized using the plot_event function.
+from matplotlib import pyplot as plt
+from nilearn.plotting import plot_event
+plot_event(events, figsize=(15, 5))
+plt.show()
|
{"golden_diff": "diff --git a/examples/04_glm_first_level/write_events_file.py b/examples/04_glm_first_level/write_events_file.py\n--- a/examples/04_glm_first_level/write_events_file.py\n+++ b/examples/04_glm_first_level/write_events_file.py\n@@ -63,3 +63,10 @@\n tsvfile = 'localizer_events.tsv'\n events.to_csv(tsvfile, sep='\\t', index=False)\n print(\"Created the events file in %s \" % tsvfile)\n+\n+#########################################################################\n+# Optionally, the events can be visualized using the plot_event function.\n+from matplotlib import pyplot as plt\n+from nilearn.plotting import plot_event\n+plot_event(events, figsize=(15, 5))\n+plt.show()\n", "issue": "Use plot_event in a example\nThe function `plot_event` has currently no example linked to its [doc](https://nilearn.github.io/modules/generated/nilearn.plotting.plot_event.html#nilearn.plotting.plot_event). \r\nIt wouldn't be too costly to use it in one example somewhere.\n", "before_files": [{"content": "\"\"\"Example of a events.tsv file generation: the neurospin/localizer events.\n=============================================================================\n\nThe protocol described is the so-called \"archi standard\" localizer\nevent sequence. See Pinel et al., BMC neuroscience 2007 for reference.\n\"\"\"\n\nprint(__doc__)\n\n#########################################################################\n# Define the onset times in seconds. Those are typically extracted\n# from the stimulation software used.\nimport numpy as np\nonset = np.array([\n 0., 2.4, 8.7, 11.4, 15., 18., 20.7, 23.7, 26.7, 29.7, 33., 35.4, 39.,\n 41.7, 44.7, 48., 56.4, 59.7, 62.4, 69., 71.4, 75., 83.4, 87., 89.7,\n 96., 108., 116.7, 119.4, 122.7, 125.4, 131.4, 135., 137.7, 140.4,\n 143.4, 146.7, 149.4, 153., 156., 159., 162., 164.4, 167.7, 170.4,\n 173.7, 176.7, 188.4, 191.7, 195., 198., 201., 203.7, 207., 210.,\n 212.7, 215.7, 218.7, 221.4, 224.7, 227.7, 230.7, 234., 236.7, 246.,\n 248.4, 251.7, 254.7, 257.4, 260.4, 264., 266.7, 269.7, 275.4, 278.4,\n 284.4, 288., 291., 293.4, 296.7])\n\n#########################################################################\n# Associated trial types: these are numbered between 0 and 9, hence\n# correspond to 10 different conditions.\ntrial_idx = np.array(\n [7, 7, 0, 2, 9, 4, 9, 3, 5, 9, 1, 6, 8, 8, 6, 6, 8, 0, 3, 4, 5, 8, 6,\n 2, 9, 1, 6, 5, 9, 1, 7, 8, 6, 6, 1, 2, 9, 0, 7, 1, 8, 2, 7, 8, 3, 6,\n 0, 0, 6, 8, 7, 7, 1, 1, 1, 5, 5, 0, 7, 0, 4, 2, 7, 9, 8, 0, 6, 3, 3,\n 7, 1, 0, 0, 4, 1, 9, 8, 4, 9, 9])\n\n#########################################################################\n# We may want to map these indices to explicit condition names.\n# For that, we define a list of 10 strings.\ncondition_ids = ['horizontal checkerboard',\n 'vertical checkerboard',\n 'right button press, auditory instructions',\n 'left button press, auditory instructions',\n 'right button press, visual instructions',\n 'left button press, visual instructions',\n 'mental computation, auditory instructions',\n 'mental computation, visual instructions',\n 'visual sentence',\n 'auditory sentence']\n\ntrial_type = np.array([condition_ids[i] for i in trial_idx])\n\n#########################################################################\n# We also define a duration (required by BIDS conventions).\nduration = np.ones_like(onset)\n\n\n#########################################################################\n# Form an event dataframe from these information.\nimport pandas as pd\nevents = pd.DataFrame({'trial_type': 
trial_type,\n 'onset': onset,\n 'duration': duration})\n\n#########################################################################\n# Export them to a tsv file.\ntsvfile = 'localizer_events.tsv'\nevents.to_csv(tsvfile, sep='\\t', index=False)\nprint(\"Created the events file in %s \" % tsvfile)\n", "path": "examples/04_glm_first_level/write_events_file.py"}], "after_files": [{"content": "\"\"\"Example of a events.tsv file generation: the neurospin/localizer events.\n=============================================================================\n\nThe protocol described is the so-called \"archi standard\" localizer\nevent sequence. See Pinel et al., BMC neuroscience 2007 for reference.\n\"\"\"\n\nprint(__doc__)\n\n#########################################################################\n# Define the onset times in seconds. Those are typically extracted\n# from the stimulation software used.\nimport numpy as np\nonset = np.array([\n 0., 2.4, 8.7, 11.4, 15., 18., 20.7, 23.7, 26.7, 29.7, 33., 35.4, 39.,\n 41.7, 44.7, 48., 56.4, 59.7, 62.4, 69., 71.4, 75., 83.4, 87., 89.7,\n 96., 108., 116.7, 119.4, 122.7, 125.4, 131.4, 135., 137.7, 140.4,\n 143.4, 146.7, 149.4, 153., 156., 159., 162., 164.4, 167.7, 170.4,\n 173.7, 176.7, 188.4, 191.7, 195., 198., 201., 203.7, 207., 210.,\n 212.7, 215.7, 218.7, 221.4, 224.7, 227.7, 230.7, 234., 236.7, 246.,\n 248.4, 251.7, 254.7, 257.4, 260.4, 264., 266.7, 269.7, 275.4, 278.4,\n 284.4, 288., 291., 293.4, 296.7])\n\n#########################################################################\n# Associated trial types: these are numbered between 0 and 9, hence\n# correspond to 10 different conditions.\ntrial_idx = np.array(\n [7, 7, 0, 2, 9, 4, 9, 3, 5, 9, 1, 6, 8, 8, 6, 6, 8, 0, 3, 4, 5, 8, 6,\n 2, 9, 1, 6, 5, 9, 1, 7, 8, 6, 6, 1, 2, 9, 0, 7, 1, 8, 2, 7, 8, 3, 6,\n 0, 0, 6, 8, 7, 7, 1, 1, 1, 5, 5, 0, 7, 0, 4, 2, 7, 9, 8, 0, 6, 3, 3,\n 7, 1, 0, 0, 4, 1, 9, 8, 4, 9, 9])\n\n#########################################################################\n# We may want to map these indices to explicit condition names.\n# For that, we define a list of 10 strings.\ncondition_ids = ['horizontal checkerboard',\n 'vertical checkerboard',\n 'right button press, auditory instructions',\n 'left button press, auditory instructions',\n 'right button press, visual instructions',\n 'left button press, visual instructions',\n 'mental computation, auditory instructions',\n 'mental computation, visual instructions',\n 'visual sentence',\n 'auditory sentence']\n\ntrial_type = np.array([condition_ids[i] for i in trial_idx])\n\n#########################################################################\n# We also define a duration (required by BIDS conventions).\nduration = np.ones_like(onset)\n\n\n#########################################################################\n# Form an event dataframe from these information.\nimport pandas as pd\nevents = pd.DataFrame({'trial_type': trial_type,\n 'onset': onset,\n 'duration': duration})\n\n#########################################################################\n# Export them to a tsv file.\ntsvfile = 'localizer_events.tsv'\nevents.to_csv(tsvfile, sep='\\t', index=False)\nprint(\"Created the events file in %s \" % tsvfile)\n\n#########################################################################\n# Optionally, the events can be visualized using the plot_event function.\nfrom matplotlib import pyplot as plt\nfrom nilearn.plotting import plot_event\nplot_event(events, figsize=(15, 5))\nplt.show()\n", "path": 
"examples/04_glm_first_level/write_events_file.py"}]}
| 1,552 | 167 |
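A condensed, self-contained version of the visualization step that the golden diff above appends to the events-file example. The three-row events table here is a stand-in for the 80-trial localizer sequence used in the real example.

```python
import pandas as pd
from matplotlib import pyplot as plt
from nilearn.plotting import plot_event

# Tiny stand-in events table with the BIDS-style columns the example builds.
events = pd.DataFrame({
    "trial_type": ["visual sentence", "auditory sentence", "visual sentence"],
    "onset": [0.0, 2.4, 8.7],
    "duration": [1.0, 1.0, 1.0],
})

# Same call as the patched example: one row of colored spans per condition.
plot_event(events, figsize=(15, 5))
plt.show()
```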
gh_patches_debug_13652
|
rasdani/github-patches
|
git_diff
|
inventree__InvenTree-6287
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[PUI] Global login
Global Login (CUI logs in PUI and vice versa) is not working (anymore)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `InvenTree/users/api.py`
Content:
```
1 """DRF API definition for the 'users' app."""
2
3 import datetime
4 import logging
5
6 from django.contrib.auth.models import Group, User
7 from django.urls import include, path, re_path
8
9 from rest_framework import exceptions, permissions
10 from rest_framework.response import Response
11 from rest_framework.views import APIView
12
13 import InvenTree.helpers
14 from InvenTree.filters import SEARCH_ORDER_FILTER
15 from InvenTree.mixins import (
16 ListAPI,
17 ListCreateAPI,
18 RetrieveAPI,
19 RetrieveUpdateAPI,
20 RetrieveUpdateDestroyAPI,
21 )
22 from InvenTree.serializers import ExendedUserSerializer, UserCreateSerializer
23 from users.models import ApiToken, Owner, RuleSet, check_user_role
24 from users.serializers import GroupSerializer, OwnerSerializer
25
26 logger = logging.getLogger('inventree')
27
28
29 class OwnerList(ListAPI):
30 """List API endpoint for Owner model.
31
32 Cannot create.
33 """
34
35 queryset = Owner.objects.all()
36 serializer_class = OwnerSerializer
37
38 def filter_queryset(self, queryset):
39 """Implement text search for the "owner" model.
40
41 Note that an "owner" can be either a group, or a user,
42 so we cannot do a direct text search.
43
44 A "hack" here is to post-process the queryset and simply
45 remove any values which do not match.
46
47 It is not necessarily "efficient" to do it this way,
48 but until we determine a better way, this is what we have...
49 """
50 search_term = str(self.request.query_params.get('search', '')).lower()
51 is_active = self.request.query_params.get('is_active', None)
52
53 queryset = super().filter_queryset(queryset)
54
55 results = []
56
57 # Get a list of all matching users, depending on the *is_active* flag
58 if is_active is not None:
59 is_active = InvenTree.helpers.str2bool(is_active)
60 matching_user_ids = User.objects.filter(is_active=is_active).values_list(
61 'pk', flat=True
62 )
63
64 for result in queryset.all():
65 name = str(result.name()).lower().strip()
66 search_match = True
67
68 # Extract search term f
69 if search_term:
70 for entry in search_term.strip().split(' '):
71 if entry not in name:
72 search_match = False
73 break
74
75 if not search_match:
76 continue
77
78 if is_active is not None:
79 # Skip any users which do not match the required *is_active* value
80 if (
81 result.owner_type.name == 'user'
82 and result.owner_id not in matching_user_ids
83 ):
84 continue
85
86 # If we get here, there is no reason *not* to include this result
87 results.append(result)
88
89 return results
90
91
92 class OwnerDetail(RetrieveAPI):
93 """Detail API endpoint for Owner model.
94
95 Cannot edit or delete
96 """
97
98 queryset = Owner.objects.all()
99 serializer_class = OwnerSerializer
100
101
102 class RoleDetails(APIView):
103 """API endpoint which lists the available role permissions for the current user.
104
105 (Requires authentication)
106 """
107
108 permission_classes = [permissions.IsAuthenticated]
109
110 def get(self, request, *args, **kwargs):
111 """Return the list of roles / permissions available to the current user."""
112 user = request.user
113
114 roles = {}
115
116 for ruleset in RuleSet.RULESET_CHOICES:
117 role, _text = ruleset
118
119 permissions = []
120
121 for permission in RuleSet.RULESET_PERMISSIONS:
122 if check_user_role(user, role, permission):
123 permissions.append(permission)
124
125 if len(permissions) > 0:
126 roles[role] = permissions
127 else:
128 roles[role] = None # pragma: no cover
129
130 data = {
131 'user': user.pk,
132 'username': user.username,
133 'roles': roles,
134 'is_staff': user.is_staff,
135 'is_superuser': user.is_superuser,
136 }
137
138 return Response(data)
139
140
141 class UserDetail(RetrieveUpdateDestroyAPI):
142 """Detail endpoint for a single user."""
143
144 queryset = User.objects.all()
145 serializer_class = ExendedUserSerializer
146 permission_classes = [permissions.IsAuthenticated]
147
148
149 class MeUserDetail(RetrieveUpdateAPI, UserDetail):
150 """Detail endpoint for current user."""
151
152 def get_object(self):
153 """Always return the current user object."""
154 return self.request.user
155
156
157 class UserList(ListCreateAPI):
158 """List endpoint for detail on all users."""
159
160 queryset = User.objects.all()
161 serializer_class = UserCreateSerializer
162 permission_classes = [permissions.IsAuthenticated]
163 filter_backends = SEARCH_ORDER_FILTER
164
165 search_fields = ['first_name', 'last_name', 'username']
166
167 ordering_fields = [
168 'email',
169 'username',
170 'first_name',
171 'last_name',
172 'is_staff',
173 'is_superuser',
174 'is_active',
175 ]
176
177 filterset_fields = ['is_staff', 'is_active', 'is_superuser']
178
179
180 class GroupDetail(RetrieveUpdateDestroyAPI):
181 """Detail endpoint for a particular auth group."""
182
183 queryset = Group.objects.all()
184 serializer_class = GroupSerializer
185 permission_classes = [permissions.IsAuthenticated]
186
187
188 class GroupList(ListCreateAPI):
189 """List endpoint for all auth groups."""
190
191 queryset = Group.objects.all()
192 serializer_class = GroupSerializer
193 permission_classes = [permissions.IsAuthenticated]
194
195 filter_backends = SEARCH_ORDER_FILTER
196
197 search_fields = ['name']
198
199 ordering_fields = ['name']
200
201
202 class GetAuthToken(APIView):
203 """Return authentication token for an authenticated user."""
204
205 permission_classes = [permissions.IsAuthenticated]
206
207 def get(self, request, *args, **kwargs):
208 """Return an API token if the user is authenticated.
209
210 - If the user already has a matching token, delete it and create a new one
211 - Existing tokens are *never* exposed again via the API
212 - Once the token is provided, it can be used for auth until it expires
213 """
214 if request.user.is_authenticated:
215 user = request.user
216 name = request.query_params.get('name', '')
217
218 name = ApiToken.sanitize_name(name)
219
220 today = datetime.date.today()
221
222 # Find existing token, which has not expired
223 token = ApiToken.objects.filter(
224 user=user, name=name, revoked=False, expiry__gte=today
225 ).first()
226
227 if not token:
228 # User is authenticated, and requesting a token against the provided name.
229 token = ApiToken.objects.create(user=request.user, name=name)
230
231 # Add some metadata about the request
232 token.set_metadata('user_agent', request.META.get('HTTP_USER_AGENT', ''))
233 token.set_metadata('remote_addr', request.META.get('REMOTE_ADDR', ''))
234 token.set_metadata('remote_host', request.META.get('REMOTE_HOST', ''))
235 token.set_metadata('remote_user', request.META.get('REMOTE_USER', ''))
236 token.set_metadata('server_name', request.META.get('SERVER_NAME', ''))
237 token.set_metadata('server_port', request.META.get('SERVER_PORT', ''))
238
239 data = {'token': token.key, 'name': token.name, 'expiry': token.expiry}
240
241 logger.info(
242 "Created new API token for user '%s' (name='%s')", user.username, name
243 )
244
245 return Response(data)
246
247 else:
248 raise exceptions.NotAuthenticated()
249
250
251 user_urls = [
252 path('roles/', RoleDetails.as_view(), name='api-user-roles'),
253 path('token/', GetAuthToken.as_view(), name='api-token'),
254 path('me/', MeUserDetail.as_view(), name='api-user-me'),
255 path(
256 'owner/',
257 include([
258 path('<int:pk>/', OwnerDetail.as_view(), name='api-owner-detail'),
259 path('', OwnerList.as_view(), name='api-owner-list'),
260 ]),
261 ),
262 path(
263 'group/',
264 include([
265 re_path(
266 r'^(?P<pk>[0-9]+)/?$', GroupDetail.as_view(), name='api-group-detail'
267 ),
268 path('', GroupList.as_view(), name='api-group-list'),
269 ]),
270 ),
271 re_path(r'^(?P<pk>[0-9]+)/?$', UserDetail.as_view(), name='api-user-detail'),
272 path('', UserList.as_view(), name='api-user-list'),
273 ]
274
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/InvenTree/users/api.py b/InvenTree/users/api.py
--- a/InvenTree/users/api.py
+++ b/InvenTree/users/api.py
@@ -3,6 +3,7 @@
import datetime
import logging
+from django.contrib.auth import get_user, login
from django.contrib.auth.models import Group, User
from django.urls import include, path, re_path
@@ -242,6 +243,10 @@
"Created new API token for user '%s' (name='%s')", user.username, name
)
+ # Ensure that the users session is logged in (PUI -> CUI login)
+ if not get_user(request).is_authenticated:
+ login(request, user)
+
return Response(data)
else:
|
{"golden_diff": "diff --git a/InvenTree/users/api.py b/InvenTree/users/api.py\n--- a/InvenTree/users/api.py\n+++ b/InvenTree/users/api.py\n@@ -3,6 +3,7 @@\n import datetime\n import logging\n \n+from django.contrib.auth import get_user, login\n from django.contrib.auth.models import Group, User\n from django.urls import include, path, re_path\n \n@@ -242,6 +243,10 @@\n \"Created new API token for user '%s' (name='%s')\", user.username, name\n )\n \n+ # Ensure that the users session is logged in (PUI -> CUI login)\n+ if not get_user(request).is_authenticated:\n+ login(request, user)\n+\n return Response(data)\n \n else:\n", "issue": "[PUI] Global login\nGlobal Login (CUI logs in PUI and vice versa) is not working (anymore)\n", "before_files": [{"content": "\"\"\"DRF API definition for the 'users' app.\"\"\"\n\nimport datetime\nimport logging\n\nfrom django.contrib.auth.models import Group, User\nfrom django.urls import include, path, re_path\n\nfrom rest_framework import exceptions, permissions\nfrom rest_framework.response import Response\nfrom rest_framework.views import APIView\n\nimport InvenTree.helpers\nfrom InvenTree.filters import SEARCH_ORDER_FILTER\nfrom InvenTree.mixins import (\n ListAPI,\n ListCreateAPI,\n RetrieveAPI,\n RetrieveUpdateAPI,\n RetrieveUpdateDestroyAPI,\n)\nfrom InvenTree.serializers import ExendedUserSerializer, UserCreateSerializer\nfrom users.models import ApiToken, Owner, RuleSet, check_user_role\nfrom users.serializers import GroupSerializer, OwnerSerializer\n\nlogger = logging.getLogger('inventree')\n\n\nclass OwnerList(ListAPI):\n \"\"\"List API endpoint for Owner model.\n\n Cannot create.\n \"\"\"\n\n queryset = Owner.objects.all()\n serializer_class = OwnerSerializer\n\n def filter_queryset(self, queryset):\n \"\"\"Implement text search for the \"owner\" model.\n\n Note that an \"owner\" can be either a group, or a user,\n so we cannot do a direct text search.\n\n A \"hack\" here is to post-process the queryset and simply\n remove any values which do not match.\n\n It is not necessarily \"efficient\" to do it this way,\n but until we determine a better way, this is what we have...\n \"\"\"\n search_term = str(self.request.query_params.get('search', '')).lower()\n is_active = self.request.query_params.get('is_active', None)\n\n queryset = super().filter_queryset(queryset)\n\n results = []\n\n # Get a list of all matching users, depending on the *is_active* flag\n if is_active is not None:\n is_active = InvenTree.helpers.str2bool(is_active)\n matching_user_ids = User.objects.filter(is_active=is_active).values_list(\n 'pk', flat=True\n )\n\n for result in queryset.all():\n name = str(result.name()).lower().strip()\n search_match = True\n\n # Extract search term f\n if search_term:\n for entry in search_term.strip().split(' '):\n if entry not in name:\n search_match = False\n break\n\n if not search_match:\n continue\n\n if is_active is not None:\n # Skip any users which do not match the required *is_active* value\n if (\n result.owner_type.name == 'user'\n and result.owner_id not in matching_user_ids\n ):\n continue\n\n # If we get here, there is no reason *not* to include this result\n results.append(result)\n\n return results\n\n\nclass OwnerDetail(RetrieveAPI):\n \"\"\"Detail API endpoint for Owner model.\n\n Cannot edit or delete\n \"\"\"\n\n queryset = Owner.objects.all()\n serializer_class = OwnerSerializer\n\n\nclass RoleDetails(APIView):\n \"\"\"API endpoint which lists the available role permissions for the current user.\n\n (Requires 
authentication)\n \"\"\"\n\n permission_classes = [permissions.IsAuthenticated]\n\n def get(self, request, *args, **kwargs):\n \"\"\"Return the list of roles / permissions available to the current user.\"\"\"\n user = request.user\n\n roles = {}\n\n for ruleset in RuleSet.RULESET_CHOICES:\n role, _text = ruleset\n\n permissions = []\n\n for permission in RuleSet.RULESET_PERMISSIONS:\n if check_user_role(user, role, permission):\n permissions.append(permission)\n\n if len(permissions) > 0:\n roles[role] = permissions\n else:\n roles[role] = None # pragma: no cover\n\n data = {\n 'user': user.pk,\n 'username': user.username,\n 'roles': roles,\n 'is_staff': user.is_staff,\n 'is_superuser': user.is_superuser,\n }\n\n return Response(data)\n\n\nclass UserDetail(RetrieveUpdateDestroyAPI):\n \"\"\"Detail endpoint for a single user.\"\"\"\n\n queryset = User.objects.all()\n serializer_class = ExendedUserSerializer\n permission_classes = [permissions.IsAuthenticated]\n\n\nclass MeUserDetail(RetrieveUpdateAPI, UserDetail):\n \"\"\"Detail endpoint for current user.\"\"\"\n\n def get_object(self):\n \"\"\"Always return the current user object.\"\"\"\n return self.request.user\n\n\nclass UserList(ListCreateAPI):\n \"\"\"List endpoint for detail on all users.\"\"\"\n\n queryset = User.objects.all()\n serializer_class = UserCreateSerializer\n permission_classes = [permissions.IsAuthenticated]\n filter_backends = SEARCH_ORDER_FILTER\n\n search_fields = ['first_name', 'last_name', 'username']\n\n ordering_fields = [\n 'email',\n 'username',\n 'first_name',\n 'last_name',\n 'is_staff',\n 'is_superuser',\n 'is_active',\n ]\n\n filterset_fields = ['is_staff', 'is_active', 'is_superuser']\n\n\nclass GroupDetail(RetrieveUpdateDestroyAPI):\n \"\"\"Detail endpoint for a particular auth group.\"\"\"\n\n queryset = Group.objects.all()\n serializer_class = GroupSerializer\n permission_classes = [permissions.IsAuthenticated]\n\n\nclass GroupList(ListCreateAPI):\n \"\"\"List endpoint for all auth groups.\"\"\"\n\n queryset = Group.objects.all()\n serializer_class = GroupSerializer\n permission_classes = [permissions.IsAuthenticated]\n\n filter_backends = SEARCH_ORDER_FILTER\n\n search_fields = ['name']\n\n ordering_fields = ['name']\n\n\nclass GetAuthToken(APIView):\n \"\"\"Return authentication token for an authenticated user.\"\"\"\n\n permission_classes = [permissions.IsAuthenticated]\n\n def get(self, request, *args, **kwargs):\n \"\"\"Return an API token if the user is authenticated.\n\n - If the user already has a matching token, delete it and create a new one\n - Existing tokens are *never* exposed again via the API\n - Once the token is provided, it can be used for auth until it expires\n \"\"\"\n if request.user.is_authenticated:\n user = request.user\n name = request.query_params.get('name', '')\n\n name = ApiToken.sanitize_name(name)\n\n today = datetime.date.today()\n\n # Find existing token, which has not expired\n token = ApiToken.objects.filter(\n user=user, name=name, revoked=False, expiry__gte=today\n ).first()\n\n if not token:\n # User is authenticated, and requesting a token against the provided name.\n token = ApiToken.objects.create(user=request.user, name=name)\n\n # Add some metadata about the request\n token.set_metadata('user_agent', request.META.get('HTTP_USER_AGENT', ''))\n token.set_metadata('remote_addr', request.META.get('REMOTE_ADDR', ''))\n token.set_metadata('remote_host', request.META.get('REMOTE_HOST', ''))\n token.set_metadata('remote_user', request.META.get('REMOTE_USER', ''))\n 
token.set_metadata('server_name', request.META.get('SERVER_NAME', ''))\n token.set_metadata('server_port', request.META.get('SERVER_PORT', ''))\n\n data = {'token': token.key, 'name': token.name, 'expiry': token.expiry}\n\n logger.info(\n \"Created new API token for user '%s' (name='%s')\", user.username, name\n )\n\n return Response(data)\n\n else:\n raise exceptions.NotAuthenticated()\n\n\nuser_urls = [\n path('roles/', RoleDetails.as_view(), name='api-user-roles'),\n path('token/', GetAuthToken.as_view(), name='api-token'),\n path('me/', MeUserDetail.as_view(), name='api-user-me'),\n path(\n 'owner/',\n include([\n path('<int:pk>/', OwnerDetail.as_view(), name='api-owner-detail'),\n path('', OwnerList.as_view(), name='api-owner-list'),\n ]),\n ),\n path(\n 'group/',\n include([\n re_path(\n r'^(?P<pk>[0-9]+)/?$', GroupDetail.as_view(), name='api-group-detail'\n ),\n path('', GroupList.as_view(), name='api-group-list'),\n ]),\n ),\n re_path(r'^(?P<pk>[0-9]+)/?$', UserDetail.as_view(), name='api-user-detail'),\n path('', UserList.as_view(), name='api-user-list'),\n]\n", "path": "InvenTree/users/api.py"}], "after_files": [{"content": "\"\"\"DRF API definition for the 'users' app.\"\"\"\n\nimport datetime\nimport logging\n\nfrom django.contrib.auth import get_user, login\nfrom django.contrib.auth.models import Group, User\nfrom django.urls import include, path, re_path\n\nfrom rest_framework import exceptions, permissions\nfrom rest_framework.response import Response\nfrom rest_framework.views import APIView\n\nimport InvenTree.helpers\nfrom InvenTree.filters import SEARCH_ORDER_FILTER\nfrom InvenTree.mixins import (\n ListAPI,\n ListCreateAPI,\n RetrieveAPI,\n RetrieveUpdateAPI,\n RetrieveUpdateDestroyAPI,\n)\nfrom InvenTree.serializers import ExendedUserSerializer, UserCreateSerializer\nfrom users.models import ApiToken, Owner, RuleSet, check_user_role\nfrom users.serializers import GroupSerializer, OwnerSerializer\n\nlogger = logging.getLogger('inventree')\n\n\nclass OwnerList(ListAPI):\n \"\"\"List API endpoint for Owner model.\n\n Cannot create.\n \"\"\"\n\n queryset = Owner.objects.all()\n serializer_class = OwnerSerializer\n\n def filter_queryset(self, queryset):\n \"\"\"Implement text search for the \"owner\" model.\n\n Note that an \"owner\" can be either a group, or a user,\n so we cannot do a direct text search.\n\n A \"hack\" here is to post-process the queryset and simply\n remove any values which do not match.\n\n It is not necessarily \"efficient\" to do it this way,\n but until we determine a better way, this is what we have...\n \"\"\"\n search_term = str(self.request.query_params.get('search', '')).lower()\n is_active = self.request.query_params.get('is_active', None)\n\n queryset = super().filter_queryset(queryset)\n\n results = []\n\n # Get a list of all matching users, depending on the *is_active* flag\n if is_active is not None:\n is_active = InvenTree.helpers.str2bool(is_active)\n matching_user_ids = User.objects.filter(is_active=is_active).values_list(\n 'pk', flat=True\n )\n\n for result in queryset.all():\n name = str(result.name()).lower().strip()\n search_match = True\n\n # Extract search term f\n if search_term:\n for entry in search_term.strip().split(' '):\n if entry not in name:\n search_match = False\n break\n\n if not search_match:\n continue\n\n if is_active is not None:\n # Skip any users which do not match the required *is_active* value\n if (\n result.owner_type.name == 'user'\n and result.owner_id not in matching_user_ids\n ):\n continue\n\n # If we 
get here, there is no reason *not* to include this result\n results.append(result)\n\n return results\n\n\nclass OwnerDetail(RetrieveAPI):\n \"\"\"Detail API endpoint for Owner model.\n\n Cannot edit or delete\n \"\"\"\n\n queryset = Owner.objects.all()\n serializer_class = OwnerSerializer\n\n\nclass RoleDetails(APIView):\n \"\"\"API endpoint which lists the available role permissions for the current user.\n\n (Requires authentication)\n \"\"\"\n\n permission_classes = [permissions.IsAuthenticated]\n\n def get(self, request, *args, **kwargs):\n \"\"\"Return the list of roles / permissions available to the current user.\"\"\"\n user = request.user\n\n roles = {}\n\n for ruleset in RuleSet.RULESET_CHOICES:\n role, _text = ruleset\n\n permissions = []\n\n for permission in RuleSet.RULESET_PERMISSIONS:\n if check_user_role(user, role, permission):\n permissions.append(permission)\n\n if len(permissions) > 0:\n roles[role] = permissions\n else:\n roles[role] = None # pragma: no cover\n\n data = {\n 'user': user.pk,\n 'username': user.username,\n 'roles': roles,\n 'is_staff': user.is_staff,\n 'is_superuser': user.is_superuser,\n }\n\n return Response(data)\n\n\nclass UserDetail(RetrieveUpdateDestroyAPI):\n \"\"\"Detail endpoint for a single user.\"\"\"\n\n queryset = User.objects.all()\n serializer_class = ExendedUserSerializer\n permission_classes = [permissions.IsAuthenticated]\n\n\nclass MeUserDetail(RetrieveUpdateAPI, UserDetail):\n \"\"\"Detail endpoint for current user.\"\"\"\n\n def get_object(self):\n \"\"\"Always return the current user object.\"\"\"\n return self.request.user\n\n\nclass UserList(ListCreateAPI):\n \"\"\"List endpoint for detail on all users.\"\"\"\n\n queryset = User.objects.all()\n serializer_class = UserCreateSerializer\n permission_classes = [permissions.IsAuthenticated]\n filter_backends = SEARCH_ORDER_FILTER\n\n search_fields = ['first_name', 'last_name', 'username']\n\n ordering_fields = [\n 'email',\n 'username',\n 'first_name',\n 'last_name',\n 'is_staff',\n 'is_superuser',\n 'is_active',\n ]\n\n filterset_fields = ['is_staff', 'is_active', 'is_superuser']\n\n\nclass GroupDetail(RetrieveUpdateDestroyAPI):\n \"\"\"Detail endpoint for a particular auth group.\"\"\"\n\n queryset = Group.objects.all()\n serializer_class = GroupSerializer\n permission_classes = [permissions.IsAuthenticated]\n\n\nclass GroupList(ListCreateAPI):\n \"\"\"List endpoint for all auth groups.\"\"\"\n\n queryset = Group.objects.all()\n serializer_class = GroupSerializer\n permission_classes = [permissions.IsAuthenticated]\n\n filter_backends = SEARCH_ORDER_FILTER\n\n search_fields = ['name']\n\n ordering_fields = ['name']\n\n\nclass GetAuthToken(APIView):\n \"\"\"Return authentication token for an authenticated user.\"\"\"\n\n permission_classes = [permissions.IsAuthenticated]\n\n def get(self, request, *args, **kwargs):\n \"\"\"Return an API token if the user is authenticated.\n\n - If the user already has a matching token, delete it and create a new one\n - Existing tokens are *never* exposed again via the API\n - Once the token is provided, it can be used for auth until it expires\n \"\"\"\n if request.user.is_authenticated:\n user = request.user\n name = request.query_params.get('name', '')\n\n name = ApiToken.sanitize_name(name)\n\n today = datetime.date.today()\n\n # Find existing token, which has not expired\n token = ApiToken.objects.filter(\n user=user, name=name, revoked=False, expiry__gte=today\n ).first()\n\n if not token:\n # User is authenticated, and requesting a token against 
the provided name.\n token = ApiToken.objects.create(user=request.user, name=name)\n\n # Add some metadata about the request\n token.set_metadata('user_agent', request.META.get('HTTP_USER_AGENT', ''))\n token.set_metadata('remote_addr', request.META.get('REMOTE_ADDR', ''))\n token.set_metadata('remote_host', request.META.get('REMOTE_HOST', ''))\n token.set_metadata('remote_user', request.META.get('REMOTE_USER', ''))\n token.set_metadata('server_name', request.META.get('SERVER_NAME', ''))\n token.set_metadata('server_port', request.META.get('SERVER_PORT', ''))\n\n data = {'token': token.key, 'name': token.name, 'expiry': token.expiry}\n\n logger.info(\n \"Created new API token for user '%s' (name='%s')\", user.username, name\n )\n\n # Ensure that the users session is logged in (PUI -> CUI login)\n if not get_user(request).is_authenticated:\n login(request, user)\n\n return Response(data)\n\n else:\n raise exceptions.NotAuthenticated()\n\n\nuser_urls = [\n path('roles/', RoleDetails.as_view(), name='api-user-roles'),\n path('token/', GetAuthToken.as_view(), name='api-token'),\n path('me/', MeUserDetail.as_view(), name='api-user-me'),\n path(\n 'owner/',\n include([\n path('<int:pk>/', OwnerDetail.as_view(), name='api-owner-detail'),\n path('', OwnerList.as_view(), name='api-owner-list'),\n ]),\n ),\n path(\n 'group/',\n include([\n re_path(\n r'^(?P<pk>[0-9]+)/?$', GroupDetail.as_view(), name='api-group-detail'\n ),\n path('', GroupList.as_view(), name='api-group-list'),\n ]),\n ),\n re_path(r'^(?P<pk>[0-9]+)/?$', UserDetail.as_view(), name='api-user-detail'),\n path('', UserList.as_view(), name='api-user-list'),\n]\n", "path": "InvenTree/users/api.py"}]}
| 2,806 | 175 |
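The fix above boils down to one pattern: when a token is handed out to a request that is only token- or basic-authenticated, also log the user into the Django session so the classic UI shares the login. Below is a minimal sketch of that pattern outside InvenTree, using DRF's stock `Token` model as an illustrative stand-in for `ApiToken`; it is not a drop-in replacement for the view in the record.

```python
from django.contrib.auth import get_user, login
from rest_framework import permissions
from rest_framework.authtoken.models import Token  # illustrative stand-in for ApiToken
from rest_framework.response import Response
from rest_framework.views import APIView


class IssueToken(APIView):
    """Hand out an API token and make sure a session login follows along."""

    permission_classes = [permissions.IsAuthenticated]

    def get(self, request, *args, **kwargs):
        token, _created = Token.objects.get_or_create(user=request.user)

        # request.user is set by DRF's token/basic auth, but that does not create a
        # session. get_user() inspects the session itself; if it is still anonymous,
        # call login() so the browser also gets a logged-in session (PUI -> CUI).
        if not get_user(request).is_authenticated:
            login(request, request.user)

        return Response({"token": token.key})
```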
gh_patches_debug_8407
|
rasdani/github-patches
|
git_diff
|
bokeh__bokeh-6526
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bokeh 0.12.6 incompatible with Python 2.7.9?
Hi there! I have posted this issue with the [dask.distributed project](https://github.com/dask/distributed/issues/1193#issuecomment-309802212) in which context it appeared, and I was asked to file the issue here, since it seems to be a Bokeh problem.
I have a virtual environment with the following contents:
```
> pip freeze
backports-abc==0.5
bkcharts==0.2
bokeh==0.12.6 <--------------
boto3==1.4.4
botocore==1.5.71
certifi==2017.4.17
chardet==3.0.4
click==6.7
cloudpickle==0.3.1
dask==0.15.0 <--------------
distributed==1.17.1 <--------------
docutils==0.13.1
futures==3.1.1
graphviz==0.7.1
HeapDict==1.0.0
idna==2.5
Jinja2==2.9.6
jmespath==0.9.3
locket==0.2.0
MarkupSafe==1.0
msgpack-python==0.4.8
numpy==1.13.0
pandas==0.20.2
partd==0.3.8
psutil==5.2.2
python-dateutil==2.6.0
pytz==2017.2
PyYAML==3.12
requests==2.18.1
s3fs==0.1.1
s3transfer==0.1.10
singledispatch==3.4.0.3
six==1.10.0
sortedcontainers==1.5.7
tblib==1.3.2
toolz==0.8.2
tornado==4.5.1
urllib3==1.21.1
zict==0.1.2
```
When I try to start the dask scheduler, I get the following output:
```
> dask-scheduler
distributed.scheduler - INFO - -----------------------------------------------
distributed.scheduler - INFO - Could not launch service: ('bokeh', 8787)
Traceback (most recent call last):
File "/home/vagrant/dask_venv/local/lib/python2.7/site-packages/distributed/scheduler.py", line 404, in start_services
service = v(self, io_loop=self.loop)
File "/home/vagrant/dask_venv/local/lib/python2.7/site-packages/distributed/bokeh/scheduler.py", line 995, in __init__
scheduler)))
File "/home/vagrant/dask_venv/local/lib/python2.7/site-packages/bokeh/application/handlers/function.py", line 11, in __init__
_check_callback(func, ('doc',))
File "/home/vagrant/dask_venv/local/lib/python2.7/site-packages/bokeh/util/callback_manager.py", line 12, in _check_callback
sig = signature(callback)
File "/home/vagrant/dask_venv/local/lib/python2.7/site-packages/bokeh/util/future.py", line 85, in signature
for name in func.keywords.keys():
AttributeError: 'NoneType' object has no attribute 'keys'
distributed.scheduler - INFO - Scheduler at: tcp://10.0.2.15:8786
distributed.scheduler - INFO - http at: 0.0.0.0:9786
distributed.scheduler - INFO - Local Directory: /tmp/scheduler-zmXtOf
distributed.scheduler - INFO - -----------------------------------------------
^Cdistributed.scheduler - INFO - End scheduler at 'tcp://:8786'
```
I can fix this problem by downgrading Bokeh to 0.12.5:
```
> pip install -U bokeh==0.12.5
...
Installing collected packages: bokeh
Found existing installation: bokeh 0.12.6
Uninstalling bokeh-0.12.6:
Successfully uninstalled bokeh-0.12.6
Running setup.py install for bokeh ... done
Successfully installed bokeh-0.12.5
> dask-scheduler
distributed.scheduler - INFO - -----------------------------------------------
distributed.scheduler - INFO - Scheduler at: tcp://10.0.2.15:8786
distributed.scheduler - INFO - bokeh at: 0.0.0.0:8787
distributed.scheduler - INFO - http at: 0.0.0.0:9786
distributed.scheduler - INFO - Local Directory: /tmp/scheduler-U0qy1k
distributed.scheduler - INFO - -----------------------------------------------
^Cdistributed.scheduler - INFO - End scheduler at 'tcp://:8786'
```
I was able to reproduce the issue on my Debian 8 machine with Python 2.7.9. The error does _not_ occur on my Mac with Python 2.7.11. @pitrou could not reproduce the problem with Python 2.7.12.

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `bokeh/util/future.py`
Content:
```
1 ''' Utilities for Py2/Py3 interop.
2
3 '''
4
5 import sys
6
7 def with_metaclass(meta, *bases):
8 """ Add metaclasses in both Python 2 and Python 3.
9
10 Function from jinja2/_compat.py. License: BSD.
11
12 Use it like this::
13
14 class BaseForm(object):
15 pass
16
17 class FormType(type):
18 pass
19
20 class Form(with_metaclass(FormType, BaseForm)):
21 pass
22
23 This requires a bit of explanation: the basic idea is to make a
24 dummy metaclass for one level of class instantiation that replaces
25 itself with the actual metaclass. Because of internal type checks
26 we also need to make sure that we downgrade the custom metaclass
27 for one level to something closer to type (that's why __call__ and
28 __init__ comes back from type etc.).
29
30 This has the advantage over six.with_metaclass of not introducing
31 dummy classes into the final MRO.
32 """
33 class metaclass(meta):
34 __call__ = type.__call__
35 __init__ = type.__init__
36 def __new__(cls, name, this_bases, d):
37 if this_bases is None:
38 return type.__new__(cls, name, (), d)
39 return meta(name, bases, d)
40 return metaclass('temporary_class', None, {})
41
42
43 # There is a problem with using @wraps decorator in combination with functools.partial.
44 # This issue is not present in Python 3.
45 # This redefinition will be triggered only if issue affects user,
46 # otherwise regular definition of @wraps will be used.
47 #
48 # this code snippet was originally posted in following stack overflow discussion:
49 # http://stackoverflow.com/a/28752007
50
51 from functools import wraps, partial, WRAPPER_ASSIGNMENTS
52
53 try:
54 wraps(partial(wraps))(wraps)
55 except AttributeError:
56 @wraps(wraps)
57 def wraps(obj, attr_names=WRAPPER_ASSIGNMENTS, wraps=wraps):
58 return wraps(obj, assigned=(name for name in attr_names if hasattr(obj, name)))
59
60 del partial, WRAPPER_ASSIGNMENTS
61
62
63 # inspect.getargspec and inspect.formatargspec were deprecated in Python 3.5
64 # in favor of the newer inspect.signature introspection
65
66 if sys.version_info[:2] < (3, 4):
67
68 def signature(func):
69 # The modifications in this function are to make results more in line
70 # with Python 3, i.e. self is not included in bound methods, supplied
71 # parameters are not reported in partial, etc. This simplifies the
72 # downstream code considerably.
73 from inspect import getargspec, isfunction, ismethod
74 from functools import partial
75
76 if isfunction(func) or ismethod(func):
77 sig = getargspec(func)
78 if ismethod(func):
79 sig.args.remove('self')
80 return sig
81
82 elif isinstance(func, partial):
83 sig = getargspec(func.func)
84 if 'self' in sig.args: sig.args.remove('self')
85 for name in func.keywords.keys():
86 sig.args.remove(name)
87 for val in func.args:
88 del sig.args[0]
89 return sig
90
91 else:
92 sig = getargspec(func.__call__)
93 sig.args.remove('self')
94 return sig
95
96 def format_signature(sig):
97 from inspect import formatargspec
98 return formatargspec(*sig)
99
100 def get_param_info(sig):
101 return (sig.args, sig.defaults or [])
102
103 else:
104 from inspect import signature; signature
105
106 def format_signature(sig):
107 return str(sig)
108
109 def get_param_info(sig):
110 defaults = []
111 for param in sig.parameters.values():
112 if param.default is not param.empty:
113 defaults.append(param.default)
114 return list(sig.parameters), defaults
115
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/bokeh/util/future.py b/bokeh/util/future.py
--- a/bokeh/util/future.py
+++ b/bokeh/util/future.py
@@ -82,8 +82,9 @@
elif isinstance(func, partial):
sig = getargspec(func.func)
if 'self' in sig.args: sig.args.remove('self')
- for name in func.keywords.keys():
- sig.args.remove(name)
+ if func.keywords is not None:
+ for name in func.keywords.keys():
+ sig.args.remove(name)
for val in func.args:
del sig.args[0]
return sig
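
The guard matters because, per the report above, `functools.partial` objects built without keyword arguments expose `keywords == None` on the affected interpreter (Python 2.7.9), while the 2.7.11/2.7.12 installs that do not reproduce the crash apparently return an empty dict instead. A minimal sketch of the defensive pattern (the callback name is purely illustrative):

```python
from functools import partial

def render(doc):
    pass

p = partial(render)
# On the interpreter from the report, p.keywords is None; on newer builds it is {}.
keywords = p.keywords if p.keywords is not None else {}
for name in keywords:
    pass  # nothing to strip from the argspec when no kwargs were bound
```
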
|
{"golden_diff": "diff --git a/bokeh/util/future.py b/bokeh/util/future.py\n--- a/bokeh/util/future.py\n+++ b/bokeh/util/future.py\n@@ -82,8 +82,9 @@\n elif isinstance(func, partial):\n sig = getargspec(func.func)\n if 'self' in sig.args: sig.args.remove('self')\n- for name in func.keywords.keys():\n- sig.args.remove(name)\n+ if func.keywords is not None:\n+ for name in func.keywords.keys():\n+ sig.args.remove(name)\n for val in func.args:\n del sig.args[0]\n return sig\n", "issue": "Bokeh 0.12.6 incompatible with Python 2.7.9?\nHi there! I have posted this issue with the [dask.distributed project](https://github.com/dask/distributed/issues/1193#issuecomment-309802212) in which context it appeared, and I was asked to file the issue here, since it seems to be a Bokeh problem.\r\n\r\nI have a virtual environment with the following contents:\r\n```\r\n> pip freeze\r\nbackports-abc==0.5\r\nbkcharts==0.2\r\nbokeh==0.12.6 <--------------\r\nboto3==1.4.4\r\nbotocore==1.5.71\r\ncertifi==2017.4.17\r\nchardet==3.0.4\r\nclick==6.7\r\ncloudpickle==0.3.1\r\ndask==0.15.0 <--------------\r\ndistributed==1.17.1 <--------------\r\ndocutils==0.13.1\r\nfutures==3.1.1\r\ngraphviz==0.7.1\r\nHeapDict==1.0.0\r\nidna==2.5\r\nJinja2==2.9.6\r\njmespath==0.9.3\r\nlocket==0.2.0\r\nMarkupSafe==1.0\r\nmsgpack-python==0.4.8\r\nnumpy==1.13.0\r\npandas==0.20.2\r\npartd==0.3.8\r\npsutil==5.2.2\r\npython-dateutil==2.6.0\r\npytz==2017.2\r\nPyYAML==3.12\r\nrequests==2.18.1\r\ns3fs==0.1.1\r\ns3transfer==0.1.10\r\nsingledispatch==3.4.0.3\r\nsix==1.10.0\r\nsortedcontainers==1.5.7\r\ntblib==1.3.2\r\ntoolz==0.8.2\r\ntornado==4.5.1\r\nurllib3==1.21.1\r\nzict==0.1.2\r\n```\r\nWhen I try to start the dask scheduler, I get the following output:\r\n```\r\n> dask-scheduler\r\ndistributed.scheduler - INFO - -----------------------------------------------\r\ndistributed.scheduler - INFO - Could not launch service: ('bokeh', 8787)\r\nTraceback (most recent call last):\r\n File \"/home/vagrant/dask_venv/local/lib/python2.7/site-packages/distributed/scheduler.py\", line 404, in start_services\r\n service = v(self, io_loop=self.loop)\r\n File \"/home/vagrant/dask_venv/local/lib/python2.7/site-packages/distributed/bokeh/scheduler.py\", line 995, in __init__\r\n scheduler)))\r\n File \"/home/vagrant/dask_venv/local/lib/python2.7/site-packages/bokeh/application/handlers/function.py\", line 11, in __init__\r\n _check_callback(func, ('doc',))\r\n File \"/home/vagrant/dask_venv/local/lib/python2.7/site-packages/bokeh/util/callback_manager.py\", line 12, in _check_callback\r\n sig = signature(callback)\r\n File \"/home/vagrant/dask_venv/local/lib/python2.7/site-packages/bokeh/util/future.py\", line 85, in signature\r\n for name in func.keywords.keys():\r\nAttributeError: 'NoneType' object has no attribute 'keys'\r\ndistributed.scheduler - INFO - Scheduler at: tcp://10.0.2.15:8786\r\ndistributed.scheduler - INFO - http at: 0.0.0.0:9786\r\ndistributed.scheduler - INFO - Local Directory: /tmp/scheduler-zmXtOf\r\ndistributed.scheduler - INFO - -----------------------------------------------\r\n^Cdistributed.scheduler - INFO - End scheduler at 'tcp://:8786'\r\n```\r\nI can fix this problem by downgrading Bokeh to 0.12.5:\r\n```\r\n> pip install -U bokeh==0.12.5\r\n...\r\nInstalling collected packages: bokeh\r\n Found existing installation: bokeh 0.12.6\r\n Uninstalling bokeh-0.12.6:\r\n Successfully uninstalled bokeh-0.12.6\r\n Running setup.py install for bokeh ... 
done\r\nSuccessfully installed bokeh-0.12.5\r\n\r\n> dask-scheduler\r\ndistributed.scheduler - INFO - -----------------------------------------------\r\ndistributed.scheduler - INFO - Scheduler at: tcp://10.0.2.15:8786\r\ndistributed.scheduler - INFO - bokeh at: 0.0.0.0:8787\r\ndistributed.scheduler - INFO - http at: 0.0.0.0:9786\r\ndistributed.scheduler - INFO - Local Directory: /tmp/scheduler-U0qy1k\r\ndistributed.scheduler - INFO - -----------------------------------------------\r\n^Cdistributed.scheduler - INFO - End scheduler at 'tcp://:8786'\r\n```\r\n\r\nI was able to reproduce the issue on my Debian 8 machine with Python 2.7.9. The error does _not_ occur on my Mac with Python 2.7.11. @pitrou could not reproduce the problem with Python 2.7.12.\r\n\nBokeh 0.12.6 incompatible with Python 2.7.9?\nHi there! I have posted this issue with the [dask.distributed project](https://github.com/dask/distributed/issues/1193#issuecomment-309802212) in which context it appeared, and I was asked to file the issue here, since it seems to be a Bokeh problem.\r\n\r\nI have a virtual environment with the following contents:\r\n```\r\n> pip freeze\r\nbackports-abc==0.5\r\nbkcharts==0.2\r\nbokeh==0.12.6 <--------------\r\nboto3==1.4.4\r\nbotocore==1.5.71\r\ncertifi==2017.4.17\r\nchardet==3.0.4\r\nclick==6.7\r\ncloudpickle==0.3.1\r\ndask==0.15.0 <--------------\r\ndistributed==1.17.1 <--------------\r\ndocutils==0.13.1\r\nfutures==3.1.1\r\ngraphviz==0.7.1\r\nHeapDict==1.0.0\r\nidna==2.5\r\nJinja2==2.9.6\r\njmespath==0.9.3\r\nlocket==0.2.0\r\nMarkupSafe==1.0\r\nmsgpack-python==0.4.8\r\nnumpy==1.13.0\r\npandas==0.20.2\r\npartd==0.3.8\r\npsutil==5.2.2\r\npython-dateutil==2.6.0\r\npytz==2017.2\r\nPyYAML==3.12\r\nrequests==2.18.1\r\ns3fs==0.1.1\r\ns3transfer==0.1.10\r\nsingledispatch==3.4.0.3\r\nsix==1.10.0\r\nsortedcontainers==1.5.7\r\ntblib==1.3.2\r\ntoolz==0.8.2\r\ntornado==4.5.1\r\nurllib3==1.21.1\r\nzict==0.1.2\r\n```\r\nWhen I try to start the dask scheduler, I get the following output:\r\n```\r\n> dask-scheduler\r\ndistributed.scheduler - INFO - -----------------------------------------------\r\ndistributed.scheduler - INFO - Could not launch service: ('bokeh', 8787)\r\nTraceback (most recent call last):\r\n File \"/home/vagrant/dask_venv/local/lib/python2.7/site-packages/distributed/scheduler.py\", line 404, in start_services\r\n service = v(self, io_loop=self.loop)\r\n File \"/home/vagrant/dask_venv/local/lib/python2.7/site-packages/distributed/bokeh/scheduler.py\", line 995, in __init__\r\n scheduler)))\r\n File \"/home/vagrant/dask_venv/local/lib/python2.7/site-packages/bokeh/application/handlers/function.py\", line 11, in __init__\r\n _check_callback(func, ('doc',))\r\n File \"/home/vagrant/dask_venv/local/lib/python2.7/site-packages/bokeh/util/callback_manager.py\", line 12, in _check_callback\r\n sig = signature(callback)\r\n File \"/home/vagrant/dask_venv/local/lib/python2.7/site-packages/bokeh/util/future.py\", line 85, in signature\r\n for name in func.keywords.keys():\r\nAttributeError: 'NoneType' object has no attribute 'keys'\r\ndistributed.scheduler - INFO - Scheduler at: tcp://10.0.2.15:8786\r\ndistributed.scheduler - INFO - http at: 0.0.0.0:9786\r\ndistributed.scheduler - INFO - Local Directory: /tmp/scheduler-zmXtOf\r\ndistributed.scheduler - INFO - -----------------------------------------------\r\n^Cdistributed.scheduler - INFO - End scheduler at 'tcp://:8786'\r\n```\r\nI can fix this problem by downgrading Bokeh to 0.12.5:\r\n```\r\n> pip install -U bokeh==0.12.5\r\n...\r\nInstalling 
collected packages: bokeh\r\n Found existing installation: bokeh 0.12.6\r\n Uninstalling bokeh-0.12.6:\r\n Successfully uninstalled bokeh-0.12.6\r\n Running setup.py install for bokeh ... done\r\nSuccessfully installed bokeh-0.12.5\r\n\r\n> dask-scheduler\r\ndistributed.scheduler - INFO - -----------------------------------------------\r\ndistributed.scheduler - INFO - Scheduler at: tcp://10.0.2.15:8786\r\ndistributed.scheduler - INFO - bokeh at: 0.0.0.0:8787\r\ndistributed.scheduler - INFO - http at: 0.0.0.0:9786\r\ndistributed.scheduler - INFO - Local Directory: /tmp/scheduler-U0qy1k\r\ndistributed.scheduler - INFO - -----------------------------------------------\r\n^Cdistributed.scheduler - INFO - End scheduler at 'tcp://:8786'\r\n```\r\n\r\nI was able to reproduce the issue on my Debian 8 machine with Python 2.7.9. The error does _not_ occur on my Mac with Python 2.7.11. @pitrou could not reproduce the problem with Python 2.7.12.\r\n\n", "before_files": [{"content": "''' Utilities for Py2/Py3 interop.\n\n'''\n\nimport sys\n\ndef with_metaclass(meta, *bases):\n \"\"\" Add metaclasses in both Python 2 and Python 3.\n\n Function from jinja2/_compat.py. License: BSD.\n\n Use it like this::\n\n class BaseForm(object):\n pass\n\n class FormType(type):\n pass\n\n class Form(with_metaclass(FormType, BaseForm)):\n pass\n\n This requires a bit of explanation: the basic idea is to make a\n dummy metaclass for one level of class instantiation that replaces\n itself with the actual metaclass. Because of internal type checks\n we also need to make sure that we downgrade the custom metaclass\n for one level to something closer to type (that's why __call__ and\n __init__ comes back from type etc.).\n\n This has the advantage over six.with_metaclass of not introducing\n dummy classes into the final MRO.\n \"\"\"\n class metaclass(meta):\n __call__ = type.__call__\n __init__ = type.__init__\n def __new__(cls, name, this_bases, d):\n if this_bases is None:\n return type.__new__(cls, name, (), d)\n return meta(name, bases, d)\n return metaclass('temporary_class', None, {})\n\n\n# There is a problem with using @wraps decorator in combination with functools.partial.\n# This issue is not present in Python 3.\n# This redefinition will be triggered only if issue affects user,\n# otherwise regular definition of @wraps will be used.\n#\n# this code snippet was originally posted in following stack overflow discussion:\n# http://stackoverflow.com/a/28752007\n\nfrom functools import wraps, partial, WRAPPER_ASSIGNMENTS\n\ntry:\n wraps(partial(wraps))(wraps)\nexcept AttributeError:\n @wraps(wraps)\n def wraps(obj, attr_names=WRAPPER_ASSIGNMENTS, wraps=wraps):\n return wraps(obj, assigned=(name for name in attr_names if hasattr(obj, name)))\n\ndel partial, WRAPPER_ASSIGNMENTS\n\n\n# inspect.getargspec and inspect.formatargspec were deprecated in Python 3.5\n# in favor of the newer inspect.signature introspection\n\nif sys.version_info[:2] < (3, 4):\n\n def signature(func):\n # The modifications in this function are to make results more in line\n # with Python 3, i.e. self is not included in bound methods, supplied\n # parameters are not reported in partial, etc. 
This simplifies the\n # downstream code considerably.\n from inspect import getargspec, isfunction, ismethod\n from functools import partial\n\n if isfunction(func) or ismethod(func):\n sig = getargspec(func)\n if ismethod(func):\n sig.args.remove('self')\n return sig\n\n elif isinstance(func, partial):\n sig = getargspec(func.func)\n if 'self' in sig.args: sig.args.remove('self')\n for name in func.keywords.keys():\n sig.args.remove(name)\n for val in func.args:\n del sig.args[0]\n return sig\n\n else:\n sig = getargspec(func.__call__)\n sig.args.remove('self')\n return sig\n\n def format_signature(sig):\n from inspect import formatargspec\n return formatargspec(*sig)\n\n def get_param_info(sig):\n return (sig.args, sig.defaults or [])\n\nelse:\n from inspect import signature; signature\n\n def format_signature(sig):\n return str(sig)\n\n def get_param_info(sig):\n defaults = []\n for param in sig.parameters.values():\n if param.default is not param.empty:\n defaults.append(param.default)\n return list(sig.parameters), defaults\n", "path": "bokeh/util/future.py"}], "after_files": [{"content": "''' Utilities for Py2/Py3 interop.\n\n'''\n\nimport sys\n\ndef with_metaclass(meta, *bases):\n \"\"\" Add metaclasses in both Python 2 and Python 3.\n\n Function from jinja2/_compat.py. License: BSD.\n\n Use it like this::\n\n class BaseForm(object):\n pass\n\n class FormType(type):\n pass\n\n class Form(with_metaclass(FormType, BaseForm)):\n pass\n\n This requires a bit of explanation: the basic idea is to make a\n dummy metaclass for one level of class instantiation that replaces\n itself with the actual metaclass. Because of internal type checks\n we also need to make sure that we downgrade the custom metaclass\n for one level to something closer to type (that's why __call__ and\n __init__ comes back from type etc.).\n\n This has the advantage over six.with_metaclass of not introducing\n dummy classes into the final MRO.\n \"\"\"\n class metaclass(meta):\n __call__ = type.__call__\n __init__ = type.__init__\n def __new__(cls, name, this_bases, d):\n if this_bases is None:\n return type.__new__(cls, name, (), d)\n return meta(name, bases, d)\n return metaclass('temporary_class', None, {})\n\n\n# There is a problem with using @wraps decorator in combination with functools.partial.\n# This issue is not present in Python 3.\n# This redefinition will be triggered only if issue affects user,\n# otherwise regular definition of @wraps will be used.\n#\n# this code snippet was originally posted in following stack overflow discussion:\n# http://stackoverflow.com/a/28752007\n\nfrom functools import wraps, partial, WRAPPER_ASSIGNMENTS\n\ntry:\n wraps(partial(wraps))(wraps)\nexcept AttributeError:\n @wraps(wraps)\n def wraps(obj, attr_names=WRAPPER_ASSIGNMENTS, wraps=wraps):\n return wraps(obj, assigned=(name for name in attr_names if hasattr(obj, name)))\n\ndel partial, WRAPPER_ASSIGNMENTS\n\n\n# inspect.getargspec and inspect.formatargspec were deprecated in Python 3.5\n# in favor of the newer inspect.signature introspection\n\nif sys.version_info[:2] < (3, 4):\n\n def signature(func):\n # The modifications in this function are to make results more in line\n # with Python 3, i.e. self is not included in bound methods, supplied\n # parameters are not reported in partial, etc. 
This simplifies the\n # downstream code considerably.\n from inspect import getargspec, isfunction, ismethod\n from functools import partial\n\n if isfunction(func) or ismethod(func):\n sig = getargspec(func)\n if ismethod(func):\n sig.args.remove('self')\n return sig\n\n elif isinstance(func, partial):\n sig = getargspec(func.func)\n if 'self' in sig.args: sig.args.remove('self')\n if func.keywords is not None:\n for name in func.keywords.keys():\n sig.args.remove(name)\n for val in func.args:\n del sig.args[0]\n return sig\n\n else:\n sig = getargspec(func.__call__)\n sig.args.remove('self')\n return sig\n\n def format_signature(sig):\n from inspect import formatargspec\n return formatargspec(*sig)\n\n def get_param_info(sig):\n return (sig.args, sig.defaults or [])\n\nelse:\n from inspect import signature; signature\n\n def format_signature(sig):\n return str(sig)\n\n def get_param_info(sig):\n defaults = []\n for param in sig.parameters.values():\n if param.default is not param.empty:\n defaults.append(param.default)\n return list(sig.parameters), defaults\n", "path": "bokeh/util/future.py"}]}
| 3,701 | 142 |
gh_patches_debug_34666
|
rasdani/github-patches
|
git_diff
|
iterative__dvc-3891
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
plots: replace --show-json with --show-vega
Requested by @dmpetrov for cml. `--show-vega` should require a target and return a filled vega template. `--show-json` is not needed, let's delete it.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dvc/command/plots.py`
Content:
```
1 import argparse
2 import logging
3 import os
4
5 from dvc.command.base import CmdBase, append_doc_link, fix_subparsers
6 from dvc.exceptions import DvcException
7 from dvc.utils import format_link
8
9 logger = logging.getLogger(__name__)
10
11 PAGE_HTML = """<!DOCTYPE html>
12 <html>
13 <head>
14 <title>DVC Plot</title>
15 <script src="https://cdn.jsdelivr.net/npm/[email protected]"></script>
16 <script src="https://cdn.jsdelivr.net/npm/[email protected]"></script>
17 <script src="https://cdn.jsdelivr.net/npm/[email protected]"></script>
18 </head>
19 <body>
20 {divs}
21 </body>
22 </html>"""
23
24 DIV_HTML = """<div id = "{id}"></div>
25 <script type = "text/javascript">
26 var spec = {vega_json};
27 vegaEmbed('#{id}', spec);
28 </script>"""
29
30
31 class CmdPlots(CmdBase):
32 def _func(self, *args, **kwargs):
33 raise NotImplementedError
34
35 def run(self):
36 try:
37 plots = self._func(
38 targets=self.args.targets,
39 template=self.args.template,
40 x_field=self.args.x,
41 y_field=self.args.y,
42 csv_header=not self.args.no_csv_header,
43 title=self.args.title,
44 x_title=self.args.xlab,
45 y_title=self.args.ylab,
46 )
47
48 if self.args.show_json:
49 import json
50
51 logger.info(json.dumps(plots))
52 return 0
53
54 divs = [
55 DIV_HTML.format(id=f"plot{i}", vega_json=plot)
56 for i, plot in enumerate(plots.values())
57 ]
58 html = PAGE_HTML.format(divs="\n".join(divs))
59 path = self.args.out or "plots.html"
60
61 with open(path, "w") as fobj:
62 fobj.write(html)
63
64 logger.info(
65 "file://{}".format(os.path.join(self.repo.root_dir, path))
66 )
67
68 except DvcException:
69 logger.exception("")
70 return 1
71
72 return 0
73
74
75 class CmdPlotsShow(CmdPlots):
76 def _func(self, *args, **kwargs):
77 return self.repo.plots.show(*args, **kwargs)
78
79
80 class CmdPlotsDiff(CmdPlots):
81 def _func(self, *args, **kwargs):
82 return self.repo.plots.diff(*args, revs=self.args.revisions, **kwargs)
83
84
85 def add_parser(subparsers, parent_parser):
86 PLOTS_HELP = (
87 "Generating plots for metrics stored in structured files "
88 "(JSON, CSV, TSV)."
89 )
90
91 plots_parser = subparsers.add_parser(
92 "plots",
93 parents=[parent_parser],
94 description=append_doc_link(PLOTS_HELP, "plots"),
95 help=PLOTS_HELP,
96 formatter_class=argparse.RawDescriptionHelpFormatter,
97 )
98 plots_subparsers = plots_parser.add_subparsers(
99 dest="cmd",
100 help="Use `dvc plots CMD --help` to display command-specific help.",
101 )
102
103 fix_subparsers(plots_subparsers)
104
105 SHOW_HELP = "Generate a plots image file from a metrics file."
106 plots_show_parser = plots_subparsers.add_parser(
107 "show",
108 parents=[parent_parser],
109 description=append_doc_link(SHOW_HELP, "plots/show"),
110 help=SHOW_HELP,
111 formatter_class=argparse.RawDescriptionHelpFormatter,
112 )
113 plots_show_parser.add_argument(
114 "-t",
115 "--template",
116 nargs="?",
117 default=None,
118 help=(
119 "Special JSON or HTML schema file to inject with the data. "
120 "See {}".format(
121 format_link("https://man.dvc.org/plots#plot-templates")
122 )
123 ),
124 )
125 plots_show_parser.add_argument(
126 "-o", "--out", default=None, help="Destination path to save plots to.",
127 )
128 plots_show_parser.add_argument(
129 "-x", default=None, help="Field name for x axis."
130 )
131 plots_show_parser.add_argument(
132 "-y", default=None, help="Field name for y axis."
133 )
134 plots_show_parser.add_argument(
135 "--no-csv-header",
136 action="store_true",
137 default=False,
138 help="Required when CSV or TSV datafile does not have a header.",
139 )
140 plots_show_parser.add_argument(
141 "--show-json",
142 action="store_true",
143 default=False,
144 help="Show output in JSON format.",
145 )
146 plots_show_parser.add_argument("--title", default=None, help="Plot title.")
147 plots_show_parser.add_argument(
148 "--xlab", default=None, help="X axis title."
149 )
150 plots_show_parser.add_argument(
151 "--ylab", default=None, help="Y axis title."
152 )
153 plots_show_parser.add_argument(
154 "targets",
155 nargs="*",
156 help="Metrics files to visualize. Shows all plots by default.",
157 )
158 plots_show_parser.set_defaults(func=CmdPlotsShow)
159
160 PLOTS_DIFF_HELP = (
161 "Plot differences in metrics between commits in the DVC "
162 "repository, or between the last commit and the workspace."
163 )
164 plots_diff_parser = plots_subparsers.add_parser(
165 "diff",
166 parents=[parent_parser],
167 description=append_doc_link(PLOTS_DIFF_HELP, "plots/diff"),
168 help=PLOTS_DIFF_HELP,
169 formatter_class=argparse.RawDescriptionHelpFormatter,
170 )
171 plots_diff_parser.add_argument(
172 "-t",
173 "--template",
174 nargs="?",
175 default=None,
176 help=(
177 "Special JSON or HTML schema file to inject with the data. "
178 "See {}".format(
179 format_link("https://man.dvc.org/plots#plot-templates")
180 )
181 ),
182 )
183 plots_diff_parser.add_argument(
184 "--targets",
185 nargs="*",
186 help="Metrics file to visualize. Shows all plots by default.",
187 )
188 plots_diff_parser.add_argument(
189 "-o", "--out", default=None, help="Destination path to save plots to.",
190 )
191 plots_diff_parser.add_argument(
192 "-x", default=None, help="Field name for x axis."
193 )
194 plots_diff_parser.add_argument(
195 "-y", default=None, help="Field name for y axis."
196 )
197 plots_diff_parser.add_argument(
198 "--no-csv-header",
199 action="store_true",
200 default=False,
201 help="Provided CSV ot TSV datafile does not have a header.",
202 )
203 plots_diff_parser.add_argument(
204 "--show-json",
205 action="store_true",
206 default=False,
207 help="Show output in JSON format.",
208 )
209 plots_diff_parser.add_argument("--title", default=None, help="Plot title.")
210 plots_diff_parser.add_argument(
211 "--xlab", default=None, help="X axis title."
212 )
213 plots_diff_parser.add_argument(
214 "--ylab", default=None, help="Y axis title."
215 )
216 plots_diff_parser.add_argument(
217 "revisions", nargs="*", default=None, help="Git commits to plot from",
218 )
219 plots_diff_parser.set_defaults(func=CmdPlotsDiff)
220
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/dvc/command/plots.py b/dvc/command/plots.py
--- a/dvc/command/plots.py
+++ b/dvc/command/plots.py
@@ -33,6 +33,16 @@
raise NotImplementedError
def run(self):
+ if self.args.show_vega:
+ if not self.args.targets:
+ logger.error("please specify a target for `--show-vega`")
+ return 1
+ if len(self.args.targets) > 1:
+ logger.error(
+ "you can only specify one target for `--show-vega`"
+ )
+ return 1
+
try:
plots = self._func(
targets=self.args.targets,
@@ -45,10 +55,9 @@
y_title=self.args.ylab,
)
- if self.args.show_json:
- import json
-
- logger.info(json.dumps(plots))
+ if self.args.show_vega:
+ target = self.args.targets[0]
+ logger.info(plots[target])
return 0
divs = [
@@ -138,10 +147,10 @@
help="Required when CSV or TSV datafile does not have a header.",
)
plots_show_parser.add_argument(
- "--show-json",
+ "--show-vega",
action="store_true",
default=False,
- help="Show output in JSON format.",
+ help="Show output in VEGA format.",
)
plots_show_parser.add_argument("--title", default=None, help="Plot title.")
plots_show_parser.add_argument(
@@ -201,10 +210,10 @@
help="Provided CSV ot TSV datafile does not have a header.",
)
plots_diff_parser.add_argument(
- "--show-json",
+ "--show-vega",
action="store_true",
default=False,
- help="Show output in JSON format.",
+ help="Show output in VEGA format.",
)
plots_diff_parser.add_argument("--title", default=None, help="Plot title.")
plots_diff_parser.add_argument(
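
For context on how the new flag is meant to be consumed (the cml use case mentioned in the issue), here is a rough sketch; the exact invocation and the file name are illustrative, and it assumes the filled Vega template for the single required target is printed as plain JSON:

```python
import json
import subprocess

out = subprocess.run(
    ["dvc", "plots", "show", "--show-vega", "logs.csv"],
    capture_output=True, text=True, check=True,
)
vega_spec = json.loads(out.stdout)  # filled Vega template for the one target
```
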
|
{"golden_diff": "diff --git a/dvc/command/plots.py b/dvc/command/plots.py\n--- a/dvc/command/plots.py\n+++ b/dvc/command/plots.py\n@@ -33,6 +33,16 @@\n raise NotImplementedError\n \n def run(self):\n+ if self.args.show_vega:\n+ if not self.args.targets:\n+ logger.error(\"please specify a target for `--show-vega`\")\n+ return 1\n+ if len(self.args.targets) > 1:\n+ logger.error(\n+ \"you can only specify one target for `--show-vega`\"\n+ )\n+ return 1\n+\n try:\n plots = self._func(\n targets=self.args.targets,\n@@ -45,10 +55,9 @@\n y_title=self.args.ylab,\n )\n \n- if self.args.show_json:\n- import json\n-\n- logger.info(json.dumps(plots))\n+ if self.args.show_vega:\n+ target = self.args.targets[0]\n+ logger.info(plots[target])\n return 0\n \n divs = [\n@@ -138,10 +147,10 @@\n help=\"Required when CSV or TSV datafile does not have a header.\",\n )\n plots_show_parser.add_argument(\n- \"--show-json\",\n+ \"--show-vega\",\n action=\"store_true\",\n default=False,\n- help=\"Show output in JSON format.\",\n+ help=\"Show output in VEGA format.\",\n )\n plots_show_parser.add_argument(\"--title\", default=None, help=\"Plot title.\")\n plots_show_parser.add_argument(\n@@ -201,10 +210,10 @@\n help=\"Provided CSV ot TSV datafile does not have a header.\",\n )\n plots_diff_parser.add_argument(\n- \"--show-json\",\n+ \"--show-vega\",\n action=\"store_true\",\n default=False,\n- help=\"Show output in JSON format.\",\n+ help=\"Show output in VEGA format.\",\n )\n plots_diff_parser.add_argument(\"--title\", default=None, help=\"Plot title.\")\n plots_diff_parser.add_argument(\n", "issue": "plots: replace --show-json with --show-vega\nRequested by @dmpetrov for cml. `--show-vega` should require a target and return a filled vega template. `--show-json` is not needed, let's delete it.\n", "before_files": [{"content": "import argparse\nimport logging\nimport os\n\nfrom dvc.command.base import CmdBase, append_doc_link, fix_subparsers\nfrom dvc.exceptions import DvcException\nfrom dvc.utils import format_link\n\nlogger = logging.getLogger(__name__)\n\nPAGE_HTML = \"\"\"<!DOCTYPE html>\n<html>\n<head>\n <title>DVC Plot</title>\n <script src=\"https://cdn.jsdelivr.net/npm/[email protected]\"></script>\n <script src=\"https://cdn.jsdelivr.net/npm/[email protected]\"></script>\n <script src=\"https://cdn.jsdelivr.net/npm/[email protected]\"></script>\n</head>\n<body>\n {divs}\n</body>\n</html>\"\"\"\n\nDIV_HTML = \"\"\"<div id = \"{id}\"></div>\n<script type = \"text/javascript\">\n var spec = {vega_json};\n vegaEmbed('#{id}', spec);\n</script>\"\"\"\n\n\nclass CmdPlots(CmdBase):\n def _func(self, *args, **kwargs):\n raise NotImplementedError\n\n def run(self):\n try:\n plots = self._func(\n targets=self.args.targets,\n template=self.args.template,\n x_field=self.args.x,\n y_field=self.args.y,\n csv_header=not self.args.no_csv_header,\n title=self.args.title,\n x_title=self.args.xlab,\n y_title=self.args.ylab,\n )\n\n if self.args.show_json:\n import json\n\n logger.info(json.dumps(plots))\n return 0\n\n divs = [\n DIV_HTML.format(id=f\"plot{i}\", vega_json=plot)\n for i, plot in enumerate(plots.values())\n ]\n html = PAGE_HTML.format(divs=\"\\n\".join(divs))\n path = self.args.out or \"plots.html\"\n\n with open(path, \"w\") as fobj:\n fobj.write(html)\n\n logger.info(\n \"file://{}\".format(os.path.join(self.repo.root_dir, path))\n )\n\n except DvcException:\n logger.exception(\"\")\n return 1\n\n return 0\n\n\nclass CmdPlotsShow(CmdPlots):\n def _func(self, *args, **kwargs):\n return self.repo.plots.show(*args, 
**kwargs)\n\n\nclass CmdPlotsDiff(CmdPlots):\n def _func(self, *args, **kwargs):\n return self.repo.plots.diff(*args, revs=self.args.revisions, **kwargs)\n\n\ndef add_parser(subparsers, parent_parser):\n PLOTS_HELP = (\n \"Generating plots for metrics stored in structured files \"\n \"(JSON, CSV, TSV).\"\n )\n\n plots_parser = subparsers.add_parser(\n \"plots\",\n parents=[parent_parser],\n description=append_doc_link(PLOTS_HELP, \"plots\"),\n help=PLOTS_HELP,\n formatter_class=argparse.RawDescriptionHelpFormatter,\n )\n plots_subparsers = plots_parser.add_subparsers(\n dest=\"cmd\",\n help=\"Use `dvc plots CMD --help` to display command-specific help.\",\n )\n\n fix_subparsers(plots_subparsers)\n\n SHOW_HELP = \"Generate a plots image file from a metrics file.\"\n plots_show_parser = plots_subparsers.add_parser(\n \"show\",\n parents=[parent_parser],\n description=append_doc_link(SHOW_HELP, \"plots/show\"),\n help=SHOW_HELP,\n formatter_class=argparse.RawDescriptionHelpFormatter,\n )\n plots_show_parser.add_argument(\n \"-t\",\n \"--template\",\n nargs=\"?\",\n default=None,\n help=(\n \"Special JSON or HTML schema file to inject with the data. \"\n \"See {}\".format(\n format_link(\"https://man.dvc.org/plots#plot-templates\")\n )\n ),\n )\n plots_show_parser.add_argument(\n \"-o\", \"--out\", default=None, help=\"Destination path to save plots to.\",\n )\n plots_show_parser.add_argument(\n \"-x\", default=None, help=\"Field name for x axis.\"\n )\n plots_show_parser.add_argument(\n \"-y\", default=None, help=\"Field name for y axis.\"\n )\n plots_show_parser.add_argument(\n \"--no-csv-header\",\n action=\"store_true\",\n default=False,\n help=\"Required when CSV or TSV datafile does not have a header.\",\n )\n plots_show_parser.add_argument(\n \"--show-json\",\n action=\"store_true\",\n default=False,\n help=\"Show output in JSON format.\",\n )\n plots_show_parser.add_argument(\"--title\", default=None, help=\"Plot title.\")\n plots_show_parser.add_argument(\n \"--xlab\", default=None, help=\"X axis title.\"\n )\n plots_show_parser.add_argument(\n \"--ylab\", default=None, help=\"Y axis title.\"\n )\n plots_show_parser.add_argument(\n \"targets\",\n nargs=\"*\",\n help=\"Metrics files to visualize. Shows all plots by default.\",\n )\n plots_show_parser.set_defaults(func=CmdPlotsShow)\n\n PLOTS_DIFF_HELP = (\n \"Plot differences in metrics between commits in the DVC \"\n \"repository, or between the last commit and the workspace.\"\n )\n plots_diff_parser = plots_subparsers.add_parser(\n \"diff\",\n parents=[parent_parser],\n description=append_doc_link(PLOTS_DIFF_HELP, \"plots/diff\"),\n help=PLOTS_DIFF_HELP,\n formatter_class=argparse.RawDescriptionHelpFormatter,\n )\n plots_diff_parser.add_argument(\n \"-t\",\n \"--template\",\n nargs=\"?\",\n default=None,\n help=(\n \"Special JSON or HTML schema file to inject with the data. \"\n \"See {}\".format(\n format_link(\"https://man.dvc.org/plots#plot-templates\")\n )\n ),\n )\n plots_diff_parser.add_argument(\n \"--targets\",\n nargs=\"*\",\n help=\"Metrics file to visualize. 
Shows all plots by default.\",\n )\n plots_diff_parser.add_argument(\n \"-o\", \"--out\", default=None, help=\"Destination path to save plots to.\",\n )\n plots_diff_parser.add_argument(\n \"-x\", default=None, help=\"Field name for x axis.\"\n )\n plots_diff_parser.add_argument(\n \"-y\", default=None, help=\"Field name for y axis.\"\n )\n plots_diff_parser.add_argument(\n \"--no-csv-header\",\n action=\"store_true\",\n default=False,\n help=\"Provided CSV ot TSV datafile does not have a header.\",\n )\n plots_diff_parser.add_argument(\n \"--show-json\",\n action=\"store_true\",\n default=False,\n help=\"Show output in JSON format.\",\n )\n plots_diff_parser.add_argument(\"--title\", default=None, help=\"Plot title.\")\n plots_diff_parser.add_argument(\n \"--xlab\", default=None, help=\"X axis title.\"\n )\n plots_diff_parser.add_argument(\n \"--ylab\", default=None, help=\"Y axis title.\"\n )\n plots_diff_parser.add_argument(\n \"revisions\", nargs=\"*\", default=None, help=\"Git commits to plot from\",\n )\n plots_diff_parser.set_defaults(func=CmdPlotsDiff)\n", "path": "dvc/command/plots.py"}], "after_files": [{"content": "import argparse\nimport logging\nimport os\n\nfrom dvc.command.base import CmdBase, append_doc_link, fix_subparsers\nfrom dvc.exceptions import DvcException\nfrom dvc.utils import format_link\n\nlogger = logging.getLogger(__name__)\n\nPAGE_HTML = \"\"\"<!DOCTYPE html>\n<html>\n<head>\n <title>DVC Plot</title>\n <script src=\"https://cdn.jsdelivr.net/npm/[email protected]\"></script>\n <script src=\"https://cdn.jsdelivr.net/npm/[email protected]\"></script>\n <script src=\"https://cdn.jsdelivr.net/npm/[email protected]\"></script>\n</head>\n<body>\n {divs}\n</body>\n</html>\"\"\"\n\nDIV_HTML = \"\"\"<div id = \"{id}\"></div>\n<script type = \"text/javascript\">\n var spec = {vega_json};\n vegaEmbed('#{id}', spec);\n</script>\"\"\"\n\n\nclass CmdPlots(CmdBase):\n def _func(self, *args, **kwargs):\n raise NotImplementedError\n\n def run(self):\n if self.args.show_vega:\n if not self.args.targets:\n logger.error(\"please specify a target for `--show-vega`\")\n return 1\n if len(self.args.targets) > 1:\n logger.error(\n \"you can only specify one target for `--show-vega`\"\n )\n return 1\n\n try:\n plots = self._func(\n targets=self.args.targets,\n template=self.args.template,\n x_field=self.args.x,\n y_field=self.args.y,\n csv_header=not self.args.no_csv_header,\n title=self.args.title,\n x_title=self.args.xlab,\n y_title=self.args.ylab,\n )\n\n if self.args.show_vega:\n target = self.args.targets[0]\n logger.info(plots[target])\n return 0\n\n divs = [\n DIV_HTML.format(id=f\"plot{i}\", vega_json=plot)\n for i, plot in enumerate(plots.values())\n ]\n html = PAGE_HTML.format(divs=\"\\n\".join(divs))\n path = self.args.out or \"plots.html\"\n\n with open(path, \"w\") as fobj:\n fobj.write(html)\n\n logger.info(\n \"file://{}\".format(os.path.join(self.repo.root_dir, path))\n )\n\n except DvcException:\n logger.exception(\"\")\n return 1\n\n return 0\n\n\nclass CmdPlotsShow(CmdPlots):\n def _func(self, *args, **kwargs):\n return self.repo.plots.show(*args, **kwargs)\n\n\nclass CmdPlotsDiff(CmdPlots):\n def _func(self, *args, **kwargs):\n return self.repo.plots.diff(*args, revs=self.args.revisions, **kwargs)\n\n\ndef add_parser(subparsers, parent_parser):\n PLOTS_HELP = (\n \"Generating plots for metrics stored in structured files \"\n \"(JSON, CSV, TSV).\"\n )\n\n plots_parser = subparsers.add_parser(\n \"plots\",\n parents=[parent_parser],\n 
description=append_doc_link(PLOTS_HELP, \"plots\"),\n help=PLOTS_HELP,\n formatter_class=argparse.RawDescriptionHelpFormatter,\n )\n plots_subparsers = plots_parser.add_subparsers(\n dest=\"cmd\",\n help=\"Use `dvc plots CMD --help` to display command-specific help.\",\n )\n\n fix_subparsers(plots_subparsers)\n\n SHOW_HELP = \"Generate a plots image file from a metrics file.\"\n plots_show_parser = plots_subparsers.add_parser(\n \"show\",\n parents=[parent_parser],\n description=append_doc_link(SHOW_HELP, \"plots/show\"),\n help=SHOW_HELP,\n formatter_class=argparse.RawDescriptionHelpFormatter,\n )\n plots_show_parser.add_argument(\n \"-t\",\n \"--template\",\n nargs=\"?\",\n default=None,\n help=(\n \"Special JSON or HTML schema file to inject with the data. \"\n \"See {}\".format(\n format_link(\"https://man.dvc.org/plots#plot-templates\")\n )\n ),\n )\n plots_show_parser.add_argument(\n \"-o\", \"--out\", default=None, help=\"Destination path to save plots to.\",\n )\n plots_show_parser.add_argument(\n \"-x\", default=None, help=\"Field name for x axis.\"\n )\n plots_show_parser.add_argument(\n \"-y\", default=None, help=\"Field name for y axis.\"\n )\n plots_show_parser.add_argument(\n \"--no-csv-header\",\n action=\"store_true\",\n default=False,\n help=\"Required when CSV or TSV datafile does not have a header.\",\n )\n plots_show_parser.add_argument(\n \"--show-vega\",\n action=\"store_true\",\n default=False,\n help=\"Show output in VEGA format.\",\n )\n plots_show_parser.add_argument(\"--title\", default=None, help=\"Plot title.\")\n plots_show_parser.add_argument(\n \"--xlab\", default=None, help=\"X axis title.\"\n )\n plots_show_parser.add_argument(\n \"--ylab\", default=None, help=\"Y axis title.\"\n )\n plots_show_parser.add_argument(\n \"targets\",\n nargs=\"*\",\n help=\"Metrics files to visualize. Shows all plots by default.\",\n )\n plots_show_parser.set_defaults(func=CmdPlotsShow)\n\n PLOTS_DIFF_HELP = (\n \"Plot differences in metrics between commits in the DVC \"\n \"repository, or between the last commit and the workspace.\"\n )\n plots_diff_parser = plots_subparsers.add_parser(\n \"diff\",\n parents=[parent_parser],\n description=append_doc_link(PLOTS_DIFF_HELP, \"plots/diff\"),\n help=PLOTS_DIFF_HELP,\n formatter_class=argparse.RawDescriptionHelpFormatter,\n )\n plots_diff_parser.add_argument(\n \"-t\",\n \"--template\",\n nargs=\"?\",\n default=None,\n help=(\n \"Special JSON or HTML schema file to inject with the data. \"\n \"See {}\".format(\n format_link(\"https://man.dvc.org/plots#plot-templates\")\n )\n ),\n )\n plots_diff_parser.add_argument(\n \"--targets\",\n nargs=\"*\",\n help=\"Metrics file to visualize. 
Shows all plots by default.\",\n )\n plots_diff_parser.add_argument(\n \"-o\", \"--out\", default=None, help=\"Destination path to save plots to.\",\n )\n plots_diff_parser.add_argument(\n \"-x\", default=None, help=\"Field name for x axis.\"\n )\n plots_diff_parser.add_argument(\n \"-y\", default=None, help=\"Field name for y axis.\"\n )\n plots_diff_parser.add_argument(\n \"--no-csv-header\",\n action=\"store_true\",\n default=False,\n help=\"Provided CSV ot TSV datafile does not have a header.\",\n )\n plots_diff_parser.add_argument(\n \"--show-vega\",\n action=\"store_true\",\n default=False,\n help=\"Show output in VEGA format.\",\n )\n plots_diff_parser.add_argument(\"--title\", default=None, help=\"Plot title.\")\n plots_diff_parser.add_argument(\n \"--xlab\", default=None, help=\"X axis title.\"\n )\n plots_diff_parser.add_argument(\n \"--ylab\", default=None, help=\"Y axis title.\"\n )\n plots_diff_parser.add_argument(\n \"revisions\", nargs=\"*\", default=None, help=\"Git commits to plot from\",\n )\n plots_diff_parser.set_defaults(func=CmdPlotsDiff)\n", "path": "dvc/command/plots.py"}]}
| 2,406 | 470 |
gh_patches_debug_4117
|
rasdani/github-patches
|
git_diff
|
kivy__kivy-6178
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
MacOS: Clipboard nspaste makes the app crash when copying text
<!--
The issue tracker is a tool to address bugs.
Please use the #support Discord channel at https://chat.kivy.org/ or Stack Overflow for
support questions, more information at https://git.io/vM1yQ.
Before opening a new issue, make sure you do the following:
* check that your issue isn't already filed: https://git.io/vM1iE
* prepare a short, runnable example that reproduces the issue
* reproduce the problem with the latest development version of Kivy
* double-check that the issue is indeed a bug and not a support request
-->
### Versions
* Python: 3.7.1
* OS: MacOS 10.13.6
* Kivy: 1.10.1
* Kivy installation method: pypi
### Description
When I try to copy text in a TextInput, this makes the app crash. But paste is OK.
### Code and Logs
```log
Traceback (most recent call last):
File "main.py", line 56, in <module>
app.run()
File "/Users/ivc/kivy/.env3/lib/python3.7/site-packages/kivy/app.py", line 826, in run
runTouchApp()
File "/Users/ivc/kivy/.env3/lib/python3.7/site-packages/kivy/base.py", line 502, in runTouchApp
EventLoop.window.mainloop()
File "/Users/ivc/kivy/.env3/lib/python3.7/site-packages/kivy/core/window/window_sdl2.py", line 727, in mainloop
self._mainloop()
File "/Users/ivc/kivy/.env3/lib/python3.7/site-packages/kivy/core/window/window_sdl2.py", line 662, in _mainloop
self.modifiers):
File "kivy/_event.pyx", line 703, in kivy._event.EventDispatcher.dispatch
File "kivy/_event.pyx", line 1214, in kivy._event.EventObservers.dispatch
File "kivy/_event.pyx", line 1138, in kivy._event.EventObservers._dispatch
File "/Users/ivc/kivy/.env3/lib/python3.7/site-packages/kivy/core/window/__init__.py", line 162, in _on_window_key_down
return self.dispatch('on_key_down', keycode, text, modifiers)
File "kivy/_event.pyx", line 703, in kivy._event.EventDispatcher.dispatch
File "kivy/_event.pyx", line 1214, in kivy._event.EventObservers.dispatch
File "kivy/_event.pyx", line 1138, in kivy._event.EventObservers._dispatch
File "/Users/ivc/kivy/.env3/lib/python3.7/site-packages/kivy/uix/textinput.py", line 2434, in keyboard_on_key_down
self.copy()
File "/Users/ivc/kivy/.env3/lib/python3.7/site-packages/kivy/uix/textinput.py", line 1727, in copy
return Clipboard.copy(self.selection_text)
File "/Users/ivc/kivy/.env3/lib/python3.7/site-packages/kivy/core/clipboard/__init__.py", line 73, in copy
self._copy(data)
File "/Users/ivc/kivy/.env3/lib/python3.7/site-packages/kivy/core/clipboard/__init__.py", line 87, in _copy
self.put(data, self._clip_mime_type)
File "/Users/ivc/kivy/.env3/lib/python3.7/site-packages/kivy/core/clipboard/clipboard_nspaste.py", line 40, in put
pb.writeObjects_([data])
File "pyobjus/pyobjus.pyx", line 393, in pyobjus.ObjcMethod.__call__
File "pyobjus/pyobjus_conversions.pxi", line 617, in pyobjus.convert_py_arg_to_cy
File "pyobjus/pyobjus_conversions.pxi", line 441, in pyobjus.convert_py_to_nsobject
File "pyobjus/pyobjus.pyx", line 393, in pyobjus.ObjcMethod.__call__
File "pyobjus/pyobjus_conversions.pxi", line 617, in pyobjus.convert_py_arg_to_cy
File "pyobjus/pyobjus_conversions.pxi", line 452, in pyobjus.convert_py_to_nsobject
File "pyobjus/pyobjus.pyx", line 974, in pyobjus.objc_create_delegate
pyobjus.ObjcException: You've passed b'kivyproject' as delegate, but there is no @protocol methods declared.
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kivy/core/clipboard/clipboard_nspaste.py`
Content:
```
1 '''
2 Clipboard OsX: implementation of clipboard using Appkit
3 '''
4
5 __all__ = ('ClipboardNSPaste', )
6
7 from kivy.core.clipboard import ClipboardBase
8 from kivy.utils import platform
9
10 if platform != 'macosx':
11 raise SystemError('Unsupported platform for appkit clipboard.')
12 try:
13 from pyobjus import autoclass
14 from pyobjus.dylib_manager import load_framework, INCLUDE
15 load_framework(INCLUDE.AppKit)
16 except ImportError:
17 raise SystemError('Pyobjus not installed. Please run the following'
18 ' command to install it. `pip install --user pyobjus`')
19
20 NSPasteboard = autoclass('NSPasteboard')
21 NSString = autoclass('NSString')
22
23
24 class ClipboardNSPaste(ClipboardBase):
25
26 def __init__(self):
27 super(ClipboardNSPaste, self).__init__()
28 self._clipboard = NSPasteboard.generalPasteboard()
29
30 def get(self, mimetype='text/plain'):
31 pb = self._clipboard
32 data = pb.stringForType_('public.utf8-plain-text')
33 if not data:
34 return ""
35 return data.UTF8String()
36
37 def put(self, data, mimetype='text/plain'):
38 pb = self._clipboard
39 pb.clearContents()
40 pb.writeObjects_([data])
41
42 def get_types(self):
43 return list('text/plain',)
44
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kivy/core/clipboard/clipboard_nspaste.py b/kivy/core/clipboard/clipboard_nspaste.py
--- a/kivy/core/clipboard/clipboard_nspaste.py
+++ b/kivy/core/clipboard/clipboard_nspaste.py
@@ -37,7 +37,8 @@
def put(self, data, mimetype='text/plain'):
pb = self._clipboard
pb.clearContents()
- pb.writeObjects_([data])
+ utf8 = NSString.alloc().initWithUTF8String_(data)
+ pb.setString_forType_(utf8, 'public.utf8-plain-text')
def get_types(self):
return list('text/plain',)
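
The change is confined to the backend's put(); at the level of the Clipboard facade seen in the traceback, the round trip it restores looks roughly like this (macOS with pyobjus assumed):

```python
from kivy.core.clipboard import Clipboard

Clipboard.copy(u'copied text')   # previously crashed inside convert_py_to_nsobject
print(Clipboard.paste())         # the copied text comes back from the pasteboard
```
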
|
{"golden_diff": "diff --git a/kivy/core/clipboard/clipboard_nspaste.py b/kivy/core/clipboard/clipboard_nspaste.py\n--- a/kivy/core/clipboard/clipboard_nspaste.py\n+++ b/kivy/core/clipboard/clipboard_nspaste.py\n@@ -37,7 +37,8 @@\n def put(self, data, mimetype='text/plain'):\n pb = self._clipboard\n pb.clearContents()\n- pb.writeObjects_([data])\n+ utf8 = NSString.alloc().initWithUTF8String_(data)\n+ pb.setString_forType_(utf8, 'public.utf8-plain-text')\n \n def get_types(self):\n return list('text/plain',)\n", "issue": "MacOS: Clipboard nspaste make app crash when copying text\n<!--\r\nThe issue tracker is a tool to address bugs.\r\nPlease use the #support Discord channel at https://chat.kivy.org/ or Stack Overflow for\r\nsupport questions, more information at https://git.io/vM1yQ.\r\n\r\nBefore opening a new issue, make sure you do the following:\r\n * check that your issue isn't already filed: https://git.io/vM1iE\r\n * prepare a short, runnable example that reproduces the issue\r\n * reproduce the problem with the latest development version of Kivy\r\n * double-check that the issue is indeed a bug and not a support request\r\n-->\r\n\r\n### Versions\r\n\r\n* Python: 3.7.1\r\n* OS: MacOS 10.13.6\r\n* Kivy: 1.10.1\r\n* Kivy installation method: pypi\r\n\r\n### Description\r\n\r\nWhen I try copy text in TextInput, this make app crash. But paste is OK.\r\n\r\n### Code and Logs\r\n\r\n```log\r\nTraceback (most recent call last):\r\n File \"main.py\", line 56, in <module>\r\n app.run()\r\n File \"/Users/ivc/kivy/.env3/lib/python3.7/site-packages/kivy/app.py\", line 826, in run\r\n runTouchApp()\r\n File \"/Users/ivc/kivy/.env3/lib/python3.7/site-packages/kivy/base.py\", line 502, in runTouchApp\r\n EventLoop.window.mainloop()\r\n File \"/Users/ivc/kivy/.env3/lib/python3.7/site-packages/kivy/core/window/window_sdl2.py\", line 727, in mainloop\r\n self._mainloop()\r\n File \"/Users/ivc/kivy/.env3/lib/python3.7/site-packages/kivy/core/window/window_sdl2.py\", line 662, in _mainloop\r\n self.modifiers):\r\n File \"kivy/_event.pyx\", line 703, in kivy._event.EventDispatcher.dispatch\r\n File \"kivy/_event.pyx\", line 1214, in kivy._event.EventObservers.dispatch\r\n File \"kivy/_event.pyx\", line 1138, in kivy._event.EventObservers._dispatch\r\n File \"/Users/ivc/kivy/.env3/lib/python3.7/site-packages/kivy/core/window/__init__.py\", line 162, in _on_window_key_down\r\n return self.dispatch('on_key_down', keycode, text, modifiers)\r\n File \"kivy/_event.pyx\", line 703, in kivy._event.EventDispatcher.dispatch\r\n File \"kivy/_event.pyx\", line 1214, in kivy._event.EventObservers.dispatch\r\n File \"kivy/_event.pyx\", line 1138, in kivy._event.EventObservers._dispatch\r\n File \"/Users/ivc/kivy/.env3/lib/python3.7/site-packages/kivy/uix/textinput.py\", line 2434, in keyboard_on_key_down\r\n self.copy()\r\n File \"/Users/ivc/kivy/.env3/lib/python3.7/site-packages/kivy/uix/textinput.py\", line 1727, in copy\r\n return Clipboard.copy(self.selection_text)\r\n File \"/Users/ivc/kivy/.env3/lib/python3.7/site-packages/kivy/core/clipboard/__init__.py\", line 73, in copy\r\n self._copy(data)\r\n File \"/Users/ivc/kivy/.env3/lib/python3.7/site-packages/kivy/core/clipboard/__init__.py\", line 87, in _copy\r\n self.put(data, self._clip_mime_type)\r\n File \"/Users/ivc/kivy/.env3/lib/python3.7/site-packages/kivy/core/clipboard/clipboard_nspaste.py\", line 40, in put\r\n pb.writeObjects_([data])\r\n File \"pyobjus/pyobjus.pyx\", line 393, in pyobjus.ObjcMethod.__call__\r\n File \"pyobjus/pyobjus_conversions.pxi\", 
line 617, in pyobjus.convert_py_arg_to_cy\r\n File \"pyobjus/pyobjus_conversions.pxi\", line 441, in pyobjus.convert_py_to_nsobject\r\n File \"pyobjus/pyobjus.pyx\", line 393, in pyobjus.ObjcMethod.__call__\r\n File \"pyobjus/pyobjus_conversions.pxi\", line 617, in pyobjus.convert_py_arg_to_cy\r\n File \"pyobjus/pyobjus_conversions.pxi\", line 452, in pyobjus.convert_py_to_nsobject\r\n File \"pyobjus/pyobjus.pyx\", line 974, in pyobjus.objc_create_delegate\r\n pyobjus.ObjcException: You've passed b'kivyproject' as delegate, but there is no @protocol methods declared.\r\n```\r\n\n", "before_files": [{"content": "'''\nClipboard OsX: implementation of clipboard using Appkit\n'''\n\n__all__ = ('ClipboardNSPaste', )\n\nfrom kivy.core.clipboard import ClipboardBase\nfrom kivy.utils import platform\n\nif platform != 'macosx':\n raise SystemError('Unsupported platform for appkit clipboard.')\ntry:\n from pyobjus import autoclass\n from pyobjus.dylib_manager import load_framework, INCLUDE\n load_framework(INCLUDE.AppKit)\nexcept ImportError:\n raise SystemError('Pyobjus not installed. Please run the following'\n ' command to install it. `pip install --user pyobjus`')\n\nNSPasteboard = autoclass('NSPasteboard')\nNSString = autoclass('NSString')\n\n\nclass ClipboardNSPaste(ClipboardBase):\n\n def __init__(self):\n super(ClipboardNSPaste, self).__init__()\n self._clipboard = NSPasteboard.generalPasteboard()\n\n def get(self, mimetype='text/plain'):\n pb = self._clipboard\n data = pb.stringForType_('public.utf8-plain-text')\n if not data:\n return \"\"\n return data.UTF8String()\n\n def put(self, data, mimetype='text/plain'):\n pb = self._clipboard\n pb.clearContents()\n pb.writeObjects_([data])\n\n def get_types(self):\n return list('text/plain',)\n", "path": "kivy/core/clipboard/clipboard_nspaste.py"}], "after_files": [{"content": "'''\nClipboard OsX: implementation of clipboard using Appkit\n'''\n\n__all__ = ('ClipboardNSPaste', )\n\nfrom kivy.core.clipboard import ClipboardBase\nfrom kivy.utils import platform\n\nif platform != 'macosx':\n raise SystemError('Unsupported platform for appkit clipboard.')\ntry:\n from pyobjus import autoclass\n from pyobjus.dylib_manager import load_framework, INCLUDE\n load_framework(INCLUDE.AppKit)\nexcept ImportError:\n raise SystemError('Pyobjus not installed. Please run the following'\n ' command to install it. `pip install --user pyobjus`')\n\nNSPasteboard = autoclass('NSPasteboard')\nNSString = autoclass('NSString')\n\n\nclass ClipboardNSPaste(ClipboardBase):\n\n def __init__(self):\n super(ClipboardNSPaste, self).__init__()\n self._clipboard = NSPasteboard.generalPasteboard()\n\n def get(self, mimetype='text/plain'):\n pb = self._clipboard\n data = pb.stringForType_('public.utf8-plain-text')\n if not data:\n return \"\"\n return data.UTF8String()\n\n def put(self, data, mimetype='text/plain'):\n pb = self._clipboard\n pb.clearContents()\n utf8 = NSString.alloc().initWithUTF8String_(data)\n pb.setString_forType_(utf8, 'public.utf8-plain-text')\n\n def get_types(self):\n return list('text/plain',)\n", "path": "kivy/core/clipboard/clipboard_nspaste.py"}]}
| 1,723 | 148 |
gh_patches_debug_33844
|
rasdani/github-patches
|
git_diff
|
getredash__redash-4354
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Make Cypress tests work with [email protected]
Running our tests with [email protected] doesn't work. Need to figure out what happened, until then pinning the version to 3.4.1 (#4284).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `redash/app.py`
Content:
```
1 from flask import Flask
2 from werkzeug.contrib.fixers import ProxyFix
3
4 from . import settings
5
6
7 class Redash(Flask):
8 """A custom Flask app for Redash"""
9 def __init__(self, *args, **kwargs):
10 kwargs.update({
11 'template_folder': settings.STATIC_ASSETS_PATH,
12 'static_folder': settings.STATIC_ASSETS_PATH,
13 'static_url_path': '/static',
14 })
15 super(Redash, self).__init__(__name__, *args, **kwargs)
16 # Make sure we get the right referral address even behind proxies like nginx.
17 self.wsgi_app = ProxyFix(self.wsgi_app, settings.PROXIES_COUNT)
18 # Configure Redash using our settings
19 self.config.from_object('redash.settings')
20
21
22 def create_app():
23 from . import authentication, extensions, handlers, limiter, mail, migrate, security
24 from .handlers import chrome_logger
25 from .handlers.webpack import configure_webpack
26 from .metrics import request as request_metrics
27 from .models import db, users
28 from .utils import sentry
29 from .version_check import reset_new_version_status
30
31 sentry.init()
32 app = Redash()
33
34 # Check and update the cached version for use by the client
35 app.before_first_request(reset_new_version_status)
36
37 security.init_app(app)
38 request_metrics.init_app(app)
39 db.init_app(app)
40 migrate.init_app(app, db)
41 mail.init_app(app)
42 authentication.init_app(app)
43 limiter.init_app(app)
44 handlers.init_app(app)
45 configure_webpack(app)
46 extensions.init_app(app)
47 chrome_logger.init_app(app)
48 users.init_app(app)
49
50 return app
51
```
Path: `redash/handlers/chrome_logger.py`
Content:
```
1 import time
2 import chromelogger
3 from flask import g, request
4 from flask_sqlalchemy import get_debug_queries
5
6
7 def log_queries():
8 total_duration = 0.0
9 queries_count = 0
10
11 chromelogger.group("SQL Queries")
12
13 for q in get_debug_queries():
14 total_duration += q.duration
15 queries_count += 1
16 chromelogger.info(q.statement % q.parameters)
17 chromelogger.info("Runtime: {:.2f}ms".format(1000 * q.duration))
18
19 chromelogger.info("{} queries executed in {:.2f}ms.".format(queries_count, total_duration*1000))
20
21 chromelogger.group_end("SQL Queries")
22
23
24 def chrome_log(response):
25 request_duration = (time.time() - g.start_time) * 1000
26 queries_duration = g.get('queries_duration', 0.0)
27 queries_count = g.get('queries_count', 0)
28
29 group_name = '{} {} ({}, {:.2f}ms runtime, {} queries in {:.2f}ms)'.format(
30 request.method, request.path, response.status_code, request_duration, queries_count, queries_duration)
31
32 chromelogger.group_collapsed(group_name)
33
34 endpoint = (request.endpoint or 'unknown').replace('.', '_')
35 chromelogger.info('Endpoint: {}'.format(endpoint))
36 chromelogger.info('Content Type: {}'.format(response.content_type))
37 chromelogger.info('Content Length: {}'.format(response.content_length or -1))
38
39 log_queries()
40
41 chromelogger.group_end(group_name)
42
43 header = chromelogger.get_header()
44 if header is not None:
45 response.headers.add(*header)
46
47 return response
48
49
50 def init_app(app):
51 if not app.debug:
52 return
53
54 app.after_request(chrome_log)
55
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/redash/app.py b/redash/app.py
--- a/redash/app.py
+++ b/redash/app.py
@@ -21,7 +21,6 @@
def create_app():
from . import authentication, extensions, handlers, limiter, mail, migrate, security
- from .handlers import chrome_logger
from .handlers.webpack import configure_webpack
from .metrics import request as request_metrics
from .models import db, users
@@ -44,7 +43,6 @@
handlers.init_app(app)
configure_webpack(app)
extensions.init_app(app)
- chrome_logger.init_app(app)
users.init_app(app)
return app
diff --git a/redash/handlers/chrome_logger.py b/redash/handlers/chrome_logger.py
deleted file mode 100644
--- a/redash/handlers/chrome_logger.py
+++ /dev/null
@@ -1,54 +0,0 @@
-import time
-import chromelogger
-from flask import g, request
-from flask_sqlalchemy import get_debug_queries
-
-
-def log_queries():
- total_duration = 0.0
- queries_count = 0
-
- chromelogger.group("SQL Queries")
-
- for q in get_debug_queries():
- total_duration += q.duration
- queries_count += 1
- chromelogger.info(q.statement % q.parameters)
- chromelogger.info("Runtime: {:.2f}ms".format(1000 * q.duration))
-
- chromelogger.info("{} queries executed in {:.2f}ms.".format(queries_count, total_duration*1000))
-
- chromelogger.group_end("SQL Queries")
-
-
-def chrome_log(response):
- request_duration = (time.time() - g.start_time) * 1000
- queries_duration = g.get('queries_duration', 0.0)
- queries_count = g.get('queries_count', 0)
-
- group_name = '{} {} ({}, {:.2f}ms runtime, {} queries in {:.2f}ms)'.format(
- request.method, request.path, response.status_code, request_duration, queries_count, queries_duration)
-
- chromelogger.group_collapsed(group_name)
-
- endpoint = (request.endpoint or 'unknown').replace('.', '_')
- chromelogger.info('Endpoint: {}'.format(endpoint))
- chromelogger.info('Content Type: {}'.format(response.content_type))
- chromelogger.info('Content Length: {}'.format(response.content_length or -1))
-
- log_queries()
-
- chromelogger.group_end(group_name)
-
- header = chromelogger.get_header()
- if header is not None:
- response.headers.add(*header)
-
- return response
-
-
-def init_app(app):
- if not app.debug:
- return
-
- app.after_request(chrome_log)
|
{"golden_diff": "diff --git a/redash/app.py b/redash/app.py\n--- a/redash/app.py\n+++ b/redash/app.py\n@@ -21,7 +21,6 @@\n \n def create_app():\n from . import authentication, extensions, handlers, limiter, mail, migrate, security\n- from .handlers import chrome_logger\n from .handlers.webpack import configure_webpack\n from .metrics import request as request_metrics\n from .models import db, users\n@@ -44,7 +43,6 @@\n handlers.init_app(app)\n configure_webpack(app)\n extensions.init_app(app)\n- chrome_logger.init_app(app)\n users.init_app(app)\n \n return app\ndiff --git a/redash/handlers/chrome_logger.py b/redash/handlers/chrome_logger.py\ndeleted file mode 100644\n--- a/redash/handlers/chrome_logger.py\n+++ /dev/null\n@@ -1,54 +0,0 @@\n-import time\n-import chromelogger\n-from flask import g, request\n-from flask_sqlalchemy import get_debug_queries\n-\n-\n-def log_queries():\n- total_duration = 0.0\n- queries_count = 0\n-\n- chromelogger.group(\"SQL Queries\")\n-\n- for q in get_debug_queries():\n- total_duration += q.duration\n- queries_count += 1\n- chromelogger.info(q.statement % q.parameters)\n- chromelogger.info(\"Runtime: {:.2f}ms\".format(1000 * q.duration))\n-\n- chromelogger.info(\"{} queries executed in {:.2f}ms.\".format(queries_count, total_duration*1000))\n-\n- chromelogger.group_end(\"SQL Queries\")\n-\n-\n-def chrome_log(response):\n- request_duration = (time.time() - g.start_time) * 1000\n- queries_duration = g.get('queries_duration', 0.0)\n- queries_count = g.get('queries_count', 0)\n-\n- group_name = '{} {} ({}, {:.2f}ms runtime, {} queries in {:.2f}ms)'.format(\n- request.method, request.path, response.status_code, request_duration, queries_count, queries_duration)\n-\n- chromelogger.group_collapsed(group_name)\n-\n- endpoint = (request.endpoint or 'unknown').replace('.', '_')\n- chromelogger.info('Endpoint: {}'.format(endpoint))\n- chromelogger.info('Content Type: {}'.format(response.content_type))\n- chromelogger.info('Content Length: {}'.format(response.content_length or -1))\n-\n- log_queries()\n-\n- chromelogger.group_end(group_name)\n-\n- header = chromelogger.get_header()\n- if header is not None:\n- response.headers.add(*header)\n-\n- return response\n-\n-\n-def init_app(app):\n- if not app.debug:\n- return\n-\n- app.after_request(chrome_log)\n", "issue": "Make Cypress tests work with [email protected]\nRunning our tests with [email protected] doesn't work. Need to figure out what happened, until then pinning the version to 3.4.1 (#4284).\n", "before_files": [{"content": "from flask import Flask\nfrom werkzeug.contrib.fixers import ProxyFix\n\nfrom . import settings\n\n\nclass Redash(Flask):\n \"\"\"A custom Flask app for Redash\"\"\"\n def __init__(self, *args, **kwargs):\n kwargs.update({\n 'template_folder': settings.STATIC_ASSETS_PATH,\n 'static_folder': settings.STATIC_ASSETS_PATH,\n 'static_url_path': '/static',\n })\n super(Redash, self).__init__(__name__, *args, **kwargs)\n # Make sure we get the right referral address even behind proxies like nginx.\n self.wsgi_app = ProxyFix(self.wsgi_app, settings.PROXIES_COUNT)\n # Configure Redash using our settings\n self.config.from_object('redash.settings')\n\n\ndef create_app():\n from . 
import authentication, extensions, handlers, limiter, mail, migrate, security\n from .handlers import chrome_logger\n from .handlers.webpack import configure_webpack\n from .metrics import request as request_metrics\n from .models import db, users\n from .utils import sentry\n from .version_check import reset_new_version_status\n\n sentry.init()\n app = Redash()\n\n # Check and update the cached version for use by the client\n app.before_first_request(reset_new_version_status)\n\n security.init_app(app)\n request_metrics.init_app(app)\n db.init_app(app)\n migrate.init_app(app, db)\n mail.init_app(app)\n authentication.init_app(app)\n limiter.init_app(app)\n handlers.init_app(app)\n configure_webpack(app)\n extensions.init_app(app)\n chrome_logger.init_app(app)\n users.init_app(app)\n\n return app\n", "path": "redash/app.py"}, {"content": "import time\nimport chromelogger\nfrom flask import g, request\nfrom flask_sqlalchemy import get_debug_queries\n\n\ndef log_queries():\n total_duration = 0.0\n queries_count = 0\n\n chromelogger.group(\"SQL Queries\")\n\n for q in get_debug_queries():\n total_duration += q.duration\n queries_count += 1\n chromelogger.info(q.statement % q.parameters)\n chromelogger.info(\"Runtime: {:.2f}ms\".format(1000 * q.duration))\n\n chromelogger.info(\"{} queries executed in {:.2f}ms.\".format(queries_count, total_duration*1000))\n\n chromelogger.group_end(\"SQL Queries\")\n\n\ndef chrome_log(response):\n request_duration = (time.time() - g.start_time) * 1000\n queries_duration = g.get('queries_duration', 0.0)\n queries_count = g.get('queries_count', 0)\n\n group_name = '{} {} ({}, {:.2f}ms runtime, {} queries in {:.2f}ms)'.format(\n request.method, request.path, response.status_code, request_duration, queries_count, queries_duration)\n\n chromelogger.group_collapsed(group_name)\n\n endpoint = (request.endpoint or 'unknown').replace('.', '_')\n chromelogger.info('Endpoint: {}'.format(endpoint))\n chromelogger.info('Content Type: {}'.format(response.content_type))\n chromelogger.info('Content Length: {}'.format(response.content_length or -1))\n\n log_queries()\n\n chromelogger.group_end(group_name)\n\n header = chromelogger.get_header()\n if header is not None:\n response.headers.add(*header)\n\n return response\n\n\ndef init_app(app):\n if not app.debug:\n return\n\n app.after_request(chrome_log)\n", "path": "redash/handlers/chrome_logger.py"}], "after_files": [{"content": "from flask import Flask\nfrom werkzeug.contrib.fixers import ProxyFix\n\nfrom . import settings\n\n\nclass Redash(Flask):\n \"\"\"A custom Flask app for Redash\"\"\"\n def __init__(self, *args, **kwargs):\n kwargs.update({\n 'template_folder': settings.STATIC_ASSETS_PATH,\n 'static_folder': settings.STATIC_ASSETS_PATH,\n 'static_url_path': '/static',\n })\n super(Redash, self).__init__(__name__, *args, **kwargs)\n # Make sure we get the right referral address even behind proxies like nginx.\n self.wsgi_app = ProxyFix(self.wsgi_app, settings.PROXIES_COUNT)\n # Configure Redash using our settings\n self.config.from_object('redash.settings')\n\n\ndef create_app():\n from . 
import authentication, extensions, handlers, limiter, mail, migrate, security\n from .handlers.webpack import configure_webpack\n from .metrics import request as request_metrics\n from .models import db, users\n from .utils import sentry\n from .version_check import reset_new_version_status\n\n sentry.init()\n app = Redash()\n\n # Check and update the cached version for use by the client\n app.before_first_request(reset_new_version_status)\n\n security.init_app(app)\n request_metrics.init_app(app)\n db.init_app(app)\n migrate.init_app(app, db)\n mail.init_app(app)\n authentication.init_app(app)\n limiter.init_app(app)\n handlers.init_app(app)\n configure_webpack(app)\n extensions.init_app(app)\n users.init_app(app)\n\n return app\n", "path": "redash/app.py"}, {"content": null, "path": "redash/handlers/chrome_logger.py"}]}
| 1,291 | 642 |
gh_patches_debug_39704 | rasdani/github-patches | git_diff | getnikola__nikola-1667 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Sitemap indications of alternate language pages
https://support.google.com/webmasters/answer/2620865?hl=en
I do not have a multi-lingual page myself at this time, so I have no interest in implementing this. Nikola should support it, though.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `nikola/plugins/task/sitemap/__init__.py`
Content:
```
1 # -*- coding: utf-8 -*-
2
3 # Copyright © 2012-2015 Roberto Alsina and others.
4
5 # Permission is hereby granted, free of charge, to any
6 # person obtaining a copy of this software and associated
7 # documentation files (the "Software"), to deal in the
8 # Software without restriction, including without limitation
9 # the rights to use, copy, modify, merge, publish,
10 # distribute, sublicense, and/or sell copies of the
11 # Software, and to permit persons to whom the Software is
12 # furnished to do so, subject to the following conditions:
13 #
14 # The above copyright notice and this permission notice
15 # shall be included in all copies or substantial portions of
16 # the Software.
17 #
18 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
19 # KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
20 # WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
21 # PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
22 # OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
23 # OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
24 # OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
25 # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
26
27 from __future__ import print_function, absolute_import, unicode_literals
28 import io
29 import datetime
30 import os
31 try:
32 from urlparse import urljoin, urlparse
33 import robotparser as robotparser
34 except ImportError:
35 from urllib.parse import urljoin, urlparse # NOQA
36 import urllib.robotparser as robotparser # NOQA
37
38 from nikola.plugin_categories import LateTask
39 from nikola.utils import config_changed, apply_filters
40
41
42 urlset_header = """<?xml version="1.0" encoding="UTF-8"?>
43 <urlset
44 xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
45 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
46 xsi:schemaLocation="http://www.sitemaps.org/schemas/sitemap/0.9
47 http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd">
48 """
49
50 loc_format = """ <url>
51 <loc>{0}</loc>
52 <lastmod>{1}</lastmod>
53 </url>
54 """
55
56 urlset_footer = "</urlset>"
57
58 sitemapindex_header = """<?xml version="1.0" encoding="UTF-8"?>
59 <sitemapindex
60 xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
61 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
62 xsi:schemaLocation="http://www.sitemaps.org/schemas/sitemap/0.9
63 http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd">
64 """
65
66 sitemap_format = """ <sitemap>
67 <loc>{0}</loc>
68 <lastmod>{1}</lastmod>
69 </sitemap>
70 """
71
72 sitemapindex_footer = "</sitemapindex>"
73
74
75 def get_base_path(base):
76 """returns the path of a base URL if it contains one.
77
78 >>> get_base_path('http://some.site') == '/'
79 True
80 >>> get_base_path('http://some.site/') == '/'
81 True
82 >>> get_base_path('http://some.site/some/sub-path') == '/some/sub-path/'
83 True
84 >>> get_base_path('http://some.site/some/sub-path/') == '/some/sub-path/'
85 True
86 """
87 # first parse the base_url for some path
88 base_parsed = urlparse(base)
89
90 if not base_parsed.path:
91 sub_path = ''
92 else:
93 sub_path = base_parsed.path
94 if sub_path.endswith('/'):
95 return sub_path
96 else:
97 return sub_path + '/'
98
99
100 class Sitemap(LateTask):
101 """Generate a sitemap."""
102
103 name = "sitemap"
104
105 def gen_tasks(self):
106 """Generate a sitemap."""
107 kw = {
108 "base_url": self.site.config["BASE_URL"],
109 "site_url": self.site.config["SITE_URL"],
110 "output_folder": self.site.config["OUTPUT_FOLDER"],
111 "strip_indexes": self.site.config["STRIP_INDEXES"],
112 "index_file": self.site.config["INDEX_FILE"],
113 "sitemap_include_fileless_dirs": self.site.config["SITEMAP_INCLUDE_FILELESS_DIRS"],
114 "mapped_extensions": self.site.config.get('MAPPED_EXTENSIONS', ['.html', '.htm', '.xml', '.rss']),
115 "robots_exclusions": self.site.config["ROBOTS_EXCLUSIONS"],
116 "filters": self.site.config["FILTERS"],
117 }
118
119 output = kw['output_folder']
120 base_url = kw['base_url']
121 mapped_exts = kw['mapped_extensions']
122
123 output_path = kw['output_folder']
124 sitemapindex_path = os.path.join(output_path, "sitemapindex.xml")
125 sitemap_path = os.path.join(output_path, "sitemap.xml")
126 base_path = get_base_path(kw['base_url'])
127 sitemapindex = {}
128 urlset = {}
129
130 def scan_locs():
131 for root, dirs, files in os.walk(output, followlinks=True):
132 if not dirs and not files and not kw['sitemap_include_fileless_dirs']:
133 continue # Totally empty, not on sitemap
134 path = os.path.relpath(root, output)
135 # ignore the current directory.
136 path = (path.replace(os.sep, '/') + '/').replace('./', '')
137 lastmod = self.get_lastmod(root)
138 loc = urljoin(base_url, base_path + path)
139 if kw['index_file'] in files and kw['strip_indexes']: # ignore folders when not stripping urls
140 post = self.site.post_per_file.get(path + kw['index_file'])
141 if post and (post.is_draft or post.is_private or post.publish_later):
142 continue
143 urlset[loc] = loc_format.format(loc, lastmod)
144 for fname in files:
145 if kw['strip_indexes'] and fname == kw['index_file']:
146 continue # We already mapped the folder
147 if os.path.splitext(fname)[-1] in mapped_exts:
148 real_path = os.path.join(root, fname)
149 path = os.path.relpath(real_path, output)
150 if path.endswith(kw['index_file']) and kw['strip_indexes']:
151 # ignore index files when stripping urls
152 continue
153 if not robot_fetch(path):
154 continue
155 if path.endswith('.html') or path.endswith('.htm'):
156 try:
157 if u'<!doctype html' not in io.open(real_path, 'r', encoding='utf8').read(1024).lower():
158 # ignores "html" files without doctype
159 # alexa-verify, google-site-verification, etc.
160 continue
161 except UnicodeDecodeError:
162 # ignore ancient files
163 # most non-utf8 files are worthless anyways
164 continue
165 """ put RSS in sitemapindex[] instead of in urlset[], sitemap_path is included after it is generated """
166 if path.endswith('.xml') or path.endswith('.rss'):
167 filehead = io.open(real_path, 'r', encoding='utf8').read(512)
168 if u'<rss' in filehead or (u'<urlset' in filehead and path != sitemap_path):
169 path = path.replace(os.sep, '/')
170 lastmod = self.get_lastmod(real_path)
171 loc = urljoin(base_url, base_path + path)
172 sitemapindex[loc] = sitemap_format.format(loc, lastmod)
173 continue
174 else:
175 continue # ignores all XML files except those presumed to be RSS
176 post = self.site.post_per_file.get(path)
177 if post and (post.is_draft or post.is_private or post.publish_later):
178 continue
179 path = path.replace(os.sep, '/')
180 lastmod = self.get_lastmod(real_path)
181 loc = urljoin(base_url, base_path + path)
182 urlset[loc] = loc_format.format(loc, lastmod)
183
184 def robot_fetch(path):
185 for rule in kw["robots_exclusions"]:
186 robot = robotparser.RobotFileParser()
187 robot.parse(["User-Agent: *", "Disallow: {0}".format(rule)])
188 if not robot.can_fetch("*", '/' + path):
189 return False # not robot food
190 return True
191
192 def write_sitemap():
193 # Have to rescan, because files may have been added between
194 # task dep scanning and task execution
195 with io.open(sitemap_path, 'w+', encoding='utf8') as outf:
196 outf.write(urlset_header)
197 for k in sorted(urlset.keys()):
198 outf.write(urlset[k])
199 outf.write(urlset_footer)
200 sitemap_url = urljoin(base_url, base_path + "sitemap.xml")
201 sitemapindex[sitemap_url] = sitemap_format.format(sitemap_url, self.get_lastmod(sitemap_path))
202
203 def write_sitemapindex():
204 with io.open(sitemapindex_path, 'w+', encoding='utf8') as outf:
205 outf.write(sitemapindex_header)
206 for k in sorted(sitemapindex.keys()):
207 outf.write(sitemapindex[k])
208 outf.write(sitemapindex_footer)
209
210 # Yield a task to calculate the dependencies of the sitemap
211 # Other tasks can depend on this output, instead of having
212 # to scan locations.
213 def scan_locs_task():
214 scan_locs()
215
216 # Generate a list of file dependencies for the actual generation
217 # task, so rebuilds are triggered. (Issue #1032)
218 output = kw["output_folder"]
219 file_dep = []
220
221 for i in urlset.keys():
222 p = os.path.join(output, urlparse(i).path.replace(base_path, '', 1))
223 if not p.endswith('sitemap.xml') and not os.path.isdir(p):
224 file_dep.append(p)
225 if os.path.isdir(p) and os.path.exists(os.path.join(p, 'index.html')):
226 file_dep.append(p + 'index.html')
227
228 for i in sitemapindex.keys():
229 p = os.path.join(output, urlparse(i).path.replace(base_path, '', 1))
230 if not p.endswith('sitemap.xml') and not os.path.isdir(p):
231 file_dep.append(p)
232 if os.path.isdir(p) and os.path.exists(os.path.join(p, 'index.html')):
233 file_dep.append(p + 'index.html')
234
235 return {'file_dep': file_dep}
236
237 yield {
238 "basename": "_scan_locs",
239 "name": "sitemap",
240 "actions": [(scan_locs_task)]
241 }
242
243 yield self.group_task()
244 yield apply_filters({
245 "basename": "sitemap",
246 "name": sitemap_path,
247 "targets": [sitemap_path],
248 "actions": [(write_sitemap,)],
249 "uptodate": [config_changed(kw, 'nikola.plugins.task.sitemap:write')],
250 "clean": True,
251 "task_dep": ["render_site"],
252 "calc_dep": ["_scan_locs:sitemap"],
253 }, kw['filters'])
254 yield apply_filters({
255 "basename": "sitemap",
256 "name": sitemapindex_path,
257 "targets": [sitemapindex_path],
258 "actions": [(write_sitemapindex,)],
259 "uptodate": [config_changed(kw, 'nikola.plugins.task.sitemap:write_index')],
260 "clean": True,
261 "file_dep": [sitemap_path]
262 }, kw['filters'])
263
264 def get_lastmod(self, p):
265 if self.site.invariant:
266 return '2038-01-01'
267 else:
268 return datetime.datetime.fromtimestamp(os.stat(p).st_mtime).isoformat().split('T')[0]
269
270 if __name__ == '__main__':
271 import doctest
272 doctest.testmod()
273
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/nikola/plugins/task/sitemap/__init__.py b/nikola/plugins/task/sitemap/__init__.py
--- a/nikola/plugins/task/sitemap/__init__.py
+++ b/nikola/plugins/task/sitemap/__init__.py
@@ -50,6 +50,7 @@
loc_format = """ <url>
<loc>{0}</loc>
<lastmod>{1}</lastmod>
+ {2}
</url>
"""
@@ -69,6 +70,13 @@
</sitemap>
"""
+alternates_format = """<xhtml:link
+ rel="alternate"
+ hreflang="{0}"
+ href="{1}"
+ />"""
+
+
sitemapindex_footer = "</sitemapindex>"
@@ -114,6 +122,7 @@
"mapped_extensions": self.site.config.get('MAPPED_EXTENSIONS', ['.html', '.htm', '.xml', '.rss']),
"robots_exclusions": self.site.config["ROBOTS_EXCLUSIONS"],
"filters": self.site.config["FILTERS"],
+ "translations": self.site.config["TRANSLATIONS"],
}
output = kw['output_folder']
@@ -140,7 +149,14 @@
post = self.site.post_per_file.get(path + kw['index_file'])
if post and (post.is_draft or post.is_private or post.publish_later):
continue
- urlset[loc] = loc_format.format(loc, lastmod)
+ alternates = []
+ if post:
+ for lang in kw['translations']:
+ alt_url = post.permalink(lang=lang, absolute=True)
+ if loc == alt_url:
+ continue
+ alternates.append(alternates_format.format(lang, alt_url))
+ urlset[loc] = loc_format.format(loc, lastmod, '\n'.join(alternates))
for fname in files:
if kw['strip_indexes'] and fname == kw['index_file']:
continue # We already mapped the folder
@@ -179,7 +195,14 @@
path = path.replace(os.sep, '/')
lastmod = self.get_lastmod(real_path)
loc = urljoin(base_url, base_path + path)
- urlset[loc] = loc_format.format(loc, lastmod)
+ alternates = []
+ if post:
+ for lang in kw['translations']:
+ alt_url = post.permalink(lang=lang, absolute=True)
+ if loc == alt_url:
+ continue
+ alternates.append(alternates_format.format(lang, alt_url))
+ urlset[loc] = loc_format.format(loc, lastmod, '\n'.join(alternates))
def robot_fetch(path):
for rule in kw["robots_exclusions"]:
|
{"golden_diff": "diff --git a/nikola/plugins/task/sitemap/__init__.py b/nikola/plugins/task/sitemap/__init__.py\n--- a/nikola/plugins/task/sitemap/__init__.py\n+++ b/nikola/plugins/task/sitemap/__init__.py\n@@ -50,6 +50,7 @@\n loc_format = \"\"\" <url>\n <loc>{0}</loc>\n <lastmod>{1}</lastmod>\n+ {2}\n </url>\n \"\"\"\n \n@@ -69,6 +70,13 @@\n </sitemap>\n \"\"\"\n \n+alternates_format = \"\"\"<xhtml:link\n+ rel=\"alternate\"\n+ hreflang=\"{0}\"\n+ href=\"{1}\"\n+ />\"\"\"\n+\n+\n sitemapindex_footer = \"</sitemapindex>\"\n \n \n@@ -114,6 +122,7 @@\n \"mapped_extensions\": self.site.config.get('MAPPED_EXTENSIONS', ['.html', '.htm', '.xml', '.rss']),\n \"robots_exclusions\": self.site.config[\"ROBOTS_EXCLUSIONS\"],\n \"filters\": self.site.config[\"FILTERS\"],\n+ \"translations\": self.site.config[\"TRANSLATIONS\"],\n }\n \n output = kw['output_folder']\n@@ -140,7 +149,14 @@\n post = self.site.post_per_file.get(path + kw['index_file'])\n if post and (post.is_draft or post.is_private or post.publish_later):\n continue\n- urlset[loc] = loc_format.format(loc, lastmod)\n+ alternates = []\n+ if post:\n+ for lang in kw['translations']:\n+ alt_url = post.permalink(lang=lang, absolute=True)\n+ if loc == alt_url:\n+ continue\n+ alternates.append(alternates_format.format(lang, alt_url))\n+ urlset[loc] = loc_format.format(loc, lastmod, '\\n'.join(alternates))\n for fname in files:\n if kw['strip_indexes'] and fname == kw['index_file']:\n continue # We already mapped the folder\n@@ -179,7 +195,14 @@\n path = path.replace(os.sep, '/')\n lastmod = self.get_lastmod(real_path)\n loc = urljoin(base_url, base_path + path)\n- urlset[loc] = loc_format.format(loc, lastmod)\n+ alternates = []\n+ if post:\n+ for lang in kw['translations']:\n+ alt_url = post.permalink(lang=lang, absolute=True)\n+ if loc == alt_url:\n+ continue\n+ alternates.append(alternates_format.format(lang, alt_url))\n+ urlset[loc] = loc_format.format(loc, lastmod, '\\n'.join(alternates))\n \n def robot_fetch(path):\n for rule in kw[\"robots_exclusions\"]:\n", "issue": "Sitemap indications of alternate language pages\nhttps://support.google.com/webmasters/answer/2620865?hl=en\n\nI do not have a multi-lingual page myself at this time, so I have no interest in implementing this. Nikola should support it, though.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2015 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\nfrom __future__ import print_function, absolute_import, unicode_literals\nimport io\nimport datetime\nimport os\ntry:\n from urlparse import urljoin, urlparse\n import robotparser as robotparser\nexcept ImportError:\n from urllib.parse import urljoin, urlparse # NOQA\n import urllib.robotparser as robotparser # NOQA\n\nfrom nikola.plugin_categories import LateTask\nfrom nikola.utils import config_changed, apply_filters\n\n\nurlset_header = \"\"\"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<urlset\n xmlns=\"http://www.sitemaps.org/schemas/sitemap/0.9\"\n xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n xsi:schemaLocation=\"http://www.sitemaps.org/schemas/sitemap/0.9\n http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd\">\n\"\"\"\n\nloc_format = \"\"\" <url>\n <loc>{0}</loc>\n <lastmod>{1}</lastmod>\n </url>\n\"\"\"\n\nurlset_footer = \"</urlset>\"\n\nsitemapindex_header = \"\"\"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<sitemapindex\n xmlns=\"http://www.sitemaps.org/schemas/sitemap/0.9\"\n xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n xsi:schemaLocation=\"http://www.sitemaps.org/schemas/sitemap/0.9\n http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd\">\n\"\"\"\n\nsitemap_format = \"\"\" <sitemap>\n <loc>{0}</loc>\n <lastmod>{1}</lastmod>\n </sitemap>\n\"\"\"\n\nsitemapindex_footer = \"</sitemapindex>\"\n\n\ndef get_base_path(base):\n \"\"\"returns the path of a base URL if it contains one.\n\n >>> get_base_path('http://some.site') == '/'\n True\n >>> get_base_path('http://some.site/') == '/'\n True\n >>> get_base_path('http://some.site/some/sub-path') == '/some/sub-path/'\n True\n >>> get_base_path('http://some.site/some/sub-path/') == '/some/sub-path/'\n True\n \"\"\"\n # first parse the base_url for some path\n base_parsed = urlparse(base)\n\n if not base_parsed.path:\n sub_path = ''\n else:\n sub_path = base_parsed.path\n if sub_path.endswith('/'):\n return sub_path\n else:\n return sub_path + '/'\n\n\nclass Sitemap(LateTask):\n \"\"\"Generate a sitemap.\"\"\"\n\n name = \"sitemap\"\n\n def gen_tasks(self):\n \"\"\"Generate a sitemap.\"\"\"\n kw = {\n \"base_url\": self.site.config[\"BASE_URL\"],\n \"site_url\": self.site.config[\"SITE_URL\"],\n \"output_folder\": self.site.config[\"OUTPUT_FOLDER\"],\n \"strip_indexes\": self.site.config[\"STRIP_INDEXES\"],\n \"index_file\": self.site.config[\"INDEX_FILE\"],\n \"sitemap_include_fileless_dirs\": self.site.config[\"SITEMAP_INCLUDE_FILELESS_DIRS\"],\n \"mapped_extensions\": self.site.config.get('MAPPED_EXTENSIONS', ['.html', '.htm', '.xml', '.rss']),\n \"robots_exclusions\": self.site.config[\"ROBOTS_EXCLUSIONS\"],\n \"filters\": self.site.config[\"FILTERS\"],\n }\n\n output = kw['output_folder']\n base_url = kw['base_url']\n mapped_exts = kw['mapped_extensions']\n\n output_path = kw['output_folder']\n sitemapindex_path = os.path.join(output_path, \"sitemapindex.xml\")\n sitemap_path = os.path.join(output_path, \"sitemap.xml\")\n base_path = get_base_path(kw['base_url'])\n sitemapindex = {}\n urlset = {}\n\n def scan_locs():\n for root, dirs, files in os.walk(output, followlinks=True):\n if not dirs and not files and not kw['sitemap_include_fileless_dirs']:\n continue # Totally empty, not on sitemap\n path = os.path.relpath(root, 
output)\n # ignore the current directory.\n path = (path.replace(os.sep, '/') + '/').replace('./', '')\n lastmod = self.get_lastmod(root)\n loc = urljoin(base_url, base_path + path)\n if kw['index_file'] in files and kw['strip_indexes']: # ignore folders when not stripping urls\n post = self.site.post_per_file.get(path + kw['index_file'])\n if post and (post.is_draft or post.is_private or post.publish_later):\n continue\n urlset[loc] = loc_format.format(loc, lastmod)\n for fname in files:\n if kw['strip_indexes'] and fname == kw['index_file']:\n continue # We already mapped the folder\n if os.path.splitext(fname)[-1] in mapped_exts:\n real_path = os.path.join(root, fname)\n path = os.path.relpath(real_path, output)\n if path.endswith(kw['index_file']) and kw['strip_indexes']:\n # ignore index files when stripping urls\n continue\n if not robot_fetch(path):\n continue\n if path.endswith('.html') or path.endswith('.htm'):\n try:\n if u'<!doctype html' not in io.open(real_path, 'r', encoding='utf8').read(1024).lower():\n # ignores \"html\" files without doctype\n # alexa-verify, google-site-verification, etc.\n continue\n except UnicodeDecodeError:\n # ignore ancient files\n # most non-utf8 files are worthless anyways\n continue\n \"\"\" put RSS in sitemapindex[] instead of in urlset[], sitemap_path is included after it is generated \"\"\"\n if path.endswith('.xml') or path.endswith('.rss'):\n filehead = io.open(real_path, 'r', encoding='utf8').read(512)\n if u'<rss' in filehead or (u'<urlset' in filehead and path != sitemap_path):\n path = path.replace(os.sep, '/')\n lastmod = self.get_lastmod(real_path)\n loc = urljoin(base_url, base_path + path)\n sitemapindex[loc] = sitemap_format.format(loc, lastmod)\n continue\n else:\n continue # ignores all XML files except those presumed to be RSS\n post = self.site.post_per_file.get(path)\n if post and (post.is_draft or post.is_private or post.publish_later):\n continue\n path = path.replace(os.sep, '/')\n lastmod = self.get_lastmod(real_path)\n loc = urljoin(base_url, base_path + path)\n urlset[loc] = loc_format.format(loc, lastmod)\n\n def robot_fetch(path):\n for rule in kw[\"robots_exclusions\"]:\n robot = robotparser.RobotFileParser()\n robot.parse([\"User-Agent: *\", \"Disallow: {0}\".format(rule)])\n if not robot.can_fetch(\"*\", '/' + path):\n return False # not robot food\n return True\n\n def write_sitemap():\n # Have to rescan, because files may have been added between\n # task dep scanning and task execution\n with io.open(sitemap_path, 'w+', encoding='utf8') as outf:\n outf.write(urlset_header)\n for k in sorted(urlset.keys()):\n outf.write(urlset[k])\n outf.write(urlset_footer)\n sitemap_url = urljoin(base_url, base_path + \"sitemap.xml\")\n sitemapindex[sitemap_url] = sitemap_format.format(sitemap_url, self.get_lastmod(sitemap_path))\n\n def write_sitemapindex():\n with io.open(sitemapindex_path, 'w+', encoding='utf8') as outf:\n outf.write(sitemapindex_header)\n for k in sorted(sitemapindex.keys()):\n outf.write(sitemapindex[k])\n outf.write(sitemapindex_footer)\n\n # Yield a task to calculate the dependencies of the sitemap\n # Other tasks can depend on this output, instead of having\n # to scan locations.\n def scan_locs_task():\n scan_locs()\n\n # Generate a list of file dependencies for the actual generation\n # task, so rebuilds are triggered. 
(Issue #1032)\n output = kw[\"output_folder\"]\n file_dep = []\n\n for i in urlset.keys():\n p = os.path.join(output, urlparse(i).path.replace(base_path, '', 1))\n if not p.endswith('sitemap.xml') and not os.path.isdir(p):\n file_dep.append(p)\n if os.path.isdir(p) and os.path.exists(os.path.join(p, 'index.html')):\n file_dep.append(p + 'index.html')\n\n for i in sitemapindex.keys():\n p = os.path.join(output, urlparse(i).path.replace(base_path, '', 1))\n if not p.endswith('sitemap.xml') and not os.path.isdir(p):\n file_dep.append(p)\n if os.path.isdir(p) and os.path.exists(os.path.join(p, 'index.html')):\n file_dep.append(p + 'index.html')\n\n return {'file_dep': file_dep}\n\n yield {\n \"basename\": \"_scan_locs\",\n \"name\": \"sitemap\",\n \"actions\": [(scan_locs_task)]\n }\n\n yield self.group_task()\n yield apply_filters({\n \"basename\": \"sitemap\",\n \"name\": sitemap_path,\n \"targets\": [sitemap_path],\n \"actions\": [(write_sitemap,)],\n \"uptodate\": [config_changed(kw, 'nikola.plugins.task.sitemap:write')],\n \"clean\": True,\n \"task_dep\": [\"render_site\"],\n \"calc_dep\": [\"_scan_locs:sitemap\"],\n }, kw['filters'])\n yield apply_filters({\n \"basename\": \"sitemap\",\n \"name\": sitemapindex_path,\n \"targets\": [sitemapindex_path],\n \"actions\": [(write_sitemapindex,)],\n \"uptodate\": [config_changed(kw, 'nikola.plugins.task.sitemap:write_index')],\n \"clean\": True,\n \"file_dep\": [sitemap_path]\n }, kw['filters'])\n\n def get_lastmod(self, p):\n if self.site.invariant:\n return '2038-01-01'\n else:\n return datetime.datetime.fromtimestamp(os.stat(p).st_mtime).isoformat().split('T')[0]\n\nif __name__ == '__main__':\n import doctest\n doctest.testmod()\n", "path": "nikola/plugins/task/sitemap/__init__.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2015 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\nfrom __future__ import print_function, absolute_import, unicode_literals\nimport io\nimport datetime\nimport os\ntry:\n from urlparse import urljoin, urlparse\n import robotparser as robotparser\nexcept ImportError:\n from urllib.parse import urljoin, urlparse # NOQA\n import urllib.robotparser as robotparser # NOQA\n\nfrom nikola.plugin_categories import LateTask\nfrom nikola.utils import config_changed, apply_filters\n\n\nurlset_header = \"\"\"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<urlset\n xmlns=\"http://www.sitemaps.org/schemas/sitemap/0.9\"\n xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n xsi:schemaLocation=\"http://www.sitemaps.org/schemas/sitemap/0.9\n http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd\">\n\"\"\"\n\nloc_format = \"\"\" <url>\n <loc>{0}</loc>\n <lastmod>{1}</lastmod>\n {2}\n </url>\n\"\"\"\n\nurlset_footer = \"</urlset>\"\n\nsitemapindex_header = \"\"\"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<sitemapindex\n xmlns=\"http://www.sitemaps.org/schemas/sitemap/0.9\"\n xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n xsi:schemaLocation=\"http://www.sitemaps.org/schemas/sitemap/0.9\n http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd\">\n\"\"\"\n\nsitemap_format = \"\"\" <sitemap>\n <loc>{0}</loc>\n <lastmod>{1}</lastmod>\n </sitemap>\n\"\"\"\n\nalternates_format = \"\"\"<xhtml:link\n rel=\"alternate\"\n hreflang=\"{0}\"\n href=\"{1}\"\n />\"\"\"\n\n\nsitemapindex_footer = \"</sitemapindex>\"\n\n\ndef get_base_path(base):\n \"\"\"returns the path of a base URL if it contains one.\n\n >>> get_base_path('http://some.site') == '/'\n True\n >>> get_base_path('http://some.site/') == '/'\n True\n >>> get_base_path('http://some.site/some/sub-path') == '/some/sub-path/'\n True\n >>> get_base_path('http://some.site/some/sub-path/') == '/some/sub-path/'\n True\n \"\"\"\n # first parse the base_url for some path\n base_parsed = urlparse(base)\n\n if not base_parsed.path:\n sub_path = ''\n else:\n sub_path = base_parsed.path\n if sub_path.endswith('/'):\n return sub_path\n else:\n return sub_path + '/'\n\n\nclass Sitemap(LateTask):\n \"\"\"Generate a sitemap.\"\"\"\n\n name = \"sitemap\"\n\n def gen_tasks(self):\n \"\"\"Generate a sitemap.\"\"\"\n kw = {\n \"base_url\": self.site.config[\"BASE_URL\"],\n \"site_url\": self.site.config[\"SITE_URL\"],\n \"output_folder\": self.site.config[\"OUTPUT_FOLDER\"],\n \"strip_indexes\": self.site.config[\"STRIP_INDEXES\"],\n \"index_file\": self.site.config[\"INDEX_FILE\"],\n \"sitemap_include_fileless_dirs\": self.site.config[\"SITEMAP_INCLUDE_FILELESS_DIRS\"],\n \"mapped_extensions\": self.site.config.get('MAPPED_EXTENSIONS', ['.html', '.htm', '.xml', '.rss']),\n \"robots_exclusions\": self.site.config[\"ROBOTS_EXCLUSIONS\"],\n \"filters\": self.site.config[\"FILTERS\"],\n \"translations\": self.site.config[\"TRANSLATIONS\"],\n }\n\n output = kw['output_folder']\n base_url = kw['base_url']\n mapped_exts = kw['mapped_extensions']\n\n output_path = kw['output_folder']\n sitemapindex_path = os.path.join(output_path, \"sitemapindex.xml\")\n sitemap_path = os.path.join(output_path, \"sitemap.xml\")\n base_path = get_base_path(kw['base_url'])\n sitemapindex = {}\n urlset = {}\n\n def scan_locs():\n for root, dirs, files in os.walk(output, 
followlinks=True):\n if not dirs and not files and not kw['sitemap_include_fileless_dirs']:\n continue # Totally empty, not on sitemap\n path = os.path.relpath(root, output)\n # ignore the current directory.\n path = (path.replace(os.sep, '/') + '/').replace('./', '')\n lastmod = self.get_lastmod(root)\n loc = urljoin(base_url, base_path + path)\n if kw['index_file'] in files and kw['strip_indexes']: # ignore folders when not stripping urls\n post = self.site.post_per_file.get(path + kw['index_file'])\n if post and (post.is_draft or post.is_private or post.publish_later):\n continue\n alternates = []\n if post:\n for lang in kw['translations']:\n alt_url = post.permalink(lang=lang, absolute=True)\n if loc == alt_url:\n continue\n alternates.append(alternates_format.format(lang, alt_url))\n urlset[loc] = loc_format.format(loc, lastmod, '\\n'.join(alternates))\n for fname in files:\n if kw['strip_indexes'] and fname == kw['index_file']:\n continue # We already mapped the folder\n if os.path.splitext(fname)[-1] in mapped_exts:\n real_path = os.path.join(root, fname)\n path = os.path.relpath(real_path, output)\n if path.endswith(kw['index_file']) and kw['strip_indexes']:\n # ignore index files when stripping urls\n continue\n if not robot_fetch(path):\n continue\n if path.endswith('.html') or path.endswith('.htm'):\n try:\n if u'<!doctype html' not in io.open(real_path, 'r', encoding='utf8').read(1024).lower():\n # ignores \"html\" files without doctype\n # alexa-verify, google-site-verification, etc.\n continue\n except UnicodeDecodeError:\n # ignore ancient files\n # most non-utf8 files are worthless anyways\n continue\n \"\"\" put RSS in sitemapindex[] instead of in urlset[], sitemap_path is included after it is generated \"\"\"\n if path.endswith('.xml') or path.endswith('.rss'):\n filehead = io.open(real_path, 'r', encoding='utf8').read(512)\n if u'<rss' in filehead or (u'<urlset' in filehead and path != sitemap_path):\n path = path.replace(os.sep, '/')\n lastmod = self.get_lastmod(real_path)\n loc = urljoin(base_url, base_path + path)\n sitemapindex[loc] = sitemap_format.format(loc, lastmod)\n continue\n else:\n continue # ignores all XML files except those presumed to be RSS\n post = self.site.post_per_file.get(path)\n if post and (post.is_draft or post.is_private or post.publish_later):\n continue\n path = path.replace(os.sep, '/')\n lastmod = self.get_lastmod(real_path)\n loc = urljoin(base_url, base_path + path)\n alternates = []\n if post:\n for lang in kw['translations']:\n alt_url = post.permalink(lang=lang, absolute=True)\n if loc == alt_url:\n continue\n alternates.append(alternates_format.format(lang, alt_url))\n urlset[loc] = loc_format.format(loc, lastmod, '\\n'.join(alternates))\n\n def robot_fetch(path):\n for rule in kw[\"robots_exclusions\"]:\n robot = robotparser.RobotFileParser()\n robot.parse([\"User-Agent: *\", \"Disallow: {0}\".format(rule)])\n if not robot.can_fetch(\"*\", '/' + path):\n return False # not robot food\n return True\n\n def write_sitemap():\n # Have to rescan, because files may have been added between\n # task dep scanning and task execution\n with io.open(sitemap_path, 'w+', encoding='utf8') as outf:\n outf.write(urlset_header)\n for k in sorted(urlset.keys()):\n outf.write(urlset[k])\n outf.write(urlset_footer)\n sitemap_url = urljoin(base_url, base_path + \"sitemap.xml\")\n sitemapindex[sitemap_url] = sitemap_format.format(sitemap_url, self.get_lastmod(sitemap_path))\n\n def write_sitemapindex():\n with io.open(sitemapindex_path, 'w+', 
encoding='utf8') as outf:\n outf.write(sitemapindex_header)\n for k in sorted(sitemapindex.keys()):\n outf.write(sitemapindex[k])\n outf.write(sitemapindex_footer)\n\n # Yield a task to calculate the dependencies of the sitemap\n # Other tasks can depend on this output, instead of having\n # to scan locations.\n def scan_locs_task():\n scan_locs()\n\n # Generate a list of file dependencies for the actual generation\n # task, so rebuilds are triggered. (Issue #1032)\n output = kw[\"output_folder\"]\n file_dep = []\n\n for i in urlset.keys():\n p = os.path.join(output, urlparse(i).path.replace(base_path, '', 1))\n if not p.endswith('sitemap.xml') and not os.path.isdir(p):\n file_dep.append(p)\n if os.path.isdir(p) and os.path.exists(os.path.join(p, 'index.html')):\n file_dep.append(p + 'index.html')\n\n for i in sitemapindex.keys():\n p = os.path.join(output, urlparse(i).path.replace(base_path, '', 1))\n if not p.endswith('sitemap.xml') and not os.path.isdir(p):\n file_dep.append(p)\n if os.path.isdir(p) and os.path.exists(os.path.join(p, 'index.html')):\n file_dep.append(p + 'index.html')\n\n return {'file_dep': file_dep}\n\n yield {\n \"basename\": \"_scan_locs\",\n \"name\": \"sitemap\",\n \"actions\": [(scan_locs_task)]\n }\n\n yield self.group_task()\n yield apply_filters({\n \"basename\": \"sitemap\",\n \"name\": sitemap_path,\n \"targets\": [sitemap_path],\n \"actions\": [(write_sitemap,)],\n \"uptodate\": [config_changed(kw, 'nikola.plugins.task.sitemap:write')],\n \"clean\": True,\n \"task_dep\": [\"render_site\"],\n \"calc_dep\": [\"_scan_locs:sitemap\"],\n }, kw['filters'])\n yield apply_filters({\n \"basename\": \"sitemap\",\n \"name\": sitemapindex_path,\n \"targets\": [sitemapindex_path],\n \"actions\": [(write_sitemapindex,)],\n \"uptodate\": [config_changed(kw, 'nikola.plugins.task.sitemap:write_index')],\n \"clean\": True,\n \"file_dep\": [sitemap_path]\n }, kw['filters'])\n\n def get_lastmod(self, p):\n if self.site.invariant:\n return '2038-01-01'\n else:\n return datetime.datetime.fromtimestamp(os.stat(p).st_mtime).isoformat().split('T')[0]\n\nif __name__ == '__main__':\n import doctest\n doctest.testmod()\n", "path": "nikola/plugins/task/sitemap/__init__.py"}]}
| 3,559 | 610 |
gh_patches_debug_35467 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-2019 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Typecasting column to JSON_List/Map doesn't work.
## To Reproduce
<!-- How can we recreate this bug? Please try to provide a Minimal, Complete, and Verifiable (http://stackoverflow.com/help/mcve) example if code-related. -->
1. Create an empty table.
2. Create a new column and add a JSON array to it.
3. Try to typecast the column to JSON List.
4. Notice there is no error/change to the data type of the column.
## Additional context
<!-- Add any other context about the problem or screenshots here. -->

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mathesar/api/serializers/columns.py`
Content:
```
1 from rest_framework import serializers
2 from rest_framework.exceptions import ValidationError
3 from rest_framework.fields import empty, SerializerMethodField
4 from rest_framework.settings import api_settings
5
6 from mathesar.api.exceptions.mixins import MathesarErrorMessageMixin
7 from mathesar.api.serializers.shared_serializers import (
8 DisplayOptionsMappingSerializer,
9 DISPLAY_OPTIONS_SERIALIZER_MAPPING_KEY,
10 )
11 from mathesar.models.base import Column
12 from db.types.operations.convert import get_db_type_enum_from_id
13
14
15 class InputValueField(serializers.CharField):
16 """
17 Takes in an arbitrary value. Emulates the record creation endpoint,
18 which takes in arbitrary values (un-validated and un-processed request.data).
19 This field replicates that behavior in a serializer.
20 """
21
22 def to_internal_value(self, data):
23 return data
24
25 def to_representation(self, value):
26 return value
27
28
29 class TypeOptionSerializer(MathesarErrorMessageMixin, serializers.Serializer):
30 length = serializers.IntegerField(required=False)
31 precision = serializers.IntegerField(required=False)
32 scale = serializers.IntegerField(required=False)
33 fields = serializers.CharField(required=False)
34
35 def validate(self, attrs):
36 if attrs.get('scale', None) is not None and attrs.get('precision', None) is None:
37 attrs['precision'] = 1000
38 return super().validate(attrs)
39
40 def run_validation(self, data=empty):
41 # Ensure that there are no unknown type options passed in.
42 if data is not empty and data is not None:
43 unknown = set(data) - set(self.fields)
44 if unknown:
45 errors = ['Unknown field: {}'.format(field) for field in unknown]
46 raise serializers.ValidationError({
47 api_settings.NON_FIELD_ERRORS_KEY: errors,
48 })
49
50 return super(TypeOptionSerializer, self).run_validation(data)
51
52
53 TYPE_KEY = 'type'
54 DISPLAY_OPTIONS_KEY = 'display_options'
55
56
57 class SimpleColumnSerializer(MathesarErrorMessageMixin, serializers.ModelSerializer):
58 class Meta:
59 model = Column
60 fields = ('id',
61 'name',
62 TYPE_KEY,
63 'type_options',
64 DISPLAY_OPTIONS_KEY,
65 )
66 id = serializers.IntegerField(required=False)
67 name = serializers.CharField()
68 # TODO consider renaming type and type_options to db_type and db_type_options
69 # The name of below attribute should match value of TYPE_KEY
70 type = serializers.CharField()
71 type_options = TypeOptionSerializer(required=False, allow_null=True)
72 # The name of below attribute should match value of DISPLAY_OPTIONS_KEY
73 display_options = DisplayOptionsMappingSerializer(required=False, allow_null=True)
74
75 def to_representation(self, instance):
76 if isinstance(instance, dict):
77 db_type_id = instance.get(TYPE_KEY)
78 db_type = get_db_type_enum_from_id(db_type_id)
79 else:
80 db_type = instance.db_type
81 # TODO replace or remove this assert before production
82 assert db_type is not None
83 self.context[DISPLAY_OPTIONS_SERIALIZER_MAPPING_KEY] = db_type
84 representation = super().to_representation(instance)
85 _force_canonical_type(representation, db_type)
86 return representation
87
88 def to_internal_value(self, data):
89 if self.partial and TYPE_KEY not in data:
90 db_type = getattr(self.instance, 'db_type', None)
91 else:
92 db_type_id = data.get(TYPE_KEY, None)
93 db_type = get_db_type_enum_from_id(db_type_id) if db_type_id else None
94 self.context[DISPLAY_OPTIONS_SERIALIZER_MAPPING_KEY] = db_type
95 return super().to_internal_value(data)
96
97
98 def _force_canonical_type(representation, db_type):
99 """
100 Sometimes the representation's TYPE_KEY attribute will also include type option information
101 (e.g. `numeric(3, 5)`). We override the attribute's value to a canonical type id.
102
103 This might be better solved upstream, but since our Column model subclasses SA's Column,
104 overriding its TYPE_KEY attribute, might interfere with SA's workings.
105 """
106 representation[TYPE_KEY] = db_type.id
107 return representation
108
109
110 class ColumnDefaultSerializer(MathesarErrorMessageMixin, serializers.Serializer):
111 value = InputValueField()
112 is_dynamic = serializers.BooleanField(read_only=True)
113
114
115 class ColumnSerializer(SimpleColumnSerializer):
116 class Meta(SimpleColumnSerializer.Meta):
117 fields = SimpleColumnSerializer.Meta.fields + (
118 'nullable',
119 'primary_key',
120 'source_column',
121 'copy_source_data',
122 'copy_source_constraints',
123 'valid_target_types',
124 'default',
125 'has_dependents',
126 )
127 model_fields = (DISPLAY_OPTIONS_KEY,)
128
129 name = serializers.CharField(required=False, allow_blank=True)
130
131 # From scratch fields
132 type = serializers.CharField(required=False)
133 nullable = serializers.BooleanField(default=True)
134 primary_key = serializers.BooleanField(default=False)
135 default = ColumnDefaultSerializer(
136 source='column_default_dict', required=False, allow_null=True, default=None
137 )
138
139 # From duplication fields
140 source_column = serializers.PrimaryKeyRelatedField(queryset=Column.current_objects.all(), required=False, write_only=True)
141 copy_source_data = serializers.BooleanField(default=True, write_only=True)
142 copy_source_constraints = serializers.BooleanField(default=True, write_only=True)
143
144 # Read only fields
145 valid_target_types = SerializerMethodField(method_name='get_valid_target_types', read_only=True)
146
147 def validate(self, data):
148 data = super().validate(data)
149 # Reevaluate column display options based on the new column type.
150 if TYPE_KEY in data and DISPLAY_OPTIONS_KEY not in data:
151 if self.instance:
152 db_type = getattr(self.instance, 'db_type', None)
153 # Invalidate display_options if type has been changed
154 if db_type is not None:
155 if str(db_type.id) != data[TYPE_KEY]:
156 data[DISPLAY_OPTIONS_KEY] = None
157 else:
158 data[DISPLAY_OPTIONS_KEY] = None
159 if not self.partial:
160 from_scratch_required_fields = [TYPE_KEY]
161 from_scratch_specific_fields = [TYPE_KEY, 'nullable', 'primary_key']
162 from_dupe_required_fields = ['source_column']
163 from_dupe_specific_fields = ['source_column', 'copy_source_data',
164 'copy_source_constraints']
165
166 # Note that we run validation on self.initial_data, as `data` has defaults
167 # filled in for fields that weren't specified by the request
168 from_scratch_required_all = all([
169 f in self.initial_data for f in from_scratch_required_fields
170 ])
171 from_scratch_specific_in = [
172 f for f in from_scratch_specific_fields if f in self.initial_data
173 ]
174 from_dupe_required_all = all([
175 f in self.initial_data for f in from_dupe_required_fields
176 ])
177 from_dupe_specific_in = [
178 f for f in from_dupe_specific_fields if f in self.initial_data
179 ]
180
181 if len(from_dupe_specific_in) and len(from_scratch_specific_in):
182 raise ValidationError(
183 f'{from_scratch_specific_in} cannot be passed in if '
184 f'{from_dupe_specific_in} has also been passed in.'
185 )
186 elif not from_dupe_required_all and not from_scratch_required_all:
187 # We default to from scratch required fields if no fields are passed
188 if len(from_dupe_specific_in) and not len(from_scratch_specific_in):
189 required_fields = from_dupe_required_fields
190 else:
191 required_fields = from_scratch_required_fields
192 raise ValidationError({
193 f: ['This field is required.']
194 for f in required_fields
195 if f not in self.initial_data
196 })
197 return data
198
199 @property
200 def validated_model_fields(self):
201 return {key: self.validated_data[key] for key in self.validated_data if key in self.Meta.model_fields}
202
203 def get_valid_target_types(self, column):
204 valid_target_types = column.valid_target_types
205 if valid_target_types:
206 valid_target_type_ids = tuple(
207 db_type.id for db_type in valid_target_types
208 )
209 return valid_target_type_ids
210
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mathesar/api/serializers/columns.py b/mathesar/api/serializers/columns.py
--- a/mathesar/api/serializers/columns.py
+++ b/mathesar/api/serializers/columns.py
@@ -1,4 +1,4 @@
-from rest_framework import serializers
+from rest_framework import serializers, status
from rest_framework.exceptions import ValidationError
from rest_framework.fields import empty, SerializerMethodField
from rest_framework.settings import api_settings
@@ -8,6 +8,10 @@
DisplayOptionsMappingSerializer,
DISPLAY_OPTIONS_SERIALIZER_MAPPING_KEY,
)
+from mathesar.api.exceptions.database_exceptions import (
+ exceptions as database_api_exceptions,
+)
+from db.columns.exceptions import InvalidTypeError
from mathesar.models.base import Column
from db.types.operations.convert import get_db_type_enum_from_id
@@ -147,15 +151,20 @@
def validate(self, data):
data = super().validate(data)
# Reevaluate column display options based on the new column type.
- if TYPE_KEY in data and DISPLAY_OPTIONS_KEY not in data:
- if self.instance:
+ if TYPE_KEY in data and self.instance:
+ db_type = get_db_type_enum_from_id(data[TYPE_KEY].lower())
+ target_types = self.instance.valid_target_types
+ if db_type not in target_types:
+ raise database_api_exceptions.InvalidTypeCastAPIException(
+ InvalidTypeError,
+ status_code=status.HTTP_400_BAD_REQUEST
+ )
+ if DISPLAY_OPTIONS_KEY not in data:
db_type = getattr(self.instance, 'db_type', None)
# Invalidate display_options if type has been changed
if db_type is not None:
if str(db_type.id) != data[TYPE_KEY]:
data[DISPLAY_OPTIONS_KEY] = None
- else:
- data[DISPLAY_OPTIONS_KEY] = None
if not self.partial:
from_scratch_required_fields = [TYPE_KEY]
from_scratch_specific_fields = [TYPE_KEY, 'nullable', 'primary_key']
|
{"golden_diff": "diff --git a/mathesar/api/serializers/columns.py b/mathesar/api/serializers/columns.py\n--- a/mathesar/api/serializers/columns.py\n+++ b/mathesar/api/serializers/columns.py\n@@ -1,4 +1,4 @@\n-from rest_framework import serializers\n+from rest_framework import serializers, status\n from rest_framework.exceptions import ValidationError\n from rest_framework.fields import empty, SerializerMethodField\n from rest_framework.settings import api_settings\n@@ -8,6 +8,10 @@\n DisplayOptionsMappingSerializer,\n DISPLAY_OPTIONS_SERIALIZER_MAPPING_KEY,\n )\n+from mathesar.api.exceptions.database_exceptions import (\n+ exceptions as database_api_exceptions,\n+)\n+from db.columns.exceptions import InvalidTypeError\n from mathesar.models.base import Column\n from db.types.operations.convert import get_db_type_enum_from_id\n \n@@ -147,15 +151,20 @@\n def validate(self, data):\n data = super().validate(data)\n # Reevaluate column display options based on the new column type.\n- if TYPE_KEY in data and DISPLAY_OPTIONS_KEY not in data:\n- if self.instance:\n+ if TYPE_KEY in data and self.instance:\n+ db_type = get_db_type_enum_from_id(data[TYPE_KEY].lower())\n+ target_types = self.instance.valid_target_types\n+ if db_type not in target_types:\n+ raise database_api_exceptions.InvalidTypeCastAPIException(\n+ InvalidTypeError,\n+ status_code=status.HTTP_400_BAD_REQUEST\n+ )\n+ if DISPLAY_OPTIONS_KEY not in data:\n db_type = getattr(self.instance, 'db_type', None)\n # Invalidate display_options if type has been changed\n if db_type is not None:\n if str(db_type.id) != data[TYPE_KEY]:\n data[DISPLAY_OPTIONS_KEY] = None\n- else:\n- data[DISPLAY_OPTIONS_KEY] = None\n if not self.partial:\n from_scratch_required_fields = [TYPE_KEY]\n from_scratch_specific_fields = [TYPE_KEY, 'nullable', 'primary_key']\n", "issue": "Typecating column to JSON_List/Map doesn't work.\n## To Reproduce\r\n<!-- How can we recreate this bug? Please try to provide a Minimal, Complete, and Verifiable (http://stackoverflow.com/help/mcve) example if code-related. -->\r\n\r\n1. Create an empty table.\r\n2. Create a new column and add a JSON array to it.\r\n3. Try to typecast the column to JSON List.\r\n4. Notice there is no error/change to the data type of the column.\r\n\r\n\r\n## Additional context\r\n<!-- Add any other context about the problem or screenshots here. -->\r\n\r\n\n", "before_files": [{"content": "from rest_framework import serializers\nfrom rest_framework.exceptions import ValidationError\nfrom rest_framework.fields import empty, SerializerMethodField\nfrom rest_framework.settings import api_settings\n\nfrom mathesar.api.exceptions.mixins import MathesarErrorMessageMixin\nfrom mathesar.api.serializers.shared_serializers import (\n DisplayOptionsMappingSerializer,\n DISPLAY_OPTIONS_SERIALIZER_MAPPING_KEY,\n)\nfrom mathesar.models.base import Column\nfrom db.types.operations.convert import get_db_type_enum_from_id\n\n\nclass InputValueField(serializers.CharField):\n \"\"\"\n Takes in an arbitrary value. 
Emulates the record creation endpoint,\n which takes in arbitrary values (un-validated and un-processed request.data).\n This field replicates that behavior in a serializer.\n \"\"\"\n\n def to_internal_value(self, data):\n return data\n\n def to_representation(self, value):\n return value\n\n\nclass TypeOptionSerializer(MathesarErrorMessageMixin, serializers.Serializer):\n length = serializers.IntegerField(required=False)\n precision = serializers.IntegerField(required=False)\n scale = serializers.IntegerField(required=False)\n fields = serializers.CharField(required=False)\n\n def validate(self, attrs):\n if attrs.get('scale', None) is not None and attrs.get('precision', None) is None:\n attrs['precision'] = 1000\n return super().validate(attrs)\n\n def run_validation(self, data=empty):\n # Ensure that there are no unknown type options passed in.\n if data is not empty and data is not None:\n unknown = set(data) - set(self.fields)\n if unknown:\n errors = ['Unknown field: {}'.format(field) for field in unknown]\n raise serializers.ValidationError({\n api_settings.NON_FIELD_ERRORS_KEY: errors,\n })\n\n return super(TypeOptionSerializer, self).run_validation(data)\n\n\nTYPE_KEY = 'type'\nDISPLAY_OPTIONS_KEY = 'display_options'\n\n\nclass SimpleColumnSerializer(MathesarErrorMessageMixin, serializers.ModelSerializer):\n class Meta:\n model = Column\n fields = ('id',\n 'name',\n TYPE_KEY,\n 'type_options',\n DISPLAY_OPTIONS_KEY,\n )\n id = serializers.IntegerField(required=False)\n name = serializers.CharField()\n # TODO consider renaming type and type_options to db_type and db_type_options\n # The name of below attribute should match value of TYPE_KEY\n type = serializers.CharField()\n type_options = TypeOptionSerializer(required=False, allow_null=True)\n # The name of below attribute should match value of DISPLAY_OPTIONS_KEY\n display_options = DisplayOptionsMappingSerializer(required=False, allow_null=True)\n\n def to_representation(self, instance):\n if isinstance(instance, dict):\n db_type_id = instance.get(TYPE_KEY)\n db_type = get_db_type_enum_from_id(db_type_id)\n else:\n db_type = instance.db_type\n # TODO replace or remove this assert before production\n assert db_type is not None\n self.context[DISPLAY_OPTIONS_SERIALIZER_MAPPING_KEY] = db_type\n representation = super().to_representation(instance)\n _force_canonical_type(representation, db_type)\n return representation\n\n def to_internal_value(self, data):\n if self.partial and TYPE_KEY not in data:\n db_type = getattr(self.instance, 'db_type', None)\n else:\n db_type_id = data.get(TYPE_KEY, None)\n db_type = get_db_type_enum_from_id(db_type_id) if db_type_id else None\n self.context[DISPLAY_OPTIONS_SERIALIZER_MAPPING_KEY] = db_type\n return super().to_internal_value(data)\n\n\ndef _force_canonical_type(representation, db_type):\n \"\"\"\n Sometimes the representation's TYPE_KEY attribute will also include type option information\n (e.g. `numeric(3, 5)`). 
We override the attribute's value to a canonical type id.\n\n This might be better solved upstream, but since our Column model subclasses SA's Column,\n overriding its TYPE_KEY attribute, might interfere with SA's workings.\n \"\"\"\n representation[TYPE_KEY] = db_type.id\n return representation\n\n\nclass ColumnDefaultSerializer(MathesarErrorMessageMixin, serializers.Serializer):\n value = InputValueField()\n is_dynamic = serializers.BooleanField(read_only=True)\n\n\nclass ColumnSerializer(SimpleColumnSerializer):\n class Meta(SimpleColumnSerializer.Meta):\n fields = SimpleColumnSerializer.Meta.fields + (\n 'nullable',\n 'primary_key',\n 'source_column',\n 'copy_source_data',\n 'copy_source_constraints',\n 'valid_target_types',\n 'default',\n 'has_dependents',\n )\n model_fields = (DISPLAY_OPTIONS_KEY,)\n\n name = serializers.CharField(required=False, allow_blank=True)\n\n # From scratch fields\n type = serializers.CharField(required=False)\n nullable = serializers.BooleanField(default=True)\n primary_key = serializers.BooleanField(default=False)\n default = ColumnDefaultSerializer(\n source='column_default_dict', required=False, allow_null=True, default=None\n )\n\n # From duplication fields\n source_column = serializers.PrimaryKeyRelatedField(queryset=Column.current_objects.all(), required=False, write_only=True)\n copy_source_data = serializers.BooleanField(default=True, write_only=True)\n copy_source_constraints = serializers.BooleanField(default=True, write_only=True)\n\n # Read only fields\n valid_target_types = SerializerMethodField(method_name='get_valid_target_types', read_only=True)\n\n def validate(self, data):\n data = super().validate(data)\n # Reevaluate column display options based on the new column type.\n if TYPE_KEY in data and DISPLAY_OPTIONS_KEY not in data:\n if self.instance:\n db_type = getattr(self.instance, 'db_type', None)\n # Invalidate display_options if type has been changed\n if db_type is not None:\n if str(db_type.id) != data[TYPE_KEY]:\n data[DISPLAY_OPTIONS_KEY] = None\n else:\n data[DISPLAY_OPTIONS_KEY] = None\n if not self.partial:\n from_scratch_required_fields = [TYPE_KEY]\n from_scratch_specific_fields = [TYPE_KEY, 'nullable', 'primary_key']\n from_dupe_required_fields = ['source_column']\n from_dupe_specific_fields = ['source_column', 'copy_source_data',\n 'copy_source_constraints']\n\n # Note that we run validation on self.initial_data, as `data` has defaults\n # filled in for fields that weren't specified by the request\n from_scratch_required_all = all([\n f in self.initial_data for f in from_scratch_required_fields\n ])\n from_scratch_specific_in = [\n f for f in from_scratch_specific_fields if f in self.initial_data\n ]\n from_dupe_required_all = all([\n f in self.initial_data for f in from_dupe_required_fields\n ])\n from_dupe_specific_in = [\n f for f in from_dupe_specific_fields if f in self.initial_data\n ]\n\n if len(from_dupe_specific_in) and len(from_scratch_specific_in):\n raise ValidationError(\n f'{from_scratch_specific_in} cannot be passed in if '\n f'{from_dupe_specific_in} has also been passed in.'\n )\n elif not from_dupe_required_all and not from_scratch_required_all:\n # We default to from scratch required fields if no fields are passed\n if len(from_dupe_specific_in) and not len(from_scratch_specific_in):\n required_fields = from_dupe_required_fields\n else:\n required_fields = from_scratch_required_fields\n raise ValidationError({\n f: ['This field is required.']\n for f in required_fields\n if f not in self.initial_data\n 
})\n return data\n\n @property\n def validated_model_fields(self):\n return {key: self.validated_data[key] for key in self.validated_data if key in self.Meta.model_fields}\n\n def get_valid_target_types(self, column):\n valid_target_types = column.valid_target_types\n if valid_target_types:\n valid_target_type_ids = tuple(\n db_type.id for db_type in valid_target_types\n )\n return valid_target_type_ids\n", "path": "mathesar/api/serializers/columns.py"}], "after_files": [{"content": "from rest_framework import serializers, status\nfrom rest_framework.exceptions import ValidationError\nfrom rest_framework.fields import empty, SerializerMethodField\nfrom rest_framework.settings import api_settings\n\nfrom mathesar.api.exceptions.mixins import MathesarErrorMessageMixin\nfrom mathesar.api.serializers.shared_serializers import (\n DisplayOptionsMappingSerializer,\n DISPLAY_OPTIONS_SERIALIZER_MAPPING_KEY,\n)\nfrom mathesar.api.exceptions.database_exceptions import (\n exceptions as database_api_exceptions,\n)\nfrom db.columns.exceptions import InvalidTypeError\nfrom mathesar.models.base import Column\nfrom db.types.operations.convert import get_db_type_enum_from_id\n\n\nclass InputValueField(serializers.CharField):\n \"\"\"\n Takes in an arbitrary value. Emulates the record creation endpoint,\n which takes in arbitrary values (un-validated and un-processed request.data).\n This field replicates that behavior in a serializer.\n \"\"\"\n\n def to_internal_value(self, data):\n return data\n\n def to_representation(self, value):\n return value\n\n\nclass TypeOptionSerializer(MathesarErrorMessageMixin, serializers.Serializer):\n length = serializers.IntegerField(required=False)\n precision = serializers.IntegerField(required=False)\n scale = serializers.IntegerField(required=False)\n fields = serializers.CharField(required=False)\n\n def validate(self, attrs):\n if attrs.get('scale', None) is not None and attrs.get('precision', None) is None:\n attrs['precision'] = 1000\n return super().validate(attrs)\n\n def run_validation(self, data=empty):\n # Ensure that there are no unknown type options passed in.\n if data is not empty and data is not None:\n unknown = set(data) - set(self.fields)\n if unknown:\n errors = ['Unknown field: {}'.format(field) for field in unknown]\n raise serializers.ValidationError({\n api_settings.NON_FIELD_ERRORS_KEY: errors,\n })\n\n return super(TypeOptionSerializer, self).run_validation(data)\n\n\nTYPE_KEY = 'type'\nDISPLAY_OPTIONS_KEY = 'display_options'\n\n\nclass SimpleColumnSerializer(MathesarErrorMessageMixin, serializers.ModelSerializer):\n class Meta:\n model = Column\n fields = ('id',\n 'name',\n TYPE_KEY,\n 'type_options',\n DISPLAY_OPTIONS_KEY,\n )\n id = serializers.IntegerField(required=False)\n name = serializers.CharField()\n # TODO consider renaming type and type_options to db_type and db_type_options\n # The name of below attribute should match value of TYPE_KEY\n type = serializers.CharField()\n type_options = TypeOptionSerializer(required=False, allow_null=True)\n # The name of below attribute should match value of DISPLAY_OPTIONS_KEY\n display_options = DisplayOptionsMappingSerializer(required=False, allow_null=True)\n\n def to_representation(self, instance):\n if isinstance(instance, dict):\n db_type_id = instance.get(TYPE_KEY)\n db_type = get_db_type_enum_from_id(db_type_id)\n else:\n db_type = instance.db_type\n # TODO replace or remove this assert before production\n assert db_type is not None\n self.context[DISPLAY_OPTIONS_SERIALIZER_MAPPING_KEY] = 
db_type\n representation = super().to_representation(instance)\n _force_canonical_type(representation, db_type)\n return representation\n\n def to_internal_value(self, data):\n if self.partial and TYPE_KEY not in data:\n db_type = getattr(self.instance, 'db_type', None)\n else:\n db_type_id = data.get(TYPE_KEY, None)\n db_type = get_db_type_enum_from_id(db_type_id) if db_type_id else None\n self.context[DISPLAY_OPTIONS_SERIALIZER_MAPPING_KEY] = db_type\n return super().to_internal_value(data)\n\n\ndef _force_canonical_type(representation, db_type):\n \"\"\"\n Sometimes the representation's TYPE_KEY attribute will also include type option information\n (e.g. `numeric(3, 5)`). We override the attribute's value to a canonical type id.\n\n This might be better solved upstream, but since our Column model subclasses SA's Column,\n overriding its TYPE_KEY attribute, might interfere with SA's workings.\n \"\"\"\n representation[TYPE_KEY] = db_type.id\n return representation\n\n\nclass ColumnDefaultSerializer(MathesarErrorMessageMixin, serializers.Serializer):\n value = InputValueField()\n is_dynamic = serializers.BooleanField(read_only=True)\n\n\nclass ColumnSerializer(SimpleColumnSerializer):\n class Meta(SimpleColumnSerializer.Meta):\n fields = SimpleColumnSerializer.Meta.fields + (\n 'nullable',\n 'primary_key',\n 'source_column',\n 'copy_source_data',\n 'copy_source_constraints',\n 'valid_target_types',\n 'default',\n 'has_dependents',\n )\n model_fields = (DISPLAY_OPTIONS_KEY,)\n\n name = serializers.CharField(required=False, allow_blank=True)\n\n # From scratch fields\n type = serializers.CharField(required=False)\n nullable = serializers.BooleanField(default=True)\n primary_key = serializers.BooleanField(default=False)\n default = ColumnDefaultSerializer(\n source='column_default_dict', required=False, allow_null=True, default=None\n )\n\n # From duplication fields\n source_column = serializers.PrimaryKeyRelatedField(queryset=Column.current_objects.all(), required=False, write_only=True)\n copy_source_data = serializers.BooleanField(default=True, write_only=True)\n copy_source_constraints = serializers.BooleanField(default=True, write_only=True)\n\n # Read only fields\n valid_target_types = SerializerMethodField(method_name='get_valid_target_types', read_only=True)\n\n def validate(self, data):\n data = super().validate(data)\n # Reevaluate column display options based on the new column type.\n if TYPE_KEY in data and self.instance:\n db_type = get_db_type_enum_from_id(data[TYPE_KEY].lower())\n target_types = self.instance.valid_target_types\n if db_type not in target_types:\n raise database_api_exceptions.InvalidTypeCastAPIException(\n InvalidTypeError,\n status_code=status.HTTP_400_BAD_REQUEST\n )\n if DISPLAY_OPTIONS_KEY not in data:\n db_type = getattr(self.instance, 'db_type', None)\n # Invalidate display_options if type has been changed\n if db_type is not None:\n if str(db_type.id) != data[TYPE_KEY]:\n data[DISPLAY_OPTIONS_KEY] = None\n if not self.partial:\n from_scratch_required_fields = [TYPE_KEY]\n from_scratch_specific_fields = [TYPE_KEY, 'nullable', 'primary_key']\n from_dupe_required_fields = ['source_column']\n from_dupe_specific_fields = ['source_column', 'copy_source_data',\n 'copy_source_constraints']\n\n # Note that we run validation on self.initial_data, as `data` has defaults\n # filled in for fields that weren't specified by the request\n from_scratch_required_all = all([\n f in self.initial_data for f in from_scratch_required_fields\n ])\n from_scratch_specific_in = 
[\n f for f in from_scratch_specific_fields if f in self.initial_data\n ]\n from_dupe_required_all = all([\n f in self.initial_data for f in from_dupe_required_fields\n ])\n from_dupe_specific_in = [\n f for f in from_dupe_specific_fields if f in self.initial_data\n ]\n\n if len(from_dupe_specific_in) and len(from_scratch_specific_in):\n raise ValidationError(\n f'{from_scratch_specific_in} cannot be passed in if '\n f'{from_dupe_specific_in} has also been passed in.'\n )\n elif not from_dupe_required_all and not from_scratch_required_all:\n # We default to from scratch required fields if no fields are passed\n if len(from_dupe_specific_in) and not len(from_scratch_specific_in):\n required_fields = from_dupe_required_fields\n else:\n required_fields = from_scratch_required_fields\n raise ValidationError({\n f: ['This field is required.']\n for f in required_fields\n if f not in self.initial_data\n })\n return data\n\n @property\n def validated_model_fields(self):\n return {key: self.validated_data[key] for key in self.validated_data if key in self.Meta.model_fields}\n\n def get_valid_target_types(self, column):\n valid_target_types = column.valid_target_types\n if valid_target_types:\n valid_target_type_ids = tuple(\n db_type.id for db_type in valid_target_types\n )\n return valid_target_type_ids\n", "path": "mathesar/api/serializers/columns.py"}]}
| 2,687 | 441 |
gh_patches_debug_10137
|
rasdani/github-patches
|
git_diff
|
Qiskit__qiskit-7071
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
QNSPSA produces irreproducible results
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues -->
### Information
- **Qiskit Terra version**: qiskit-terra 0.17.0
- **Python version**: Python 3.7.3
- **Operating system**: MacOS Big Sur 11.6
### What is the current behavior?
Executing a `QNSPSA` optimization with the same random seed gives different results.
We checked that the same optimization gives reproducible results with other optimizers such as `ADAM`.
### Steps to reproduce the problem
Run an arbitrary optimization, e.g., VQE with QNSPSA, and compare the results.
### What is the expected behavior?
The same results for the same seed.
### Suggested solutions
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `qiskit/algorithms/optimizers/qnspsa.py`
Content:
```
1 # This code is part of Qiskit.
2 #
3 # (C) Copyright IBM 2021.
4 #
5 # This code is licensed under the Apache License, Version 2.0. You may
6 # obtain a copy of this license in the LICENSE.txt file in the root directory
7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
8 #
9 # Any modifications or derivative works of this code must retain this
10 # copyright notice, and modified files need to carry a notice indicating
11 # that they have been altered from the originals.
12
13 """The QN-SPSA optimizer."""
14
15 from typing import Any, Iterator, Optional, Union, Callable, Dict
16
17 import numpy as np
18 from qiskit.providers import Backend
19 from qiskit.circuit import ParameterVector, QuantumCircuit
20 from qiskit.opflow import StateFn, CircuitSampler, ExpectationBase
21 from qiskit.utils import QuantumInstance
22
23 from .spsa import SPSA, CALLBACK, TERMINATIONCHECKER, _batch_evaluate
24
25 # the function to compute the fidelity
26 FIDELITY = Callable[[np.ndarray, np.ndarray], float]
27
28
29 class QNSPSA(SPSA):
30 r"""The Quantum Natural SPSA (QN-SPSA) optimizer.
31
32 The QN-SPSA optimizer [1] is a stochastic optimizer that belongs to the family of gradient
33 descent methods. This optimizer is based on SPSA but attempts to improve the convergence by
34 sampling the **natural gradient** instead of the vanilla, first-order gradient. It achieves
35 this by approximating Hessian of the ``fidelity`` of the ansatz circuit.
36
37 Compared to natural gradients, which require :math:`\mathcal{O}(d^2)` expectation value
38 evaluations for a circuit with :math:`d` parameters, QN-SPSA only requires
39 :math:`\mathcal{O}(1)` and can therefore significantly speed up the natural gradient calculation
40 by sacrificing some accuracy. Compared to SPSA, QN-SPSA requires 4 additional function
41 evaluations of the fidelity.
42
43 The stochastic approximation of the natural gradient can be systematically improved by
44 increasing the number of ``resamplings``. This leads to a Monte Carlo-style convergence to
45 the exact, analytic value.
46
47 Examples:
48
49 This short example runs QN-SPSA for the ground state calculation of the ``Z ^ Z``
50 observable where the ansatz is a ``PauliTwoDesign`` circuit.
51
52 .. code-block:: python
53
54 import numpy as np
55 from qiskit.algorithms.optimizers import QNSPSA
56 from qiskit.circuit.library import PauliTwoDesign
57 from qiskit.opflow import Z, StateFn
58
59 ansatz = PauliTwoDesign(2, reps=1, seed=2)
60 observable = Z ^ Z
61 initial_point = np.random.random(ansatz.num_parameters)
62
63 def loss(x):
64 bound = ansatz.bind_parameters(x)
65 return np.real((StateFn(observable, is_measurement=True) @ StateFn(bound)).eval())
66
67 fidelity = QNSPSA.get_fidelity(ansatz)
68 qnspsa = QNSPSA(fidelity, maxiter=300)
69 result = qnspsa.optimize(ansatz.num_parameters, loss, initial_point=initial_point)
70
71
72 References:
73
74 [1] J. Gacon et al, "Simultaneous Perturbation Stochastic Approximation of the Quantum
75 Fisher Information", `arXiv:2103.09232 <https://arxiv.org/abs/2103.09232>`_
76
77 """
78
79 def __init__(
80 self,
81 fidelity: FIDELITY,
82 maxiter: int = 100,
83 blocking: bool = True,
84 allowed_increase: Optional[float] = None,
85 learning_rate: Optional[Union[float, Callable[[], Iterator]]] = None,
86 perturbation: Optional[Union[float, Callable[[], Iterator]]] = None,
87 last_avg: int = 1,
88 resamplings: Union[int, Dict[int, int]] = 1,
89 perturbation_dims: Optional[int] = None,
90 regularization: Optional[float] = None,
91 hessian_delay: int = 0,
92 lse_solver: Optional[Callable[[np.ndarray, np.ndarray], np.ndarray]] = None,
93 initial_hessian: Optional[np.ndarray] = None,
94 callback: Optional[CALLBACK] = None,
95 termination_checker: Optional[TERMINATIONCHECKER] = None,
96 ) -> None:
97 r"""
98 Args:
99 fidelity: A function to compute the fidelity of the ansatz state with itself for
100 two different sets of parameters.
101 maxiter: The maximum number of iterations. Note that this is not the maximal number
102 of function evaluations.
103 blocking: If True, only accepts updates that improve the loss (up to some allowed
104 increase, see next argument).
105 allowed_increase: If ``blocking`` is ``True``, this argument determines by how much
106 the loss can increase with the proposed parameters and still be accepted.
107 If ``None``, the allowed increases is calibrated automatically to be twice the
108 approximated standard deviation of the loss function.
109 learning_rate: The update step is the learning rate is multiplied with the gradient.
110 If the learning rate is a float, it remains constant over the course of the
111 optimization. It can also be a callable returning an iterator which yields the
112 learning rates for each optimization step.
113 If ``learning_rate`` is set ``perturbation`` must also be provided.
114 perturbation: Specifies the magnitude of the perturbation for the finite difference
115 approximation of the gradients. Can be either a float or a generator yielding
116 the perturbation magnitudes per step.
117 If ``perturbation`` is set ``learning_rate`` must also be provided.
118 last_avg: Return the average of the ``last_avg`` parameters instead of just the
119 last parameter values.
120 resamplings: The number of times the gradient (and Hessian) is sampled using a random
121 direction to construct a gradient estimate. Per default the gradient is estimated
122 using only one random direction. If an integer, all iterations use the same number
123 of resamplings. If a dictionary, this is interpreted as
124 ``{iteration: number of resamplings per iteration}``.
125 perturbation_dims: The number of perturbed dimensions. Per default, all dimensions
126 are perturbed, but a smaller, fixed number can be perturbed. If set, the perturbed
127 dimensions are chosen uniformly at random.
128 regularization: To ensure the preconditioner is symmetric and positive definite, the
129 identity times a small coefficient is added to it. This generator yields that
130 coefficient.
131 hessian_delay: Start multiplying the gradient with the inverse Hessian only after a
132 certain number of iterations. The Hessian is still evaluated and therefore this
133 argument can be useful to first get a stable average over the last iterations before
134 using it as preconditioner.
135 lse_solver: The method to solve for the inverse of the Hessian. Per default an
136 exact LSE solver is used, but can e.g. be overwritten by a minimization routine.
137 initial_hessian: The initial guess for the Hessian. By default the identity matrix
138 is used.
139 callback: A callback function passed information in each iteration step. The
140 information is, in this order: the parameters, the function value, the number
141 of function evaluations, the stepsize, whether the step was accepted.
142 termination_checker: A callback function executed at the end of each iteration step. The
143 arguments are, in this order: the parameters, the function value, the number
144 of function evaluations, the stepsize, whether the step was accepted. If the callback
145 returns True, the optimization is terminated.
146 To prevent additional evaluations of the objective method, if the objective has not yet
147 been evaluated, the objective is estimated by taking the mean of the objective
148 evaluations used in the estimate of the gradient.
149
150
151 """
152 super().__init__(
153 maxiter,
154 blocking,
155 allowed_increase,
156 # trust region *must* be false for natural gradients to work
157 trust_region=False,
158 learning_rate=learning_rate,
159 perturbation=perturbation,
160 resamplings=resamplings,
161 callback=callback,
162 second_order=True,
163 hessian_delay=hessian_delay,
164 lse_solver=lse_solver,
165 regularization=regularization,
166 perturbation_dims=perturbation_dims,
167 initial_hessian=initial_hessian,
168 termination_checker=termination_checker,
169 )
170
171 self.fidelity = fidelity
172
173 def _point_sample(self, loss, x, eps, delta1, delta2):
174 loss_points = [x + eps * delta1, x - eps * delta1]
175 fidelity_points = [
176 (x, x + eps * delta1),
177 (x, x - eps * delta1),
178 (x, x + eps * (delta1 + delta2)),
179 (x, x + eps * (-delta1 + delta2)),
180 ]
181 self._nfev += 6
182
183 loss_values = _batch_evaluate(loss, loss_points, self._max_evals_grouped)
184 fidelity_values = _batch_evaluate(self.fidelity, fidelity_points, self._max_evals_grouped)
185
186 # compute the gradient approximation and additionally return the loss function evaluations
187 gradient_estimate = (loss_values[0] - loss_values[1]) / (2 * eps) * delta1
188
189 # compute the preconditioner point estimate
190 diff = fidelity_values[2] - fidelity_values[0]
191 diff -= fidelity_values[3] - fidelity_values[1]
192 diff /= 2 * eps ** 2
193
194 rank_one = np.outer(delta1, delta2)
195 # -0.5 factor comes from the fact that we need -0.5 * fidelity
196 hessian_estimate = -0.5 * diff * (rank_one + rank_one.T) / 2
197
198 return np.mean(loss_values), gradient_estimate, hessian_estimate
199
200 @property
201 def settings(self) -> Dict[str, Any]:
202 """The optimizer settings in a dictionary format."""
203 # re-use serialization from SPSA
204 settings = super().settings
205 settings.update({"fidelity": self.fidelity})
206
207 # remove SPSA-specific arguments not in QNSPSA
208 settings.pop("trust_region")
209 settings.pop("second_order")
210
211 return settings
212
213 @staticmethod
214 def get_fidelity(
215 circuit: QuantumCircuit,
216 backend: Optional[Union[Backend, QuantumInstance]] = None,
217 expectation: Optional[ExpectationBase] = None,
218 ) -> Callable[[np.ndarray, np.ndarray], float]:
219 r"""Get a function to compute the fidelity of ``circuit`` with itself.
220
221 Let ``circuit`` be a parameterized quantum circuit performing the operation
222 :math:`U(\theta)` given a set of parameters :math:`\theta`. Then this method returns
223 a function to evaluate
224
225 .. math::
226
227 F(\theta, \phi) = \big|\langle 0 | U^\dagger(\theta) U(\phi) |0\rangle \big|^2.
228
229 The output of this function can be used as input for the ``fidelity`` to the
230 :class:~`qiskit.algorithms.optimizers.QNSPSA` optimizer.
231
232 Args:
233 circuit: The circuit preparing the parameterized ansatz.
234 backend: A backend of quantum instance to evaluate the circuits. If None, plain
235 matrix multiplication will be used.
236 expectation: An expectation converter to specify how the expected value is computed.
237 If a shot-based readout is used this should be set to ``PauliExpectation``.
238
239 Returns:
240 A handle to the function :math:`F`.
241
242 """
243 params_x = ParameterVector("x", circuit.num_parameters)
244 params_y = ParameterVector("y", circuit.num_parameters)
245
246 expression = ~StateFn(circuit.assign_parameters(params_x)) @ StateFn(
247 circuit.assign_parameters(params_y)
248 )
249
250 if expectation is not None:
251 expression = expectation.convert(expression)
252
253 if backend is None:
254
255 def fidelity(values_x, values_y):
256 value_dict = dict(
257 zip(params_x[:] + params_y[:], values_x.tolist() + values_y.tolist())
258 )
259 return np.abs(expression.bind_parameters(value_dict).eval()) ** 2
260
261 else:
262 sampler = CircuitSampler(backend)
263
264 def fidelity(values_x, values_y=None):
265 if values_y is not None: # no batches
266 value_dict = dict(
267 zip(params_x[:] + params_y[:], values_x.tolist() + values_y.tolist())
268 )
269 else:
270 value_dict = {p: [] for p in params_x[:] + params_y[:]}
271 for values_xy in values_x:
272 for value_x, param_x in zip(values_xy[0, :], params_x):
273 value_dict[param_x].append(value_x)
274
275 for value_y, param_y in zip(values_xy[1, :], params_y):
276 value_dict[param_y].append(value_y)
277
278 return np.abs(sampler.convert(expression, params=value_dict).eval()) ** 2
279
280 return fidelity
281
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/qiskit/algorithms/optimizers/qnspsa.py b/qiskit/algorithms/optimizers/qnspsa.py
--- a/qiskit/algorithms/optimizers/qnspsa.py
+++ b/qiskit/algorithms/optimizers/qnspsa.py
@@ -44,6 +44,12 @@
increasing the number of ``resamplings``. This leads to a Monte Carlo-style convergence to
the exact, analytic value.
+ .. note::
+
+ This component has some function that is normally random. If you want to reproduce behavior
+ then you should set the random number generator seed in the algorithm_globals
+ (``qiskit.utils.algorithm_globals.random_seed = seed``).
+
Examples:
This short example runs QN-SPSA for the ground state calculation of the ``Z ^ Z``
|
{"golden_diff": "diff --git a/qiskit/algorithms/optimizers/qnspsa.py b/qiskit/algorithms/optimizers/qnspsa.py\n--- a/qiskit/algorithms/optimizers/qnspsa.py\n+++ b/qiskit/algorithms/optimizers/qnspsa.py\n@@ -44,6 +44,12 @@\n increasing the number of ``resamplings``. This leads to a Monte Carlo-style convergence to\n the exact, analytic value.\n \n+ .. note::\n+\n+ This component has some function that is normally random. If you want to reproduce behavior\n+ then you should set the random number generator seed in the algorithm_globals\n+ (``qiskit.utils.algorithm_globals.random_seed = seed``).\n+\n Examples:\n \n This short example runs QN-SPSA for the ground state calculation of the ``Z ^ Z``\n", "issue": "QNSPSA produces irreproducible results\n<!-- \u26a0\ufe0f If you do not respect this template, your issue will be closed -->\r\n<!-- \u26a0\ufe0f Make sure to browse the opened and closed issues -->\r\n\r\n### Information\r\n\r\n- **Qiskit Terra version**: qiskit-terra 0.17.0 \r\n- **Python version**: Python 3.7.3\r\n- **Operating system**: MacOS Big Sur 11.6\r\n\r\n### What is the current behavior? \r\nExecuting a `QNSPSA` optimization with the same random seed gives different results.\r\nWe checked that the same optimization gives reproducible results with other optimizers such as `ADAM`.\r\n\r\n### Steps to reproduce the problem\r\nRun an arbitrary optimization, e.g., VQE with QNSPSA, and compare the results.\r\n\r\n\r\n### What is the expected behavior?\r\nThe same results for the same seed.\r\n\r\n\r\n### Suggested solutions\r\n\r\n\r\n\n", "before_files": [{"content": "# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2021.\n#\n# This code is licensed under the Apache License, Version 2.0. You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\n\"\"\"The QN-SPSA optimizer.\"\"\"\n\nfrom typing import Any, Iterator, Optional, Union, Callable, Dict\n\nimport numpy as np\nfrom qiskit.providers import Backend\nfrom qiskit.circuit import ParameterVector, QuantumCircuit\nfrom qiskit.opflow import StateFn, CircuitSampler, ExpectationBase\nfrom qiskit.utils import QuantumInstance\n\nfrom .spsa import SPSA, CALLBACK, TERMINATIONCHECKER, _batch_evaluate\n\n# the function to compute the fidelity\nFIDELITY = Callable[[np.ndarray, np.ndarray], float]\n\n\nclass QNSPSA(SPSA):\n r\"\"\"The Quantum Natural SPSA (QN-SPSA) optimizer.\n\n The QN-SPSA optimizer [1] is a stochastic optimizer that belongs to the family of gradient\n descent methods. This optimizer is based on SPSA but attempts to improve the convergence by\n sampling the **natural gradient** instead of the vanilla, first-order gradient. It achieves\n this by approximating Hessian of the ``fidelity`` of the ansatz circuit.\n\n Compared to natural gradients, which require :math:`\\mathcal{O}(d^2)` expectation value\n evaluations for a circuit with :math:`d` parameters, QN-SPSA only requires\n :math:`\\mathcal{O}(1)` and can therefore significantly speed up the natural gradient calculation\n by sacrificing some accuracy. 
Compared to SPSA, QN-SPSA requires 4 additional function\n evaluations of the fidelity.\n\n The stochastic approximation of the natural gradient can be systematically improved by\n increasing the number of ``resamplings``. This leads to a Monte Carlo-style convergence to\n the exact, analytic value.\n\n Examples:\n\n This short example runs QN-SPSA for the ground state calculation of the ``Z ^ Z``\n observable where the ansatz is a ``PauliTwoDesign`` circuit.\n\n .. code-block:: python\n\n import numpy as np\n from qiskit.algorithms.optimizers import QNSPSA\n from qiskit.circuit.library import PauliTwoDesign\n from qiskit.opflow import Z, StateFn\n\n ansatz = PauliTwoDesign(2, reps=1, seed=2)\n observable = Z ^ Z\n initial_point = np.random.random(ansatz.num_parameters)\n\n def loss(x):\n bound = ansatz.bind_parameters(x)\n return np.real((StateFn(observable, is_measurement=True) @ StateFn(bound)).eval())\n\n fidelity = QNSPSA.get_fidelity(ansatz)\n qnspsa = QNSPSA(fidelity, maxiter=300)\n result = qnspsa.optimize(ansatz.num_parameters, loss, initial_point=initial_point)\n\n\n References:\n\n [1] J. Gacon et al, \"Simultaneous Perturbation Stochastic Approximation of the Quantum\n Fisher Information\", `arXiv:2103.09232 <https://arxiv.org/abs/2103.09232>`_\n\n \"\"\"\n\n def __init__(\n self,\n fidelity: FIDELITY,\n maxiter: int = 100,\n blocking: bool = True,\n allowed_increase: Optional[float] = None,\n learning_rate: Optional[Union[float, Callable[[], Iterator]]] = None,\n perturbation: Optional[Union[float, Callable[[], Iterator]]] = None,\n last_avg: int = 1,\n resamplings: Union[int, Dict[int, int]] = 1,\n perturbation_dims: Optional[int] = None,\n regularization: Optional[float] = None,\n hessian_delay: int = 0,\n lse_solver: Optional[Callable[[np.ndarray, np.ndarray], np.ndarray]] = None,\n initial_hessian: Optional[np.ndarray] = None,\n callback: Optional[CALLBACK] = None,\n termination_checker: Optional[TERMINATIONCHECKER] = None,\n ) -> None:\n r\"\"\"\n Args:\n fidelity: A function to compute the fidelity of the ansatz state with itself for\n two different sets of parameters.\n maxiter: The maximum number of iterations. Note that this is not the maximal number\n of function evaluations.\n blocking: If True, only accepts updates that improve the loss (up to some allowed\n increase, see next argument).\n allowed_increase: If ``blocking`` is ``True``, this argument determines by how much\n the loss can increase with the proposed parameters and still be accepted.\n If ``None``, the allowed increases is calibrated automatically to be twice the\n approximated standard deviation of the loss function.\n learning_rate: The update step is the learning rate is multiplied with the gradient.\n If the learning rate is a float, it remains constant over the course of the\n optimization. It can also be a callable returning an iterator which yields the\n learning rates for each optimization step.\n If ``learning_rate`` is set ``perturbation`` must also be provided.\n perturbation: Specifies the magnitude of the perturbation for the finite difference\n approximation of the gradients. Can be either a float or a generator yielding\n the perturbation magnitudes per step.\n If ``perturbation`` is set ``learning_rate`` must also be provided.\n last_avg: Return the average of the ``last_avg`` parameters instead of just the\n last parameter values.\n resamplings: The number of times the gradient (and Hessian) is sampled using a random\n direction to construct a gradient estimate. 
Per default the gradient is estimated\n using only one random direction. If an integer, all iterations use the same number\n of resamplings. If a dictionary, this is interpreted as\n ``{iteration: number of resamplings per iteration}``.\n perturbation_dims: The number of perturbed dimensions. Per default, all dimensions\n are perturbed, but a smaller, fixed number can be perturbed. If set, the perturbed\n dimensions are chosen uniformly at random.\n regularization: To ensure the preconditioner is symmetric and positive definite, the\n identity times a small coefficient is added to it. This generator yields that\n coefficient.\n hessian_delay: Start multiplying the gradient with the inverse Hessian only after a\n certain number of iterations. The Hessian is still evaluated and therefore this\n argument can be useful to first get a stable average over the last iterations before\n using it as preconditioner.\n lse_solver: The method to solve for the inverse of the Hessian. Per default an\n exact LSE solver is used, but can e.g. be overwritten by a minimization routine.\n initial_hessian: The initial guess for the Hessian. By default the identity matrix\n is used.\n callback: A callback function passed information in each iteration step. The\n information is, in this order: the parameters, the function value, the number\n of function evaluations, the stepsize, whether the step was accepted.\n termination_checker: A callback function executed at the end of each iteration step. The\n arguments are, in this order: the parameters, the function value, the number\n of function evaluations, the stepsize, whether the step was accepted. If the callback\n returns True, the optimization is terminated.\n To prevent additional evaluations of the objective method, if the objective has not yet\n been evaluated, the objective is estimated by taking the mean of the objective\n evaluations used in the estimate of the gradient.\n\n\n \"\"\"\n super().__init__(\n maxiter,\n blocking,\n allowed_increase,\n # trust region *must* be false for natural gradients to work\n trust_region=False,\n learning_rate=learning_rate,\n perturbation=perturbation,\n resamplings=resamplings,\n callback=callback,\n second_order=True,\n hessian_delay=hessian_delay,\n lse_solver=lse_solver,\n regularization=regularization,\n perturbation_dims=perturbation_dims,\n initial_hessian=initial_hessian,\n termination_checker=termination_checker,\n )\n\n self.fidelity = fidelity\n\n def _point_sample(self, loss, x, eps, delta1, delta2):\n loss_points = [x + eps * delta1, x - eps * delta1]\n fidelity_points = [\n (x, x + eps * delta1),\n (x, x - eps * delta1),\n (x, x + eps * (delta1 + delta2)),\n (x, x + eps * (-delta1 + delta2)),\n ]\n self._nfev += 6\n\n loss_values = _batch_evaluate(loss, loss_points, self._max_evals_grouped)\n fidelity_values = _batch_evaluate(self.fidelity, fidelity_points, self._max_evals_grouped)\n\n # compute the gradient approximation and additionally return the loss function evaluations\n gradient_estimate = (loss_values[0] - loss_values[1]) / (2 * eps) * delta1\n\n # compute the preconditioner point estimate\n diff = fidelity_values[2] - fidelity_values[0]\n diff -= fidelity_values[3] - fidelity_values[1]\n diff /= 2 * eps ** 2\n\n rank_one = np.outer(delta1, delta2)\n # -0.5 factor comes from the fact that we need -0.5 * fidelity\n hessian_estimate = -0.5 * diff * (rank_one + rank_one.T) / 2\n\n return np.mean(loss_values), gradient_estimate, hessian_estimate\n\n @property\n def settings(self) -> Dict[str, Any]:\n 
\"\"\"The optimizer settings in a dictionary format.\"\"\"\n # re-use serialization from SPSA\n settings = super().settings\n settings.update({\"fidelity\": self.fidelity})\n\n # remove SPSA-specific arguments not in QNSPSA\n settings.pop(\"trust_region\")\n settings.pop(\"second_order\")\n\n return settings\n\n @staticmethod\n def get_fidelity(\n circuit: QuantumCircuit,\n backend: Optional[Union[Backend, QuantumInstance]] = None,\n expectation: Optional[ExpectationBase] = None,\n ) -> Callable[[np.ndarray, np.ndarray], float]:\n r\"\"\"Get a function to compute the fidelity of ``circuit`` with itself.\n\n Let ``circuit`` be a parameterized quantum circuit performing the operation\n :math:`U(\\theta)` given a set of parameters :math:`\\theta`. Then this method returns\n a function to evaluate\n\n .. math::\n\n F(\\theta, \\phi) = \\big|\\langle 0 | U^\\dagger(\\theta) U(\\phi) |0\\rangle \\big|^2.\n\n The output of this function can be used as input for the ``fidelity`` to the\n :class:~`qiskit.algorithms.optimizers.QNSPSA` optimizer.\n\n Args:\n circuit: The circuit preparing the parameterized ansatz.\n backend: A backend of quantum instance to evaluate the circuits. If None, plain\n matrix multiplication will be used.\n expectation: An expectation converter to specify how the expected value is computed.\n If a shot-based readout is used this should be set to ``PauliExpectation``.\n\n Returns:\n A handle to the function :math:`F`.\n\n \"\"\"\n params_x = ParameterVector(\"x\", circuit.num_parameters)\n params_y = ParameterVector(\"y\", circuit.num_parameters)\n\n expression = ~StateFn(circuit.assign_parameters(params_x)) @ StateFn(\n circuit.assign_parameters(params_y)\n )\n\n if expectation is not None:\n expression = expectation.convert(expression)\n\n if backend is None:\n\n def fidelity(values_x, values_y):\n value_dict = dict(\n zip(params_x[:] + params_y[:], values_x.tolist() + values_y.tolist())\n )\n return np.abs(expression.bind_parameters(value_dict).eval()) ** 2\n\n else:\n sampler = CircuitSampler(backend)\n\n def fidelity(values_x, values_y=None):\n if values_y is not None: # no batches\n value_dict = dict(\n zip(params_x[:] + params_y[:], values_x.tolist() + values_y.tolist())\n )\n else:\n value_dict = {p: [] for p in params_x[:] + params_y[:]}\n for values_xy in values_x:\n for value_x, param_x in zip(values_xy[0, :], params_x):\n value_dict[param_x].append(value_x)\n\n for value_y, param_y in zip(values_xy[1, :], params_y):\n value_dict[param_y].append(value_y)\n\n return np.abs(sampler.convert(expression, params=value_dict).eval()) ** 2\n\n return fidelity\n", "path": "qiskit/algorithms/optimizers/qnspsa.py"}], "after_files": [{"content": "# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2021.\n#\n# This code is licensed under the Apache License, Version 2.0. 
You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\n\"\"\"The QN-SPSA optimizer.\"\"\"\n\nfrom typing import Any, Iterator, Optional, Union, Callable, Dict\n\nimport numpy as np\nfrom qiskit.providers import Backend\nfrom qiskit.circuit import ParameterVector, QuantumCircuit\nfrom qiskit.opflow import StateFn, CircuitSampler, ExpectationBase\nfrom qiskit.utils import QuantumInstance\n\nfrom .spsa import SPSA, CALLBACK, TERMINATIONCHECKER, _batch_evaluate\n\n# the function to compute the fidelity\nFIDELITY = Callable[[np.ndarray, np.ndarray], float]\n\n\nclass QNSPSA(SPSA):\n r\"\"\"The Quantum Natural SPSA (QN-SPSA) optimizer.\n\n The QN-SPSA optimizer [1] is a stochastic optimizer that belongs to the family of gradient\n descent methods. This optimizer is based on SPSA but attempts to improve the convergence by\n sampling the **natural gradient** instead of the vanilla, first-order gradient. It achieves\n this by approximating Hessian of the ``fidelity`` of the ansatz circuit.\n\n Compared to natural gradients, which require :math:`\\mathcal{O}(d^2)` expectation value\n evaluations for a circuit with :math:`d` parameters, QN-SPSA only requires\n :math:`\\mathcal{O}(1)` and can therefore significantly speed up the natural gradient calculation\n by sacrificing some accuracy. Compared to SPSA, QN-SPSA requires 4 additional function\n evaluations of the fidelity.\n\n The stochastic approximation of the natural gradient can be systematically improved by\n increasing the number of ``resamplings``. This leads to a Monte Carlo-style convergence to\n the exact, analytic value.\n\n .. note::\n\n This component has some function that is normally random. If you want to reproduce behavior\n then you should set the random number generator seed in the algorithm_globals\n (``qiskit.utils.algorithm_globals.random_seed = seed``).\n\n Examples:\n\n This short example runs QN-SPSA for the ground state calculation of the ``Z ^ Z``\n observable where the ansatz is a ``PauliTwoDesign`` circuit.\n\n .. code-block:: python\n\n import numpy as np\n from qiskit.algorithms.optimizers import QNSPSA\n from qiskit.circuit.library import PauliTwoDesign\n from qiskit.opflow import Z, StateFn\n\n ansatz = PauliTwoDesign(2, reps=1, seed=2)\n observable = Z ^ Z\n initial_point = np.random.random(ansatz.num_parameters)\n\n def loss(x):\n bound = ansatz.bind_parameters(x)\n return np.real((StateFn(observable, is_measurement=True) @ StateFn(bound)).eval())\n\n fidelity = QNSPSA.get_fidelity(ansatz)\n qnspsa = QNSPSA(fidelity, maxiter=300)\n result = qnspsa.optimize(ansatz.num_parameters, loss, initial_point=initial_point)\n\n\n References:\n\n [1] J. 
Gacon et al, \"Simultaneous Perturbation Stochastic Approximation of the Quantum\n Fisher Information\", `arXiv:2103.09232 <https://arxiv.org/abs/2103.09232>`_\n\n \"\"\"\n\n def __init__(\n self,\n fidelity: FIDELITY,\n maxiter: int = 100,\n blocking: bool = True,\n allowed_increase: Optional[float] = None,\n learning_rate: Optional[Union[float, Callable[[], Iterator]]] = None,\n perturbation: Optional[Union[float, Callable[[], Iterator]]] = None,\n last_avg: int = 1,\n resamplings: Union[int, Dict[int, int]] = 1,\n perturbation_dims: Optional[int] = None,\n regularization: Optional[float] = None,\n hessian_delay: int = 0,\n lse_solver: Optional[Callable[[np.ndarray, np.ndarray], np.ndarray]] = None,\n initial_hessian: Optional[np.ndarray] = None,\n callback: Optional[CALLBACK] = None,\n termination_checker: Optional[TERMINATIONCHECKER] = None,\n ) -> None:\n r\"\"\"\n Args:\n fidelity: A function to compute the fidelity of the ansatz state with itself for\n two different sets of parameters.\n maxiter: The maximum number of iterations. Note that this is not the maximal number\n of function evaluations.\n blocking: If True, only accepts updates that improve the loss (up to some allowed\n increase, see next argument).\n allowed_increase: If ``blocking`` is ``True``, this argument determines by how much\n the loss can increase with the proposed parameters and still be accepted.\n If ``None``, the allowed increases is calibrated automatically to be twice the\n approximated standard deviation of the loss function.\n learning_rate: The update step is the learning rate is multiplied with the gradient.\n If the learning rate is a float, it remains constant over the course of the\n optimization. It can also be a callable returning an iterator which yields the\n learning rates for each optimization step.\n If ``learning_rate`` is set ``perturbation`` must also be provided.\n perturbation: Specifies the magnitude of the perturbation for the finite difference\n approximation of the gradients. Can be either a float or a generator yielding\n the perturbation magnitudes per step.\n If ``perturbation`` is set ``learning_rate`` must also be provided.\n last_avg: Return the average of the ``last_avg`` parameters instead of just the\n last parameter values.\n resamplings: The number of times the gradient (and Hessian) is sampled using a random\n direction to construct a gradient estimate. Per default the gradient is estimated\n using only one random direction. If an integer, all iterations use the same number\n of resamplings. If a dictionary, this is interpreted as\n ``{iteration: number of resamplings per iteration}``.\n perturbation_dims: The number of perturbed dimensions. Per default, all dimensions\n are perturbed, but a smaller, fixed number can be perturbed. If set, the perturbed\n dimensions are chosen uniformly at random.\n regularization: To ensure the preconditioner is symmetric and positive definite, the\n identity times a small coefficient is added to it. This generator yields that\n coefficient.\n hessian_delay: Start multiplying the gradient with the inverse Hessian only after a\n certain number of iterations. The Hessian is still evaluated and therefore this\n argument can be useful to first get a stable average over the last iterations before\n using it as preconditioner.\n lse_solver: The method to solve for the inverse of the Hessian. Per default an\n exact LSE solver is used, but can e.g. be overwritten by a minimization routine.\n initial_hessian: The initial guess for the Hessian. 
By default the identity matrix\n is used.\n callback: A callback function passed information in each iteration step. The\n information is, in this order: the parameters, the function value, the number\n of function evaluations, the stepsize, whether the step was accepted.\n termination_checker: A callback function executed at the end of each iteration step. The\n arguments are, in this order: the parameters, the function value, the number\n of function evaluations, the stepsize, whether the step was accepted. If the callback\n returns True, the optimization is terminated.\n To prevent additional evaluations of the objective method, if the objective has not yet\n been evaluated, the objective is estimated by taking the mean of the objective\n evaluations used in the estimate of the gradient.\n\n\n \"\"\"\n super().__init__(\n maxiter,\n blocking,\n allowed_increase,\n # trust region *must* be false for natural gradients to work\n trust_region=False,\n learning_rate=learning_rate,\n perturbation=perturbation,\n resamplings=resamplings,\n callback=callback,\n second_order=True,\n hessian_delay=hessian_delay,\n lse_solver=lse_solver,\n regularization=regularization,\n perturbation_dims=perturbation_dims,\n initial_hessian=initial_hessian,\n termination_checker=termination_checker,\n )\n\n self.fidelity = fidelity\n\n def _point_sample(self, loss, x, eps, delta1, delta2):\n loss_points = [x + eps * delta1, x - eps * delta1]\n fidelity_points = [\n (x, x + eps * delta1),\n (x, x - eps * delta1),\n (x, x + eps * (delta1 + delta2)),\n (x, x + eps * (-delta1 + delta2)),\n ]\n self._nfev += 6\n\n loss_values = _batch_evaluate(loss, loss_points, self._max_evals_grouped)\n fidelity_values = _batch_evaluate(self.fidelity, fidelity_points, self._max_evals_grouped)\n\n # compute the gradient approximation and additionally return the loss function evaluations\n gradient_estimate = (loss_values[0] - loss_values[1]) / (2 * eps) * delta1\n\n # compute the preconditioner point estimate\n diff = fidelity_values[2] - fidelity_values[0]\n diff -= fidelity_values[3] - fidelity_values[1]\n diff /= 2 * eps ** 2\n\n rank_one = np.outer(delta1, delta2)\n # -0.5 factor comes from the fact that we need -0.5 * fidelity\n hessian_estimate = -0.5 * diff * (rank_one + rank_one.T) / 2\n\n return np.mean(loss_values), gradient_estimate, hessian_estimate\n\n @property\n def settings(self) -> Dict[str, Any]:\n \"\"\"The optimizer settings in a dictionary format.\"\"\"\n # re-use serialization from SPSA\n settings = super().settings\n settings.update({\"fidelity\": self.fidelity})\n\n # remove SPSA-specific arguments not in QNSPSA\n settings.pop(\"trust_region\")\n settings.pop(\"second_order\")\n\n return settings\n\n @staticmethod\n def get_fidelity(\n circuit: QuantumCircuit,\n backend: Optional[Union[Backend, QuantumInstance]] = None,\n expectation: Optional[ExpectationBase] = None,\n ) -> Callable[[np.ndarray, np.ndarray], float]:\n r\"\"\"Get a function to compute the fidelity of ``circuit`` with itself.\n\n Let ``circuit`` be a parameterized quantum circuit performing the operation\n :math:`U(\\theta)` given a set of parameters :math:`\\theta`. Then this method returns\n a function to evaluate\n\n .. 
math::\n\n F(\\theta, \\phi) = \\big|\\langle 0 | U^\\dagger(\\theta) U(\\phi) |0\\rangle \\big|^2.\n\n The output of this function can be used as input for the ``fidelity`` to the\n :class:~`qiskit.algorithms.optimizers.QNSPSA` optimizer.\n\n Args:\n circuit: The circuit preparing the parameterized ansatz.\n backend: A backend of quantum instance to evaluate the circuits. If None, plain\n matrix multiplication will be used.\n expectation: An expectation converter to specify how the expected value is computed.\n If a shot-based readout is used this should be set to ``PauliExpectation``.\n\n Returns:\n A handle to the function :math:`F`.\n\n \"\"\"\n params_x = ParameterVector(\"x\", circuit.num_parameters)\n params_y = ParameterVector(\"y\", circuit.num_parameters)\n\n expression = ~StateFn(circuit.assign_parameters(params_x)) @ StateFn(\n circuit.assign_parameters(params_y)\n )\n\n if expectation is not None:\n expression = expectation.convert(expression)\n\n if backend is None:\n\n def fidelity(values_x, values_y):\n value_dict = dict(\n zip(params_x[:] + params_y[:], values_x.tolist() + values_y.tolist())\n )\n return np.abs(expression.bind_parameters(value_dict).eval()) ** 2\n\n else:\n sampler = CircuitSampler(backend)\n\n def fidelity(values_x, values_y=None):\n if values_y is not None: # no batches\n value_dict = dict(\n zip(params_x[:] + params_y[:], values_x.tolist() + values_y.tolist())\n )\n else:\n value_dict = {p: [] for p in params_x[:] + params_y[:]}\n for values_xy in values_x:\n for value_x, param_x in zip(values_xy[0, :], params_x):\n value_dict[param_x].append(value_x)\n\n for value_y, param_y in zip(values_xy[1, :], params_y):\n value_dict[param_y].append(value_y)\n\n return np.abs(sampler.convert(expression, params=value_dict).eval()) ** 2\n\n return fidelity\n", "path": "qiskit/algorithms/optimizers/qnspsa.py"}]}
| 4,084 | 190 |
gh_patches_debug_7863
|
rasdani/github-patches
|
git_diff
|
facebookresearch__hydra-1363
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Nevergrad-Plugin] Add support for Python 3.9
Python 3.9 support is pending the scikit-learn 0.24.0 release. Relevant comment: scikit-learn/scikit-learn#18621 (comment)
Related to #1062
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plugins/hydra_nevergrad_sweeper/setup.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 # type: ignore
3 from setuptools import find_namespace_packages, setup
4
5 with open("README.md", "r") as fh:
6 LONG_DESC = fh.read()
7 setup(
8 name="hydra-nevergrad-sweeper",
9 version="1.1.0rc1",
10 author="Jeremy Rapin, Omry Yadan, Jieru Hu",
11 author_email="[email protected], [email protected], [email protected]",
12 description="Hydra Nevergrad Sweeper plugin",
13 long_description=LONG_DESC,
14 long_description_content_type="text/markdown",
15 url="https://github.com/facebookresearch/hydra/",
16 packages=find_namespace_packages(include=["hydra_plugins.*"]),
17 classifiers=[
18 "License :: OSI Approved :: MIT License",
19 "Programming Language :: Python :: 3.6",
20 "Programming Language :: Python :: 3.7",
21 "Programming Language :: Python :: 3.8",
22 # "Programming Language :: Python :: 3.9",
23 "Operating System :: OS Independent",
24 "Development Status :: 4 - Beta",
25 ],
26 install_requires=["hydra-core>=1.0.0", "nevergrad>=0.4.1.post4"],
27 include_package_data=True,
28 )
29
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/plugins/hydra_nevergrad_sweeper/setup.py b/plugins/hydra_nevergrad_sweeper/setup.py
--- a/plugins/hydra_nevergrad_sweeper/setup.py
+++ b/plugins/hydra_nevergrad_sweeper/setup.py
@@ -19,7 +19,7 @@
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
- # "Programming Language :: Python :: 3.9",
+ "Programming Language :: Python :: 3.9",
"Operating System :: OS Independent",
"Development Status :: 4 - Beta",
],
|
{"golden_diff": "diff --git a/plugins/hydra_nevergrad_sweeper/setup.py b/plugins/hydra_nevergrad_sweeper/setup.py\n--- a/plugins/hydra_nevergrad_sweeper/setup.py\n+++ b/plugins/hydra_nevergrad_sweeper/setup.py\n@@ -19,7 +19,7 @@\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n- # \"Programming Language :: Python :: 3.9\",\n+ \"Programming Language :: Python :: 3.9\",\n \"Operating System :: OS Independent\",\n \"Development Status :: 4 - Beta\",\n ],\n", "issue": "[Nevergrad-Plugin] Add support for Python 3.9\nPython 3.9 support pending on scikit 2.4.0 release. Relevant comment: scikit-learn/scikit-learn#18621 (comment)\r\n\r\nRelated to #1062\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n# type: ignore\nfrom setuptools import find_namespace_packages, setup\n\nwith open(\"README.md\", \"r\") as fh:\n LONG_DESC = fh.read()\n setup(\n name=\"hydra-nevergrad-sweeper\",\n version=\"1.1.0rc1\",\n author=\"Jeremy Rapin, Omry Yadan, Jieru Hu\",\n author_email=\"[email protected], [email protected], [email protected]\",\n description=\"Hydra Nevergrad Sweeper plugin\",\n long_description=LONG_DESC,\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/facebookresearch/hydra/\",\n packages=find_namespace_packages(include=[\"hydra_plugins.*\"]),\n classifiers=[\n \"License :: OSI Approved :: MIT License\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n # \"Programming Language :: Python :: 3.9\",\n \"Operating System :: OS Independent\",\n \"Development Status :: 4 - Beta\",\n ],\n install_requires=[\"hydra-core>=1.0.0\", \"nevergrad>=0.4.1.post4\"],\n include_package_data=True,\n )\n", "path": "plugins/hydra_nevergrad_sweeper/setup.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n# type: ignore\nfrom setuptools import find_namespace_packages, setup\n\nwith open(\"README.md\", \"r\") as fh:\n LONG_DESC = fh.read()\n setup(\n name=\"hydra-nevergrad-sweeper\",\n version=\"1.1.0rc1\",\n author=\"Jeremy Rapin, Omry Yadan, Jieru Hu\",\n author_email=\"[email protected], [email protected], [email protected]\",\n description=\"Hydra Nevergrad Sweeper plugin\",\n long_description=LONG_DESC,\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/facebookresearch/hydra/\",\n packages=find_namespace_packages(include=[\"hydra_plugins.*\"]),\n classifiers=[\n \"License :: OSI Approved :: MIT License\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Operating System :: OS Independent\",\n \"Development Status :: 4 - Beta\",\n ],\n install_requires=[\"hydra-core>=1.0.0\", \"nevergrad>=0.4.1.post4\"],\n include_package_data=True,\n )\n", "path": "plugins/hydra_nevergrad_sweeper/setup.py"}]}
| 663 | 155 |
gh_patches_debug_13339
|
rasdani/github-patches
|
git_diff
|
pypi__warehouse-1491
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[requires.io] dependency update on master branch
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `warehouse/celery.py`
Content:
```
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import celery.backends
14
15 # We need to trick Celery into supporting rediss:// URLs which is how redis-py
16 # signals that you should use Redis with TLS.
17 celery.backends.BACKEND_ALIASES["rediss"] = "warehouse.celery:TLSRedisBackend" # noqa
18
19 from celery import Celery, Task
20 from celery.backends.redis import RedisBackend as _RedisBackend
21 from celery.signals import celeryd_init
22 from pyramid import scripting
23 from pyramid.threadlocal import get_current_request
24 from raven.contrib.celery import register_signal, register_logger_signal
25
26 from warehouse.config import Environment, configure
27
28
29 @celeryd_init.connect
30 def _configure_celery(*args, **kwargs):
31 config = configure()
32 register_logger_signal(config.registry["raven.client"])
33 register_signal(config.registry["raven.client"])
34
35
36 class TLSRedisBackend(_RedisBackend):
37
38 def _params_from_url(self, url, defaults):
39 params = super()._params_from_url(url, defaults)
40 params.update({"connection_class": self.redis.SSLConnection})
41 return params
42
43
44 class WarehouseTask(Task):
45
46 abstract = True
47
48 def __call__(self, *args, **kwargs):
49 registry = self.app.pyramid_config.registry
50 pyramid_env = scripting.prepare(registry=registry)
51
52 try:
53 return super().__call__(pyramid_env["request"], *args, **kwargs)
54 finally:
55 pyramid_env["closer"]()
56
57 def apply_async(self, *args, **kwargs):
58 # The API design of Celery makes this threadlocal pretty impossible to
59 # avoid :(
60 request = get_current_request()
61
62 # If for whatever reason we were unable to get a request we'll just
63 # skip this and call the original method to send this immediately.
64 if request is None or not hasattr(request, "tm"):
65 return super().apply_async(*args, **kwargs)
66
67 # This will break things that expect to get an AsyncResult because
68 # we're no longer going to be returning an async result from this when
69 # called from within a request, response cycle. Ideally we shouldn't be
70 # waiting for responses in a request/response cycle anyways though.
71 request.tm.get().addAfterCommitHook(
72 self._after_commit_hook,
73 args=args,
74 kws=kwargs,
75 )
76
77 def _after_commit_hook(self, success, *args, **kwargs):
78 if success:
79 super().apply_async(*args, **kwargs)
80
81
82 app = Celery("warehouse")
83 app.Task = WarehouseTask
84
85
86 task = app.task
87
88
89 def includeme(config):
90 s = config.registry.settings
91 app.pyramid_config = config
92 app.conf.update(
93 BROKER_URL=s["celery.broker_url"],
94 BROKER_USE_SSL=s["warehouse.env"] == Environment.production,
95 CELERY_DISABLE_RATE_LIMITS=True,
96 CELERY_RESULT_BACKEND=s["celery.result_url"],
97 CELERY_RESULT_SERIALIZER="json",
98 CELERY_TASK_SERIALIZER="json",
99 CELERY_ACCEPT_CONTENT=["json", "msgpack"],
100 CELERY_MESSAGE_COMPRESSION="gzip",
101 CELERY_QUEUE_HA_POLICY="all",
102 )
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/warehouse/celery.py b/warehouse/celery.py
--- a/warehouse/celery.py
+++ b/warehouse/celery.py
@@ -10,11 +10,11 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-import celery.backends
+import celery.app.backends
# We need to trick Celery into supporting rediss:// URLs which is how redis-py
# signals that you should use Redis with TLS.
-celery.backends.BACKEND_ALIASES["rediss"] = "warehouse.celery:TLSRedisBackend" # noqa
+celery.app.backends.BACKEND_ALIASES["rediss"] = "warehouse.celery:TLSRedisBackend" # noqa
from celery import Celery, Task
from celery.backends.redis import RedisBackend as _RedisBackend
|
{"golden_diff": "diff --git a/warehouse/celery.py b/warehouse/celery.py\n--- a/warehouse/celery.py\n+++ b/warehouse/celery.py\n@@ -10,11 +10,11 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n-import celery.backends\n+import celery.app.backends\n \n # We need to trick Celery into supporting rediss:// URLs which is how redis-py\n # signals that you should use Redis with TLS.\n-celery.backends.BACKEND_ALIASES[\"rediss\"] = \"warehouse.celery:TLSRedisBackend\" # noqa\n+celery.app.backends.BACKEND_ALIASES[\"rediss\"] = \"warehouse.celery:TLSRedisBackend\" # noqa\n \n from celery import Celery, Task\n from celery.backends.redis import RedisBackend as _RedisBackend\n", "issue": "[requires.io] dependency update on master branch\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport celery.backends\n\n# We need to trick Celery into supporting rediss:// URLs which is how redis-py\n# signals that you should use Redis with TLS.\ncelery.backends.BACKEND_ALIASES[\"rediss\"] = \"warehouse.celery:TLSRedisBackend\" # noqa\n\nfrom celery import Celery, Task\nfrom celery.backends.redis import RedisBackend as _RedisBackend\nfrom celery.signals import celeryd_init\nfrom pyramid import scripting\nfrom pyramid.threadlocal import get_current_request\nfrom raven.contrib.celery import register_signal, register_logger_signal\n\nfrom warehouse.config import Environment, configure\n\n\n@celeryd_init.connect\ndef _configure_celery(*args, **kwargs):\n config = configure()\n register_logger_signal(config.registry[\"raven.client\"])\n register_signal(config.registry[\"raven.client\"])\n\n\nclass TLSRedisBackend(_RedisBackend):\n\n def _params_from_url(self, url, defaults):\n params = super()._params_from_url(url, defaults)\n params.update({\"connection_class\": self.redis.SSLConnection})\n return params\n\n\nclass WarehouseTask(Task):\n\n abstract = True\n\n def __call__(self, *args, **kwargs):\n registry = self.app.pyramid_config.registry\n pyramid_env = scripting.prepare(registry=registry)\n\n try:\n return super().__call__(pyramid_env[\"request\"], *args, **kwargs)\n finally:\n pyramid_env[\"closer\"]()\n\n def apply_async(self, *args, **kwargs):\n # The API design of Celery makes this threadlocal pretty impossible to\n # avoid :(\n request = get_current_request()\n\n # If for whatever reason we were unable to get a request we'll just\n # skip this and call the original method to send this immediately.\n if request is None or not hasattr(request, \"tm\"):\n return super().apply_async(*args, **kwargs)\n\n # This will break things that expect to get an AsyncResult because\n # we're no longer going to be returning an async result from this when\n # called from within a request, response cycle. 
Ideally we shouldn't be\n # waiting for responses in a request/response cycle anyways though.\n request.tm.get().addAfterCommitHook(\n self._after_commit_hook,\n args=args,\n kws=kwargs,\n )\n\n def _after_commit_hook(self, success, *args, **kwargs):\n if success:\n super().apply_async(*args, **kwargs)\n\n\napp = Celery(\"warehouse\")\napp.Task = WarehouseTask\n\n\ntask = app.task\n\n\ndef includeme(config):\n s = config.registry.settings\n app.pyramid_config = config\n app.conf.update(\n BROKER_URL=s[\"celery.broker_url\"],\n BROKER_USE_SSL=s[\"warehouse.env\"] == Environment.production,\n CELERY_DISABLE_RATE_LIMITS=True,\n CELERY_RESULT_BACKEND=s[\"celery.result_url\"],\n CELERY_RESULT_SERIALIZER=\"json\",\n CELERY_TASK_SERIALIZER=\"json\",\n CELERY_ACCEPT_CONTENT=[\"json\", \"msgpack\"],\n CELERY_MESSAGE_COMPRESSION=\"gzip\",\n CELERY_QUEUE_HA_POLICY=\"all\",\n )\n", "path": "warehouse/celery.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport celery.app.backends\n\n# We need to trick Celery into supporting rediss:// URLs which is how redis-py\n# signals that you should use Redis with TLS.\ncelery.app.backends.BACKEND_ALIASES[\"rediss\"] = \"warehouse.celery:TLSRedisBackend\" # noqa\n\nfrom celery import Celery, Task\nfrom celery.backends.redis import RedisBackend as _RedisBackend\nfrom celery.signals import celeryd_init\nfrom pyramid import scripting\nfrom pyramid.threadlocal import get_current_request\nfrom raven.contrib.celery import register_signal, register_logger_signal\n\nfrom warehouse.config import Environment, configure\n\n\n@celeryd_init.connect\ndef _configure_celery(*args, **kwargs):\n config = configure()\n register_logger_signal(config.registry[\"raven.client\"])\n register_signal(config.registry[\"raven.client\"])\n\n\nclass TLSRedisBackend(_RedisBackend):\n\n def _params_from_url(self, url, defaults):\n params = super()._params_from_url(url, defaults)\n params.update({\"connection_class\": self.redis.SSLConnection})\n return params\n\n\nclass WarehouseTask(Task):\n\n abstract = True\n\n def __call__(self, *args, **kwargs):\n registry = self.app.pyramid_config.registry\n pyramid_env = scripting.prepare(registry=registry)\n\n try:\n return super().__call__(pyramid_env[\"request\"], *args, **kwargs)\n finally:\n pyramid_env[\"closer\"]()\n\n def apply_async(self, *args, **kwargs):\n # The API design of Celery makes this threadlocal pretty impossible to\n # avoid :(\n request = get_current_request()\n\n # If for whatever reason we were unable to get a request we'll just\n # skip this and call the original method to send this immediately.\n if request is None or not hasattr(request, \"tm\"):\n return super().apply_async(*args, **kwargs)\n\n # This will break things that expect to get an AsyncResult because\n # we're no longer going to be returning an async result from this when\n # called from within a request, response cycle. 
Ideally we shouldn't be\n # waiting for responses in a request/response cycle anyways though.\n request.tm.get().addAfterCommitHook(\n self._after_commit_hook,\n args=args,\n kws=kwargs,\n )\n\n def _after_commit_hook(self, success, *args, **kwargs):\n if success:\n super().apply_async(*args, **kwargs)\n\n\napp = Celery(\"warehouse\")\napp.Task = WarehouseTask\n\n\ntask = app.task\n\n\ndef includeme(config):\n s = config.registry.settings\n app.pyramid_config = config\n app.conf.update(\n BROKER_URL=s[\"celery.broker_url\"],\n BROKER_USE_SSL=s[\"warehouse.env\"] == Environment.production,\n CELERY_DISABLE_RATE_LIMITS=True,\n CELERY_RESULT_BACKEND=s[\"celery.result_url\"],\n CELERY_RESULT_SERIALIZER=\"json\",\n CELERY_TASK_SERIALIZER=\"json\",\n CELERY_ACCEPT_CONTENT=[\"json\", \"msgpack\"],\n CELERY_MESSAGE_COMPRESSION=\"gzip\",\n CELERY_QUEUE_HA_POLICY=\"all\",\n )\n", "path": "warehouse/celery.py"}]}
| 1,269 | 188 |
gh_patches_debug_21400
|
rasdani/github-patches
|
git_diff
|
sktime__sktime-1461
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[ENH] Imputer for multivariate timeseries
**Is your feature request related to a problem? Please describe.**
The Imputer transformation (sktime.transformations.series.impute.Imputer) works only with univariate time series. To avoid having to rearrange the data laboriously beforehand, a multivariate version of the Imputer would help. sktime.transformations.series.compose -> ColumnwiseTransformer could be combined with the Imputer. Is it planned to provide a multivariate imputer, or should the ColumnwiseTransformer always be applied?
**Describe the solution you'd like**
A check of the dimensionality of the input data could be added up front so that only one Imputer version is needed.
```
from sktime.transformations.base import _SeriesToSeriesTransformer
from sktime.transformations.series.compose import ColumnwiseTransformer
from sktime.transformations.series.impute import Imputer

__author__ = ["Martin Walter"]
__all__ = ["ImputerMultivariate"]

class ImputerMultivariate(_SeriesToSeriesTransformer):
    """Missing value imputation of multivariate timeseries.

    The Imputer transforms input series by replacing missing values according
    to an imputation strategy specified by `method`.

    Parameters
    ----------
    method : str, default="drift"
        Method to fill the missing values values.

        * "drift" : drift/trend values by sktime.PolynomialTrendForecaster()
        * "linear" : linear interpolation, by pd.Series.interpolate()
        * "nearest" : use nearest value, by pd.Series.interpolate()
        * "constant" : same constant value (given in arg value) for all NaN
        * "mean" : pd.Series.mean()
        * "median" : pd.Series.median()
        * "backfill" ot "bfill" : adapted from pd.Series.fillna()
        * "pad" or "ffill" : adapted from pd.Series.fillna()
        * "random" : random values between pd.Series.min() and .max()
        * "forecaster" : use an sktime Forecaster, given in arg forecaster

    missing_values : int/float/str, default=None
        The placeholder for the missing values. All occurrences of
        missing_values will be imputed. If None then np.nan is used.
    value : int/float, default=None
        Value to use to fill missing values when method="constant".
    forecaster : Any Forecaster based on sktime.BaseForecaster, default=None
        Use a given Forecaster to impute by insample predictions when
        method="forecaster". Before fitting, missing data is imputed with
        method="ffill" or "bfill" as heuristic.
    random_state : int/float/str, optional
        Value to set random.seed() if method="random", default None

    Examples
    --------
    >>> from sktime.transformations.series.impute import Imputer
    >>> from sktime.datasets import load_airline
    >>> y = load_airline()
    >>> transformer = Imputer(method="drift")
    >>> y_hat = transformer.fit_transform(y)
    """

    _tags = {
        "fit-in-transform": True,
        "handles-missing-data": True,
        "skip-inverse-transform": True,
    }

    def __init__(
        self,
        method="drift",
        random_state=None,
        value=None,
        forecaster=None,
        missing_values=None):

        self.transformer = ColumnwiseTransformer(
            Imputer(
                method=method,
                random_state=random_state,
                value=value,
                forecaster=forecaster,
                missing_values=missing_values,
            )
        )
        super(ImputerMultivariate, self).__init__()

    def fit(self, X, y=None):
        self._is_fitted = True
        self.transformer.fit(X, y)
        return self

    def transform(self, X, y=None):
        X = self.transformer.transform(X, y)
        return X
```
--- END ISSUE ---
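Editor's note: the dimensionality check described in the issue can also be sketched without a dedicated wrapper class. The snippet below is only an illustration of that idea (the `dispatch_impute` helper is hypothetical and not part of sktime's API); it routes DataFrame input through `ColumnwiseTransformer` and passes univariate Series input straight to `Imputer`.
```
import pandas as pd

from sktime.transformations.series.compose import ColumnwiseTransformer
from sktime.transformations.series.impute import Imputer


def dispatch_impute(Z, **imputer_kwargs):
    """Hypothetical helper: pick column-wise or plain imputation from input dims."""
    imputer = Imputer(**imputer_kwargs)
    if isinstance(Z, pd.DataFrame):
        # multivariate input: apply the univariate Imputer to every column
        return ColumnwiseTransformer(imputer).fit_transform(Z)
    # univariate input: the Imputer can be used directly
    return imputer.fit_transform(Z)
```
For example, `dispatch_impute(df, method="mean")` would mean-impute every column of a DataFrame `df`, while the same call on a Series behaves like `Imputer(method="mean").fit_transform(series)`.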
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sktime/transformations/series/impute.py`
Content:
```
1 #!/usr/bin/env python3 -u
2 # -*- coding: utf-8 -*-
3 # copyright: sktime developers, BSD-3-Clause License (see LICENSE file)
4 """Utilities to impute series with missing values."""
5
6 __author__ = ["Martin Walter"]
7 __all__ = ["Imputer"]
8
9 from sktime.transformations.base import _SeriesToSeriesTransformer
10 from sktime.utils.validation.series import check_series
11 from sktime.forecasting.trend import PolynomialTrendForecaster
12 from sklearn.utils import check_random_state
13 from sktime.forecasting.base import ForecastingHorizon
14 from sklearn.base import clone
15
16 import numpy as np
17 import pandas as pd
18
19
20 class Imputer(_SeriesToSeriesTransformer):
21 """Missing value imputation.
22
23 The Imputer transforms input series by replacing missing values according
24 to an imputation strategy specified by `method`.
25
26 Parameters
27 ----------
28 method : str, default="drift"
29 Method to fill the missing values values.
30
31 * "drift" : drift/trend values by sktime.PolynomialTrendForecaster()
32 * "linear" : linear interpolation, by pd.Series.interpolate()
33 * "nearest" : use nearest value, by pd.Series.interpolate()
34 * "constant" : same constant value (given in arg value) for all NaN
35 * "mean" : pd.Series.mean()
36 * "median" : pd.Series.median()
37 * "backfill" ot "bfill" : adapted from pd.Series.fillna()
38 * "pad" or "ffill" : adapted from pd.Series.fillna()
39 * "random" : random values between pd.Series.min() and .max()
40 * "forecaster" : use an sktime Forecaster, given in arg forecaster
41
42 missing_values : int/float/str, default=None
43 The placeholder for the missing values. All occurrences of
44 missing_values will be imputed. If None then np.nan is used.
45 value : int/float, default=None
46 Value to use to fill missing values when method="constant".
47 forecaster : Any Forecaster based on sktime.BaseForecaster, default=None
48 Use a given Forecaster to impute by insample predictions when
49 method="forecaster". Before fitting, missing data is imputed with
50 method="ffill" or "bfill" as heuristic.
51 random_state : int/float/str, optional
52 Value to set random.seed() if method="random", default None
53
54 Examples
55 --------
56 >>> from sktime.transformations.series.impute import Imputer
57 >>> from sktime.datasets import load_airline
58 >>> y = load_airline()
59 >>> transformer = Imputer(method="drift")
60 >>> y_hat = transformer.fit_transform(y)
61 """
62
63 _tags = {
64 "fit-in-transform": True,
65 "handles-missing-data": True,
66 "skip-inverse-transform": True,
67 }
68
69 def __init__(
70 self,
71 method="drift",
72 random_state=None,
73 value=None,
74 forecaster=None,
75 missing_values=None,
76 ):
77
78 self.method = method
79 self.missing_values = missing_values
80 self.value = value
81 self.forecaster = forecaster
82 self.random_state = random_state
83 super(Imputer, self).__init__()
84
85 def transform(self, Z, X=None):
86 """Transform data.
87
88 Returns a transformed version of Z.
89
90 Parameters
91 ----------
92 Z : pd.Series, pd.DataFrame
93
94 Returns
95 -------
96 Z : pd.Series, pd.DataFrame
97 Transformed time series(es).
98 """
99 self.check_is_fitted()
100 self._check_method()
101 Z = check_series(Z)
102 Z = Z.copy()
103
104 # replace missing_values with np.nan
105 if self.missing_values:
106 Z = Z.replace(to_replace=self.missing_values, value=np.nan)
107
108 if not _has_missing_values(Z):
109 return Z
110
111 elif self.method == "random":
112 if isinstance(Z, pd.DataFrame):
113 for col in Z:
114 Z[col] = Z[col].apply(
115 lambda i: self._get_random(Z[col]) if np.isnan(i) else i
116 )
117 else:
118 Z = Z.apply(lambda i: self._get_random(Z) if np.isnan(i) else i)
119 elif self.method == "constant":
120 Z = Z.fillna(value=self.value)
121 elif self.method in ["backfill", "bfill", "pad", "ffill"]:
122 Z = Z.fillna(method=self.method)
123 elif self.method == "drift":
124 forecaster = PolynomialTrendForecaster(degree=1)
125 Z = _impute_with_forecaster(forecaster, Z)
126 elif self.method == "forecaster":
127 forecaster = clone(self.forecaster)
128 Z = _impute_with_forecaster(forecaster, Z)
129 elif self.method == "mean":
130 Z = Z.fillna(value=Z.mean())
131 elif self.method == "median":
132 Z = Z.fillna(value=Z.median())
133 elif self.method in ["nearest", "linear"]:
134 Z = Z.interpolate(method=self.method)
135 else:
136 raise ValueError(f"`method`: {self.method} not available.")
137 # fill first/last elements of series,
138 # as some methods (e.g. "linear") cant impute those
139 Z = Z.fillna(method="ffill").fillna(method="backfill")
140 return Z
141
142 def _check_method(self):
143 if (
144 self.value is not None
145 and self.method != "constant"
146 or self.method == "constant"
147 and self.value is None
148 ):
149 raise ValueError(
150 """Imputing with a value can only be
151 used if method="constant" and if parameter "value" is not None"""
152 )
153 elif (
154 self.forecaster is not None
155 and self.method != "forecaster"
156 or self.method == "forecaster"
157 and self.forecaster is None
158 ):
159 raise ValueError(
160 """Imputing with a forecaster can only be used if
161 method=\"forecaster\" and if arg forecaster is not None"""
162 )
163 else:
164 pass
165
166 def _get_random(self, Z):
167 """Create a random int or float value.
168
169 :param Z: Series
170 :type Z: pd.Series
171 :return: Random int or float between min and max of Z
172 :rtype: int/float
173 """
174 rng = check_random_state(self.random_state)
175 # check if series contains only int or int-like values (e.g. 3.0)
176 if (Z.dropna() % 1 == 0).all():
177 return rng.randint(Z.min(), Z.max())
178 else:
179 return rng.uniform(Z.min(), Z.max())
180
181
182 def _impute_with_forecaster(forecaster, Z):
183 """Use a given forecaster for imputation by in-sample predictions.
184
185 Parameters
186 ----------
187 forecaster: Forecaster
188 Forecaster to use for imputation
189 Z : pd.Series or pd.DataFrame
190 Series to impute.
191
192 Returns
193 -------
194 zt : pd.Series or pd.DataFrame
195 Series with imputed values.
196 """
197 if isinstance(Z, pd.Series):
198 series = [Z]
199 elif isinstance(Z, pd.DataFrame):
200 series = [Z[column] for column in Z]
201
202 for z in series:
203 # define fh based on index of missing values
204 na_index = z.index[z.isna()]
205 fh = ForecastingHorizon(values=na_index, is_relative=False)
206
207 # fill NaN before fitting with ffill and backfill (heuristic)
208 forecaster.fit(y=z.fillna(method="ffill").fillna(method="backfill"), fh=fh)
209
210 # replace missing values with predicted values
211 z[na_index] = forecaster.predict()
212 return Z
213
214
215 def _has_missing_values(Z):
216 return Z.isnull().to_numpy().any()
217
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sktime/transformations/series/impute.py b/sktime/transformations/series/impute.py
--- a/sktime/transformations/series/impute.py
+++ b/sktime/transformations/series/impute.py
@@ -6,15 +6,17 @@
__author__ = ["Martin Walter"]
__all__ = ["Imputer"]
+
+import numpy as np
+import pandas as pd
+
+from sklearn.base import clone
+from sklearn.utils import check_random_state
+
from sktime.transformations.base import _SeriesToSeriesTransformer
from sktime.utils.validation.series import check_series
from sktime.forecasting.trend import PolynomialTrendForecaster
-from sklearn.utils import check_random_state
from sktime.forecasting.base import ForecastingHorizon
-from sklearn.base import clone
-
-import numpy as np
-import pandas as pd
class Imputer(_SeriesToSeriesTransformer):
@@ -64,6 +66,7 @@
"fit-in-transform": True,
"handles-missing-data": True,
"skip-inverse-transform": True,
+ "univariate-only": False,
}
def __init__(
|
{"golden_diff": "diff --git a/sktime/transformations/series/impute.py b/sktime/transformations/series/impute.py\n--- a/sktime/transformations/series/impute.py\n+++ b/sktime/transformations/series/impute.py\n@@ -6,15 +6,17 @@\n __author__ = [\"Martin Walter\"]\n __all__ = [\"Imputer\"]\n \n+\n+import numpy as np\n+import pandas as pd\n+\n+from sklearn.base import clone\n+from sklearn.utils import check_random_state\n+\n from sktime.transformations.base import _SeriesToSeriesTransformer\n from sktime.utils.validation.series import check_series\n from sktime.forecasting.trend import PolynomialTrendForecaster\n-from sklearn.utils import check_random_state\n from sktime.forecasting.base import ForecastingHorizon\n-from sklearn.base import clone\n-\n-import numpy as np\n-import pandas as pd\n \n \n class Imputer(_SeriesToSeriesTransformer):\n@@ -64,6 +66,7 @@\n \"fit-in-transform\": True,\n \"handles-missing-data\": True,\n \"skip-inverse-transform\": True,\n+ \"univariate-only\": False,\n }\n \n def __init__(\n", "issue": "[ENH] Imputer for multivariate timeseries\n**Is your feature request related to a problem? Please describe.**\r\n\r\nImputer Transformation (sktime.transformations.series.impute.Imputer) works only with univariate time series. So that one does not have to manipulate the data laboriously before, a multivariate version of Imputer would help. sktime.transformations.series.compose -> ColumnwiseTransformer could work with the Imputer. Is it planned to provide a multivariate imputer or should the ColumnwiseTransformer always be applied?\r\n\r\n**Describe the solution you'd like**\r\nA query of the dimension of the input data could be prefixed so that only one Imputer version is needed.\r\n\r\n```\r\nfrom sktime.transformations.base import _SeriesToSeriesTransformer\r\nfrom sktime.transformations.series.compose import ColumnwiseTransformer\r\nfrom sktime.transformations.series.impute import Imputer\r\n\r\n__author__ = [\"Martin Walter\"]\r\n__all__ = [\"ImputerMultivariate\"]\r\n\r\nclass ImputerMultivariate(_SeriesToSeriesTransformer):\r\n \"\"\"Missing value imputation of multivariate timeseries.\r\n\r\n The Imputer transforms input series by replacing missing values according\r\n to an imputation strategy specified by `method`.\r\n\r\n Parameters\r\n ----------\r\n method : str, default=\"drift\"\r\n Method to fill the missing values values.\r\n\r\n * \"drift\" : drift/trend values by sktime.PolynomialTrendForecaster()\r\n * \"linear\" : linear interpolation, by pd.Series.interpolate()\r\n * \"nearest\" : use nearest value, by pd.Series.interpolate()\r\n * \"constant\" : same constant value (given in arg value) for all NaN\r\n * \"mean\" : pd.Series.mean()\r\n * \"median\" : pd.Series.median()\r\n * \"backfill\" ot \"bfill\" : adapted from pd.Series.fillna()\r\n * \"pad\" or \"ffill\" : adapted from pd.Series.fillna()\r\n * \"random\" : random values between pd.Series.min() and .max()\r\n * \"forecaster\" : use an sktime Forecaster, given in arg forecaster\r\n\r\n missing_values : int/float/str, default=None\r\n The placeholder for the missing values. All occurrences of\r\n missing_values will be imputed. If None then np.nan is used.\r\n value : int/float, default=None\r\n Value to use to fill missing values when method=\"constant\".\r\n forecaster : Any Forecaster based on sktime.BaseForecaster, default=None\r\n Use a given Forecaster to impute by insample predictions when\r\n method=\"forecaster\". 
Before fitting, missing data is imputed with\r\n method=\"ffill\" or \"bfill\" as heuristic.\r\n random_state : int/float/str, optional\r\n Value to set random.seed() if method=\"random\", default None\r\n\r\n Examples\r\n --------\r\n >>> from sktime.transformations.series.impute import Imputer\r\n >>> from sktime.datasets import load_airline\r\n >>> y = load_airline()\r\n >>> transformer = Imputer(method=\"drift\")\r\n >>> y_hat = transformer.fit_transform(y)\r\n \"\"\"\r\n _tags = {\r\n \"fit-in-transform\": True,\r\n \"handles-missing-data\": True,\r\n \"skip-inverse-transform\": True,\r\n }\r\n def __init__(\r\n self, \r\n method=\"drift\", \r\n random_state=None, \r\n value=None,\r\n forecaster=None,\r\n missing_values=None):\r\n\r\n self.transformer = ColumnwiseTransformer(\r\n Imputer(\r\n method=method,\r\n random_state=random_state,\r\n value=value,\r\n forecaster=forecaster,\r\n missing_values=missing_values,\r\n )\r\n )\r\n super(ImputerMultivariate, self).__init__()\r\n \r\n def fit(self, X, y=None):\r\n self._is_fitted = True\r\n self.transformer.fit(X, y)\r\n return self\r\n\r\n def transform(self, X, y=None):\r\n X = self.transformer.transform(X, y)\r\n return X\r\n```\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3 -u\n# -*- coding: utf-8 -*-\n# copyright: sktime developers, BSD-3-Clause License (see LICENSE file)\n\"\"\"Utilities to impute series with missing values.\"\"\"\n\n__author__ = [\"Martin Walter\"]\n__all__ = [\"Imputer\"]\n\nfrom sktime.transformations.base import _SeriesToSeriesTransformer\nfrom sktime.utils.validation.series import check_series\nfrom sktime.forecasting.trend import PolynomialTrendForecaster\nfrom sklearn.utils import check_random_state\nfrom sktime.forecasting.base import ForecastingHorizon\nfrom sklearn.base import clone\n\nimport numpy as np\nimport pandas as pd\n\n\nclass Imputer(_SeriesToSeriesTransformer):\n \"\"\"Missing value imputation.\n\n The Imputer transforms input series by replacing missing values according\n to an imputation strategy specified by `method`.\n\n Parameters\n ----------\n method : str, default=\"drift\"\n Method to fill the missing values values.\n\n * \"drift\" : drift/trend values by sktime.PolynomialTrendForecaster()\n * \"linear\" : linear interpolation, by pd.Series.interpolate()\n * \"nearest\" : use nearest value, by pd.Series.interpolate()\n * \"constant\" : same constant value (given in arg value) for all NaN\n * \"mean\" : pd.Series.mean()\n * \"median\" : pd.Series.median()\n * \"backfill\" ot \"bfill\" : adapted from pd.Series.fillna()\n * \"pad\" or \"ffill\" : adapted from pd.Series.fillna()\n * \"random\" : random values between pd.Series.min() and .max()\n * \"forecaster\" : use an sktime Forecaster, given in arg forecaster\n\n missing_values : int/float/str, default=None\n The placeholder for the missing values. All occurrences of\n missing_values will be imputed. If None then np.nan is used.\n value : int/float, default=None\n Value to use to fill missing values when method=\"constant\".\n forecaster : Any Forecaster based on sktime.BaseForecaster, default=None\n Use a given Forecaster to impute by insample predictions when\n method=\"forecaster\". 
Before fitting, missing data is imputed with\n method=\"ffill\" or \"bfill\" as heuristic.\n random_state : int/float/str, optional\n Value to set random.seed() if method=\"random\", default None\n\n Examples\n --------\n >>> from sktime.transformations.series.impute import Imputer\n >>> from sktime.datasets import load_airline\n >>> y = load_airline()\n >>> transformer = Imputer(method=\"drift\")\n >>> y_hat = transformer.fit_transform(y)\n \"\"\"\n\n _tags = {\n \"fit-in-transform\": True,\n \"handles-missing-data\": True,\n \"skip-inverse-transform\": True,\n }\n\n def __init__(\n self,\n method=\"drift\",\n random_state=None,\n value=None,\n forecaster=None,\n missing_values=None,\n ):\n\n self.method = method\n self.missing_values = missing_values\n self.value = value\n self.forecaster = forecaster\n self.random_state = random_state\n super(Imputer, self).__init__()\n\n def transform(self, Z, X=None):\n \"\"\"Transform data.\n\n Returns a transformed version of Z.\n\n Parameters\n ----------\n Z : pd.Series, pd.DataFrame\n\n Returns\n -------\n Z : pd.Series, pd.DataFrame\n Transformed time series(es).\n \"\"\"\n self.check_is_fitted()\n self._check_method()\n Z = check_series(Z)\n Z = Z.copy()\n\n # replace missing_values with np.nan\n if self.missing_values:\n Z = Z.replace(to_replace=self.missing_values, value=np.nan)\n\n if not _has_missing_values(Z):\n return Z\n\n elif self.method == \"random\":\n if isinstance(Z, pd.DataFrame):\n for col in Z:\n Z[col] = Z[col].apply(\n lambda i: self._get_random(Z[col]) if np.isnan(i) else i\n )\n else:\n Z = Z.apply(lambda i: self._get_random(Z) if np.isnan(i) else i)\n elif self.method == \"constant\":\n Z = Z.fillna(value=self.value)\n elif self.method in [\"backfill\", \"bfill\", \"pad\", \"ffill\"]:\n Z = Z.fillna(method=self.method)\n elif self.method == \"drift\":\n forecaster = PolynomialTrendForecaster(degree=1)\n Z = _impute_with_forecaster(forecaster, Z)\n elif self.method == \"forecaster\":\n forecaster = clone(self.forecaster)\n Z = _impute_with_forecaster(forecaster, Z)\n elif self.method == \"mean\":\n Z = Z.fillna(value=Z.mean())\n elif self.method == \"median\":\n Z = Z.fillna(value=Z.median())\n elif self.method in [\"nearest\", \"linear\"]:\n Z = Z.interpolate(method=self.method)\n else:\n raise ValueError(f\"`method`: {self.method} not available.\")\n # fill first/last elements of series,\n # as some methods (e.g. \"linear\") cant impute those\n Z = Z.fillna(method=\"ffill\").fillna(method=\"backfill\")\n return Z\n\n def _check_method(self):\n if (\n self.value is not None\n and self.method != \"constant\"\n or self.method == \"constant\"\n and self.value is None\n ):\n raise ValueError(\n \"\"\"Imputing with a value can only be\n used if method=\"constant\" and if parameter \"value\" is not None\"\"\"\n )\n elif (\n self.forecaster is not None\n and self.method != \"forecaster\"\n or self.method == \"forecaster\"\n and self.forecaster is None\n ):\n raise ValueError(\n \"\"\"Imputing with a forecaster can only be used if\n method=\\\"forecaster\\\" and if arg forecaster is not None\"\"\"\n )\n else:\n pass\n\n def _get_random(self, Z):\n \"\"\"Create a random int or float value.\n\n :param Z: Series\n :type Z: pd.Series\n :return: Random int or float between min and max of Z\n :rtype: int/float\n \"\"\"\n rng = check_random_state(self.random_state)\n # check if series contains only int or int-like values (e.g. 
3.0)\n if (Z.dropna() % 1 == 0).all():\n return rng.randint(Z.min(), Z.max())\n else:\n return rng.uniform(Z.min(), Z.max())\n\n\ndef _impute_with_forecaster(forecaster, Z):\n \"\"\"Use a given forecaster for imputation by in-sample predictions.\n\n Parameters\n ----------\n forecaster: Forecaster\n Forecaster to use for imputation\n Z : pd.Series or pd.DataFrame\n Series to impute.\n\n Returns\n -------\n zt : pd.Series or pd.DataFrame\n Series with imputed values.\n \"\"\"\n if isinstance(Z, pd.Series):\n series = [Z]\n elif isinstance(Z, pd.DataFrame):\n series = [Z[column] for column in Z]\n\n for z in series:\n # define fh based on index of missing values\n na_index = z.index[z.isna()]\n fh = ForecastingHorizon(values=na_index, is_relative=False)\n\n # fill NaN before fitting with ffill and backfill (heuristic)\n forecaster.fit(y=z.fillna(method=\"ffill\").fillna(method=\"backfill\"), fh=fh)\n\n # replace missing values with predicted values\n z[na_index] = forecaster.predict()\n return Z\n\n\ndef _has_missing_values(Z):\n return Z.isnull().to_numpy().any()\n", "path": "sktime/transformations/series/impute.py"}], "after_files": [{"content": "#!/usr/bin/env python3 -u\n# -*- coding: utf-8 -*-\n# copyright: sktime developers, BSD-3-Clause License (see LICENSE file)\n\"\"\"Utilities to impute series with missing values.\"\"\"\n\n__author__ = [\"Martin Walter\"]\n__all__ = [\"Imputer\"]\n\n\nimport numpy as np\nimport pandas as pd\n\nfrom sklearn.base import clone\nfrom sklearn.utils import check_random_state\n\nfrom sktime.transformations.base import _SeriesToSeriesTransformer\nfrom sktime.utils.validation.series import check_series\nfrom sktime.forecasting.trend import PolynomialTrendForecaster\nfrom sktime.forecasting.base import ForecastingHorizon\n\n\nclass Imputer(_SeriesToSeriesTransformer):\n \"\"\"Missing value imputation.\n\n The Imputer transforms input series by replacing missing values according\n to an imputation strategy specified by `method`.\n\n Parameters\n ----------\n method : str, default=\"drift\"\n Method to fill the missing values values.\n\n * \"drift\" : drift/trend values by sktime.PolynomialTrendForecaster()\n * \"linear\" : linear interpolation, by pd.Series.interpolate()\n * \"nearest\" : use nearest value, by pd.Series.interpolate()\n * \"constant\" : same constant value (given in arg value) for all NaN\n * \"mean\" : pd.Series.mean()\n * \"median\" : pd.Series.median()\n * \"backfill\" ot \"bfill\" : adapted from pd.Series.fillna()\n * \"pad\" or \"ffill\" : adapted from pd.Series.fillna()\n * \"random\" : random values between pd.Series.min() and .max()\n * \"forecaster\" : use an sktime Forecaster, given in arg forecaster\n\n missing_values : int/float/str, default=None\n The placeholder for the missing values. All occurrences of\n missing_values will be imputed. If None then np.nan is used.\n value : int/float, default=None\n Value to use to fill missing values when method=\"constant\".\n forecaster : Any Forecaster based on sktime.BaseForecaster, default=None\n Use a given Forecaster to impute by insample predictions when\n method=\"forecaster\". 
Before fitting, missing data is imputed with\n method=\"ffill\" or \"bfill\" as heuristic.\n random_state : int/float/str, optional\n Value to set random.seed() if method=\"random\", default None\n\n Examples\n --------\n >>> from sktime.transformations.series.impute import Imputer\n >>> from sktime.datasets import load_airline\n >>> y = load_airline()\n >>> transformer = Imputer(method=\"drift\")\n >>> y_hat = transformer.fit_transform(y)\n \"\"\"\n\n _tags = {\n \"fit-in-transform\": True,\n \"handles-missing-data\": True,\n \"skip-inverse-transform\": True,\n \"univariate-only\": False,\n }\n\n def __init__(\n self,\n method=\"drift\",\n random_state=None,\n value=None,\n forecaster=None,\n missing_values=None,\n ):\n\n self.method = method\n self.missing_values = missing_values\n self.value = value\n self.forecaster = forecaster\n self.random_state = random_state\n super(Imputer, self).__init__()\n\n def transform(self, Z, X=None):\n \"\"\"Transform data.\n\n Returns a transformed version of Z.\n\n Parameters\n ----------\n Z : pd.Series, pd.DataFrame\n\n Returns\n -------\n Z : pd.Series, pd.DataFrame\n Transformed time series(es).\n \"\"\"\n self.check_is_fitted()\n self._check_method()\n Z = check_series(Z)\n Z = Z.copy()\n\n # replace missing_values with np.nan\n if self.missing_values:\n Z = Z.replace(to_replace=self.missing_values, value=np.nan)\n\n if not _has_missing_values(Z):\n return Z\n\n elif self.method == \"random\":\n if isinstance(Z, pd.DataFrame):\n for col in Z:\n Z[col] = Z[col].apply(\n lambda i: self._get_random(Z[col]) if np.isnan(i) else i\n )\n else:\n Z = Z.apply(lambda i: self._get_random(Z) if np.isnan(i) else i)\n elif self.method == \"constant\":\n Z = Z.fillna(value=self.value)\n elif self.method in [\"backfill\", \"bfill\", \"pad\", \"ffill\"]:\n Z = Z.fillna(method=self.method)\n elif self.method == \"drift\":\n forecaster = PolynomialTrendForecaster(degree=1)\n Z = _impute_with_forecaster(forecaster, Z)\n elif self.method == \"forecaster\":\n forecaster = clone(self.forecaster)\n Z = _impute_with_forecaster(forecaster, Z)\n elif self.method == \"mean\":\n Z = Z.fillna(value=Z.mean())\n elif self.method == \"median\":\n Z = Z.fillna(value=Z.median())\n elif self.method in [\"nearest\", \"linear\"]:\n Z = Z.interpolate(method=self.method)\n else:\n raise ValueError(f\"`method`: {self.method} not available.\")\n # fill first/last elements of series,\n # as some methods (e.g. \"linear\") cant impute those\n Z = Z.fillna(method=\"ffill\").fillna(method=\"backfill\")\n return Z\n\n def _check_method(self):\n if (\n self.value is not None\n and self.method != \"constant\"\n or self.method == \"constant\"\n and self.value is None\n ):\n raise ValueError(\n \"\"\"Imputing with a value can only be\n used if method=\"constant\" and if parameter \"value\" is not None\"\"\"\n )\n elif (\n self.forecaster is not None\n and self.method != \"forecaster\"\n or self.method == \"forecaster\"\n and self.forecaster is None\n ):\n raise ValueError(\n \"\"\"Imputing with a forecaster can only be used if\n method=\\\"forecaster\\\" and if arg forecaster is not None\"\"\"\n )\n else:\n pass\n\n def _get_random(self, Z):\n \"\"\"Create a random int or float value.\n\n :param Z: Series\n :type Z: pd.Series\n :return: Random int or float between min and max of Z\n :rtype: int/float\n \"\"\"\n rng = check_random_state(self.random_state)\n # check if series contains only int or int-like values (e.g. 
3.0)\n if (Z.dropna() % 1 == 0).all():\n return rng.randint(Z.min(), Z.max())\n else:\n return rng.uniform(Z.min(), Z.max())\n\n\ndef _impute_with_forecaster(forecaster, Z):\n \"\"\"Use a given forecaster for imputation by in-sample predictions.\n\n Parameters\n ----------\n forecaster: Forecaster\n Forecaster to use for imputation\n Z : pd.Series or pd.DataFrame\n Series to impute.\n\n Returns\n -------\n zt : pd.Series or pd.DataFrame\n Series with imputed values.\n \"\"\"\n if isinstance(Z, pd.Series):\n series = [Z]\n elif isinstance(Z, pd.DataFrame):\n series = [Z[column] for column in Z]\n\n for z in series:\n # define fh based on index of missing values\n na_index = z.index[z.isna()]\n fh = ForecastingHorizon(values=na_index, is_relative=False)\n\n # fill NaN before fitting with ffill and backfill (heuristic)\n forecaster.fit(y=z.fillna(method=\"ffill\").fillna(method=\"backfill\"), fh=fh)\n\n # replace missing values with predicted values\n z[na_index] = forecaster.predict()\n return Z\n\n\ndef _has_missing_values(Z):\n return Z.isnull().to_numpy().any()\n", "path": "sktime/transformations/series/impute.py"}]}
| 3,354 | 254 |
gh_patches_debug_3705
|
rasdani/github-patches
|
git_diff
|
getsentry__sentry-python-323
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Make sure everything works with Django-Rest-Framework
We should use django-rest-framework's `request.data` instead of trying to extract a structured body ourselves
--- END ISSUE ---
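As a rough, non-authoritative sketch of the direction the issue points at (not necessarily the fix that was merged), the request extractor could prefer an already-parsed DRF payload when one is available. The `_drf_parsed_body` helper below is an assumption made for illustration; only `request.data` itself is real django-rest-framework API.
```
def _drf_parsed_body(request):
    """Sketch: reuse django-rest-framework's parsed ``request.data`` if present.

    Returns None when no parsed payload is available, signalling that the
    existing raw-body/form extraction path should run instead.
    """
    data = getattr(request, "data", None)
    if data is not None:
        # DRF has already negotiated the content type and parsed the body,
        # so there is no need to re-parse request.body ourselves.
        return data
    return None
```
The Django integration's `DjangoRequestExtractor` (shown in the file content below) would then presumably consult such a helper before falling back to its `form()`/`files()`/`raw_data()` methods.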
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sentry_sdk/integrations/django/__init__.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 from __future__ import absolute_import
3
4 import sys
5 import weakref
6
7 from django import VERSION as DJANGO_VERSION # type: ignore
8 from django.db.models.query import QuerySet # type: ignore
9 from django.core import signals # type: ignore
10
11 if False:
12 from typing import Any
13 from typing import Dict
14 from typing import Tuple
15 from typing import Union
16 from sentry_sdk.integrations.wsgi import _ScopedResponse
17 from typing import Callable
18 from django.core.handlers.wsgi import WSGIRequest # type: ignore
19 from django.http.response import HttpResponse # type: ignore
20 from django.http.request import QueryDict # type: ignore
21 from django.utils.datastructures import MultiValueDict # type: ignore
22 from typing import List
23
24
25 try:
26 from django.urls import resolve # type: ignore
27 except ImportError:
28 from django.core.urlresolvers import resolve # type: ignore
29
30 from sentry_sdk import Hub
31 from sentry_sdk.hub import _should_send_default_pii
32 from sentry_sdk.scope import add_global_event_processor
33 from sentry_sdk.utils import (
34 add_global_repr_processor,
35 capture_internal_exceptions,
36 event_from_exception,
37 safe_repr,
38 format_and_strip,
39 transaction_from_function,
40 walk_exception_chain,
41 )
42 from sentry_sdk.integrations import Integration
43 from sentry_sdk.integrations.logging import ignore_logger
44 from sentry_sdk.integrations.wsgi import SentryWsgiMiddleware
45 from sentry_sdk.integrations._wsgi_common import RequestExtractor
46 from sentry_sdk.integrations.django.transactions import LEGACY_RESOLVER
47 from sentry_sdk.integrations.django.templates import get_template_frame_from_exception
48
49
50 if DJANGO_VERSION < (1, 10):
51
52 def is_authenticated(request_user):
53 # type: (Any) -> bool
54 return request_user.is_authenticated()
55
56
57 else:
58
59 def is_authenticated(request_user):
60 # type: (Any) -> bool
61 return request_user.is_authenticated
62
63
64 class DjangoIntegration(Integration):
65 identifier = "django"
66
67 transaction_style = None
68
69 def __init__(self, transaction_style="url"):
70 # type: (str) -> None
71 TRANSACTION_STYLE_VALUES = ("function_name", "url")
72 if transaction_style not in TRANSACTION_STYLE_VALUES:
73 raise ValueError(
74 "Invalid value for transaction_style: %s (must be in %s)"
75 % (transaction_style, TRANSACTION_STYLE_VALUES)
76 )
77 self.transaction_style = transaction_style
78
79 @staticmethod
80 def setup_once():
81 # type: () -> None
82 install_sql_hook()
83 # Patch in our custom middleware.
84
85 # logs an error for every 500
86 ignore_logger("django.server")
87 ignore_logger("django.request")
88
89 from django.core.handlers.wsgi import WSGIHandler
90
91 old_app = WSGIHandler.__call__
92
93 def sentry_patched_wsgi_handler(self, environ, start_response):
94 # type: (Any, Dict[str, str], Callable) -> _ScopedResponse
95 if Hub.current.get_integration(DjangoIntegration) is None:
96 return old_app(self, environ, start_response)
97
98 return SentryWsgiMiddleware(lambda *a, **kw: old_app(self, *a, **kw))(
99 environ, start_response
100 )
101
102 WSGIHandler.__call__ = sentry_patched_wsgi_handler
103
104 # patch get_response, because at that point we have the Django request
105 # object
106 from django.core.handlers.base import BaseHandler # type: ignore
107
108 old_get_response = BaseHandler.get_response
109
110 def sentry_patched_get_response(self, request):
111 # type: (Any, WSGIRequest) -> Union[HttpResponse, BaseException]
112 hub = Hub.current
113 integration = hub.get_integration(DjangoIntegration)
114 if integration is not None:
115 with hub.configure_scope() as scope:
116 scope.add_event_processor(
117 _make_event_processor(weakref.ref(request), integration)
118 )
119 return old_get_response(self, request)
120
121 BaseHandler.get_response = sentry_patched_get_response
122
123 signals.got_request_exception.connect(_got_request_exception)
124
125 @add_global_event_processor
126 def process_django_templates(event, hint):
127 # type: (Dict[str, Any], Dict[str, Any]) -> Dict[str, Any]
128 exc_info = hint.get("exc_info", None)
129
130 if exc_info is None:
131 return event
132
133 exception = event.get("exception", None)
134
135 if exception is None:
136 return event
137
138 values = exception.get("values", None)
139
140 if values is None:
141 return event
142
143 for exception, (_, exc_value, _) in zip(
144 values, walk_exception_chain(exc_info)
145 ):
146 frame = get_template_frame_from_exception(exc_value)
147 if frame is not None:
148 frames = exception.get("stacktrace", {}).get("frames", [])
149
150 for i in reversed(range(len(frames))):
151 f = frames[i]
152 if (
153 f.get("function") in ("parse", "render")
154 and f.get("module") == "django.template.base"
155 ):
156 i += 1
157 break
158 else:
159 i = len(frames)
160
161 frames.insert(i, frame)
162
163 return event
164
165 @add_global_repr_processor
166 def _django_queryset_repr(value, hint):
167 if not isinstance(value, QuerySet) or value._result_cache:
168 return NotImplemented
169
170 # Do not call Hub.get_integration here. It is intentional that
171 # running under a new hub does not suddenly start executing
172 # querysets. This might be surprising to the user but it's likely
173 # less annoying.
174
175 return u"<%s from %s at 0x%x>" % (
176 value.__class__.__name__,
177 value.__module__,
178 id(value),
179 )
180
181
182 def _make_event_processor(weak_request, integration):
183 # type: (Callable[[], WSGIRequest], DjangoIntegration) -> Callable
184 def event_processor(event, hint):
185 # type: (Dict[str, Any], Dict[str, Any]) -> Dict[str, Any]
186 # if the request is gone we are fine not logging the data from
187 # it. This might happen if the processor is pushed away to
188 # another thread.
189 request = weak_request()
190 if request is None:
191 return event
192
193 try:
194 if integration.transaction_style == "function_name":
195 event["transaction"] = transaction_from_function(
196 resolve(request.path).func
197 )
198 elif integration.transaction_style == "url":
199 event["transaction"] = LEGACY_RESOLVER.resolve(request.path)
200 except Exception:
201 pass
202
203 with capture_internal_exceptions():
204 DjangoRequestExtractor(request).extract_into_event(event)
205
206 if _should_send_default_pii():
207 with capture_internal_exceptions():
208 _set_user_info(request, event)
209
210 return event
211
212 return event_processor
213
214
215 def _got_request_exception(request=None, **kwargs):
216 # type: (WSGIRequest, **Any) -> None
217 hub = Hub.current
218 integration = hub.get_integration(DjangoIntegration)
219 if integration is not None:
220 event, hint = event_from_exception(
221 sys.exc_info(),
222 client_options=hub.client.options,
223 mechanism={"type": "django", "handled": False},
224 )
225 hub.capture_event(event, hint=hint)
226
227
228 class DjangoRequestExtractor(RequestExtractor):
229 def env(self):
230 # type: () -> Dict[str, str]
231 return self.request.META
232
233 def cookies(self):
234 # type: () -> Dict[str, str]
235 return self.request.COOKIES
236
237 def raw_data(self):
238 # type: () -> bytes
239 return self.request.body
240
241 def form(self):
242 # type: () -> QueryDict
243 return self.request.POST
244
245 def files(self):
246 # type: () -> MultiValueDict
247 return self.request.FILES
248
249 def size_of_file(self, file):
250 return file.size
251
252
253 def _set_user_info(request, event):
254 # type: (WSGIRequest, Dict[str, Any]) -> None
255 user_info = event.setdefault("user", {})
256
257 user = getattr(request, "user", None)
258
259 if user is None or not is_authenticated(user):
260 return
261
262 try:
263 user_info["id"] = str(user.pk)
264 except Exception:
265 pass
266
267 try:
268 user_info["email"] = user.email
269 except Exception:
270 pass
271
272 try:
273 user_info["username"] = user.get_username()
274 except Exception:
275 pass
276
277
278 class _FormatConverter(object):
279 def __init__(self, param_mapping):
280 # type: (Dict[str, int]) -> None
281
282 self.param_mapping = param_mapping
283 self.params = [] # type: List[Any]
284
285 def __getitem__(self, val):
286 # type: (str) -> str
287 self.params.append(self.param_mapping.get(val))
288 return "%s"
289
290
291 def format_sql(sql, params):
292 # type: (Any, Any) -> Tuple[str, List[str]]
293 rv = []
294
295 if isinstance(params, dict):
296 # convert sql with named parameters to sql with unnamed parameters
297 conv = _FormatConverter(params)
298 if params:
299 sql = sql % conv
300 params = conv.params
301 else:
302 params = ()
303
304 for param in params or ():
305 if param is None:
306 rv.append("NULL")
307 param = safe_repr(param)
308 rv.append(param)
309
310 return sql, rv
311
312
313 def record_sql(sql, params, cursor=None):
314 # type: (Any, Any, Any) -> None
315 hub = Hub.current
316 if hub.get_integration(DjangoIntegration) is None:
317 return
318
319 with capture_internal_exceptions():
320 if cursor and hasattr(cursor, "mogrify"): # psycopg2
321 real_sql = cursor.mogrify(sql, params)
322 with capture_internal_exceptions():
323 if isinstance(real_sql, bytes):
324 real_sql = real_sql.decode(cursor.connection.encoding)
325 else:
326 real_sql, real_params = format_sql(sql, params)
327
328 if real_params:
329 try:
330 real_sql = format_and_strip(real_sql, real_params)
331 except Exception:
332 pass
333 hub.add_breadcrumb(message=real_sql, category="query")
334
335
336 def install_sql_hook():
337 # type: () -> None
338 """If installed this causes Django's queries to be captured."""
339 try:
340 from django.db.backends.utils import CursorWrapper # type: ignore
341 except ImportError:
342 from django.db.backends.util import CursorWrapper # type: ignore
343
344 try:
345 real_execute = CursorWrapper.execute
346 real_executemany = CursorWrapper.executemany
347 except AttributeError:
348 # This won't work on Django versions < 1.6
349 return
350
351 def record_many_sql(sql, param_list, cursor):
352 for params in param_list:
353 record_sql(sql, params, cursor)
354
355 def execute(self, sql, params=None):
356 try:
357 return real_execute(self, sql, params)
358 finally:
359 record_sql(sql, params, self.cursor)
360
361 def executemany(self, sql, param_list):
362 try:
363 return real_executemany(self, sql, param_list)
364 finally:
365 record_many_sql(sql, param_list, self.cursor)
366
367 CursorWrapper.execute = execute
368 CursorWrapper.executemany = executemany
369 ignore_logger("django.db.backends")
370
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sentry_sdk/integrations/django/__init__.py b/sentry_sdk/integrations/django/__init__.py
--- a/sentry_sdk/integrations/django/__init__.py
+++ b/sentry_sdk/integrations/django/__init__.py
@@ -265,6 +265,12 @@
def size_of_file(self, file):
return file.size
+ def parsed_body(self):
+ try:
+ return self.request.data
+ except AttributeError:
+ return RequestExtractor.parsed_body(self)
+
def _set_user_info(request, event):
# type: (WSGIRequest, Dict[str, Any]) -> None
|
{"golden_diff": "diff --git a/sentry_sdk/integrations/django/__init__.py b/sentry_sdk/integrations/django/__init__.py\n--- a/sentry_sdk/integrations/django/__init__.py\n+++ b/sentry_sdk/integrations/django/__init__.py\n@@ -265,6 +265,12 @@\n def size_of_file(self, file):\n return file.size\n \n+ def parsed_body(self):\n+ try:\n+ return self.request.data\n+ except AttributeError:\n+ return RequestExtractor.parsed_body(self)\n+\n \n def _set_user_info(request, event):\n # type: (WSGIRequest, Dict[str, Any]) -> None\n", "issue": "Make sure everything works with Django-Rest-Framework\nWe should django-rest-framework's `request.data` instead of trying to extract a structured body ourselves\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import absolute_import\n\nimport sys\nimport weakref\n\nfrom django import VERSION as DJANGO_VERSION # type: ignore\nfrom django.db.models.query import QuerySet # type: ignore\nfrom django.core import signals # type: ignore\n\nif False:\n from typing import Any\n from typing import Dict\n from typing import Tuple\n from typing import Union\n from sentry_sdk.integrations.wsgi import _ScopedResponse\n from typing import Callable\n from django.core.handlers.wsgi import WSGIRequest # type: ignore\n from django.http.response import HttpResponse # type: ignore\n from django.http.request import QueryDict # type: ignore\n from django.utils.datastructures import MultiValueDict # type: ignore\n from typing import List\n\n\ntry:\n from django.urls import resolve # type: ignore\nexcept ImportError:\n from django.core.urlresolvers import resolve # type: ignore\n\nfrom sentry_sdk import Hub\nfrom sentry_sdk.hub import _should_send_default_pii\nfrom sentry_sdk.scope import add_global_event_processor\nfrom sentry_sdk.utils import (\n add_global_repr_processor,\n capture_internal_exceptions,\n event_from_exception,\n safe_repr,\n format_and_strip,\n transaction_from_function,\n walk_exception_chain,\n)\nfrom sentry_sdk.integrations import Integration\nfrom sentry_sdk.integrations.logging import ignore_logger\nfrom sentry_sdk.integrations.wsgi import SentryWsgiMiddleware\nfrom sentry_sdk.integrations._wsgi_common import RequestExtractor\nfrom sentry_sdk.integrations.django.transactions import LEGACY_RESOLVER\nfrom sentry_sdk.integrations.django.templates import get_template_frame_from_exception\n\n\nif DJANGO_VERSION < (1, 10):\n\n def is_authenticated(request_user):\n # type: (Any) -> bool\n return request_user.is_authenticated()\n\n\nelse:\n\n def is_authenticated(request_user):\n # type: (Any) -> bool\n return request_user.is_authenticated\n\n\nclass DjangoIntegration(Integration):\n identifier = \"django\"\n\n transaction_style = None\n\n def __init__(self, transaction_style=\"url\"):\n # type: (str) -> None\n TRANSACTION_STYLE_VALUES = (\"function_name\", \"url\")\n if transaction_style not in TRANSACTION_STYLE_VALUES:\n raise ValueError(\n \"Invalid value for transaction_style: %s (must be in %s)\"\n % (transaction_style, TRANSACTION_STYLE_VALUES)\n )\n self.transaction_style = transaction_style\n\n @staticmethod\n def setup_once():\n # type: () -> None\n install_sql_hook()\n # Patch in our custom middleware.\n\n # logs an error for every 500\n ignore_logger(\"django.server\")\n ignore_logger(\"django.request\")\n\n from django.core.handlers.wsgi import WSGIHandler\n\n old_app = WSGIHandler.__call__\n\n def sentry_patched_wsgi_handler(self, environ, start_response):\n # type: (Any, Dict[str, str], Callable) -> _ScopedResponse\n if 
Hub.current.get_integration(DjangoIntegration) is None:\n return old_app(self, environ, start_response)\n\n return SentryWsgiMiddleware(lambda *a, **kw: old_app(self, *a, **kw))(\n environ, start_response\n )\n\n WSGIHandler.__call__ = sentry_patched_wsgi_handler\n\n # patch get_response, because at that point we have the Django request\n # object\n from django.core.handlers.base import BaseHandler # type: ignore\n\n old_get_response = BaseHandler.get_response\n\n def sentry_patched_get_response(self, request):\n # type: (Any, WSGIRequest) -> Union[HttpResponse, BaseException]\n hub = Hub.current\n integration = hub.get_integration(DjangoIntegration)\n if integration is not None:\n with hub.configure_scope() as scope:\n scope.add_event_processor(\n _make_event_processor(weakref.ref(request), integration)\n )\n return old_get_response(self, request)\n\n BaseHandler.get_response = sentry_patched_get_response\n\n signals.got_request_exception.connect(_got_request_exception)\n\n @add_global_event_processor\n def process_django_templates(event, hint):\n # type: (Dict[str, Any], Dict[str, Any]) -> Dict[str, Any]\n exc_info = hint.get(\"exc_info\", None)\n\n if exc_info is None:\n return event\n\n exception = event.get(\"exception\", None)\n\n if exception is None:\n return event\n\n values = exception.get(\"values\", None)\n\n if values is None:\n return event\n\n for exception, (_, exc_value, _) in zip(\n values, walk_exception_chain(exc_info)\n ):\n frame = get_template_frame_from_exception(exc_value)\n if frame is not None:\n frames = exception.get(\"stacktrace\", {}).get(\"frames\", [])\n\n for i in reversed(range(len(frames))):\n f = frames[i]\n if (\n f.get(\"function\") in (\"parse\", \"render\")\n and f.get(\"module\") == \"django.template.base\"\n ):\n i += 1\n break\n else:\n i = len(frames)\n\n frames.insert(i, frame)\n\n return event\n\n @add_global_repr_processor\n def _django_queryset_repr(value, hint):\n if not isinstance(value, QuerySet) or value._result_cache:\n return NotImplemented\n\n # Do not call Hub.get_integration here. It is intentional that\n # running under a new hub does not suddenly start executing\n # querysets. This might be surprising to the user but it's likely\n # less annoying.\n\n return u\"<%s from %s at 0x%x>\" % (\n value.__class__.__name__,\n value.__module__,\n id(value),\n )\n\n\ndef _make_event_processor(weak_request, integration):\n # type: (Callable[[], WSGIRequest], DjangoIntegration) -> Callable\n def event_processor(event, hint):\n # type: (Dict[str, Any], Dict[str, Any]) -> Dict[str, Any]\n # if the request is gone we are fine not logging the data from\n # it. 
This might happen if the processor is pushed away to\n # another thread.\n request = weak_request()\n if request is None:\n return event\n\n try:\n if integration.transaction_style == \"function_name\":\n event[\"transaction\"] = transaction_from_function(\n resolve(request.path).func\n )\n elif integration.transaction_style == \"url\":\n event[\"transaction\"] = LEGACY_RESOLVER.resolve(request.path)\n except Exception:\n pass\n\n with capture_internal_exceptions():\n DjangoRequestExtractor(request).extract_into_event(event)\n\n if _should_send_default_pii():\n with capture_internal_exceptions():\n _set_user_info(request, event)\n\n return event\n\n return event_processor\n\n\ndef _got_request_exception(request=None, **kwargs):\n # type: (WSGIRequest, **Any) -> None\n hub = Hub.current\n integration = hub.get_integration(DjangoIntegration)\n if integration is not None:\n event, hint = event_from_exception(\n sys.exc_info(),\n client_options=hub.client.options,\n mechanism={\"type\": \"django\", \"handled\": False},\n )\n hub.capture_event(event, hint=hint)\n\n\nclass DjangoRequestExtractor(RequestExtractor):\n def env(self):\n # type: () -> Dict[str, str]\n return self.request.META\n\n def cookies(self):\n # type: () -> Dict[str, str]\n return self.request.COOKIES\n\n def raw_data(self):\n # type: () -> bytes\n return self.request.body\n\n def form(self):\n # type: () -> QueryDict\n return self.request.POST\n\n def files(self):\n # type: () -> MultiValueDict\n return self.request.FILES\n\n def size_of_file(self, file):\n return file.size\n\n\ndef _set_user_info(request, event):\n # type: (WSGIRequest, Dict[str, Any]) -> None\n user_info = event.setdefault(\"user\", {})\n\n user = getattr(request, \"user\", None)\n\n if user is None or not is_authenticated(user):\n return\n\n try:\n user_info[\"id\"] = str(user.pk)\n except Exception:\n pass\n\n try:\n user_info[\"email\"] = user.email\n except Exception:\n pass\n\n try:\n user_info[\"username\"] = user.get_username()\n except Exception:\n pass\n\n\nclass _FormatConverter(object):\n def __init__(self, param_mapping):\n # type: (Dict[str, int]) -> None\n\n self.param_mapping = param_mapping\n self.params = [] # type: List[Any]\n\n def __getitem__(self, val):\n # type: (str) -> str\n self.params.append(self.param_mapping.get(val))\n return \"%s\"\n\n\ndef format_sql(sql, params):\n # type: (Any, Any) -> Tuple[str, List[str]]\n rv = []\n\n if isinstance(params, dict):\n # convert sql with named parameters to sql with unnamed parameters\n conv = _FormatConverter(params)\n if params:\n sql = sql % conv\n params = conv.params\n else:\n params = ()\n\n for param in params or ():\n if param is None:\n rv.append(\"NULL\")\n param = safe_repr(param)\n rv.append(param)\n\n return sql, rv\n\n\ndef record_sql(sql, params, cursor=None):\n # type: (Any, Any, Any) -> None\n hub = Hub.current\n if hub.get_integration(DjangoIntegration) is None:\n return\n\n with capture_internal_exceptions():\n if cursor and hasattr(cursor, \"mogrify\"): # psycopg2\n real_sql = cursor.mogrify(sql, params)\n with capture_internal_exceptions():\n if isinstance(real_sql, bytes):\n real_sql = real_sql.decode(cursor.connection.encoding)\n else:\n real_sql, real_params = format_sql(sql, params)\n\n if real_params:\n try:\n real_sql = format_and_strip(real_sql, real_params)\n except Exception:\n pass\n hub.add_breadcrumb(message=real_sql, category=\"query\")\n\n\ndef install_sql_hook():\n # type: () -> None\n \"\"\"If installed this causes Django's queries to be captured.\"\"\"\n 
try:\n from django.db.backends.utils import CursorWrapper # type: ignore\n except ImportError:\n from django.db.backends.util import CursorWrapper # type: ignore\n\n try:\n real_execute = CursorWrapper.execute\n real_executemany = CursorWrapper.executemany\n except AttributeError:\n # This won't work on Django versions < 1.6\n return\n\n def record_many_sql(sql, param_list, cursor):\n for params in param_list:\n record_sql(sql, params, cursor)\n\n def execute(self, sql, params=None):\n try:\n return real_execute(self, sql, params)\n finally:\n record_sql(sql, params, self.cursor)\n\n def executemany(self, sql, param_list):\n try:\n return real_executemany(self, sql, param_list)\n finally:\n record_many_sql(sql, param_list, self.cursor)\n\n CursorWrapper.execute = execute\n CursorWrapper.executemany = executemany\n ignore_logger(\"django.db.backends\")\n", "path": "sentry_sdk/integrations/django/__init__.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import absolute_import\n\nimport sys\nimport weakref\n\nfrom django import VERSION as DJANGO_VERSION # type: ignore\nfrom django.db.models.query import QuerySet # type: ignore\nfrom django.core import signals # type: ignore\n\nif False:\n from typing import Any\n from typing import Dict\n from typing import Tuple\n from typing import Union\n from sentry_sdk.integrations.wsgi import _ScopedResponse\n from typing import Callable\n from django.core.handlers.wsgi import WSGIRequest # type: ignore\n from django.http.response import HttpResponse # type: ignore\n from django.http.request import QueryDict # type: ignore\n from django.utils.datastructures import MultiValueDict # type: ignore\n from typing import List\n\ntry:\n import psycopg2.sql # type: ignore\n\n def sql_to_string(sql):\n # type: (Any) -> str\n if isinstance(sql, psycopg2.sql.SQL):\n return sql.string\n return sql\n\n\nexcept ImportError:\n\n def sql_to_string(sql):\n # type: (Any) -> str\n return sql\n\n\ntry:\n from django.urls import resolve # type: ignore\nexcept ImportError:\n from django.core.urlresolvers import resolve # type: ignore\n\nfrom sentry_sdk import Hub\nfrom sentry_sdk.hub import _should_send_default_pii\nfrom sentry_sdk.scope import add_global_event_processor\nfrom sentry_sdk.utils import (\n add_global_repr_processor,\n capture_internal_exceptions,\n event_from_exception,\n safe_repr,\n format_and_strip,\n transaction_from_function,\n walk_exception_chain,\n)\nfrom sentry_sdk.integrations import Integration\nfrom sentry_sdk.integrations.logging import ignore_logger\nfrom sentry_sdk.integrations.wsgi import SentryWsgiMiddleware\nfrom sentry_sdk.integrations._wsgi_common import RequestExtractor\nfrom sentry_sdk.integrations.django.transactions import LEGACY_RESOLVER\nfrom sentry_sdk.integrations.django.templates import get_template_frame_from_exception\n\n\nif DJANGO_VERSION < (1, 10):\n\n def is_authenticated(request_user):\n # type: (Any) -> bool\n return request_user.is_authenticated()\n\n\nelse:\n\n def is_authenticated(request_user):\n # type: (Any) -> bool\n return request_user.is_authenticated\n\n\nclass DjangoIntegration(Integration):\n identifier = \"django\"\n\n transaction_style = None\n\n def __init__(self, transaction_style=\"url\"):\n # type: (str) -> None\n TRANSACTION_STYLE_VALUES = (\"function_name\", \"url\")\n if transaction_style not in TRANSACTION_STYLE_VALUES:\n raise ValueError(\n \"Invalid value for transaction_style: %s (must be in %s)\"\n % (transaction_style, TRANSACTION_STYLE_VALUES)\n )\n 
self.transaction_style = transaction_style\n\n @staticmethod\n def setup_once():\n # type: () -> None\n install_sql_hook()\n # Patch in our custom middleware.\n\n # logs an error for every 500\n ignore_logger(\"django.server\")\n ignore_logger(\"django.request\")\n\n from django.core.handlers.wsgi import WSGIHandler\n\n old_app = WSGIHandler.__call__\n\n def sentry_patched_wsgi_handler(self, environ, start_response):\n # type: (Any, Dict[str, str], Callable) -> _ScopedResponse\n if Hub.current.get_integration(DjangoIntegration) is None:\n return old_app(self, environ, start_response)\n\n return SentryWsgiMiddleware(lambda *a, **kw: old_app(self, *a, **kw))(\n environ, start_response\n )\n\n WSGIHandler.__call__ = sentry_patched_wsgi_handler\n\n # patch get_response, because at that point we have the Django request\n # object\n from django.core.handlers.base import BaseHandler # type: ignore\n\n old_get_response = BaseHandler.get_response\n\n def sentry_patched_get_response(self, request):\n # type: (Any, WSGIRequest) -> Union[HttpResponse, BaseException]\n hub = Hub.current\n integration = hub.get_integration(DjangoIntegration)\n if integration is not None:\n with hub.configure_scope() as scope:\n scope.add_event_processor(\n _make_event_processor(weakref.ref(request), integration)\n )\n return old_get_response(self, request)\n\n BaseHandler.get_response = sentry_patched_get_response\n\n signals.got_request_exception.connect(_got_request_exception)\n\n @add_global_event_processor\n def process_django_templates(event, hint):\n # type: (Dict[str, Any], Dict[str, Any]) -> Dict[str, Any]\n exc_info = hint.get(\"exc_info\", None)\n\n if exc_info is None:\n return event\n\n exception = event.get(\"exception\", None)\n\n if exception is None:\n return event\n\n values = exception.get(\"values\", None)\n\n if values is None:\n return event\n\n for exception, (_, exc_value, _) in zip(\n values, walk_exception_chain(exc_info)\n ):\n frame = get_template_frame_from_exception(exc_value)\n if frame is not None:\n frames = exception.get(\"stacktrace\", {}).get(\"frames\", [])\n\n for i in reversed(range(len(frames))):\n f = frames[i]\n if (\n f.get(\"function\") in (\"parse\", \"render\")\n and f.get(\"module\") == \"django.template.base\"\n ):\n i += 1\n break\n else:\n i = len(frames)\n\n frames.insert(i, frame)\n\n return event\n\n @add_global_repr_processor\n def _django_queryset_repr(value, hint):\n if not isinstance(value, QuerySet) or value._result_cache:\n return NotImplemented\n\n # Do not call Hub.get_integration here. It is intentional that\n # running under a new hub does not suddenly start executing\n # querysets. This might be surprising to the user but it's likely\n # less annoying.\n\n return u\"<%s from %s at 0x%x>\" % (\n value.__class__.__name__,\n value.__module__,\n id(value),\n )\n\n\ndef _make_event_processor(weak_request, integration):\n # type: (Callable[[], WSGIRequest], DjangoIntegration) -> Callable\n def event_processor(event, hint):\n # type: (Dict[str, Any], Dict[str, Any]) -> Dict[str, Any]\n # if the request is gone we are fine not logging the data from\n # it. 
This might happen if the processor is pushed away to\n # another thread.\n request = weak_request()\n if request is None:\n return event\n\n try:\n if integration.transaction_style == \"function_name\":\n event[\"transaction\"] = transaction_from_function(\n resolve(request.path).func\n )\n elif integration.transaction_style == \"url\":\n event[\"transaction\"] = LEGACY_RESOLVER.resolve(request.path)\n except Exception:\n pass\n\n with capture_internal_exceptions():\n DjangoRequestExtractor(request).extract_into_event(event)\n\n if _should_send_default_pii():\n with capture_internal_exceptions():\n _set_user_info(request, event)\n\n return event\n\n return event_processor\n\n\ndef _got_request_exception(request=None, **kwargs):\n # type: (WSGIRequest, **Any) -> None\n hub = Hub.current\n integration = hub.get_integration(DjangoIntegration)\n if integration is not None:\n event, hint = event_from_exception(\n sys.exc_info(),\n client_options=hub.client.options,\n mechanism={\"type\": \"django\", \"handled\": False},\n )\n hub.capture_event(event, hint=hint)\n\n\nclass DjangoRequestExtractor(RequestExtractor):\n def env(self):\n # type: () -> Dict[str, str]\n return self.request.META\n\n def cookies(self):\n # type: () -> Dict[str, str]\n return self.request.COOKIES\n\n def raw_data(self):\n # type: () -> bytes\n return self.request.body\n\n def form(self):\n # type: () -> QueryDict\n return self.request.POST\n\n def files(self):\n # type: () -> MultiValueDict\n return self.request.FILES\n\n def size_of_file(self, file):\n return file.size\n\n def parsed_body(self):\n try:\n return self.request.data\n except AttributeError:\n return RequestExtractor.parsed_body(self)\n\n\ndef _set_user_info(request, event):\n # type: (WSGIRequest, Dict[str, Any]) -> None\n user_info = event.setdefault(\"user\", {})\n\n user = getattr(request, \"user\", None)\n\n if user is None or not is_authenticated(user):\n return\n\n try:\n user_info[\"id\"] = str(user.pk)\n except Exception:\n pass\n\n try:\n user_info[\"email\"] = user.email\n except Exception:\n pass\n\n try:\n user_info[\"username\"] = user.get_username()\n except Exception:\n pass\n\n\nclass _FormatConverter(object):\n def __init__(self, param_mapping):\n # type: (Dict[str, int]) -> None\n\n self.param_mapping = param_mapping\n self.params = [] # type: List[Any]\n\n def __getitem__(self, val):\n # type: (str) -> str\n self.params.append(self.param_mapping.get(val))\n return \"%s\"\n\n\ndef format_sql(sql, params):\n # type: (Any, Any) -> Tuple[str, List[str]]\n rv = []\n\n if isinstance(params, dict):\n # convert sql with named parameters to sql with unnamed parameters\n conv = _FormatConverter(params)\n if params:\n sql = sql % conv\n params = conv.params\n else:\n params = ()\n\n for param in params or ():\n if param is None:\n rv.append(\"NULL\")\n param = safe_repr(param)\n rv.append(param)\n\n return sql, rv\n\n\ndef record_sql(sql, params, cursor=None):\n # type: (Any, Any, Any) -> None\n hub = Hub.current\n if hub.get_integration(DjangoIntegration) is None:\n return\n\n with capture_internal_exceptions():\n if cursor and hasattr(cursor, \"mogrify\"): # psycopg2\n real_sql = cursor.mogrify(sql, params)\n with capture_internal_exceptions():\n if isinstance(real_sql, bytes):\n real_sql = real_sql.decode(cursor.connection.encoding)\n else:\n real_sql, real_params = format_sql(sql, params)\n\n if real_params:\n try:\n real_sql = format_and_strip(real_sql, real_params)\n except Exception:\n pass\n hub.add_breadcrumb(message=real_sql, 
category=\"query\")\n\n\ndef install_sql_hook():\n # type: () -> None\n \"\"\"If installed this causes Django's queries to be captured.\"\"\"\n try:\n from django.db.backends.utils import CursorWrapper # type: ignore\n except ImportError:\n from django.db.backends.util import CursorWrapper # type: ignore\n\n try:\n real_execute = CursorWrapper.execute\n real_executemany = CursorWrapper.executemany\n except AttributeError:\n # This won't work on Django versions < 1.6\n return\n\n def record_many_sql(sql, param_list, cursor):\n for params in param_list:\n record_sql(sql, params, cursor)\n\n def execute(self, sql, params=None):\n try:\n return real_execute(self, sql, params)\n finally:\n record_sql(sql, params, self.cursor)\n\n def executemany(self, sql, param_list):\n try:\n return real_executemany(self, sql, param_list)\n finally:\n record_many_sql(sql, param_list, self.cursor)\n\n CursorWrapper.execute = execute\n CursorWrapper.executemany = executemany\n ignore_logger(\"django.db.backends\")\n", "path": "sentry_sdk/integrations/django/__init__.py"}]}
| 3,804 | 150 |
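
The golden diff in the record above teaches the Django request extractor to prefer Django-Rest-Framework's already-parsed `request.data` and to fall back to the generic body parsing only when that attribute is absent. The snippet below is a minimal standalone sketch of that fallback pattern, using plain stand-in classes instead of the real SDK, Django, or DRF types.

```python
# Illustrative only: mimics the try/except-AttributeError fallback from the
# golden diff above. The classes are stand-ins, not sentry-sdk's real ones.

class BaseExtractor:
    def __init__(self, request):
        self.request = request

    def parsed_body(self):
        # stand-in for RequestExtractor.parsed_body: parse the raw body itself
        return {"parsed-from": "raw body"}


class DjangoLikeExtractor(BaseExtractor):
    def parsed_body(self):
        try:
            return self.request.data      # present on a DRF-style request wrapper
        except AttributeError:
            return BaseExtractor.parsed_body(self)


class PlainRequest:                       # plain Django request: no `.data`
    pass


class DRFRequest:                         # DRF-style request: already-parsed payload
    data = {"user": "alice"}


print(DjangoLikeExtractor(PlainRequest()).parsed_body())  # {'parsed-from': 'raw body'}
print(DjangoLikeExtractor(DRFRequest()).parsed_body())    # {'user': 'alice'}
```
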
gh_patches_debug_4007
|
rasdani/github-patches
|
git_diff
|
translate__pootle-5706
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Stats table shows no zero counts
This can be seen in the following screenshot:

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pootle/core/views/display.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) Pootle contributors.
4 #
5 # This file is a part of the Pootle project. It is distributed under the GPL3
6 # or later license. See the LICENSE file for a copy of the license and the
7 # AUTHORS file for copyright and authorship information.
8
9 from django.utils.functional import cached_property
10 from django.utils.html import escape, format_html
11 from django.utils.safestring import mark_safe
12
13 from pootle.i18n import formatter
14 from pootle.i18n.gettext import ugettext as _
15 from pootle.local.dates import timesince
16 from pootle_misc.checks import get_qualitycheck_list
17
18
19 class ActionDisplay(object):
20
21 def __init__(self, action):
22 self.action = action
23
24 @property
25 def since(self):
26 return timesince(self.action["mtime"])
27
28 @property
29 def check_name(self):
30 return self.action.get("check_name")
31
32 @property
33 def checks_url(self):
34 return self.action.get("checks_url")
35
36 @property
37 def check_display_name(self):
38 return escape(self.action["check_display_name"])
39
40 @property
41 def display_name(self):
42 return escape(self.action["displayname"])
43
44 @property
45 def profile_url(self):
46 return self.action["profile_url"]
47
48 @property
49 def unit_url(self):
50 return self.action.get("unit_url")
51
52 @property
53 def unit_source(self):
54 return self.action.get("unit_source")
55
56 @property
57 def params(self):
58 params = dict(
59 user=self.formatted_user,
60 source=self.formatted_source)
61 if self.check_name:
62 params["check"] = format_html(
63 u"<a href='{}'>{}</a>",
64 self.checks_url,
65 self.check_display_name)
66 return params
67
68 @property
69 def formatted_user(self):
70 return format_html(
71 u"<a href='{}' class='user-name'>{}</a>",
72 self.profile_url,
73 self.display_name)
74
75 @property
76 def formatted_source(self):
77 return format_html(
78 u"<a href='{}'>{}</a>",
79 self.unit_url,
80 self.unit_source)
81
82 @property
83 def action_type(self):
84 return self.action["type"]
85
86 @property
87 def translation_action_type(self):
88 return self.action.get("translation_action_type")
89
90 @property
91 def message(self):
92 msg = ""
93 params = self.params
94 if (self.action_type == 2):
95 msg = _('%(user)s removed translation for %(source)s', params)
96 if (self.action_type == 3):
97 msg = _('%(user)s accepted suggestion for %(source)s', params)
98 if (self.action_type == 4):
99 msg = _('%(user)s uploaded file', params)
100 if (self.action_type == 6):
101 msg = _('%(user)s muted %(check)s for %(source)s', params)
102 if (self.action_type == 7):
103 msg = _('%(user)s unmuted %(check)s for %(source)s', params)
104 if (self.action_type == 8):
105 msg = _('%(user)s added suggestion for %(source)s', params)
106 if (self.action_type == 9):
107 msg = _('%(user)s rejected suggestion for %(source)s', params)
108 if (self.action_type in [1, 5]):
109 if self.translation_action_type == 0:
110 msg = _('%(user)s translated %(source)s', params)
111 if self.translation_action_type == 1:
112 msg = _('%(user)s edited %(source)s', params)
113 if self.translation_action_type == 2:
114 msg = _('%(user)s pre-translated %(source)s', params)
115 if self.translation_action_type == 3:
116 msg = _('%(user)s removed translation for %(source)s', params)
117 if self.translation_action_type == 4:
118 msg = _('%(user)s reviewed %(source)s', params)
119 if self.translation_action_type == 5:
120 msg = _('%(user)s marked as needs work %(source)s', params)
121 return mark_safe(msg)
122
123
124 class ChecksDisplay(object):
125
126 def __init__(self, context):
127 self.context = context
128
129 @property
130 def check_schema(self):
131 return get_qualitycheck_list(self.context)
132
133 @cached_property
134 def check_data(self):
135 return self.context.data_tool.get_checks()
136
137 @property
138 def checks_by_category(self):
139 _checks = []
140 for check in self.check_schema:
141 if check["code"] not in self.check_data:
142 continue
143 check["count"] = self.check_data[check["code"]]
144 check["count_display"] = formatter.number(check["count"])
145 _checks.append(check)
146 return _checks
147
148
149 class StatsDisplay(object):
150
151 def __init__(self, context, stats=None):
152 self.context = context
153 self._stats = stats
154
155 @staticmethod
156 def make_display_stat(d, keys=["total", "critical", "incomplete",
157 "suggestions", "fuzzy", "untranslated"]):
158 assert isinstance(d, dict)
159 for k in keys:
160 if d.get(k):
161 d[k + '_display'] = formatter.number(d[k])
162
163 @cached_property
164 def stat_data(self):
165 if self._stats is not None:
166 return self._stats
167 return self.context.data_tool.get_stats()
168
169 @cached_property
170 def stats(self):
171 stats = self.stat_data
172 self.add_children_info(stats)
173 self.make_display_stat(stats)
174 if stats.get("last_submission"):
175 stats["last_submission"]["msg"] = (
176 self.get_action_message(stats["last_submission"]))
177 return stats
178
179 def add_children_info(self, stats):
180 for k, child in stats["children"].items():
181 child["incomplete"] = child["total"] - child["translated"]
182 child["untranslated"] = child["total"] - child["translated"]
183 self.make_display_stat(child)
184
185 def get_action_message(self, action):
186 return ActionDisplay(action).message
187
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pootle/core/views/display.py b/pootle/core/views/display.py
--- a/pootle/core/views/display.py
+++ b/pootle/core/views/display.py
@@ -157,7 +157,7 @@
"suggestions", "fuzzy", "untranslated"]):
assert isinstance(d, dict)
for k in keys:
- if d.get(k):
+ if k in d:
d[k + '_display'] = formatter.number(d[k])
@cached_property
|
{"golden_diff": "diff --git a/pootle/core/views/display.py b/pootle/core/views/display.py\n--- a/pootle/core/views/display.py\n+++ b/pootle/core/views/display.py\n@@ -157,7 +157,7 @@\n \"suggestions\", \"fuzzy\", \"untranslated\"]):\n assert isinstance(d, dict)\n for k in keys:\n- if d.get(k):\n+ if k in d:\n d[k + '_display'] = formatter.number(d[k])\n \n @cached_property\n", "issue": "Stats table shows no zero counts\nThis can be seen in the following screenshot:\r\n\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom django.utils.functional import cached_property\nfrom django.utils.html import escape, format_html\nfrom django.utils.safestring import mark_safe\n\nfrom pootle.i18n import formatter\nfrom pootle.i18n.gettext import ugettext as _\nfrom pootle.local.dates import timesince\nfrom pootle_misc.checks import get_qualitycheck_list\n\n\nclass ActionDisplay(object):\n\n def __init__(self, action):\n self.action = action\n\n @property\n def since(self):\n return timesince(self.action[\"mtime\"])\n\n @property\n def check_name(self):\n return self.action.get(\"check_name\")\n\n @property\n def checks_url(self):\n return self.action.get(\"checks_url\")\n\n @property\n def check_display_name(self):\n return escape(self.action[\"check_display_name\"])\n\n @property\n def display_name(self):\n return escape(self.action[\"displayname\"])\n\n @property\n def profile_url(self):\n return self.action[\"profile_url\"]\n\n @property\n def unit_url(self):\n return self.action.get(\"unit_url\")\n\n @property\n def unit_source(self):\n return self.action.get(\"unit_source\")\n\n @property\n def params(self):\n params = dict(\n user=self.formatted_user,\n source=self.formatted_source)\n if self.check_name:\n params[\"check\"] = format_html(\n u\"<a href='{}'>{}</a>\",\n self.checks_url,\n self.check_display_name)\n return params\n\n @property\n def formatted_user(self):\n return format_html(\n u\"<a href='{}' class='user-name'>{}</a>\",\n self.profile_url,\n self.display_name)\n\n @property\n def formatted_source(self):\n return format_html(\n u\"<a href='{}'>{}</a>\",\n self.unit_url,\n self.unit_source)\n\n @property\n def action_type(self):\n return self.action[\"type\"]\n\n @property\n def translation_action_type(self):\n return self.action.get(\"translation_action_type\")\n\n @property\n def message(self):\n msg = \"\"\n params = self.params\n if (self.action_type == 2):\n msg = _('%(user)s removed translation for %(source)s', params)\n if (self.action_type == 3):\n msg = _('%(user)s accepted suggestion for %(source)s', params)\n if (self.action_type == 4):\n msg = _('%(user)s uploaded file', params)\n if (self.action_type == 6):\n msg = _('%(user)s muted %(check)s for %(source)s', params)\n if (self.action_type == 7):\n msg = _('%(user)s unmuted %(check)s for %(source)s', params)\n if (self.action_type == 8):\n msg = _('%(user)s added suggestion for %(source)s', params)\n if (self.action_type == 9):\n msg = _('%(user)s rejected suggestion for %(source)s', params)\n if (self.action_type in [1, 5]):\n if self.translation_action_type == 0:\n msg = _('%(user)s translated %(source)s', params)\n if self.translation_action_type == 1:\n msg = _('%(user)s edited %(source)s', params)\n if self.translation_action_type == 2:\n msg = 
_('%(user)s pre-translated %(source)s', params)\n if self.translation_action_type == 3:\n msg = _('%(user)s removed translation for %(source)s', params)\n if self.translation_action_type == 4:\n msg = _('%(user)s reviewed %(source)s', params)\n if self.translation_action_type == 5:\n msg = _('%(user)s marked as needs work %(source)s', params)\n return mark_safe(msg)\n\n\nclass ChecksDisplay(object):\n\n def __init__(self, context):\n self.context = context\n\n @property\n def check_schema(self):\n return get_qualitycheck_list(self.context)\n\n @cached_property\n def check_data(self):\n return self.context.data_tool.get_checks()\n\n @property\n def checks_by_category(self):\n _checks = []\n for check in self.check_schema:\n if check[\"code\"] not in self.check_data:\n continue\n check[\"count\"] = self.check_data[check[\"code\"]]\n check[\"count_display\"] = formatter.number(check[\"count\"])\n _checks.append(check)\n return _checks\n\n\nclass StatsDisplay(object):\n\n def __init__(self, context, stats=None):\n self.context = context\n self._stats = stats\n\n @staticmethod\n def make_display_stat(d, keys=[\"total\", \"critical\", \"incomplete\",\n \"suggestions\", \"fuzzy\", \"untranslated\"]):\n assert isinstance(d, dict)\n for k in keys:\n if d.get(k):\n d[k + '_display'] = formatter.number(d[k])\n\n @cached_property\n def stat_data(self):\n if self._stats is not None:\n return self._stats\n return self.context.data_tool.get_stats()\n\n @cached_property\n def stats(self):\n stats = self.stat_data\n self.add_children_info(stats)\n self.make_display_stat(stats)\n if stats.get(\"last_submission\"):\n stats[\"last_submission\"][\"msg\"] = (\n self.get_action_message(stats[\"last_submission\"]))\n return stats\n\n def add_children_info(self, stats):\n for k, child in stats[\"children\"].items():\n child[\"incomplete\"] = child[\"total\"] - child[\"translated\"]\n child[\"untranslated\"] = child[\"total\"] - child[\"translated\"]\n self.make_display_stat(child)\n\n def get_action_message(self, action):\n return ActionDisplay(action).message\n", "path": "pootle/core/views/display.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. 
See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom django.utils.functional import cached_property\nfrom django.utils.html import escape, format_html\nfrom django.utils.safestring import mark_safe\n\nfrom pootle.i18n import formatter\nfrom pootle.i18n.gettext import ugettext as _\nfrom pootle.local.dates import timesince\nfrom pootle_misc.checks import get_qualitycheck_list\n\n\nclass ActionDisplay(object):\n\n def __init__(self, action):\n self.action = action\n\n @property\n def since(self):\n return timesince(self.action[\"mtime\"])\n\n @property\n def check_name(self):\n return self.action.get(\"check_name\")\n\n @property\n def checks_url(self):\n return self.action.get(\"checks_url\")\n\n @property\n def check_display_name(self):\n return escape(self.action[\"check_display_name\"])\n\n @property\n def display_name(self):\n return escape(self.action[\"displayname\"])\n\n @property\n def profile_url(self):\n return self.action[\"profile_url\"]\n\n @property\n def unit_url(self):\n return self.action.get(\"unit_url\")\n\n @property\n def unit_source(self):\n return self.action.get(\"unit_source\")\n\n @property\n def params(self):\n params = dict(\n user=self.formatted_user,\n source=self.formatted_source)\n if self.check_name:\n params[\"check\"] = format_html(\n u\"<a href='{}'>{}</a>\",\n self.checks_url,\n self.check_display_name)\n return params\n\n @property\n def formatted_user(self):\n return format_html(\n u\"<a href='{}' class='user-name'>{}</a>\",\n self.profile_url,\n self.display_name)\n\n @property\n def formatted_source(self):\n return format_html(\n u\"<a href='{}'>{}</a>\",\n self.unit_url,\n self.unit_source)\n\n @property\n def action_type(self):\n return self.action[\"type\"]\n\n @property\n def translation_action_type(self):\n return self.action.get(\"translation_action_type\")\n\n @property\n def message(self):\n msg = \"\"\n params = self.params\n if (self.action_type == 2):\n msg = _('%(user)s removed translation for %(source)s', params)\n if (self.action_type == 3):\n msg = _('%(user)s accepted suggestion for %(source)s', params)\n if (self.action_type == 4):\n msg = _('%(user)s uploaded file', params)\n if (self.action_type == 6):\n msg = _('%(user)s muted %(check)s for %(source)s', params)\n if (self.action_type == 7):\n msg = _('%(user)s unmuted %(check)s for %(source)s', params)\n if (self.action_type == 8):\n msg = _('%(user)s added suggestion for %(source)s', params)\n if (self.action_type == 9):\n msg = _('%(user)s rejected suggestion for %(source)s', params)\n if (self.action_type in [1, 5]):\n if self.translation_action_type == 0:\n msg = _('%(user)s translated %(source)s', params)\n if self.translation_action_type == 1:\n msg = _('%(user)s edited %(source)s', params)\n if self.translation_action_type == 2:\n msg = _('%(user)s pre-translated %(source)s', params)\n if self.translation_action_type == 3:\n msg = _('%(user)s removed translation for %(source)s', params)\n if self.translation_action_type == 4:\n msg = _('%(user)s reviewed %(source)s', params)\n if self.translation_action_type == 5:\n msg = _('%(user)s marked as needs work %(source)s', params)\n return mark_safe(msg)\n\n\nclass ChecksDisplay(object):\n\n def __init__(self, context):\n self.context = context\n\n @property\n def check_schema(self):\n return get_qualitycheck_list(self.context)\n\n @cached_property\n def check_data(self):\n return self.context.data_tool.get_checks()\n\n @property\n def checks_by_category(self):\n 
_checks = []\n for check in self.check_schema:\n if check[\"code\"] not in self.check_data:\n continue\n check[\"count\"] = self.check_data[check[\"code\"]]\n check[\"count_display\"] = formatter.number(check[\"count\"])\n _checks.append(check)\n return _checks\n\n\nclass StatsDisplay(object):\n\n def __init__(self, context, stats=None):\n self.context = context\n self._stats = stats\n\n @staticmethod\n def make_display_stat(d, keys=[\"total\", \"critical\", \"incomplete\",\n \"suggestions\", \"fuzzy\", \"untranslated\"]):\n assert isinstance(d, dict)\n for k in keys:\n if k in d:\n d[k + '_display'] = formatter.number(d[k])\n\n @cached_property\n def stat_data(self):\n if self._stats is not None:\n return self._stats\n return self.context.data_tool.get_stats()\n\n @cached_property\n def stats(self):\n stats = self.stat_data\n self.add_children_info(stats)\n self.make_display_stat(stats)\n if stats.get(\"last_submission\"):\n stats[\"last_submission\"][\"msg\"] = (\n self.get_action_message(stats[\"last_submission\"]))\n return stats\n\n def add_children_info(self, stats):\n for k, child in stats[\"children\"].items():\n child[\"incomplete\"] = child[\"total\"] - child[\"translated\"]\n child[\"untranslated\"] = child[\"total\"] - child[\"translated\"]\n self.make_display_stat(child)\n\n def get_action_message(self, action):\n return ActionDisplay(action).message\n", "path": "pootle/core/views/display.py"}]}
| 2,125 | 114 |
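
The one-line pootle fix above hinges on a common Python pitfall: `d.get(k)` is falsy when the stored count is `0`, so zero counts never received a `*_display` entry and the stats table rendered them as blank. A presence check (`k in d`) formats zeros too. The sketch below reproduces both behaviours with a plain `str()` in place of `formatter.number`, purely for illustration.

```python
# Why the patch replaces `if d.get(k)` with `if k in d`: zero is falsy.

def make_display_stat_old(d, keys=("total", "fuzzy", "untranslated")):
    for k in keys:
        if d.get(k):            # 0 is falsy -> key silently skipped
            d[k + "_display"] = str(d[k])


def make_display_stat_new(d, keys=("total", "fuzzy", "untranslated")):
    for k in keys:
        if k in d:              # presence check -> zeros are formatted too
            d[k + "_display"] = str(d[k])


old = {"total": 10, "fuzzy": 0}
new = dict(old)
make_display_stat_old(old)
make_display_stat_new(new)
print(old)  # {'total': 10, 'fuzzy': 0, 'total_display': '10'}
print(new)  # {'total': 10, 'fuzzy': 0, 'total_display': '10', 'fuzzy_display': '0'}
```
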
gh_patches_debug_17543
|
rasdani/github-patches
|
git_diff
|
coreruleset__coreruleset-3002
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Move data files from util/regexp-assemble directory to the top level
### Description
Data files used to generate regular expressions have been somehow in a difficult-to-find place, dependent on the tool.
Now with the new crs-toolchain, this is not needed anymore.
So let's move the data files to the top level directory.
### Requirements
- move all data files to the top level dir
- review dependencies and check that all references are updated
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `util/regexp-assemble/lib/context.py`
Content:
```
1 import argparse
2 from pathlib import Path
3 import logging
4
5
6
7 class Context(object):
8 def __init__(self, root_directory: Path, namespace: argparse.Namespace=None):
9 self.root_directory = root_directory
10 self.rules_directory = self.root_directory / "rules"
11 self.util_directory = self.root_directory / "util"
12 self.regexp_assemble_directory = self.util_directory / "regexp-assemble"
13 self.data_files_directory = self.regexp_assemble_directory / "data"
14 self.include_files_directory = self.regexp_assemble_directory / "data" / "include"
15 self.regexp_assemble_pl_path = self.regexp_assemble_directory / "lib" / "regexp-assemble.pl"
16 self.single_rule_id = namespace.rule_id if namespace else None
17 self.single_chain_offset = None
18 if namespace and "chain_offset" in namespace:
19 self.single_chain_offset = namespace.chain_offset
20
21 self._dump_to_debug_log()
22
23 assert (
24 self.rules_directory.exists()
25 and self.util_directory.exists()
26 and self.regexp_assemble_directory.exists()
27 and self.data_files_directory.exists()
28 and self.include_files_directory.exists()
29 )
30
31
32 def _dump_to_debug_log(self):
33 logger = logging.getLogger()
34 logger.debug("Root directory: %s", self.root_directory)
35 logger.debug("Rules directory: %s", self.rules_directory)
36 logger.debug("Data files directory: %s", self.data_files_directory)
37 logger.debug("Include files directory: %s", self.include_files_directory)
38 logger.debug("Parsed rule ID: %s", self.single_rule_id)
39 logger.debug("Parsed chain offset: %s", self.single_chain_offset)
40
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/util/regexp-assemble/lib/context.py b/util/regexp-assemble/lib/context.py
--- a/util/regexp-assemble/lib/context.py
+++ b/util/regexp-assemble/lib/context.py
@@ -10,8 +10,8 @@
self.rules_directory = self.root_directory / "rules"
self.util_directory = self.root_directory / "util"
self.regexp_assemble_directory = self.util_directory / "regexp-assemble"
- self.data_files_directory = self.regexp_assemble_directory / "data"
- self.include_files_directory = self.regexp_assemble_directory / "data" / "include"
+ self.data_files_directory = self.root_directory / "data"
+ self.include_files_directory = self.root_directory / "data" / "include"
self.regexp_assemble_pl_path = self.regexp_assemble_directory / "lib" / "regexp-assemble.pl"
self.single_rule_id = namespace.rule_id if namespace else None
self.single_chain_offset = None
|
{"golden_diff": "diff --git a/util/regexp-assemble/lib/context.py b/util/regexp-assemble/lib/context.py\n--- a/util/regexp-assemble/lib/context.py\n+++ b/util/regexp-assemble/lib/context.py\n@@ -10,8 +10,8 @@\n self.rules_directory = self.root_directory / \"rules\"\n self.util_directory = self.root_directory / \"util\"\n self.regexp_assemble_directory = self.util_directory / \"regexp-assemble\"\n- self.data_files_directory = self.regexp_assemble_directory / \"data\"\n- self.include_files_directory = self.regexp_assemble_directory / \"data\" / \"include\"\n+ self.data_files_directory = self.root_directory / \"data\"\n+ self.include_files_directory = self.root_directory / \"data\" / \"include\"\n self.regexp_assemble_pl_path = self.regexp_assemble_directory / \"lib\" / \"regexp-assemble.pl\"\n self.single_rule_id = namespace.rule_id if namespace else None\n self.single_chain_offset = None\n", "issue": "Move data files from util/regexp-assemble directory to the top level\n### Description\r\n\r\nData files used to generate regular expressions have been somehow in a difficult-to-find place, dependent on the tool.\r\n\r\nNow with the new crs-toolchain, this is not needed anymore.\r\n\r\nSo let's move the data files to the top level directory.\r\n\r\n### Requirements\r\n\r\n- move all data files to the top level dir\r\n- review dependencies and check that all references are updated\n", "before_files": [{"content": "import argparse\nfrom pathlib import Path\nimport logging\n\n\n\nclass Context(object):\n def __init__(self, root_directory: Path, namespace: argparse.Namespace=None):\n self.root_directory = root_directory\n self.rules_directory = self.root_directory / \"rules\"\n self.util_directory = self.root_directory / \"util\"\n self.regexp_assemble_directory = self.util_directory / \"regexp-assemble\"\n self.data_files_directory = self.regexp_assemble_directory / \"data\"\n self.include_files_directory = self.regexp_assemble_directory / \"data\" / \"include\"\n self.regexp_assemble_pl_path = self.regexp_assemble_directory / \"lib\" / \"regexp-assemble.pl\"\n self.single_rule_id = namespace.rule_id if namespace else None\n self.single_chain_offset = None\n if namespace and \"chain_offset\" in namespace:\n self.single_chain_offset = namespace.chain_offset\n\n self._dump_to_debug_log()\n\n assert (\n self.rules_directory.exists()\n and self.util_directory.exists()\n and self.regexp_assemble_directory.exists()\n and self.data_files_directory.exists()\n and self.include_files_directory.exists()\n )\n\n\n def _dump_to_debug_log(self):\n logger = logging.getLogger()\n logger.debug(\"Root directory: %s\", self.root_directory)\n logger.debug(\"Rules directory: %s\", self.rules_directory)\n logger.debug(\"Data files directory: %s\", self.data_files_directory)\n logger.debug(\"Include files directory: %s\", self.include_files_directory)\n logger.debug(\"Parsed rule ID: %s\", self.single_rule_id)\n logger.debug(\"Parsed chain offset: %s\", self.single_chain_offset)\n", "path": "util/regexp-assemble/lib/context.py"}], "after_files": [{"content": "import argparse\nfrom pathlib import Path\nimport logging\n\n\n\nclass Context(object):\n def __init__(self, root_directory: Path, namespace: argparse.Namespace=None):\n self.root_directory = root_directory\n self.rules_directory = self.root_directory / \"rules\"\n self.util_directory = self.root_directory / \"util\"\n self.regexp_assemble_directory = self.util_directory / \"regexp-assemble\"\n self.data_files_directory = self.root_directory / \"data\"\n 
self.include_files_directory = self.root_directory / \"data\" / \"include\"\n self.regexp_assemble_pl_path = self.regexp_assemble_directory / \"lib\" / \"regexp-assemble.pl\"\n self.single_rule_id = namespace.rule_id if namespace else None\n self.single_chain_offset = None\n if namespace and \"chain_offset\" in namespace:\n self.single_chain_offset = namespace.chain_offset\n\n self._dump_to_debug_log()\n\n assert (\n self.rules_directory.exists()\n and self.util_directory.exists()\n and self.regexp_assemble_directory.exists()\n and self.data_files_directory.exists()\n and self.include_files_directory.exists()\n )\n\n\n def _dump_to_debug_log(self):\n logger = logging.getLogger()\n logger.debug(\"Root directory: %s\", self.root_directory)\n logger.debug(\"Rules directory: %s\", self.rules_directory)\n logger.debug(\"Data files directory: %s\", self.data_files_directory)\n logger.debug(\"Include files directory: %s\", self.include_files_directory)\n logger.debug(\"Parsed rule ID: %s\", self.single_rule_id)\n logger.debug(\"Parsed chain offset: %s\", self.single_chain_offset)\n", "path": "util/regexp-assemble/lib/context.py"}]}
| 779 | 215 |
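
The coreruleset patch above only changes where two directories are resolved: `data/` and `data/include/` now hang off the repository root instead of `util/regexp-assemble/`. A quick `pathlib` sketch of the before/after layout follows; the root path used here is hypothetical, not taken from a real checkout.

```python
# Illustration of the directory move in the golden diff above.
from pathlib import Path

root = Path("/src/coreruleset")           # hypothetical repository root

old_data_dir = root / "util" / "regexp-assemble" / "data"
new_data_dir = root / "data"

print(old_data_dir)               # /src/coreruleset/util/regexp-assemble/data
print(new_data_dir)               # /src/coreruleset/data
print(new_data_dir / "include")   # /src/coreruleset/data/include
```
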
gh_patches_debug_4420
|
rasdani/github-patches
|
git_diff
|
ephios-dev__ephios-220
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
List of own upcoming shifts
As a user, I want to see a list of shifts that I have been confirmed for on the main page.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ephios/event_management/templatetags/event_extras.py`
Content:
```
1 from django import template
2 from django.utils.safestring import mark_safe
3
4 from ephios.event_management.models import AbstractParticipation
5
6 register = template.Library()
7
8
9 @register.filter(name="shift_status")
10 def shift_status(shift, user):
11 participation = user.as_participant().participation_for(shift)
12 if participation is not None:
13 color = {
14 AbstractParticipation.States.USER_DECLINED: "text-danger",
15 AbstractParticipation.States.RESPONSIBLE_REJECTED: "text-danger",
16 AbstractParticipation.States.REQUESTED: "text-warning",
17 AbstractParticipation.States.CONFIRMED: "text-success",
18 }[participation.state]
19 return mark_safe(f'<span class="{color}">{participation.get_state_display()}</span><br>')
20 return ""
21
22
23 @register.filter(name="can_sign_up")
24 def can_sign_up(shift, user):
25 return shift.signup_method.can_sign_up(user.as_participant())
26
27
28 @register.filter(name="render_shift_state")
29 def render_shift_state(shift, request):
30 return shift.signup_method.render_shift_state(request)
31
32
33 @register.filter(name="signup_errors")
34 def signup_errors(shift, user):
35 return shift.signup_method.get_signup_errors(user.as_participant())
36
37
38 @register.filter(name="can_decline")
39 def can_decline(shift, user):
40 return shift.signup_method.can_decline(user.as_participant())
41
42
43 @register.filter(name="decline_errors")
44 def decline_errors(shift, user):
45 return shift.signup_method.get_decline_errors(user.as_participant())
46
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ephios/event_management/templatetags/event_extras.py b/ephios/event_management/templatetags/event_extras.py
--- a/ephios/event_management/templatetags/event_extras.py
+++ b/ephios/event_management/templatetags/event_extras.py
@@ -43,3 +43,10 @@
@register.filter(name="decline_errors")
def decline_errors(shift, user):
return shift.signup_method.get_decline_errors(user.as_participant())
+
+
[email protected](name="confirmed_shifts")
+def confirmed_shifts(user):
+ return user.get_shifts(
+ with_participation_state_in=[AbstractParticipation.States.CONFIRMED]
+ ).order_by("start_time")
|
{"golden_diff": "diff --git a/ephios/event_management/templatetags/event_extras.py b/ephios/event_management/templatetags/event_extras.py\n--- a/ephios/event_management/templatetags/event_extras.py\n+++ b/ephios/event_management/templatetags/event_extras.py\n@@ -43,3 +43,10 @@\n @register.filter(name=\"decline_errors\")\n def decline_errors(shift, user):\n return shift.signup_method.get_decline_errors(user.as_participant())\n+\n+\[email protected](name=\"confirmed_shifts\")\n+def confirmed_shifts(user):\n+ return user.get_shifts(\n+ with_participation_state_in=[AbstractParticipation.States.CONFIRMED]\n+ ).order_by(\"start_time\")\n", "issue": "List of own upcoming shifts\nAs a user, I want to see a list of shifts that I have been confirmed for on the main page.\n", "before_files": [{"content": "from django import template\nfrom django.utils.safestring import mark_safe\n\nfrom ephios.event_management.models import AbstractParticipation\n\nregister = template.Library()\n\n\[email protected](name=\"shift_status\")\ndef shift_status(shift, user):\n participation = user.as_participant().participation_for(shift)\n if participation is not None:\n color = {\n AbstractParticipation.States.USER_DECLINED: \"text-danger\",\n AbstractParticipation.States.RESPONSIBLE_REJECTED: \"text-danger\",\n AbstractParticipation.States.REQUESTED: \"text-warning\",\n AbstractParticipation.States.CONFIRMED: \"text-success\",\n }[participation.state]\n return mark_safe(f'<span class=\"{color}\">{participation.get_state_display()}</span><br>')\n return \"\"\n\n\[email protected](name=\"can_sign_up\")\ndef can_sign_up(shift, user):\n return shift.signup_method.can_sign_up(user.as_participant())\n\n\[email protected](name=\"render_shift_state\")\ndef render_shift_state(shift, request):\n return shift.signup_method.render_shift_state(request)\n\n\[email protected](name=\"signup_errors\")\ndef signup_errors(shift, user):\n return shift.signup_method.get_signup_errors(user.as_participant())\n\n\[email protected](name=\"can_decline\")\ndef can_decline(shift, user):\n return shift.signup_method.can_decline(user.as_participant())\n\n\[email protected](name=\"decline_errors\")\ndef decline_errors(shift, user):\n return shift.signup_method.get_decline_errors(user.as_participant())\n", "path": "ephios/event_management/templatetags/event_extras.py"}], "after_files": [{"content": "from django import template\nfrom django.utils.safestring import mark_safe\n\nfrom ephios.event_management.models import AbstractParticipation\n\nregister = template.Library()\n\n\[email protected](name=\"shift_status\")\ndef shift_status(shift, user):\n participation = user.as_participant().participation_for(shift)\n if participation is not None:\n color = {\n AbstractParticipation.States.USER_DECLINED: \"text-danger\",\n AbstractParticipation.States.RESPONSIBLE_REJECTED: \"text-danger\",\n AbstractParticipation.States.REQUESTED: \"text-warning\",\n AbstractParticipation.States.CONFIRMED: \"text-success\",\n }[participation.state]\n return mark_safe(f'<span class=\"{color}\">{participation.get_state_display()}</span><br>')\n return \"\"\n\n\[email protected](name=\"can_sign_up\")\ndef can_sign_up(shift, user):\n return shift.signup_method.can_sign_up(user.as_participant())\n\n\[email protected](name=\"render_shift_state\")\ndef render_shift_state(shift, request):\n return shift.signup_method.render_shift_state(request)\n\n\[email protected](name=\"signup_errors\")\ndef signup_errors(shift, user):\n return 
shift.signup_method.get_signup_errors(user.as_participant())\n\n\[email protected](name=\"can_decline\")\ndef can_decline(shift, user):\n return shift.signup_method.can_decline(user.as_participant())\n\n\[email protected](name=\"decline_errors\")\ndef decline_errors(shift, user):\n return shift.signup_method.get_decline_errors(user.as_participant())\n\n\[email protected](name=\"confirmed_shifts\")\ndef confirmed_shifts(user):\n return user.get_shifts(\n with_participation_state_in=[AbstractParticipation.States.CONFIRMED]\n ).order_by(\"start_time\")\n", "path": "ephios/event_management/templatetags/event_extras.py"}]}
| 700 | 165 |
gh_patches_debug_38844
|
rasdani/github-patches
|
git_diff
|
carpentries__amy-521
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
New blurred database for development
Update the `db.sql` file.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/anonymizer.py`
Content:
```
1 import sys
2 from datetime import date, timedelta
3 import random
4 import shutil
5 import sqlite3
6
7 #------------------------------------------------------------
8
9 ALPHA = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789_'
10
11 LOREM_IPSUM = [
12 '''Lorem ipsum dolor sit amet, consectetur adipiscing elit. Donec a
13 diam lectus. Sed sit amet ipsum mauris. Maecenas congue ligula ac quam
14 viverra nec consectetur ante hendrerit. Donec et mollis
15 dolor. Praesent et diam eget libero egestas mattis sit amet vitae
16 augue. Nam tincidunt congue enim, ut porta lorem lacinia
17 consectetur. Donec ut libero sed arcu vehicula ultricies a non
18 tortor. Lorem ipsum dolor sit amet, consectetur adipiscing
19 elit. Aenean ut gravida lorem. Ut turpis felis, pulvinar a semper sed,
20 adipiscing id dolor. Pellentesque auctor nisi id magna consequat
21 sagittis. Curabitur dapibus enim sit amet elit pharetra tincidunt
22 feugiat nisl imperdiet. Ut convallis libero in urna ultrices
23 accumsan. Donec sed odio eros. Donec viverra mi quis quam pulvinar at
24 malesuada arcu rhoncus. Cum sociis natoque penatibus et magnis dis
25 parturient montes, nascetur ridiculus mus. In rutrum accumsan
26 ultricies. Mauris vitae nisi at sem facilisis semper ac in est.'''
27 ,
28 '''Vivamus fermentum semper porta. Nunc diam velit, adipiscing ut
29 tristique vitae, sagittis vel odio. Maecenas convallis ullamcorper
30 ultricies. Curabitur ornare, ligula semper consectetur sagittis, nisi
31 diam iaculis velit, id fringilla sem nunc vel mi. Nam dictum, odio nec
32 pretium volutpat, arcu ante placerat erat, non tristique elit urna et
33 turpis. Quisque mi metus, ornare sit amet fermentum et, tincidunt et
34 orci. Fusce eget orci a orci congue vestibulum. Ut dolor diam,
35 elementum et vestibulum eu, porttitor vel elit. Curabitur venenatis
36 pulvinar tellus gravida ornare. Sed et erat faucibus nunc euismod
37 ultricies ut id justo. Nullam cursus suscipit nisi, et ultrices justo
38 sodales nec. Fusce venenatis facilisis lectus ac semper. Aliquam at
39 massa ipsum. Quisque bibendum purus convallis nulla ultrices
40 ultricies. Nullam aliquam, mi eu aliquam tincidunt, purus velit
41 laoreet tortor, viverra pretium nisi quam vitae mi. Fusce vel volutpat
42 elit. Nam sagittis nisi dui.'''
43 ,
44 '''Suspendisse lectus leo, consectetur in tempor sit amet, placerat quis
45 neque. Etiam luctus porttitor lorem, sed suscipit est rutrum
46 non. Curabitur lobortis nisl a enim congue semper. Aenean commodo
47 ultrices imperdiet. Vestibulum ut justo vel sapien venenatis
48 tincidunt. Phasellus eget dolor sit amet ipsum dapibus condimentum
49 vitae quis lectus. Aliquam ut massa in turpis dapibus
50 convallis. Praesent elit lacus, vestibulum at malesuada et, ornare et
51 est. Ut augue nunc, sodales ut euismod non, adipiscing vitae
52 orci. Mauris ut placerat justo. Mauris in ultricies enim. Quisque nec
53 est eleifend nulla ultrices egestas quis ut quam. Donec sollicitudin
54 lectus a mauris pulvinar id aliquam urna cursus. Cras quis ligula sem,
55 vel elementum mi. Phasellus non ullamcorper urna.'''
56 ,
57 '''Class aptent taciti sociosqu ad litora torquent per conubia nostra,
58 per inceptos himenaeos. In euismod ultrices facilisis. Vestibulum
59 porta sapien adipiscing augue congue id pretium lectus molestie. Proin
60 quis dictum nisl. Morbi id quam sapien, sed vestibulum sem. Duis
61 elementum rutrum mauris sed convallis. Proin vestibulum magna
62 mi. Aenean tristique hendrerit magna, ac facilisis nulla hendrerit
63 ut. Sed non tortor sodales quam auctor elementum. Donec hendrerit nunc
64 eget elit pharetra pulvinar. Suspendisse id tempus tortor. Aenean
65 luctus, elit commodo laoreet commodo, justo nisi consequat massa, sed
66 vulputate quam urna quis eros. Donec vel.''']
67
68 #------------------------------------------------------------
69
70 def get_one(cursor, statement, *params):
71 cursor.execute(statement, params)
72 results = cursor.fetchall()
73 if len(results) == 0:
74 return None
75 return results[0][0]
76
77 RANDWORD_SEEN = set()
78
79 def randword(low, high):
80 while True:
81 r = ''.join([random.choice(ALPHA) for i in range(random.randrange(low, high))])
82 if r not in RANDWORD_SEEN:
83 RANDWORD_SEEN.add(r)
84 return r
85
86 def change(cursor, table, field, func, *args):
87 lower = get_one(cursor, 'select min(id) from {0};'.format(table))
88 upper = get_one(cursor, 'select max(id) from {0};'.format(table))
89 assert (lower is not None) and (upper is not None), \
90 'No lower/upper bounds for {0}.{1}'.format(table, field)
91
92 if isinstance(field, str):
93 stmt = 'update {0} set {1}=? where id=?;'.format(table, field)
94 elif isinstance(field, tuple):
95 filler = ', '.join(['{0}=?'.format(f) for f in field])
96 stmt = 'update {0} set {1} where id=?;'.format(table, filler)
97 else:
98 assert False, 'Unknown field type "{0}" for "{1}"'.format(type(field), field)
99
100 for i in range(lower, upper+1):
101 vals = func(cursor, i, *args) + (i, )
102 try:
103 cursor.execute(stmt, vals)
104 except sqlite3.OperationalError as e:
105 print('FAILED (operational error):', stmt, vals, e)
106 except sqlite3.IntegrityError as e:
107 print('FAILED (integrity error):', stmt, vals, e)
108
109 def tuplify(func):
110 def f(*args, **kwargs):
111 result = func(*args, **kwargs)
112 return (result,)
113 return f
114
115 #------------------------------------------------------------
116
117 def dates(cursor, i):
118 '''Generate start and end dates for workshop.'''
119 start = date(2012, 1, 1) + timedelta(random.randrange(4 * 365))
120 end = start + timedelta(random.randrange(4))
121 if end == start:
122 end = None
123 return (start, end)
124
125 @tuplify
126 def event_reg_key(cursor, i):
127 '''Generate random event registration key.'''
128 return str(1000000 + i)
129
130 @tuplify
131 def event_slug(cursor, i):
132 '''Generate event slugs once start/end dates and site names are set.'''
133 start = get_one(cursor, 'select start from workshops_event where id=?;', i)
134 if start is None:
135 return
136 year, month, day = start.split('-')
137 return '{0}-{1}-{2}-{3}'.format(year, month, day, randword(3, 8))
138
139 @tuplify
140 def url(cursor, i):
141 '''Generate something that looks like a URL.'''
142 _url = get_one(cursor, 'select url from workshops_event where id=?;', i)
143 if not _url:
144 return
145 return 'http://{0}.{1}/{2}-{3}'.format(*[randword(2, 10) for x in range(4)])
146
147 @tuplify
148 def lorem_ipsum(cursor, i):
149 '''Fill in a large text field.'''
150 result = '\n'.join(LOREM_IPSUM[0:random.randrange(len(LOREM_IPSUM))])
151 return result
152
153 @tuplify
154 def monicker(cursor, i):
155 '''Generate a username-style field.'''
156 return randword(2, 10)
157
158 @tuplify
159 def multi_word(cursor, i, prob_multi, prob_null=0.0):
160 '''Fill in a multi-word field (e.g., site name or person's name).'''
161 if random.uniform(0.0, 1.0) < prob_null:
162 return None
163 elif random.uniform(0.0, 1.0) < prob_multi:
164 return '{0} {1}'.format(randword(2, 10), randword(2, 12))
165 else:
166 return randword(2, 10)
167
168 @tuplify
169 def domain(cursor, i):
170 '''Fill in site.domain.'''
171 fields = []
172 for x in range(2, random.randrange(4, 5)):
173 fields.append(randword(2, 10))
174 return '.'.join(fields)
175
176 @tuplify
177 def gender(cursor, i):
178 return random.choice('FMO')
179
180 @tuplify
181 def email(cursor, i):
182 if random.uniform(0.0, 1.0) < 0.05:
183 return None
184 return '{0}@{1}.{2}'.format(*[randword(2, 8) for x in range(3)])
185
186
187 @tuplify
188 def empty_string(cursor, i):
189 return ''
190
191 #------------------------------------------------------------
192
193 def main():
194 assert len(sys.argv) == 4, 'Usage: {0} seed /path/to/source/db /path/to/destination/db'.format(sys.argv[0])
195 assert sys.argv[2] != sys.argv[3], 'Source and destination must be different database'
196
197 seed = int(sys.argv[1])
198 if seed == 0:
199 seed = None
200 db_src = sys.argv[2]
201 db_dst = sys.argv[3]
202
203 random.seed(seed)
204 shutil.copyfile(db_src, db_dst)
205 cnx = sqlite3.connect(db_dst)
206 cur = cnx.cursor()
207
208 change(cur, 'workshops_site', 'domain', domain)
209 change(cur, 'workshops_site', 'fullname', multi_word, 1.0)
210 change(cur, 'workshops_site', 'notes', lorem_ipsum)
211
212 change(cur, 'workshops_person', 'personal', multi_word, 0.1)
213 change(cur, 'workshops_person', 'middle', multi_word, 0.0, 0.9)
214 change(cur, 'workshops_person', 'family', multi_word, 0.1)
215 change(cur, 'workshops_person', 'gender', gender)
216 change(cur, 'workshops_person', 'email', email)
217 change(cur, 'workshops_person', 'github', monicker)
218 change(cur, 'workshops_person', 'twitter', monicker)
219 change(cur, 'workshops_person', 'url', url)
220 change(cur, 'workshops_person', 'username', monicker)
221 change(cur, 'workshops_person', 'password', empty_string)
222
223 change(cur, 'workshops_event', ('start', 'end'), dates)
224 change(cur, 'workshops_event', 'slug', event_slug)
225 change(cur, 'workshops_event', 'url', url)
226 change(cur, 'workshops_event', 'reg_key', event_reg_key)
227 change(cur, 'workshops_event', 'notes', lorem_ipsum)
228
229 # we can't store historical changes!
230 cur.execute('delete from reversion_version;')
231 cur.execute('delete from reversion_revision;')
232
233 cnx.commit()
234 cur.close()
235 cnx.close()
236
237 if __name__ == '__main__':
238 main()
239 # we need to populate reversion_* tables so that no-one needs to do that
240 # upon every `make database` call
241 print("REMEMBER! to run `./manage.py createinitialrevisions` on the new database NOW")
242
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/scripts/anonymizer.py b/scripts/anonymizer.py
--- a/scripts/anonymizer.py
+++ b/scripts/anonymizer.py
@@ -147,6 +147,8 @@
@tuplify
def lorem_ipsum(cursor, i):
'''Fill in a large text field.'''
+ if random.uniform(0.0, 1.0) < 0.05:
+ return ''
result = '\n'.join(LOREM_IPSUM[0:random.randrange(len(LOREM_IPSUM))])
return result
@@ -188,6 +190,17 @@
def empty_string(cursor, i):
return ''
+
+@tuplify
+def rand_latitude(cursor, i):
+ return random.uniform(-90, 90)
+
+
+@tuplify
+def rand_longitude(cursor, i):
+ return random.uniform(0, 180)
+
+
#------------------------------------------------------------
def main():
@@ -205,9 +218,9 @@
cnx = sqlite3.connect(db_dst)
cur = cnx.cursor()
- change(cur, 'workshops_site', 'domain', domain)
- change(cur, 'workshops_site', 'fullname', multi_word, 1.0)
- change(cur, 'workshops_site', 'notes', lorem_ipsum)
+ change(cur, 'workshops_host', 'domain', domain)
+ change(cur, 'workshops_host', 'fullname', multi_word, 1.0)
+ change(cur, 'workshops_host', 'notes', lorem_ipsum)
change(cur, 'workshops_person', 'personal', multi_word, 0.1)
change(cur, 'workshops_person', 'middle', multi_word, 0.0, 0.9)
@@ -219,10 +232,16 @@
change(cur, 'workshops_person', 'url', url)
change(cur, 'workshops_person', 'username', monicker)
change(cur, 'workshops_person', 'password', empty_string)
+ change(cur, 'workshops_person', 'affiliation', empty_string)
change(cur, 'workshops_event', ('start', 'end'), dates)
change(cur, 'workshops_event', 'slug', event_slug)
change(cur, 'workshops_event', 'url', url)
+ change(cur, 'workshops_event', 'contact', empty_string)
+ change(cur, 'workshops_event', 'venue', empty_string)
+ change(cur, 'workshops_event', 'address', empty_string)
+ change(cur, 'workshops_event', 'latitude', rand_latitude)
+ change(cur, 'workshops_event', 'longitude', rand_longitude)
change(cur, 'workshops_event', 'reg_key', event_reg_key)
change(cur, 'workshops_event', 'notes', lorem_ipsum)
@@ -238,4 +257,6 @@
main()
# we need to populate reversion_* tables so that no-one needs to do that
# upon every `make database` call
- print("REMEMBER! to run `./manage.py createinitialrevisions` on the new database NOW")
+ print("REMEMBER! to run `./manage.py createinitialrevisions` on the new "
+ "database NOW.")
+ print("Next, run `sqlite3 SRC -cmd '.dump' > DEST.sql`")
|
{"golden_diff": "diff --git a/scripts/anonymizer.py b/scripts/anonymizer.py\n--- a/scripts/anonymizer.py\n+++ b/scripts/anonymizer.py\n@@ -147,6 +147,8 @@\n @tuplify\n def lorem_ipsum(cursor, i):\n '''Fill in a large text field.'''\n+ if random.uniform(0.0, 1.0) < 0.05:\n+ return ''\n result = '\\n'.join(LOREM_IPSUM[0:random.randrange(len(LOREM_IPSUM))])\n return result\n \n@@ -188,6 +190,17 @@\n def empty_string(cursor, i):\n return ''\n \n+\n+@tuplify\n+def rand_latitude(cursor, i):\n+ return random.uniform(-90, 90)\n+\n+\n+@tuplify\n+def rand_longitude(cursor, i):\n+ return random.uniform(0, 180)\n+\n+\n #------------------------------------------------------------\n \n def main():\n@@ -205,9 +218,9 @@\n cnx = sqlite3.connect(db_dst)\n cur = cnx.cursor()\n \n- change(cur, 'workshops_site', 'domain', domain)\n- change(cur, 'workshops_site', 'fullname', multi_word, 1.0)\n- change(cur, 'workshops_site', 'notes', lorem_ipsum)\n+ change(cur, 'workshops_host', 'domain', domain)\n+ change(cur, 'workshops_host', 'fullname', multi_word, 1.0)\n+ change(cur, 'workshops_host', 'notes', lorem_ipsum)\n \n change(cur, 'workshops_person', 'personal', multi_word, 0.1)\n change(cur, 'workshops_person', 'middle', multi_word, 0.0, 0.9)\n@@ -219,10 +232,16 @@\n change(cur, 'workshops_person', 'url', url)\n change(cur, 'workshops_person', 'username', monicker)\n change(cur, 'workshops_person', 'password', empty_string)\n+ change(cur, 'workshops_person', 'affiliation', empty_string)\n \n change(cur, 'workshops_event', ('start', 'end'), dates)\n change(cur, 'workshops_event', 'slug', event_slug)\n change(cur, 'workshops_event', 'url', url)\n+ change(cur, 'workshops_event', 'contact', empty_string)\n+ change(cur, 'workshops_event', 'venue', empty_string)\n+ change(cur, 'workshops_event', 'address', empty_string)\n+ change(cur, 'workshops_event', 'latitude', rand_latitude)\n+ change(cur, 'workshops_event', 'longitude', rand_longitude)\n change(cur, 'workshops_event', 'reg_key', event_reg_key)\n change(cur, 'workshops_event', 'notes', lorem_ipsum)\n \n@@ -238,4 +257,6 @@\n main()\n # we need to populate reversion_* tables so that no-one needs to do that\n # upon every `make database` call\n- print(\"REMEMBER! to run `./manage.py createinitialrevisions` on the new database NOW\")\n+ print(\"REMEMBER! to run `./manage.py createinitialrevisions` on the new \"\n+ \"database NOW.\")\n+ print(\"Next, run `sqlite3 SRC -cmd '.dump' > DEST.sql`\")\n", "issue": "New blurred database for development\nUpdate the `db.sql` file.\n\n", "before_files": [{"content": "import sys\nfrom datetime import date, timedelta\nimport random\nimport shutil\nimport sqlite3\n\n#------------------------------------------------------------\n\nALPHA = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789_'\n\nLOREM_IPSUM = [\n'''Lorem ipsum dolor sit amet, consectetur adipiscing elit. Donec a\ndiam lectus. Sed sit amet ipsum mauris. Maecenas congue ligula ac quam\nviverra nec consectetur ante hendrerit. Donec et mollis\ndolor. Praesent et diam eget libero egestas mattis sit amet vitae\naugue. Nam tincidunt congue enim, ut porta lorem lacinia\nconsectetur. Donec ut libero sed arcu vehicula ultricies a non\ntortor. Lorem ipsum dolor sit amet, consectetur adipiscing\nelit. Aenean ut gravida lorem. Ut turpis felis, pulvinar a semper sed,\nadipiscing id dolor. Pellentesque auctor nisi id magna consequat\nsagittis. Curabitur dapibus enim sit amet elit pharetra tincidunt\nfeugiat nisl imperdiet. Ut convallis libero in urna ultrices\naccumsan. 
Donec sed odio eros. Donec viverra mi quis quam pulvinar at\nmalesuada arcu rhoncus. Cum sociis natoque penatibus et magnis dis\nparturient montes, nascetur ridiculus mus. In rutrum accumsan\nultricies. Mauris vitae nisi at sem facilisis semper ac in est.'''\n,\n'''Vivamus fermentum semper porta. Nunc diam velit, adipiscing ut\ntristique vitae, sagittis vel odio. Maecenas convallis ullamcorper\nultricies. Curabitur ornare, ligula semper consectetur sagittis, nisi\ndiam iaculis velit, id fringilla sem nunc vel mi. Nam dictum, odio nec\npretium volutpat, arcu ante placerat erat, non tristique elit urna et\nturpis. Quisque mi metus, ornare sit amet fermentum et, tincidunt et\norci. Fusce eget orci a orci congue vestibulum. Ut dolor diam,\nelementum et vestibulum eu, porttitor vel elit. Curabitur venenatis\npulvinar tellus gravida ornare. Sed et erat faucibus nunc euismod\nultricies ut id justo. Nullam cursus suscipit nisi, et ultrices justo\nsodales nec. Fusce venenatis facilisis lectus ac semper. Aliquam at\nmassa ipsum. Quisque bibendum purus convallis nulla ultrices\nultricies. Nullam aliquam, mi eu aliquam tincidunt, purus velit\nlaoreet tortor, viverra pretium nisi quam vitae mi. Fusce vel volutpat\nelit. Nam sagittis nisi dui.'''\n,\n'''Suspendisse lectus leo, consectetur in tempor sit amet, placerat quis\nneque. Etiam luctus porttitor lorem, sed suscipit est rutrum\nnon. Curabitur lobortis nisl a enim congue semper. Aenean commodo\nultrices imperdiet. Vestibulum ut justo vel sapien venenatis\ntincidunt. Phasellus eget dolor sit amet ipsum dapibus condimentum\nvitae quis lectus. Aliquam ut massa in turpis dapibus\nconvallis. Praesent elit lacus, vestibulum at malesuada et, ornare et\nest. Ut augue nunc, sodales ut euismod non, adipiscing vitae\norci. Mauris ut placerat justo. Mauris in ultricies enim. Quisque nec\nest eleifend nulla ultrices egestas quis ut quam. Donec sollicitudin\nlectus a mauris pulvinar id aliquam urna cursus. Cras quis ligula sem,\nvel elementum mi. Phasellus non ullamcorper urna.'''\n,\n'''Class aptent taciti sociosqu ad litora torquent per conubia nostra,\nper inceptos himenaeos. In euismod ultrices facilisis. Vestibulum\nporta sapien adipiscing augue congue id pretium lectus molestie. Proin\nquis dictum nisl. Morbi id quam sapien, sed vestibulum sem. Duis\nelementum rutrum mauris sed convallis. Proin vestibulum magna\nmi. Aenean tristique hendrerit magna, ac facilisis nulla hendrerit\nut. Sed non tortor sodales quam auctor elementum. Donec hendrerit nunc\neget elit pharetra pulvinar. Suspendisse id tempus tortor. Aenean\nluctus, elit commodo laoreet commodo, justo nisi consequat massa, sed\nvulputate quam urna quis eros. Donec vel.''']\n\n#------------------------------------------------------------\n\ndef get_one(cursor, statement, *params):\n cursor.execute(statement, params)\n results = cursor.fetchall()\n if len(results) == 0:\n return None\n return results[0][0]\n\nRANDWORD_SEEN = set()\n\ndef randword(low, high):\n while True:\n r = ''.join([random.choice(ALPHA) for i in range(random.randrange(low, high))])\n if r not in RANDWORD_SEEN:\n RANDWORD_SEEN.add(r)\n return r\n\ndef change(cursor, table, field, func, *args):\n lower = get_one(cursor, 'select min(id) from {0};'.format(table))\n upper = get_one(cursor, 'select max(id) from {0};'.format(table))\n assert (lower is not None) and (upper is not None), \\\n 'No lower/upper bounds for {0}.{1}'.format(table, field)\n\n if isinstance(field, str):\n stmt = 'update {0} set {1}=? 
where id=?;'.format(table, field)\n elif isinstance(field, tuple):\n filler = ', '.join(['{0}=?'.format(f) for f in field])\n stmt = 'update {0} set {1} where id=?;'.format(table, filler)\n else:\n assert False, 'Unknown field type \"{0}\" for \"{1}\"'.format(type(field), field)\n\n for i in range(lower, upper+1):\n vals = func(cursor, i, *args) + (i, )\n try:\n cursor.execute(stmt, vals)\n except sqlite3.OperationalError as e:\n print('FAILED (operational error):', stmt, vals, e)\n except sqlite3.IntegrityError as e:\n print('FAILED (integrity error):', stmt, vals, e)\n\ndef tuplify(func):\n def f(*args, **kwargs):\n result = func(*args, **kwargs)\n return (result,)\n return f\n\n#------------------------------------------------------------\n\ndef dates(cursor, i):\n '''Generate start and end dates for workshop.'''\n start = date(2012, 1, 1) + timedelta(random.randrange(4 * 365))\n end = start + timedelta(random.randrange(4))\n if end == start:\n end = None\n return (start, end)\n\n@tuplify\ndef event_reg_key(cursor, i):\n '''Generate random event registration key.'''\n return str(1000000 + i)\n\n@tuplify\ndef event_slug(cursor, i):\n '''Generate event slugs once start/end dates and site names are set.'''\n start = get_one(cursor, 'select start from workshops_event where id=?;', i)\n if start is None:\n return\n year, month, day = start.split('-')\n return '{0}-{1}-{2}-{3}'.format(year, month, day, randword(3, 8))\n\n@tuplify\ndef url(cursor, i):\n '''Generate something that looks like a URL.'''\n _url = get_one(cursor, 'select url from workshops_event where id=?;', i)\n if not _url:\n return\n return 'http://{0}.{1}/{2}-{3}'.format(*[randword(2, 10) for x in range(4)])\n\n@tuplify\ndef lorem_ipsum(cursor, i):\n '''Fill in a large text field.'''\n result = '\\n'.join(LOREM_IPSUM[0:random.randrange(len(LOREM_IPSUM))])\n return result\n\n@tuplify\ndef monicker(cursor, i):\n '''Generate a username-style field.'''\n return randword(2, 10)\n\n@tuplify\ndef multi_word(cursor, i, prob_multi, prob_null=0.0):\n '''Fill in a multi-word field (e.g., site name or person's name).'''\n if random.uniform(0.0, 1.0) < prob_null:\n return None\n elif random.uniform(0.0, 1.0) < prob_multi:\n return '{0} {1}'.format(randword(2, 10), randword(2, 12))\n else:\n return randword(2, 10)\n\n@tuplify\ndef domain(cursor, i):\n '''Fill in site.domain.'''\n fields = []\n for x in range(2, random.randrange(4, 5)):\n fields.append(randword(2, 10))\n return '.'.join(fields)\n\n@tuplify\ndef gender(cursor, i):\n return random.choice('FMO')\n\n@tuplify\ndef email(cursor, i):\n if random.uniform(0.0, 1.0) < 0.05:\n return None\n return '{0}@{1}.{2}'.format(*[randword(2, 8) for x in range(3)])\n\n\n@tuplify\ndef empty_string(cursor, i):\n return ''\n\n#------------------------------------------------------------\n\ndef main():\n assert len(sys.argv) == 4, 'Usage: {0} seed /path/to/source/db /path/to/destination/db'.format(sys.argv[0])\n assert sys.argv[2] != sys.argv[3], 'Source and destination must be different database'\n\n seed = int(sys.argv[1])\n if seed == 0:\n seed = None\n db_src = sys.argv[2]\n db_dst = sys.argv[3]\n\n random.seed(seed)\n shutil.copyfile(db_src, db_dst)\n cnx = sqlite3.connect(db_dst)\n cur = cnx.cursor()\n\n change(cur, 'workshops_site', 'domain', domain)\n change(cur, 'workshops_site', 'fullname', multi_word, 1.0)\n change(cur, 'workshops_site', 'notes', lorem_ipsum)\n\n change(cur, 'workshops_person', 'personal', multi_word, 0.1)\n change(cur, 'workshops_person', 'middle', multi_word, 0.0, 0.9)\n 
change(cur, 'workshops_person', 'family', multi_word, 0.1)\n change(cur, 'workshops_person', 'gender', gender)\n change(cur, 'workshops_person', 'email', email)\n change(cur, 'workshops_person', 'github', monicker)\n change(cur, 'workshops_person', 'twitter', monicker)\n change(cur, 'workshops_person', 'url', url)\n change(cur, 'workshops_person', 'username', monicker)\n change(cur, 'workshops_person', 'password', empty_string)\n\n change(cur, 'workshops_event', ('start', 'end'), dates)\n change(cur, 'workshops_event', 'slug', event_slug)\n change(cur, 'workshops_event', 'url', url)\n change(cur, 'workshops_event', 'reg_key', event_reg_key)\n change(cur, 'workshops_event', 'notes', lorem_ipsum)\n\n # we can't store historical changes!\n cur.execute('delete from reversion_version;')\n cur.execute('delete from reversion_revision;')\n\n cnx.commit()\n cur.close()\n cnx.close()\n\nif __name__ == '__main__':\n main()\n # we need to populate reversion_* tables so that no-one needs to do that\n # upon every `make database` call\n print(\"REMEMBER! to run `./manage.py createinitialrevisions` on the new database NOW\")\n", "path": "scripts/anonymizer.py"}], "after_files": [{"content": "import sys\nfrom datetime import date, timedelta\nimport random\nimport shutil\nimport sqlite3\n\n#------------------------------------------------------------\n\nALPHA = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789_'\n\nLOREM_IPSUM = [\n'''Lorem ipsum dolor sit amet, consectetur adipiscing elit. Donec a\ndiam lectus. Sed sit amet ipsum mauris. Maecenas congue ligula ac quam\nviverra nec consectetur ante hendrerit. Donec et mollis\ndolor. Praesent et diam eget libero egestas mattis sit amet vitae\naugue. Nam tincidunt congue enim, ut porta lorem lacinia\nconsectetur. Donec ut libero sed arcu vehicula ultricies a non\ntortor. Lorem ipsum dolor sit amet, consectetur adipiscing\nelit. Aenean ut gravida lorem. Ut turpis felis, pulvinar a semper sed,\nadipiscing id dolor. Pellentesque auctor nisi id magna consequat\nsagittis. Curabitur dapibus enim sit amet elit pharetra tincidunt\nfeugiat nisl imperdiet. Ut convallis libero in urna ultrices\naccumsan. Donec sed odio eros. Donec viverra mi quis quam pulvinar at\nmalesuada arcu rhoncus. Cum sociis natoque penatibus et magnis dis\nparturient montes, nascetur ridiculus mus. In rutrum accumsan\nultricies. Mauris vitae nisi at sem facilisis semper ac in est.'''\n,\n'''Vivamus fermentum semper porta. Nunc diam velit, adipiscing ut\ntristique vitae, sagittis vel odio. Maecenas convallis ullamcorper\nultricies. Curabitur ornare, ligula semper consectetur sagittis, nisi\ndiam iaculis velit, id fringilla sem nunc vel mi. Nam dictum, odio nec\npretium volutpat, arcu ante placerat erat, non tristique elit urna et\nturpis. Quisque mi metus, ornare sit amet fermentum et, tincidunt et\norci. Fusce eget orci a orci congue vestibulum. Ut dolor diam,\nelementum et vestibulum eu, porttitor vel elit. Curabitur venenatis\npulvinar tellus gravida ornare. Sed et erat faucibus nunc euismod\nultricies ut id justo. Nullam cursus suscipit nisi, et ultrices justo\nsodales nec. Fusce venenatis facilisis lectus ac semper. Aliquam at\nmassa ipsum. Quisque bibendum purus convallis nulla ultrices\nultricies. Nullam aliquam, mi eu aliquam tincidunt, purus velit\nlaoreet tortor, viverra pretium nisi quam vitae mi. Fusce vel volutpat\nelit. Nam sagittis nisi dui.'''\n,\n'''Suspendisse lectus leo, consectetur in tempor sit amet, placerat quis\nneque. 
Etiam luctus porttitor lorem, sed suscipit est rutrum\nnon. Curabitur lobortis nisl a enim congue semper. Aenean commodo\nultrices imperdiet. Vestibulum ut justo vel sapien venenatis\ntincidunt. Phasellus eget dolor sit amet ipsum dapibus condimentum\nvitae quis lectus. Aliquam ut massa in turpis dapibus\nconvallis. Praesent elit lacus, vestibulum at malesuada et, ornare et\nest. Ut augue nunc, sodales ut euismod non, adipiscing vitae\norci. Mauris ut placerat justo. Mauris in ultricies enim. Quisque nec\nest eleifend nulla ultrices egestas quis ut quam. Donec sollicitudin\nlectus a mauris pulvinar id aliquam urna cursus. Cras quis ligula sem,\nvel elementum mi. Phasellus non ullamcorper urna.'''\n,\n'''Class aptent taciti sociosqu ad litora torquent per conubia nostra,\nper inceptos himenaeos. In euismod ultrices facilisis. Vestibulum\nporta sapien adipiscing augue congue id pretium lectus molestie. Proin\nquis dictum nisl. Morbi id quam sapien, sed vestibulum sem. Duis\nelementum rutrum mauris sed convallis. Proin vestibulum magna\nmi. Aenean tristique hendrerit magna, ac facilisis nulla hendrerit\nut. Sed non tortor sodales quam auctor elementum. Donec hendrerit nunc\neget elit pharetra pulvinar. Suspendisse id tempus tortor. Aenean\nluctus, elit commodo laoreet commodo, justo nisi consequat massa, sed\nvulputate quam urna quis eros. Donec vel.''']\n\n#------------------------------------------------------------\n\ndef get_one(cursor, statement, *params):\n cursor.execute(statement, params)\n results = cursor.fetchall()\n if len(results) == 0:\n return None\n return results[0][0]\n\nRANDWORD_SEEN = set()\n\ndef randword(low, high):\n while True:\n r = ''.join([random.choice(ALPHA) for i in range(random.randrange(low, high))])\n if r not in RANDWORD_SEEN:\n RANDWORD_SEEN.add(r)\n return r\n\ndef change(cursor, table, field, func, *args):\n lower = get_one(cursor, 'select min(id) from {0};'.format(table))\n upper = get_one(cursor, 'select max(id) from {0};'.format(table))\n assert (lower is not None) and (upper is not None), \\\n 'No lower/upper bounds for {0}.{1}'.format(table, field)\n\n if isinstance(field, str):\n stmt = 'update {0} set {1}=? 
where id=?;'.format(table, field)\n elif isinstance(field, tuple):\n filler = ', '.join(['{0}=?'.format(f) for f in field])\n stmt = 'update {0} set {1} where id=?;'.format(table, filler)\n else:\n assert False, 'Unknown field type \"{0}\" for \"{1}\"'.format(type(field), field)\n\n for i in range(lower, upper+1):\n vals = func(cursor, i, *args) + (i, )\n try:\n cursor.execute(stmt, vals)\n except sqlite3.OperationalError as e:\n print('FAILED (operational error):', stmt, vals, e)\n except sqlite3.IntegrityError as e:\n print('FAILED (integrity error):', stmt, vals, e)\n\ndef tuplify(func):\n def f(*args, **kwargs):\n result = func(*args, **kwargs)\n return (result,)\n return f\n\n#------------------------------------------------------------\n\ndef dates(cursor, i):\n '''Generate start and end dates for workshop.'''\n start = date(2012, 1, 1) + timedelta(random.randrange(4 * 365))\n end = start + timedelta(random.randrange(4))\n if end == start:\n end = None\n return (start, end)\n\n@tuplify\ndef event_reg_key(cursor, i):\n '''Generate random event registration key.'''\n return str(1000000 + i)\n\n@tuplify\ndef event_slug(cursor, i):\n '''Generate event slugs once start/end dates and site names are set.'''\n start = get_one(cursor, 'select start from workshops_event where id=?;', i)\n if start is None:\n return\n year, month, day = start.split('-')\n return '{0}-{1}-{2}-{3}'.format(year, month, day, randword(3, 8))\n\n@tuplify\ndef url(cursor, i):\n '''Generate something that looks like a URL.'''\n _url = get_one(cursor, 'select url from workshops_event where id=?;', i)\n if not _url:\n return\n return 'http://{0}.{1}/{2}-{3}'.format(*[randword(2, 10) for x in range(4)])\n\n@tuplify\ndef lorem_ipsum(cursor, i):\n '''Fill in a large text field.'''\n if random.uniform(0.0, 1.0) < 0.05:\n return ''\n result = '\\n'.join(LOREM_IPSUM[0:random.randrange(len(LOREM_IPSUM))])\n return result\n\n@tuplify\ndef monicker(cursor, i):\n '''Generate a username-style field.'''\n return randword(2, 10)\n\n@tuplify\ndef multi_word(cursor, i, prob_multi, prob_null=0.0):\n '''Fill in a multi-word field (e.g., site name or person's name).'''\n if random.uniform(0.0, 1.0) < prob_null:\n return None\n elif random.uniform(0.0, 1.0) < prob_multi:\n return '{0} {1}'.format(randword(2, 10), randword(2, 12))\n else:\n return randword(2, 10)\n\n@tuplify\ndef domain(cursor, i):\n '''Fill in site.domain.'''\n fields = []\n for x in range(2, random.randrange(4, 5)):\n fields.append(randword(2, 10))\n return '.'.join(fields)\n\n@tuplify\ndef gender(cursor, i):\n return random.choice('FMO')\n\n@tuplify\ndef email(cursor, i):\n if random.uniform(0.0, 1.0) < 0.05:\n return None\n return '{0}@{1}.{2}'.format(*[randword(2, 8) for x in range(3)])\n\n\n@tuplify\ndef empty_string(cursor, i):\n return ''\n\n\n@tuplify\ndef rand_latitude(cursor, i):\n return random.uniform(-90, 90)\n\n\n@tuplify\ndef rand_longitude(cursor, i):\n return random.uniform(0, 180)\n\n\n#------------------------------------------------------------\n\ndef main():\n assert len(sys.argv) == 4, 'Usage: {0} seed /path/to/source/db /path/to/destination/db'.format(sys.argv[0])\n assert sys.argv[2] != sys.argv[3], 'Source and destination must be different database'\n\n seed = int(sys.argv[1])\n if seed == 0:\n seed = None\n db_src = sys.argv[2]\n db_dst = sys.argv[3]\n\n random.seed(seed)\n shutil.copyfile(db_src, db_dst)\n cnx = sqlite3.connect(db_dst)\n cur = cnx.cursor()\n\n change(cur, 'workshops_host', 'domain', domain)\n change(cur, 'workshops_host', 
'fullname', multi_word, 1.0)\n change(cur, 'workshops_host', 'notes', lorem_ipsum)\n\n change(cur, 'workshops_person', 'personal', multi_word, 0.1)\n change(cur, 'workshops_person', 'middle', multi_word, 0.0, 0.9)\n change(cur, 'workshops_person', 'family', multi_word, 0.1)\n change(cur, 'workshops_person', 'gender', gender)\n change(cur, 'workshops_person', 'email', email)\n change(cur, 'workshops_person', 'github', monicker)\n change(cur, 'workshops_person', 'twitter', monicker)\n change(cur, 'workshops_person', 'url', url)\n change(cur, 'workshops_person', 'username', monicker)\n change(cur, 'workshops_person', 'password', empty_string)\n change(cur, 'workshops_person', 'affiliation', empty_string)\n\n change(cur, 'workshops_event', ('start', 'end'), dates)\n change(cur, 'workshops_event', 'slug', event_slug)\n change(cur, 'workshops_event', 'url', url)\n change(cur, 'workshops_event', 'contact', empty_string)\n change(cur, 'workshops_event', 'venue', empty_string)\n change(cur, 'workshops_event', 'address', empty_string)\n change(cur, 'workshops_event', 'latitude', rand_latitude)\n change(cur, 'workshops_event', 'longitude', rand_longitude)\n change(cur, 'workshops_event', 'reg_key', event_reg_key)\n change(cur, 'workshops_event', 'notes', lorem_ipsum)\n\n # we can't store historical changes!\n cur.execute('delete from reversion_version;')\n cur.execute('delete from reversion_revision;')\n\n cnx.commit()\n cur.close()\n cnx.close()\n\nif __name__ == '__main__':\n main()\n # we need to populate reversion_* tables so that no-one needs to do that\n # upon every `make database` call\n print(\"REMEMBER! to run `./manage.py createinitialrevisions` on the new \"\n \"database NOW.\")\n print(\"Next, run `sqlite3 SRC -cmd '.dump' > DEST.sql`\")\n", "path": "scripts/anonymizer.py"}]}
| 3,576 | 758 |
gh_patches_debug_8348
|
rasdani/github-patches
|
git_diff
|
mitmproxy__mitmproxy-2999
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
No shortcut for replacements editor
##### Steps to reproduce the problem:
1. run `mitmproxy`
2. type `O` to open the options editor
3. move the cursor down to `replacements` and press Enter
4. type `a` then `/~s/foo/bar` to add a replacement, and press Esc to commit
5. type `q` and again `q` to return to the flows list
6. the status bar now says “[Replacing]” with the ‘R’ highlighted, as if it were a shortcut
7. however, typing `R` doesn’t do anything
##### Any other comments? What have you tried so far?
It seems like `R` was intended to be a shortcut for the replacements editor (which would be very convenient), but left out. It’s not listed in the online help, either.
If it wasn’t intended to be a shortcut, it shouldn’t be highlighted in the status bar.
##### System information
Mitmproxy: 3.0.3 binary
Python: 3.5.2
OpenSSL: OpenSSL 1.1.0g 2 Nov 2017
Platform: Linux-4.4.0-116-generic-x86_64-with-debian-stretch-sid
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mitmproxy/tools/console/statusbar.py`
Content:
```
1 import os.path
2
3 import urwid
4
5 from mitmproxy.tools.console import common
6 from mitmproxy.tools.console import signals
7 from mitmproxy.tools.console import commandexecutor
8 import mitmproxy.tools.console.master # noqa
9 from mitmproxy.tools.console.commander import commander
10
11
12 class PromptPath:
13 def __init__(self, callback, args):
14 self.callback, self.args = callback, args
15
16 def __call__(self, pth):
17 if not pth:
18 return
19 pth = os.path.expanduser(pth)
20 try:
21 return self.callback(pth, *self.args)
22 except IOError as v:
23 signals.status_message.send(message=v.strerror)
24
25
26 class PromptStub:
27 def __init__(self, callback, args):
28 self.callback, self.args = callback, args
29
30 def __call__(self, txt):
31 return self.callback(txt, *self.args)
32
33
34 class ActionBar(urwid.WidgetWrap):
35
36 def __init__(self, master):
37 self.master = master
38 urwid.WidgetWrap.__init__(self, None)
39 self.clear()
40 signals.status_message.connect(self.sig_message)
41 signals.status_prompt.connect(self.sig_prompt)
42 signals.status_prompt_onekey.connect(self.sig_prompt_onekey)
43 signals.status_prompt_command.connect(self.sig_prompt_command)
44
45 self.prompting = None
46
47 self.onekey = False
48
49 def sig_message(self, sender, message, expire=1):
50 if self.prompting:
51 return
52 cols, _ = self.master.ui.get_cols_rows()
53 w = urwid.Text(self.shorten_message(message, cols))
54 self._w = w
55 if expire:
56 def cb(*args):
57 if w == self._w:
58 self.clear()
59 signals.call_in.send(seconds=expire, callback=cb)
60
61 def prep_prompt(self, p):
62 return p.strip() + ": "
63
64 def shorten_message(self, msg, max_width):
65 """
66 Shorten message so that it fits into a single line in the statusbar.
67 """
68 if isinstance(msg, tuple):
69 disp_attr, msg_text = msg
70 elif isinstance(msg, str):
71 disp_attr, msg_text = None, msg
72 else:
73 return msg
74 msg_end = "\u2026" # unicode ellipsis for the end of shortened message
75 prompt = "(more in eventlog)"
76
77 msg_lines = msg_text.split("\n")
78 first_line = msg_lines[0]
79 if len(msg_lines) > 1:
80 # First line of messages with a few lines must end with prompt.
81 line_length = len(first_line) + len(prompt)
82 else:
83 line_length = len(first_line)
84
85 if line_length > max_width:
86 shortening_index = max(0, max_width - len(prompt) - len(msg_end))
87 first_line = first_line[:shortening_index] + msg_end
88 else:
89 if len(msg_lines) == 1:
90 prompt = ""
91
92 return [(disp_attr, first_line), ("warn", prompt)]
93
94 def sig_prompt(self, sender, prompt, text, callback, args=()):
95 signals.focus.send(self, section="footer")
96 self._w = urwid.Edit(self.prep_prompt(prompt), text or "")
97 self.prompting = PromptStub(callback, args)
98
99 def sig_prompt_command(self, sender, partial=""):
100 signals.focus.send(self, section="footer")
101 self._w = commander.CommandEdit(self.master, partial)
102 self.prompting = commandexecutor.CommandExecutor(self.master)
103
104 def sig_prompt_onekey(self, sender, prompt, keys, callback, args=()):
105 """
106 Keys are a set of (word, key) tuples. The appropriate key in the
107 word is highlighted.
108 """
109 signals.focus.send(self, section="footer")
110 prompt = [prompt, " ("]
111 mkup = []
112 for i, e in enumerate(keys):
113 mkup.extend(common.highlight_key(e[0], e[1]))
114 if i < len(keys) - 1:
115 mkup.append(",")
116 prompt.extend(mkup)
117 prompt.append(")? ")
118 self.onekey = set(i[1] for i in keys)
119 self._w = urwid.Edit(prompt, "")
120 self.prompting = PromptStub(callback, args)
121
122 def selectable(self):
123 return True
124
125 def keypress(self, size, k):
126 if self.prompting:
127 if k == "esc":
128 self.prompt_done()
129 elif self.onekey:
130 if k == "enter":
131 self.prompt_done()
132 elif k in self.onekey:
133 self.prompt_execute(k)
134 elif k == "enter":
135 self.prompt_execute(self._w.get_edit_text())
136 else:
137 if common.is_keypress(k):
138 self._w.keypress(size, k)
139 else:
140 return k
141
142 def clear(self):
143 self._w = urwid.Text("")
144 self.prompting = None
145
146 def prompt_done(self):
147 self.prompting = None
148 self.onekey = False
149 signals.status_message.send(message="")
150 signals.focus.send(self, section="body")
151
152 def prompt_execute(self, txt):
153 p = self.prompting
154 self.prompt_done()
155 msg = p(txt)
156 if msg:
157 signals.status_message.send(message=msg, expire=1)
158
159
160 class StatusBar(urwid.WidgetWrap):
161 keyctx = ""
162
163 def __init__(
164 self, master: "mitmproxy.tools.console.master.ConsoleMaster"
165 ) -> None:
166 self.master = master
167 self.ib = urwid.WidgetWrap(urwid.Text(""))
168 self.ab = ActionBar(self.master)
169 super().__init__(urwid.Pile([self.ib, self.ab]))
170 signals.update_settings.connect(self.sig_update)
171 signals.flowlist_change.connect(self.sig_update)
172 master.options.changed.connect(self.sig_update)
173 master.view.focus.sig_change.connect(self.sig_update)
174 master.view.sig_view_add.connect(self.sig_update)
175 self.redraw()
176
177 def sig_update(self, sender, flow=None, updated=None):
178 self.redraw()
179
180 def keypress(self, *args, **kwargs):
181 return self.ab.keypress(*args, **kwargs)
182
183 def get_status(self):
184 r = []
185
186 sreplay = self.master.addons.get("serverplayback")
187 creplay = self.master.addons.get("clientplayback")
188
189 if len(self.master.options.setheaders):
190 r.append("[")
191 r.append(("heading_key", "H"))
192 r.append("eaders]")
193 if len(self.master.options.replacements):
194 r.append("[")
195 r.append(("heading_key", "R"))
196 r.append("eplacing]")
197 if creplay.count():
198 r.append("[")
199 r.append(("heading_key", "cplayback"))
200 r.append(":%s]" % creplay.count())
201 if sreplay.count():
202 r.append("[")
203 r.append(("heading_key", "splayback"))
204 r.append(":%s]" % sreplay.count())
205 if self.master.options.ignore_hosts:
206 r.append("[")
207 r.append(("heading_key", "I"))
208 r.append("gnore:%d]" % len(self.master.options.ignore_hosts))
209 if self.master.options.tcp_hosts:
210 r.append("[")
211 r.append(("heading_key", "T"))
212 r.append("CP:%d]" % len(self.master.options.tcp_hosts))
213 if self.master.options.intercept:
214 r.append("[")
215 if not self.master.options.intercept_active:
216 r.append("X")
217 r.append(("heading_key", "i"))
218 r.append(":%s]" % self.master.options.intercept)
219 if self.master.options.view_filter:
220 r.append("[")
221 r.append(("heading_key", "f"))
222 r.append(":%s]" % self.master.options.view_filter)
223 if self.master.options.stickycookie:
224 r.append("[")
225 r.append(("heading_key", "t"))
226 r.append(":%s]" % self.master.options.stickycookie)
227 if self.master.options.stickyauth:
228 r.append("[")
229 r.append(("heading_key", "u"))
230 r.append(":%s]" % self.master.options.stickyauth)
231 if self.master.options.console_default_contentview != 'auto':
232 r.append("[contentview:%s]" % (self.master.options.console_default_contentview))
233 if self.master.options.has_changed("view_order"):
234 r.append("[")
235 r.append(("heading_key", "o"))
236 r.append(":%s]" % self.master.options.view_order)
237
238 opts = []
239 if self.master.options.anticache:
240 opts.append("anticache")
241 if self.master.options.anticomp:
242 opts.append("anticomp")
243 if self.master.options.showhost:
244 opts.append("showhost")
245 if not self.master.options.server_replay_refresh:
246 opts.append("norefresh")
247 if self.master.options.server_replay_kill_extra:
248 opts.append("killextra")
249 if not self.master.options.upstream_cert:
250 opts.append("no-upstream-cert")
251 if self.master.options.console_focus_follow:
252 opts.append("following")
253 if self.master.options.stream_large_bodies:
254 opts.append(self.master.options.stream_large_bodies)
255
256 if opts:
257 r.append("[%s]" % (":".join(opts)))
258
259 if self.master.options.mode != "regular":
260 r.append("[%s]" % self.master.options.mode)
261 if self.master.options.scripts:
262 r.append("[scripts:%s]" % len(self.master.options.scripts))
263
264 if self.master.options.save_stream_file:
265 r.append("[W:%s]" % self.master.options.save_stream_file)
266
267 return r
268
269 def redraw(self):
270 fc = len(self.master.view)
271 if self.master.view.focus.flow is None:
272 offset = 0
273 else:
274 offset = self.master.view.focus.index + 1
275
276 if self.master.options.view_order_reversed:
277 arrow = common.SYMBOL_UP
278 else:
279 arrow = common.SYMBOL_DOWN
280
281 marked = ""
282 if self.master.view.show_marked:
283 marked = "M"
284
285 t = [
286 ('heading', ("%s %s [%s/%s]" % (arrow, marked, offset, fc)).ljust(11)),
287 ]
288
289 if self.master.options.server:
290 host = self.master.options.listen_host
291 if host == "0.0.0.0" or host == "":
292 host = "*"
293 boundaddr = "[%s:%s]" % (host, self.master.options.listen_port)
294 else:
295 boundaddr = ""
296 t.extend(self.get_status())
297 status = urwid.AttrWrap(urwid.Columns([
298 urwid.Text(t),
299 urwid.Text(boundaddr, align="right"),
300 ]), "heading")
301 self.ib._w = status
302
303 def selectable(self):
304 return True
305
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mitmproxy/tools/console/statusbar.py b/mitmproxy/tools/console/statusbar.py
--- a/mitmproxy/tools/console/statusbar.py
+++ b/mitmproxy/tools/console/statusbar.py
@@ -191,9 +191,7 @@
r.append(("heading_key", "H"))
r.append("eaders]")
if len(self.master.options.replacements):
- r.append("[")
- r.append(("heading_key", "R"))
- r.append("eplacing]")
+ r.append("[%d replacements]" % len(self.master.options.replacements))
if creplay.count():
r.append("[")
r.append(("heading_key", "cplayback"))
|
{"golden_diff": "diff --git a/mitmproxy/tools/console/statusbar.py b/mitmproxy/tools/console/statusbar.py\n--- a/mitmproxy/tools/console/statusbar.py\n+++ b/mitmproxy/tools/console/statusbar.py\n@@ -191,9 +191,7 @@\n r.append((\"heading_key\", \"H\"))\n r.append(\"eaders]\")\n if len(self.master.options.replacements):\n- r.append(\"[\")\n- r.append((\"heading_key\", \"R\"))\n- r.append(\"eplacing]\")\n+ r.append(\"[%d replacements]\" % len(self.master.options.replacements))\n if creplay.count():\n r.append(\"[\")\n r.append((\"heading_key\", \"cplayback\"))\n", "issue": "No shortcut for replacements editor\n##### Steps to reproduce the problem:\r\n\r\n1. run `mitmproxy`\r\n2. type `O` to open the options editor\r\n3. move the cursor down to `replacements` and press Enter\r\n4. type `a` then `/~s/foo/bar` to add a replacement, and press Esc to commit\r\n5. type `q` and again `q` to return to the flows list\r\n6. the status bar now says \u201c[Replacing]\u201d with the \u2018R\u2019 highlighted, as if it were a shortcut\r\n7. however, typing `R` doesn\u2019t do anything\r\n\r\n##### Any other comments? What have you tried so far?\r\n\r\nIt seems like `R` was intended to be a shortcut for the replacements editor (which would be very convenient), but left out. It\u2019s not listed in the online help, either.\r\n\r\nIf it wasn\u2019t intended to be a shortcut, it shouldn\u2019t be highlighted in the status bar.\r\n\r\n##### System information\r\n\r\nMitmproxy: 3.0.3 binary\r\nPython: 3.5.2\r\nOpenSSL: OpenSSL 1.1.0g 2 Nov 2017\r\nPlatform: Linux-4.4.0-116-generic-x86_64-with-debian-stretch-sid\n", "before_files": [{"content": "import os.path\n\nimport urwid\n\nfrom mitmproxy.tools.console import common\nfrom mitmproxy.tools.console import signals\nfrom mitmproxy.tools.console import commandexecutor\nimport mitmproxy.tools.console.master # noqa\nfrom mitmproxy.tools.console.commander import commander\n\n\nclass PromptPath:\n def __init__(self, callback, args):\n self.callback, self.args = callback, args\n\n def __call__(self, pth):\n if not pth:\n return\n pth = os.path.expanduser(pth)\n try:\n return self.callback(pth, *self.args)\n except IOError as v:\n signals.status_message.send(message=v.strerror)\n\n\nclass PromptStub:\n def __init__(self, callback, args):\n self.callback, self.args = callback, args\n\n def __call__(self, txt):\n return self.callback(txt, *self.args)\n\n\nclass ActionBar(urwid.WidgetWrap):\n\n def __init__(self, master):\n self.master = master\n urwid.WidgetWrap.__init__(self, None)\n self.clear()\n signals.status_message.connect(self.sig_message)\n signals.status_prompt.connect(self.sig_prompt)\n signals.status_prompt_onekey.connect(self.sig_prompt_onekey)\n signals.status_prompt_command.connect(self.sig_prompt_command)\n\n self.prompting = None\n\n self.onekey = False\n\n def sig_message(self, sender, message, expire=1):\n if self.prompting:\n return\n cols, _ = self.master.ui.get_cols_rows()\n w = urwid.Text(self.shorten_message(message, cols))\n self._w = w\n if expire:\n def cb(*args):\n if w == self._w:\n self.clear()\n signals.call_in.send(seconds=expire, callback=cb)\n\n def prep_prompt(self, p):\n return p.strip() + \": \"\n\n def shorten_message(self, msg, max_width):\n \"\"\"\n Shorten message so that it fits into a single line in the statusbar.\n \"\"\"\n if isinstance(msg, tuple):\n disp_attr, msg_text = msg\n elif isinstance(msg, str):\n disp_attr, msg_text = None, msg\n else:\n return msg\n msg_end = \"\\u2026\" # unicode ellipsis for the end of shortened 
message\n prompt = \"(more in eventlog)\"\n\n msg_lines = msg_text.split(\"\\n\")\n first_line = msg_lines[0]\n if len(msg_lines) > 1:\n # First line of messages with a few lines must end with prompt.\n line_length = len(first_line) + len(prompt)\n else:\n line_length = len(first_line)\n\n if line_length > max_width:\n shortening_index = max(0, max_width - len(prompt) - len(msg_end))\n first_line = first_line[:shortening_index] + msg_end\n else:\n if len(msg_lines) == 1:\n prompt = \"\"\n\n return [(disp_attr, first_line), (\"warn\", prompt)]\n\n def sig_prompt(self, sender, prompt, text, callback, args=()):\n signals.focus.send(self, section=\"footer\")\n self._w = urwid.Edit(self.prep_prompt(prompt), text or \"\")\n self.prompting = PromptStub(callback, args)\n\n def sig_prompt_command(self, sender, partial=\"\"):\n signals.focus.send(self, section=\"footer\")\n self._w = commander.CommandEdit(self.master, partial)\n self.prompting = commandexecutor.CommandExecutor(self.master)\n\n def sig_prompt_onekey(self, sender, prompt, keys, callback, args=()):\n \"\"\"\n Keys are a set of (word, key) tuples. The appropriate key in the\n word is highlighted.\n \"\"\"\n signals.focus.send(self, section=\"footer\")\n prompt = [prompt, \" (\"]\n mkup = []\n for i, e in enumerate(keys):\n mkup.extend(common.highlight_key(e[0], e[1]))\n if i < len(keys) - 1:\n mkup.append(\",\")\n prompt.extend(mkup)\n prompt.append(\")? \")\n self.onekey = set(i[1] for i in keys)\n self._w = urwid.Edit(prompt, \"\")\n self.prompting = PromptStub(callback, args)\n\n def selectable(self):\n return True\n\n def keypress(self, size, k):\n if self.prompting:\n if k == \"esc\":\n self.prompt_done()\n elif self.onekey:\n if k == \"enter\":\n self.prompt_done()\n elif k in self.onekey:\n self.prompt_execute(k)\n elif k == \"enter\":\n self.prompt_execute(self._w.get_edit_text())\n else:\n if common.is_keypress(k):\n self._w.keypress(size, k)\n else:\n return k\n\n def clear(self):\n self._w = urwid.Text(\"\")\n self.prompting = None\n\n def prompt_done(self):\n self.prompting = None\n self.onekey = False\n signals.status_message.send(message=\"\")\n signals.focus.send(self, section=\"body\")\n\n def prompt_execute(self, txt):\n p = self.prompting\n self.prompt_done()\n msg = p(txt)\n if msg:\n signals.status_message.send(message=msg, expire=1)\n\n\nclass StatusBar(urwid.WidgetWrap):\n keyctx = \"\"\n\n def __init__(\n self, master: \"mitmproxy.tools.console.master.ConsoleMaster\"\n ) -> None:\n self.master = master\n self.ib = urwid.WidgetWrap(urwid.Text(\"\"))\n self.ab = ActionBar(self.master)\n super().__init__(urwid.Pile([self.ib, self.ab]))\n signals.update_settings.connect(self.sig_update)\n signals.flowlist_change.connect(self.sig_update)\n master.options.changed.connect(self.sig_update)\n master.view.focus.sig_change.connect(self.sig_update)\n master.view.sig_view_add.connect(self.sig_update)\n self.redraw()\n\n def sig_update(self, sender, flow=None, updated=None):\n self.redraw()\n\n def keypress(self, *args, **kwargs):\n return self.ab.keypress(*args, **kwargs)\n\n def get_status(self):\n r = []\n\n sreplay = self.master.addons.get(\"serverplayback\")\n creplay = self.master.addons.get(\"clientplayback\")\n\n if len(self.master.options.setheaders):\n r.append(\"[\")\n r.append((\"heading_key\", \"H\"))\n r.append(\"eaders]\")\n if len(self.master.options.replacements):\n r.append(\"[\")\n r.append((\"heading_key\", \"R\"))\n r.append(\"eplacing]\")\n if creplay.count():\n r.append(\"[\")\n r.append((\"heading_key\", 
\"cplayback\"))\n r.append(\":%s]\" % creplay.count())\n if sreplay.count():\n r.append(\"[\")\n r.append((\"heading_key\", \"splayback\"))\n r.append(\":%s]\" % sreplay.count())\n if self.master.options.ignore_hosts:\n r.append(\"[\")\n r.append((\"heading_key\", \"I\"))\n r.append(\"gnore:%d]\" % len(self.master.options.ignore_hosts))\n if self.master.options.tcp_hosts:\n r.append(\"[\")\n r.append((\"heading_key\", \"T\"))\n r.append(\"CP:%d]\" % len(self.master.options.tcp_hosts))\n if self.master.options.intercept:\n r.append(\"[\")\n if not self.master.options.intercept_active:\n r.append(\"X\")\n r.append((\"heading_key\", \"i\"))\n r.append(\":%s]\" % self.master.options.intercept)\n if self.master.options.view_filter:\n r.append(\"[\")\n r.append((\"heading_key\", \"f\"))\n r.append(\":%s]\" % self.master.options.view_filter)\n if self.master.options.stickycookie:\n r.append(\"[\")\n r.append((\"heading_key\", \"t\"))\n r.append(\":%s]\" % self.master.options.stickycookie)\n if self.master.options.stickyauth:\n r.append(\"[\")\n r.append((\"heading_key\", \"u\"))\n r.append(\":%s]\" % self.master.options.stickyauth)\n if self.master.options.console_default_contentview != 'auto':\n r.append(\"[contentview:%s]\" % (self.master.options.console_default_contentview))\n if self.master.options.has_changed(\"view_order\"):\n r.append(\"[\")\n r.append((\"heading_key\", \"o\"))\n r.append(\":%s]\" % self.master.options.view_order)\n\n opts = []\n if self.master.options.anticache:\n opts.append(\"anticache\")\n if self.master.options.anticomp:\n opts.append(\"anticomp\")\n if self.master.options.showhost:\n opts.append(\"showhost\")\n if not self.master.options.server_replay_refresh:\n opts.append(\"norefresh\")\n if self.master.options.server_replay_kill_extra:\n opts.append(\"killextra\")\n if not self.master.options.upstream_cert:\n opts.append(\"no-upstream-cert\")\n if self.master.options.console_focus_follow:\n opts.append(\"following\")\n if self.master.options.stream_large_bodies:\n opts.append(self.master.options.stream_large_bodies)\n\n if opts:\n r.append(\"[%s]\" % (\":\".join(opts)))\n\n if self.master.options.mode != \"regular\":\n r.append(\"[%s]\" % self.master.options.mode)\n if self.master.options.scripts:\n r.append(\"[scripts:%s]\" % len(self.master.options.scripts))\n\n if self.master.options.save_stream_file:\n r.append(\"[W:%s]\" % self.master.options.save_stream_file)\n\n return r\n\n def redraw(self):\n fc = len(self.master.view)\n if self.master.view.focus.flow is None:\n offset = 0\n else:\n offset = self.master.view.focus.index + 1\n\n if self.master.options.view_order_reversed:\n arrow = common.SYMBOL_UP\n else:\n arrow = common.SYMBOL_DOWN\n\n marked = \"\"\n if self.master.view.show_marked:\n marked = \"M\"\n\n t = [\n ('heading', (\"%s %s [%s/%s]\" % (arrow, marked, offset, fc)).ljust(11)),\n ]\n\n if self.master.options.server:\n host = self.master.options.listen_host\n if host == \"0.0.0.0\" or host == \"\":\n host = \"*\"\n boundaddr = \"[%s:%s]\" % (host, self.master.options.listen_port)\n else:\n boundaddr = \"\"\n t.extend(self.get_status())\n status = urwid.AttrWrap(urwid.Columns([\n urwid.Text(t),\n urwid.Text(boundaddr, align=\"right\"),\n ]), \"heading\")\n self.ib._w = status\n\n def selectable(self):\n return True\n", "path": "mitmproxy/tools/console/statusbar.py"}], "after_files": [{"content": "import os.path\n\nimport urwid\n\nfrom mitmproxy.tools.console import common\nfrom mitmproxy.tools.console import signals\nfrom mitmproxy.tools.console 
import commandexecutor\nimport mitmproxy.tools.console.master # noqa\nfrom mitmproxy.tools.console.commander import commander\n\n\nclass PromptPath:\n def __init__(self, callback, args):\n self.callback, self.args = callback, args\n\n def __call__(self, pth):\n if not pth:\n return\n pth = os.path.expanduser(pth)\n try:\n return self.callback(pth, *self.args)\n except IOError as v:\n signals.status_message.send(message=v.strerror)\n\n\nclass PromptStub:\n def __init__(self, callback, args):\n self.callback, self.args = callback, args\n\n def __call__(self, txt):\n return self.callback(txt, *self.args)\n\n\nclass ActionBar(urwid.WidgetWrap):\n\n def __init__(self, master):\n self.master = master\n urwid.WidgetWrap.__init__(self, None)\n self.clear()\n signals.status_message.connect(self.sig_message)\n signals.status_prompt.connect(self.sig_prompt)\n signals.status_prompt_onekey.connect(self.sig_prompt_onekey)\n signals.status_prompt_command.connect(self.sig_prompt_command)\n\n self.prompting = None\n\n self.onekey = False\n\n def sig_message(self, sender, message, expire=1):\n if self.prompting:\n return\n cols, _ = self.master.ui.get_cols_rows()\n w = urwid.Text(self.shorten_message(message, cols))\n self._w = w\n if expire:\n def cb(*args):\n if w == self._w:\n self.clear()\n signals.call_in.send(seconds=expire, callback=cb)\n\n def prep_prompt(self, p):\n return p.strip() + \": \"\n\n def shorten_message(self, msg, max_width):\n \"\"\"\n Shorten message so that it fits into a single line in the statusbar.\n \"\"\"\n if isinstance(msg, tuple):\n disp_attr, msg_text = msg\n elif isinstance(msg, str):\n disp_attr, msg_text = None, msg\n else:\n return msg\n msg_end = \"\\u2026\" # unicode ellipsis for the end of shortened message\n prompt = \"(more in eventlog)\"\n\n msg_lines = msg_text.split(\"\\n\")\n first_line = msg_lines[0]\n if len(msg_lines) > 1:\n # First line of messages with a few lines must end with prompt.\n line_length = len(first_line) + len(prompt)\n else:\n line_length = len(first_line)\n\n if line_length > max_width:\n shortening_index = max(0, max_width - len(prompt) - len(msg_end))\n first_line = first_line[:shortening_index] + msg_end\n else:\n if len(msg_lines) == 1:\n prompt = \"\"\n\n return [(disp_attr, first_line), (\"warn\", prompt)]\n\n def sig_prompt(self, sender, prompt, text, callback, args=()):\n signals.focus.send(self, section=\"footer\")\n self._w = urwid.Edit(self.prep_prompt(prompt), text or \"\")\n self.prompting = PromptStub(callback, args)\n\n def sig_prompt_command(self, sender, partial=\"\"):\n signals.focus.send(self, section=\"footer\")\n self._w = commander.CommandEdit(self.master, partial)\n self.prompting = commandexecutor.CommandExecutor(self.master)\n\n def sig_prompt_onekey(self, sender, prompt, keys, callback, args=()):\n \"\"\"\n Keys are a set of (word, key) tuples. The appropriate key in the\n word is highlighted.\n \"\"\"\n signals.focus.send(self, section=\"footer\")\n prompt = [prompt, \" (\"]\n mkup = []\n for i, e in enumerate(keys):\n mkup.extend(common.highlight_key(e[0], e[1]))\n if i < len(keys) - 1:\n mkup.append(\",\")\n prompt.extend(mkup)\n prompt.append(\")? 
\")\n self.onekey = set(i[1] for i in keys)\n self._w = urwid.Edit(prompt, \"\")\n self.prompting = PromptStub(callback, args)\n\n def selectable(self):\n return True\n\n def keypress(self, size, k):\n if self.prompting:\n if k == \"esc\":\n self.prompt_done()\n elif self.onekey:\n if k == \"enter\":\n self.prompt_done()\n elif k in self.onekey:\n self.prompt_execute(k)\n elif k == \"enter\":\n self.prompt_execute(self._w.get_edit_text())\n else:\n if common.is_keypress(k):\n self._w.keypress(size, k)\n else:\n return k\n\n def clear(self):\n self._w = urwid.Text(\"\")\n self.prompting = None\n\n def prompt_done(self):\n self.prompting = None\n self.onekey = False\n signals.status_message.send(message=\"\")\n signals.focus.send(self, section=\"body\")\n\n def prompt_execute(self, txt):\n p = self.prompting\n self.prompt_done()\n msg = p(txt)\n if msg:\n signals.status_message.send(message=msg, expire=1)\n\n\nclass StatusBar(urwid.WidgetWrap):\n keyctx = \"\"\n\n def __init__(\n self, master: \"mitmproxy.tools.console.master.ConsoleMaster\"\n ) -> None:\n self.master = master\n self.ib = urwid.WidgetWrap(urwid.Text(\"\"))\n self.ab = ActionBar(self.master)\n super().__init__(urwid.Pile([self.ib, self.ab]))\n signals.update_settings.connect(self.sig_update)\n signals.flowlist_change.connect(self.sig_update)\n master.options.changed.connect(self.sig_update)\n master.view.focus.sig_change.connect(self.sig_update)\n master.view.sig_view_add.connect(self.sig_update)\n self.redraw()\n\n def sig_update(self, sender, flow=None, updated=None):\n self.redraw()\n\n def keypress(self, *args, **kwargs):\n return self.ab.keypress(*args, **kwargs)\n\n def get_status(self):\n r = []\n\n sreplay = self.master.addons.get(\"serverplayback\")\n creplay = self.master.addons.get(\"clientplayback\")\n\n if len(self.master.options.setheaders):\n r.append(\"[\")\n r.append((\"heading_key\", \"H\"))\n r.append(\"eaders]\")\n if len(self.master.options.replacements):\n r.append(\"[%d replacements]\" % len(self.master.options.replacements))\n if creplay.count():\n r.append(\"[\")\n r.append((\"heading_key\", \"cplayback\"))\n r.append(\":%s]\" % creplay.count())\n if sreplay.count():\n r.append(\"[\")\n r.append((\"heading_key\", \"splayback\"))\n r.append(\":%s]\" % sreplay.count())\n if self.master.options.ignore_hosts:\n r.append(\"[\")\n r.append((\"heading_key\", \"I\"))\n r.append(\"gnore:%d]\" % len(self.master.options.ignore_hosts))\n if self.master.options.tcp_hosts:\n r.append(\"[\")\n r.append((\"heading_key\", \"T\"))\n r.append(\"CP:%d]\" % len(self.master.options.tcp_hosts))\n if self.master.options.intercept:\n r.append(\"[\")\n if not self.master.options.intercept_active:\n r.append(\"X\")\n r.append((\"heading_key\", \"i\"))\n r.append(\":%s]\" % self.master.options.intercept)\n if self.master.options.view_filter:\n r.append(\"[\")\n r.append((\"heading_key\", \"f\"))\n r.append(\":%s]\" % self.master.options.view_filter)\n if self.master.options.stickycookie:\n r.append(\"[\")\n r.append((\"heading_key\", \"t\"))\n r.append(\":%s]\" % self.master.options.stickycookie)\n if self.master.options.stickyauth:\n r.append(\"[\")\n r.append((\"heading_key\", \"u\"))\n r.append(\":%s]\" % self.master.options.stickyauth)\n if self.master.options.console_default_contentview != 'auto':\n r.append(\"[contentview:%s]\" % (self.master.options.console_default_contentview))\n if self.master.options.has_changed(\"view_order\"):\n r.append(\"[\")\n r.append((\"heading_key\", \"o\"))\n r.append(\":%s]\" % 
self.master.options.view_order)\n\n opts = []\n if self.master.options.anticache:\n opts.append(\"anticache\")\n if self.master.options.anticomp:\n opts.append(\"anticomp\")\n if self.master.options.showhost:\n opts.append(\"showhost\")\n if not self.master.options.server_replay_refresh:\n opts.append(\"norefresh\")\n if self.master.options.server_replay_kill_extra:\n opts.append(\"killextra\")\n if not self.master.options.upstream_cert:\n opts.append(\"no-upstream-cert\")\n if self.master.options.console_focus_follow:\n opts.append(\"following\")\n if self.master.options.stream_large_bodies:\n opts.append(self.master.options.stream_large_bodies)\n\n if opts:\n r.append(\"[%s]\" % (\":\".join(opts)))\n\n if self.master.options.mode != \"regular\":\n r.append(\"[%s]\" % self.master.options.mode)\n if self.master.options.scripts:\n r.append(\"[scripts:%s]\" % len(self.master.options.scripts))\n\n if self.master.options.save_stream_file:\n r.append(\"[W:%s]\" % self.master.options.save_stream_file)\n\n return r\n\n def redraw(self):\n fc = len(self.master.view)\n if self.master.view.focus.flow is None:\n offset = 0\n else:\n offset = self.master.view.focus.index + 1\n\n if self.master.options.view_order_reversed:\n arrow = common.SYMBOL_UP\n else:\n arrow = common.SYMBOL_DOWN\n\n marked = \"\"\n if self.master.view.show_marked:\n marked = \"M\"\n\n t = [\n ('heading', (\"%s %s [%s/%s]\" % (arrow, marked, offset, fc)).ljust(11)),\n ]\n\n if self.master.options.server:\n host = self.master.options.listen_host\n if host == \"0.0.0.0\" or host == \"\":\n host = \"*\"\n boundaddr = \"[%s:%s]\" % (host, self.master.options.listen_port)\n else:\n boundaddr = \"\"\n t.extend(self.get_status())\n status = urwid.AttrWrap(urwid.Columns([\n urwid.Text(t),\n urwid.Text(boundaddr, align=\"right\"),\n ]), \"heading\")\n self.ib._w = status\n\n def selectable(self):\n return True\n", "path": "mitmproxy/tools/console/statusbar.py"}]}
| 3,662 | 148 |
gh_patches_debug_37517
|
rasdani/github-patches
|
git_diff
|
Flexget__Flexget-1138
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Some words confuse movie name parsing.
Entry title is "dan in real life 2007", but the parser stops at "real":
```
2016-05-01 09:32 DEBUG    imdb_lookup       library_movies_cleanup_test   lookup for dan in real life 2007
2016-05-01 09:32 VERBOSE  imdb_lookup       library_movies_cleanup_test   Searching from imdb `dan in real life 2007`
2016-05-01 09:32 DEBUG    parser_internal   library_movies_cleanup_test   Parsing movie: `dan in real life 2007` kwargs: {}
2016-05-01 09:32 DEBUG    movieparser       library_movies_cleanup_test   parts: [u'dan', u'in', u'real', u'life', u'2007'], cut is: real
2016-05-01 09:32 DEBUG    movieparser       library_movies_cleanup_test   after parts check, cut data would be: `dan in` abs_cut: 6
2016-05-01 09:32 DEBUG    movieparser       library_movies_cleanup_test   data cut to `dan in` - this will be the name
2016-05-01 09:32 DEBUG    parser_internal   library_movies_cleanup_test   Parsing result: <MovieParser(name=dan in,year=2007,quality=unknown)> (in 0.92 ms)
2016-05-01 09:32 DEBUG    utils.imdb        library_movies_cleanup_test   smart_match name=dan in year=2007
2016-05-01 09:32 DEBUG    utils.imdb        library_movies_cleanup_test   Searching: dan in
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `flexget/utils/titles/movie.py`
Content:
```
1 from __future__ import unicode_literals, division, absolute_import
2 from builtins import * # pylint: disable=unused-import, redefined-builtin
3
4 import logging
5 import re
6
7 from flexget.utils.titles.parser import TitleParser
8 from flexget.utils import qualities
9 from flexget.utils.tools import str_to_int
10
11 log = logging.getLogger('movieparser')
12
13
14 def diff_pos(string1, string2):
15 """Returns first position where string1 and string2 differ."""
16 for (count, c) in enumerate(string1):
17 if len(string2) <= count:
18 return count
19 if string2[count] != c:
20 return count
21
22
23 class MovieParser(TitleParser):
24
25 def __init__(self):
26 self.data = None
27 self.reset()
28 TitleParser.__init__(self)
29
30 @property
31 def fields(self):
32 """
33 Return a dict of all parser fields
34 """
35 return {
36 'movie_parser': self,
37 'movie_name': self.name,
38 'movie_year': self.year,
39 'proper': self.proper,
40 'proper_count': self.proper_count
41 }
42
43 @property
44 def valid(self):
45 return True
46
47 @property
48 def proper(self):
49 return self.proper_count > 0
50
51 @property
52 def is_series(self):
53 return False
54
55 @property
56 def is_movie(self):
57 return True
58
59 def reset(self):
60 # parsing results
61 self.name = None
62 self.year = None
63 self.quality = qualities.Quality()
64 self.proper_count = 0
65
66 def __str__(self):
67 return "<MovieParser(name=%s,year=%s,quality=%s)>" % (self.name, self.year, self.quality)
68
69 def parse(self, data=None):
70 """Parse movie name. Populates name, year, quality and proper_count attributes"""
71
72 # Reset before parsing, so the parser can be reused.
73 self.reset()
74
75 if data is None:
76 data = self.data
77
78 # Move anything in leading brackets to the end
79 data = re.sub(r'^\[(.*?)\](.*)', r'\2 \1', data)
80
81 for char in '[]()_,.':
82 data = data.replace(char, ' ')
83
84 # if there are no spaces
85 if data.find(' ') == -1:
86 data = data.replace('-', ' ')
87
88 # remove unwanted words (imax, ..)
89 self.remove_words(data, self.remove)
90
91 data = self.strip_spaces(data)
92
93 # split to parts
94 parts = data.split(' ')
95 cut_part = 256
96 all_caps = True
97 for part_pos, part in enumerate(parts):
98 cut = False
99 # Don't let the first word be cutoff word
100 if part_pos < 1:
101 continue
102 # check for year
103 num = str_to_int(part)
104 if num is not None:
105 if 1930 < num < 2050:
106 self.year = num
107 cut = True
108 # Don't consider all caps words cut words if the whole title has been all caps
109 if not part.isupper():
110 all_caps = False
111 # if length > 3 and whole word in uppers, consider as cut word (most likely a group name)
112 if len(part) > 3 and part.isupper() and part.isalpha() and not all_caps:
113 cut = True
114 # check for cutoff words
115 if part.lower() in self.cutoffs:
116 cut = True
117 # check for propers
118 if part.lower() in self.propers:
119 self.proper_count += 1
120 cut = True
121 # update cut position
122 if cut and parts.index(part) < cut_part:
123 cut_part = part_pos
124
125 if cut_part != 256:
126 log.debug('parts: %s, cut is: %s', parts, parts[cut_part])
127
128 # calculate cut positon from cut_part
129 abs_cut = len(' '.join(parts[:cut_part]))
130
131 log.debug('after parts check, cut data would be: `%s` abs_cut: %i', data[:abs_cut], abs_cut)
132
133 # parse quality
134 quality = qualities.Quality(data)
135 if quality:
136 self.quality = quality
137 # remaining string is same as data but quality information removed
138 # find out position where there is first difference, this is earliest
139 # quality bit, anything after that has no relevance to the movie name
140 dp = diff_pos(data, quality.clean_text)
141 if dp is not None:
142 log.debug('quality start: %s', dp)
143 if dp < abs_cut:
144 log.debug('quality cut is even shorter')
145 abs_cut = dp
146
147 # make cut
148 data = data[:abs_cut].strip()
149 log.debug('data cut to `%s` - this will be the name', data)
150
151 # save results
152 self.name = data
153
```
Path: `flexget/utils/titles/parser.py`
Content:
```
1 from __future__ import unicode_literals, division, absolute_import
2 from builtins import * # pylint: disable=unused-import, redefined-builtin
3
4 import re
5
6
7 class TitleParser(object):
8 propers = ['proper', 'repack', 'rerip', 'real', 'final']
9
10 specials = ['special', 'bonus', 'extra', 'omake', 'ova']
11
12 editions = ['dc', 'extended', 'uncut', 'remastered', 'unrated', 'theatrical', 'chrono', 'se']
13
14 # TODO: All of the quality related keywords can probably be removed from here, as the quality module handles them
15 codecs = ['x264', 'x.264', 'h264', 'h.264', 'XViD']
16
17 # lowercase required
18 cutoffs = ['limited', 'xvid', 'h264', 'x264', 'h.264', 'x.264', 'screener', 'unrated', '3d', 'extended',
19 'directors', 'director\'s', 'multisubs', 'dubbed', 'subbed', 'multi'] + propers + specials + editions
20
21 remove = ['imax']
22
23 sounds = ['AC3', 'DD5.1', 'DTS']
24
25 @staticmethod
26 def re_not_in_word(regexp):
27 return r'(?<![^\W_])' + regexp + r'(?![^\W_])'
28
29 @staticmethod
30 def strip_spaces(text):
31 """Removes all unnecessary duplicate spaces from a text"""
32 return ' '.join(text.split())
33
34 @staticmethod
35 def remove_words(text, words, not_in_word=False):
36 """Clean all given :words: from :text: case insensitively"""
37 for word in words:
38 text = TitleParser.ireplace(text, word, '', not_in_word=not_in_word)
39 # remove duplicate spaces
40 text = ' '.join(text.split())
41 return text
42
43 @staticmethod
44 def ireplace(data, old, new, count=0, not_in_word=False):
45 """Case insensitive string replace"""
46 old = re.escape(old)
47 if not_in_word:
48 old = TitleParser.re_not_in_word(old)
49 pattern = re.compile(old, re.I)
50 return re.sub(pattern, new, data, count)
51
```
--- END FILES ---
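The root cause can be confirmed from the two files above with a small check (a sketch; it assumes flexget is importable from this checkout): `'real'` is listed in `TitleParser.propers`, and `cutoffs` is built by concatenating `propers`.

```python
from flexget.utils.titles.parser import TitleParser

# 'real' is treated both as a "proper" tag and as a generic cutoff word,
# so everything from 'real' onwards in "dan in real life 2007" is discarded
# and only 'dan in' survives as the movie name.
print('real' in TitleParser.propers)   # True
print('real' in TitleParser.cutoffs)   # True (cutoffs = [...] + propers + specials + editions)
```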
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/flexget/utils/titles/movie.py b/flexget/utils/titles/movie.py
--- a/flexget/utils/titles/movie.py
+++ b/flexget/utils/titles/movie.py
@@ -7,6 +7,7 @@
from flexget.utils.titles.parser import TitleParser
from flexget.utils import qualities
from flexget.utils.tools import str_to_int
+from datetime import datetime
log = logging.getLogger('movieparser')
@@ -60,6 +61,7 @@
# parsing results
self.name = None
self.year = None
+ self.year_pos = None
self.quality = qualities.Quality()
self.proper_count = 0
@@ -102,8 +104,13 @@
# check for year
num = str_to_int(part)
if num is not None:
- if 1930 < num < 2050:
+ if 1930 < num <= datetime.now().year:
+ if self.year_pos == cut_part:
+ # Looks like a year, but we already set the cutpoint to a year, let's move it forward
+ cut_part = part_pos
+
self.year = num
+ self.year_pos = part_pos
cut = True
# Don't consider all caps words cut words if the whole title has been all caps
if not part.isupper():
@@ -116,8 +123,10 @@
cut = True
# check for propers
if part.lower() in self.propers:
- self.proper_count += 1
- cut = True
+ # 'real' and 'final' are too common in movie titles, only cut if it comes after year
+ if part.lower() not in ['real', 'final'] or self.year:
+ self.proper_count += 1
+ cut = True
# update cut position
if cut and parts.index(part) < cut_part:
cut_part = part_pos
diff --git a/flexget/utils/titles/parser.py b/flexget/utils/titles/parser.py
--- a/flexget/utils/titles/parser.py
+++ b/flexget/utils/titles/parser.py
@@ -16,7 +16,7 @@
# lowercase required
cutoffs = ['limited', 'xvid', 'h264', 'x264', 'h.264', 'x.264', 'screener', 'unrated', '3d', 'extended',
- 'directors', 'director\'s', 'multisubs', 'dubbed', 'subbed', 'multi'] + propers + specials + editions
+ 'directors', 'director\'s', 'multisubs', 'dubbed', 'subbed', 'multi'] + specials + editions
remove = ['imax']
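With this patch applied, the reported title would be expected to parse cleanly; a minimal check (a sketch, assuming the patched module is on the path):

```python
from flexget.utils.titles.movie import MovieParser

parser = MovieParser()
parser.parse('dan in real life 2007')
# 'real' is no longer a cutoff before the year has been seen, and propers were
# removed from the generic cutoff list, so only the trailing year is stripped.
print(parser.name)  # expected: 'dan in real life'
print(parser.year)  # expected: 2007
```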
|
{"golden_diff": "diff --git a/flexget/utils/titles/movie.py b/flexget/utils/titles/movie.py\n--- a/flexget/utils/titles/movie.py\n+++ b/flexget/utils/titles/movie.py\n@@ -7,6 +7,7 @@\n from flexget.utils.titles.parser import TitleParser\n from flexget.utils import qualities\n from flexget.utils.tools import str_to_int\n+from datetime import datetime\n \n log = logging.getLogger('movieparser')\n \n@@ -60,6 +61,7 @@\n # parsing results\n self.name = None\n self.year = None\n+ self.year_pos = None\n self.quality = qualities.Quality()\n self.proper_count = 0\n \n@@ -102,8 +104,13 @@\n # check for year\n num = str_to_int(part)\n if num is not None:\n- if 1930 < num < 2050:\n+ if 1930 < num <= datetime.now().year:\n+ if self.year_pos == cut_part:\n+ # Looks like a year, but we already set the cutpoint to a year, let's move it forward\n+ cut_part = part_pos\n+ \n self.year = num\n+ self.year_pos = part_pos\n cut = True\n # Don't consider all caps words cut words if the whole title has been all caps\n if not part.isupper():\n@@ -116,8 +123,10 @@\n cut = True\n # check for propers\n if part.lower() in self.propers:\n- self.proper_count += 1\n- cut = True\n+ # 'real' and 'final' are too common in movie titles, only cut if it comes after year\n+ if part.lower() not in ['real', 'final'] or self.year:\n+ self.proper_count += 1\n+ cut = True\n # update cut position\n if cut and parts.index(part) < cut_part:\n cut_part = part_pos\ndiff --git a/flexget/utils/titles/parser.py b/flexget/utils/titles/parser.py\n--- a/flexget/utils/titles/parser.py\n+++ b/flexget/utils/titles/parser.py\n@@ -16,7 +16,7 @@\n \n # lowercase required\n cutoffs = ['limited', 'xvid', 'h264', 'x264', 'h.264', 'x.264', 'screener', 'unrated', '3d', 'extended',\n- 'directors', 'director\\'s', 'multisubs', 'dubbed', 'subbed', 'multi'] + propers + specials + editions\n+ 'directors', 'director\\'s', 'multisubs', 'dubbed', 'subbed', 'multi'] + specials + editions\n \n remove = ['imax']\n", "issue": "Some words confuse movie name parsing.\nEntry title is \"dan in real life 2007\", parser stops at \"real\"\n\n```\n2016-05-01 09:32 DEBUG imdb_lookup library_movies_cleanup_test lookup for d\nan in real life 2007\n2016-05-01 09:32 VERBOSE imdb_lookup library_movies_cleanup_test Searching fr\nom imdb `dan in real life 2007`\n2016-05-01 09:32 DEBUG parser_internal library_movies_cleanup_test Parsing mo\nvie: `dan in real life 2007` kwargs: {}\n2016-05-01 09:32 DEBUG movieparser library_movies_cleanup_test parts: [u'da\nn', u'in', u'real', u'life', u'2007'], cut is: real\n2016-05-01 09:32 DEBUG movieparser library_movies_cleanup_test after parts \ncheck, cut data would be: `dan in` abs_cut: 6\n2016-05-01 09:32 DEBUG movieparser library_movies_cleanup_test data cut to \n`dan in` - this will be the name\n2016-05-01 09:32 DEBUG parser_internal library_movies_cleanup_test Parsing re\nsult: <MovieParser(name=dan in,year=2007,quality=unknown)> (in 0.92 ms)\n2016-05-01 09:32 DEBUG utils.imdb library_movies_cleanup_test smart_match \nname=dan in year=2007\n2016-05-01 09:32 DEBUG utils.imdb library_movies_cleanup_test Searching: d\nan in\n```\n\n", "before_files": [{"content": "from __future__ import unicode_literals, division, absolute_import\nfrom builtins import * # pylint: disable=unused-import, redefined-builtin\n\nimport logging\nimport re\n\nfrom flexget.utils.titles.parser import TitleParser\nfrom flexget.utils import qualities\nfrom flexget.utils.tools import str_to_int\n\nlog = logging.getLogger('movieparser')\n\n\ndef diff_pos(string1, 
string2):\n \"\"\"Returns first position where string1 and string2 differ.\"\"\"\n for (count, c) in enumerate(string1):\n if len(string2) <= count:\n return count\n if string2[count] != c:\n return count\n\n\nclass MovieParser(TitleParser):\n\n def __init__(self):\n self.data = None\n self.reset()\n TitleParser.__init__(self)\n\n @property\n def fields(self):\n \"\"\"\n Return a dict of all parser fields\n \"\"\"\n return {\n 'movie_parser': self,\n 'movie_name': self.name,\n 'movie_year': self.year,\n 'proper': self.proper,\n 'proper_count': self.proper_count\n }\n\n @property\n def valid(self):\n return True\n\n @property\n def proper(self):\n return self.proper_count > 0\n\n @property\n def is_series(self):\n return False\n\n @property\n def is_movie(self):\n return True\n\n def reset(self):\n # parsing results\n self.name = None\n self.year = None\n self.quality = qualities.Quality()\n self.proper_count = 0\n\n def __str__(self):\n return \"<MovieParser(name=%s,year=%s,quality=%s)>\" % (self.name, self.year, self.quality)\n\n def parse(self, data=None):\n \"\"\"Parse movie name. Populates name, year, quality and proper_count attributes\"\"\"\n\n # Reset before parsing, so the parser can be reused.\n self.reset()\n\n if data is None:\n data = self.data\n\n # Move anything in leading brackets to the end\n data = re.sub(r'^\\[(.*?)\\](.*)', r'\\2 \\1', data)\n\n for char in '[]()_,.':\n data = data.replace(char, ' ')\n\n # if there are no spaces\n if data.find(' ') == -1:\n data = data.replace('-', ' ')\n\n # remove unwanted words (imax, ..)\n self.remove_words(data, self.remove)\n\n data = self.strip_spaces(data)\n\n # split to parts\n parts = data.split(' ')\n cut_part = 256\n all_caps = True\n for part_pos, part in enumerate(parts):\n cut = False\n # Don't let the first word be cutoff word\n if part_pos < 1:\n continue\n # check for year\n num = str_to_int(part)\n if num is not None:\n if 1930 < num < 2050:\n self.year = num\n cut = True\n # Don't consider all caps words cut words if the whole title has been all caps\n if not part.isupper():\n all_caps = False\n # if length > 3 and whole word in uppers, consider as cut word (most likely a group name)\n if len(part) > 3 and part.isupper() and part.isalpha() and not all_caps:\n cut = True\n # check for cutoff words\n if part.lower() in self.cutoffs:\n cut = True\n # check for propers\n if part.lower() in self.propers:\n self.proper_count += 1\n cut = True\n # update cut position\n if cut and parts.index(part) < cut_part:\n cut_part = part_pos\n\n if cut_part != 256:\n log.debug('parts: %s, cut is: %s', parts, parts[cut_part])\n\n # calculate cut positon from cut_part\n abs_cut = len(' '.join(parts[:cut_part]))\n\n log.debug('after parts check, cut data would be: `%s` abs_cut: %i', data[:abs_cut], abs_cut)\n\n # parse quality\n quality = qualities.Quality(data)\n if quality:\n self.quality = quality\n # remaining string is same as data but quality information removed\n # find out position where there is first difference, this is earliest\n # quality bit, anything after that has no relevance to the movie name\n dp = diff_pos(data, quality.clean_text)\n if dp is not None:\n log.debug('quality start: %s', dp)\n if dp < abs_cut:\n log.debug('quality cut is even shorter')\n abs_cut = dp\n\n # make cut\n data = data[:abs_cut].strip()\n log.debug('data cut to `%s` - this will be the name', data)\n\n # save results\n self.name = data\n", "path": "flexget/utils/titles/movie.py"}, {"content": "from __future__ import unicode_literals, division, 
absolute_import\nfrom builtins import * # pylint: disable=unused-import, redefined-builtin\n\nimport re\n\n\nclass TitleParser(object):\n propers = ['proper', 'repack', 'rerip', 'real', 'final']\n\n specials = ['special', 'bonus', 'extra', 'omake', 'ova']\n\n editions = ['dc', 'extended', 'uncut', 'remastered', 'unrated', 'theatrical', 'chrono', 'se']\n\n # TODO: All of the quality related keywords can probably be removed from here, as the quality module handles them\n codecs = ['x264', 'x.264', 'h264', 'h.264', 'XViD']\n\n # lowercase required\n cutoffs = ['limited', 'xvid', 'h264', 'x264', 'h.264', 'x.264', 'screener', 'unrated', '3d', 'extended',\n 'directors', 'director\\'s', 'multisubs', 'dubbed', 'subbed', 'multi'] + propers + specials + editions\n\n remove = ['imax']\n\n sounds = ['AC3', 'DD5.1', 'DTS']\n\n @staticmethod\n def re_not_in_word(regexp):\n return r'(?<![^\\W_])' + regexp + r'(?![^\\W_])'\n\n @staticmethod\n def strip_spaces(text):\n \"\"\"Removes all unnecessary duplicate spaces from a text\"\"\"\n return ' '.join(text.split())\n\n @staticmethod\n def remove_words(text, words, not_in_word=False):\n \"\"\"Clean all given :words: from :text: case insensitively\"\"\"\n for word in words:\n text = TitleParser.ireplace(text, word, '', not_in_word=not_in_word)\n # remove duplicate spaces\n text = ' '.join(text.split())\n return text\n\n @staticmethod\n def ireplace(data, old, new, count=0, not_in_word=False):\n \"\"\"Case insensitive string replace\"\"\"\n old = re.escape(old)\n if not_in_word:\n old = TitleParser.re_not_in_word(old)\n pattern = re.compile(old, re.I)\n return re.sub(pattern, new, data, count)\n", "path": "flexget/utils/titles/parser.py"}], "after_files": [{"content": "from __future__ import unicode_literals, division, absolute_import\nfrom builtins import * # pylint: disable=unused-import, redefined-builtin\n\nimport logging\nimport re\n\nfrom flexget.utils.titles.parser import TitleParser\nfrom flexget.utils import qualities\nfrom flexget.utils.tools import str_to_int\nfrom datetime import datetime\n\nlog = logging.getLogger('movieparser')\n\n\ndef diff_pos(string1, string2):\n \"\"\"Returns first position where string1 and string2 differ.\"\"\"\n for (count, c) in enumerate(string1):\n if len(string2) <= count:\n return count\n if string2[count] != c:\n return count\n\n\nclass MovieParser(TitleParser):\n\n def __init__(self):\n self.data = None\n self.reset()\n TitleParser.__init__(self)\n\n @property\n def fields(self):\n \"\"\"\n Return a dict of all parser fields\n \"\"\"\n return {\n 'movie_parser': self,\n 'movie_name': self.name,\n 'movie_year': self.year,\n 'proper': self.proper,\n 'proper_count': self.proper_count\n }\n\n @property\n def valid(self):\n return True\n\n @property\n def proper(self):\n return self.proper_count > 0\n\n @property\n def is_series(self):\n return False\n\n @property\n def is_movie(self):\n return True\n\n def reset(self):\n # parsing results\n self.name = None\n self.year = None\n self.year_pos = None\n self.quality = qualities.Quality()\n self.proper_count = 0\n\n def __str__(self):\n return \"<MovieParser(name=%s,year=%s,quality=%s)>\" % (self.name, self.year, self.quality)\n\n def parse(self, data=None):\n \"\"\"Parse movie name. 
Populates name, year, quality and proper_count attributes\"\"\"\n\n # Reset before parsing, so the parser can be reused.\n self.reset()\n\n if data is None:\n data = self.data\n\n # Move anything in leading brackets to the end\n data = re.sub(r'^\\[(.*?)\\](.*)', r'\\2 \\1', data)\n\n for char in '[]()_,.':\n data = data.replace(char, ' ')\n\n # if there are no spaces\n if data.find(' ') == -1:\n data = data.replace('-', ' ')\n\n # remove unwanted words (imax, ..)\n self.remove_words(data, self.remove)\n\n data = self.strip_spaces(data)\n\n # split to parts\n parts = data.split(' ')\n cut_part = 256\n all_caps = True\n for part_pos, part in enumerate(parts):\n cut = False\n # Don't let the first word be cutoff word\n if part_pos < 1:\n continue\n # check for year\n num = str_to_int(part)\n if num is not None:\n if 1930 < num <= datetime.now().year:\n if self.year_pos == cut_part:\n # Looks like a year, but we already set the cutpoint to a year, let's move it forward\n cut_part = part_pos\n \n self.year = num\n self.year_pos = part_pos\n cut = True\n # Don't consider all caps words cut words if the whole title has been all caps\n if not part.isupper():\n all_caps = False\n # if length > 3 and whole word in uppers, consider as cut word (most likely a group name)\n if len(part) > 3 and part.isupper() and part.isalpha() and not all_caps:\n cut = True\n # check for cutoff words\n if part.lower() in self.cutoffs:\n cut = True\n # check for propers\n if part.lower() in self.propers:\n # 'real' and 'final' are too common in movie titles, only cut if it comes after year\n if part.lower() not in ['real', 'final'] or self.year:\n self.proper_count += 1\n cut = True\n # update cut position\n if cut and parts.index(part) < cut_part:\n cut_part = part_pos\n\n if cut_part != 256:\n log.debug('parts: %s, cut is: %s', parts, parts[cut_part])\n\n # calculate cut positon from cut_part\n abs_cut = len(' '.join(parts[:cut_part]))\n\n log.debug('after parts check, cut data would be: `%s` abs_cut: %i', data[:abs_cut], abs_cut)\n\n # parse quality\n quality = qualities.Quality(data)\n if quality:\n self.quality = quality\n # remaining string is same as data but quality information removed\n # find out position where there is first difference, this is earliest\n # quality bit, anything after that has no relevance to the movie name\n dp = diff_pos(data, quality.clean_text)\n if dp is not None:\n log.debug('quality start: %s', dp)\n if dp < abs_cut:\n log.debug('quality cut is even shorter')\n abs_cut = dp\n\n # make cut\n data = data[:abs_cut].strip()\n log.debug('data cut to `%s` - this will be the name', data)\n\n # save results\n self.name = data\n", "path": "flexget/utils/titles/movie.py"}, {"content": "from __future__ import unicode_literals, division, absolute_import\nfrom builtins import * # pylint: disable=unused-import, redefined-builtin\n\nimport re\n\n\nclass TitleParser(object):\n propers = ['proper', 'repack', 'rerip', 'real', 'final']\n\n specials = ['special', 'bonus', 'extra', 'omake', 'ova']\n\n editions = ['dc', 'extended', 'uncut', 'remastered', 'unrated', 'theatrical', 'chrono', 'se']\n\n # TODO: All of the quality related keywords can probably be removed from here, as the quality module handles them\n codecs = ['x264', 'x.264', 'h264', 'h.264', 'XViD']\n\n # lowercase required\n cutoffs = ['limited', 'xvid', 'h264', 'x264', 'h.264', 'x.264', 'screener', 'unrated', '3d', 'extended',\n 'directors', 'director\\'s', 'multisubs', 'dubbed', 'subbed', 'multi'] + specials + editions\n\n remove = 
['imax']\n\n sounds = ['AC3', 'DD5.1', 'DTS']\n\n @staticmethod\n def re_not_in_word(regexp):\n return r'(?<![^\\W_])' + regexp + r'(?![^\\W_])'\n\n @staticmethod\n def strip_spaces(text):\n \"\"\"Removes all unnecessary duplicate spaces from a text\"\"\"\n return ' '.join(text.split())\n\n @staticmethod\n def remove_words(text, words, not_in_word=False):\n \"\"\"Clean all given :words: from :text: case insensitively\"\"\"\n for word in words:\n text = TitleParser.ireplace(text, word, '', not_in_word=not_in_word)\n # remove duplicate spaces\n text = ' '.join(text.split())\n return text\n\n @staticmethod\n def ireplace(data, old, new, count=0, not_in_word=False):\n \"\"\"Case insensitive string replace\"\"\"\n old = re.escape(old)\n if not_in_word:\n old = TitleParser.re_not_in_word(old)\n pattern = re.compile(old, re.I)\n return re.sub(pattern, new, data, count)\n", "path": "flexget/utils/titles/parser.py"}]}
| 2,759 | 638 |
gh_patches_debug_15929
|
rasdani/github-patches
|
git_diff
|
microsoft__DeepSpeed-4405
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[REQUEST] Add timeout as entry-point option or environment variable
**Is your feature request related to a problem? Please describe.**
I am using Hugging Face `transformers` for my deep learning, and it has a nice option to restrict specific processing to the main process only. This is useful if a function caches the result: the main process does the processing while the other processes wait, and when main is done, the other processes can just load from the cache. That's pretty neat.
The problem arises when these are long-running processes. In a distributed environment (torch or deepspeed, for instance), the communication between processes has a default timeout. If no communication has occurred for `timeout` seconds, the whole program will exit.
**Describe the solution you'd like**
Both [`torch`](https://pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group) and [`deepspeed`](https://pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group) provide options in their Python init methods to set the timeout parameter to a higher value than the default 30 minutes, but this option is not available from the command line or through an environment variable, which is what I would like.
**Describe alternatives you've considered**
I could make a custom fork but I think that this is something that more people might need as soon as they scale to larger projects.
**Additional context**
I can work on this, depending on what you suggest as a solution (a CLI argument for the `deepspeed` command or an environment variable).
--- END ISSUE ---
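For context, the Python-level knob the issue refers to looks roughly like this today (a sketch of the existing in-code workaround, not the requested CLI/env-var feature; the backend and duration are illustrative, and the usual MASTER_ADDR/MASTER_PORT/RANK/WORLD_SIZE environment is assumed to be provided by the launcher):

```python
from datetime import timedelta
import torch.distributed as dist

# Raising the process-group timeout currently requires editing the init call itself.
dist.init_process_group(backend="nccl", timeout=timedelta(hours=2))
```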
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `deepspeed/constants.py`
Content:
```
1 # Copyright (c) Microsoft Corporation.
2 # SPDX-License-Identifier: Apache-2.0
3
4 # DeepSpeed Team
5
6 from datetime import timedelta
7
8 #############################################
9 # Torch distributed constants
10 #############################################
11 TORCH_DISTRIBUTED_DEFAULT_PORT = 29500
12
13 # Default process group wide timeout, if applicable.
14 # This only applies to the gloo and nccl backends
15 # (only if NCCL_BLOCKING_WAIT or NCCL_ASYNC_ERROR_HANDLING is set to 1).
16 # To make an attempt at backwards compatibility with THD, we use an
17 # extraordinarily high default timeout, given that THD did not have timeouts.
18 default_pg_timeout = timedelta(minutes=30)
19 INFERENCE_GENERIC_MODE = 'generic'
20 INFERENCE_SPECIALIZED_MODE = 'specialized'
21
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/deepspeed/constants.py b/deepspeed/constants.py
--- a/deepspeed/constants.py
+++ b/deepspeed/constants.py
@@ -3,6 +3,7 @@
# DeepSpeed Team
+import os
from datetime import timedelta
#############################################
@@ -15,6 +16,6 @@
# (only if NCCL_BLOCKING_WAIT or NCCL_ASYNC_ERROR_HANDLING is set to 1).
# To make an attempt at backwards compatibility with THD, we use an
# extraordinarily high default timeout, given that THD did not have timeouts.
-default_pg_timeout = timedelta(minutes=30)
+default_pg_timeout = timedelta(minutes=int(os.getenv("DEEPSPEED_TIMEOUT", default=30)))
INFERENCE_GENERIC_MODE = 'generic'
INFERENCE_SPECIALIZED_MODE = 'specialized'
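A minimal sketch of how the patched constant could be exercised (the value 120 is just an example; the variable has to be set before `deepspeed` is first imported, since it is read at module import time):

```python
import os

os.environ["DEEPSPEED_TIMEOUT"] = "120"  # minutes

from deepspeed.constants import default_pg_timeout

print(default_pg_timeout)  # expected: 2:00:00 instead of the previous 0:30:00 default
```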
|
{"golden_diff": "diff --git a/deepspeed/constants.py b/deepspeed/constants.py\n--- a/deepspeed/constants.py\n+++ b/deepspeed/constants.py\n@@ -3,6 +3,7 @@\n \n # DeepSpeed Team\n \n+import os\n from datetime import timedelta\n \n #############################################\n@@ -15,6 +16,6 @@\n # (only if NCCL_BLOCKING_WAIT or NCCL_ASYNC_ERROR_HANDLING is set to 1).\n # To make an attempt at backwards compatibility with THD, we use an\n # extraordinarily high default timeout, given that THD did not have timeouts.\n-default_pg_timeout = timedelta(minutes=30)\n+default_pg_timeout = timedelta(minutes=int(os.getenv(\"DEEPSPEED_TIMEOUT\", default=30)))\n INFERENCE_GENERIC_MODE = 'generic'\n INFERENCE_SPECIALIZED_MODE = 'specialized'\n", "issue": "[REQUEST] Add timeout as entry-point option or environment variable\n**Is your feature request related to a problem? Please describe.**\r\nI am using Hugging Face `transformers` for my deep learning, and it has a nice option to restrict specific processing to the main process only. This is useful if a function caches the result: the main process does the processing while the other processes wait, and when main is done, the other processes can just load from the cache. That's pretty neat.\r\n\r\nThe problem arises when these are long running processes. In distributed environment (torch or deepspeed, for instance), the communication between processes has a default timeout. If no communication has occurred for `timeout` seconds, the whole program will exit. \r\n\r\n**Describe the solution you'd like**\r\n\r\nBoth [`torch`](https://pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group) and [`deepspeed`](https://pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group) provide options in the Python init methods to set the timeout parameter to a higher value than the default 30 minutes, but this option is not available from the command-line or through an environment, which is what I would like.\r\n\r\n**Describe alternatives you've considered**\r\nI could make a custom fork but I think that this is something that more people might need as soon as they scale to larger projects.\r\n\r\n**Additional context**\r\n\r\nI can work on this, depending on what you suggest as a solution (CLI argument for the `deepspeed` command or as environment variable).\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation.\n# SPDX-License-Identifier: Apache-2.0\n\n# DeepSpeed Team\n\nfrom datetime import timedelta\n\n#############################################\n# Torch distributed constants\n#############################################\nTORCH_DISTRIBUTED_DEFAULT_PORT = 29500\n\n# Default process group wide timeout, if applicable.\n# This only applies to the gloo and nccl backends\n# (only if NCCL_BLOCKING_WAIT or NCCL_ASYNC_ERROR_HANDLING is set to 1).\n# To make an attempt at backwards compatibility with THD, we use an\n# extraordinarily high default timeout, given that THD did not have timeouts.\ndefault_pg_timeout = timedelta(minutes=30)\nINFERENCE_GENERIC_MODE = 'generic'\nINFERENCE_SPECIALIZED_MODE = 'specialized'\n", "path": "deepspeed/constants.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation.\n# SPDX-License-Identifier: Apache-2.0\n\n# DeepSpeed Team\n\nimport os\nfrom datetime import timedelta\n\n#############################################\n# Torch distributed constants\n#############################################\nTORCH_DISTRIBUTED_DEFAULT_PORT = 29500\n\n# Default process 
group wide timeout, if applicable.\n# This only applies to the gloo and nccl backends\n# (only if NCCL_BLOCKING_WAIT or NCCL_ASYNC_ERROR_HANDLING is set to 1).\n# To make an attempt at backwards compatibility with THD, we use an\n# extraordinarily high default timeout, given that THD did not have timeouts.\ndefault_pg_timeout = timedelta(minutes=int(os.getenv(\"DEEPSPEED_TIMEOUT\", default=30)))\nINFERENCE_GENERIC_MODE = 'generic'\nINFERENCE_SPECIALIZED_MODE = 'specialized'\n", "path": "deepspeed/constants.py"}]}
| 778 | 174 |
gh_patches_debug_6053
|
rasdani/github-patches
|
git_diff
|
networkx__networkx-3123
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Double import
I noticed that in `networkx/algorithms/__init__.py` the statement `from networkx.algorithms.triads import *` occurs twice. Is there any reason for this, or is this just a blunder?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `networkx/algorithms/__init__.py`
Content:
```
1 from networkx.algorithms.assortativity import *
2 from networkx.algorithms.boundary import *
3 from networkx.algorithms.bridges import *
4 from networkx.algorithms.chains import *
5 from networkx.algorithms.centrality import *
6 from networkx.algorithms.chordal import *
7 from networkx.algorithms.cluster import *
8 from networkx.algorithms.clique import *
9 from networkx.algorithms.communicability_alg import *
10 from networkx.algorithms.components import *
11 from networkx.algorithms.coloring import *
12 from networkx.algorithms.core import *
13 from networkx.algorithms.covering import *
14 from networkx.algorithms.cycles import *
15 from networkx.algorithms.cuts import *
16 from networkx.algorithms.dag import *
17 from networkx.algorithms.distance_measures import *
18 from networkx.algorithms.distance_regular import *
19 from networkx.algorithms.dominance import *
20 from networkx.algorithms.dominating import *
21 from networkx.algorithms.efficiency import *
22 from networkx.algorithms.euler import *
23 from networkx.algorithms.graphical import *
24 from networkx.algorithms.hierarchy import *
25 from networkx.algorithms.hybrid import *
26 from networkx.algorithms.link_analysis import *
27 from networkx.algorithms.link_prediction import *
28 from networkx.algorithms.lowest_common_ancestors import *
29 from networkx.algorithms.isolate import *
30 from networkx.algorithms.matching import *
31 from networkx.algorithms.minors import *
32 from networkx.algorithms.mis import *
33 from networkx.algorithms.operators import *
34 from networkx.algorithms.planarity import *
35 from networkx.algorithms.reciprocity import *
36 from networkx.algorithms.richclub import *
37 from networkx.algorithms.shortest_paths import *
38 from networkx.algorithms.similarity import *
39 from networkx.algorithms.simple_paths import *
40 from networkx.algorithms.smallworld import *
41 from networkx.algorithms.smetric import *
42 from networkx.algorithms.structuralholes import *
43 from networkx.algorithms.triads import *
44 from networkx.algorithms.sparsifiers import *
45 from networkx.algorithms.swap import *
46 from networkx.algorithms.traversal import *
47 from networkx.algorithms.triads import *
48 from networkx.algorithms.vitality import *
49 from networkx.algorithms.voronoi import *
50 from networkx.algorithms.wiener import *
51
52 # Make certain subpackages available to the user as direct imports from
53 # the `networkx` namespace.
54 import networkx.algorithms.assortativity
55 import networkx.algorithms.bipartite
56 import networkx.algorithms.node_classification
57 import networkx.algorithms.centrality
58 import networkx.algorithms.chordal
59 import networkx.algorithms.cluster
60 import networkx.algorithms.clique
61 import networkx.algorithms.components
62 import networkx.algorithms.connectivity
63 import networkx.algorithms.community
64 import networkx.algorithms.coloring
65 import networkx.algorithms.flow
66 import networkx.algorithms.isomorphism
67 import networkx.algorithms.link_analysis
68 import networkx.algorithms.lowest_common_ancestors
69 import networkx.algorithms.operators
70 import networkx.algorithms.shortest_paths
71 import networkx.algorithms.tournament
72 import networkx.algorithms.traversal
73 import networkx.algorithms.tree
74
75 # Make certain functions from some of the previous subpackages available
76 # to the user as direct imports from the `networkx` namespace.
77 from networkx.algorithms.bipartite import complete_bipartite_graph
78 from networkx.algorithms.bipartite import is_bipartite
79 from networkx.algorithms.bipartite import project
80 from networkx.algorithms.bipartite import projected_graph
81 from networkx.algorithms.connectivity import all_pairs_node_connectivity
82 from networkx.algorithms.connectivity import all_node_cuts
83 from networkx.algorithms.connectivity import average_node_connectivity
84 from networkx.algorithms.connectivity import edge_connectivity
85 from networkx.algorithms.connectivity import edge_disjoint_paths
86 from networkx.algorithms.connectivity import k_components
87 from networkx.algorithms.connectivity import k_edge_components
88 from networkx.algorithms.connectivity import k_edge_subgraphs
89 from networkx.algorithms.connectivity import k_edge_augmentation
90 from networkx.algorithms.connectivity import is_k_edge_connected
91 from networkx.algorithms.connectivity import minimum_edge_cut
92 from networkx.algorithms.connectivity import minimum_node_cut
93 from networkx.algorithms.connectivity import node_connectivity
94 from networkx.algorithms.connectivity import node_disjoint_paths
95 from networkx.algorithms.connectivity import stoer_wagner
96 from networkx.algorithms.flow import capacity_scaling
97 from networkx.algorithms.flow import cost_of_flow
98 from networkx.algorithms.flow import gomory_hu_tree
99 from networkx.algorithms.flow import max_flow_min_cost
100 from networkx.algorithms.flow import maximum_flow
101 from networkx.algorithms.flow import maximum_flow_value
102 from networkx.algorithms.flow import min_cost_flow
103 from networkx.algorithms.flow import min_cost_flow_cost
104 from networkx.algorithms.flow import minimum_cut
105 from networkx.algorithms.flow import minimum_cut_value
106 from networkx.algorithms.flow import network_simplex
107 from networkx.algorithms.isomorphism import could_be_isomorphic
108 from networkx.algorithms.isomorphism import fast_could_be_isomorphic
109 from networkx.algorithms.isomorphism import faster_could_be_isomorphic
110 from networkx.algorithms.isomorphism import is_isomorphic
111 from networkx.algorithms.tree.branchings import maximum_branching
112 from networkx.algorithms.tree.branchings import maximum_spanning_arborescence
113 from networkx.algorithms.tree.branchings import minimum_branching
114 from networkx.algorithms.tree.branchings import minimum_spanning_arborescence
115 from networkx.algorithms.tree.coding import *
116 from networkx.algorithms.tree.operations import *
117 from networkx.algorithms.tree.recognition import *
118 from networkx.algorithms.tree.mst import *
119
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/networkx/algorithms/__init__.py b/networkx/algorithms/__init__.py
--- a/networkx/algorithms/__init__.py
+++ b/networkx/algorithms/__init__.py
@@ -40,7 +40,6 @@
from networkx.algorithms.smallworld import *
from networkx.algorithms.smetric import *
from networkx.algorithms.structuralholes import *
-from networkx.algorithms.triads import *
from networkx.algorithms.sparsifiers import *
from networkx.algorithms.swap import *
from networkx.algorithms.traversal import *
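Only the first of the two identical wildcard imports is removed (the one further down, after the traversal import, is kept), so the triads API should remain re-exported; a quick sanity check (a sketch, assuming this branch of networkx is installed):

```python
import networkx as nx

# Still reachable through the top-level namespace after the cleanup.
assert hasattr(nx, "triadic_census")
```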
|
{"golden_diff": "diff --git a/networkx/algorithms/__init__.py b/networkx/algorithms/__init__.py\n--- a/networkx/algorithms/__init__.py\n+++ b/networkx/algorithms/__init__.py\n@@ -40,7 +40,6 @@\n from networkx.algorithms.smallworld import *\n from networkx.algorithms.smetric import *\n from networkx.algorithms.structuralholes import *\n-from networkx.algorithms.triads import *\n from networkx.algorithms.sparsifiers import *\n from networkx.algorithms.swap import *\n from networkx.algorithms.traversal import *\n", "issue": "Double import\nI noticed that in `networkx/algorithms/__init__.py`the statement `from networkx.algorithms.triads import *` occurs twice. Is there any reason for this or is this just a blunder?\n", "before_files": [{"content": "from networkx.algorithms.assortativity import *\nfrom networkx.algorithms.boundary import *\nfrom networkx.algorithms.bridges import *\nfrom networkx.algorithms.chains import *\nfrom networkx.algorithms.centrality import *\nfrom networkx.algorithms.chordal import *\nfrom networkx.algorithms.cluster import *\nfrom networkx.algorithms.clique import *\nfrom networkx.algorithms.communicability_alg import *\nfrom networkx.algorithms.components import *\nfrom networkx.algorithms.coloring import *\nfrom networkx.algorithms.core import *\nfrom networkx.algorithms.covering import *\nfrom networkx.algorithms.cycles import *\nfrom networkx.algorithms.cuts import *\nfrom networkx.algorithms.dag import *\nfrom networkx.algorithms.distance_measures import *\nfrom networkx.algorithms.distance_regular import *\nfrom networkx.algorithms.dominance import *\nfrom networkx.algorithms.dominating import *\nfrom networkx.algorithms.efficiency import *\nfrom networkx.algorithms.euler import *\nfrom networkx.algorithms.graphical import *\nfrom networkx.algorithms.hierarchy import *\nfrom networkx.algorithms.hybrid import *\nfrom networkx.algorithms.link_analysis import *\nfrom networkx.algorithms.link_prediction import *\nfrom networkx.algorithms.lowest_common_ancestors import *\nfrom networkx.algorithms.isolate import *\nfrom networkx.algorithms.matching import *\nfrom networkx.algorithms.minors import *\nfrom networkx.algorithms.mis import *\nfrom networkx.algorithms.operators import *\nfrom networkx.algorithms.planarity import *\nfrom networkx.algorithms.reciprocity import *\nfrom networkx.algorithms.richclub import *\nfrom networkx.algorithms.shortest_paths import *\nfrom networkx.algorithms.similarity import *\nfrom networkx.algorithms.simple_paths import *\nfrom networkx.algorithms.smallworld import *\nfrom networkx.algorithms.smetric import *\nfrom networkx.algorithms.structuralholes import *\nfrom networkx.algorithms.triads import *\nfrom networkx.algorithms.sparsifiers import *\nfrom networkx.algorithms.swap import *\nfrom networkx.algorithms.traversal import *\nfrom networkx.algorithms.triads import *\nfrom networkx.algorithms.vitality import *\nfrom networkx.algorithms.voronoi import *\nfrom networkx.algorithms.wiener import *\n\n# Make certain subpackages available to the user as direct imports from\n# the `networkx` namespace.\nimport networkx.algorithms.assortativity\nimport networkx.algorithms.bipartite\nimport networkx.algorithms.node_classification\nimport networkx.algorithms.centrality\nimport networkx.algorithms.chordal\nimport networkx.algorithms.cluster\nimport networkx.algorithms.clique\nimport networkx.algorithms.components\nimport networkx.algorithms.connectivity\nimport networkx.algorithms.community\nimport networkx.algorithms.coloring\nimport 
networkx.algorithms.flow\nimport networkx.algorithms.isomorphism\nimport networkx.algorithms.link_analysis\nimport networkx.algorithms.lowest_common_ancestors\nimport networkx.algorithms.operators\nimport networkx.algorithms.shortest_paths\nimport networkx.algorithms.tournament\nimport networkx.algorithms.traversal\nimport networkx.algorithms.tree\n\n# Make certain functions from some of the previous subpackages available\n# to the user as direct imports from the `networkx` namespace.\nfrom networkx.algorithms.bipartite import complete_bipartite_graph\nfrom networkx.algorithms.bipartite import is_bipartite\nfrom networkx.algorithms.bipartite import project\nfrom networkx.algorithms.bipartite import projected_graph\nfrom networkx.algorithms.connectivity import all_pairs_node_connectivity\nfrom networkx.algorithms.connectivity import all_node_cuts\nfrom networkx.algorithms.connectivity import average_node_connectivity\nfrom networkx.algorithms.connectivity import edge_connectivity\nfrom networkx.algorithms.connectivity import edge_disjoint_paths\nfrom networkx.algorithms.connectivity import k_components\nfrom networkx.algorithms.connectivity import k_edge_components\nfrom networkx.algorithms.connectivity import k_edge_subgraphs\nfrom networkx.algorithms.connectivity import k_edge_augmentation\nfrom networkx.algorithms.connectivity import is_k_edge_connected\nfrom networkx.algorithms.connectivity import minimum_edge_cut\nfrom networkx.algorithms.connectivity import minimum_node_cut\nfrom networkx.algorithms.connectivity import node_connectivity\nfrom networkx.algorithms.connectivity import node_disjoint_paths\nfrom networkx.algorithms.connectivity import stoer_wagner\nfrom networkx.algorithms.flow import capacity_scaling\nfrom networkx.algorithms.flow import cost_of_flow\nfrom networkx.algorithms.flow import gomory_hu_tree\nfrom networkx.algorithms.flow import max_flow_min_cost\nfrom networkx.algorithms.flow import maximum_flow\nfrom networkx.algorithms.flow import maximum_flow_value\nfrom networkx.algorithms.flow import min_cost_flow\nfrom networkx.algorithms.flow import min_cost_flow_cost\nfrom networkx.algorithms.flow import minimum_cut\nfrom networkx.algorithms.flow import minimum_cut_value\nfrom networkx.algorithms.flow import network_simplex\nfrom networkx.algorithms.isomorphism import could_be_isomorphic\nfrom networkx.algorithms.isomorphism import fast_could_be_isomorphic\nfrom networkx.algorithms.isomorphism import faster_could_be_isomorphic\nfrom networkx.algorithms.isomorphism import is_isomorphic\nfrom networkx.algorithms.tree.branchings import maximum_branching\nfrom networkx.algorithms.tree.branchings import maximum_spanning_arborescence\nfrom networkx.algorithms.tree.branchings import minimum_branching\nfrom networkx.algorithms.tree.branchings import minimum_spanning_arborescence\nfrom networkx.algorithms.tree.coding import *\nfrom networkx.algorithms.tree.operations import *\nfrom networkx.algorithms.tree.recognition import *\nfrom networkx.algorithms.tree.mst import *\n", "path": "networkx/algorithms/__init__.py"}], "after_files": [{"content": "from networkx.algorithms.assortativity import *\nfrom networkx.algorithms.boundary import *\nfrom networkx.algorithms.bridges import *\nfrom networkx.algorithms.chains import *\nfrom networkx.algorithms.centrality import *\nfrom networkx.algorithms.chordal import *\nfrom networkx.algorithms.cluster import *\nfrom networkx.algorithms.clique import *\nfrom networkx.algorithms.communicability_alg import *\nfrom 
networkx.algorithms.components import *\nfrom networkx.algorithms.coloring import *\nfrom networkx.algorithms.core import *\nfrom networkx.algorithms.covering import *\nfrom networkx.algorithms.cycles import *\nfrom networkx.algorithms.cuts import *\nfrom networkx.algorithms.dag import *\nfrom networkx.algorithms.distance_measures import *\nfrom networkx.algorithms.distance_regular import *\nfrom networkx.algorithms.dominance import *\nfrom networkx.algorithms.dominating import *\nfrom networkx.algorithms.efficiency import *\nfrom networkx.algorithms.euler import *\nfrom networkx.algorithms.graphical import *\nfrom networkx.algorithms.hierarchy import *\nfrom networkx.algorithms.hybrid import *\nfrom networkx.algorithms.link_analysis import *\nfrom networkx.algorithms.link_prediction import *\nfrom networkx.algorithms.lowest_common_ancestors import *\nfrom networkx.algorithms.isolate import *\nfrom networkx.algorithms.matching import *\nfrom networkx.algorithms.minors import *\nfrom networkx.algorithms.mis import *\nfrom networkx.algorithms.operators import *\nfrom networkx.algorithms.planarity import *\nfrom networkx.algorithms.reciprocity import *\nfrom networkx.algorithms.richclub import *\nfrom networkx.algorithms.shortest_paths import *\nfrom networkx.algorithms.similarity import *\nfrom networkx.algorithms.simple_paths import *\nfrom networkx.algorithms.smallworld import *\nfrom networkx.algorithms.smetric import *\nfrom networkx.algorithms.structuralholes import *\nfrom networkx.algorithms.sparsifiers import *\nfrom networkx.algorithms.swap import *\nfrom networkx.algorithms.traversal import *\nfrom networkx.algorithms.triads import *\nfrom networkx.algorithms.vitality import *\nfrom networkx.algorithms.voronoi import *\nfrom networkx.algorithms.wiener import *\n\n# Make certain subpackages available to the user as direct imports from\n# the `networkx` namespace.\nimport networkx.algorithms.assortativity\nimport networkx.algorithms.bipartite\nimport networkx.algorithms.node_classification\nimport networkx.algorithms.centrality\nimport networkx.algorithms.chordal\nimport networkx.algorithms.cluster\nimport networkx.algorithms.clique\nimport networkx.algorithms.components\nimport networkx.algorithms.connectivity\nimport networkx.algorithms.community\nimport networkx.algorithms.coloring\nimport networkx.algorithms.flow\nimport networkx.algorithms.isomorphism\nimport networkx.algorithms.link_analysis\nimport networkx.algorithms.lowest_common_ancestors\nimport networkx.algorithms.operators\nimport networkx.algorithms.shortest_paths\nimport networkx.algorithms.tournament\nimport networkx.algorithms.traversal\nimport networkx.algorithms.tree\n\n# Make certain functions from some of the previous subpackages available\n# to the user as direct imports from the `networkx` namespace.\nfrom networkx.algorithms.bipartite import complete_bipartite_graph\nfrom networkx.algorithms.bipartite import is_bipartite\nfrom networkx.algorithms.bipartite import project\nfrom networkx.algorithms.bipartite import projected_graph\nfrom networkx.algorithms.connectivity import all_pairs_node_connectivity\nfrom networkx.algorithms.connectivity import all_node_cuts\nfrom networkx.algorithms.connectivity import average_node_connectivity\nfrom networkx.algorithms.connectivity import edge_connectivity\nfrom networkx.algorithms.connectivity import edge_disjoint_paths\nfrom networkx.algorithms.connectivity import k_components\nfrom networkx.algorithms.connectivity import k_edge_components\nfrom 
networkx.algorithms.connectivity import k_edge_subgraphs\nfrom networkx.algorithms.connectivity import k_edge_augmentation\nfrom networkx.algorithms.connectivity import is_k_edge_connected\nfrom networkx.algorithms.connectivity import minimum_edge_cut\nfrom networkx.algorithms.connectivity import minimum_node_cut\nfrom networkx.algorithms.connectivity import node_connectivity\nfrom networkx.algorithms.connectivity import node_disjoint_paths\nfrom networkx.algorithms.connectivity import stoer_wagner\nfrom networkx.algorithms.flow import capacity_scaling\nfrom networkx.algorithms.flow import cost_of_flow\nfrom networkx.algorithms.flow import gomory_hu_tree\nfrom networkx.algorithms.flow import max_flow_min_cost\nfrom networkx.algorithms.flow import maximum_flow\nfrom networkx.algorithms.flow import maximum_flow_value\nfrom networkx.algorithms.flow import min_cost_flow\nfrom networkx.algorithms.flow import min_cost_flow_cost\nfrom networkx.algorithms.flow import minimum_cut\nfrom networkx.algorithms.flow import minimum_cut_value\nfrom networkx.algorithms.flow import network_simplex\nfrom networkx.algorithms.isomorphism import could_be_isomorphic\nfrom networkx.algorithms.isomorphism import fast_could_be_isomorphic\nfrom networkx.algorithms.isomorphism import faster_could_be_isomorphic\nfrom networkx.algorithms.isomorphism import is_isomorphic\nfrom networkx.algorithms.tree.branchings import maximum_branching\nfrom networkx.algorithms.tree.branchings import maximum_spanning_arborescence\nfrom networkx.algorithms.tree.branchings import minimum_branching\nfrom networkx.algorithms.tree.branchings import minimum_spanning_arborescence\nfrom networkx.algorithms.tree.coding import *\nfrom networkx.algorithms.tree.operations import *\nfrom networkx.algorithms.tree.recognition import *\nfrom networkx.algorithms.tree.mst import *\n", "path": "networkx/algorithms/__init__.py"}]}
| 1,778 | 121 |
gh_patches_debug_28442 | rasdani/github-patches | git_diff | pypa__pipenv-1326 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pipenv starts slow when IPython is installed.
IPython is imported when importing dotenv.
(ref: theskumar/python-dotenv#84 and [import profile](https://paste.ubuntu.com/26409167/))
Since pipenv uses patched version of dotenv, pipenv should port upstream fix
or patch `dotenv/__init__.py` to stop importing dotenv.ipython.
##### Describe your environment
1. Ubuntu 17.10
1. Python version: 3.7.0a4
1. Pipenv version: 9.0.3
##### Steps to replicate
* Install Python 3.7.0a4 or newer
* ` PYTHONPROFILEIMPORTTIME=1 path/to/pipenv --version 2>pipenv-version`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pipenv/patched/dotenv/__init__.py`
Content:
```
1 from .cli import get_cli_string
2 from .main import load_dotenv, get_key, set_key, unset_key, find_dotenv
3 try:
4 from .ipython import load_ipython_extension
5 except ImportError:
6 pass
7
8 __all__ = ['get_cli_string', 'load_dotenv', 'get_key', 'set_key', 'unset_key', 'find_dotenv', 'load_ipython_extension']
9
```
Path: `pipenv/patched/dotenv/ipython.py`
Content:
```
1 from __future__ import print_function
2 from .main import load_dotenv, find_dotenv
3
4 from IPython.core.magic import Magics, magics_class, line_magic
5 from IPython.core.magic_arguments import (argument, magic_arguments,
6 parse_argstring)
7
8
9 @magics_class
10 class IPythonDotEnv(Magics):
11
12 @magic_arguments()
13 @argument(
14 '-o', '--override', action='store_true',
15 help="Indicate to override existing variables"
16 )
17 @argument(
18 '-v', '--verbose', action='store_true',
19 help="Indicate function calls to be verbose"
20 )
21 @argument('dotenv_path', nargs='?', type=str, default='.env',
22 help='Search in increasingly higher folders for the `dotenv_path`')
23 @line_magic
24 def dotenv(self, line):
25 args = parse_argstring(self.dotenv, line)
26 # Locate the .env file
27 dotenv_path = args.dotenv_path
28 try:
29 dotenv_path = find_dotenv(dotenv_path, True, True)
30 except IOError:
31 print("cannot find .env file")
32 return
33
34 # Load the .env file
35 load_dotenv(dotenv_path, verbose=args.verbose, override=args.override)
36
37
38 def load_ipython_extension(ipython):
39 """Register the %dotenv magic."""
40 ipython.register_magics(IPythonDotEnv)
41
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pipenv/patched/dotenv/__init__.py b/pipenv/patched/dotenv/__init__.py
--- a/pipenv/patched/dotenv/__init__.py
+++ b/pipenv/patched/dotenv/__init__.py
@@ -1,8 +1,4 @@
from .cli import get_cli_string
from .main import load_dotenv, get_key, set_key, unset_key, find_dotenv
-try:
- from .ipython import load_ipython_extension
-except ImportError:
- pass
-__all__ = ['get_cli_string', 'load_dotenv', 'get_key', 'set_key', 'unset_key', 'find_dotenv', 'load_ipython_extension']
+__all__ = ['get_cli_string', 'load_dotenv', 'get_key', 'set_key', 'unset_key', 'find_dotenv']
diff --git a/pipenv/patched/dotenv/ipython.py b/pipenv/patched/dotenv/ipython.py
deleted file mode 100644
--- a/pipenv/patched/dotenv/ipython.py
+++ /dev/null
@@ -1,40 +0,0 @@
-from __future__ import print_function
-from .main import load_dotenv, find_dotenv
-
-from IPython.core.magic import Magics, magics_class, line_magic
-from IPython.core.magic_arguments import (argument, magic_arguments,
- parse_argstring)
-
-
-@magics_class
-class IPythonDotEnv(Magics):
-
- @magic_arguments()
- @argument(
- '-o', '--override', action='store_true',
- help="Indicate to override existing variables"
- )
- @argument(
- '-v', '--verbose', action='store_true',
- help="Indicate function calls to be verbose"
- )
- @argument('dotenv_path', nargs='?', type=str, default='.env',
- help='Search in increasingly higher folders for the `dotenv_path`')
- @line_magic
- def dotenv(self, line):
- args = parse_argstring(self.dotenv, line)
- # Locate the .env file
- dotenv_path = args.dotenv_path
- try:
- dotenv_path = find_dotenv(dotenv_path, True, True)
- except IOError:
- print("cannot find .env file")
- return
-
- # Load the .env file
- load_dotenv(dotenv_path, verbose=args.verbose, override=args.override)
-
-
-def load_ipython_extension(ipython):
- """Register the %dotenv magic."""
- ipython.register_magics(IPythonDotEnv)
|
{"golden_diff": "diff --git a/pipenv/patched/dotenv/__init__.py b/pipenv/patched/dotenv/__init__.py\n--- a/pipenv/patched/dotenv/__init__.py\n+++ b/pipenv/patched/dotenv/__init__.py\n@@ -1,8 +1,4 @@\n from .cli import get_cli_string\n from .main import load_dotenv, get_key, set_key, unset_key, find_dotenv\n-try:\n- from .ipython import load_ipython_extension\n-except ImportError:\n- pass\n \n-__all__ = ['get_cli_string', 'load_dotenv', 'get_key', 'set_key', 'unset_key', 'find_dotenv', 'load_ipython_extension']\n+__all__ = ['get_cli_string', 'load_dotenv', 'get_key', 'set_key', 'unset_key', 'find_dotenv']\ndiff --git a/pipenv/patched/dotenv/ipython.py b/pipenv/patched/dotenv/ipython.py\ndeleted file mode 100644\n--- a/pipenv/patched/dotenv/ipython.py\n+++ /dev/null\n@@ -1,40 +0,0 @@\n-from __future__ import print_function\n-from .main import load_dotenv, find_dotenv\n-\n-from IPython.core.magic import Magics, magics_class, line_magic\n-from IPython.core.magic_arguments import (argument, magic_arguments,\n- parse_argstring)\n-\n-\n-@magics_class\n-class IPythonDotEnv(Magics):\n-\n- @magic_arguments()\n- @argument(\n- '-o', '--override', action='store_true',\n- help=\"Indicate to override existing variables\"\n- )\n- @argument(\n- '-v', '--verbose', action='store_true',\n- help=\"Indicate function calls to be verbose\"\n- )\n- @argument('dotenv_path', nargs='?', type=str, default='.env',\n- help='Search in increasingly higher folders for the `dotenv_path`')\n- @line_magic\n- def dotenv(self, line):\n- args = parse_argstring(self.dotenv, line)\n- # Locate the .env file\n- dotenv_path = args.dotenv_path\n- try:\n- dotenv_path = find_dotenv(dotenv_path, True, True)\n- except IOError:\n- print(\"cannot find .env file\")\n- return\n-\n- # Load the .env file\n- load_dotenv(dotenv_path, verbose=args.verbose, override=args.override)\n-\n-\n-def load_ipython_extension(ipython):\n- \"\"\"Register the %dotenv magic.\"\"\"\n- ipython.register_magics(IPythonDotEnv)\n", "issue": "pipenv starts slow when IPython is installed.\nIPython is imported when importing dotenv. \r\n(ref: theskumar/python-dotenv#84 and [import profile](https://paste.ubuntu.com/26409167/))\r\n\r\nSince pipenv uses patched version of dotenv, pipenv should port upstream fix\r\nor patch `dotenv/__init__.py` to stop importing dotenv.ipython.\r\n\r\n##### Describe your environment\r\n\r\n1. Ubuntu 17.10\r\n1. Python version: 3.7.0a4\r\n1. 
Pipenv version: 9.0.3\r\n\r\n##### Steps to replicate\r\n\r\n* Install Python 3.7.0a4 or newer\r\n* ` PYTHONPROFILEIMPORTTIME=1 path/to/pipenv --version 2>pipenv-version`\n", "before_files": [{"content": "from .cli import get_cli_string\nfrom .main import load_dotenv, get_key, set_key, unset_key, find_dotenv\ntry:\n from .ipython import load_ipython_extension\nexcept ImportError:\n pass\n\n__all__ = ['get_cli_string', 'load_dotenv', 'get_key', 'set_key', 'unset_key', 'find_dotenv', 'load_ipython_extension']\n", "path": "pipenv/patched/dotenv/__init__.py"}, {"content": "from __future__ import print_function\nfrom .main import load_dotenv, find_dotenv\n\nfrom IPython.core.magic import Magics, magics_class, line_magic\nfrom IPython.core.magic_arguments import (argument, magic_arguments,\n parse_argstring)\n\n\n@magics_class\nclass IPythonDotEnv(Magics):\n\n @magic_arguments()\n @argument(\n '-o', '--override', action='store_true',\n help=\"Indicate to override existing variables\"\n )\n @argument(\n '-v', '--verbose', action='store_true',\n help=\"Indicate function calls to be verbose\"\n )\n @argument('dotenv_path', nargs='?', type=str, default='.env',\n help='Search in increasingly higher folders for the `dotenv_path`')\n @line_magic\n def dotenv(self, line):\n args = parse_argstring(self.dotenv, line)\n # Locate the .env file\n dotenv_path = args.dotenv_path\n try:\n dotenv_path = find_dotenv(dotenv_path, True, True)\n except IOError:\n print(\"cannot find .env file\")\n return\n\n # Load the .env file\n load_dotenv(dotenv_path, verbose=args.verbose, override=args.override)\n\n\ndef load_ipython_extension(ipython):\n \"\"\"Register the %dotenv magic.\"\"\"\n ipython.register_magics(IPythonDotEnv)\n", "path": "pipenv/patched/dotenv/ipython.py"}], "after_files": [{"content": "from .cli import get_cli_string\nfrom .main import load_dotenv, get_key, set_key, unset_key, find_dotenv\n\n__all__ = ['get_cli_string', 'load_dotenv', 'get_key', 'set_key', 'unset_key', 'find_dotenv']\n", "path": "pipenv/patched/dotenv/__init__.py"}, {"content": null, "path": "pipenv/patched/dotenv/ipython.py"}]}
| 921 | 594 |
gh_patches_debug_4224 | rasdani/github-patches | git_diff | pypa__pip-5146 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Development version number triggers a false positive warning
* Pip version: 10.0.0b1
* Python version: 3.6.4
* Operating system: Linux
### Description:
Say a package `foo` depends on `bar>=1.0.0`. If the installed version of `bar` is a development version such as `1.0.1.dev42`, pip issues an incompatible version warning upon installation of `foo`. Pip shouldn't issue any warning since `1.0.1.dev42>=1.0.0`. The weird thing is that pip is satisfied with that version when scanning the dependencies of `foo`, but issues that warning anyway.
For that matter, the real life scenario is installing a development library with a `setuptools_scm`-generated version number and then installing a library that depends on it.
### What I've run:
```
% tree
.
├── bar
│ └── setup.py
└── foo
└── setup.py
2 directories, 2 files
```
```
% cat bar/setup.py
from setuptools import setup
setup(
name='bar',
version='1.0.1.dev42')
```
```
% cat foo/setup.py
from setuptools import setup
setup(
name='foo',
install_requires=['bar>=1.0.0'],
version='3.14.15')
```
```
# setting up virtual environment
% python3 -m venv compat
% source compat/bin/activate
% pip install pip==10.0.0b1
```
```
% pip install ./bar
Processing ./bar
Installing collected packages: bar
Running setup.py install for bar ... done
Successfully installed bar-1.0.1.dev42
```
```
% pip install ./foo
Processing ./foo
Requirement already satisfied: bar>=1.0.0 in ./compat/lib/python3.6/site-packages (from foo==3.14.15) (1.0.1.dev42)
foo 3.14.15 has requirement bar>=1.0.0, but you'll have bar 1.0.1.dev42 which is incompatible.
Installing collected packages: foo
Running setup.py install for foo ... done
Successfully installed foo-3.14.15
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/pip/_internal/operations/check.py`
Content:
```
1 """Validation of dependencies of packages
2 """
3
4 from collections import namedtuple
5
6 from pip._vendor.packaging.utils import canonicalize_name
7
8 from pip._internal.operations.prepare import make_abstract_dist
9
10 from pip._internal.utils.misc import get_installed_distributions
11 from pip._internal.utils.typing import MYPY_CHECK_RUNNING
12
13 if MYPY_CHECK_RUNNING:
14 from pip._internal.req.req_install import InstallRequirement
15 from typing import Any, Dict, Iterator, Set, Tuple, List
16
17 # Shorthands
18 PackageSet = Dict[str, 'PackageDetails']
19 Missing = Tuple[str, Any]
20 Conflicting = Tuple[str, str, Any]
21
22 MissingDict = Dict[str, List[Missing]]
23 ConflictingDict = Dict[str, List[Conflicting]]
24 CheckResult = Tuple[MissingDict, ConflictingDict]
25
26 PackageDetails = namedtuple('PackageDetails', ['version', 'requires'])
27
28
29 def create_package_set_from_installed(**kwargs):
30 # type: (**Any) -> PackageSet
31 """Converts a list of distributions into a PackageSet.
32 """
33 retval = {}
34 for dist in get_installed_distributions(**kwargs):
35 name = canonicalize_name(dist.project_name)
36 retval[name] = PackageDetails(dist.version, dist.requires())
37 return retval
38
39
40 def check_package_set(package_set):
41 # type: (PackageSet) -> CheckResult
42 """Check if a package set is consistent
43 """
44 missing = dict()
45 conflicting = dict()
46
47 for package_name in package_set:
48 # Info about dependencies of package_name
49 missing_deps = set() # type: Set[Missing]
50 conflicting_deps = set() # type: Set[Conflicting]
51
52 for req in package_set[package_name].requires:
53 name = canonicalize_name(req.project_name) # type: str
54
55 # Check if it's missing
56 if name not in package_set:
57 missed = True
58 if req.marker is not None:
59 missed = req.marker.evaluate()
60 if missed:
61 missing_deps.add((name, req))
62 continue
63
64 # Check if there's a conflict
65 version = package_set[name].version # type: str
66 if version not in req.specifier:
67 conflicting_deps.add((name, version, req))
68
69 def str_key(x):
70 return str(x)
71
72 if missing_deps:
73 missing[package_name] = sorted(missing_deps, key=str_key)
74 if conflicting_deps:
75 conflicting[package_name] = sorted(conflicting_deps, key=str_key)
76
77 return missing, conflicting
78
79
80 def check_install_conflicts(to_install):
81 # type: (List[InstallRequirement]) -> Tuple[PackageSet, CheckResult]
82 """For checking if the dependency graph would be consistent after \
83 installing given requirements
84 """
85 # Start from the current state
86 state = create_package_set_from_installed()
87 _simulate_installation_of(to_install, state)
88 return state, check_package_set(state)
89
90
91 # NOTE from @pradyunsg
92 # This required a minor update in dependency link handling logic over at
93 # operations.prepare.IsSDist.dist() to get it working
94 def _simulate_installation_of(to_install, state):
95 # type: (List[InstallRequirement], PackageSet) -> None
96 """Computes the version of packages after installing to_install.
97 """
98
99 # Modify it as installing requirement_set would (assuming no errors)
100 for inst_req in to_install:
101 dist = make_abstract_dist(inst_req).dist(finder=None)
102 state[dist.key] = PackageDetails(dist.version, dist.requires())
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/pip/_internal/operations/check.py b/src/pip/_internal/operations/check.py
--- a/src/pip/_internal/operations/check.py
+++ b/src/pip/_internal/operations/check.py
@@ -63,7 +63,7 @@
# Check if there's a conflict
version = package_set[name].version # type: str
- if version not in req.specifier:
+ if not req.specifier.contains(version, prereleases=True):
conflicting_deps.add((name, version, req))
def str_key(x):
|
{"golden_diff": "diff --git a/src/pip/_internal/operations/check.py b/src/pip/_internal/operations/check.py\n--- a/src/pip/_internal/operations/check.py\n+++ b/src/pip/_internal/operations/check.py\n@@ -63,7 +63,7 @@\n \n # Check if there's a conflict\n version = package_set[name].version # type: str\n- if version not in req.specifier:\n+ if not req.specifier.contains(version, prereleases=True):\n conflicting_deps.add((name, version, req))\n \n def str_key(x):\n", "issue": "Development version number triggers a false positive warning\n* Pip version: 10.0.0b1\r\n* Python version: 3.6.4\r\n* Operating system: Linux\r\n\r\n### Description:\r\n\r\nSay a package `foo` depends on `bar>=1.0.0`. If the installed version of `bar` is a development version such as `1.0.1.dev42`, pip issues an incompatible version warning upon installation of `foo`. Pip shouldn't issue any warning since `1.0.1.dev42>=1.0.0`. The weird thing is that pip is satisfied with that version when scanning the dependencies of `foo`, but issues that warning anyway.\r\n\r\nFor that matter, the real life scenario is installing a development library with a `setuptools_scm`-generated version number and then installing a library that depends on it.\r\n\r\n### What I've run:\r\n\r\n```\r\n% tree\r\n.\r\n\u251c\u2500\u2500 bar\r\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 setup.py\r\n\u2514\u2500\u2500 foo\r\n \u2514\u2500\u2500 setup.py\r\n\r\n2 directories, 2 files\r\n```\r\n\r\n```\r\n% cat bar/setup.py\r\nfrom setuptools import setup\r\n\r\nsetup(\r\n name='bar',\r\n version='1.0.1.dev42')\r\n```\r\n\r\n```\r\n% cat foo/setup.py\r\nfrom setuptools import setup\r\n\r\nsetup(\r\n name='foo',\r\n install_requires=['bar>=1.0.0'],\r\n version='3.14.15')\r\n```\r\n\r\n```\r\n# setting up virtual environment\r\n% python3 -m venv compat\r\n% source compat/bin/activate\r\n% pip install pip==10.0.0b1\r\n```\r\n\r\n```\r\n% pip install ./bar\r\nProcessing ./bar\r\nInstalling collected packages: bar\r\n Running setup.py install for bar ... done\r\nSuccessfully installed bar-1.0.1.dev42\r\n```\r\n\r\n```\r\n% pip install ./foo\r\nProcessing ./foo\r\nRequirement already satisfied: bar>=1.0.0 in ./compat/lib/python3.6/site-packages (from foo==3.14.15) (1.0.1.dev42)\r\nfoo 3.14.15 has requirement bar>=1.0.0, but you'll have bar 1.0.1.dev42 which is incompatible.\r\nInstalling collected packages: foo\r\n Running setup.py install for foo ... 
done\r\nSuccessfully installed foo-3.14.15\r\n```\r\n\n", "before_files": [{"content": "\"\"\"Validation of dependencies of packages\n\"\"\"\n\nfrom collections import namedtuple\n\nfrom pip._vendor.packaging.utils import canonicalize_name\n\nfrom pip._internal.operations.prepare import make_abstract_dist\n\nfrom pip._internal.utils.misc import get_installed_distributions\nfrom pip._internal.utils.typing import MYPY_CHECK_RUNNING\n\nif MYPY_CHECK_RUNNING:\n from pip._internal.req.req_install import InstallRequirement\n from typing import Any, Dict, Iterator, Set, Tuple, List\n\n # Shorthands\n PackageSet = Dict[str, 'PackageDetails']\n Missing = Tuple[str, Any]\n Conflicting = Tuple[str, str, Any]\n\n MissingDict = Dict[str, List[Missing]]\n ConflictingDict = Dict[str, List[Conflicting]]\n CheckResult = Tuple[MissingDict, ConflictingDict]\n\nPackageDetails = namedtuple('PackageDetails', ['version', 'requires'])\n\n\ndef create_package_set_from_installed(**kwargs):\n # type: (**Any) -> PackageSet\n \"\"\"Converts a list of distributions into a PackageSet.\n \"\"\"\n retval = {}\n for dist in get_installed_distributions(**kwargs):\n name = canonicalize_name(dist.project_name)\n retval[name] = PackageDetails(dist.version, dist.requires())\n return retval\n\n\ndef check_package_set(package_set):\n # type: (PackageSet) -> CheckResult\n \"\"\"Check if a package set is consistent\n \"\"\"\n missing = dict()\n conflicting = dict()\n\n for package_name in package_set:\n # Info about dependencies of package_name\n missing_deps = set() # type: Set[Missing]\n conflicting_deps = set() # type: Set[Conflicting]\n\n for req in package_set[package_name].requires:\n name = canonicalize_name(req.project_name) # type: str\n\n # Check if it's missing\n if name not in package_set:\n missed = True\n if req.marker is not None:\n missed = req.marker.evaluate()\n if missed:\n missing_deps.add((name, req))\n continue\n\n # Check if there's a conflict\n version = package_set[name].version # type: str\n if version not in req.specifier:\n conflicting_deps.add((name, version, req))\n\n def str_key(x):\n return str(x)\n\n if missing_deps:\n missing[package_name] = sorted(missing_deps, key=str_key)\n if conflicting_deps:\n conflicting[package_name] = sorted(conflicting_deps, key=str_key)\n\n return missing, conflicting\n\n\ndef check_install_conflicts(to_install):\n # type: (List[InstallRequirement]) -> Tuple[PackageSet, CheckResult]\n \"\"\"For checking if the dependency graph would be consistent after \\\n installing given requirements\n \"\"\"\n # Start from the current state\n state = create_package_set_from_installed()\n _simulate_installation_of(to_install, state)\n return state, check_package_set(state)\n\n\n# NOTE from @pradyunsg\n# This required a minor update in dependency link handling logic over at\n# operations.prepare.IsSDist.dist() to get it working\ndef _simulate_installation_of(to_install, state):\n # type: (List[InstallRequirement], PackageSet) -> None\n \"\"\"Computes the version of packages after installing to_install.\n \"\"\"\n\n # Modify it as installing requirement_set would (assuming no errors)\n for inst_req in to_install:\n dist = make_abstract_dist(inst_req).dist(finder=None)\n state[dist.key] = PackageDetails(dist.version, dist.requires())\n", "path": "src/pip/_internal/operations/check.py"}], "after_files": [{"content": "\"\"\"Validation of dependencies of packages\n\"\"\"\n\nfrom collections import namedtuple\n\nfrom pip._vendor.packaging.utils import canonicalize_name\n\nfrom 
pip._internal.operations.prepare import make_abstract_dist\n\nfrom pip._internal.utils.misc import get_installed_distributions\nfrom pip._internal.utils.typing import MYPY_CHECK_RUNNING\n\nif MYPY_CHECK_RUNNING:\n from pip._internal.req.req_install import InstallRequirement\n from typing import Any, Dict, Iterator, Set, Tuple, List\n\n # Shorthands\n PackageSet = Dict[str, 'PackageDetails']\n Missing = Tuple[str, Any]\n Conflicting = Tuple[str, str, Any]\n\n MissingDict = Dict[str, List[Missing]]\n ConflictingDict = Dict[str, List[Conflicting]]\n CheckResult = Tuple[MissingDict, ConflictingDict]\n\nPackageDetails = namedtuple('PackageDetails', ['version', 'requires'])\n\n\ndef create_package_set_from_installed(**kwargs):\n # type: (**Any) -> PackageSet\n \"\"\"Converts a list of distributions into a PackageSet.\n \"\"\"\n retval = {}\n for dist in get_installed_distributions(**kwargs):\n name = canonicalize_name(dist.project_name)\n retval[name] = PackageDetails(dist.version, dist.requires())\n return retval\n\n\ndef check_package_set(package_set):\n # type: (PackageSet) -> CheckResult\n \"\"\"Check if a package set is consistent\n \"\"\"\n missing = dict()\n conflicting = dict()\n\n for package_name in package_set:\n # Info about dependencies of package_name\n missing_deps = set() # type: Set[Missing]\n conflicting_deps = set() # type: Set[Conflicting]\n\n for req in package_set[package_name].requires:\n name = canonicalize_name(req.project_name) # type: str\n\n # Check if it's missing\n if name not in package_set:\n missed = True\n if req.marker is not None:\n missed = req.marker.evaluate()\n if missed:\n missing_deps.add((name, req))\n continue\n\n # Check if there's a conflict\n version = package_set[name].version # type: str\n if not req.specifier.contains(version, prereleases=True):\n conflicting_deps.add((name, version, req))\n\n def str_key(x):\n return str(x)\n\n if missing_deps:\n missing[package_name] = sorted(missing_deps, key=str_key)\n if conflicting_deps:\n conflicting[package_name] = sorted(conflicting_deps, key=str_key)\n\n return missing, conflicting\n\n\ndef check_install_conflicts(to_install):\n # type: (List[InstallRequirement]) -> Tuple[PackageSet, CheckResult]\n \"\"\"For checking if the dependency graph would be consistent after \\\n installing given requirements\n \"\"\"\n # Start from the current state\n state = create_package_set_from_installed()\n _simulate_installation_of(to_install, state)\n return state, check_package_set(state)\n\n\n# NOTE from @pradyunsg\n# This required a minor update in dependency link handling logic over at\n# operations.prepare.IsSDist.dist() to get it working\ndef _simulate_installation_of(to_install, state):\n # type: (List[InstallRequirement], PackageSet) -> None\n \"\"\"Computes the version of packages after installing to_install.\n \"\"\"\n\n # Modify it as installing requirement_set would (assuming no errors)\n for inst_req in to_install:\n dist = make_abstract_dist(inst_req).dist(finder=None)\n state[dist.key] = PackageDetails(dist.version, dist.requires())\n", "path": "src/pip/_internal/operations/check.py"}]}
| 1,739 | 126 |
gh_patches_debug_51874 | rasdani/github-patches | git_diff | networkx__networkx-5287 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`json_graph.tree_data` can cause maximum recursion depth error.
<!-- If you have a general question about NetworkX, please use the discussions tab to create a new discussion -->
<!--- Provide a general summary of the issue in the Title above -->
### Current Behavior
<!--- Tell us what happens instead of the expected behavior -->
Currently the algorithm compares the `n_nodes` with `n_edges` to check if `G` is a tree. https://github.com/networkx/networkx/blob/0cc70051fa0a979b1f1eab4af5b6587a6ebf8334/networkx/readwrite/json_graph/tree.py#L74-L75
This check can be bypassed with specific inputs and cause a recursion error.
### Expected Behavior
<!--- Tell us what should happen -->
The code should check whether there are cycles with `root` as the source and raise an exception.
Another possible fix would be to check if the graph is not weakly connected.
### Steps to Reproduce
<!--- Provide a minimal example that reproduces the bug -->
```Python3
>>> import networkx as nx
>>> G = nx.DiGraph([(1, 2), (2, 3), (3, 1)])
>>> G.add_node(4)
>>> data = nx.json_graph.tree_data(G, 1)
RecursionError: maximum recursion depth exceeded
```
### Environment
<!--- Please provide details about your local environment -->
Python version: 3.8.10
NetworkX version: 2.7rc1.dev0
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `networkx/readwrite/json_graph/tree.py`
Content:
```
1 from itertools import chain
2 import networkx as nx
3
4 __all__ = ["tree_data", "tree_graph"]
5
6
7 # NOTE: Remove attrs from signature in 3.0
8 def tree_data(G, root, attrs=None, ident="id", children="children"):
9 """Returns data in tree format that is suitable for JSON serialization
10 and use in Javascript documents.
11
12 Parameters
13 ----------
14 G : NetworkX graph
15 G must be an oriented tree
16
17 root : node
18 The root of the tree
19
20 attrs : dict
21 A dictionary that contains two keys 'id' and 'children'. The
22 corresponding values provide the attribute names for storing
23 NetworkX-internal graph data. The values should be unique. Default
24 value: :samp:`dict(id='id', children='children')`.
25
26 If some user-defined graph data use these attribute names as data keys,
27 they may be silently dropped.
28
29 .. deprecated:: 2.6
30
31 The `attrs` keyword argument is replaced by `ident` and `children`
32 and will be removed in networkx 3.0
33
34 ident : string
35 Attribute name for storing NetworkX-internal graph data. `ident` must
36 have a different value than `children`. The default is 'id'.
37
38 children : string
39 Attribute name for storing NetworkX-internal graph data. `children`
40 must have a different value than `ident`. The default is 'children'.
41
42 Returns
43 -------
44 data : dict
45 A dictionary with node-link formatted data.
46
47 Raises
48 ------
49 NetworkXError
50 If `children` and `ident` attributes are identical.
51
52 Examples
53 --------
54 >>> from networkx.readwrite import json_graph
55 >>> G = nx.DiGraph([(1, 2)])
56 >>> data = json_graph.tree_data(G, root=1)
57
58 To serialize with json
59
60 >>> import json
61 >>> s = json.dumps(data)
62
63 Notes
64 -----
65 Node attributes are stored in this format but keys
66 for attributes must be strings if you want to serialize with JSON.
67
68 Graph and edge attributes are not stored.
69
70 See Also
71 --------
72 tree_graph, node_link_data, adjacency_data
73 """
74 if G.number_of_nodes() != G.number_of_edges() + 1:
75 raise TypeError("G is not a tree.")
76 if not G.is_directed():
77 raise TypeError("G is not directed.")
78
79 # NOTE: to be removed in 3.0
80 if attrs is not None:
81 import warnings
82
83 msg = (
84 "\nThe `attrs` keyword argument of tree_data is deprecated\n"
85 "and will be removed in networkx 3.0.\n"
86 "It is replaced with explicit `ident` and `children` "
87 "keyword arguments.\n"
88 "To make this warning go away and ensure usage is forward\n"
89 "compatible, replace `attrs` with `ident` and `children,\n"
90 "for example:\n\n"
91 " >>> tree_data(G, root, attrs={'id': 'foo', 'children': 'bar'})\n\n"
92 "should instead be written as\n\n"
93 " >>> tree_data(G, root, ident='foo', children='bar')\n\n"
94 "The default values of 'id' and 'children' will not change."
95 )
96 warnings.warn(msg, DeprecationWarning, stacklevel=2)
97
98 ident = attrs["id"]
99 children = attrs["children"]
100
101 if ident == children:
102 raise nx.NetworkXError("The values for `id` and `children` must be different.")
103
104 def add_children(n, G):
105 nbrs = G[n]
106 if len(nbrs) == 0:
107 return []
108 children_ = []
109 for child in nbrs:
110 d = dict(chain(G.nodes[child].items(), [(ident, child)]))
111 c = add_children(child, G)
112 if c:
113 d[children] = c
114 children_.append(d)
115 return children_
116
117 data = dict(chain(G.nodes[root].items(), [(ident, root)]))
118 data[children] = add_children(root, G)
119 return data
120
121
122 def tree_graph(data, attrs=None, ident="id", children="children"):
123 """Returns graph from tree data format.
124
125 Parameters
126 ----------
127 data : dict
128 Tree formatted graph data
129 attrs : dict
130 A dictionary that contains two keys 'id' and 'children'. The
131 corresponding values provide the attribute names for storing
132 NetworkX-internal graph data. The values should be unique. Default
133 value: :samp:`dict(id='id', children='children')`.
134
135 .. deprecated:: 2.6
136
137 The `attrs` keyword argument is replaced by `ident` and `children`
138 and will be removed in networkx 3.0
139
140 ident : string
141 Attribute name for storing NetworkX-internal graph data. `ident` must
142 have a different value than `children`. The default is 'id'.
143
144 children : string
145 Attribute name for storing NetworkX-internal graph data. `children`
146 must have a different value than `ident`. The default is 'children'.
147
148 Returns
149 -------
150 G : NetworkX DiGraph
151
152 Examples
153 --------
154 >>> from networkx.readwrite import json_graph
155 >>> G = nx.DiGraph([(1, 2)])
156 >>> data = json_graph.tree_data(G, root=1)
157 >>> H = json_graph.tree_graph(data)
158
159 See Also
160 --------
161 tree_data, node_link_data, adjacency_data
162 """
163 graph = nx.DiGraph()
164 if attrs is not None:
165 import warnings
166
167 msg = (
168 "\nThe `attrs` keyword argument of tree_graph is deprecated\n"
169 "and will be removed in networkx 3.0.\n"
170 "It is replaced with explicit `ident` and `children` "
171 "keyword arguments.\n"
172 "To make this warning go away and ensure usage is\n"
173 "forward compatible, replace `attrs` with `ident` and `children,\n"
174 "for example:\n\n"
175 " >>> tree_graph(data, attrs={'id': 'foo', 'children': 'bar'})\n\n"
176 "should instead be written as\n\n"
177 " >>> tree_graph(data, ident='foo', children='bar')\n\n"
178 "The default values of 'id' and 'children' will not change."
179 )
180 warnings.warn(msg, DeprecationWarning, stacklevel=2)
181
182 ident = attrs["id"]
183 children = attrs["children"]
184
185 def add_children(parent, children_):
186 for data in children_:
187 child = data[ident]
188 graph.add_edge(parent, child)
189 grandchildren = data.get(children, [])
190 if grandchildren:
191 add_children(child, grandchildren)
192 nodedata = {
193 str(k): v for k, v in data.items() if k != ident and k != children
194 }
195 graph.add_node(child, **nodedata)
196
197 root = data[ident]
198 children_ = data.get(children, [])
199 nodedata = {str(k): v for k, v in data.items() if k != ident and k != children}
200 graph.add_node(root, **nodedata)
201 add_children(root, children_)
202 return graph
203
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/networkx/readwrite/json_graph/tree.py b/networkx/readwrite/json_graph/tree.py
--- a/networkx/readwrite/json_graph/tree.py
+++ b/networkx/readwrite/json_graph/tree.py
@@ -75,6 +75,8 @@
raise TypeError("G is not a tree.")
if not G.is_directed():
raise TypeError("G is not directed.")
+ if not nx.is_weakly_connected(G):
+ raise TypeError("G is not weakly connected.")
# NOTE: to be removed in 3.0
if attrs is not None:
|
{"golden_diff": "diff --git a/networkx/readwrite/json_graph/tree.py b/networkx/readwrite/json_graph/tree.py\n--- a/networkx/readwrite/json_graph/tree.py\n+++ b/networkx/readwrite/json_graph/tree.py\n@@ -75,6 +75,8 @@\n raise TypeError(\"G is not a tree.\")\n if not G.is_directed():\n raise TypeError(\"G is not directed.\")\n+ if not nx.is_weakly_connected(G):\n+ raise TypeError(\"G is not weakly connected.\")\n \n # NOTE: to be removed in 3.0\n if attrs is not None:\n", "issue": "`json_graph.tree_data` can cause maximum recursion depth error.\n<!-- If you have a general question about NetworkX, please use the discussions tab to create a new discussion -->\r\n\r\n<!--- Provide a general summary of the issue in the Title above -->\r\n\r\n### Current Behavior\r\n<!--- Tell us what happens instead of the expected behavior -->\r\nCurrently the algorithm compares the `n_nodes` with `n_edges` to check if `G` is a tree. https://github.com/networkx/networkx/blob/0cc70051fa0a979b1f1eab4af5b6587a6ebf8334/networkx/readwrite/json_graph/tree.py#L74-L75 \r\nThis check can be bypassed with specific inputs and cause a recursion error.\r\n\r\n### Expected Behavior\r\n<!--- Tell us what should happen -->\r\nThe code should check whether there are cycles with `root` as the source and raise an exception.\r\nAnother possible fix would be to check if the graph is not weakly connected.\r\n\r\n### Steps to Reproduce\r\n<!--- Provide a minimal example that reproduces the bug -->\r\n```Python3\r\n>>> import networkx as nx\r\n>>> G = nx.DiGraph([(1, 2), (2, 3), (3, 1)])\r\n>>> G.add_node(4)\r\n>>> data = nx.json_graph.tree_data(G, 1)\r\nRecursionError: maximum recursion depth exceeded\r\n```\r\n\r\n### Environment\r\n<!--- Please provide details about your local environment -->\r\nPython version: 3.8.10\r\nNetworkX version: 2.7rc1.dev0\r\n\n", "before_files": [{"content": "from itertools import chain\nimport networkx as nx\n\n__all__ = [\"tree_data\", \"tree_graph\"]\n\n\n# NOTE: Remove attrs from signature in 3.0\ndef tree_data(G, root, attrs=None, ident=\"id\", children=\"children\"):\n \"\"\"Returns data in tree format that is suitable for JSON serialization\n and use in Javascript documents.\n\n Parameters\n ----------\n G : NetworkX graph\n G must be an oriented tree\n\n root : node\n The root of the tree\n\n attrs : dict\n A dictionary that contains two keys 'id' and 'children'. The\n corresponding values provide the attribute names for storing\n NetworkX-internal graph data. The values should be unique. Default\n value: :samp:`dict(id='id', children='children')`.\n\n If some user-defined graph data use these attribute names as data keys,\n they may be silently dropped.\n\n .. deprecated:: 2.6\n\n The `attrs` keyword argument is replaced by `ident` and `children`\n and will be removed in networkx 3.0\n\n ident : string\n Attribute name for storing NetworkX-internal graph data. `ident` must\n have a different value than `children`. The default is 'id'.\n\n children : string\n Attribute name for storing NetworkX-internal graph data. `children`\n must have a different value than `ident`. 
The default is 'children'.\n\n Returns\n -------\n data : dict\n A dictionary with node-link formatted data.\n\n Raises\n ------\n NetworkXError\n If `children` and `ident` attributes are identical.\n\n Examples\n --------\n >>> from networkx.readwrite import json_graph\n >>> G = nx.DiGraph([(1, 2)])\n >>> data = json_graph.tree_data(G, root=1)\n\n To serialize with json\n\n >>> import json\n >>> s = json.dumps(data)\n\n Notes\n -----\n Node attributes are stored in this format but keys\n for attributes must be strings if you want to serialize with JSON.\n\n Graph and edge attributes are not stored.\n\n See Also\n --------\n tree_graph, node_link_data, adjacency_data\n \"\"\"\n if G.number_of_nodes() != G.number_of_edges() + 1:\n raise TypeError(\"G is not a tree.\")\n if not G.is_directed():\n raise TypeError(\"G is not directed.\")\n\n # NOTE: to be removed in 3.0\n if attrs is not None:\n import warnings\n\n msg = (\n \"\\nThe `attrs` keyword argument of tree_data is deprecated\\n\"\n \"and will be removed in networkx 3.0.\\n\"\n \"It is replaced with explicit `ident` and `children` \"\n \"keyword arguments.\\n\"\n \"To make this warning go away and ensure usage is forward\\n\"\n \"compatible, replace `attrs` with `ident` and `children,\\n\"\n \"for example:\\n\\n\"\n \" >>> tree_data(G, root, attrs={'id': 'foo', 'children': 'bar'})\\n\\n\"\n \"should instead be written as\\n\\n\"\n \" >>> tree_data(G, root, ident='foo', children='bar')\\n\\n\"\n \"The default values of 'id' and 'children' will not change.\"\n )\n warnings.warn(msg, DeprecationWarning, stacklevel=2)\n\n ident = attrs[\"id\"]\n children = attrs[\"children\"]\n\n if ident == children:\n raise nx.NetworkXError(\"The values for `id` and `children` must be different.\")\n\n def add_children(n, G):\n nbrs = G[n]\n if len(nbrs) == 0:\n return []\n children_ = []\n for child in nbrs:\n d = dict(chain(G.nodes[child].items(), [(ident, child)]))\n c = add_children(child, G)\n if c:\n d[children] = c\n children_.append(d)\n return children_\n\n data = dict(chain(G.nodes[root].items(), [(ident, root)]))\n data[children] = add_children(root, G)\n return data\n\n\ndef tree_graph(data, attrs=None, ident=\"id\", children=\"children\"):\n \"\"\"Returns graph from tree data format.\n\n Parameters\n ----------\n data : dict\n Tree formatted graph data\n attrs : dict\n A dictionary that contains two keys 'id' and 'children'. The\n corresponding values provide the attribute names for storing\n NetworkX-internal graph data. The values should be unique. Default\n value: :samp:`dict(id='id', children='children')`.\n\n .. deprecated:: 2.6\n\n The `attrs` keyword argument is replaced by `ident` and `children`\n and will be removed in networkx 3.0\n\n ident : string\n Attribute name for storing NetworkX-internal graph data. `ident` must\n have a different value than `children`. The default is 'id'.\n\n children : string\n Attribute name for storing NetworkX-internal graph data. `children`\n must have a different value than `ident`. 
The default is 'children'.\n\n Returns\n -------\n G : NetworkX DiGraph\n\n Examples\n --------\n >>> from networkx.readwrite import json_graph\n >>> G = nx.DiGraph([(1, 2)])\n >>> data = json_graph.tree_data(G, root=1)\n >>> H = json_graph.tree_graph(data)\n\n See Also\n --------\n tree_data, node_link_data, adjacency_data\n \"\"\"\n graph = nx.DiGraph()\n if attrs is not None:\n import warnings\n\n msg = (\n \"\\nThe `attrs` keyword argument of tree_graph is deprecated\\n\"\n \"and will be removed in networkx 3.0.\\n\"\n \"It is replaced with explicit `ident` and `children` \"\n \"keyword arguments.\\n\"\n \"To make this warning go away and ensure usage is\\n\"\n \"forward compatible, replace `attrs` with `ident` and `children,\\n\"\n \"for example:\\n\\n\"\n \" >>> tree_graph(data, attrs={'id': 'foo', 'children': 'bar'})\\n\\n\"\n \"should instead be written as\\n\\n\"\n \" >>> tree_graph(data, ident='foo', children='bar')\\n\\n\"\n \"The default values of 'id' and 'children' will not change.\"\n )\n warnings.warn(msg, DeprecationWarning, stacklevel=2)\n\n ident = attrs[\"id\"]\n children = attrs[\"children\"]\n\n def add_children(parent, children_):\n for data in children_:\n child = data[ident]\n graph.add_edge(parent, child)\n grandchildren = data.get(children, [])\n if grandchildren:\n add_children(child, grandchildren)\n nodedata = {\n str(k): v for k, v in data.items() if k != ident and k != children\n }\n graph.add_node(child, **nodedata)\n\n root = data[ident]\n children_ = data.get(children, [])\n nodedata = {str(k): v for k, v in data.items() if k != ident and k != children}\n graph.add_node(root, **nodedata)\n add_children(root, children_)\n return graph\n", "path": "networkx/readwrite/json_graph/tree.py"}], "after_files": [{"content": "from itertools import chain\nimport networkx as nx\n\n__all__ = [\"tree_data\", \"tree_graph\"]\n\n\n# NOTE: Remove attrs from signature in 3.0\ndef tree_data(G, root, attrs=None, ident=\"id\", children=\"children\"):\n \"\"\"Returns data in tree format that is suitable for JSON serialization\n and use in Javascript documents.\n\n Parameters\n ----------\n G : NetworkX graph\n G must be an oriented tree\n\n root : node\n The root of the tree\n\n attrs : dict\n A dictionary that contains two keys 'id' and 'children'. The\n corresponding values provide the attribute names for storing\n NetworkX-internal graph data. The values should be unique. Default\n value: :samp:`dict(id='id', children='children')`.\n\n If some user-defined graph data use these attribute names as data keys,\n they may be silently dropped.\n\n .. deprecated:: 2.6\n\n The `attrs` keyword argument is replaced by `ident` and `children`\n and will be removed in networkx 3.0\n\n ident : string\n Attribute name for storing NetworkX-internal graph data. `ident` must\n have a different value than `children`. The default is 'id'.\n\n children : string\n Attribute name for storing NetworkX-internal graph data. `children`\n must have a different value than `ident`. 
The default is 'children'.\n\n Returns\n -------\n data : dict\n A dictionary with node-link formatted data.\n\n Raises\n ------\n NetworkXError\n If `children` and `ident` attributes are identical.\n\n Examples\n --------\n >>> from networkx.readwrite import json_graph\n >>> G = nx.DiGraph([(1, 2)])\n >>> data = json_graph.tree_data(G, root=1)\n\n To serialize with json\n\n >>> import json\n >>> s = json.dumps(data)\n\n Notes\n -----\n Node attributes are stored in this format but keys\n for attributes must be strings if you want to serialize with JSON.\n\n Graph and edge attributes are not stored.\n\n See Also\n --------\n tree_graph, node_link_data, adjacency_data\n \"\"\"\n if G.number_of_nodes() != G.number_of_edges() + 1:\n raise TypeError(\"G is not a tree.\")\n if not G.is_directed():\n raise TypeError(\"G is not directed.\")\n if not nx.is_weakly_connected(G):\n raise TypeError(\"G is not weakly connected.\")\n\n # NOTE: to be removed in 3.0\n if attrs is not None:\n import warnings\n\n msg = (\n \"\\nThe `attrs` keyword argument of tree_data is deprecated\\n\"\n \"and will be removed in networkx 3.0.\\n\"\n \"It is replaced with explicit `ident` and `children` \"\n \"keyword arguments.\\n\"\n \"To make this warning go away and ensure usage is forward\\n\"\n \"compatible, replace `attrs` with `ident` and `children,\\n\"\n \"for example:\\n\\n\"\n \" >>> tree_data(G, root, attrs={'id': 'foo', 'children': 'bar'})\\n\\n\"\n \"should instead be written as\\n\\n\"\n \" >>> tree_data(G, root, ident='foo', children='bar')\\n\\n\"\n \"The default values of 'id' and 'children' will not change.\"\n )\n warnings.warn(msg, DeprecationWarning, stacklevel=2)\n\n ident = attrs[\"id\"]\n children = attrs[\"children\"]\n\n if ident == children:\n raise nx.NetworkXError(\"The values for `id` and `children` must be different.\")\n\n def add_children(n, G):\n nbrs = G[n]\n if len(nbrs) == 0:\n return []\n children_ = []\n for child in nbrs:\n d = dict(chain(G.nodes[child].items(), [(ident, child)]))\n c = add_children(child, G)\n if c:\n d[children] = c\n children_.append(d)\n return children_\n\n data = dict(chain(G.nodes[root].items(), [(ident, root)]))\n data[children] = add_children(root, G)\n return data\n\n\ndef tree_graph(data, attrs=None, ident=\"id\", children=\"children\"):\n \"\"\"Returns graph from tree data format.\n\n Parameters\n ----------\n data : dict\n Tree formatted graph data\n attrs : dict\n A dictionary that contains two keys 'id' and 'children'. The\n corresponding values provide the attribute names for storing\n NetworkX-internal graph data. The values should be unique. Default\n value: :samp:`dict(id='id', children='children')`.\n\n .. deprecated:: 2.6\n\n The `attrs` keyword argument is replaced by `ident` and `children`\n and will be removed in networkx 3.0\n\n ident : string\n Attribute name for storing NetworkX-internal graph data. `ident` must\n have a different value than `children`. The default is 'id'.\n\n children : string\n Attribute name for storing NetworkX-internal graph data. `children`\n must have a different value than `ident`. 
The default is 'children'.\n\n Returns\n -------\n G : NetworkX DiGraph\n\n Examples\n --------\n >>> from networkx.readwrite import json_graph\n >>> G = nx.DiGraph([(1, 2)])\n >>> data = json_graph.tree_data(G, root=1)\n >>> H = json_graph.tree_graph(data)\n\n See Also\n --------\n tree_data, node_link_data, adjacency_data\n \"\"\"\n graph = nx.DiGraph()\n if attrs is not None:\n import warnings\n\n msg = (\n \"\\nThe `attrs` keyword argument of tree_graph is deprecated\\n\"\n \"and will be removed in networkx 3.0.\\n\"\n \"It is replaced with explicit `ident` and `children` \"\n \"keyword arguments.\\n\"\n \"To make this warning go away and ensure usage is\\n\"\n \"forward compatible, replace `attrs` with `ident` and `children,\\n\"\n \"for example:\\n\\n\"\n \" >>> tree_graph(data, attrs={'id': 'foo', 'children': 'bar'})\\n\\n\"\n \"should instead be written as\\n\\n\"\n \" >>> tree_graph(data, ident='foo', children='bar')\\n\\n\"\n \"The default values of 'id' and 'children' will not change.\"\n )\n warnings.warn(msg, DeprecationWarning, stacklevel=2)\n\n ident = attrs[\"id\"]\n children = attrs[\"children\"]\n\n def add_children(parent, children_):\n for data in children_:\n child = data[ident]\n graph.add_edge(parent, child)\n grandchildren = data.get(children, [])\n if grandchildren:\n add_children(child, grandchildren)\n nodedata = {\n str(k): v for k, v in data.items() if k != ident and k != children\n }\n graph.add_node(child, **nodedata)\n\n root = data[ident]\n children_ = data.get(children, [])\n nodedata = {str(k): v for k, v in data.items() if k != ident and k != children}\n graph.add_node(root, **nodedata)\n add_children(root, children_)\n return graph\n", "path": "networkx/readwrite/json_graph/tree.py"}]}
| 2,703 | 127 |
gh_patches_debug_63956 | rasdani/github-patches | git_diff | redis__redis-py-1780 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Module installation fails due to missing dependency
https://github.com/redis/redis-py/blob/039488d97ec545b37e903d1b791a88bac8f77973/redis/connection.py#L1
the deprecated distutils was replaced with the packaging module as part of release v4.0.0b1
packaging is not a builtin python module but was not added to setup.py as a dependency which causes applications that require redis-py to fail if packaging isn't already installed on the machine.
the packaging module should probably be added as a dependency in setup.py to resolve this
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 from setuptools import find_packages, setup
3
4 import redis
5
6 setup(
7 name="redis",
8 description="Python client for Redis database and key-value store",
9 long_description=open("README.md").read().strip(),
10 long_description_content_type="text/markdown",
11 keywords=["Redis", "key-value store", "database"],
12 license="MIT",
13 version=redis.__version__,
14 packages=find_packages(
15 include=[
16 "redis",
17 "redis.commands",
18 "redis.commands.bf",
19 "redis.commands.json",
20 "redis.commands.search",
21 "redis.commands.timeseries",
22 "redis.commands.graph",
23 ]
24 ),
25 url="https://github.com/redis/redis-py",
26 author="Redis Inc.",
27 author_email="[email protected]",
28 python_requires=">=3.6",
29 install_requires=[
30 "deprecated==1.2.3",
31 "packaging==21.3",
32 ],
33 classifiers=[
34 "Development Status :: 5 - Production/Stable",
35 "Environment :: Console",
36 "Intended Audience :: Developers",
37 "License :: OSI Approved :: MIT License",
38 "Operating System :: OS Independent",
39 "Programming Language :: Python",
40 "Programming Language :: Python :: 3",
41 "Programming Language :: Python :: 3 :: Only",
42 "Programming Language :: Python :: 3.6",
43 "Programming Language :: Python :: 3.7",
44 "Programming Language :: Python :: 3.8",
45 "Programming Language :: Python :: 3.9",
46 "Programming Language :: Python :: 3.10",
47 "Programming Language :: Python :: Implementation :: CPython",
48 "Programming Language :: Python :: Implementation :: PyPy",
49 ],
50 extras_require={
51 "hiredis": ["hiredis>=1.0.0"],
52 },
53 )
54
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -26,9 +26,12 @@
author="Redis Inc.",
author_email="[email protected]",
python_requires=">=3.6",
+ setup_requires=[
+ "packaging>=21.3",
+ ],
install_requires=[
- "deprecated==1.2.3",
- "packaging==21.3",
+ "deprecated>=1.2.3",
+ "packaging>=21.3",
],
classifiers=[
"Development Status :: 5 - Production/Stable",
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -26,9 +26,12 @@\n author=\"Redis Inc.\",\n author_email=\"[email protected]\",\n python_requires=\">=3.6\",\n+ setup_requires=[\n+ \"packaging>=21.3\",\n+ ],\n install_requires=[\n- \"deprecated==1.2.3\",\n- \"packaging==21.3\",\n+ \"deprecated>=1.2.3\",\n+ \"packaging>=21.3\",\n ],\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n", "issue": "Module installation fails due to missing dependency\nhttps://github.com/redis/redis-py/blob/039488d97ec545b37e903d1b791a88bac8f77973/redis/connection.py#L1\r\nthe deprecated distutils was replaced with the packaging module as part of release v4.0.0b1\r\npackaging is not a builtin python module but was not added to setup.py as a dependency which causes applications that require redis-py to fail if packaging isn't already installed on the machine.\r\nthe packaging module should probably be added as a dependency in setup.py to resolve this\n", "before_files": [{"content": "#!/usr/bin/env python\nfrom setuptools import find_packages, setup\n\nimport redis\n\nsetup(\n name=\"redis\",\n description=\"Python client for Redis database and key-value store\",\n long_description=open(\"README.md\").read().strip(),\n long_description_content_type=\"text/markdown\",\n keywords=[\"Redis\", \"key-value store\", \"database\"],\n license=\"MIT\",\n version=redis.__version__,\n packages=find_packages(\n include=[\n \"redis\",\n \"redis.commands\",\n \"redis.commands.bf\",\n \"redis.commands.json\",\n \"redis.commands.search\",\n \"redis.commands.timeseries\",\n \"redis.commands.graph\",\n ]\n ),\n url=\"https://github.com/redis/redis-py\",\n author=\"Redis Inc.\",\n author_email=\"[email protected]\",\n python_requires=\">=3.6\",\n install_requires=[\n \"deprecated==1.2.3\",\n \"packaging==21.3\",\n ],\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n ],\n extras_require={\n \"hiredis\": [\"hiredis>=1.0.0\"],\n },\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nfrom setuptools import find_packages, setup\n\nimport redis\n\nsetup(\n name=\"redis\",\n description=\"Python client for Redis database and key-value store\",\n long_description=open(\"README.md\").read().strip(),\n long_description_content_type=\"text/markdown\",\n keywords=[\"Redis\", \"key-value store\", \"database\"],\n license=\"MIT\",\n version=redis.__version__,\n packages=find_packages(\n include=[\n \"redis\",\n \"redis.commands\",\n \"redis.commands.bf\",\n \"redis.commands.json\",\n \"redis.commands.search\",\n \"redis.commands.timeseries\",\n \"redis.commands.graph\",\n ]\n ),\n url=\"https://github.com/redis/redis-py\",\n author=\"Redis Inc.\",\n author_email=\"[email protected]\",\n python_requires=\">=3.6\",\n setup_requires=[\n \"packaging>=21.3\",\n ],\n install_requires=[\n \"deprecated>=1.2.3\",\n 
\"packaging>=21.3\",\n ],\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n ],\n extras_require={\n \"hiredis\": [\"hiredis>=1.0.0\"],\n },\n)\n", "path": "setup.py"}]}
| 889 | 141 |
gh_patches_debug_560
|
rasdani/github-patches
|
git_diff
|
ethereum__consensus-specs-1130
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
BLS and testing
Decided I wanted to get this out to explain the current state of testing, and **collect feedback** (implementers please comment) on what you need from testing, and your feelings about BLS usage in tests.
# BLS and testing
The two pain-points to get a pretty (and large) set of test-vectors out for clients are:
- BLS Signature creation
- BLS Signature verification
And side-issue, but easily resolved:
*efficient creation of a genesis state*:
When BLS functionality is implemented in test-code (creation of signed deposits, and verification).
Solution would be to either cache it, or create it directly, without going through the spec functions (current temporary solution on experiment branch).
## Status
Talking about the status on [`spectest-deco` PR 1052](https://github.com/ethereum/eth2.0-specs/pull/1052) here, based on the `v06x` branch, where we are developing 0.6 improvements. (to be merged back into dev later)
### The testing pipeline currently looks like:
- py-spec, calls BLS stub
- test-helpers, don't create self-signed objects with valid signatures
- py-test code, unified with test-vector-creation (see [PR 1052](https://github.com/ethereum/eth2.0-specs/pull/1052))
- py-test runner to run spec-tests, purely for assertions
- test-generator running the spec-tests, passing `generator_mode=true` to each of them, making them output a test-vector.
### Pytests status:
- move from `tests/` to `eth2spec/test`, i.e. part of package
- removed use of `pytest`
- annotated with `@spec_test` or similar (see PR 1052)
- as part of test-generation effort, yay for shared effort:
- expanded in block-operation testing: [coverage checklist here](https://github.com/ethereum/eth2.0-specs/issues/927)
- slightly faster, less deep-copies
- stuck on BLS stub (no sig creation/verification)
### Test-generation status:
- BLS, SSZ-generic, SSZ-static, shuffling test generators still all in place and up to date (`v06x` branch)
- `operations` test-gen uses test-package ability to output test-vectors for each test-case
- but no valid signatures
- lack of a definition how to handle this signature problem as a test-consumer
- there are no signature-related testcases
- turning BLS off would effectively let you check conformance, but it's hacky, and not remotely a good practice to have even an option for...
- it's approx. ~140MB worth (iirc) of yaml encoded state-transitions, covering many edge-cases. Worth to get in the hands of implementers quick.
- `sanity` tests updated and can be cleanly used for test-generation, but requires more work to define the format of the test-vectors, as they is more variety.
- `epoch` processing tests also updated, also can be used, not as complete as block-processing, lower priority.
## Possible ways forward:
- Simple but hacky: "turn BLS off for testing"
- No "BLS off", BLS ON on client side, but only partially on spec side. Rely on signature verification not being hit before anything else during testing
- valid test cases generated with valid signatures
- invalid test cases marked: does it error because of BLS? And runners should check the reason for aborting processing: if it doesn't match, the test should fail. Now these pytests don't need full BLS update work, and can be released somewhat quicker
- "BLS on", more work (~1 week)
- slower on test-generation, but we get the best kind of test-vectors: correct, BLS verification ON.
- blocker: what if a test case fails because of a signature error (test setup not creating the sig correctly), instead of a real assertion case. Spec will look correct, passes tests, but things are not right. We need to mark Sig-verification errors distinctly, so we can catch these problems when we turn BLS on in the pyspec. How: instead of `assert verify_...`, just `verify_...`, and make it raise a special `BLSVerificationError` (or something like that)
- We likely still want to mark tests as "signature related" or not, so implementers can catch it easily if their code is not aborting properly before signature verification, to assure invalid inputs are not costly.
A work-in-progress introduction of actual full BLS usage in the pytests is started here: [`tests-with-sigs` branch](https://github.com/ethereum/eth2.0-specs/tree/tests-with-sigs)
Suggestions welcome.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scripts/phase0/build_spec.py`
Content:
```
1 import sys
2 import function_puller
3
4
5 def build_phase0_spec(sourcefile, outfile):
6 code_lines = []
7 code_lines.append("""
8 from typing import (
9 Any,
10 Dict,
11 List,
12 NewType,
13 Tuple,
14 )
15 from eth2spec.utils.minimal_ssz import *
16 from eth2spec.utils.bls_stub import *
17
18 """)
19 for i in (1, 2, 3, 4, 8, 32, 48, 96):
20 code_lines.append("def int_to_bytes%d(x): return x.to_bytes(%d, 'little')" % (i, i))
21
22 code_lines.append("""
23
24 # stub, will get overwritten by real var
25 SLOTS_PER_EPOCH = 64
26
27
28 Slot = NewType('Slot', int) # uint64
29 Epoch = NewType('Epoch', int) # uint64
30 Shard = NewType('Shard', int) # uint64
31 ValidatorIndex = NewType('ValidatorIndex', int) # uint64
32 Gwei = NewType('Gwei', int) # uint64
33 Bytes32 = NewType('Bytes32', bytes) # bytes32
34 BLSPubkey = NewType('BLSPubkey', bytes) # bytes48
35 BLSSignature = NewType('BLSSignature', bytes) # bytes96
36 Store = None
37 """)
38
39 code_lines += function_puller.get_spec(sourcefile)
40
41 code_lines.append("""
42 # Monkey patch validator compute committee code
43 _compute_committee = compute_committee
44 committee_cache = {}
45
46
47 def compute_committee(indices: List[ValidatorIndex], seed: Bytes32, index: int, count: int) -> List[ValidatorIndex]:
48 param_hash = (hash_tree_root(indices), seed, index, count)
49
50 if param_hash in committee_cache:
51 return committee_cache[param_hash]
52 else:
53 ret = _compute_committee(indices, seed, index, count)
54 committee_cache[param_hash] = ret
55 return ret
56
57
58 # Monkey patch hash cache
59 _hash = hash
60 hash_cache = {}
61
62
63 def hash(x):
64 if x in hash_cache:
65 return hash_cache[x]
66 else:
67 ret = _hash(x)
68 hash_cache[x] = ret
69 return ret
70
71 # Access to overwrite spec constants based on configuration
72 def apply_constants_preset(preset: Dict[str, Any]):
73 global_vars = globals()
74 for k, v in preset.items():
75 global_vars[k] = v
76
77 # Deal with derived constants
78 global_vars['GENESIS_EPOCH'] = slot_to_epoch(GENESIS_SLOT)
79
80 # Initialize SSZ types again, to account for changed lengths
81 init_SSZ_types()
82 """)
83
84 with open(outfile, 'w') as out:
85 out.write("\n".join(code_lines))
86
87
88 if __name__ == '__main__':
89 if len(sys.argv) < 3:
90 print("Usage: <source phase0> <output phase0 pyspec>")
91 build_phase0_spec(sys.argv[1], sys.argv[2])
92
93
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/scripts/phase0/build_spec.py b/scripts/phase0/build_spec.py
--- a/scripts/phase0/build_spec.py
+++ b/scripts/phase0/build_spec.py
@@ -13,7 +13,7 @@
Tuple,
)
from eth2spec.utils.minimal_ssz import *
-from eth2spec.utils.bls_stub import *
+from eth2spec.utils.bls import *
""")
for i in (1, 2, 3, 4, 8, 32, 48, 96):
|
{"golden_diff": "diff --git a/scripts/phase0/build_spec.py b/scripts/phase0/build_spec.py\n--- a/scripts/phase0/build_spec.py\n+++ b/scripts/phase0/build_spec.py\n@@ -13,7 +13,7 @@\n Tuple,\n )\n from eth2spec.utils.minimal_ssz import *\n-from eth2spec.utils.bls_stub import *\n+from eth2spec.utils.bls import *\n \n \"\"\")\n for i in (1, 2, 3, 4, 8, 32, 48, 96):\n", "issue": "BLS and testing\nDecided I wanted to get this out to explain the current state of testing, and **collect feedback** (implementers please comment) on what you need from testing, and your feelings about BLS usage in tests.\r\n\r\n# BLS and testing\r\n\r\nThe two pain-points to get a pretty (and large) set of test-vectors out for clients are:\r\n- BLS Signature creation\r\n- BLS Signature verification\r\n\r\nAnd side-issue, but easily resolved:\r\n*efficient creation of a genesis state*:\r\nWhen BLS functionality is implemented in test-code (creation of signed deposits, and verification).\r\nSolution would be to either cache it, or create it directly, without going through the spec functions (current temporary solution on experiment branch).\r\n\r\n## Status\r\n\r\nTalking about the status on [`spectest-deco` PR 1052](https://github.com/ethereum/eth2.0-specs/pull/1052) here, based on the `v06x` branch, where we are developing 0.6 improvements. (to be merged back into dev later)\r\n\r\n### The testing pipeline currently looks like:\r\n\r\n- py-spec, calls BLS stub\r\n- test-helpers, don't create self-signed objects with valid signatures\r\n- py-test code, unified with test-vector-creation (see [PR 1052](https://github.com/ethereum/eth2.0-specs/pull/1052))\r\n- py-test runner to run spec-tests, purely for assertions\r\n- test-generator running the spec-tests, passing `generator_mode=true` to each of them, making them output a test-vector.\r\n\r\n### Pytests status:\r\n\r\n- move from `tests/` to `eth2spec/test`, i.e. part of package\r\n - removed use of `pytest`\r\n - annotated with `@spec_test` or similar (see PR 1052)\r\n- as part of test-generation effort, yay for shared effort:\r\n - expanded in block-operation testing: [coverage checklist here](https://github.com/ethereum/eth2.0-specs/issues/927)\r\n - slightly faster, less deep-copies\r\n- stuck on BLS stub (no sig creation/verification)\r\n\r\n### Test-generation status:\r\n\r\n- BLS, SSZ-generic, SSZ-static, shuffling test generators still all in place and up to date (`v06x` branch)\r\n- `operations` test-gen uses test-package ability to output test-vectors for each test-case\r\n - but no valid signatures\r\n - lack of a definition how to handle this signature problem as a test-consumer\r\n - there are no signature-related testcases\r\n - turning BLS off would effectively let you check conformance, but it's hacky, and not remotely a good practice to have even an option for...\r\n - it's approx. ~140MB worth (iirc) of yaml encoded state-transitions, covering many edge-cases. Worth to get in the hands of implementers quick.\r\n- `sanity` tests updated and can be cleanly used for test-generation, but requires more work to define the format of the test-vectors, as they is more variety.\r\n- `epoch` processing tests also updated, also can be used, not as complete as block-processing, lower priority.\r\n\r\n## Possible ways forward:\r\n\r\n- Simple but hacky: \"turn BLS off for testing\"\r\n- No \"BLS off\", BLS ON on client side, but only partially on spec side. 
Rely on signature verification not being hit before anything else during testing\r\n - valid test cases generated with valid signatures\r\n - invalid test cases marked: does it error because of BLS? And runners should check the reason for aborting processing: if it doesn't match, the test should fail. Now these pytests don't need full BLS update work, and can be released somewhat quicker\r\n- \"BLS on\", more work (~1 week)\r\n - slower on test-generation, but we get the best kind of test-vectors: correct, BLS verification ON.\r\n - blocker: what if a test case fails because of a signature error (test setup not creating the sig correctly), instead of a real assertion case. Spec will look correct, passes tests, but things are not right. We need to mark Sig-verification errors distinctly, so we can catch these problems when we turn BLS on in the pyspec. How: instead of `assert verify_...`, just `verify_...`, and make it raise a special `BLSVerificationError` (or something like that)\r\n - We likely still want to mark tests as \"signature related\" or not, so implementers can catch it easily if their code is not aborting properly before signature verification, to assure invalid inputs are not costly.\r\n\r\nA work-in-progress introduction of actual full BLS usage in the pytests is started here: [`tests-with-sigs` branch](https://github.com/ethereum/eth2.0-specs/tree/tests-with-sigs)\r\n\r\nSuggestions welcome.\r\n\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "import sys\nimport function_puller\n\n\ndef build_phase0_spec(sourcefile, outfile):\n code_lines = []\n code_lines.append(\"\"\"\nfrom typing import (\n Any,\n Dict,\n List,\n NewType,\n Tuple,\n)\nfrom eth2spec.utils.minimal_ssz import *\nfrom eth2spec.utils.bls_stub import *\n\n\"\"\")\n for i in (1, 2, 3, 4, 8, 32, 48, 96):\n code_lines.append(\"def int_to_bytes%d(x): return x.to_bytes(%d, 'little')\" % (i, i))\n\n code_lines.append(\"\"\"\n\n# stub, will get overwritten by real var\nSLOTS_PER_EPOCH = 64\n\n\nSlot = NewType('Slot', int) # uint64\nEpoch = NewType('Epoch', int) # uint64\nShard = NewType('Shard', int) # uint64\nValidatorIndex = NewType('ValidatorIndex', int) # uint64\nGwei = NewType('Gwei', int) # uint64\nBytes32 = NewType('Bytes32', bytes) # bytes32\nBLSPubkey = NewType('BLSPubkey', bytes) # bytes48\nBLSSignature = NewType('BLSSignature', bytes) # bytes96\nStore = None\n\"\"\")\n\n code_lines += function_puller.get_spec(sourcefile)\n\n code_lines.append(\"\"\"\n# Monkey patch validator compute committee code\n_compute_committee = compute_committee\ncommittee_cache = {}\n\n\ndef compute_committee(indices: List[ValidatorIndex], seed: Bytes32, index: int, count: int) -> List[ValidatorIndex]:\n param_hash = (hash_tree_root(indices), seed, index, count)\n\n if param_hash in committee_cache:\n return committee_cache[param_hash]\n else:\n ret = _compute_committee(indices, seed, index, count)\n committee_cache[param_hash] = ret\n return ret\n\n\n# Monkey patch hash cache\n_hash = hash\nhash_cache = {}\n\n\ndef hash(x):\n if x in hash_cache:\n return hash_cache[x]\n else:\n ret = _hash(x)\n hash_cache[x] = ret\n return ret\n\n# Access to overwrite spec constants based on configuration\ndef apply_constants_preset(preset: Dict[str, Any]):\n global_vars = globals()\n for k, v in preset.items():\n global_vars[k] = v\n\n # Deal with derived constants\n global_vars['GENESIS_EPOCH'] = slot_to_epoch(GENESIS_SLOT)\n\n # Initialize SSZ types again, to account for changed lengths\n init_SSZ_types()\n\"\"\")\n\n with open(outfile, 'w') 
as out:\n out.write(\"\\n\".join(code_lines))\n\n\nif __name__ == '__main__':\n if len(sys.argv) < 3:\n print(\"Usage: <source phase0> <output phase0 pyspec>\")\n build_phase0_spec(sys.argv[1], sys.argv[2])\n\n", "path": "scripts/phase0/build_spec.py"}], "after_files": [{"content": "import sys\nimport function_puller\n\n\ndef build_phase0_spec(sourcefile, outfile):\n code_lines = []\n code_lines.append(\"\"\"\nfrom typing import (\n Any,\n Dict,\n List,\n NewType,\n Tuple,\n)\nfrom eth2spec.utils.minimal_ssz import *\nfrom eth2spec.utils.bls import *\n\n\"\"\")\n for i in (1, 2, 3, 4, 8, 32, 48, 96):\n code_lines.append(\"def int_to_bytes%d(x): return x.to_bytes(%d, 'little')\" % (i, i))\n\n code_lines.append(\"\"\"\n\n# stub, will get overwritten by real var\nSLOTS_PER_EPOCH = 64\n\n\nSlot = NewType('Slot', int) # uint64\nEpoch = NewType('Epoch', int) # uint64\nShard = NewType('Shard', int) # uint64\nValidatorIndex = NewType('ValidatorIndex', int) # uint64\nGwei = NewType('Gwei', int) # uint64\nBytes32 = NewType('Bytes32', bytes) # bytes32\nBLSPubkey = NewType('BLSPubkey', bytes) # bytes48\nBLSSignature = NewType('BLSSignature', bytes) # bytes96\nStore = None\n\"\"\")\n\n code_lines += function_puller.get_spec(sourcefile)\n\n code_lines.append(\"\"\"\n# Monkey patch validator compute committee code\n_compute_committee = compute_committee\ncommittee_cache = {}\n\n\ndef compute_committee(indices: List[ValidatorIndex], seed: Bytes32, index: int, count: int) -> List[ValidatorIndex]:\n param_hash = (hash_tree_root(indices), seed, index, count)\n\n if param_hash in committee_cache:\n return committee_cache[param_hash]\n else:\n ret = _compute_committee(indices, seed, index, count)\n committee_cache[param_hash] = ret\n return ret\n\n\n# Monkey patch hash cache\n_hash = hash\nhash_cache = {}\n\n\ndef hash(x):\n if x in hash_cache:\n return hash_cache[x]\n else:\n ret = _hash(x)\n hash_cache[x] = ret\n return ret\n\n# Access to overwrite spec constants based on configuration\ndef apply_constants_preset(preset: Dict[str, Any]):\n global_vars = globals()\n for k, v in preset.items():\n global_vars[k] = v\n\n # Deal with derived constants\n global_vars['GENESIS_EPOCH'] = slot_to_epoch(GENESIS_SLOT)\n\n # Initialize SSZ types again, to account for changed lengths\n init_SSZ_types()\n\"\"\")\n\n with open(outfile, 'w') as out:\n out.write(\"\\n\".join(code_lines))\n\n\nif __name__ == '__main__':\n if len(sys.argv) < 3:\n print(\"Usage: <source phase0> <output phase0 pyspec>\")\n build_phase0_spec(sys.argv[1], sys.argv[2])\n\n", "path": "scripts/phase0/build_spec.py"}]}
| 2,161 | 121 |
gh_patches_debug_20147
|
rasdani/github-patches
|
git_diff
|
kartoza__prj.app-447
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
After creating a new organization it should appear in the pending approval menu
Please make sure if a user adds an organization the Pending Approval menu is updated
http://staging.changelog.qgis.org/en/qgis/pending-certifyingorganisation/list/
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `django_project/core/custom_middleware.py`
Content:
```
1 # coding=utf-8
2 # flake8: noqa
3 """
4 core.custom_middleware
5 """
6 from base.models import Project, Version
7 from changes.models import Category, SponsorshipLevel, SponsorshipPeriod, Entry
8
9
10 class NavContextMiddleware(object):
11 """
12 Adds the required navigation variables to each response
13 """
14
15 def __init__(self):
16 pass
17
18 @staticmethod
19 def process_template_response(request, response):
20 """
21 Add 'the_project', 'the_entry', 'the_version' to context for the
22 navigation.
23
24 Justification: To make the navigation functional, we need to know
25 which Project (or Version, Committee etc) the current context
26 relates to. This is required for URLs. Rather than include lots of
27 if/else in the navigation template, it seems cleaner to add the
28 above variables to the context here.
29
30 :param request: Http Request obj
31 :param response: Http Response obj
32 :return: context :rtype: dict
33 """
34 context = response.context_data
35
36 if context.get('project', None):
37 context['the_project'] = context.get('project')
38 versions = Version.objects.filter(project=context.get('project'))
39 context['has_pending_versions'] = (
40 Version.unapproved_objects.filter(
41 project=context.get('project')).exists())
42 context['has_pending_categories'] = (
43 Category.unapproved_objects.filter(
44 project=context.get('project')).exists())
45 context['has_pending_sponsor_lvl'] = (
46 SponsorshipLevel.unapproved_objects.filter(
47 project=context.get('project')).exists())
48 context['has_pending_sponsor_period'] = (
49 SponsorshipPeriod.unapproved_objects.filter(
50 project=context.get('project')).exists())
51 if versions:
52 context['has_pending_entries'] = (
53 Entry.unapproved_objects.filter(
54 version__in=versions).exists())
55
56 else:
57 if request.user.is_staff:
58 context['the_projects'] = Project.objects.all()
59 else:
60 context['the_projects'] = Project.approved_objects.filter(
61 private=False
62 )
63
64 if context.get('version', None):
65 context['the_version'] = context.get('version')
66 context['the_project'] = context.get('version').project
67
68 if context.get('committee', None):
69 context['the_committee'] = context.get('committee')
70 context['the_project'] = context.get('committee').project
71
72 if context.get('ballot', None):
73 context['the_committee'] = context.get('ballot').committee
74 context['the_project'] = context.get('ballot').committee.project
75
76 if context.get('category', None):
77 context['the_project'] = context.get('category').project
78
79 if context.get('ballots', None):
80 try:
81 context['the_project'] = \
82 context.get('ballots')[0].committee.project
83 except (KeyError, IndexError):
84 pass
85
86 if context.get('entry', None):
87 context['the_entry'] = context.get('entry')
88 context['the_version'] = context.get('entry').version
89 context['the_project'] = context.get('entry').version.project
90
91 if context.get('committees', None):
92 try:
93 context['the_project'] = context.get('committees')[0].project
94 except (KeyError, IndexError):
95 pass
96
97 if context.get('versions', None):
98 try:
99 context['the_project'] = context.get('versions')[0].project
100 except (KeyError, IndexError):
101 pass
102
103 if context.get('entries', None):
104 try:
105 context['the_version'] = context.get('entries')[0].version
106 context['the_project'] = \
107 context.get('entries')[0].version.project
108 except (KeyError, IndexError):
109 pass
110
111 if context.get('categories', None):
112 try:
113 context['the_project'] = \
114 context.get('categories')[0].project
115 except (KeyError, IndexError):
116 pass
117
118 return response
119
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/django_project/core/custom_middleware.py b/django_project/core/custom_middleware.py
--- a/django_project/core/custom_middleware.py
+++ b/django_project/core/custom_middleware.py
@@ -5,6 +5,7 @@
"""
from base.models import Project, Version
from changes.models import Category, SponsorshipLevel, SponsorshipPeriod, Entry
+from certification.models import CertifyingOrganisation
class NavContextMiddleware(object):
@@ -48,6 +49,9 @@
context['has_pending_sponsor_period'] = (
SponsorshipPeriod.unapproved_objects.filter(
project=context.get('project')).exists())
+ context['has_pending_organisations'] = (
+ CertifyingOrganisation.unapproved_objects.filter(
+ project=context.get('project')).exists())
if versions:
context['has_pending_entries'] = (
Entry.unapproved_objects.filter(
|
{"golden_diff": "diff --git a/django_project/core/custom_middleware.py b/django_project/core/custom_middleware.py\n--- a/django_project/core/custom_middleware.py\n+++ b/django_project/core/custom_middleware.py\n@@ -5,6 +5,7 @@\n \"\"\"\n from base.models import Project, Version\n from changes.models import Category, SponsorshipLevel, SponsorshipPeriod, Entry\n+from certification.models import CertifyingOrganisation\n \n \n class NavContextMiddleware(object):\n@@ -48,6 +49,9 @@\n context['has_pending_sponsor_period'] = (\n SponsorshipPeriod.unapproved_objects.filter(\n project=context.get('project')).exists())\n+ context['has_pending_organisations'] = (\n+ CertifyingOrganisation.unapproved_objects.filter(\n+ project=context.get('project')).exists())\n if versions:\n context['has_pending_entries'] = (\n Entry.unapproved_objects.filter(\n", "issue": "After creating a new organization it should appear in the pending approval menu\nPlease make sure if a user adds an organization the Pending Approval menu is updated\r\n\r\nhttp://staging.changelog.qgis.org/en/qgis/pending-certifyingorganisation/list/\n", "before_files": [{"content": "# coding=utf-8\n# flake8: noqa\n\"\"\"\ncore.custom_middleware\n\"\"\"\nfrom base.models import Project, Version\nfrom changes.models import Category, SponsorshipLevel, SponsorshipPeriod, Entry\n\n\nclass NavContextMiddleware(object):\n \"\"\"\n Adds the required navigation variables to each response\n \"\"\"\n\n def __init__(self):\n pass\n\n @staticmethod\n def process_template_response(request, response):\n \"\"\"\n Add 'the_project', 'the_entry', 'the_version' to context for the\n navigation.\n\n Justification: To make the navigation functional, we need to know\n which Project (or Version, Committee etc) the current context\n relates to. This is required for URLs. 
Rather than include lots of\n if/else in the navigation template, it seems cleaner to add the\n above variables to the context here.\n\n :param request: Http Request obj\n :param response: Http Response obj\n :return: context :rtype: dict\n \"\"\"\n context = response.context_data\n\n if context.get('project', None):\n context['the_project'] = context.get('project')\n versions = Version.objects.filter(project=context.get('project'))\n context['has_pending_versions'] = (\n Version.unapproved_objects.filter(\n project=context.get('project')).exists())\n context['has_pending_categories'] = (\n Category.unapproved_objects.filter(\n project=context.get('project')).exists())\n context['has_pending_sponsor_lvl'] = (\n SponsorshipLevel.unapproved_objects.filter(\n project=context.get('project')).exists())\n context['has_pending_sponsor_period'] = (\n SponsorshipPeriod.unapproved_objects.filter(\n project=context.get('project')).exists())\n if versions:\n context['has_pending_entries'] = (\n Entry.unapproved_objects.filter(\n version__in=versions).exists())\n\n else:\n if request.user.is_staff:\n context['the_projects'] = Project.objects.all()\n else:\n context['the_projects'] = Project.approved_objects.filter(\n private=False\n )\n\n if context.get('version', None):\n context['the_version'] = context.get('version')\n context['the_project'] = context.get('version').project\n\n if context.get('committee', None):\n context['the_committee'] = context.get('committee')\n context['the_project'] = context.get('committee').project\n\n if context.get('ballot', None):\n context['the_committee'] = context.get('ballot').committee\n context['the_project'] = context.get('ballot').committee.project\n\n if context.get('category', None):\n context['the_project'] = context.get('category').project\n\n if context.get('ballots', None):\n try:\n context['the_project'] = \\\n context.get('ballots')[0].committee.project\n except (KeyError, IndexError):\n pass\n\n if context.get('entry', None):\n context['the_entry'] = context.get('entry')\n context['the_version'] = context.get('entry').version\n context['the_project'] = context.get('entry').version.project\n\n if context.get('committees', None):\n try:\n context['the_project'] = context.get('committees')[0].project\n except (KeyError, IndexError):\n pass\n\n if context.get('versions', None):\n try:\n context['the_project'] = context.get('versions')[0].project\n except (KeyError, IndexError):\n pass\n\n if context.get('entries', None):\n try:\n context['the_version'] = context.get('entries')[0].version\n context['the_project'] = \\\n context.get('entries')[0].version.project\n except (KeyError, IndexError):\n pass\n\n if context.get('categories', None):\n try:\n context['the_project'] = \\\n context.get('categories')[0].project\n except (KeyError, IndexError):\n pass\n\n return response\n", "path": "django_project/core/custom_middleware.py"}], "after_files": [{"content": "# coding=utf-8\n# flake8: noqa\n\"\"\"\ncore.custom_middleware\n\"\"\"\nfrom base.models import Project, Version\nfrom changes.models import Category, SponsorshipLevel, SponsorshipPeriod, Entry\nfrom certification.models import CertifyingOrganisation\n\n\nclass NavContextMiddleware(object):\n \"\"\"\n Adds the required navigation variables to each response\n \"\"\"\n\n def __init__(self):\n pass\n\n @staticmethod\n def process_template_response(request, response):\n \"\"\"\n Add 'the_project', 'the_entry', 'the_version' to context for the\n navigation.\n\n Justification: To make the navigation functional, 
we need to know\n which Project (or Version, Committee etc) the current context\n relates to. This is required for URLs. Rather than include lots of\n if/else in the navigation template, it seems cleaner to add the\n above variables to the context here.\n\n :param request: Http Request obj\n :param response: Http Response obj\n :return: context :rtype: dict\n \"\"\"\n context = response.context_data\n\n if context.get('project', None):\n context['the_project'] = context.get('project')\n versions = Version.objects.filter(project=context.get('project'))\n context['has_pending_versions'] = (\n Version.unapproved_objects.filter(\n project=context.get('project')).exists())\n context['has_pending_categories'] = (\n Category.unapproved_objects.filter(\n project=context.get('project')).exists())\n context['has_pending_sponsor_lvl'] = (\n SponsorshipLevel.unapproved_objects.filter(\n project=context.get('project')).exists())\n context['has_pending_sponsor_period'] = (\n SponsorshipPeriod.unapproved_objects.filter(\n project=context.get('project')).exists())\n context['has_pending_organisations'] = (\n CertifyingOrganisation.unapproved_objects.filter(\n project=context.get('project')).exists())\n if versions:\n context['has_pending_entries'] = (\n Entry.unapproved_objects.filter(\n version__in=versions).exists())\n\n else:\n if request.user.is_staff:\n context['the_projects'] = Project.objects.all()\n else:\n context['the_projects'] = Project.approved_objects.filter(\n private=False\n )\n\n if context.get('version', None):\n context['the_version'] = context.get('version')\n context['the_project'] = context.get('version').project\n\n if context.get('committee', None):\n context['the_committee'] = context.get('committee')\n context['the_project'] = context.get('committee').project\n\n if context.get('ballot', None):\n context['the_committee'] = context.get('ballot').committee\n context['the_project'] = context.get('ballot').committee.project\n\n if context.get('category', None):\n context['the_project'] = context.get('category').project\n\n if context.get('ballots', None):\n try:\n context['the_project'] = \\\n context.get('ballots')[0].committee.project\n except (KeyError, IndexError):\n pass\n\n if context.get('entry', None):\n context['the_entry'] = context.get('entry')\n context['the_version'] = context.get('entry').version\n context['the_project'] = context.get('entry').version.project\n\n if context.get('committees', None):\n try:\n context['the_project'] = context.get('committees')[0].project\n except (KeyError, IndexError):\n pass\n\n if context.get('versions', None):\n try:\n context['the_project'] = context.get('versions')[0].project\n except (KeyError, IndexError):\n pass\n\n if context.get('entries', None):\n try:\n context['the_version'] = context.get('entries')[0].version\n context['the_project'] = \\\n context.get('entries')[0].version.project\n except (KeyError, IndexError):\n pass\n\n if context.get('categories', None):\n try:\n context['the_project'] = \\\n context.get('categories')[0].project\n except (KeyError, IndexError):\n pass\n\n return response\n", "path": "django_project/core/custom_middleware.py"}]}
| 1,422 | 190 |
gh_patches_debug_5708
|
rasdani/github-patches
|
git_diff
|
pulp__pulpcore-4130
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
BaseDownloader.fetch passes coroutine to asyncio.wait which is forbidden in python 3.11
Python 3.8 deprecated passing coroutines to `asyncio.wait` and Python 3.11 will now [raise an error](https://github.com/python/cpython/blob/a6313d78f21f79ca64dedd38e637509dc530a1b6/Lib/asyncio/tasks.py#L414C13-L414C13). This causes the BaseDownloader.fetch call to fail on Python 3.11 https://github.com/pulp/pulpcore/blob/9dbcc8810f97f53297a933df2e1b74cdc324a8ea/pulpcore/download/base.py#L185 .
Python provides the solution in the error message: "Passing coroutines is forbidden, use tasks explicitly."
I believe this can be fixed by explicitly converting the coroutine to a task using asyncio's `create_task`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pulpcore/download/base.py`
Content:
```
1 from gettext import gettext as _
2
3 import asyncio
4 from collections import namedtuple
5 import logging
6 import os
7 import tempfile
8 from urllib.parse import urlsplit
9
10 from pulpcore.app import pulp_hashlib
11 from pulpcore.app.models import Artifact
12 from pulpcore.exceptions import (
13 DigestValidationError,
14 SizeValidationError,
15 TimeoutException,
16 UnsupportedDigestValidationError,
17 )
18
19
20 log = logging.getLogger(__name__)
21
22
23 DownloadResult = namedtuple("DownloadResult", ["url", "artifact_attributes", "path", "headers"])
24 """
25 Args:
26 url (str): The url corresponding with the download.
27 path (str): The absolute path to the saved file
28 artifact_attributes (dict): Contains keys corresponding with
29 :class:`~pulpcore.plugin.models.Artifact` fields. This includes the computed digest values
30 along with size information.
31 headers (aiohttp.multidict.MultiDict): HTTP response headers. The keys are header names. The
32 values are header content. None when not using the HttpDownloader or sublclass.
33 """
34
35
36 class BaseDownloader:
37 """
38 The base class of all downloaders, providing digest calculation, validation, and file handling.
39
40 This is an abstract class and is meant to be subclassed. Subclasses are required to implement
41 the :meth:`~pulpcore.plugin.download.BaseDownloader.run` method and do two things:
42
43 1. Pass all downloaded data to
44 :meth:`~pulpcore.plugin.download.BaseDownloader.handle_data` and schedule it.
45
46 2. Schedule :meth:`~pulpcore.plugin.download.BaseDownloader.finalize` after all data has
47 been delivered to :meth:`~pulpcore.plugin.download.BaseDownloader.handle_data`.
48
49 Passing all downloaded data the into
50 :meth:`~pulpcore.plugin.download.BaseDownloader.handle_data` allows the file digests to
51 be computed while data is written to disk. The digests computed are required if the download is
52 to be saved as an :class:`~pulpcore.plugin.models.Artifact` which avoids having to re-read the
53 data later.
54
55 The :meth:`~pulpcore.plugin.download.BaseDownloader.handle_data` method by default
56 writes to a random file in the current working directory.
57
58 The call to :meth:`~pulpcore.plugin.download.BaseDownloader.finalize` ensures that all
59 data written to the file-like object is quiesced to disk before the file-like object has
60 `close()` called on it.
61
62 Attributes:
63 url (str): The url to download.
64 expected_digests (dict): Keyed on the algorithm name provided by hashlib and stores the
65 value of the expected digest. e.g. {'md5': '912ec803b2ce49e4a541068d495ab570'}
66 expected_size (int): The number of bytes the download is expected to have.
67 path (str): The full path to the file containing the downloaded data.
68 """
69
70 def __init__(
71 self,
72 url,
73 expected_digests=None,
74 expected_size=None,
75 semaphore=None,
76 *args,
77 **kwargs,
78 ):
79 """
80 Create a BaseDownloader object. This is expected to be called by all subclasses.
81
82 Args:
83 url (str): The url to download.
84 expected_digests (dict): Keyed on the algorithm name provided by hashlib and stores the
85 value of the expected digest. e.g. {'md5': '912ec803b2ce49e4a541068d495ab570'}
86 expected_size (int): The number of bytes the download is expected to have.
87 semaphore (asyncio.Semaphore): A semaphore the downloader must acquire before running.
88 Useful for limiting the number of outstanding downloaders in various ways.
89 """
90
91 self.url = url
92 self._writer = None
93 self.path = None
94 self.expected_digests = expected_digests
95 self.expected_size = expected_size
96 if semaphore:
97 self.semaphore = semaphore
98 else:
99 self.semaphore = asyncio.Semaphore() # This will always be acquired
100 self._digests = {}
101 self._size = 0
102 if self.expected_digests:
103 if not set(self.expected_digests).intersection(set(Artifact.DIGEST_FIELDS)):
104 raise UnsupportedDigestValidationError(
105 _(
106 "Content at the URL '{}' does not contain at least one trusted hasher which"
107 " is specified in the 'ALLOWED_CONTENT_CHECKSUMS' setting ({}). The"
108 " downloader expected one of the following hashers: {}"
109 ).format(self.url, Artifact.DIGEST_FIELDS, set(self.expected_digests))
110 )
111
112 def _ensure_writer_has_open_file(self):
113 """
114 Create a temporary file on demand.
115
116 Create a temporary file when it's actually used,
117 allowing plugin writers to instantiate many downloaders in memory.
118 """
119 if not self._writer:
120 filename = urlsplit(self.url).path.split("/")[-1]
121 # linux allows any character except NUL or / in a filename and has a length limit of
122 # 255. Making it urlencoding-aware would be nice, but not critical, because urlencoded
123 # paths should be OK
124 is_legal_filename = filename and (len(filename) <= 243) # 255 - prefix length
125 # if the filename isn't legal then we just fall back to no suffix (random name)
126 suffix = "-" + filename if is_legal_filename else None
127 # write the file to the current working directory with a random prefix and the
128 # desired suffix. we always want the random prefix as it is possible to download
129 # the same filename from two different URLs, and the files may not be the same.
130 self._writer = tempfile.NamedTemporaryFile(dir=".", suffix=suffix, delete=False)
131 self.path = self._writer.name
132 self._digests = {n: pulp_hashlib.new(n) for n in Artifact.DIGEST_FIELDS}
133 self._size = 0
134
135 async def handle_data(self, data):
136 """
137 A coroutine that writes data to the file object and compute its digests.
138
139 All subclassed downloaders are expected to pass all data downloaded to this method. Similar
140 to the hashlib docstring, repeated calls are equivalent to a single call with
141 the concatenation of all the arguments: m.handle_data(a); m.handle_data(b) is equivalent to
142 m.handle_data(a+b).
143
144 Args:
145 data (bytes): The data to be handled by the downloader.
146 """
147 self._ensure_writer_has_open_file()
148 self._writer.write(data)
149 self._record_size_and_digests_for_data(data)
150
151 async def finalize(self):
152 """
153 A coroutine to flush downloaded data, close the file writer, and validate the data.
154
155 All subclasses are required to call this method after all data has been passed to
156 :meth:`~pulpcore.plugin.download.BaseDownloader.handle_data`.
157
158 Raises:
159 :class:`~pulpcore.exceptions.DigestValidationError`: When any of the ``expected_digest``
160 values don't match the digest of the data passed to
161 :meth:`~pulpcore.plugin.download.BaseDownloader.handle_data`.
162 :class:`~pulpcore.exceptions.SizeValidationError`: When the ``expected_size`` value
163 doesn't match the size of the data passed to
164 :meth:`~pulpcore.plugin.download.BaseDownloader.handle_data`.
165 """
166 self._ensure_writer_has_open_file()
167 self._writer.flush()
168 os.fsync(self._writer.fileno())
169 self._writer.close()
170 self._writer = None
171 self.validate_digests()
172 self.validate_size()
173 log.debug(f"Downloaded file from {self.url}")
174
175 def fetch(self):
176 """
177 Run the download synchronously and return the `DownloadResult`.
178
179 Returns:
180 :class:`~pulpcore.plugin.download.DownloadResult`
181
182 Raises:
183 Exception: Any fatal exception emitted during downloading
184 """
185 done, _ = asyncio.get_event_loop().run_until_complete(asyncio.wait([self.run()]))
186 return done.pop().result()
187
188 def _record_size_and_digests_for_data(self, data):
189 """
190 Record the size and digest for an available chunk of data.
191
192 Args:
193 data (bytes): The data to have its size and digest values recorded.
194 """
195 for algorithm in self._digests.values():
196 algorithm.update(data)
197 self._size += len(data)
198
199 @property
200 def artifact_attributes(self):
201 """
202 A property that returns a dictionary with size and digest information. The keys of this
203 dictionary correspond with :class:`~pulpcore.plugin.models.Artifact` fields.
204 """
205 attributes = {"size": self._size}
206 for algorithm in self._digests:
207 attributes[algorithm] = self._digests[algorithm].hexdigest()
208 return attributes
209
210 def validate_digests(self):
211 """
212 Validate all digests validate if ``expected_digests`` is set
213
214 Raises:
215 :class:`~pulpcore.exceptions.DigestValidationError`: When any of the ``expected_digest``
216 values don't match the digest of the data passed to
217 :meth:`~pulpcore.plugin.download.BaseDownloader.handle_data`.
218 """
219 if self.expected_digests:
220 for algorithm, expected_digest in self.expected_digests.items():
221 actual_digest = self._digests[algorithm].hexdigest()
222 if actual_digest != expected_digest:
223 raise DigestValidationError(actual_digest, expected_digest, url=self.url)
224
225 def validate_size(self):
226 """
227 Validate the size if ``expected_size`` is set
228
229 Raises:
230 :class:`~pulpcore.exceptions.SizeValidationError`: When the ``expected_size`` value
231 doesn't match the size of the data passed to
232 :meth:`~pulpcore.plugin.download.BaseDownloader.handle_data`.
233 """
234 if self.expected_size:
235 actual_size = self._size
236 expected_size = self.expected_size
237 if actual_size != expected_size:
238 raise SizeValidationError(actual_size, expected_size, url=self.url)
239
240 async def run(self, extra_data=None):
241 """
242 Run the downloader with concurrency restriction.
243
244 This method acquires `self.semaphore` before calling the actual download implementation
245 contained in `_run()`. This ensures that the semaphore stays acquired even as the `backoff`
246 decorator on `_run()`, handles backoff-and-retry logic.
247
248 Args:
249 extra_data (dict): Extra data passed to the downloader.
250
251 Returns:
252 :class:`~pulpcore.plugin.download.DownloadResult` from `_run()`.
253
254 """
255 async with self.semaphore:
256 try:
257 return await self._run(extra_data=extra_data)
258 except asyncio.TimeoutError:
259 raise TimeoutException(self.url)
260
261 async def _run(self, extra_data=None):
262 """
263 Run the downloader.
264
265 This is a coroutine that asyncio can schedule to complete downloading. Subclasses are
266 required to implement this method and do two things:
267
268 1. Pass all downloaded data to
269 :meth:`~pulpcore.plugin.download.BaseDownloader.handle_data`.
270
271 2. Call :meth:`~pulpcore.plugin.download.BaseDownloader.finalize` after all data has
272 been delivered to :meth:`~pulpcore.plugin.download.BaseDownloader.handle_data`.
273
274 It is also expected that the subclass implementation return a
275 :class:`~pulpcore.plugin.download.DownloadResult` object. The
276 ``artifact_attributes`` value of the
277 :class:`~pulpcore.plugin.download.DownloadResult` is usually set to the
278 :attr:`~pulpcore.plugin.download.BaseDownloader.artifact_attributes` property value.
279
280 This method is called from :meth:`~pulpcore.plugin.download.BaseDownloader.run` which
281 handles concurrency restriction. Thus, by the time this method is called, the download can
282 occur without violating the concurrency restriction.
283
284 Args:
285 extra_data (dict): Extra data passed to the downloader.
286
287 Returns:
288 :class:`~pulpcore.plugin.download.DownloadResult`
289
290 Raises:
291 Validation errors could be emitted when subclassed implementations call
292 :meth:`~pulpcore.plugin.download.BaseDownloader.finalize`.
293 """
294 raise NotImplementedError("Subclasses must define a _run() method that returns a coroutine")
295
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pulpcore/download/base.py b/pulpcore/download/base.py
--- a/pulpcore/download/base.py
+++ b/pulpcore/download/base.py
@@ -182,8 +182,8 @@
Raises:
Exception: Any fatal exception emitted during downloading
"""
- done, _ = asyncio.get_event_loop().run_until_complete(asyncio.wait([self.run()]))
- return done.pop().result()
+ result = asyncio.get_event_loop().run_until_complete(self.run())
+ return result
def _record_size_and_digests_for_data(self, data):
"""
|
{"golden_diff": "diff --git a/pulpcore/download/base.py b/pulpcore/download/base.py\n--- a/pulpcore/download/base.py\n+++ b/pulpcore/download/base.py\n@@ -182,8 +182,8 @@\n Raises:\n Exception: Any fatal exception emitted during downloading\n \"\"\"\n- done, _ = asyncio.get_event_loop().run_until_complete(asyncio.wait([self.run()]))\n- return done.pop().result()\n+ result = asyncio.get_event_loop().run_until_complete(self.run())\n+ return result\n \n def _record_size_and_digests_for_data(self, data):\n \"\"\"\n", "issue": "BaseDownloader.fetch passes coroutine to asyncio.wait which is forbidden in python 3.11\nPython 3.8 deprecated passing coroutines to `asyncio.wait` and Python 3.11 will now [raise an error](https://github.com/python/cpython/blob/a6313d78f21f79ca64dedd38e637509dc530a1b6/Lib/asyncio/tasks.py#L414C13-L414C13). This causes the BaseDownloader.fetch call to fail on Python 3.11 https://github.com/pulp/pulpcore/blob/9dbcc8810f97f53297a933df2e1b74cdc324a8ea/pulpcore/download/base.py#L185 .\r\n\r\nPython provides the solution in the error message: \"Passing coroutines is forbidden, use tasks explicitly.\"\r\n\r\nI believe this can be fixed by explicitly converting the coroutine to a task using asyncio's `create_task`\n", "before_files": [{"content": "from gettext import gettext as _\n\nimport asyncio\nfrom collections import namedtuple\nimport logging\nimport os\nimport tempfile\nfrom urllib.parse import urlsplit\n\nfrom pulpcore.app import pulp_hashlib\nfrom pulpcore.app.models import Artifact\nfrom pulpcore.exceptions import (\n DigestValidationError,\n SizeValidationError,\n TimeoutException,\n UnsupportedDigestValidationError,\n)\n\n\nlog = logging.getLogger(__name__)\n\n\nDownloadResult = namedtuple(\"DownloadResult\", [\"url\", \"artifact_attributes\", \"path\", \"headers\"])\n\"\"\"\nArgs:\n url (str): The url corresponding with the download.\n path (str): The absolute path to the saved file\n artifact_attributes (dict): Contains keys corresponding with\n :class:`~pulpcore.plugin.models.Artifact` fields. This includes the computed digest values\n along with size information.\n headers (aiohttp.multidict.MultiDict): HTTP response headers. The keys are header names. The\n values are header content. None when not using the HttpDownloader or sublclass.\n\"\"\"\n\n\nclass BaseDownloader:\n \"\"\"\n The base class of all downloaders, providing digest calculation, validation, and file handling.\n\n This is an abstract class and is meant to be subclassed. Subclasses are required to implement\n the :meth:`~pulpcore.plugin.download.BaseDownloader.run` method and do two things:\n\n 1. Pass all downloaded data to\n :meth:`~pulpcore.plugin.download.BaseDownloader.handle_data` and schedule it.\n\n 2. Schedule :meth:`~pulpcore.plugin.download.BaseDownloader.finalize` after all data has\n been delivered to :meth:`~pulpcore.plugin.download.BaseDownloader.handle_data`.\n\n Passing all downloaded data the into\n :meth:`~pulpcore.plugin.download.BaseDownloader.handle_data` allows the file digests to\n be computed while data is written to disk. 
The digests computed are required if the download is\n to be saved as an :class:`~pulpcore.plugin.models.Artifact` which avoids having to re-read the\n data later.\n\n The :meth:`~pulpcore.plugin.download.BaseDownloader.handle_data` method by default\n writes to a random file in the current working directory.\n\n The call to :meth:`~pulpcore.plugin.download.BaseDownloader.finalize` ensures that all\n data written to the file-like object is quiesced to disk before the file-like object has\n `close()` called on it.\n\n Attributes:\n url (str): The url to download.\n expected_digests (dict): Keyed on the algorithm name provided by hashlib and stores the\n value of the expected digest. e.g. {'md5': '912ec803b2ce49e4a541068d495ab570'}\n expected_size (int): The number of bytes the download is expected to have.\n path (str): The full path to the file containing the downloaded data.\n \"\"\"\n\n def __init__(\n self,\n url,\n expected_digests=None,\n expected_size=None,\n semaphore=None,\n *args,\n **kwargs,\n ):\n \"\"\"\n Create a BaseDownloader object. This is expected to be called by all subclasses.\n\n Args:\n url (str): The url to download.\n expected_digests (dict): Keyed on the algorithm name provided by hashlib and stores the\n value of the expected digest. e.g. {'md5': '912ec803b2ce49e4a541068d495ab570'}\n expected_size (int): The number of bytes the download is expected to have.\n semaphore (asyncio.Semaphore): A semaphore the downloader must acquire before running.\n Useful for limiting the number of outstanding downloaders in various ways.\n \"\"\"\n\n self.url = url\n self._writer = None\n self.path = None\n self.expected_digests = expected_digests\n self.expected_size = expected_size\n if semaphore:\n self.semaphore = semaphore\n else:\n self.semaphore = asyncio.Semaphore() # This will always be acquired\n self._digests = {}\n self._size = 0\n if self.expected_digests:\n if not set(self.expected_digests).intersection(set(Artifact.DIGEST_FIELDS)):\n raise UnsupportedDigestValidationError(\n _(\n \"Content at the URL '{}' does not contain at least one trusted hasher which\"\n \" is specified in the 'ALLOWED_CONTENT_CHECKSUMS' setting ({}). The\"\n \" downloader expected one of the following hashers: {}\"\n ).format(self.url, Artifact.DIGEST_FIELDS, set(self.expected_digests))\n )\n\n def _ensure_writer_has_open_file(self):\n \"\"\"\n Create a temporary file on demand.\n\n Create a temporary file when it's actually used,\n allowing plugin writers to instantiate many downloaders in memory.\n \"\"\"\n if not self._writer:\n filename = urlsplit(self.url).path.split(\"/\")[-1]\n # linux allows any character except NUL or / in a filename and has a length limit of\n # 255. Making it urlencoding-aware would be nice, but not critical, because urlencoded\n # paths should be OK\n is_legal_filename = filename and (len(filename) <= 243) # 255 - prefix length\n # if the filename isn't legal then we just fall back to no suffix (random name)\n suffix = \"-\" + filename if is_legal_filename else None\n # write the file to the current working directory with a random prefix and the\n # desired suffix. 
we always want the random prefix as it is possible to download\n # the same filename from two different URLs, and the files may not be the same.\n self._writer = tempfile.NamedTemporaryFile(dir=\".\", suffix=suffix, delete=False)\n self.path = self._writer.name\n self._digests = {n: pulp_hashlib.new(n) for n in Artifact.DIGEST_FIELDS}\n self._size = 0\n\n async def handle_data(self, data):\n \"\"\"\n A coroutine that writes data to the file object and compute its digests.\n\n All subclassed downloaders are expected to pass all data downloaded to this method. Similar\n to the hashlib docstring, repeated calls are equivalent to a single call with\n the concatenation of all the arguments: m.handle_data(a); m.handle_data(b) is equivalent to\n m.handle_data(a+b).\n\n Args:\n data (bytes): The data to be handled by the downloader.\n \"\"\"\n self._ensure_writer_has_open_file()\n self._writer.write(data)\n self._record_size_and_digests_for_data(data)\n\n async def finalize(self):\n \"\"\"\n A coroutine to flush downloaded data, close the file writer, and validate the data.\n\n All subclasses are required to call this method after all data has been passed to\n :meth:`~pulpcore.plugin.download.BaseDownloader.handle_data`.\n\n Raises:\n :class:`~pulpcore.exceptions.DigestValidationError`: When any of the ``expected_digest``\n values don't match the digest of the data passed to\n :meth:`~pulpcore.plugin.download.BaseDownloader.handle_data`.\n :class:`~pulpcore.exceptions.SizeValidationError`: When the ``expected_size`` value\n doesn't match the size of the data passed to\n :meth:`~pulpcore.plugin.download.BaseDownloader.handle_data`.\n \"\"\"\n self._ensure_writer_has_open_file()\n self._writer.flush()\n os.fsync(self._writer.fileno())\n self._writer.close()\n self._writer = None\n self.validate_digests()\n self.validate_size()\n log.debug(f\"Downloaded file from {self.url}\")\n\n def fetch(self):\n \"\"\"\n Run the download synchronously and return the `DownloadResult`.\n\n Returns:\n :class:`~pulpcore.plugin.download.DownloadResult`\n\n Raises:\n Exception: Any fatal exception emitted during downloading\n \"\"\"\n done, _ = asyncio.get_event_loop().run_until_complete(asyncio.wait([self.run()]))\n return done.pop().result()\n\n def _record_size_and_digests_for_data(self, data):\n \"\"\"\n Record the size and digest for an available chunk of data.\n\n Args:\n data (bytes): The data to have its size and digest values recorded.\n \"\"\"\n for algorithm in self._digests.values():\n algorithm.update(data)\n self._size += len(data)\n\n @property\n def artifact_attributes(self):\n \"\"\"\n A property that returns a dictionary with size and digest information. 
The keys of this\n dictionary correspond with :class:`~pulpcore.plugin.models.Artifact` fields.\n \"\"\"\n attributes = {\"size\": self._size}\n for algorithm in self._digests:\n attributes[algorithm] = self._digests[algorithm].hexdigest()\n return attributes\n\n def validate_digests(self):\n \"\"\"\n Validate all digests validate if ``expected_digests`` is set\n\n Raises:\n :class:`~pulpcore.exceptions.DigestValidationError`: When any of the ``expected_digest``\n values don't match the digest of the data passed to\n :meth:`~pulpcore.plugin.download.BaseDownloader.handle_data`.\n \"\"\"\n if self.expected_digests:\n for algorithm, expected_digest in self.expected_digests.items():\n actual_digest = self._digests[algorithm].hexdigest()\n if actual_digest != expected_digest:\n raise DigestValidationError(actual_digest, expected_digest, url=self.url)\n\n def validate_size(self):\n \"\"\"\n Validate the size if ``expected_size`` is set\n\n Raises:\n :class:`~pulpcore.exceptions.SizeValidationError`: When the ``expected_size`` value\n doesn't match the size of the data passed to\n :meth:`~pulpcore.plugin.download.BaseDownloader.handle_data`.\n \"\"\"\n if self.expected_size:\n actual_size = self._size\n expected_size = self.expected_size\n if actual_size != expected_size:\n raise SizeValidationError(actual_size, expected_size, url=self.url)\n\n async def run(self, extra_data=None):\n \"\"\"\n Run the downloader with concurrency restriction.\n\n This method acquires `self.semaphore` before calling the actual download implementation\n contained in `_run()`. This ensures that the semaphore stays acquired even as the `backoff`\n decorator on `_run()`, handles backoff-and-retry logic.\n\n Args:\n extra_data (dict): Extra data passed to the downloader.\n\n Returns:\n :class:`~pulpcore.plugin.download.DownloadResult` from `_run()`.\n\n \"\"\"\n async with self.semaphore:\n try:\n return await self._run(extra_data=extra_data)\n except asyncio.TimeoutError:\n raise TimeoutException(self.url)\n\n async def _run(self, extra_data=None):\n \"\"\"\n Run the downloader.\n\n This is a coroutine that asyncio can schedule to complete downloading. Subclasses are\n required to implement this method and do two things:\n\n 1. Pass all downloaded data to\n :meth:`~pulpcore.plugin.download.BaseDownloader.handle_data`.\n\n 2. Call :meth:`~pulpcore.plugin.download.BaseDownloader.finalize` after all data has\n been delivered to :meth:`~pulpcore.plugin.download.BaseDownloader.handle_data`.\n\n It is also expected that the subclass implementation return a\n :class:`~pulpcore.plugin.download.DownloadResult` object. The\n ``artifact_attributes`` value of the\n :class:`~pulpcore.plugin.download.DownloadResult` is usually set to the\n :attr:`~pulpcore.plugin.download.BaseDownloader.artifact_attributes` property value.\n\n This method is called from :meth:`~pulpcore.plugin.download.BaseDownloader.run` which\n handles concurrency restriction. 
Thus, by the time this method is called, the download can\n occur without violating the concurrency restriction.\n\n Args:\n extra_data (dict): Extra data passed to the downloader.\n\n Returns:\n :class:`~pulpcore.plugin.download.DownloadResult`\n\n Raises:\n Validation errors could be emitted when subclassed implementations call\n :meth:`~pulpcore.plugin.download.BaseDownloader.finalize`.\n \"\"\"\n raise NotImplementedError(\"Subclasses must define a _run() method that returns a coroutine\")\n", "path": "pulpcore/download/base.py"}], "after_files": [{"content": "from gettext import gettext as _\n\nimport asyncio\nfrom collections import namedtuple\nimport logging\nimport os\nimport tempfile\nfrom urllib.parse import urlsplit\n\nfrom pulpcore.app import pulp_hashlib\nfrom pulpcore.app.models import Artifact\nfrom pulpcore.exceptions import (\n DigestValidationError,\n SizeValidationError,\n TimeoutException,\n UnsupportedDigestValidationError,\n)\n\n\nlog = logging.getLogger(__name__)\n\n\nDownloadResult = namedtuple(\"DownloadResult\", [\"url\", \"artifact_attributes\", \"path\", \"headers\"])\n\"\"\"\nArgs:\n url (str): The url corresponding with the download.\n path (str): The absolute path to the saved file\n artifact_attributes (dict): Contains keys corresponding with\n :class:`~pulpcore.plugin.models.Artifact` fields. This includes the computed digest values\n along with size information.\n headers (aiohttp.multidict.MultiDict): HTTP response headers. The keys are header names. The\n values are header content. None when not using the HttpDownloader or sublclass.\n\"\"\"\n\n\nclass BaseDownloader:\n \"\"\"\n The base class of all downloaders, providing digest calculation, validation, and file handling.\n\n This is an abstract class and is meant to be subclassed. Subclasses are required to implement\n the :meth:`~pulpcore.plugin.download.BaseDownloader.run` method and do two things:\n\n 1. Pass all downloaded data to\n :meth:`~pulpcore.plugin.download.BaseDownloader.handle_data` and schedule it.\n\n 2. Schedule :meth:`~pulpcore.plugin.download.BaseDownloader.finalize` after all data has\n been delivered to :meth:`~pulpcore.plugin.download.BaseDownloader.handle_data`.\n\n Passing all downloaded data the into\n :meth:`~pulpcore.plugin.download.BaseDownloader.handle_data` allows the file digests to\n be computed while data is written to disk. The digests computed are required if the download is\n to be saved as an :class:`~pulpcore.plugin.models.Artifact` which avoids having to re-read the\n data later.\n\n The :meth:`~pulpcore.plugin.download.BaseDownloader.handle_data` method by default\n writes to a random file in the current working directory.\n\n The call to :meth:`~pulpcore.plugin.download.BaseDownloader.finalize` ensures that all\n data written to the file-like object is quiesced to disk before the file-like object has\n `close()` called on it.\n\n Attributes:\n url (str): The url to download.\n expected_digests (dict): Keyed on the algorithm name provided by hashlib and stores the\n value of the expected digest. e.g. {'md5': '912ec803b2ce49e4a541068d495ab570'}\n expected_size (int): The number of bytes the download is expected to have.\n path (str): The full path to the file containing the downloaded data.\n \"\"\"\n\n def __init__(\n self,\n url,\n expected_digests=None,\n expected_size=None,\n semaphore=None,\n *args,\n **kwargs,\n ):\n \"\"\"\n Create a BaseDownloader object. 
This is expected to be called by all subclasses.\n\n Args:\n url (str): The url to download.\n expected_digests (dict): Keyed on the algorithm name provided by hashlib and stores the\n value of the expected digest. e.g. {'md5': '912ec803b2ce49e4a541068d495ab570'}\n expected_size (int): The number of bytes the download is expected to have.\n semaphore (asyncio.Semaphore): A semaphore the downloader must acquire before running.\n Useful for limiting the number of outstanding downloaders in various ways.\n \"\"\"\n\n self.url = url\n self._writer = None\n self.path = None\n self.expected_digests = expected_digests\n self.expected_size = expected_size\n if semaphore:\n self.semaphore = semaphore\n else:\n self.semaphore = asyncio.Semaphore() # This will always be acquired\n self._digests = {}\n self._size = 0\n if self.expected_digests:\n if not set(self.expected_digests).intersection(set(Artifact.DIGEST_FIELDS)):\n raise UnsupportedDigestValidationError(\n _(\n \"Content at the URL '{}' does not contain at least one trusted hasher which\"\n \" is specified in the 'ALLOWED_CONTENT_CHECKSUMS' setting ({}). The\"\n \" downloader expected one of the following hashers: {}\"\n ).format(self.url, Artifact.DIGEST_FIELDS, set(self.expected_digests))\n )\n\n def _ensure_writer_has_open_file(self):\n \"\"\"\n Create a temporary file on demand.\n\n Create a temporary file when it's actually used,\n allowing plugin writers to instantiate many downloaders in memory.\n \"\"\"\n if not self._writer:\n filename = urlsplit(self.url).path.split(\"/\")[-1]\n # linux allows any character except NUL or / in a filename and has a length limit of\n # 255. Making it urlencoding-aware would be nice, but not critical, because urlencoded\n # paths should be OK\n is_legal_filename = filename and (len(filename) <= 243) # 255 - prefix length\n # if the filename isn't legal then we just fall back to no suffix (random name)\n suffix = \"-\" + filename if is_legal_filename else None\n # write the file to the current working directory with a random prefix and the\n # desired suffix. we always want the random prefix as it is possible to download\n # the same filename from two different URLs, and the files may not be the same.\n self._writer = tempfile.NamedTemporaryFile(dir=\".\", suffix=suffix, delete=False)\n self.path = self._writer.name\n self._digests = {n: pulp_hashlib.new(n) for n in Artifact.DIGEST_FIELDS}\n self._size = 0\n\n async def handle_data(self, data):\n \"\"\"\n A coroutine that writes data to the file object and compute its digests.\n\n All subclassed downloaders are expected to pass all data downloaded to this method. 
Similar\n to the hashlib docstring, repeated calls are equivalent to a single call with\n the concatenation of all the arguments: m.handle_data(a); m.handle_data(b) is equivalent to\n m.handle_data(a+b).\n\n Args:\n data (bytes): The data to be handled by the downloader.\n \"\"\"\n self._ensure_writer_has_open_file()\n self._writer.write(data)\n self._record_size_and_digests_for_data(data)\n\n async def finalize(self):\n \"\"\"\n A coroutine to flush downloaded data, close the file writer, and validate the data.\n\n All subclasses are required to call this method after all data has been passed to\n :meth:`~pulpcore.plugin.download.BaseDownloader.handle_data`.\n\n Raises:\n :class:`~pulpcore.exceptions.DigestValidationError`: When any of the ``expected_digest``\n values don't match the digest of the data passed to\n :meth:`~pulpcore.plugin.download.BaseDownloader.handle_data`.\n :class:`~pulpcore.exceptions.SizeValidationError`: When the ``expected_size`` value\n doesn't match the size of the data passed to\n :meth:`~pulpcore.plugin.download.BaseDownloader.handle_data`.\n \"\"\"\n self._ensure_writer_has_open_file()\n self._writer.flush()\n os.fsync(self._writer.fileno())\n self._writer.close()\n self._writer = None\n self.validate_digests()\n self.validate_size()\n log.debug(f\"Downloaded file from {self.url}\")\n\n def fetch(self):\n \"\"\"\n Run the download synchronously and return the `DownloadResult`.\n\n Returns:\n :class:`~pulpcore.plugin.download.DownloadResult`\n\n Raises:\n Exception: Any fatal exception emitted during downloading\n \"\"\"\n result = asyncio.get_event_loop().run_until_complete(self.run())\n return result\n\n def _record_size_and_digests_for_data(self, data):\n \"\"\"\n Record the size and digest for an available chunk of data.\n\n Args:\n data (bytes): The data to have its size and digest values recorded.\n \"\"\"\n for algorithm in self._digests.values():\n algorithm.update(data)\n self._size += len(data)\n\n @property\n def artifact_attributes(self):\n \"\"\"\n A property that returns a dictionary with size and digest information. 
The keys of this\n dictionary correspond with :class:`~pulpcore.plugin.models.Artifact` fields.\n \"\"\"\n attributes = {\"size\": self._size}\n for algorithm in self._digests:\n attributes[algorithm] = self._digests[algorithm].hexdigest()\n return attributes\n\n def validate_digests(self):\n \"\"\"\n Validate all digests validate if ``expected_digests`` is set\n\n Raises:\n :class:`~pulpcore.exceptions.DigestValidationError`: When any of the ``expected_digest``\n values don't match the digest of the data passed to\n :meth:`~pulpcore.plugin.download.BaseDownloader.handle_data`.\n \"\"\"\n if self.expected_digests:\n for algorithm, expected_digest in self.expected_digests.items():\n actual_digest = self._digests[algorithm].hexdigest()\n if actual_digest != expected_digest:\n raise DigestValidationError(actual_digest, expected_digest, url=self.url)\n\n def validate_size(self):\n \"\"\"\n Validate the size if ``expected_size`` is set\n\n Raises:\n :class:`~pulpcore.exceptions.SizeValidationError`: When the ``expected_size`` value\n doesn't match the size of the data passed to\n :meth:`~pulpcore.plugin.download.BaseDownloader.handle_data`.\n \"\"\"\n if self.expected_size:\n actual_size = self._size\n expected_size = self.expected_size\n if actual_size != expected_size:\n raise SizeValidationError(actual_size, expected_size, url=self.url)\n\n async def run(self, extra_data=None):\n \"\"\"\n Run the downloader with concurrency restriction.\n\n This method acquires `self.semaphore` before calling the actual download implementation\n contained in `_run()`. This ensures that the semaphore stays acquired even as the `backoff`\n decorator on `_run()`, handles backoff-and-retry logic.\n\n Args:\n extra_data (dict): Extra data passed to the downloader.\n\n Returns:\n :class:`~pulpcore.plugin.download.DownloadResult` from `_run()`.\n\n \"\"\"\n async with self.semaphore:\n try:\n return await self._run(extra_data=extra_data)\n except asyncio.TimeoutError:\n raise TimeoutException(self.url)\n\n async def _run(self, extra_data=None):\n \"\"\"\n Run the downloader.\n\n This is a coroutine that asyncio can schedule to complete downloading. Subclasses are\n required to implement this method and do two things:\n\n 1. Pass all downloaded data to\n :meth:`~pulpcore.plugin.download.BaseDownloader.handle_data`.\n\n 2. Call :meth:`~pulpcore.plugin.download.BaseDownloader.finalize` after all data has\n been delivered to :meth:`~pulpcore.plugin.download.BaseDownloader.handle_data`.\n\n It is also expected that the subclass implementation return a\n :class:`~pulpcore.plugin.download.DownloadResult` object. The\n ``artifact_attributes`` value of the\n :class:`~pulpcore.plugin.download.DownloadResult` is usually set to the\n :attr:`~pulpcore.plugin.download.BaseDownloader.artifact_attributes` property value.\n\n This method is called from :meth:`~pulpcore.plugin.download.BaseDownloader.run` which\n handles concurrency restriction. Thus, by the time this method is called, the download can\n occur without violating the concurrency restriction.\n\n Args:\n extra_data (dict): Extra data passed to the downloader.\n\n Returns:\n :class:`~pulpcore.plugin.download.DownloadResult`\n\n Raises:\n Validation errors could be emitted when subclassed implementations call\n :meth:`~pulpcore.plugin.download.BaseDownloader.finalize`.\n \"\"\"\n raise NotImplementedError(\"Subclasses must define a _run() method that returns a coroutine\")\n", "path": "pulpcore/download/base.py"}]}
| 3,894 | 131 |
gh_patches_debug_26902
|
rasdani/github-patches
|
git_diff
|
PrefectHQ__prefect-2558
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Use literal_eval for parsing configuration values
## Current behavior
*Please describe how the feature works today*
Currently, Prefect takes a [hardcoded stab](https://github.com/PrefectHQ/prefect/blob/master/src/prefect/configuration.py#L37) at converting strings to Python objects. We could replace this function with `ast.literal_eval()` and instantly gain a more flexible cast, including the ability to natively handle lists and dicts.
## Proposed behavior
*Please describe your proposed change to the current behavior*
Currently, setting `PREFECT__X__Y="['a', 'b', 3]"` results in `prefect.config.x.y == "['a', 'b', 3]"`. This means some Prefect objects (thinking of environments, but also including the config itself) must re-parse config options. This is after we already make an attempt to coerce some values to known types. Replacing this with `ast.literal_eval()` would result in `prefect.config.x.y == ['a', 'b', 3]`.
## Example
*Please give an example of how the enhancement would be useful*
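For instance, a minimal sketch of what the cast could look like (illustrative only — it keeps the existing case-insensitive boolean handling and falls back to the raw string whenever `ast.literal_eval` cannot parse the value):

```python
from ast import literal_eval


def string_to_type(val: str):
    # keep the existing case-insensitive boolean handling
    if val.upper() == "TRUE":
        return True
    if val.upper() == "FALSE":
        return False
    # literal_eval covers ints, floats, lists, dicts, tuples, and None
    try:
        return literal_eval(val)
    except Exception:
        # not valid Python literal syntax; keep the raw string
        return val


print(string_to_type("['a', 'b', 3]"))  # ['a', 'b', 3]
print(string_to_type("true"))           # True
print(string_to_type("not a literal"))  # not a literal
```

A narrower `except (ValueError, SyntaxError)` would also work; the broad catch simply mirrors the existing fall-back-to-string behaviour.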
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/prefect/configuration.py`
Content:
```
1 import datetime
2 import os
3 import re
4 from typing import Optional, Union, cast
5
6 import toml
7 from box import Box
8
9 from prefect.utilities import collections
10
11 DEFAULT_CONFIG = os.path.join(os.path.dirname(__file__), "config.toml")
12 USER_CONFIG = os.getenv("PREFECT__USER_CONFIG_PATH", "~/.prefect/config.toml")
13 BACKEND_CONFIG = os.getenv("PREFECT__BACKEND_CONFIG_PATH", "~/.prefect/backend.toml")
14 ENV_VAR_PREFIX = "PREFECT"
15 INTERPOLATION_REGEX = re.compile(r"\${(.[^${}]*)}")
16
17
18 class Config(Box):
19 """
20 A config is a Box subclass
21 """
22
23 def copy(self) -> "Config":
24 """
25 Create a recursive copy of the config. Each level of the Config is a new Config object, so
26 modifying keys won't affect the original Config object. However, values are not
27 deep-copied, and mutations can affect the original.
28 """
29 new_config = Config()
30 for key, value in self.items():
31 if isinstance(value, Config):
32 value = value.copy()
33 new_config[key] = value
34 return new_config
35
36
37 def string_to_type(val: str) -> Union[bool, int, float, str]:
38 """
39 Helper function for transforming string env var values into typed values.
40
41 Maps:
42 - "true" (any capitalization) to `True`
43 - "false" (any capitalization) to `False`
44 - integers to `int`
45 - floats to `float`
46
47 Arguments:
48 - val (str): the string value of an environment variable
49
50 Returns:
51 Union[bool, int, float, str]: the type-cast env var value
52 """
53
54 # bool
55 if val.upper() == "TRUE":
56 return True
57 elif val.upper() == "FALSE":
58 return False
59
60 # int
61 try:
62 val_as_int = int(val)
63 if str(val_as_int) == val:
64 return val_as_int
65 except Exception:
66 pass
67
68 # float
69 try:
70 val_as_float = float(val)
71 if str(val_as_float) == val:
72 return val_as_float
73 except Exception:
74 pass
75
76 # return string value
77 return val
78
79
80 def interpolate_env_vars(env_var: str) -> Optional[Union[bool, int, float, str]]:
81 """
82 Expands (potentially nested) env vars by repeatedly applying
83 `expandvars` and `expanduser` until interpolation stops having
84 any effect.
85 """
86 if not env_var or not isinstance(env_var, str):
87 return env_var
88
89 counter = 0
90
91 while counter < 10:
92 interpolated = os.path.expanduser(os.path.expandvars(str(env_var)))
93 if interpolated == env_var:
94 # if a change was made, apply string-to-type casts; otherwise leave alone
95 # this is because we don't want to override TOML type-casting if this function
96 # is applied to a non-interpolated value
97 if counter > 1:
98 interpolated = string_to_type(interpolated) # type: ignore
99 return interpolated
100 else:
101 env_var = interpolated
102 counter += 1
103
104 return None
105
106
107 def create_user_config(dest_path: str, source_path: str = DEFAULT_CONFIG) -> None:
108 """
109 Copies the default configuration to a user-customizable file at `dest_path`
110 """
111 dest_path = cast(str, interpolate_env_vars(dest_path))
112 if os.path.isfile(dest_path):
113 raise ValueError("File already exists: {}".format(dest_path))
114 os.makedirs(os.path.dirname(dest_path), exist_ok=True)
115
116 with open(dest_path, "w") as dest:
117 with open(source_path, "r") as source:
118 dest.write(source.read())
119
120
121 # Process Config -------------------------------------------------------------
122
123
124 def process_task_defaults(config: Config) -> Config:
125 """
126 Converts task defaults from basic types to Python objects like timedeltas
127
128 Args:
129 - config (Config): the configuration to modify
130 """
131 # make sure defaults exists
132 defaults = config.setdefault("tasks", {}).setdefault("defaults", {})
133
134 # max_retries defaults to 0 if not set, False, or None
135 if not defaults.setdefault("max_retries", 0):
136 defaults.max_retries = 0
137 defaults.max_retries = defaults.get("max_retries", 0) or 0
138
139 # retry_delay defaults to None if not set - also check for False because TOML has no NULL
140 if defaults.setdefault("retry_delay", False) is False:
141 defaults.retry_delay = None
142 elif isinstance(defaults.retry_delay, int):
143 defaults.retry_delay = datetime.timedelta(seconds=defaults.retry_delay)
144
145 # timeout defaults to None if not set - also check for False because TOML has no NULL
146 if defaults.setdefault("timeout", False) is False:
147 defaults.timeout = None
148 elif isinstance(defaults.timeout, int):
149 defaults.timeout = datetime.timedelta(seconds=defaults.timeout)
150
151 return config
152
153
154 # Validation ------------------------------------------------------------------
155
156
157 def validate_config(config: Config) -> None:
158 """
159 Validates that the configuration file is valid.
160 - keys do not shadow Config methods
161
162 Note that this is performed when the config is first loaded, but not after.
163 """
164
165 def check_valid_keys(config: Config) -> None:
166 """
167 Recursively check that keys do not shadow methods of the Config object
168 """
169 invalid_keys = dir(Config)
170 for k, v in config.items():
171 if k in invalid_keys:
172 raise ValueError('Invalid config key: "{}"'.format(k))
173 if isinstance(v, Config):
174 check_valid_keys(v)
175
176 check_valid_keys(config)
177
178
179 # Load configuration ----------------------------------------------------------
180
181
182 def load_toml(path: str) -> dict:
183 """
184 Loads a config dictionary from TOML
185 """
186 return {
187 key: value
188 for key, value in toml.load(cast(str, interpolate_env_vars(path))).items()
189 }
190
191
192 def interpolate_config(config: dict, env_var_prefix: str = None) -> Config:
193 """
194 Processes a config dictionary, such as the one loaded from `load_toml`.
195 """
196
197 # toml supports nested dicts, so we work with a flattened representation to do any
198 # requested interpolation
199 flat_config = collections.dict_to_flatdict(config)
200
201 # --------------------- Interpolate env vars -----------------------
202 # check if any env var sets a configuration value with the format:
203 # [ENV_VAR_PREFIX]__[Section]__[Optional Sub-Sections...]__[Key] = Value
204 # and if it does, add it to the config file.
205
206 if env_var_prefix:
207
208 for env_var, env_var_value in os.environ.items():
209 if env_var.startswith(env_var_prefix + "__"):
210
211 # strip the prefix off the env var
212 env_var_option = env_var[len(env_var_prefix + "__") :]
213
214 # make sure the resulting env var has at least one delimitied section and key
215 if "__" not in env_var:
216 continue
217
218 # env vars with escaped characters are interpreted as literal "\", which
219 # Python helpfully escapes with a second "\". This step makes sure that
220 # escaped characters are properly interpreted.
221 value = cast(str, env_var_value.encode().decode("unicode_escape"))
222
223 # place the env var in the flat config as a compound key
224 if env_var_option.upper().startswith("CONTEXT__SECRETS"):
225 formatted_option = env_var_option.split("__")
226 formatted_option[:-1] = [
227 val.lower() for val in formatted_option[:-1]
228 ]
229 config_option = collections.CompoundKey(formatted_option)
230 else:
231 config_option = collections.CompoundKey(
232 env_var_option.lower().split("__")
233 )
234
235 flat_config[config_option] = string_to_type(
236 cast(str, interpolate_env_vars(value))
237 )
238
239 # interpolate any env vars referenced
240 for k, v in list(flat_config.items()):
241 flat_config[k] = interpolate_env_vars(v)
242
243 # --------------------- Interpolate other config keys -----------------
244 # TOML doesn't support references to other keys... but we do!
245 # This has the potential to lead to nasty recursions, so we check at most 10 times.
246 # we use a set called "keys_to_check" to track only the ones of interest, so we aren't
247 # checking every key every time.
248
249 keys_to_check = set(flat_config.keys())
250
251 for _ in range(10):
252
253 # iterate over every key and value to check if the value uses interpolation
254 for k in list(keys_to_check):
255
256 # if the value isn't a string, it can't be a reference, so we exit
257 if not isinstance(flat_config[k], str):
258 keys_to_check.remove(k)
259 continue
260
261 # see if the ${...} syntax was used in the value and exit if it wasn't
262 match = INTERPOLATION_REGEX.search(flat_config[k])
263 if not match:
264 keys_to_check.remove(k)
265 continue
266
267 # the matched_string includes "${}"; the matched_key is just the inner value
268 matched_string = match.group(0)
269 matched_key = match.group(1)
270
271 # get the referenced key from the config value
272 ref_key = collections.CompoundKey(matched_key.split("."))
273 # get the value corresponding to the referenced key
274 ref_value = flat_config.get(ref_key, "")
275
276 # if the matched was the entire value, replace it with the interpolated value
277 if flat_config[k] == matched_string:
278 flat_config[k] = ref_value
279 # if it was a partial match, then drop the interpolated value into the string
280 else:
281 flat_config[k] = flat_config[k].replace(
282 matched_string, str(ref_value), 1
283 )
284
285 return cast(Config, collections.flatdict_to_dict(flat_config, dct_class=Config))
286
287
288 def load_configuration(
289 path: str,
290 user_config_path: str = None,
291 backend_config_path: str = None,
292 env_var_prefix: str = None,
293 ) -> Config:
294 """
295 Loads a configuration from a known location.
296
297 Args:
298 - path (str): the path to the TOML configuration file
299 - user_config_path (str): an optional path to a user config file. If a user config
300 is provided, it will be used to update the main config prior to interpolation
301 - env_var_prefix (str): any env vars matching this prefix will be used to create
302 configuration values
303
304 Returns:
305 - Config
306 """
307
308 # load default config
309 default_config = load_toml(path)
310
311 # load user config
312 if user_config_path and os.path.isfile(str(interpolate_env_vars(user_config_path))):
313 user_config = load_toml(user_config_path)
314 # merge user config into default config
315 default_config = cast(
316 dict, collections.merge_dicts(default_config, user_config)
317 )
318
319 # load backend config
320 if backend_config_path and os.path.isfile(
321 str(interpolate_env_vars(backend_config_path))
322 ):
323 backend_config = load_toml(backend_config_path)
324 # merge backend config into default config
325 default_config = cast(
326 dict, collections.merge_dicts(default_config, backend_config)
327 )
328
329 # interpolate after user config has already been merged
330 config = interpolate_config(default_config, env_var_prefix=env_var_prefix)
331
332 validate_config(config)
333 return config
334
335
336 # load prefect configuration
337 config = load_configuration(
338 path=DEFAULT_CONFIG,
339 user_config_path=USER_CONFIG,
340 backend_config_path=BACKEND_CONFIG,
341 env_var_prefix=ENV_VAR_PREFIX,
342 )
343
344 # add task defaults
345 config = process_task_defaults(config)
346
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/prefect/configuration.py b/src/prefect/configuration.py
--- a/src/prefect/configuration.py
+++ b/src/prefect/configuration.py
@@ -1,6 +1,7 @@
import datetime
import os
import re
+from ast import literal_eval
from typing import Optional, Union, cast
import toml
@@ -41,14 +42,13 @@
Maps:
- "true" (any capitalization) to `True`
- "false" (any capitalization) to `False`
- - integers to `int`
- - floats to `float`
-
+ - any other valid literal Python syntax interpretable by ast.literal_eval
+
Arguments:
- val (str): the string value of an environment variable
Returns:
- Union[bool, int, float, str]: the type-cast env var value
+ Union[bool, int, float, str, dict, list, None, tuple]: the type-cast env var value
"""
# bool
@@ -57,19 +57,10 @@
elif val.upper() == "FALSE":
return False
- # int
- try:
- val_as_int = int(val)
- if str(val_as_int) == val:
- return val_as_int
- except Exception:
- pass
-
- # float
+ # dicts, ints, floats, or any other literal Python syntax
try:
- val_as_float = float(val)
- if str(val_as_float) == val:
- return val_as_float
+ val_as_obj = literal_eval(val)
+ return val_as_obj
except Exception:
pass
|
{"golden_diff": "diff --git a/src/prefect/configuration.py b/src/prefect/configuration.py\n--- a/src/prefect/configuration.py\n+++ b/src/prefect/configuration.py\n@@ -1,6 +1,7 @@\n import datetime\n import os\n import re\n+from ast import literal_eval\n from typing import Optional, Union, cast\n \n import toml\n@@ -41,14 +42,13 @@\n Maps:\n - \"true\" (any capitalization) to `True`\n - \"false\" (any capitalization) to `False`\n- - integers to `int`\n- - floats to `float`\n-\n+ - any other valid literal Python syntax interpretable by ast.literal_eval\n+ \n Arguments:\n - val (str): the string value of an environment variable\n \n Returns:\n- Union[bool, int, float, str]: the type-cast env var value\n+ Union[bool, int, float, str, dict, list, None, tuple]: the type-cast env var value\n \"\"\"\n \n # bool\n@@ -57,19 +57,10 @@\n elif val.upper() == \"FALSE\":\n return False\n \n- # int\n- try:\n- val_as_int = int(val)\n- if str(val_as_int) == val:\n- return val_as_int\n- except Exception:\n- pass\n-\n- # float\n+ # dicts, ints, floats, or any other literal Python syntax\n try:\n- val_as_float = float(val)\n- if str(val_as_float) == val:\n- return val_as_float\n+ val_as_obj = literal_eval(val)\n+ return val_as_obj\n except Exception:\n pass\n", "issue": "Use literal_eval for parsing configuration values\n## Current behavior\r\n*Please describe how the feature works today*\r\nCurrently, Prefect takes a [hardcoded stab](https://github.com/PrefectHQ/prefect/blob/master/src/prefect/configuration.py#L37) at converting strings to Python objects. We could replace this function with `ast.literal_eval()` and instantly gain a more flexible cast, including the ability to natively handle lists and dicts. \r\n\r\n\r\n\r\n## Proposed behavior\r\n*Please describe your proposed change to the current behavior*\r\n\r\nCurrently, setting `PREFECT__X__Y=\"['a', 'b', 3]\"` results in `prefect.config.x.y == \"['a', 'b', 3]\"`. This means some Prefect objects (thinking of environments, but also including the config itself) must re-parse config options. This is after we already make an attempt to coerce some values to known types. Replacing this with `ast.literal_eval()` would result in `prefect.config.x.y == ['a', 'b', 3]`.\r\n\r\n\r\n## Example\r\n*Please give an example of how the enhancement would be useful*\r\n\n", "before_files": [{"content": "import datetime\nimport os\nimport re\nfrom typing import Optional, Union, cast\n\nimport toml\nfrom box import Box\n\nfrom prefect.utilities import collections\n\nDEFAULT_CONFIG = os.path.join(os.path.dirname(__file__), \"config.toml\")\nUSER_CONFIG = os.getenv(\"PREFECT__USER_CONFIG_PATH\", \"~/.prefect/config.toml\")\nBACKEND_CONFIG = os.getenv(\"PREFECT__BACKEND_CONFIG_PATH\", \"~/.prefect/backend.toml\")\nENV_VAR_PREFIX = \"PREFECT\"\nINTERPOLATION_REGEX = re.compile(r\"\\${(.[^${}]*)}\")\n\n\nclass Config(Box):\n \"\"\"\n A config is a Box subclass\n \"\"\"\n\n def copy(self) -> \"Config\":\n \"\"\"\n Create a recursive copy of the config. Each level of the Config is a new Config object, so\n modifying keys won't affect the original Config object. 
However, values are not\n deep-copied, and mutations can affect the original.\n \"\"\"\n new_config = Config()\n for key, value in self.items():\n if isinstance(value, Config):\n value = value.copy()\n new_config[key] = value\n return new_config\n\n\ndef string_to_type(val: str) -> Union[bool, int, float, str]:\n \"\"\"\n Helper function for transforming string env var values into typed values.\n\n Maps:\n - \"true\" (any capitalization) to `True`\n - \"false\" (any capitalization) to `False`\n - integers to `int`\n - floats to `float`\n\n Arguments:\n - val (str): the string value of an environment variable\n\n Returns:\n Union[bool, int, float, str]: the type-cast env var value\n \"\"\"\n\n # bool\n if val.upper() == \"TRUE\":\n return True\n elif val.upper() == \"FALSE\":\n return False\n\n # int\n try:\n val_as_int = int(val)\n if str(val_as_int) == val:\n return val_as_int\n except Exception:\n pass\n\n # float\n try:\n val_as_float = float(val)\n if str(val_as_float) == val:\n return val_as_float\n except Exception:\n pass\n\n # return string value\n return val\n\n\ndef interpolate_env_vars(env_var: str) -> Optional[Union[bool, int, float, str]]:\n \"\"\"\n Expands (potentially nested) env vars by repeatedly applying\n `expandvars` and `expanduser` until interpolation stops having\n any effect.\n \"\"\"\n if not env_var or not isinstance(env_var, str):\n return env_var\n\n counter = 0\n\n while counter < 10:\n interpolated = os.path.expanduser(os.path.expandvars(str(env_var)))\n if interpolated == env_var:\n # if a change was made, apply string-to-type casts; otherwise leave alone\n # this is because we don't want to override TOML type-casting if this function\n # is applied to a non-interpolated value\n if counter > 1:\n interpolated = string_to_type(interpolated) # type: ignore\n return interpolated\n else:\n env_var = interpolated\n counter += 1\n\n return None\n\n\ndef create_user_config(dest_path: str, source_path: str = DEFAULT_CONFIG) -> None:\n \"\"\"\n Copies the default configuration to a user-customizable file at `dest_path`\n \"\"\"\n dest_path = cast(str, interpolate_env_vars(dest_path))\n if os.path.isfile(dest_path):\n raise ValueError(\"File already exists: {}\".format(dest_path))\n os.makedirs(os.path.dirname(dest_path), exist_ok=True)\n\n with open(dest_path, \"w\") as dest:\n with open(source_path, \"r\") as source:\n dest.write(source.read())\n\n\n# Process Config -------------------------------------------------------------\n\n\ndef process_task_defaults(config: Config) -> Config:\n \"\"\"\n Converts task defaults from basic types to Python objects like timedeltas\n\n Args:\n - config (Config): the configuration to modify\n \"\"\"\n # make sure defaults exists\n defaults = config.setdefault(\"tasks\", {}).setdefault(\"defaults\", {})\n\n # max_retries defaults to 0 if not set, False, or None\n if not defaults.setdefault(\"max_retries\", 0):\n defaults.max_retries = 0\n defaults.max_retries = defaults.get(\"max_retries\", 0) or 0\n\n # retry_delay defaults to None if not set - also check for False because TOML has no NULL\n if defaults.setdefault(\"retry_delay\", False) is False:\n defaults.retry_delay = None\n elif isinstance(defaults.retry_delay, int):\n defaults.retry_delay = datetime.timedelta(seconds=defaults.retry_delay)\n\n # timeout defaults to None if not set - also check for False because TOML has no NULL\n if defaults.setdefault(\"timeout\", False) is False:\n defaults.timeout = None\n elif isinstance(defaults.timeout, int):\n defaults.timeout = 
datetime.timedelta(seconds=defaults.timeout)\n\n return config\n\n\n# Validation ------------------------------------------------------------------\n\n\ndef validate_config(config: Config) -> None:\n \"\"\"\n Validates that the configuration file is valid.\n - keys do not shadow Config methods\n\n Note that this is performed when the config is first loaded, but not after.\n \"\"\"\n\n def check_valid_keys(config: Config) -> None:\n \"\"\"\n Recursively check that keys do not shadow methods of the Config object\n \"\"\"\n invalid_keys = dir(Config)\n for k, v in config.items():\n if k in invalid_keys:\n raise ValueError('Invalid config key: \"{}\"'.format(k))\n if isinstance(v, Config):\n check_valid_keys(v)\n\n check_valid_keys(config)\n\n\n# Load configuration ----------------------------------------------------------\n\n\ndef load_toml(path: str) -> dict:\n \"\"\"\n Loads a config dictionary from TOML\n \"\"\"\n return {\n key: value\n for key, value in toml.load(cast(str, interpolate_env_vars(path))).items()\n }\n\n\ndef interpolate_config(config: dict, env_var_prefix: str = None) -> Config:\n \"\"\"\n Processes a config dictionary, such as the one loaded from `load_toml`.\n \"\"\"\n\n # toml supports nested dicts, so we work with a flattened representation to do any\n # requested interpolation\n flat_config = collections.dict_to_flatdict(config)\n\n # --------------------- Interpolate env vars -----------------------\n # check if any env var sets a configuration value with the format:\n # [ENV_VAR_PREFIX]__[Section]__[Optional Sub-Sections...]__[Key] = Value\n # and if it does, add it to the config file.\n\n if env_var_prefix:\n\n for env_var, env_var_value in os.environ.items():\n if env_var.startswith(env_var_prefix + \"__\"):\n\n # strip the prefix off the env var\n env_var_option = env_var[len(env_var_prefix + \"__\") :]\n\n # make sure the resulting env var has at least one delimitied section and key\n if \"__\" not in env_var:\n continue\n\n # env vars with escaped characters are interpreted as literal \"\\\", which\n # Python helpfully escapes with a second \"\\\". This step makes sure that\n # escaped characters are properly interpreted.\n value = cast(str, env_var_value.encode().decode(\"unicode_escape\"))\n\n # place the env var in the flat config as a compound key\n if env_var_option.upper().startswith(\"CONTEXT__SECRETS\"):\n formatted_option = env_var_option.split(\"__\")\n formatted_option[:-1] = [\n val.lower() for val in formatted_option[:-1]\n ]\n config_option = collections.CompoundKey(formatted_option)\n else:\n config_option = collections.CompoundKey(\n env_var_option.lower().split(\"__\")\n )\n\n flat_config[config_option] = string_to_type(\n cast(str, interpolate_env_vars(value))\n )\n\n # interpolate any env vars referenced\n for k, v in list(flat_config.items()):\n flat_config[k] = interpolate_env_vars(v)\n\n # --------------------- Interpolate other config keys -----------------\n # TOML doesn't support references to other keys... 
but we do!\n # This has the potential to lead to nasty recursions, so we check at most 10 times.\n # we use a set called \"keys_to_check\" to track only the ones of interest, so we aren't\n # checking every key every time.\n\n keys_to_check = set(flat_config.keys())\n\n for _ in range(10):\n\n # iterate over every key and value to check if the value uses interpolation\n for k in list(keys_to_check):\n\n # if the value isn't a string, it can't be a reference, so we exit\n if not isinstance(flat_config[k], str):\n keys_to_check.remove(k)\n continue\n\n # see if the ${...} syntax was used in the value and exit if it wasn't\n match = INTERPOLATION_REGEX.search(flat_config[k])\n if not match:\n keys_to_check.remove(k)\n continue\n\n # the matched_string includes \"${}\"; the matched_key is just the inner value\n matched_string = match.group(0)\n matched_key = match.group(1)\n\n # get the referenced key from the config value\n ref_key = collections.CompoundKey(matched_key.split(\".\"))\n # get the value corresponding to the referenced key\n ref_value = flat_config.get(ref_key, \"\")\n\n # if the matched was the entire value, replace it with the interpolated value\n if flat_config[k] == matched_string:\n flat_config[k] = ref_value\n # if it was a partial match, then drop the interpolated value into the string\n else:\n flat_config[k] = flat_config[k].replace(\n matched_string, str(ref_value), 1\n )\n\n return cast(Config, collections.flatdict_to_dict(flat_config, dct_class=Config))\n\n\ndef load_configuration(\n path: str,\n user_config_path: str = None,\n backend_config_path: str = None,\n env_var_prefix: str = None,\n) -> Config:\n \"\"\"\n Loads a configuration from a known location.\n\n Args:\n - path (str): the path to the TOML configuration file\n - user_config_path (str): an optional path to a user config file. 
If a user config\n is provided, it will be used to update the main config prior to interpolation\n - env_var_prefix (str): any env vars matching this prefix will be used to create\n configuration values\n\n Returns:\n - Config\n \"\"\"\n\n # load default config\n default_config = load_toml(path)\n\n # load user config\n if user_config_path and os.path.isfile(str(interpolate_env_vars(user_config_path))):\n user_config = load_toml(user_config_path)\n # merge user config into default config\n default_config = cast(\n dict, collections.merge_dicts(default_config, user_config)\n )\n\n # load backend config\n if backend_config_path and os.path.isfile(\n str(interpolate_env_vars(backend_config_path))\n ):\n backend_config = load_toml(backend_config_path)\n # merge backend config into default config\n default_config = cast(\n dict, collections.merge_dicts(default_config, backend_config)\n )\n\n # interpolate after user config has already been merged\n config = interpolate_config(default_config, env_var_prefix=env_var_prefix)\n\n validate_config(config)\n return config\n\n\n# load prefect configuration\nconfig = load_configuration(\n path=DEFAULT_CONFIG,\n user_config_path=USER_CONFIG,\n backend_config_path=BACKEND_CONFIG,\n env_var_prefix=ENV_VAR_PREFIX,\n)\n\n# add task defaults\nconfig = process_task_defaults(config)\n", "path": "src/prefect/configuration.py"}], "after_files": [{"content": "import datetime\nimport os\nimport re\nfrom ast import literal_eval\nfrom typing import Optional, Union, cast\n\nimport toml\nfrom box import Box\n\nfrom prefect.utilities import collections\n\nDEFAULT_CONFIG = os.path.join(os.path.dirname(__file__), \"config.toml\")\nUSER_CONFIG = os.getenv(\"PREFECT__USER_CONFIG_PATH\", \"~/.prefect/config.toml\")\nBACKEND_CONFIG = os.getenv(\"PREFECT__BACKEND_CONFIG_PATH\", \"~/.prefect/backend.toml\")\nENV_VAR_PREFIX = \"PREFECT\"\nINTERPOLATION_REGEX = re.compile(r\"\\${(.[^${}]*)}\")\n\n\nclass Config(Box):\n \"\"\"\n A config is a Box subclass\n \"\"\"\n\n def copy(self) -> \"Config\":\n \"\"\"\n Create a recursive copy of the config. Each level of the Config is a new Config object, so\n modifying keys won't affect the original Config object. 
However, values are not\n deep-copied, and mutations can affect the original.\n \"\"\"\n new_config = Config()\n for key, value in self.items():\n if isinstance(value, Config):\n value = value.copy()\n new_config[key] = value\n return new_config\n\n\ndef string_to_type(val: str) -> Union[bool, int, float, str]:\n \"\"\"\n Helper function for transforming string env var values into typed values.\n\n Maps:\n - \"true\" (any capitalization) to `True`\n - \"false\" (any capitalization) to `False`\n - any other valid literal Python syntax interpretable by ast.literal_eval\n \n Arguments:\n - val (str): the string value of an environment variable\n\n Returns:\n Union[bool, int, float, str, dict, list, None, tuple]: the type-cast env var value\n \"\"\"\n\n # bool\n if val.upper() == \"TRUE\":\n return True\n elif val.upper() == \"FALSE\":\n return False\n\n # dicts, ints, floats, or any other literal Python syntax\n try:\n val_as_obj = literal_eval(val)\n return val_as_obj\n except Exception:\n pass\n\n # return string value\n return val\n\n\ndef interpolate_env_vars(env_var: str) -> Optional[Union[bool, int, float, str]]:\n \"\"\"\n Expands (potentially nested) env vars by repeatedly applying\n `expandvars` and `expanduser` until interpolation stops having\n any effect.\n \"\"\"\n if not env_var or not isinstance(env_var, str):\n return env_var\n\n counter = 0\n\n while counter < 10:\n interpolated = os.path.expanduser(os.path.expandvars(str(env_var)))\n if interpolated == env_var:\n # if a change was made, apply string-to-type casts; otherwise leave alone\n # this is because we don't want to override TOML type-casting if this function\n # is applied to a non-interpolated value\n if counter > 1:\n interpolated = string_to_type(interpolated) # type: ignore\n return interpolated\n else:\n env_var = interpolated\n counter += 1\n\n return None\n\n\ndef create_user_config(dest_path: str, source_path: str = DEFAULT_CONFIG) -> None:\n \"\"\"\n Copies the default configuration to a user-customizable file at `dest_path`\n \"\"\"\n dest_path = cast(str, interpolate_env_vars(dest_path))\n if os.path.isfile(dest_path):\n raise ValueError(\"File already exists: {}\".format(dest_path))\n os.makedirs(os.path.dirname(dest_path), exist_ok=True)\n\n with open(dest_path, \"w\") as dest:\n with open(source_path, \"r\") as source:\n dest.write(source.read())\n\n\n# Process Config -------------------------------------------------------------\n\n\ndef process_task_defaults(config: Config) -> Config:\n \"\"\"\n Converts task defaults from basic types to Python objects like timedeltas\n\n Args:\n - config (Config): the configuration to modify\n \"\"\"\n # make sure defaults exists\n defaults = config.setdefault(\"tasks\", {}).setdefault(\"defaults\", {})\n\n # max_retries defaults to 0 if not set, False, or None\n if not defaults.setdefault(\"max_retries\", 0):\n defaults.max_retries = 0\n defaults.max_retries = defaults.get(\"max_retries\", 0) or 0\n\n # retry_delay defaults to None if not set - also check for False because TOML has no NULL\n if defaults.setdefault(\"retry_delay\", False) is False:\n defaults.retry_delay = None\n elif isinstance(defaults.retry_delay, int):\n defaults.retry_delay = datetime.timedelta(seconds=defaults.retry_delay)\n\n # timeout defaults to None if not set - also check for False because TOML has no NULL\n if defaults.setdefault(\"timeout\", False) is False:\n defaults.timeout = None\n elif isinstance(defaults.timeout, int):\n defaults.timeout = 
datetime.timedelta(seconds=defaults.timeout)\n\n return config\n\n\n# Validation ------------------------------------------------------------------\n\n\ndef validate_config(config: Config) -> None:\n \"\"\"\n Validates that the configuration file is valid.\n - keys do not shadow Config methods\n\n Note that this is performed when the config is first loaded, but not after.\n \"\"\"\n\n def check_valid_keys(config: Config) -> None:\n \"\"\"\n Recursively check that keys do not shadow methods of the Config object\n \"\"\"\n invalid_keys = dir(Config)\n for k, v in config.items():\n if k in invalid_keys:\n raise ValueError('Invalid config key: \"{}\"'.format(k))\n if isinstance(v, Config):\n check_valid_keys(v)\n\n check_valid_keys(config)\n\n\n# Load configuration ----------------------------------------------------------\n\n\ndef load_toml(path: str) -> dict:\n \"\"\"\n Loads a config dictionary from TOML\n \"\"\"\n return {\n key: value\n for key, value in toml.load(cast(str, interpolate_env_vars(path))).items()\n }\n\n\ndef interpolate_config(config: dict, env_var_prefix: str = None) -> Config:\n \"\"\"\n Processes a config dictionary, such as the one loaded from `load_toml`.\n \"\"\"\n\n # toml supports nested dicts, so we work with a flattened representation to do any\n # requested interpolation\n flat_config = collections.dict_to_flatdict(config)\n\n # --------------------- Interpolate env vars -----------------------\n # check if any env var sets a configuration value with the format:\n # [ENV_VAR_PREFIX]__[Section]__[Optional Sub-Sections...]__[Key] = Value\n # and if it does, add it to the config file.\n\n if env_var_prefix:\n\n for env_var, env_var_value in os.environ.items():\n if env_var.startswith(env_var_prefix + \"__\"):\n\n # strip the prefix off the env var\n env_var_option = env_var[len(env_var_prefix + \"__\") :]\n\n # make sure the resulting env var has at least one delimitied section and key\n if \"__\" not in env_var:\n continue\n\n # env vars with escaped characters are interpreted as literal \"\\\", which\n # Python helpfully escapes with a second \"\\\". This step makes sure that\n # escaped characters are properly interpreted.\n value = cast(str, env_var_value.encode().decode(\"unicode_escape\"))\n\n # place the env var in the flat config as a compound key\n if env_var_option.upper().startswith(\"CONTEXT__SECRETS\"):\n formatted_option = env_var_option.split(\"__\")\n formatted_option[:-1] = [\n val.lower() for val in formatted_option[:-1]\n ]\n config_option = collections.CompoundKey(formatted_option)\n else:\n config_option = collections.CompoundKey(\n env_var_option.lower().split(\"__\")\n )\n\n flat_config[config_option] = string_to_type(\n cast(str, interpolate_env_vars(value))\n )\n\n # interpolate any env vars referenced\n for k, v in list(flat_config.items()):\n flat_config[k] = interpolate_env_vars(v)\n\n # --------------------- Interpolate other config keys -----------------\n # TOML doesn't support references to other keys... 
but we do!\n # This has the potential to lead to nasty recursions, so we check at most 10 times.\n # we use a set called \"keys_to_check\" to track only the ones of interest, so we aren't\n # checking every key every time.\n\n keys_to_check = set(flat_config.keys())\n\n for _ in range(10):\n\n # iterate over every key and value to check if the value uses interpolation\n for k in list(keys_to_check):\n\n # if the value isn't a string, it can't be a reference, so we exit\n if not isinstance(flat_config[k], str):\n keys_to_check.remove(k)\n continue\n\n # see if the ${...} syntax was used in the value and exit if it wasn't\n match = INTERPOLATION_REGEX.search(flat_config[k])\n if not match:\n keys_to_check.remove(k)\n continue\n\n # the matched_string includes \"${}\"; the matched_key is just the inner value\n matched_string = match.group(0)\n matched_key = match.group(1)\n\n # get the referenced key from the config value\n ref_key = collections.CompoundKey(matched_key.split(\".\"))\n # get the value corresponding to the referenced key\n ref_value = flat_config.get(ref_key, \"\")\n\n # if the matched was the entire value, replace it with the interpolated value\n if flat_config[k] == matched_string:\n flat_config[k] = ref_value\n # if it was a partial match, then drop the interpolated value into the string\n else:\n flat_config[k] = flat_config[k].replace(\n matched_string, str(ref_value), 1\n )\n\n return cast(Config, collections.flatdict_to_dict(flat_config, dct_class=Config))\n\n\ndef load_configuration(\n path: str,\n user_config_path: str = None,\n backend_config_path: str = None,\n env_var_prefix: str = None,\n) -> Config:\n \"\"\"\n Loads a configuration from a known location.\n\n Args:\n - path (str): the path to the TOML configuration file\n - user_config_path (str): an optional path to a user config file. If a user config\n is provided, it will be used to update the main config prior to interpolation\n - env_var_prefix (str): any env vars matching this prefix will be used to create\n configuration values\n\n Returns:\n - Config\n \"\"\"\n\n # load default config\n default_config = load_toml(path)\n\n # load user config\n if user_config_path and os.path.isfile(str(interpolate_env_vars(user_config_path))):\n user_config = load_toml(user_config_path)\n # merge user config into default config\n default_config = cast(\n dict, collections.merge_dicts(default_config, user_config)\n )\n\n # load backend config\n if backend_config_path and os.path.isfile(\n str(interpolate_env_vars(backend_config_path))\n ):\n backend_config = load_toml(backend_config_path)\n # merge backend config into default config\n default_config = cast(\n dict, collections.merge_dicts(default_config, backend_config)\n )\n\n # interpolate after user config has already been merged\n config = interpolate_config(default_config, env_var_prefix=env_var_prefix)\n\n validate_config(config)\n return config\n\n\n# load prefect configuration\nconfig = load_configuration(\n path=DEFAULT_CONFIG,\n user_config_path=USER_CONFIG,\n backend_config_path=BACKEND_CONFIG,\n env_var_prefix=ENV_VAR_PREFIX,\n)\n\n# add task defaults\nconfig = process_task_defaults(config)\n", "path": "src/prefect/configuration.py"}]}
| 3,988 | 378 |
gh_patches_debug_35988
|
rasdani/github-patches
|
git_diff
|
PlasmaPy__PlasmaPy-2175
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Incompatibility between `@angular_freq_to_hz` and var-keyword arguments
### Bug description
While trying to decorate `gyrofrequency` with `@particle_input` in #2026, I found an issue with `@angular_freq_to_hz`. It appears that `@angular_freq_to_hz` cannot decorate functions that accept var-keyword arguments.
### Expected outcome
We should be able to use `@angular_freq_to_hz` to decorate functions with var-keyword parameters.
### Minimal complete verifiable example
When declaring this function:
```Python
from plasmapy.utils.decorators import angular_freq_to_hz
@angular_freq_to_hz
def f(**kwargs):
return kwargs
```
I get:
```python
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[41], line 1
----> 1 @angular_freq_to_hz
2 def f(**kwargs):
3 return kwargs
File ~/Projects/PlasmaPy/plasmapy/utils/decorators/converter.py:101, in angular_freq_to_hz(fn)
97 new_params = sig.parameters.copy()
98 new_params["to_hz"] = inspect.Parameter(
99 "to_hz", inspect.Parameter.POSITIONAL_OR_KEYWORD, default=False
100 )
--> 101 new_sig = inspect.Signature(
102 parameters=new_params.values(), return_annotation=sig.return_annotation
103 )
104 fn.__signature__ = new_sig
106 @preserve_signature
107 @functools.wraps(fn)
108 def wrapper(*args, to_hz=False, **kwargs):
File ~/miniconda3/envs/pldev/lib/python3.11/inspect.py:2994, in Signature.__init__(self, parameters, return_annotation, __validate_parameters__)
2988 msg = (
2989 'wrong parameter order: {} parameter before {} '
2990 'parameter'
2991 )
2992 msg = msg.format(top_kind.description,
2993 kind.description)
-> 2994 raise ValueError(msg)
2995 elif kind > top_kind:
2996 kind_defaults = False
ValueError: wrong parameter order: variadic keyword parameter before positional or keyword parameter
```
### Package versions
Development branch
### Additional context
This is medium priority to address since it's blocking #2026 and possibly also #2022.
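A minimal illustrative sketch of a signature rebuild that avoids the ordering error (the helper name is hypothetical): the new `to_hz` parameter is added as keyword-only, and any var-keyword parameter is re-appended last, which `inspect.Signature` accepts.

```python
import inspect


def signature_with_to_hz(fn):
    """Return fn's signature with a keyword-only ``to_hz`` inserted before **kwargs."""
    sig = inspect.signature(fn)
    params = [p for p in sig.parameters.values() if p.kind != p.VAR_KEYWORD]
    var_kw = [p for p in sig.parameters.values() if p.kind == p.VAR_KEYWORD]
    params.append(
        inspect.Parameter("to_hz", inspect.Parameter.KEYWORD_ONLY, default=False)
    )
    params.extend(var_kw)  # the var-keyword parameter must stay last
    return sig.replace(parameters=params)


def f(**kwargs):
    return kwargs


print(signature_with_to_hz(f))  # (*, to_hz=False, **kwargs)
```

Making `to_hz` keyword-only rather than positional-or-keyword is what lets it coexist with `**kwargs` without tripping the "variadic keyword parameter before positional or keyword parameter" check.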
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `plasmapy/utils/decorators/converter.py`
Content:
```
1 """Decorators to convert units."""
2
3 __all__ = ["angular_freq_to_hz"]
4
5 import astropy.units as u
6 import functools
7 import inspect
8
9 from plasmapy.utils.decorators.helpers import preserve_signature
10
11
12 def angular_freq_to_hz(fn):
13 """
14 A decorator that enables a function to convert its return
15 value from angular frequency (rad/s) to frequency (Hz).
16
17 A kwarg ``to_hz`` is added to the function's signature, with a
18 default value of `False`. The keyword is also added to the
19 function's docstring under the **"Other Parameters"** heading.
20
21 Parameters
22 ----------
23 fn : function
24 The function to be decorated.
25
26 Raises
27 ------
28 ValueError
29 If ``fn`` has already defined a kwarg ``to_hz``.
30
31 Returns
32 -------
33 callable
34 The decorated function.
35
36 Notes
37 -----
38 * If `~plasmapy.utils.decorators.converter.angular_freq_to_hz` is
39 used with decorator
40 :func:`~plasmapy.utils.decorators.validators.validate_quantities`,
41 then `angular_freq_to_hz` should be used inside
42 :func:`~plasmapy.utils.decorators.validators.validate_quantities`
43 but special consideration is needed for setup. The following is
44 an example of an appropriate setup::
45
46 import astropy.units as u
47 from plasmapy.utils.decorators.converter import angular_freq_to_hz
48 from plasmapy.utils.decorators.validators import validate_quantities
49
50 @validate_quantities(validations_on_return={'units': [u.rad / u.s, u.Hz]})
51 @angular_freq_to_hz
52 def foo(x: u.rad / u.s) -> u.rad / u.s
53 return x
54
55 Adding ``u.Hz`` to the allowed units allows the converted
56 quantity to pass the validations.
57
58 Examples
59 --------
60 >>> import astropy.units as u
61 >>> from plasmapy.utils.decorators.converter import angular_freq_to_hz
62 >>>
63 >>> @angular_freq_to_hz
64 ... def foo(x):
65 ... return x
66 >>>
67 >>> foo(5 * u.rad / u.s, to_hz=True)
68 <Quantity 0.79577472 Hz>
69 >>>
70 >>> foo(-1 * u.rad / u.s, to_hz=True)
71 <Quantity -0.15915494 Hz>
72
73 Decoration also works with methods
74
75 >>> class Foo:
76 ... def __init__(self, x):
77 ... self.x = x
78 ...
79 ... @angular_freq_to_hz
80 ... def bar(self):
81 ... return self.x
82 >>>
83 >>> foo = Foo(0.5 * u.rad / u.s)
84 >>> foo.bar(to_hz=True)
85 <Quantity 0.07957747 Hz>
86
87 """
88 # raise exception if fn uses the 'to_hz' kwarg
89 sig = inspect.signature(fn)
90 if "to_hz" in sig.parameters:
91 raise ValueError(
92 f"Wrapped function '{fn.__name__}' can not use keyword 'to_hz'."
93 f" Keyword reserved for decorator functionality."
94 )
95
96 # make new signature for fn
97 new_params = sig.parameters.copy()
98 new_params["to_hz"] = inspect.Parameter(
99 "to_hz", inspect.Parameter.POSITIONAL_OR_KEYWORD, default=False
100 )
101 new_sig = inspect.Signature(
102 parameters=new_params.values(), return_annotation=sig.return_annotation
103 )
104 fn.__signature__ = new_sig
105
106 @preserve_signature
107 @functools.wraps(fn)
108 def wrapper(*args, to_hz=False, **kwargs):
109 _result = fn(*args, **kwargs)
110 if to_hz:
111 return _result.to(u.Hz, equivalencies=[(u.cy / u.s, u.Hz)])
112 return _result
113
114 added_doc_bit = """
115 Other Parameters
116 ----------------
117 to_hz: bool
118 Set `True` to to convert function output from angular frequency to Hz
119 """
120 if wrapper.__doc__ is not None:
121 wrapper.__doc__ += added_doc_bit
122 else:
123 wrapper.__doc__ = added_doc_bit
124
125 return wrapper
126
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/plasmapy/utils/decorators/converter.py b/plasmapy/utils/decorators/converter.py
--- a/plasmapy/utils/decorators/converter.py
+++ b/plasmapy/utils/decorators/converter.py
@@ -3,10 +3,8 @@
__all__ = ["angular_freq_to_hz"]
import astropy.units as u
-import functools
import inspect
-
-from plasmapy.utils.decorators.helpers import preserve_signature
+import wrapt
def angular_freq_to_hz(fn):
@@ -85,7 +83,6 @@
<Quantity 0.07957747 Hz>
"""
- # raise exception if fn uses the 'to_hz' kwarg
sig = inspect.signature(fn)
if "to_hz" in sig.parameters:
raise ValueError(
@@ -94,32 +91,45 @@
)
# make new signature for fn
- new_params = sig.parameters.copy()
- new_params["to_hz"] = inspect.Parameter(
- "to_hz", inspect.Parameter.POSITIONAL_OR_KEYWORD, default=False
+ new_params = []
+ var_keyword_param = None
+ for param in sig.parameters.values():
+ if param.kind == param.VAR_KEYWORD:
+ var_keyword_param = param
+ else:
+ new_params.append(param)
+
+ new_params.append(
+ inspect.Parameter("to_hz", inspect.Parameter.KEYWORD_ONLY, default=False)
)
+
+ if var_keyword_param:
+ new_params.append(var_keyword_param)
+
new_sig = inspect.Signature(
- parameters=new_params.values(), return_annotation=sig.return_annotation
+ parameters=new_params, return_annotation=sig.return_annotation
)
fn.__signature__ = new_sig
- @preserve_signature
- @functools.wraps(fn)
- def wrapper(*args, to_hz=False, **kwargs):
+ @wrapt.decorator
+ def wrapper(fn, instance, args, kwargs): # noqa: ARG001
+ to_hz = kwargs.pop("to_hz", False)
_result = fn(*args, **kwargs)
if to_hz:
return _result.to(u.Hz, equivalencies=[(u.cy / u.s, u.Hz)])
return _result
+ fn = wrapper(fn)
+
added_doc_bit = """
Other Parameters
----------------
to_hz: bool
- Set `True` to to convert function output from angular frequency to Hz
+ Set `True` to convert function output from angular frequency to Hz
"""
- if wrapper.__doc__ is not None:
- wrapper.__doc__ += added_doc_bit
+ if fn.__doc__ is not None:
+ fn.__doc__ += added_doc_bit
else:
- wrapper.__doc__ = added_doc_bit
+ fn.__doc__ = added_doc_bit
- return wrapper
+ return fn
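With the wrapt-based rewrite above, `to_hz` is inserted as a keyword-only parameter ahead of any `**kwargs`, so the ordering check no longer fires. A rough usage sketch, assuming the patched build of plasmapy is installed:

```python
import astropy.units as u
from plasmapy.utils.decorators.converter import angular_freq_to_hz

@angular_freq_to_hz
def spin(omega, **extra):   # the **extra is what used to raise ValueError
    return omega

print(spin(5 * u.rad / u.s, to_hz=True))   # ~0.7957747 Hz, i.e. 5 / (2*pi)
```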
|
{"golden_diff": "diff --git a/plasmapy/utils/decorators/converter.py b/plasmapy/utils/decorators/converter.py\n--- a/plasmapy/utils/decorators/converter.py\n+++ b/plasmapy/utils/decorators/converter.py\n@@ -3,10 +3,8 @@\n __all__ = [\"angular_freq_to_hz\"]\n \n import astropy.units as u\n-import functools\n import inspect\n-\n-from plasmapy.utils.decorators.helpers import preserve_signature\n+import wrapt\n \n \n def angular_freq_to_hz(fn):\n@@ -85,7 +83,6 @@\n <Quantity 0.07957747 Hz>\n \n \"\"\"\n- # raise exception if fn uses the 'to_hz' kwarg\n sig = inspect.signature(fn)\n if \"to_hz\" in sig.parameters:\n raise ValueError(\n@@ -94,32 +91,45 @@\n )\n \n # make new signature for fn\n- new_params = sig.parameters.copy()\n- new_params[\"to_hz\"] = inspect.Parameter(\n- \"to_hz\", inspect.Parameter.POSITIONAL_OR_KEYWORD, default=False\n+ new_params = []\n+ var_keyword_param = None\n+ for param in sig.parameters.values():\n+ if param.kind == param.VAR_KEYWORD:\n+ var_keyword_param = param\n+ else:\n+ new_params.append(param)\n+\n+ new_params.append(\n+ inspect.Parameter(\"to_hz\", inspect.Parameter.KEYWORD_ONLY, default=False)\n )\n+\n+ if var_keyword_param:\n+ new_params.append(var_keyword_param)\n+\n new_sig = inspect.Signature(\n- parameters=new_params.values(), return_annotation=sig.return_annotation\n+ parameters=new_params, return_annotation=sig.return_annotation\n )\n fn.__signature__ = new_sig\n \n- @preserve_signature\n- @functools.wraps(fn)\n- def wrapper(*args, to_hz=False, **kwargs):\n+ @wrapt.decorator\n+ def wrapper(fn, instance, args, kwargs): # noqa: ARG001\n+ to_hz = kwargs.pop(\"to_hz\", False)\n _result = fn(*args, **kwargs)\n if to_hz:\n return _result.to(u.Hz, equivalencies=[(u.cy / u.s, u.Hz)])\n return _result\n \n+ fn = wrapper(fn)\n+\n added_doc_bit = \"\"\"\n Other Parameters\n ----------------\n to_hz: bool\n- Set `True` to to convert function output from angular frequency to Hz\n+ Set `True` to convert function output from angular frequency to Hz\n \"\"\"\n- if wrapper.__doc__ is not None:\n- wrapper.__doc__ += added_doc_bit\n+ if fn.__doc__ is not None:\n+ fn.__doc__ += added_doc_bit\n else:\n- wrapper.__doc__ = added_doc_bit\n+ fn.__doc__ = added_doc_bit\n \n- return wrapper\n+ return fn\n", "issue": "Incompatibility between `@angular_freq_to_hz` and var-keyword arguments\n### Bug description\r\n\r\nWhile trying to decorate `gyrofrequency` with `@particle_input` in #2026, I found an issue with `@angular_freq_to_hz`. 
It appears that `@angular_freq_to_hz` cannot decorate functions that accept var-keyword arguments.\r\n\r\n### Expected outcome\r\n\r\nWe should be able to use `@angular_freq_to_hz` to decorate functions with var-keyword parameters.\r\n\r\n### Minimal complete verifiable example\r\n\r\nWhen declaring this function:\r\n\r\n```Python\r\nfrom plasmapy.utils.decorators import angular_freq_to_hz\r\n@angular_freq_to_hz\r\ndef f(**kwargs):\r\n return kwargs\r\n```\r\nI get:\r\n```python\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\nCell In[41], line 1\r\n----> 1 @angular_freq_to_hz\r\n 2 def f(**kwargs):\r\n 3 return kwargs\r\n\r\nFile ~/Projects/PlasmaPy/plasmapy/utils/decorators/converter.py:101, in angular_freq_to_hz(fn)\r\n 97 new_params = sig.parameters.copy()\r\n 98 new_params[\"to_hz\"] = inspect.Parameter(\r\n 99 \"to_hz\", inspect.Parameter.POSITIONAL_OR_KEYWORD, default=False\r\n 100 )\r\n--> 101 new_sig = inspect.Signature(\r\n 102 parameters=new_params.values(), return_annotation=sig.return_annotation\r\n 103 )\r\n 104 fn.__signature__ = new_sig\r\n 106 @preserve_signature\r\n 107 @functools.wraps(fn)\r\n 108 def wrapper(*args, to_hz=False, **kwargs):\r\n\r\nFile ~/miniconda3/envs/pldev/lib/python3.11/inspect.py:2994, in Signature.__init__(self, parameters, return_annotation, __validate_parameters__)\r\n 2988 msg = (\r\n 2989 'wrong parameter order: {} parameter before {} '\r\n 2990 'parameter'\r\n 2991 )\r\n 2992 msg = msg.format(top_kind.description,\r\n 2993 kind.description)\r\n-> 2994 raise ValueError(msg)\r\n 2995 elif kind > top_kind:\r\n 2996 kind_defaults = False\r\n\r\nValueError: wrong parameter order: variadic keyword parameter before positional or keyword parameter\r\n```\r\n\r\n\r\n### Package versions\r\n\r\nDevelopment branch \r\n\r\n### Additional context\r\n\r\nThis is medium priority to address since it's blocking #2026 and possibly also #2022.\n", "before_files": [{"content": "\"\"\"Decorators to convert units.\"\"\"\n\n__all__ = [\"angular_freq_to_hz\"]\n\nimport astropy.units as u\nimport functools\nimport inspect\n\nfrom plasmapy.utils.decorators.helpers import preserve_signature\n\n\ndef angular_freq_to_hz(fn):\n \"\"\"\n A decorator that enables a function to convert its return\n value from angular frequency (rad/s) to frequency (Hz).\n\n A kwarg ``to_hz`` is added to the function's signature, with a\n default value of `False`. The keyword is also added to the\n function's docstring under the **\"Other Parameters\"** heading.\n\n Parameters\n ----------\n fn : function\n The function to be decorated.\n\n Raises\n ------\n ValueError\n If ``fn`` has already defined a kwarg ``to_hz``.\n\n Returns\n -------\n callable\n The decorated function.\n\n Notes\n -----\n * If `~plasmapy.utils.decorators.converter.angular_freq_to_hz` is\n used with decorator\n :func:`~plasmapy.utils.decorators.validators.validate_quantities`,\n then `angular_freq_to_hz` should be used inside\n :func:`~plasmapy.utils.decorators.validators.validate_quantities`\n but special consideration is needed for setup. 
The following is\n an example of an appropriate setup::\n\n import astropy.units as u\n from plasmapy.utils.decorators.converter import angular_freq_to_hz\n from plasmapy.utils.decorators.validators import validate_quantities\n\n @validate_quantities(validations_on_return={'units': [u.rad / u.s, u.Hz]})\n @angular_freq_to_hz\n def foo(x: u.rad / u.s) -> u.rad / u.s\n return x\n\n Adding ``u.Hz`` to the allowed units allows the converted\n quantity to pass the validations.\n\n Examples\n --------\n >>> import astropy.units as u\n >>> from plasmapy.utils.decorators.converter import angular_freq_to_hz\n >>>\n >>> @angular_freq_to_hz\n ... def foo(x):\n ... return x\n >>>\n >>> foo(5 * u.rad / u.s, to_hz=True)\n <Quantity 0.79577472 Hz>\n >>>\n >>> foo(-1 * u.rad / u.s, to_hz=True)\n <Quantity -0.15915494 Hz>\n\n Decoration also works with methods\n\n >>> class Foo:\n ... def __init__(self, x):\n ... self.x = x\n ...\n ... @angular_freq_to_hz\n ... def bar(self):\n ... return self.x\n >>>\n >>> foo = Foo(0.5 * u.rad / u.s)\n >>> foo.bar(to_hz=True)\n <Quantity 0.07957747 Hz>\n\n \"\"\"\n # raise exception if fn uses the 'to_hz' kwarg\n sig = inspect.signature(fn)\n if \"to_hz\" in sig.parameters:\n raise ValueError(\n f\"Wrapped function '{fn.__name__}' can not use keyword 'to_hz'.\"\n f\" Keyword reserved for decorator functionality.\"\n )\n\n # make new signature for fn\n new_params = sig.parameters.copy()\n new_params[\"to_hz\"] = inspect.Parameter(\n \"to_hz\", inspect.Parameter.POSITIONAL_OR_KEYWORD, default=False\n )\n new_sig = inspect.Signature(\n parameters=new_params.values(), return_annotation=sig.return_annotation\n )\n fn.__signature__ = new_sig\n\n @preserve_signature\n @functools.wraps(fn)\n def wrapper(*args, to_hz=False, **kwargs):\n _result = fn(*args, **kwargs)\n if to_hz:\n return _result.to(u.Hz, equivalencies=[(u.cy / u.s, u.Hz)])\n return _result\n\n added_doc_bit = \"\"\"\n Other Parameters\n ----------------\n to_hz: bool\n Set `True` to to convert function output from angular frequency to Hz\n \"\"\"\n if wrapper.__doc__ is not None:\n wrapper.__doc__ += added_doc_bit\n else:\n wrapper.__doc__ = added_doc_bit\n\n return wrapper\n", "path": "plasmapy/utils/decorators/converter.py"}], "after_files": [{"content": "\"\"\"Decorators to convert units.\"\"\"\n\n__all__ = [\"angular_freq_to_hz\"]\n\nimport astropy.units as u\nimport inspect\nimport wrapt\n\n\ndef angular_freq_to_hz(fn):\n \"\"\"\n A decorator that enables a function to convert its return\n value from angular frequency (rad/s) to frequency (Hz).\n\n A kwarg ``to_hz`` is added to the function's signature, with a\n default value of `False`. The keyword is also added to the\n function's docstring under the **\"Other Parameters\"** heading.\n\n Parameters\n ----------\n fn : function\n The function to be decorated.\n\n Raises\n ------\n ValueError\n If ``fn`` has already defined a kwarg ``to_hz``.\n\n Returns\n -------\n callable\n The decorated function.\n\n Notes\n -----\n * If `~plasmapy.utils.decorators.converter.angular_freq_to_hz` is\n used with decorator\n :func:`~plasmapy.utils.decorators.validators.validate_quantities`,\n then `angular_freq_to_hz` should be used inside\n :func:`~plasmapy.utils.decorators.validators.validate_quantities`\n but special consideration is needed for setup. 
The following is\n an example of an appropriate setup::\n\n import astropy.units as u\n from plasmapy.utils.decorators.converter import angular_freq_to_hz\n from plasmapy.utils.decorators.validators import validate_quantities\n\n @validate_quantities(validations_on_return={'units': [u.rad / u.s, u.Hz]})\n @angular_freq_to_hz\n def foo(x: u.rad / u.s) -> u.rad / u.s\n return x\n\n Adding ``u.Hz`` to the allowed units allows the converted\n quantity to pass the validations.\n\n Examples\n --------\n >>> import astropy.units as u\n >>> from plasmapy.utils.decorators.converter import angular_freq_to_hz\n >>>\n >>> @angular_freq_to_hz\n ... def foo(x):\n ... return x\n >>>\n >>> foo(5 * u.rad / u.s, to_hz=True)\n <Quantity 0.79577472 Hz>\n >>>\n >>> foo(-1 * u.rad / u.s, to_hz=True)\n <Quantity -0.15915494 Hz>\n\n Decoration also works with methods\n\n >>> class Foo:\n ... def __init__(self, x):\n ... self.x = x\n ...\n ... @angular_freq_to_hz\n ... def bar(self):\n ... return self.x\n >>>\n >>> foo = Foo(0.5 * u.rad / u.s)\n >>> foo.bar(to_hz=True)\n <Quantity 0.07957747 Hz>\n\n \"\"\"\n sig = inspect.signature(fn)\n if \"to_hz\" in sig.parameters:\n raise ValueError(\n f\"Wrapped function '{fn.__name__}' can not use keyword 'to_hz'.\"\n f\" Keyword reserved for decorator functionality.\"\n )\n\n # make new signature for fn\n new_params = []\n var_keyword_param = None\n for param in sig.parameters.values():\n if param.kind == param.VAR_KEYWORD:\n var_keyword_param = param\n else:\n new_params.append(param)\n\n new_params.append(\n inspect.Parameter(\"to_hz\", inspect.Parameter.KEYWORD_ONLY, default=False)\n )\n\n if var_keyword_param:\n new_params.append(var_keyword_param)\n\n new_sig = inspect.Signature(\n parameters=new_params, return_annotation=sig.return_annotation\n )\n fn.__signature__ = new_sig\n\n @wrapt.decorator\n def wrapper(fn, instance, args, kwargs): # noqa: ARG001\n to_hz = kwargs.pop(\"to_hz\", False)\n _result = fn(*args, **kwargs)\n if to_hz:\n return _result.to(u.Hz, equivalencies=[(u.cy / u.s, u.Hz)])\n return _result\n\n fn = wrapper(fn)\n\n added_doc_bit = \"\"\"\n Other Parameters\n ----------------\n to_hz: bool\n Set `True` to convert function output from angular frequency to Hz\n \"\"\"\n if fn.__doc__ is not None:\n fn.__doc__ += added_doc_bit\n else:\n fn.__doc__ = added_doc_bit\n\n return fn\n", "path": "plasmapy/utils/decorators/converter.py"}]}
| num_tokens: 2,025 | num_tokens_diff: 647 |
problem_id: gh_patches_debug_3596 | source: rasdani/github-patches | task_type: git_diff | in_source_id: liqd__a4-meinberlin-2170
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Identity spoofing via secondary email
See https://github.com/pennersr/django-allauth/issues/2265
cc: @CarolingerSeilchenspringer @MagdaN @fuzzylogic2000
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `meinberlin/apps/users/adapters.py`
Content:
```
1 import re
2 from urllib.parse import quote
3
4 from allauth.account.adapter import DefaultAccountAdapter
5 from django.conf import settings
6 from django.utils.http import is_safe_url
7
8 from adhocracy4.emails.mixins import SyncEmailMixin
9 from meinberlin.apps.contrib.emails import Email
10 from meinberlin.apps.users import USERNAME_INVALID_MESSAGE
11 from meinberlin.apps.users import USERNAME_REGEX
12
13
14 class UserAccountEmail(SyncEmailMixin, Email):
15 def get_receivers(self):
16 return [self.object]
17
18 @property
19 def template_name(self):
20 return self.kwargs['template_name']
21
22 def get_context(self):
23 context = super().get_context()
24 context['contact_email'] = settings.CONTACT_EMAIL
25 return context
26
27
28 class AccountAdapter(DefaultAccountAdapter):
29 username_regex = re.compile(USERNAME_REGEX)
30 error_messages = dict(
31 DefaultAccountAdapter.error_messages,
32 invalid_username=USERNAME_INVALID_MESSAGE
33 )
34
35 def get_email_confirmation_url(self, request, emailconfirmation):
36 url = super().get_email_confirmation_url(request, emailconfirmation)
37 if 'next' in request.POST and is_safe_url(request.POST['next']):
38 return '{}?next={}'.format(url, quote(request.POST['next']))
39 else:
40 return url
41
42 def send_mail(self, template_prefix, email, context):
43 user = context['user']
44 return UserAccountEmail.send(
45 user,
46 template_name=template_prefix,
47 **context
48 )
49
50 def get_email_confirmation_redirect_url(self, request):
51 if 'next' in request.GET and is_safe_url(request.GET['next']):
52 return request.GET['next']
53 else:
54 return super().get_email_confirmation_redirect_url(request)
55
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/meinberlin/apps/users/adapters.py b/meinberlin/apps/users/adapters.py
--- a/meinberlin/apps/users/adapters.py
+++ b/meinberlin/apps/users/adapters.py
@@ -40,9 +40,8 @@
return url
def send_mail(self, template_prefix, email, context):
- user = context['user']
return UserAccountEmail.send(
- user,
+ email,
template_name=template_prefix,
**context
)
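Judging from the diff, the adapter was resolving the recipient from `context['user']` (effectively the account's primary inbox) instead of the address allauth is actually confirming, which is what made the secondary-address confirmation spoofable. A self-contained toy model of the recipient change; the names here are illustrative, not meinberlin's real API:

```python
class Outbox:
    # stand-in for UserAccountEmail.send(): just records who gets mailed
    def __init__(self):
        self.sent_to = []

    def send(self, receiver, **kwargs):
        self.sent_to.append(receiver)

outbox = Outbox()
context = {"user": "[email protected]"}   # requesting account's primary address
email = "[email protected]"              # unverified secondary address being confirmed

outbox.send(context["user"], template_name="confirm")  # pre-patch recipient
outbox.send(email, template_name="confirm")            # post-patch recipient
print(outbox.sent_to)  # ['[email protected]', '[email protected]']
```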
|
{"golden_diff": "diff --git a/meinberlin/apps/users/adapters.py b/meinberlin/apps/users/adapters.py\n--- a/meinberlin/apps/users/adapters.py\n+++ b/meinberlin/apps/users/adapters.py\n@@ -40,9 +40,8 @@\n return url\n \n def send_mail(self, template_prefix, email, context):\n- user = context['user']\n return UserAccountEmail.send(\n- user,\n+ email,\n template_name=template_prefix,\n **context\n )\n", "issue": "Identity spoofing via secondary email\nSee https://github.com/pennersr/django-allauth/issues/2265\r\n\r\ncc: @CarolingerSeilchenspringer @MagdaN @fuzzylogic2000 \n", "before_files": [{"content": "import re\nfrom urllib.parse import quote\n\nfrom allauth.account.adapter import DefaultAccountAdapter\nfrom django.conf import settings\nfrom django.utils.http import is_safe_url\n\nfrom adhocracy4.emails.mixins import SyncEmailMixin\nfrom meinberlin.apps.contrib.emails import Email\nfrom meinberlin.apps.users import USERNAME_INVALID_MESSAGE\nfrom meinberlin.apps.users import USERNAME_REGEX\n\n\nclass UserAccountEmail(SyncEmailMixin, Email):\n def get_receivers(self):\n return [self.object]\n\n @property\n def template_name(self):\n return self.kwargs['template_name']\n\n def get_context(self):\n context = super().get_context()\n context['contact_email'] = settings.CONTACT_EMAIL\n return context\n\n\nclass AccountAdapter(DefaultAccountAdapter):\n username_regex = re.compile(USERNAME_REGEX)\n error_messages = dict(\n DefaultAccountAdapter.error_messages,\n invalid_username=USERNAME_INVALID_MESSAGE\n )\n\n def get_email_confirmation_url(self, request, emailconfirmation):\n url = super().get_email_confirmation_url(request, emailconfirmation)\n if 'next' in request.POST and is_safe_url(request.POST['next']):\n return '{}?next={}'.format(url, quote(request.POST['next']))\n else:\n return url\n\n def send_mail(self, template_prefix, email, context):\n user = context['user']\n return UserAccountEmail.send(\n user,\n template_name=template_prefix,\n **context\n )\n\n def get_email_confirmation_redirect_url(self, request):\n if 'next' in request.GET and is_safe_url(request.GET['next']):\n return request.GET['next']\n else:\n return super().get_email_confirmation_redirect_url(request)\n", "path": "meinberlin/apps/users/adapters.py"}], "after_files": [{"content": "import re\nfrom urllib.parse import quote\n\nfrom allauth.account.adapter import DefaultAccountAdapter\nfrom django.conf import settings\nfrom django.utils.http import is_safe_url\n\nfrom adhocracy4.emails.mixins import SyncEmailMixin\nfrom meinberlin.apps.contrib.emails import Email\nfrom meinberlin.apps.users import USERNAME_INVALID_MESSAGE\nfrom meinberlin.apps.users import USERNAME_REGEX\n\n\nclass UserAccountEmail(SyncEmailMixin, Email):\n def get_receivers(self):\n return [self.object]\n\n @property\n def template_name(self):\n return self.kwargs['template_name']\n\n def get_context(self):\n context = super().get_context()\n context['contact_email'] = settings.CONTACT_EMAIL\n return context\n\n\nclass AccountAdapter(DefaultAccountAdapter):\n username_regex = re.compile(USERNAME_REGEX)\n error_messages = dict(\n DefaultAccountAdapter.error_messages,\n invalid_username=USERNAME_INVALID_MESSAGE\n )\n\n def get_email_confirmation_url(self, request, emailconfirmation):\n url = super().get_email_confirmation_url(request, emailconfirmation)\n if 'next' in request.POST and is_safe_url(request.POST['next']):\n return '{}?next={}'.format(url, quote(request.POST['next']))\n else:\n return url\n\n def send_mail(self, template_prefix, email, 
context):\n return UserAccountEmail.send(\n email,\n template_name=template_prefix,\n **context\n )\n\n def get_email_confirmation_redirect_url(self, request):\n if 'next' in request.GET and is_safe_url(request.GET['next']):\n return request.GET['next']\n else:\n return super().get_email_confirmation_redirect_url(request)\n", "path": "meinberlin/apps/users/adapters.py"}]}
| num_tokens: 778 | num_tokens_diff: 113 |
problem_id: gh_patches_debug_10828 | source: rasdani/github-patches | task_type: git_diff | in_source_id: open-mmlab__mmdeploy-700
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pytorch2onnx fails with mmedit models
error with master branch
```
TypeError: forward_dummy() got an unexpected keyword argument 'img_metas'
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mmdeploy/apis/pytorch2onnx.py`
Content:
```
1 # Copyright (c) OpenMMLab. All rights reserved.
2 import os.path as osp
3 from typing import Any, Optional, Union
4
5 import mmcv
6 import torch
7
8 from mmdeploy.apis.core.pipeline_manager import no_mp
9 from mmdeploy.utils import (get_backend, get_dynamic_axes, get_input_shape,
10 get_onnx_config, load_config)
11 from .core import PIPELINE_MANAGER
12 from .onnx import export
13
14
15 @PIPELINE_MANAGER.register_pipeline()
16 def torch2onnx(img: Any,
17 work_dir: str,
18 save_file: str,
19 deploy_cfg: Union[str, mmcv.Config],
20 model_cfg: Union[str, mmcv.Config],
21 model_checkpoint: Optional[str] = None,
22 device: str = 'cuda:0'):
23 """Convert PyTorch model to ONNX model.
24
25 Examples:
26 >>> from mmdeploy.apis import torch2onnx
27 >>> img = 'demo.jpg'
28 >>> work_dir = 'work_dir'
29 >>> save_file = 'fcos.onnx'
30 >>> deploy_cfg = ('configs/mmdet/detection/'
31 'detection_onnxruntime_dynamic.py')
32 >>> model_cfg = ('mmdetection/configs/fcos/'
33 'fcos_r50_caffe_fpn_gn-head_1x_coco.py')
34 >>> model_checkpoint = ('checkpoints/'
35 'fcos_r50_caffe_fpn_gn-head_1x_coco-821213aa.pth')
36 >>> device = 'cpu'
37 >>> torch2onnx(img, work_dir, save_file, deploy_cfg, \
38 model_cfg, model_checkpoint, device)
39
40 Args:
41 img (str | np.ndarray | torch.Tensor): Input image used to assist
42 converting model.
43 work_dir (str): A working directory to save files.
44 save_file (str): Filename to save onnx model.
45 deploy_cfg (str | mmcv.Config): Deployment config file or
46 Config object.
47 model_cfg (str | mmcv.Config): Model config file or Config object.
48 model_checkpoint (str): A checkpoint path of PyTorch model,
49 defaults to `None`.
50 device (str): A string specifying device type, defaults to 'cuda:0'.
51 """
52 # load deploy_cfg if necessary
53 deploy_cfg, model_cfg = load_config(deploy_cfg, model_cfg)
54 mmcv.mkdir_or_exist(osp.abspath(work_dir))
55
56 input_shape = get_input_shape(deploy_cfg)
57
58 # create model an inputs
59 from mmdeploy.apis import build_task_processor
60 task_processor = build_task_processor(model_cfg, deploy_cfg, device)
61
62 torch_model = task_processor.init_pytorch_model(model_checkpoint)
63 data, model_inputs = task_processor.create_input(img, input_shape)
64 input_metas = dict(img_metas=data.get('img_metas', None))
65 if not isinstance(model_inputs, torch.Tensor) and len(model_inputs) == 1:
66 model_inputs = model_inputs[0]
67
68 # export to onnx
69 context_info = dict()
70 context_info['deploy_cfg'] = deploy_cfg
71 output_prefix = osp.join(work_dir,
72 osp.splitext(osp.basename(save_file))[0])
73 backend = get_backend(deploy_cfg).value
74
75 onnx_cfg = get_onnx_config(deploy_cfg)
76 opset_version = onnx_cfg.get('opset_version', 11)
77
78 input_names = onnx_cfg['input_names']
79 output_names = onnx_cfg['output_names']
80 axis_names = input_names + output_names
81 dynamic_axes = get_dynamic_axes(deploy_cfg, axis_names)
82 verbose = not onnx_cfg.get('strip_doc_string', True) or onnx_cfg.get(
83 'verbose', False)
84 keep_initializers_as_inputs = onnx_cfg.get('keep_initializers_as_inputs',
85 True)
86 optimize = onnx_cfg.get('optimize', False)
87 with no_mp():
88 export(
89 torch_model,
90 model_inputs,
91 input_metas=input_metas,
92 output_path_prefix=output_prefix,
93 backend=backend,
94 input_names=input_names,
95 output_names=output_names,
96 context_info=context_info,
97 opset_version=opset_version,
98 dynamic_axes=dynamic_axes,
99 verbose=verbose,
100 keep_initializers_as_inputs=keep_initializers_as_inputs,
101 optimize=optimize)
102
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mmdeploy/apis/pytorch2onnx.py b/mmdeploy/apis/pytorch2onnx.py
--- a/mmdeploy/apis/pytorch2onnx.py
+++ b/mmdeploy/apis/pytorch2onnx.py
@@ -61,7 +61,11 @@
torch_model = task_processor.init_pytorch_model(model_checkpoint)
data, model_inputs = task_processor.create_input(img, input_shape)
- input_metas = dict(img_metas=data.get('img_metas', None))
+ if 'img_metas' in data:
+ input_metas = dict(img_metas=data['img_metas'])
+ else:
+ # codebases like mmedit do not have img_metas argument
+ input_metas = None
if not isinstance(model_inputs, torch.Tensor) and len(model_inputs) == 1:
model_inputs = model_inputs[0]
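The guard above only matters at the call site, so it can be exercised on its own: the dict returned by `create_input()` either carries `img_metas` (mmdet-style codebases) or it does not (mmedit), and in the latter case no `img_metas` keyword should reach the model's dummy forward. A standalone sketch of that branch, with `data` standing in for the pipeline's output:

```python
def build_input_metas(data: dict):
    # mmdet-style pipelines ship img_metas alongside the tensors
    if "img_metas" in data:
        return {"img_metas": data["img_metas"]}
    # mmedit-style pipelines do not, so return None and let the exporter
    # call the model without that keyword at all
    return None

print(build_input_metas({"img": [0], "img_metas": [{}]}))  # {'img_metas': [{}]}
print(build_input_metas({"lq": [0]}))                      # None
```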
|
{"golden_diff": "diff --git a/mmdeploy/apis/pytorch2onnx.py b/mmdeploy/apis/pytorch2onnx.py\n--- a/mmdeploy/apis/pytorch2onnx.py\n+++ b/mmdeploy/apis/pytorch2onnx.py\n@@ -61,7 +61,11 @@\n \n torch_model = task_processor.init_pytorch_model(model_checkpoint)\n data, model_inputs = task_processor.create_input(img, input_shape)\n- input_metas = dict(img_metas=data.get('img_metas', None))\n+ if 'img_metas' in data:\n+ input_metas = dict(img_metas=data['img_metas'])\n+ else:\n+ # codebases like mmedit do not have img_metas argument\n+ input_metas = None\n if not isinstance(model_inputs, torch.Tensor) and len(model_inputs) == 1:\n model_inputs = model_inputs[0]\n", "issue": "pytorch2onnx fails with mmedit models\nerror with master branch\r\n```\r\nTypeError: forward_dummy() got an unexpected keyword argument 'img_metas'\r\n```\n", "before_files": [{"content": "# Copyright (c) OpenMMLab. All rights reserved.\nimport os.path as osp\nfrom typing import Any, Optional, Union\n\nimport mmcv\nimport torch\n\nfrom mmdeploy.apis.core.pipeline_manager import no_mp\nfrom mmdeploy.utils import (get_backend, get_dynamic_axes, get_input_shape,\n get_onnx_config, load_config)\nfrom .core import PIPELINE_MANAGER\nfrom .onnx import export\n\n\n@PIPELINE_MANAGER.register_pipeline()\ndef torch2onnx(img: Any,\n work_dir: str,\n save_file: str,\n deploy_cfg: Union[str, mmcv.Config],\n model_cfg: Union[str, mmcv.Config],\n model_checkpoint: Optional[str] = None,\n device: str = 'cuda:0'):\n \"\"\"Convert PyTorch model to ONNX model.\n\n Examples:\n >>> from mmdeploy.apis import torch2onnx\n >>> img = 'demo.jpg'\n >>> work_dir = 'work_dir'\n >>> save_file = 'fcos.onnx'\n >>> deploy_cfg = ('configs/mmdet/detection/'\n 'detection_onnxruntime_dynamic.py')\n >>> model_cfg = ('mmdetection/configs/fcos/'\n 'fcos_r50_caffe_fpn_gn-head_1x_coco.py')\n >>> model_checkpoint = ('checkpoints/'\n 'fcos_r50_caffe_fpn_gn-head_1x_coco-821213aa.pth')\n >>> device = 'cpu'\n >>> torch2onnx(img, work_dir, save_file, deploy_cfg, \\\n model_cfg, model_checkpoint, device)\n\n Args:\n img (str | np.ndarray | torch.Tensor): Input image used to assist\n converting model.\n work_dir (str): A working directory to save files.\n save_file (str): Filename to save onnx model.\n deploy_cfg (str | mmcv.Config): Deployment config file or\n Config object.\n model_cfg (str | mmcv.Config): Model config file or Config object.\n model_checkpoint (str): A checkpoint path of PyTorch model,\n defaults to `None`.\n device (str): A string specifying device type, defaults to 'cuda:0'.\n \"\"\"\n # load deploy_cfg if necessary\n deploy_cfg, model_cfg = load_config(deploy_cfg, model_cfg)\n mmcv.mkdir_or_exist(osp.abspath(work_dir))\n\n input_shape = get_input_shape(deploy_cfg)\n\n # create model an inputs\n from mmdeploy.apis import build_task_processor\n task_processor = build_task_processor(model_cfg, deploy_cfg, device)\n\n torch_model = task_processor.init_pytorch_model(model_checkpoint)\n data, model_inputs = task_processor.create_input(img, input_shape)\n input_metas = dict(img_metas=data.get('img_metas', None))\n if not isinstance(model_inputs, torch.Tensor) and len(model_inputs) == 1:\n model_inputs = model_inputs[0]\n\n # export to onnx\n context_info = dict()\n context_info['deploy_cfg'] = deploy_cfg\n output_prefix = osp.join(work_dir,\n osp.splitext(osp.basename(save_file))[0])\n backend = get_backend(deploy_cfg).value\n\n onnx_cfg = get_onnx_config(deploy_cfg)\n opset_version = onnx_cfg.get('opset_version', 11)\n\n input_names = onnx_cfg['input_names']\n 
output_names = onnx_cfg['output_names']\n axis_names = input_names + output_names\n dynamic_axes = get_dynamic_axes(deploy_cfg, axis_names)\n verbose = not onnx_cfg.get('strip_doc_string', True) or onnx_cfg.get(\n 'verbose', False)\n keep_initializers_as_inputs = onnx_cfg.get('keep_initializers_as_inputs',\n True)\n optimize = onnx_cfg.get('optimize', False)\n with no_mp():\n export(\n torch_model,\n model_inputs,\n input_metas=input_metas,\n output_path_prefix=output_prefix,\n backend=backend,\n input_names=input_names,\n output_names=output_names,\n context_info=context_info,\n opset_version=opset_version,\n dynamic_axes=dynamic_axes,\n verbose=verbose,\n keep_initializers_as_inputs=keep_initializers_as_inputs,\n optimize=optimize)\n", "path": "mmdeploy/apis/pytorch2onnx.py"}], "after_files": [{"content": "# Copyright (c) OpenMMLab. All rights reserved.\nimport os.path as osp\nfrom typing import Any, Optional, Union\n\nimport mmcv\nimport torch\n\nfrom mmdeploy.apis.core.pipeline_manager import no_mp\nfrom mmdeploy.utils import (get_backend, get_dynamic_axes, get_input_shape,\n get_onnx_config, load_config)\nfrom .core import PIPELINE_MANAGER\nfrom .onnx import export\n\n\n@PIPELINE_MANAGER.register_pipeline()\ndef torch2onnx(img: Any,\n work_dir: str,\n save_file: str,\n deploy_cfg: Union[str, mmcv.Config],\n model_cfg: Union[str, mmcv.Config],\n model_checkpoint: Optional[str] = None,\n device: str = 'cuda:0'):\n \"\"\"Convert PyTorch model to ONNX model.\n\n Examples:\n >>> from mmdeploy.apis import torch2onnx\n >>> img = 'demo.jpg'\n >>> work_dir = 'work_dir'\n >>> save_file = 'fcos.onnx'\n >>> deploy_cfg = ('configs/mmdet/detection/'\n 'detection_onnxruntime_dynamic.py')\n >>> model_cfg = ('mmdetection/configs/fcos/'\n 'fcos_r50_caffe_fpn_gn-head_1x_coco.py')\n >>> model_checkpoint = ('checkpoints/'\n 'fcos_r50_caffe_fpn_gn-head_1x_coco-821213aa.pth')\n >>> device = 'cpu'\n >>> torch2onnx(img, work_dir, save_file, deploy_cfg, \\\n model_cfg, model_checkpoint, device)\n\n Args:\n img (str | np.ndarray | torch.Tensor): Input image used to assist\n converting model.\n work_dir (str): A working directory to save files.\n save_file (str): Filename to save onnx model.\n deploy_cfg (str | mmcv.Config): Deployment config file or\n Config object.\n model_cfg (str | mmcv.Config): Model config file or Config object.\n model_checkpoint (str): A checkpoint path of PyTorch model,\n defaults to `None`.\n device (str): A string specifying device type, defaults to 'cuda:0'.\n \"\"\"\n # load deploy_cfg if necessary\n deploy_cfg, model_cfg = load_config(deploy_cfg, model_cfg)\n mmcv.mkdir_or_exist(osp.abspath(work_dir))\n\n input_shape = get_input_shape(deploy_cfg)\n\n # create model an inputs\n from mmdeploy.apis import build_task_processor\n task_processor = build_task_processor(model_cfg, deploy_cfg, device)\n\n torch_model = task_processor.init_pytorch_model(model_checkpoint)\n data, model_inputs = task_processor.create_input(img, input_shape)\n if 'img_metas' in data:\n input_metas = dict(img_metas=data['img_metas'])\n else:\n # codebases like mmedit do not have img_metas argument\n input_metas = None\n if not isinstance(model_inputs, torch.Tensor) and len(model_inputs) == 1:\n model_inputs = model_inputs[0]\n\n # export to onnx\n context_info = dict()\n context_info['deploy_cfg'] = deploy_cfg\n output_prefix = osp.join(work_dir,\n osp.splitext(osp.basename(save_file))[0])\n backend = get_backend(deploy_cfg).value\n\n onnx_cfg = get_onnx_config(deploy_cfg)\n opset_version = 
onnx_cfg.get('opset_version', 11)\n\n input_names = onnx_cfg['input_names']\n output_names = onnx_cfg['output_names']\n axis_names = input_names + output_names\n dynamic_axes = get_dynamic_axes(deploy_cfg, axis_names)\n verbose = not onnx_cfg.get('strip_doc_string', True) or onnx_cfg.get(\n 'verbose', False)\n keep_initializers_as_inputs = onnx_cfg.get('keep_initializers_as_inputs',\n True)\n optimize = onnx_cfg.get('optimize', False)\n with no_mp():\n export(\n torch_model,\n model_inputs,\n input_metas=input_metas,\n output_path_prefix=output_prefix,\n backend=backend,\n input_names=input_names,\n output_names=output_names,\n context_info=context_info,\n opset_version=opset_version,\n dynamic_axes=dynamic_axes,\n verbose=verbose,\n keep_initializers_as_inputs=keep_initializers_as_inputs,\n optimize=optimize)\n", "path": "mmdeploy/apis/pytorch2onnx.py"}]}
| num_tokens: 1,420 | num_tokens_diff: 193 |
problem_id: gh_patches_debug_12971 | source: rasdani/github-patches | task_type: git_diff | in_source_id: Zeroto521__my-data-toolkit-514
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
MAINT: Remove warning message
<!--
Thanks for contributing a pull request!
Please follow these standard acronyms to start the commit message:
- ENH: enhancement
- BUG: bug fix
- DOC: documentation
- TYP: type annotations
- TST: addition or modification of tests
- MAINT: maintenance commit (refactoring, typos, etc.)
- BLD: change related to building
- REL: related to releasing
- API: an (incompatible) API change
- DEP: deprecate something, or remove a deprecated object
- DEV: development tool or utility
- REV: revert an earlier commit
- PERF: performance improvement
- BOT: always commit via a bot
- CI: related to CI or CD
- CLN: Code cleanup
-->
- [ ] closes #xxxx
- [ ] whatsnew entry
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `dtoolkit/accessor/dataframe/values_to_dict.py`
Content:
```
1 from __future__ import annotations
2
3 import pandas as pd
4
5 from dtoolkit.accessor.register import register_dataframe_method
6 from dtoolkit.accessor.series import values_to_dict as s_values_to_dict # noqa
7 from dtoolkit.util._decorator import deprecated_alias
8
9
10 @register_dataframe_method
11 @deprecated_alias(
12 warning_msg=(
13 "{func_name}'s parameter '{old_alias}' is deprecated and will be removed in "
14 "0.0.15. Please use the parameter '{new_alias}'. "
15 "(Warning added DToolKit 0.0.14)"
16 ),
17 few_as_key="ascending",
18 )
19 def values_to_dict(
20 df: pd.DataFrame,
21 order: list | tuple = None,
22 ascending: bool = True,
23 to_list: bool = True,
24 ) -> dict:
25 """
26 Convert :attr:`~pandas.DataFrame.values` to :class:`dict`.
27
28 Parameters
29 ----------
30 order : list or tuple, optional
31 The order of keys via given columns. If ``order`` is set, ``ascending``
32 will not work.
33
34 ascending : bool, default True
35 If True the key would use the few unique of column values first.
36
37 to_list : bool, default True
38 If True one element value will return :keyword:`list`.
39
40 Returns
41 -------
42 dict
43
44 See Also
45 --------
46 dtoolkit.accessor.series.values_to_dict
47
48 Notes
49 -----
50 The same key of values would be merged into :class:`list`.
51
52 Examples
53 --------
54 >>> import json
55 >>> import dtoolkit.accessor
56 >>> import pandas as pd
57 >>> df = pd.DataFrame(
58 ... {
59 ... "x" : ["A", "A", "B", "B", "B"],
60 ... "y" : ["a", "b", "c", "d", "d"],
61 ... "z" : ["1", "2", "3", "3", "4"],
62 ... }
63 ... )
64 >>> df
65 x y z
66 0 A a 1
67 1 A b 2
68 2 B c 3
69 3 B d 3
70 4 B d 4
71
72 Use few unique of column values as key first. The order of column unique values
73 number is `x` < `y` < `z`. So the result will be ``{x: {y: [z]} }``.
74
75 >>> print(json.dumps(df.values_to_dict(), indent=4))
76 {
77 "A": {
78 "a": [
79 "1"
80 ],
81 "b": [
82 "2"
83 ]
84 },
85 "B": {
86 "c": [
87 "3"
88 ],
89 "d": [
90 "3",
91 "4"
92 ]
93 }
94 }
95
96 Use many unique of column values as key first, the result will be
97 ``{y: {z: [x]} }``.
98
99 >>> print(json.dumps(df.values_to_dict(ascending=False), indent=4))
100 {
101 "a": {
102 "1": [
103 "A"
104 ]
105 },
106 "b": {
107 "2": [
108 "A"
109 ]
110 },
111 "c": {
112 "3": [
113 "B"
114 ]
115 },
116 "d": {
117 "3": [
118 "B"
119 ],
120 "4": [
121 "B"
122 ]
123 }
124 }
125
126 Output the arbitrary order like ``{z: x} or ``{x: {z: [y]} }``,
127 via ``order`` argument.
128
129 >>> print(json.dumps(df.values_to_dict(order=["x", "z"]), indent=4))
130 {
131 "A": [
132 "1",
133 "2"
134 ],
135 "B": [
136 "3",
137 "3",
138 "4"
139 ]
140 }
141 >>> print(json.dumps(df.values_to_dict(order=["x", "z", "y"]), indent=4))
142 {
143 "A": {
144 "1": [
145 "a"
146 ],
147 "2": [
148 "b"
149 ]
150 },
151 "B": {
152 "3": [
153 "c",
154 "d"
155 ],
156 "4": [
157 "d"
158 ]
159 }
160 }
161
162 It also could convert one column DataFrame. But ``ascending`` wouldn' work.
163 The result would be ``{index: [values]}``.
164
165 >>> print(json.dumps(df[["x"]].values_to_dict(), indent=4))
166 {
167 "0": [
168 "A"
169 ],
170 "1": [
171 "A"
172 ],
173 "2": [
174 "B"
175 ],
176 "3": [
177 "B"
178 ],
179 "4": [
180 "B"
181 ]
182 }
183
184 Unpack one element value list.
185
186 >>> print(json.dumps(df.values_to_dict(to_list=False), indent=4))
187 {
188 "A": {
189 "a": "1",
190 "b": "2"
191 },
192 "B": {
193 "c": "3",
194 "d": [
195 "3",
196 "4"
197 ]
198 }
199 }
200 """
201
202 if df.columns.__len__() == 1: # one columns DataFrame
203 return df.to_series().values_to_dict(to_list=to_list)
204
205 columns = order or (
206 df.nunique()
207 .sort_values(
208 ascending=ascending,
209 )
210 .index
211 )
212 return _dict(df[columns], to_list=to_list)
213
214
215 def _dict(df: pd.DataFrame, to_list: bool) -> dict:
216 key_column, *value_column = df.columns
217
218 if df.columns.__len__() == 2: # two column DataFrame
219 return df.to_series(
220 index_column=key_column,
221 value_column=value_column[0],
222 ).values_to_dict(to_list=to_list)
223
224 return {
225 key: _dict(
226 df.loc[df[key_column] == key, value_column],
227 to_list=to_list,
228 )
229 for key in df[key_column].unique()
230 }
231
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/dtoolkit/accessor/dataframe/values_to_dict.py b/dtoolkit/accessor/dataframe/values_to_dict.py
--- a/dtoolkit/accessor/dataframe/values_to_dict.py
+++ b/dtoolkit/accessor/dataframe/values_to_dict.py
@@ -4,18 +4,9 @@
from dtoolkit.accessor.register import register_dataframe_method
from dtoolkit.accessor.series import values_to_dict as s_values_to_dict # noqa
-from dtoolkit.util._decorator import deprecated_alias
@register_dataframe_method
-@deprecated_alias(
- warning_msg=(
- "{func_name}'s parameter '{old_alias}' is deprecated and will be removed in "
- "0.0.15. Please use the parameter '{new_alias}'. "
- "(Warning added DToolKit 0.0.14)"
- ),
- few_as_key="ascending",
-)
def values_to_dict(
df: pd.DataFrame,
order: list | tuple = None,
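Since the whole `deprecated_alias` shim is dropped rather than just its message, the old `few_as_key` spelling stops working entirely instead of warning. A rough call-site sketch, assuming the patched dtoolkit and pandas are installed:

```python
import pandas as pd
import dtoolkit.accessor  # registers the DataFrame accessor methods

df = pd.DataFrame({"x": ["A", "A", "B"], "y": ["a", "b", "c"]})

df.values_to_dict(ascending=False)     # the supported parameter
# df.values_to_dict(few_as_key=False)  # old alias: now a plain TypeError,
#                                      # with no deprecation warning emitted
```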
|
{"golden_diff": "diff --git a/dtoolkit/accessor/dataframe/values_to_dict.py b/dtoolkit/accessor/dataframe/values_to_dict.py\n--- a/dtoolkit/accessor/dataframe/values_to_dict.py\n+++ b/dtoolkit/accessor/dataframe/values_to_dict.py\n@@ -4,18 +4,9 @@\n \n from dtoolkit.accessor.register import register_dataframe_method\n from dtoolkit.accessor.series import values_to_dict as s_values_to_dict # noqa\n-from dtoolkit.util._decorator import deprecated_alias\n \n \n @register_dataframe_method\n-@deprecated_alias(\n- warning_msg=(\n- \"{func_name}'s parameter '{old_alias}' is deprecated and will be removed in \"\n- \"0.0.15. Please use the parameter '{new_alias}'. \"\n- \"(Warning added DToolKit 0.0.14)\"\n- ),\n- few_as_key=\"ascending\",\n-)\n def values_to_dict(\n df: pd.DataFrame,\n order: list | tuple = None,\n", "issue": "MAINT: Remove warning message\n<!--\r\nThanks for contributing a pull request!\r\n\r\nPlease follow these standard acronyms to start the commit message:\r\n\r\n- ENH: enhancement\r\n- BUG: bug fix\r\n- DOC: documentation\r\n- TYP: type annotations\r\n- TST: addition or modification of tests\r\n- MAINT: maintenance commit (refactoring, typos, etc.)\r\n- BLD: change related to building\r\n- REL: related to releasing\r\n- API: an (incompatible) API change\r\n- DEP: deprecate something, or remove a deprecated object\r\n- DEV: development tool or utility\r\n- REV: revert an earlier commit\r\n- PERF: performance improvement\r\n- BOT: always commit via a bot\r\n- CI: related to CI or CD\r\n- CLN: Code cleanup\r\n-->\r\n\r\n- [ ] closes #xxxx\r\n- [ ] whatsnew entry\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nimport pandas as pd\n\nfrom dtoolkit.accessor.register import register_dataframe_method\nfrom dtoolkit.accessor.series import values_to_dict as s_values_to_dict # noqa\nfrom dtoolkit.util._decorator import deprecated_alias\n\n\n@register_dataframe_method\n@deprecated_alias(\n warning_msg=(\n \"{func_name}'s parameter '{old_alias}' is deprecated and will be removed in \"\n \"0.0.15. Please use the parameter '{new_alias}'. \"\n \"(Warning added DToolKit 0.0.14)\"\n ),\n few_as_key=\"ascending\",\n)\ndef values_to_dict(\n df: pd.DataFrame,\n order: list | tuple = None,\n ascending: bool = True,\n to_list: bool = True,\n) -> dict:\n \"\"\"\n Convert :attr:`~pandas.DataFrame.values` to :class:`dict`.\n\n Parameters\n ----------\n order : list or tuple, optional\n The order of keys via given columns. If ``order`` is set, ``ascending``\n will not work.\n\n ascending : bool, default True\n If True the key would use the few unique of column values first.\n\n to_list : bool, default True\n If True one element value will return :keyword:`list`.\n\n Returns\n -------\n dict\n\n See Also\n --------\n dtoolkit.accessor.series.values_to_dict\n\n Notes\n -----\n The same key of values would be merged into :class:`list`.\n\n Examples\n --------\n >>> import json\n >>> import dtoolkit.accessor\n >>> import pandas as pd\n >>> df = pd.DataFrame(\n ... {\n ... \"x\" : [\"A\", \"A\", \"B\", \"B\", \"B\"],\n ... \"y\" : [\"a\", \"b\", \"c\", \"d\", \"d\"],\n ... \"z\" : [\"1\", \"2\", \"3\", \"3\", \"4\"],\n ... }\n ... )\n >>> df\n x y z\n 0 A a 1\n 1 A b 2\n 2 B c 3\n 3 B d 3\n 4 B d 4\n\n Use few unique of column values as key first. The order of column unique values\n number is `x` < `y` < `z`. 
So the result will be ``{x: {y: [z]} }``.\n\n >>> print(json.dumps(df.values_to_dict(), indent=4))\n {\n \"A\": {\n \"a\": [\n \"1\"\n ],\n \"b\": [\n \"2\"\n ]\n },\n \"B\": {\n \"c\": [\n \"3\"\n ],\n \"d\": [\n \"3\",\n \"4\"\n ]\n }\n }\n\n Use many unique of column values as key first, the result will be\n ``{y: {z: [x]} }``.\n\n >>> print(json.dumps(df.values_to_dict(ascending=False), indent=4))\n {\n \"a\": {\n \"1\": [\n \"A\"\n ]\n },\n \"b\": {\n \"2\": [\n \"A\"\n ]\n },\n \"c\": {\n \"3\": [\n \"B\"\n ]\n },\n \"d\": {\n \"3\": [\n \"B\"\n ],\n \"4\": [\n \"B\"\n ]\n }\n }\n\n Output the arbitrary order like ``{z: x} or ``{x: {z: [y]} }``,\n via ``order`` argument.\n\n >>> print(json.dumps(df.values_to_dict(order=[\"x\", \"z\"]), indent=4))\n {\n \"A\": [\n \"1\",\n \"2\"\n ],\n \"B\": [\n \"3\",\n \"3\",\n \"4\"\n ]\n }\n >>> print(json.dumps(df.values_to_dict(order=[\"x\", \"z\", \"y\"]), indent=4))\n {\n \"A\": {\n \"1\": [\n \"a\"\n ],\n \"2\": [\n \"b\"\n ]\n },\n \"B\": {\n \"3\": [\n \"c\",\n \"d\"\n ],\n \"4\": [\n \"d\"\n ]\n }\n }\n\n It also could convert one column DataFrame. But ``ascending`` wouldn' work.\n The result would be ``{index: [values]}``.\n\n >>> print(json.dumps(df[[\"x\"]].values_to_dict(), indent=4))\n {\n \"0\": [\n \"A\"\n ],\n \"1\": [\n \"A\"\n ],\n \"2\": [\n \"B\"\n ],\n \"3\": [\n \"B\"\n ],\n \"4\": [\n \"B\"\n ]\n }\n\n Unpack one element value list.\n\n >>> print(json.dumps(df.values_to_dict(to_list=False), indent=4))\n {\n \"A\": {\n \"a\": \"1\",\n \"b\": \"2\"\n },\n \"B\": {\n \"c\": \"3\",\n \"d\": [\n \"3\",\n \"4\"\n ]\n }\n }\n \"\"\"\n\n if df.columns.__len__() == 1: # one columns DataFrame\n return df.to_series().values_to_dict(to_list=to_list)\n\n columns = order or (\n df.nunique()\n .sort_values(\n ascending=ascending,\n )\n .index\n )\n return _dict(df[columns], to_list=to_list)\n\n\ndef _dict(df: pd.DataFrame, to_list: bool) -> dict:\n key_column, *value_column = df.columns\n\n if df.columns.__len__() == 2: # two column DataFrame\n return df.to_series(\n index_column=key_column,\n value_column=value_column[0],\n ).values_to_dict(to_list=to_list)\n\n return {\n key: _dict(\n df.loc[df[key_column] == key, value_column],\n to_list=to_list,\n )\n for key in df[key_column].unique()\n }\n", "path": "dtoolkit/accessor/dataframe/values_to_dict.py"}], "after_files": [{"content": "from __future__ import annotations\n\nimport pandas as pd\n\nfrom dtoolkit.accessor.register import register_dataframe_method\nfrom dtoolkit.accessor.series import values_to_dict as s_values_to_dict # noqa\n\n\n@register_dataframe_method\ndef values_to_dict(\n df: pd.DataFrame,\n order: list | tuple = None,\n ascending: bool = True,\n to_list: bool = True,\n) -> dict:\n \"\"\"\n Convert :attr:`~pandas.DataFrame.values` to :class:`dict`.\n\n Parameters\n ----------\n order : list or tuple, optional\n The order of keys via given columns. If ``order`` is set, ``ascending``\n will not work.\n\n ascending : bool, default True\n If True the key would use the few unique of column values first.\n\n to_list : bool, default True\n If True one element value will return :keyword:`list`.\n\n Returns\n -------\n dict\n\n See Also\n --------\n dtoolkit.accessor.series.values_to_dict\n\n Notes\n -----\n The same key of values would be merged into :class:`list`.\n\n Examples\n --------\n >>> import json\n >>> import dtoolkit.accessor\n >>> import pandas as pd\n >>> df = pd.DataFrame(\n ... {\n ... \"x\" : [\"A\", \"A\", \"B\", \"B\", \"B\"],\n ... 
\"y\" : [\"a\", \"b\", \"c\", \"d\", \"d\"],\n ... \"z\" : [\"1\", \"2\", \"3\", \"3\", \"4\"],\n ... }\n ... )\n >>> df\n x y z\n 0 A a 1\n 1 A b 2\n 2 B c 3\n 3 B d 3\n 4 B d 4\n\n Use few unique of column values as key first. The order of column unique values\n number is `x` < `y` < `z`. So the result will be ``{x: {y: [z]} }``.\n\n >>> print(json.dumps(df.values_to_dict(), indent=4))\n {\n \"A\": {\n \"a\": [\n \"1\"\n ],\n \"b\": [\n \"2\"\n ]\n },\n \"B\": {\n \"c\": [\n \"3\"\n ],\n \"d\": [\n \"3\",\n \"4\"\n ]\n }\n }\n\n Use many unique of column values as key first, the result will be\n ``{y: {z: [x]} }``.\n\n >>> print(json.dumps(df.values_to_dict(ascending=False), indent=4))\n {\n \"a\": {\n \"1\": [\n \"A\"\n ]\n },\n \"b\": {\n \"2\": [\n \"A\"\n ]\n },\n \"c\": {\n \"3\": [\n \"B\"\n ]\n },\n \"d\": {\n \"3\": [\n \"B\"\n ],\n \"4\": [\n \"B\"\n ]\n }\n }\n\n Output the arbitrary order like ``{z: x} or ``{x: {z: [y]} }``,\n via ``order`` argument.\n\n >>> print(json.dumps(df.values_to_dict(order=[\"x\", \"z\"]), indent=4))\n {\n \"A\": [\n \"1\",\n \"2\"\n ],\n \"B\": [\n \"3\",\n \"3\",\n \"4\"\n ]\n }\n >>> print(json.dumps(df.values_to_dict(order=[\"x\", \"z\", \"y\"]), indent=4))\n {\n \"A\": {\n \"1\": [\n \"a\"\n ],\n \"2\": [\n \"b\"\n ]\n },\n \"B\": {\n \"3\": [\n \"c\",\n \"d\"\n ],\n \"4\": [\n \"d\"\n ]\n }\n }\n\n It also could convert one column DataFrame. But ``ascending`` wouldn' work.\n The result would be ``{index: [values]}``.\n\n >>> print(json.dumps(df[[\"x\"]].values_to_dict(), indent=4))\n {\n \"0\": [\n \"A\"\n ],\n \"1\": [\n \"A\"\n ],\n \"2\": [\n \"B\"\n ],\n \"3\": [\n \"B\"\n ],\n \"4\": [\n \"B\"\n ]\n }\n\n Unpack one element value list.\n\n >>> print(json.dumps(df.values_to_dict(to_list=False), indent=4))\n {\n \"A\": {\n \"a\": \"1\",\n \"b\": \"2\"\n },\n \"B\": {\n \"c\": \"3\",\n \"d\": [\n \"3\",\n \"4\"\n ]\n }\n }\n \"\"\"\n\n if df.columns.__len__() == 1: # one columns DataFrame\n return df.to_series().values_to_dict(to_list=to_list)\n\n columns = order or (\n df.nunique()\n .sort_values(\n ascending=ascending,\n )\n .index\n )\n return _dict(df[columns], to_list=to_list)\n\n\ndef _dict(df: pd.DataFrame, to_list: bool) -> dict:\n key_column, *value_column = df.columns\n\n if df.columns.__len__() == 2: # two column DataFrame\n return df.to_series(\n index_column=key_column,\n value_column=value_column[0],\n ).values_to_dict(to_list=to_list)\n\n return {\n key: _dict(\n df.loc[df[key_column] == key, value_column],\n to_list=to_list,\n )\n for key in df[key_column].unique()\n }\n", "path": "dtoolkit/accessor/dataframe/values_to_dict.py"}]}
| num_tokens: 2,383 | num_tokens_diff: 216 |
problem_id: gh_patches_debug_38360 | source: rasdani/github-patches | task_type: git_diff | in_source_id: cowrie__cowrie-1234
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
'ls -al' incorrectly shows '..' files as duplicates of '.'
When using ls inside a cowrie instance, the '..' entry is just a duplicate of the '.' entry, so its group and user information is often wrong. This makes for a very easy-to-check fingerprint of cowrie.
**To Reproduce**
Steps to reproduce the behavior:
1. SSH into a cowrie instance.
2. `cd /home/richard`
3. `ls -al`
4. The '..' entry has ownership 'richard richard'
**Expected behavior**
The information for the parent folder should be retrieved. In the case of '/home' as the parent of '/home/richard', the owner of '/home' should read as 'root root'.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/cowrie/commands/ls.py`
Content:
```
1 # Copyright (c) 2009 Upi Tamminen <[email protected]>
2 # See the COPYRIGHT file for more information
3
4 from __future__ import absolute_import, division
5
6 import getopt
7 import stat
8 import time
9
10 import cowrie.shell.fs as fs
11 from cowrie.shell.command import HoneyPotCommand
12 from cowrie.shell.pwd import Group, Passwd
13
14 commands = {}
15
16
17 class command_ls(HoneyPotCommand):
18
19 def uid2name(self, uid):
20 try:
21 return Passwd().getpwuid(uid)["pw_name"]
22 except Exception:
23 return str(uid)
24
25 def gid2name(self, gid):
26 try:
27 return Group().getgrgid(gid)["gr_name"]
28 except Exception:
29 return str(gid)
30
31 def call(self):
32 path = self.protocol.cwd
33 paths = []
34 self.showHidden = False
35 self.showDirectories = False
36 func = self.do_ls_normal
37
38 # Parse options or display no files
39 try:
40 opts, args = getopt.gnu_getopt(self.args, '1@ABCFGHLOPRSTUWabcdefghiklmnopqrstuvwx',
41 ['help', 'version', 'param'])
42 except getopt.GetoptError as err:
43 self.write("ls: {}\n".format(err))
44 self.write("Try 'ls --help' for more information.\n")
45 return
46
47 for x, a in opts:
48 if x in ('-l'):
49 func = self.do_ls_l
50 if x in ('-a'):
51 self.showHidden = True
52 if x in ('-d'):
53 self.showDirectories = True
54
55 for arg in args:
56 paths.append(self.protocol.fs.resolve_path(arg, self.protocol.cwd))
57
58 if not paths:
59 func(path)
60 else:
61 for path in paths:
62 func(path)
63
64 def do_ls_normal(self, path):
65 try:
66 if self.protocol.fs.isdir(path) and not self.showDirectories:
67 files = self.protocol.fs.get_path(path)[:]
68 if self.showHidden:
69 dot = self.protocol.fs.getfile(path)[:]
70 dot[fs.A_NAME] = '.'
71 files.append(dot)
72 # FIXME: should grab dotdot off the parent instead
73 dotdot = self.protocol.fs.getfile(path)[:]
74 dotdot[fs.A_NAME] = '..'
75 files.append(dotdot)
76 else:
77 files = [x for x in files if not x[fs.A_NAME].startswith('.')]
78 files.sort()
79 else:
80 files = (self.protocol.fs.getfile(path)[:],)
81 except Exception:
82 self.write(
83 'ls: cannot access %s: No such file or directory\n' % (path,))
84 return
85
86 line = [x[fs.A_NAME] for x in files]
87 if not line:
88 return
89 count = 0
90 maxlen = max([len(x) for x in line])
91
92 try:
93 wincols = self.protocol.user.windowSize[1]
94 except AttributeError:
95 wincols = 80
96
97 perline = int(wincols / (maxlen + 1))
98 for f in line:
99 if count == perline:
100 count = 0
101 self.write('\n')
102 self.write(f.ljust(maxlen + 1))
103 count += 1
104 self.write('\n')
105
106 def do_ls_l(self, path):
107 try:
108 if self.protocol.fs.isdir(path) and not self.showDirectories:
109 files = self.protocol.fs.get_path(path)[:]
110 if self.showHidden:
111 dot = self.protocol.fs.getfile(path)[:]
112 dot[fs.A_NAME] = '.'
113 files.append(dot)
114 # FIXME: should grab dotdot off the parent instead
115 dotdot = self.protocol.fs.getfile(path)[:]
116 dotdot[fs.A_NAME] = '..'
117 files.append(dotdot)
118 else:
119 files = [x for x in files if not x[fs.A_NAME].startswith('.')]
120 files.sort()
121 else:
122 files = (self.protocol.fs.getfile(path)[:],)
123 except Exception:
124 self.write(
125 'ls: cannot access %s: No such file or directory\n' % (path,))
126 return
127
128 largest = 0
129 if len(files):
130 largest = max([x[fs.A_SIZE] for x in files])
131
132 for file in files:
133 if file[fs.A_NAME].startswith('.') and not self.showHidden:
134 continue
135
136 perms = ['-'] * 10
137 if file[fs.A_MODE] & stat.S_IRUSR:
138 perms[1] = 'r'
139 if file[fs.A_MODE] & stat.S_IWUSR:
140 perms[2] = 'w'
141 if file[fs.A_MODE] & stat.S_IXUSR:
142 perms[3] = 'x'
143 if file[fs.A_MODE] & stat.S_ISUID:
144 perms[3] = 'S'
145 if file[fs.A_MODE] & stat.S_IXUSR and file[fs.A_MODE] & stat.S_ISUID:
146 perms[3] = 's'
147
148 if file[fs.A_MODE] & stat.S_IRGRP:
149 perms[4] = 'r'
150 if file[fs.A_MODE] & stat.S_IWGRP:
151 perms[5] = 'w'
152 if file[fs.A_MODE] & stat.S_IXGRP:
153 perms[6] = 'x'
154 if file[fs.A_MODE] & stat.S_ISGID:
155 perms[6] = 'S'
156 if file[fs.A_MODE] & stat.S_IXGRP and file[fs.A_MODE] & stat.S_ISGID:
157 perms[6] = 's'
158
159 if file[fs.A_MODE] & stat.S_IROTH:
160 perms[7] = 'r'
161 if file[fs.A_MODE] & stat.S_IWOTH:
162 perms[8] = 'w'
163 if file[fs.A_MODE] & stat.S_IXOTH:
164 perms[9] = 'x'
165 if file[fs.A_MODE] & stat.S_ISVTX:
166 perms[9] = 'T'
167 if file[fs.A_MODE] & stat.S_IXOTH and file[fs.A_MODE] & stat.S_ISVTX:
168 perms[9] = 't'
169
170 linktarget = ''
171
172 if file[fs.A_TYPE] == fs.T_DIR:
173 perms[0] = 'd'
174 elif file[fs.A_TYPE] == fs.T_LINK:
175 perms[0] = 'l'
176 linktarget = ' -> %s' % (file[fs.A_TARGET],)
177
178 perms = ''.join(perms)
179 ctime = time.localtime(file[fs.A_CTIME])
180
181 line = '%s 1 %s %s %s %s %s%s' % \
182 (perms,
183 self.uid2name(file[fs.A_UID]),
184 self.gid2name(file[fs.A_GID]),
185 str(file[fs.A_SIZE]).rjust(len(str(largest))),
186 time.strftime('%Y-%m-%d %H:%M', ctime),
187 file[fs.A_NAME],
188 linktarget)
189
190 self.write('{0}\n'.format(line))
191
192
193 commands['/bin/ls'] = command_ls
194 commands['ls'] = command_ls
195 commands['/bin/dir'] = command_ls
196 commands['dir'] = command_ls
197
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/cowrie/commands/ls.py b/src/cowrie/commands/ls.py
--- a/src/cowrie/commands/ls.py
+++ b/src/cowrie/commands/ls.py
@@ -4,6 +4,7 @@
from __future__ import absolute_import, division
import getopt
+import os.path
import stat
import time
@@ -61,7 +62,7 @@
for path in paths:
func(path)
- def do_ls_normal(self, path):
+ def get_dir_files(self, path):
try:
if self.protocol.fs.isdir(path) and not self.showDirectories:
files = self.protocol.fs.get_path(path)[:]
@@ -69,8 +70,9 @@
dot = self.protocol.fs.getfile(path)[:]
dot[fs.A_NAME] = '.'
files.append(dot)
- # FIXME: should grab dotdot off the parent instead
- dotdot = self.protocol.fs.getfile(path)[:]
+ dotdot = self.protocol.fs.getfile(os.path.split(path)[0])[:]
+ if not dotdot:
+ dotdot = self.protocol.fs.getfile(path)[:]
dotdot[fs.A_NAME] = '..'
files.append(dotdot)
else:
@@ -82,6 +84,10 @@
self.write(
'ls: cannot access %s: No such file or directory\n' % (path,))
return
+ return files
+
+ def do_ls_normal(self, path):
+ files = self.get_dir_files(path)
line = [x[fs.A_NAME] for x in files]
if not line:
@@ -104,26 +110,7 @@
self.write('\n')
def do_ls_l(self, path):
- try:
- if self.protocol.fs.isdir(path) and not self.showDirectories:
- files = self.protocol.fs.get_path(path)[:]
- if self.showHidden:
- dot = self.protocol.fs.getfile(path)[:]
- dot[fs.A_NAME] = '.'
- files.append(dot)
- # FIXME: should grab dotdot off the parent instead
- dotdot = self.protocol.fs.getfile(path)[:]
- dotdot[fs.A_NAME] = '..'
- files.append(dotdot)
- else:
- files = [x for x in files if not x[fs.A_NAME].startswith('.')]
- files.sort()
- else:
- files = (self.protocol.fs.getfile(path)[:],)
- except Exception:
- self.write(
- 'ls: cannot access %s: No such file or directory\n' % (path,))
- return
+ files = self.get_dir_files(path)
largest = 0
if len(files):
|
{"golden_diff": "diff --git a/src/cowrie/commands/ls.py b/src/cowrie/commands/ls.py\n--- a/src/cowrie/commands/ls.py\n+++ b/src/cowrie/commands/ls.py\n@@ -4,6 +4,7 @@\n from __future__ import absolute_import, division\n \n import getopt\n+import os.path\n import stat\n import time\n \n@@ -61,7 +62,7 @@\n for path in paths:\n func(path)\n \n- def do_ls_normal(self, path):\n+ def get_dir_files(self, path):\n try:\n if self.protocol.fs.isdir(path) and not self.showDirectories:\n files = self.protocol.fs.get_path(path)[:]\n@@ -69,8 +70,9 @@\n dot = self.protocol.fs.getfile(path)[:]\n dot[fs.A_NAME] = '.'\n files.append(dot)\n- # FIXME: should grab dotdot off the parent instead\n- dotdot = self.protocol.fs.getfile(path)[:]\n+ dotdot = self.protocol.fs.getfile(os.path.split(path)[0])[:]\n+ if not dotdot:\n+ dotdot = self.protocol.fs.getfile(path)[:]\n dotdot[fs.A_NAME] = '..'\n files.append(dotdot)\n else:\n@@ -82,6 +84,10 @@\n self.write(\n 'ls: cannot access %s: No such file or directory\\n' % (path,))\n return\n+ return files\n+\n+ def do_ls_normal(self, path):\n+ files = self.get_dir_files(path)\n \n line = [x[fs.A_NAME] for x in files]\n if not line:\n@@ -104,26 +110,7 @@\n self.write('\\n')\n \n def do_ls_l(self, path):\n- try:\n- if self.protocol.fs.isdir(path) and not self.showDirectories:\n- files = self.protocol.fs.get_path(path)[:]\n- if self.showHidden:\n- dot = self.protocol.fs.getfile(path)[:]\n- dot[fs.A_NAME] = '.'\n- files.append(dot)\n- # FIXME: should grab dotdot off the parent instead\n- dotdot = self.protocol.fs.getfile(path)[:]\n- dotdot[fs.A_NAME] = '..'\n- files.append(dotdot)\n- else:\n- files = [x for x in files if not x[fs.A_NAME].startswith('.')]\n- files.sort()\n- else:\n- files = (self.protocol.fs.getfile(path)[:],)\n- except Exception:\n- self.write(\n- 'ls: cannot access %s: No such file or directory\\n' % (path,))\n- return\n+ files = self.get_dir_files(path)\n \n largest = 0\n if len(files):\n", "issue": "'ls -al' incorrectly shows '..' files as duplicates of '.'\nWhen using ls inside a cowrie instance the '..' entry is just a duplicate of the '.' entry. The group and user information is often wrong. This is a very easy to check fingerprint of cowrie. \r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. SSH into a cowrie instance.\r\n2. `cd /home/richard`\r\n3. `ls -al`\r\n4. The '..' entry has ownership 'richard richard'\r\n\r\n**Expected behavior**\r\nThe information for the parent folder should be retrieved. In the case of '/home' from '/home/richard' the owner of '/home' should read as 'root root'\r\n\n'ls -al' incorrectly shows '..' files as duplicates of '.'\nWhen using ls inside a cowrie instance the '..' entry is just a duplicate of the '.' entry. The group and user information is often wrong. This is a very easy to check fingerprint of cowrie. \r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. SSH into a cowrie instance.\r\n2. `cd /home/richard`\r\n3. `ls -al`\r\n4. The '..' entry has ownership 'richard richard'\r\n\r\n**Expected behavior**\r\nThe information for the parent folder should be retrieved. 
In the case of '/home' from '/home/richard' the owner of '/home' should read as 'root root'\r\n\n", "before_files": [{"content": "# Copyright (c) 2009 Upi Tamminen <[email protected]>\n# See the COPYRIGHT file for more information\n\nfrom __future__ import absolute_import, division\n\nimport getopt\nimport stat\nimport time\n\nimport cowrie.shell.fs as fs\nfrom cowrie.shell.command import HoneyPotCommand\nfrom cowrie.shell.pwd import Group, Passwd\n\ncommands = {}\n\n\nclass command_ls(HoneyPotCommand):\n\n def uid2name(self, uid):\n try:\n return Passwd().getpwuid(uid)[\"pw_name\"]\n except Exception:\n return str(uid)\n\n def gid2name(self, gid):\n try:\n return Group().getgrgid(gid)[\"gr_name\"]\n except Exception:\n return str(gid)\n\n def call(self):\n path = self.protocol.cwd\n paths = []\n self.showHidden = False\n self.showDirectories = False\n func = self.do_ls_normal\n\n # Parse options or display no files\n try:\n opts, args = getopt.gnu_getopt(self.args, '1@ABCFGHLOPRSTUWabcdefghiklmnopqrstuvwx',\n ['help', 'version', 'param'])\n except getopt.GetoptError as err:\n self.write(\"ls: {}\\n\".format(err))\n self.write(\"Try 'ls --help' for more information.\\n\")\n return\n\n for x, a in opts:\n if x in ('-l'):\n func = self.do_ls_l\n if x in ('-a'):\n self.showHidden = True\n if x in ('-d'):\n self.showDirectories = True\n\n for arg in args:\n paths.append(self.protocol.fs.resolve_path(arg, self.protocol.cwd))\n\n if not paths:\n func(path)\n else:\n for path in paths:\n func(path)\n\n def do_ls_normal(self, path):\n try:\n if self.protocol.fs.isdir(path) and not self.showDirectories:\n files = self.protocol.fs.get_path(path)[:]\n if self.showHidden:\n dot = self.protocol.fs.getfile(path)[:]\n dot[fs.A_NAME] = '.'\n files.append(dot)\n # FIXME: should grab dotdot off the parent instead\n dotdot = self.protocol.fs.getfile(path)[:]\n dotdot[fs.A_NAME] = '..'\n files.append(dotdot)\n else:\n files = [x for x in files if not x[fs.A_NAME].startswith('.')]\n files.sort()\n else:\n files = (self.protocol.fs.getfile(path)[:],)\n except Exception:\n self.write(\n 'ls: cannot access %s: No such file or directory\\n' % (path,))\n return\n\n line = [x[fs.A_NAME] for x in files]\n if not line:\n return\n count = 0\n maxlen = max([len(x) for x in line])\n\n try:\n wincols = self.protocol.user.windowSize[1]\n except AttributeError:\n wincols = 80\n\n perline = int(wincols / (maxlen + 1))\n for f in line:\n if count == perline:\n count = 0\n self.write('\\n')\n self.write(f.ljust(maxlen + 1))\n count += 1\n self.write('\\n')\n\n def do_ls_l(self, path):\n try:\n if self.protocol.fs.isdir(path) and not self.showDirectories:\n files = self.protocol.fs.get_path(path)[:]\n if self.showHidden:\n dot = self.protocol.fs.getfile(path)[:]\n dot[fs.A_NAME] = '.'\n files.append(dot)\n # FIXME: should grab dotdot off the parent instead\n dotdot = self.protocol.fs.getfile(path)[:]\n dotdot[fs.A_NAME] = '..'\n files.append(dotdot)\n else:\n files = [x for x in files if not x[fs.A_NAME].startswith('.')]\n files.sort()\n else:\n files = (self.protocol.fs.getfile(path)[:],)\n except Exception:\n self.write(\n 'ls: cannot access %s: No such file or directory\\n' % (path,))\n return\n\n largest = 0\n if len(files):\n largest = max([x[fs.A_SIZE] for x in files])\n\n for file in files:\n if file[fs.A_NAME].startswith('.') and not self.showHidden:\n continue\n\n perms = ['-'] * 10\n if file[fs.A_MODE] & stat.S_IRUSR:\n perms[1] = 'r'\n if file[fs.A_MODE] & stat.S_IWUSR:\n perms[2] = 'w'\n if file[fs.A_MODE] & 
stat.S_IXUSR:\n perms[3] = 'x'\n if file[fs.A_MODE] & stat.S_ISUID:\n perms[3] = 'S'\n if file[fs.A_MODE] & stat.S_IXUSR and file[fs.A_MODE] & stat.S_ISUID:\n perms[3] = 's'\n\n if file[fs.A_MODE] & stat.S_IRGRP:\n perms[4] = 'r'\n if file[fs.A_MODE] & stat.S_IWGRP:\n perms[5] = 'w'\n if file[fs.A_MODE] & stat.S_IXGRP:\n perms[6] = 'x'\n if file[fs.A_MODE] & stat.S_ISGID:\n perms[6] = 'S'\n if file[fs.A_MODE] & stat.S_IXGRP and file[fs.A_MODE] & stat.S_ISGID:\n perms[6] = 's'\n\n if file[fs.A_MODE] & stat.S_IROTH:\n perms[7] = 'r'\n if file[fs.A_MODE] & stat.S_IWOTH:\n perms[8] = 'w'\n if file[fs.A_MODE] & stat.S_IXOTH:\n perms[9] = 'x'\n if file[fs.A_MODE] & stat.S_ISVTX:\n perms[9] = 'T'\n if file[fs.A_MODE] & stat.S_IXOTH and file[fs.A_MODE] & stat.S_ISVTX:\n perms[9] = 't'\n\n linktarget = ''\n\n if file[fs.A_TYPE] == fs.T_DIR:\n perms[0] = 'd'\n elif file[fs.A_TYPE] == fs.T_LINK:\n perms[0] = 'l'\n linktarget = ' -> %s' % (file[fs.A_TARGET],)\n\n perms = ''.join(perms)\n ctime = time.localtime(file[fs.A_CTIME])\n\n line = '%s 1 %s %s %s %s %s%s' % \\\n (perms,\n self.uid2name(file[fs.A_UID]),\n self.gid2name(file[fs.A_GID]),\n str(file[fs.A_SIZE]).rjust(len(str(largest))),\n time.strftime('%Y-%m-%d %H:%M', ctime),\n file[fs.A_NAME],\n linktarget)\n\n self.write('{0}\\n'.format(line))\n\n\ncommands['/bin/ls'] = command_ls\ncommands['ls'] = command_ls\ncommands['/bin/dir'] = command_ls\ncommands['dir'] = command_ls\n", "path": "src/cowrie/commands/ls.py"}], "after_files": [{"content": "# Copyright (c) 2009 Upi Tamminen <[email protected]>\n# See the COPYRIGHT file for more information\n\nfrom __future__ import absolute_import, division\n\nimport getopt\nimport os.path\nimport stat\nimport time\n\nimport cowrie.shell.fs as fs\nfrom cowrie.shell.command import HoneyPotCommand\nfrom cowrie.shell.pwd import Group, Passwd\n\ncommands = {}\n\n\nclass command_ls(HoneyPotCommand):\n\n def uid2name(self, uid):\n try:\n return Passwd().getpwuid(uid)[\"pw_name\"]\n except Exception:\n return str(uid)\n\n def gid2name(self, gid):\n try:\n return Group().getgrgid(gid)[\"gr_name\"]\n except Exception:\n return str(gid)\n\n def call(self):\n path = self.protocol.cwd\n paths = []\n self.showHidden = False\n self.showDirectories = False\n func = self.do_ls_normal\n\n # Parse options or display no files\n try:\n opts, args = getopt.gnu_getopt(self.args, '1@ABCFGHLOPRSTUWabcdefghiklmnopqrstuvwx',\n ['help', 'version', 'param'])\n except getopt.GetoptError as err:\n self.write(\"ls: {}\\n\".format(err))\n self.write(\"Try 'ls --help' for more information.\\n\")\n return\n\n for x, a in opts:\n if x in ('-l'):\n func = self.do_ls_l\n if x in ('-a'):\n self.showHidden = True\n if x in ('-d'):\n self.showDirectories = True\n\n for arg in args:\n paths.append(self.protocol.fs.resolve_path(arg, self.protocol.cwd))\n\n if not paths:\n func(path)\n else:\n for path in paths:\n func(path)\n\n def get_dir_files(self, path):\n try:\n if self.protocol.fs.isdir(path) and not self.showDirectories:\n files = self.protocol.fs.get_path(path)[:]\n if self.showHidden:\n dot = self.protocol.fs.getfile(path)[:]\n dot[fs.A_NAME] = '.'\n files.append(dot)\n dotdot = self.protocol.fs.getfile(os.path.split(path)[0])[:]\n if not dotdot:\n dotdot = self.protocol.fs.getfile(path)[:]\n dotdot[fs.A_NAME] = '..'\n files.append(dotdot)\n else:\n files = [x for x in files if not x[fs.A_NAME].startswith('.')]\n files.sort()\n else:\n files = (self.protocol.fs.getfile(path)[:],)\n except Exception:\n self.write(\n 'ls: cannot access %s: No 
such file or directory\\n' % (path,))\n return\n return files\n\n def do_ls_normal(self, path):\n files = self.get_dir_files(path)\n\n line = [x[fs.A_NAME] for x in files]\n if not line:\n return\n count = 0\n maxlen = max([len(x) for x in line])\n\n try:\n wincols = self.protocol.user.windowSize[1]\n except AttributeError:\n wincols = 80\n\n perline = int(wincols / (maxlen + 1))\n for f in line:\n if count == perline:\n count = 0\n self.write('\\n')\n self.write(f.ljust(maxlen + 1))\n count += 1\n self.write('\\n')\n\n def do_ls_l(self, path):\n files = self.get_dir_files(path)\n\n largest = 0\n if len(files):\n largest = max([x[fs.A_SIZE] for x in files])\n\n for file in files:\n if file[fs.A_NAME].startswith('.') and not self.showHidden:\n continue\n\n perms = ['-'] * 10\n if file[fs.A_MODE] & stat.S_IRUSR:\n perms[1] = 'r'\n if file[fs.A_MODE] & stat.S_IWUSR:\n perms[2] = 'w'\n if file[fs.A_MODE] & stat.S_IXUSR:\n perms[3] = 'x'\n if file[fs.A_MODE] & stat.S_ISUID:\n perms[3] = 'S'\n if file[fs.A_MODE] & stat.S_IXUSR and file[fs.A_MODE] & stat.S_ISUID:\n perms[3] = 's'\n\n if file[fs.A_MODE] & stat.S_IRGRP:\n perms[4] = 'r'\n if file[fs.A_MODE] & stat.S_IWGRP:\n perms[5] = 'w'\n if file[fs.A_MODE] & stat.S_IXGRP:\n perms[6] = 'x'\n if file[fs.A_MODE] & stat.S_ISGID:\n perms[6] = 'S'\n if file[fs.A_MODE] & stat.S_IXGRP and file[fs.A_MODE] & stat.S_ISGID:\n perms[6] = 's'\n\n if file[fs.A_MODE] & stat.S_IROTH:\n perms[7] = 'r'\n if file[fs.A_MODE] & stat.S_IWOTH:\n perms[8] = 'w'\n if file[fs.A_MODE] & stat.S_IXOTH:\n perms[9] = 'x'\n if file[fs.A_MODE] & stat.S_ISVTX:\n perms[9] = 'T'\n if file[fs.A_MODE] & stat.S_IXOTH and file[fs.A_MODE] & stat.S_ISVTX:\n perms[9] = 't'\n\n linktarget = ''\n\n if file[fs.A_TYPE] == fs.T_DIR:\n perms[0] = 'd'\n elif file[fs.A_TYPE] == fs.T_LINK:\n perms[0] = 'l'\n linktarget = ' -> %s' % (file[fs.A_TARGET],)\n\n perms = ''.join(perms)\n ctime = time.localtime(file[fs.A_CTIME])\n\n line = '%s 1 %s %s %s %s %s%s' % \\\n (perms,\n self.uid2name(file[fs.A_UID]),\n self.gid2name(file[fs.A_GID]),\n str(file[fs.A_SIZE]).rjust(len(str(largest))),\n time.strftime('%Y-%m-%d %H:%M', ctime),\n file[fs.A_NAME],\n linktarget)\n\n self.write('{0}\\n'.format(line))\n\n\ncommands['/bin/ls'] = command_ls\ncommands['ls'] = command_ls\ncommands['/bin/dir'] = command_ls\ncommands['dir'] = command_ls\n", "path": "src/cowrie/commands/ls.py"}]}
| 2,650 | 612 |
gh_patches_debug_25470
|
rasdani/github-patches
|
git_diff
|
hylang__hy-2188
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Monkey-patching `py.path.local.pyimport` should no longer be necessary
Hi
I noticed **py** is used in conftest.py but not declared in any configuration files.
In addition, py as a Python library is deprecated; its [documentation](https://pypi.org/project/py/) advises: "py.path: uniform local and svn path objects -> please use pathlib/pathlib2 instead".
Maybe it is necessary to migrate to the replacement dependency (pathlib/pathlib2) and add it to the configuration files.
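For illustration only (none of this is taken from the hy repository), the `py.path.local` calls that test suites typically rely on map fairly directly onto the standard library; the sketch below shows plausible `pathlib`/`importlib` equivalents, with all paths and module names being made-up examples:

```python
# Rough pathlib/importlib equivalents of common py.path.local usage.
# Paths and module names here are hypothetical, purely for illustration.
import importlib
from pathlib import Path

p = Path("tests") / "native_tests" / "example.hy"  # py.path.local("tests").join("native_tests", "example.hy")
print(p.suffix)      # ".hy"               -- was .ext
print(p.stem)        # "example"           -- was .purebasename
print(p.parent)      # tests/native_tests  -- was .dirpath()
print(p.exists())    # likely False here   -- was .check()

# py.path.local(...).pyimport() roughly corresponds to importing by dotted name:
mod = importlib.import_module("json.decoder")
print(mod.__name__)  # "json.decoder"
```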
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conftest.py`
Content:
```
1 import sys
2 import os
3 import importlib
4 from operator import or_
5 from functools import reduce
6
7 import py
8 import pytest
9 import hy
10 from hy._compat import PY3_8, PY3_10
11
12 NATIVE_TESTS = os.path.join("", "tests", "native_tests", "")
13
14 _fspath_pyimport = py.path.local.pyimport
15
16 # https://github.com/hylang/hy/issues/2029
17 os.environ.pop("HYSTARTUP", None)
18
19
20 def pytest_ignore_collect(path, config):
21 versions = [
22 (sys.version_info < (3, 8), "sub_py3_7_only"),
23 (PY3_8, "py3_8_only"),
24 (PY3_10, "py3_10_only"),
25 ]
26
27 return reduce(
28 or_,
29 (name in path.basename and not condition for condition, name in versions),
30 ) or None
31
32
33 def pyimport_patch_mismatch(self, **kwargs):
34 """Lame fix for https://github.com/pytest-dev/py/issues/195"""
35 try:
36 return _fspath_pyimport(self, **kwargs)
37 except py.path.local.ImportMismatchError:
38 pkgpath = self.pypkgpath()
39 if pkgpath is None:
40 pkgroot = self.dirpath()
41 modname = self.purebasename
42 else:
43 pkgroot = pkgpath.dirpath()
44 names = self.new(ext="").relto(pkgroot).split(self.sep)
45 if names[-1] == "__init__":
46 names.pop()
47 modname = ".".join(names)
48
49 res = importlib.import_module(modname)
50
51 return res
52
53
54 py.path.local.pyimport = pyimport_patch_mismatch
55
56
57 def pytest_collect_file(parent, path):
58 if (path.ext == ".hy"
59 and NATIVE_TESTS in path.dirname + os.sep
60 and path.basename != "__init__.hy"):
61
62 if hasattr(pytest.Module, "from_parent"):
63 pytest_mod = pytest.Module.from_parent(parent, fspath=path)
64 else:
65 pytest_mod = pytest.Module(path, parent)
66 return pytest_mod
67
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/conftest.py b/conftest.py
--- a/conftest.py
+++ b/conftest.py
@@ -4,15 +4,12 @@
from operator import or_
from functools import reduce
-import py
import pytest
import hy
from hy._compat import PY3_8, PY3_10
NATIVE_TESTS = os.path.join("", "tests", "native_tests", "")
-_fspath_pyimport = py.path.local.pyimport
-
# https://github.com/hylang/hy/issues/2029
os.environ.pop("HYSTARTUP", None)
@@ -30,30 +27,6 @@
) or None
-def pyimport_patch_mismatch(self, **kwargs):
- """Lame fix for https://github.com/pytest-dev/py/issues/195"""
- try:
- return _fspath_pyimport(self, **kwargs)
- except py.path.local.ImportMismatchError:
- pkgpath = self.pypkgpath()
- if pkgpath is None:
- pkgroot = self.dirpath()
- modname = self.purebasename
- else:
- pkgroot = pkgpath.dirpath()
- names = self.new(ext="").relto(pkgroot).split(self.sep)
- if names[-1] == "__init__":
- names.pop()
- modname = ".".join(names)
-
- res = importlib.import_module(modname)
-
- return res
-
-
-py.path.local.pyimport = pyimport_patch_mismatch
-
-
def pytest_collect_file(parent, path):
if (path.ext == ".hy"
and NATIVE_TESTS in path.dirname + os.sep
|
{"golden_diff": "diff --git a/conftest.py b/conftest.py\n--- a/conftest.py\n+++ b/conftest.py\n@@ -4,15 +4,12 @@\n from operator import or_\n from functools import reduce\n \n-import py\n import pytest\n import hy\n from hy._compat import PY3_8, PY3_10\n \n NATIVE_TESTS = os.path.join(\"\", \"tests\", \"native_tests\", \"\")\n \n-_fspath_pyimport = py.path.local.pyimport\n-\n # https://github.com/hylang/hy/issues/2029\n os.environ.pop(\"HYSTARTUP\", None)\n \n@@ -30,30 +27,6 @@\n ) or None\n \n \n-def pyimport_patch_mismatch(self, **kwargs):\n- \"\"\"Lame fix for https://github.com/pytest-dev/py/issues/195\"\"\"\n- try:\n- return _fspath_pyimport(self, **kwargs)\n- except py.path.local.ImportMismatchError:\n- pkgpath = self.pypkgpath()\n- if pkgpath is None:\n- pkgroot = self.dirpath()\n- modname = self.purebasename\n- else:\n- pkgroot = pkgpath.dirpath()\n- names = self.new(ext=\"\").relto(pkgroot).split(self.sep)\n- if names[-1] == \"__init__\":\n- names.pop()\n- modname = \".\".join(names)\n-\n- res = importlib.import_module(modname)\n-\n- return res\n-\n-\n-py.path.local.pyimport = pyimport_patch_mismatch\n-\n-\n def pytest_collect_file(parent, path):\n if (path.ext == \".hy\"\n and NATIVE_TESTS in path.dirname + os.sep\n", "issue": "Monkey-patching `py.path.local.pyimport` should no longer be necessary\nHi\r\nI noticed **py** is used in conftest.py but not declared in any configuration files .\r\nIn addition, py as a Python library is deprecated as its [documentation](https://pypi.org/project/py/) \"py.path: uniform local and svn path objects -> please use pathlib/pathlib2 instead\"\r\n\r\nMaybe it is necessary to migrate to new dependency-pathlib2 and add it to configuration files.\n", "before_files": [{"content": "import sys\nimport os\nimport importlib\nfrom operator import or_\nfrom functools import reduce\n\nimport py\nimport pytest\nimport hy\nfrom hy._compat import PY3_8, PY3_10\n\nNATIVE_TESTS = os.path.join(\"\", \"tests\", \"native_tests\", \"\")\n\n_fspath_pyimport = py.path.local.pyimport\n\n# https://github.com/hylang/hy/issues/2029\nos.environ.pop(\"HYSTARTUP\", None)\n\n\ndef pytest_ignore_collect(path, config):\n versions = [\n (sys.version_info < (3, 8), \"sub_py3_7_only\"),\n (PY3_8, \"py3_8_only\"),\n (PY3_10, \"py3_10_only\"),\n ]\n\n return reduce(\n or_,\n (name in path.basename and not condition for condition, name in versions),\n ) or None\n\n\ndef pyimport_patch_mismatch(self, **kwargs):\n \"\"\"Lame fix for https://github.com/pytest-dev/py/issues/195\"\"\"\n try:\n return _fspath_pyimport(self, **kwargs)\n except py.path.local.ImportMismatchError:\n pkgpath = self.pypkgpath()\n if pkgpath is None:\n pkgroot = self.dirpath()\n modname = self.purebasename\n else:\n pkgroot = pkgpath.dirpath()\n names = self.new(ext=\"\").relto(pkgroot).split(self.sep)\n if names[-1] == \"__init__\":\n names.pop()\n modname = \".\".join(names)\n\n res = importlib.import_module(modname)\n\n return res\n\n\npy.path.local.pyimport = pyimport_patch_mismatch\n\n\ndef pytest_collect_file(parent, path):\n if (path.ext == \".hy\"\n and NATIVE_TESTS in path.dirname + os.sep\n and path.basename != \"__init__.hy\"):\n\n if hasattr(pytest.Module, \"from_parent\"):\n pytest_mod = pytest.Module.from_parent(parent, fspath=path)\n else:\n pytest_mod = pytest.Module(path, parent)\n return pytest_mod\n", "path": "conftest.py"}], "after_files": [{"content": "import sys\nimport os\nimport importlib\nfrom operator import or_\nfrom functools import reduce\n\nimport pytest\nimport hy\nfrom hy._compat 
import PY3_8, PY3_10\n\nNATIVE_TESTS = os.path.join(\"\", \"tests\", \"native_tests\", \"\")\n\n# https://github.com/hylang/hy/issues/2029\nos.environ.pop(\"HYSTARTUP\", None)\n\n\ndef pytest_ignore_collect(path, config):\n versions = [\n (sys.version_info < (3, 8), \"sub_py3_7_only\"),\n (PY3_8, \"py3_8_only\"),\n (PY3_10, \"py3_10_only\"),\n ]\n\n return reduce(\n or_,\n (name in path.basename and not condition for condition, name in versions),\n ) or None\n\n\ndef pytest_collect_file(parent, path):\n if (path.ext == \".hy\"\n and NATIVE_TESTS in path.dirname + os.sep\n and path.basename != \"__init__.hy\"):\n\n if hasattr(pytest.Module, \"from_parent\"):\n pytest_mod = pytest.Module.from_parent(parent, fspath=path)\n else:\n pytest_mod = pytest.Module(path, parent)\n return pytest_mod\n", "path": "conftest.py"}]}
| 939 | 371 |
gh_patches_debug_36132
|
rasdani/github-patches
|
git_diff
|
DataDog__dd-trace-py-1823
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
DeprecationWarning: the imp module is deprecated in favour of importlib
When running a Django project using ddtrace with [warnings enabled](https://docs.python.org/3/using/cmdline.html#cmdoption-w), this warning is emitted:
## Issue
> `/usr/local/lib/python3.7/dist-packages/ddtrace/bootstrap/sitecustomize.py:7`: `DeprecationWarning`: the `imp` module is deprecated in favour of `importlib`; see the [module's documentation](https://docs.python.org/3/library/imp.html) for alternative uses
## Details
The line in question:
https://github.com/DataDog/dd-trace-py/blob/94148324196eb41c1f6bef56be51bdd96c758fa7/ddtrace/bootstrap/sitecustomize.py#L7
How it's used:
https://github.com/DataDog/dd-trace-py/blob/94148324196eb41c1f6bef56be51bdd96c758fa7/ddtrace/bootstrap/sitecustomize.py#L103-L120
Documentation note for [`imp.find_module()`](https://docs.python.org/3/library/imp.html#imp.find_module):
> Deprecated since version 3.3: Use `importlib.util.find_spec()` instead unless Python 3.3 compatibility is required, in which case use `importlib.find_loader()`. For example usage of the former case, see the Examples section of the `importlib` documentation.
Documentation note for [`imp.load_module()`](https://docs.python.org/3/library/imp.html#imp.load_module):
> Deprecated since version 3.3: If previously used in conjunction with `imp.find_module()` then consider using `importlib.import_module()`, otherwise use the loader returned by the replacement you chose for `imp.find_module()`. If you called `imp.load_module()` and related functions directly with file path arguments then use a combination of `importlib.util.spec_from_file_location()` and `importlib.util.module_from_spec()`. See the Examples section of the `importlib` documentation for details of the various approaches.
## Resolution
I suspect [this example](https://docs.python.org/3/library/importlib.html#approximating-importlib-import-module) could be worth building off of to do the necessary path customization.
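As a hedged sketch of what that could look like (this is not the actual dd-trace-py patch; the function name and the way the search path is passed in are invented for illustration), the `imp.find_module()`/`imp.load_module()` pair quoted above translates roughly to:

```python
# Hypothetical importlib-based replacement for the imp.find_module/load_module pair.
# Not the real sitecustomize.py change; load_user_sitecustomize and search_path are illustrative.
import importlib.machinery
import importlib.util
import sys


def load_user_sitecustomize(search_path):
    """Locate and execute a user sitecustomize.py somewhere on search_path (a list of dirs)."""
    spec = importlib.machinery.PathFinder.find_spec("sitecustomize", search_path)
    if spec is None:
        # imp.find_module() would have raised ImportError in this case
        return None
    module = importlib.util.module_from_spec(spec)
    sys.modules["sitecustomize"] = module
    spec.loader.exec_module(module)  # stands in for imp.load_module(...)
    return module
```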
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ddtrace/bootstrap/sitecustomize.py`
Content:
```
1 """
2 Bootstrapping code that is run when using the `ddtrace-run` Python entrypoint
3 Add all monkey-patching that needs to run by default here
4 """
5 import logging
6 import os
7 import imp
8 import sys
9
10 from ddtrace.utils.formats import asbool, get_env, parse_tags_str
11 from ddtrace.internal.logger import get_logger
12 from ddtrace import config, constants
13 from ddtrace.tracer import debug_mode, DD_LOG_FORMAT
14
15
16 if config.logs_injection:
17 # immediately patch logging if trace id injected
18 from ddtrace import patch
19
20 patch(logging=True)
21
22
23 # DEV: Once basicConfig is called here, future calls to it cannot be used to
24 # change the formatter since it applies the formatter to the root handler only
25 # upon initializing it the first time.
26 # See https://github.com/python/cpython/blob/112e4afd582515fcdcc0cde5012a4866e5cfda12/Lib/logging/__init__.py#L1550
27 # Debug mode from the tracer will do a basicConfig so only need to do this otherwise
28 if not debug_mode:
29 if config.logs_injection:
30 logging.basicConfig(format=DD_LOG_FORMAT)
31 else:
32 logging.basicConfig()
33
34 log = get_logger(__name__)
35
36 EXTRA_PATCHED_MODULES = {
37 "bottle": True,
38 "django": True,
39 "falcon": True,
40 "flask": True,
41 "pylons": True,
42 "pyramid": True,
43 }
44
45
46 def update_patched_modules():
47 modules_to_patch = os.environ.get("DATADOG_PATCH_MODULES")
48 if not modules_to_patch:
49 return
50
51 modules = parse_tags_str(modules_to_patch)
52 for module, should_patch in modules.items():
53 EXTRA_PATCHED_MODULES[module] = asbool(should_patch)
54
55
56 try:
57 from ddtrace import tracer
58
59 # Respect DATADOG_* environment variables in global tracer configuration
60 # TODO: these variables are deprecated; use utils method and update our documentation
61 # correct prefix should be DD_*
62 hostname = os.environ.get("DD_AGENT_HOST", os.environ.get("DATADOG_TRACE_AGENT_HOSTNAME"))
63 port = os.environ.get("DATADOG_TRACE_AGENT_PORT")
64 priority_sampling = os.environ.get("DATADOG_PRIORITY_SAMPLING")
65 profiling = asbool(os.environ.get("DD_PROFILING_ENABLED", False))
66
67 if profiling:
68 import ddtrace.profiling.auto # noqa: F401
69
70 opts = {}
71
72 if asbool(os.environ.get("DATADOG_TRACE_ENABLED", True)):
73 patch = True
74 else:
75 patch = False
76 opts["enabled"] = False
77
78 if hostname:
79 opts["hostname"] = hostname
80 if port:
81 opts["port"] = int(port)
82 if priority_sampling:
83 opts["priority_sampling"] = asbool(priority_sampling)
84
85 opts["collect_metrics"] = asbool(get_env("runtime_metrics", "enabled"))
86
87 if opts:
88 tracer.configure(**opts)
89
90 if patch:
91 update_patched_modules()
92 from ddtrace import patch_all
93
94 patch_all(**EXTRA_PATCHED_MODULES)
95
96 if "DATADOG_ENV" in os.environ:
97 tracer.set_tags({constants.ENV_KEY: os.environ["DATADOG_ENV"]})
98
99 if "DD_TRACE_GLOBAL_TAGS" in os.environ:
100 env_tags = os.getenv("DD_TRACE_GLOBAL_TAGS")
101 tracer.set_tags(parse_tags_str(env_tags))
102
103 # Ensure sitecustomize.py is properly called if available in application directories:
104 # * exclude `bootstrap_dir` from the search
105 # * find a user `sitecustomize.py` module
106 # * import that module via `imp`
107 bootstrap_dir = os.path.dirname(__file__)
108 path = list(sys.path)
109
110 if bootstrap_dir in path:
111 path.remove(bootstrap_dir)
112
113 try:
114 (f, path, description) = imp.find_module("sitecustomize", path)
115 except ImportError:
116 pass
117 else:
118 # `sitecustomize.py` found, load it
119 log.debug("sitecustomize from user found in: %s", path)
120 imp.load_module("sitecustomize", f, path, description)
121
122 # Loading status used in tests to detect if the `sitecustomize` has been
123 # properly loaded without exceptions. This must be the last action in the module
124 # when the execution ends with a success.
125 loaded = True
126 except Exception:
127 loaded = False
128 log.warning("error configuring Datadog tracing", exc_info=True)
129
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ddtrace/bootstrap/sitecustomize.py b/ddtrace/bootstrap/sitecustomize.py
--- a/ddtrace/bootstrap/sitecustomize.py
+++ b/ddtrace/bootstrap/sitecustomize.py
@@ -4,7 +4,6 @@
"""
import logging
import os
-import imp
import sys
from ddtrace.utils.formats import asbool, get_env, parse_tags_str
@@ -100,24 +99,40 @@
env_tags = os.getenv("DD_TRACE_GLOBAL_TAGS")
tracer.set_tags(parse_tags_str(env_tags))
- # Ensure sitecustomize.py is properly called if available in application directories:
- # * exclude `bootstrap_dir` from the search
- # * find a user `sitecustomize.py` module
- # * import that module via `imp`
+ # Check for and import any sitecustomize that would have normally been used
+ # had ddtrace-run not been used.
bootstrap_dir = os.path.dirname(__file__)
- path = list(sys.path)
-
- if bootstrap_dir in path:
- path.remove(bootstrap_dir)
-
- try:
- (f, path, description) = imp.find_module("sitecustomize", path)
- except ImportError:
- pass
+ if bootstrap_dir in sys.path:
+ index = sys.path.index(bootstrap_dir)
+ del sys.path[index]
+
+ # NOTE: this reference to the module is crucial in Python 2.
+ # Without it the current module gets gc'd and all subsequent references
+ # will be `None`.
+ ddtrace_sitecustomize = sys.modules["sitecustomize"]
+ del sys.modules["sitecustomize"]
+ try:
+ import sitecustomize # noqa
+ except ImportError:
+ # If an additional sitecustomize is not found then put the ddtrace
+ # sitecustomize back.
+ log.debug("additional sitecustomize not found")
+ sys.modules["sitecustomize"] = ddtrace_sitecustomize
+ else:
+ log.debug("additional sitecustomize found in: %s", sys.path)
+ finally:
+ # Always reinsert the ddtrace bootstrap directory to the path so
+ # that introspection and debugging the application makes sense.
+ # Note that this does not interfere with imports since a user
+ # sitecustomize, if it exists, will be imported.
+ sys.path.insert(index, bootstrap_dir)
else:
- # `sitecustomize.py` found, load it
- log.debug("sitecustomize from user found in: %s", path)
- imp.load_module("sitecustomize", f, path, description)
+ try:
+ import sitecustomize # noqa
+ except ImportError:
+ log.debug("additional sitecustomize not found")
+ else:
+ log.debug("additional sitecustomize found in: %s", sys.path)
# Loading status used in tests to detect if the `sitecustomize` has been
# properly loaded without exceptions. This must be the last action in the module
|
{"golden_diff": "diff --git a/ddtrace/bootstrap/sitecustomize.py b/ddtrace/bootstrap/sitecustomize.py\n--- a/ddtrace/bootstrap/sitecustomize.py\n+++ b/ddtrace/bootstrap/sitecustomize.py\n@@ -4,7 +4,6 @@\n \"\"\"\n import logging\n import os\n-import imp\n import sys\n \n from ddtrace.utils.formats import asbool, get_env, parse_tags_str\n@@ -100,24 +99,40 @@\n env_tags = os.getenv(\"DD_TRACE_GLOBAL_TAGS\")\n tracer.set_tags(parse_tags_str(env_tags))\n \n- # Ensure sitecustomize.py is properly called if available in application directories:\n- # * exclude `bootstrap_dir` from the search\n- # * find a user `sitecustomize.py` module\n- # * import that module via `imp`\n+ # Check for and import any sitecustomize that would have normally been used\n+ # had ddtrace-run not been used.\n bootstrap_dir = os.path.dirname(__file__)\n- path = list(sys.path)\n-\n- if bootstrap_dir in path:\n- path.remove(bootstrap_dir)\n-\n- try:\n- (f, path, description) = imp.find_module(\"sitecustomize\", path)\n- except ImportError:\n- pass\n+ if bootstrap_dir in sys.path:\n+ index = sys.path.index(bootstrap_dir)\n+ del sys.path[index]\n+\n+ # NOTE: this reference to the module is crucial in Python 2.\n+ # Without it the current module gets gc'd and all subsequent references\n+ # will be `None`.\n+ ddtrace_sitecustomize = sys.modules[\"sitecustomize\"]\n+ del sys.modules[\"sitecustomize\"]\n+ try:\n+ import sitecustomize # noqa\n+ except ImportError:\n+ # If an additional sitecustomize is not found then put the ddtrace\n+ # sitecustomize back.\n+ log.debug(\"additional sitecustomize not found\")\n+ sys.modules[\"sitecustomize\"] = ddtrace_sitecustomize\n+ else:\n+ log.debug(\"additional sitecustomize found in: %s\", sys.path)\n+ finally:\n+ # Always reinsert the ddtrace bootstrap directory to the path so\n+ # that introspection and debugging the application makes sense.\n+ # Note that this does not interfere with imports since a user\n+ # sitecustomize, if it exists, will be imported.\n+ sys.path.insert(index, bootstrap_dir)\n else:\n- # `sitecustomize.py` found, load it\n- log.debug(\"sitecustomize from user found in: %s\", path)\n- imp.load_module(\"sitecustomize\", f, path, description)\n+ try:\n+ import sitecustomize # noqa\n+ except ImportError:\n+ log.debug(\"additional sitecustomize not found\")\n+ else:\n+ log.debug(\"additional sitecustomize found in: %s\", sys.path)\n \n # Loading status used in tests to detect if the `sitecustomize` has been\n # properly loaded without exceptions. 
This must be the last action in the module\n", "issue": "DeprecationWarning: the imp module is deprecated in favour of importlib\nWhen running a Django project using ddtrace with [warnings enabled](https://docs.python.org/3/using/cmdline.html#cmdoption-w), this warning is emitted:\r\n\r\n## Issue\r\n\r\n> `/usr/local/lib/python3.7/dist-packages/ddtrace/bootstrap/sitecustomize.py:7`: `DeprecationWarning`: the `imp` module is deprecated in favour of `importlib`; see the [module's documentation](https://docs.python.org/3/library/imp.html) for alternative uses\r\n\r\n## Details\r\n\r\nThe line in question:\r\n\r\nhttps://github.com/DataDog/dd-trace-py/blob/94148324196eb41c1f6bef56be51bdd96c758fa7/ddtrace/bootstrap/sitecustomize.py#L7\r\n\r\nHow it's used: \r\n\r\nhttps://github.com/DataDog/dd-trace-py/blob/94148324196eb41c1f6bef56be51bdd96c758fa7/ddtrace/bootstrap/sitecustomize.py#L103-L120\r\n\r\nDocumentation note for [`imp.find_module()`](https://docs.python.org/3/library/imp.html#imp.find_module):\r\n\r\n> Deprecated since version 3.3: Use `importlib.util.find_spec()` instead unless Python 3.3 compatibility is required, in which case use `importlib.find_loader()`. For example usage of the former case, see the Examples section of the `importlib` documentation.\r\n\r\nDocumentation note for [`imp.load_module()`](https://docs.python.org/3/library/imp.html#imp.load_module):\r\n\r\n> Deprecated since version 3.3: If previously used in conjunction with `imp.find_module()` then consider using `importlib.import_module()`, otherwise use the loader returned by the replacement you chose for `imp.find_module()`. If you called `imp.load_module()` and related functions directly with file path arguments then use a combination of `importlib.util.spec_from_file_location()` and `importlib.util.module_from_spec()`. 
See the Examples section of the `importlib` documentation for details of the various approaches.\r\n\r\n## Resolution\r\n\r\nI suspect [this example](https://docs.python.org/3/library/importlib.html#approximating-importlib-import-module) could be worth building off of to do the necessary path customization.\n", "before_files": [{"content": "\"\"\"\nBootstrapping code that is run when using the `ddtrace-run` Python entrypoint\nAdd all monkey-patching that needs to run by default here\n\"\"\"\nimport logging\nimport os\nimport imp\nimport sys\n\nfrom ddtrace.utils.formats import asbool, get_env, parse_tags_str\nfrom ddtrace.internal.logger import get_logger\nfrom ddtrace import config, constants\nfrom ddtrace.tracer import debug_mode, DD_LOG_FORMAT\n\n\nif config.logs_injection:\n # immediately patch logging if trace id injected\n from ddtrace import patch\n\n patch(logging=True)\n\n\n# DEV: Once basicConfig is called here, future calls to it cannot be used to\n# change the formatter since it applies the formatter to the root handler only\n# upon initializing it the first time.\n# See https://github.com/python/cpython/blob/112e4afd582515fcdcc0cde5012a4866e5cfda12/Lib/logging/__init__.py#L1550\n# Debug mode from the tracer will do a basicConfig so only need to do this otherwise\nif not debug_mode:\n if config.logs_injection:\n logging.basicConfig(format=DD_LOG_FORMAT)\n else:\n logging.basicConfig()\n\nlog = get_logger(__name__)\n\nEXTRA_PATCHED_MODULES = {\n \"bottle\": True,\n \"django\": True,\n \"falcon\": True,\n \"flask\": True,\n \"pylons\": True,\n \"pyramid\": True,\n}\n\n\ndef update_patched_modules():\n modules_to_patch = os.environ.get(\"DATADOG_PATCH_MODULES\")\n if not modules_to_patch:\n return\n\n modules = parse_tags_str(modules_to_patch)\n for module, should_patch in modules.items():\n EXTRA_PATCHED_MODULES[module] = asbool(should_patch)\n\n\ntry:\n from ddtrace import tracer\n\n # Respect DATADOG_* environment variables in global tracer configuration\n # TODO: these variables are deprecated; use utils method and update our documentation\n # correct prefix should be DD_*\n hostname = os.environ.get(\"DD_AGENT_HOST\", os.environ.get(\"DATADOG_TRACE_AGENT_HOSTNAME\"))\n port = os.environ.get(\"DATADOG_TRACE_AGENT_PORT\")\n priority_sampling = os.environ.get(\"DATADOG_PRIORITY_SAMPLING\")\n profiling = asbool(os.environ.get(\"DD_PROFILING_ENABLED\", False))\n\n if profiling:\n import ddtrace.profiling.auto # noqa: F401\n\n opts = {}\n\n if asbool(os.environ.get(\"DATADOG_TRACE_ENABLED\", True)):\n patch = True\n else:\n patch = False\n opts[\"enabled\"] = False\n\n if hostname:\n opts[\"hostname\"] = hostname\n if port:\n opts[\"port\"] = int(port)\n if priority_sampling:\n opts[\"priority_sampling\"] = asbool(priority_sampling)\n\n opts[\"collect_metrics\"] = asbool(get_env(\"runtime_metrics\", \"enabled\"))\n\n if opts:\n tracer.configure(**opts)\n\n if patch:\n update_patched_modules()\n from ddtrace import patch_all\n\n patch_all(**EXTRA_PATCHED_MODULES)\n\n if \"DATADOG_ENV\" in os.environ:\n tracer.set_tags({constants.ENV_KEY: os.environ[\"DATADOG_ENV\"]})\n\n if \"DD_TRACE_GLOBAL_TAGS\" in os.environ:\n env_tags = os.getenv(\"DD_TRACE_GLOBAL_TAGS\")\n tracer.set_tags(parse_tags_str(env_tags))\n\n # Ensure sitecustomize.py is properly called if available in application directories:\n # * exclude `bootstrap_dir` from the search\n # * find a user `sitecustomize.py` module\n # * import that module via `imp`\n bootstrap_dir = os.path.dirname(__file__)\n path = 
list(sys.path)\n\n if bootstrap_dir in path:\n path.remove(bootstrap_dir)\n\n try:\n (f, path, description) = imp.find_module(\"sitecustomize\", path)\n except ImportError:\n pass\n else:\n # `sitecustomize.py` found, load it\n log.debug(\"sitecustomize from user found in: %s\", path)\n imp.load_module(\"sitecustomize\", f, path, description)\n\n # Loading status used in tests to detect if the `sitecustomize` has been\n # properly loaded without exceptions. This must be the last action in the module\n # when the execution ends with a success.\n loaded = True\nexcept Exception:\n loaded = False\n log.warning(\"error configuring Datadog tracing\", exc_info=True)\n", "path": "ddtrace/bootstrap/sitecustomize.py"}], "after_files": [{"content": "\"\"\"\nBootstrapping code that is run when using the `ddtrace-run` Python entrypoint\nAdd all monkey-patching that needs to run by default here\n\"\"\"\nimport logging\nimport os\nimport sys\n\nfrom ddtrace.utils.formats import asbool, get_env, parse_tags_str\nfrom ddtrace.internal.logger import get_logger\nfrom ddtrace import config, constants\nfrom ddtrace.tracer import debug_mode, DD_LOG_FORMAT\n\n\nif config.logs_injection:\n # immediately patch logging if trace id injected\n from ddtrace import patch\n\n patch(logging=True)\n\n\n# DEV: Once basicConfig is called here, future calls to it cannot be used to\n# change the formatter since it applies the formatter to the root handler only\n# upon initializing it the first time.\n# See https://github.com/python/cpython/blob/112e4afd582515fcdcc0cde5012a4866e5cfda12/Lib/logging/__init__.py#L1550\n# Debug mode from the tracer will do a basicConfig so only need to do this otherwise\nif not debug_mode:\n if config.logs_injection:\n logging.basicConfig(format=DD_LOG_FORMAT)\n else:\n logging.basicConfig()\n\nlog = get_logger(__name__)\n\nEXTRA_PATCHED_MODULES = {\n \"bottle\": True,\n \"django\": True,\n \"falcon\": True,\n \"flask\": True,\n \"pylons\": True,\n \"pyramid\": True,\n}\n\n\ndef update_patched_modules():\n modules_to_patch = os.environ.get(\"DATADOG_PATCH_MODULES\")\n if not modules_to_patch:\n return\n\n modules = parse_tags_str(modules_to_patch)\n for module, should_patch in modules.items():\n EXTRA_PATCHED_MODULES[module] = asbool(should_patch)\n\n\ntry:\n from ddtrace import tracer\n\n # Respect DATADOG_* environment variables in global tracer configuration\n # TODO: these variables are deprecated; use utils method and update our documentation\n # correct prefix should be DD_*\n hostname = os.environ.get(\"DD_AGENT_HOST\", os.environ.get(\"DATADOG_TRACE_AGENT_HOSTNAME\"))\n port = os.environ.get(\"DATADOG_TRACE_AGENT_PORT\")\n priority_sampling = os.environ.get(\"DATADOG_PRIORITY_SAMPLING\")\n profiling = asbool(os.environ.get(\"DD_PROFILING_ENABLED\", False))\n\n if profiling:\n import ddtrace.profiling.auto # noqa: F401\n\n opts = {}\n\n if asbool(os.environ.get(\"DATADOG_TRACE_ENABLED\", True)):\n patch = True\n else:\n patch = False\n opts[\"enabled\"] = False\n\n if hostname:\n opts[\"hostname\"] = hostname\n if port:\n opts[\"port\"] = int(port)\n if priority_sampling:\n opts[\"priority_sampling\"] = asbool(priority_sampling)\n\n opts[\"collect_metrics\"] = asbool(get_env(\"runtime_metrics\", \"enabled\"))\n\n if opts:\n tracer.configure(**opts)\n\n if patch:\n update_patched_modules()\n from ddtrace import patch_all\n\n patch_all(**EXTRA_PATCHED_MODULES)\n\n if \"DATADOG_ENV\" in os.environ:\n tracer.set_tags({constants.ENV_KEY: os.environ[\"DATADOG_ENV\"]})\n\n if 
\"DD_TRACE_GLOBAL_TAGS\" in os.environ:\n env_tags = os.getenv(\"DD_TRACE_GLOBAL_TAGS\")\n tracer.set_tags(parse_tags_str(env_tags))\n\n # Check for and import any sitecustomize that would have normally been used\n # had ddtrace-run not been used.\n bootstrap_dir = os.path.dirname(__file__)\n if bootstrap_dir in sys.path:\n index = sys.path.index(bootstrap_dir)\n del sys.path[index]\n\n # NOTE: this reference to the module is crucial in Python 2.\n # Without it the current module gets gc'd and all subsequent references\n # will be `None`.\n ddtrace_sitecustomize = sys.modules[\"sitecustomize\"]\n del sys.modules[\"sitecustomize\"]\n try:\n import sitecustomize # noqa\n except ImportError:\n # If an additional sitecustomize is not found then put the ddtrace\n # sitecustomize back.\n log.debug(\"additional sitecustomize not found\")\n sys.modules[\"sitecustomize\"] = ddtrace_sitecustomize\n else:\n log.debug(\"additional sitecustomize found in: %s\", sys.path)\n finally:\n # Always reinsert the ddtrace bootstrap directory to the path so\n # that introspection and debugging the application makes sense.\n # Note that this does not interfere with imports since a user\n # sitecustomize, if it exists, will be imported.\n sys.path.insert(index, bootstrap_dir)\n else:\n try:\n import sitecustomize # noqa\n except ImportError:\n log.debug(\"additional sitecustomize not found\")\n else:\n log.debug(\"additional sitecustomize found in: %s\", sys.path)\n\n # Loading status used in tests to detect if the `sitecustomize` has been\n # properly loaded without exceptions. This must be the last action in the module\n # when the execution ends with a success.\n loaded = True\nexcept Exception:\n loaded = False\n log.warning(\"error configuring Datadog tracing\", exc_info=True)\n", "path": "ddtrace/bootstrap/sitecustomize.py"}]}
| 2,030 | 649 |
gh_patches_debug_20168
|
rasdani/github-patches
|
git_diff
|
stephenmcd__mezzanine-1259
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
overextends tag broken in Django 1.7+1.8
Looks like the changes made to `loader_tags.py` in a50de50699bb6a24bfb5f118449991aa7608b426 either didn't work or both Django versions have since changed.
As reported here: https://groups.google.com/d/msg/mezzanine-users/_QWfFVB3RVc/ZirizEV9t2YJ
Just pinging @AlexHill as you might have a heads-up on this one already.
I made a quick attempt by changing `find_template_loader = context.engine.find_template_loader` to `find_template_loader = context.template.engine.find_template_loader`, which appears to work for 1.8, but then other possibly unrelated exceptions came up.
BTW, my quick tip for actually running `overextends` is to modify the first line of `core/templates/admin/base_site.html` to use it instead of `extends`.
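For what it's worth, the kind of version guard that seems to be needed here is roughly the following (a sketch only, using the attribute paths discussed above rather than anything verified against the mezzanine codebase):

```python
# Illustrative version-tolerant lookup, not the actual mezzanine fix.
def get_find_template_loader(context):
    try:
        # Django >= 1.8: the Template being rendered (and its Engine) hang off the context
        return context.template.engine.find_template_loader
    except AttributeError:
        # Django <= 1.7: fall back to the module-level helper
        from django.template.loader import find_template_loader
        return find_template_loader
```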
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mezzanine/template/loader_tags.py`
Content:
```
1 from __future__ import unicode_literals
2 from future.builtins import map
3
4 import os
5
6 from django.template import Template, TemplateSyntaxError, TemplateDoesNotExist
7 from django.template.loader_tags import ExtendsNode
8
9 from mezzanine import template
10
11
12 register = template.Library()
13
14
15 class OverExtendsNode(ExtendsNode):
16 """
17 Allows the template ``foo/bar.html`` to extend ``foo/bar.html``,
18 given that there is another version of it that can be loaded. This
19 allows templates to be created in a project that extend their app
20 template counterparts, or even app templates that extend other app
21 templates with the same relative name/path.
22
23 We use our own version of ``find_template``, that uses an explict
24 list of template directories to search for the template, based on
25 the directories that the known template loaders
26 (``app_directories`` and ``filesystem``) use. This list gets stored
27 in the template context, and each time a template is found, its
28 absolute path gets removed from the list, so that subsequent
29 searches for the same relative name/path can find parent templates
30 in other directories, which allows circular inheritance to occur.
31
32 Django's ``app_directories``, ``filesystem``, and ``cached``
33 loaders are supported. The ``eggs`` loader, and any loader that
34 implements ``load_template_source`` with a source string returned,
35 should also theoretically work.
36 """
37
38 def find_template(self, name, context, peeking=False):
39 """
40 Replacement for Django's ``find_template`` that uses the current
41 template context to keep track of which template directories it
42 has used when finding a template. This allows multiple templates
43 with the same relative name/path to be discovered, so that
44 circular template inheritance can occur.
45 """
46
47 # These imports want settings, which aren't available when this
48 # module is imported to ``add_to_builtins``, so do them here.
49 import django.template.loaders.app_directories as app_directories
50 try:
51 # Django >= 1.8
52 app_template_dirs = app_directories.get_app_template_dirs
53 except AttributeError:
54 # Django <= 1.7
55 app_template_dirs = app_directories.app_template_dirs
56
57 try:
58 # Django >= 1.8
59 find_template_loader = context.engine.find_template_loader
60 except AttributeError:
61 # Django <= 1.7
62 from django.template.loaders import find_template_loader
63
64 from mezzanine.conf import settings
65
66 # Store a dictionary in the template context mapping template
67 # names to the lists of template directories available to
68 # search for that template. Each time a template is loaded, its
69 # origin directory is removed from its directories list.
70 context_name = "OVEREXTENDS_DIRS"
71 if context_name not in context:
72 context[context_name] = {}
73 if name not in context[context_name]:
74 all_dirs = list(settings.TEMPLATE_DIRS) + list(app_template_dirs)
75 # os.path.abspath is needed under uWSGI, and also ensures we
76 # have consistent path separators across different OSes.
77 context[context_name][name] = list(map(os.path.abspath, all_dirs))
78
79 # Build a list of template loaders to use. For loaders that wrap
80 # other loaders like the ``cached`` template loader, unwind its
81 # internal loaders and add those instead.
82 loaders = []
83 for loader_name in settings.TEMPLATE_LOADERS:
84 loader = find_template_loader(loader_name)
85 loaders.extend(getattr(loader, "loaders", [loader]))
86
87 # Go through the loaders and try to find the template. When
88 # found, removed its absolute path from the context dict so
89 # that it won't be used again when the same relative name/path
90 # is requested.
91 for loader in loaders:
92 dirs = context[context_name][name]
93 try:
94 source, path = loader.load_template_source(name, dirs)
95 except TemplateDoesNotExist:
96 pass
97 else:
98 # Only remove the absolute path for the initial call in
99 # get_parent, and not when we're peeking during the
100 # second call.
101 if not peeking:
102 remove_path = os.path.abspath(path[:-len(name) - 1])
103 context[context_name][name].remove(remove_path)
104 return Template(source)
105 raise TemplateDoesNotExist(name)
106
107 def get_parent(self, context):
108 """
109 Load the parent template using our own ``find_template``, which
110 will cause its absolute path to not be used again. Then peek at
111 the first node, and if its parent arg is the same as the
112 current parent arg, we know circular inheritance is going to
113 occur, in which case we try and find the template again, with
114 the absolute directory removed from the search list.
115 """
116 parent = self.parent_name.resolve(context)
117 # If parent is a template object, just return it.
118 if hasattr(parent, "render"):
119 return parent
120 template = self.find_template(parent, context)
121 for node in template.nodelist:
122 if (isinstance(node, ExtendsNode) and
123 node.parent_name.resolve(context) == parent):
124 return self.find_template(parent, context, peeking=True)
125 return template
126
127
128 @register.tag
129 def overextends(parser, token):
130 """
131 Extended version of Django's ``extends`` tag that allows circular
132 inheritance to occur, eg a template can both be overridden and
133 extended at once.
134 """
135 bits = token.split_contents()
136 if len(bits) != 2:
137 raise TemplateSyntaxError("'%s' takes one argument" % bits[0])
138 parent_name = parser.compile_filter(bits[1])
139 nodelist = parser.parse()
140 if nodelist.get_nodes_by_type(ExtendsNode):
141 raise TemplateSyntaxError("'%s' cannot appear more than once "
142 "in the same template" % bits[0])
143 return OverExtendsNode(nodelist, parent_name, None)
144
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/mezzanine/template/loader_tags.py b/mezzanine/template/loader_tags.py
--- a/mezzanine/template/loader_tags.py
+++ b/mezzanine/template/loader_tags.py
@@ -49,17 +49,18 @@
import django.template.loaders.app_directories as app_directories
try:
# Django >= 1.8
- app_template_dirs = app_directories.get_app_template_dirs
+ get_app_template_dirs = app_directories.get_app_template_dirs
+ app_template_dirs = get_app_template_dirs('templates')
except AttributeError:
# Django <= 1.7
app_template_dirs = app_directories.app_template_dirs
try:
# Django >= 1.8
- find_template_loader = context.engine.find_template_loader
+ find_template_loader = context.template.engine.find_template_loader
except AttributeError:
# Django <= 1.7
- from django.template.loaders import find_template_loader
+ from django.template.loader import find_template_loader
from mezzanine.conf import settings
|
{"golden_diff": "diff --git a/mezzanine/template/loader_tags.py b/mezzanine/template/loader_tags.py\n--- a/mezzanine/template/loader_tags.py\n+++ b/mezzanine/template/loader_tags.py\n@@ -49,17 +49,18 @@\n import django.template.loaders.app_directories as app_directories\n try:\n # Django >= 1.8\n- app_template_dirs = app_directories.get_app_template_dirs\n+ get_app_template_dirs = app_directories.get_app_template_dirs\n+ app_template_dirs = get_app_template_dirs('templates')\n except AttributeError:\n # Django <= 1.7\n app_template_dirs = app_directories.app_template_dirs\n \n try:\n # Django >= 1.8\n- find_template_loader = context.engine.find_template_loader\n+ find_template_loader = context.template.engine.find_template_loader\n except AttributeError:\n # Django <= 1.7\n- from django.template.loaders import find_template_loader\n+ from django.template.loader import find_template_loader\n \n from mezzanine.conf import settings\n", "issue": "overextends tag broken in Django 1.7+1.8\nLooks like the changes made to `loader_tags.py` in a50de50699bb6a24bfb5f118449991aa7608b426 either didn't work or both Django versions have since changed.\n\nAs reported here: https://groups.google.com/d/msg/mezzanine-users/_QWfFVB3RVc/ZirizEV9t2YJ\n\nJust pinging @AlexHill as you might have a head's up on this one already. \n\nI made a quick attempt by changing `find_template_loader = context.engine.find_template_loader` to `find_template_loader = context.engine.find_template_loader` which appears to work for 1.8, but then other possibly unrelated exceptions came up.\n\nBTW my quick tip for actually running `overextends` is to modify the first line of `core/templates/admin/base_site.html` to use it instead of `extends`\n\n", "before_files": [{"content": "from __future__ import unicode_literals\nfrom future.builtins import map\n\nimport os\n\nfrom django.template import Template, TemplateSyntaxError, TemplateDoesNotExist\nfrom django.template.loader_tags import ExtendsNode\n\nfrom mezzanine import template\n\n\nregister = template.Library()\n\n\nclass OverExtendsNode(ExtendsNode):\n \"\"\"\n Allows the template ``foo/bar.html`` to extend ``foo/bar.html``,\n given that there is another version of it that can be loaded. This\n allows templates to be created in a project that extend their app\n template counterparts, or even app templates that extend other app\n templates with the same relative name/path.\n\n We use our own version of ``find_template``, that uses an explict\n list of template directories to search for the template, based on\n the directories that the known template loaders\n (``app_directories`` and ``filesystem``) use. This list gets stored\n in the template context, and each time a template is found, its\n absolute path gets removed from the list, so that subsequent\n searches for the same relative name/path can find parent templates\n in other directories, which allows circular inheritance to occur.\n\n Django's ``app_directories``, ``filesystem``, and ``cached``\n loaders are supported. The ``eggs`` loader, and any loader that\n implements ``load_template_source`` with a source string returned,\n should also theoretically work.\n \"\"\"\n\n def find_template(self, name, context, peeking=False):\n \"\"\"\n Replacement for Django's ``find_template`` that uses the current\n template context to keep track of which template directories it\n has used when finding a template. 
This allows multiple templates\n with the same relative name/path to be discovered, so that\n circular template inheritance can occur.\n \"\"\"\n\n # These imports want settings, which aren't available when this\n # module is imported to ``add_to_builtins``, so do them here.\n import django.template.loaders.app_directories as app_directories\n try:\n # Django >= 1.8\n app_template_dirs = app_directories.get_app_template_dirs\n except AttributeError:\n # Django <= 1.7\n app_template_dirs = app_directories.app_template_dirs\n\n try:\n # Django >= 1.8\n find_template_loader = context.engine.find_template_loader\n except AttributeError:\n # Django <= 1.7\n from django.template.loaders import find_template_loader\n\n from mezzanine.conf import settings\n\n # Store a dictionary in the template context mapping template\n # names to the lists of template directories available to\n # search for that template. Each time a template is loaded, its\n # origin directory is removed from its directories list.\n context_name = \"OVEREXTENDS_DIRS\"\n if context_name not in context:\n context[context_name] = {}\n if name not in context[context_name]:\n all_dirs = list(settings.TEMPLATE_DIRS) + list(app_template_dirs)\n # os.path.abspath is needed under uWSGI, and also ensures we\n # have consistent path separators across different OSes.\n context[context_name][name] = list(map(os.path.abspath, all_dirs))\n\n # Build a list of template loaders to use. For loaders that wrap\n # other loaders like the ``cached`` template loader, unwind its\n # internal loaders and add those instead.\n loaders = []\n for loader_name in settings.TEMPLATE_LOADERS:\n loader = find_template_loader(loader_name)\n loaders.extend(getattr(loader, \"loaders\", [loader]))\n\n # Go through the loaders and try to find the template. When\n # found, removed its absolute path from the context dict so\n # that it won't be used again when the same relative name/path\n # is requested.\n for loader in loaders:\n dirs = context[context_name][name]\n try:\n source, path = loader.load_template_source(name, dirs)\n except TemplateDoesNotExist:\n pass\n else:\n # Only remove the absolute path for the initial call in\n # get_parent, and not when we're peeking during the\n # second call.\n if not peeking:\n remove_path = os.path.abspath(path[:-len(name) - 1])\n context[context_name][name].remove(remove_path)\n return Template(source)\n raise TemplateDoesNotExist(name)\n\n def get_parent(self, context):\n \"\"\"\n Load the parent template using our own ``find_template``, which\n will cause its absolute path to not be used again. 
Then peek at\n the first node, and if its parent arg is the same as the\n current parent arg, we know circular inheritance is going to\n occur, in which case we try and find the template again, with\n the absolute directory removed from the search list.\n \"\"\"\n parent = self.parent_name.resolve(context)\n # If parent is a template object, just return it.\n if hasattr(parent, \"render\"):\n return parent\n template = self.find_template(parent, context)\n for node in template.nodelist:\n if (isinstance(node, ExtendsNode) and\n node.parent_name.resolve(context) == parent):\n return self.find_template(parent, context, peeking=True)\n return template\n\n\[email protected]\ndef overextends(parser, token):\n \"\"\"\n Extended version of Django's ``extends`` tag that allows circular\n inheritance to occur, eg a template can both be overridden and\n extended at once.\n \"\"\"\n bits = token.split_contents()\n if len(bits) != 2:\n raise TemplateSyntaxError(\"'%s' takes one argument\" % bits[0])\n parent_name = parser.compile_filter(bits[1])\n nodelist = parser.parse()\n if nodelist.get_nodes_by_type(ExtendsNode):\n raise TemplateSyntaxError(\"'%s' cannot appear more than once \"\n \"in the same template\" % bits[0])\n return OverExtendsNode(nodelist, parent_name, None)\n", "path": "mezzanine/template/loader_tags.py"}], "after_files": [{"content": "from __future__ import unicode_literals\nfrom future.builtins import map\n\nimport os\n\nfrom django.template import Template, TemplateSyntaxError, TemplateDoesNotExist\nfrom django.template.loader_tags import ExtendsNode\n\nfrom mezzanine import template\n\n\nregister = template.Library()\n\n\nclass OverExtendsNode(ExtendsNode):\n \"\"\"\n Allows the template ``foo/bar.html`` to extend ``foo/bar.html``,\n given that there is another version of it that can be loaded. This\n allows templates to be created in a project that extend their app\n template counterparts, or even app templates that extend other app\n templates with the same relative name/path.\n\n We use our own version of ``find_template``, that uses an explict\n list of template directories to search for the template, based on\n the directories that the known template loaders\n (``app_directories`` and ``filesystem``) use. This list gets stored\n in the template context, and each time a template is found, its\n absolute path gets removed from the list, so that subsequent\n searches for the same relative name/path can find parent templates\n in other directories, which allows circular inheritance to occur.\n\n Django's ``app_directories``, ``filesystem``, and ``cached``\n loaders are supported. The ``eggs`` loader, and any loader that\n implements ``load_template_source`` with a source string returned,\n should also theoretically work.\n \"\"\"\n\n def find_template(self, name, context, peeking=False):\n \"\"\"\n Replacement for Django's ``find_template`` that uses the current\n template context to keep track of which template directories it\n has used when finding a template. 
This allows multiple templates\n with the same relative name/path to be discovered, so that\n circular template inheritance can occur.\n \"\"\"\n\n # These imports want settings, which aren't available when this\n # module is imported to ``add_to_builtins``, so do them here.\n import django.template.loaders.app_directories as app_directories\n try:\n # Django >= 1.8\n get_app_template_dirs = app_directories.get_app_template_dirs\n app_template_dirs = get_app_template_dirs('templates')\n except AttributeError:\n # Django <= 1.7\n app_template_dirs = app_directories.app_template_dirs\n\n try:\n # Django >= 1.8\n find_template_loader = context.template.engine.find_template_loader\n except AttributeError:\n # Django <= 1.7\n from django.template.loader import find_template_loader\n\n from mezzanine.conf import settings\n\n # Store a dictionary in the template context mapping template\n # names to the lists of template directories available to\n # search for that template. Each time a template is loaded, its\n # origin directory is removed from its directories list.\n context_name = \"OVEREXTENDS_DIRS\"\n if context_name not in context:\n context[context_name] = {}\n if name not in context[context_name]:\n all_dirs = list(settings.TEMPLATE_DIRS) + list(app_template_dirs)\n # os.path.abspath is needed under uWSGI, and also ensures we\n # have consistent path separators across different OSes.\n context[context_name][name] = list(map(os.path.abspath, all_dirs))\n\n # Build a list of template loaders to use. For loaders that wrap\n # other loaders like the ``cached`` template loader, unwind its\n # internal loaders and add those instead.\n loaders = []\n for loader_name in settings.TEMPLATE_LOADERS:\n loader = find_template_loader(loader_name)\n loaders.extend(getattr(loader, \"loaders\", [loader]))\n\n # Go through the loaders and try to find the template. When\n # found, removed its absolute path from the context dict so\n # that it won't be used again when the same relative name/path\n # is requested.\n for loader in loaders:\n dirs = context[context_name][name]\n try:\n source, path = loader.load_template_source(name, dirs)\n except TemplateDoesNotExist:\n pass\n else:\n # Only remove the absolute path for the initial call in\n # get_parent, and not when we're peeking during the\n # second call.\n if not peeking:\n remove_path = os.path.abspath(path[:-len(name) - 1])\n context[context_name][name].remove(remove_path)\n return Template(source)\n raise TemplateDoesNotExist(name)\n\n def get_parent(self, context):\n \"\"\"\n Load the parent template using our own ``find_template``, which\n will cause its absolute path to not be used again. 
Then peek at\n the first node, and if its parent arg is the same as the\n current parent arg, we know circular inheritance is going to\n occur, in which case we try and find the template again, with\n the absolute directory removed from the search list.\n \"\"\"\n parent = self.parent_name.resolve(context)\n # If parent is a template object, just return it.\n if hasattr(parent, \"render\"):\n return parent\n template = self.find_template(parent, context)\n for node in template.nodelist:\n if (isinstance(node, ExtendsNode) and\n node.parent_name.resolve(context) == parent):\n return self.find_template(parent, context, peeking=True)\n return template\n\n\[email protected]\ndef overextends(parser, token):\n \"\"\"\n Extended version of Django's ``extends`` tag that allows circular\n inheritance to occur, eg a template can both be overridden and\n extended at once.\n \"\"\"\n bits = token.split_contents()\n if len(bits) != 2:\n raise TemplateSyntaxError(\"'%s' takes one argument\" % bits[0])\n parent_name = parser.compile_filter(bits[1])\n nodelist = parser.parse()\n if nodelist.get_nodes_by_type(ExtendsNode):\n raise TemplateSyntaxError(\"'%s' cannot appear more than once \"\n \"in the same template\" % bits[0])\n return OverExtendsNode(nodelist, parent_name, None)\n", "path": "mezzanine/template/loader_tags.py"}]}
| 2,095 | 229 |
gh_patches_debug_27770
|
rasdani/github-patches
|
git_diff
|
vyperlang__vyper-2081
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Documentation Updates Megaissue
While working on #1915 I've run into some areas where the documentation is lacking. This issue is a list of topics that I think need work. It may change over time.
- [x] `public` and `constant` as methods applied to storage variables
- [x] `self`
- [x] assignment
- [x] statements, expressions, control structure
- [x] scoping rules
- [x] `for` loops
- [x] tuples
- [x] contract objects
- [ ] memory layout of data types
- [ ] pass-by-reference / pass-by-value
- [ ] abi format
- [x] arithmetic functions (should be moved from types to builtin functions)
- [x] allowable literals for each type
- [x] examples for each of the builtin functions
- [x] `__init__` method
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 #!/usr/bin/env python3
2 # -*- coding: utf-8 -*-
3 #
4 # Vyper documentation build configuration file, created by
5 # sphinx-quickstart on Wed Jul 26 11:18:29 2017.
6 #
7 # This file is execfile()d with the current directory set to its
8 # containing dir.
9 #
10 # Note that not all possible configuration values are present in this
11 # autogenerated file.
12 #
13 # All configuration values have a default; values that are commented out
14 # serve to show the default.
15
16 # If extensions (or modules to document with autodoc) are in another directory,
17 # add these directories to sys.path here. If the directory is relative to the
18 # documentation root, use os.path.abspath to make it absolute, like shown here.
19 #
20 # import os
21 # import sys
22 # sys.path.insert(0, os.path.abspath('.'))
23 from recommonmark.parser import CommonMarkParser
24
25 # TO DO - Create and Implement Vyper Lexer
26 # def setup(sphinx):
27 # sys.path.insert(0, os.path.abspath('./utils'))
28 # from SolidityLexer import SolidityLexer
29 # sphinx.add_lexer('Python', SolidityLexer())
30
31
32 # -- General configuration ------------------------------------------------
33
34 # If your documentation needs a minimal Sphinx version, state it here.
35 #
36 # needs_sphinx = '1.0'
37
38 # Add any Sphinx extension module names here, as strings. They can be
39 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
40 # ones.
41 extensions = [
42 "sphinx.ext.autodoc",
43 "sphinx.ext.intersphinx",
44 ]
45
46 # Add any paths that contain templates here, relative to this directory.
47 templates_path = ["_templates"]
48
49 # The suffix(es) of source filenames.
50 # You can specify multiple suffix as a list of string:
51 #
52 # source_suffix = ['.rst', '.md']
53 source_suffix = ".rst"
54
55 # The master toctree document.
56 master_doc = "index"
57
58 # General information about the project.
59 project = "Vyper"
60 copyright = "2017-2020 CC-BY-4.0 Vyper Team"
61 author = "Vyper Team (originally created by Vitalik Buterin)"
62
63 # The version info for the project you're documenting, acts as replacement for
64 # |version| and |release|, also used in various other places throughout the
65 # built documents.
66 #
67 # The short X.Y version.
68 version = ""
69 # The full version, including alpha/beta/rc tags.
70 release = ""
71
72 # The language for content autogenerated by Sphinx. Refer to documentation
73 # for a list of supported languages.
74 #
75 # This is also used if you do content translation via gettext catalogs.
76 # Usually you set "language" from the command line for these cases.
77 language = "python"
78
79 # List of patterns, relative to source directory, that match files and
80 # directories to ignore when looking for source files.
81 # This patterns also effect to html_static_path and html_extra_path
82 exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
83
84 # The name of the Pygments (syntax highlighting) style to use.
85 pygments_style = "sphinx"
86
87 # If true, `todo` and `todoList` produce output, else they produce nothing.
88 todo_include_todos = False
89
90
91 # -- Options for HTML output ----------------------------------------------
92
93 # The theme to use for HTML and HTML Help pages. See the documentation for
94 # a list of builtin themes.
95 #
96 html_theme = "sphinx_rtd_theme"
97
98 # Theme options are theme-specific and customize the look and feel of a theme
99 # further. For a list of options available for each theme, see the
100 # documentation.
101 #
102 # html_theme_options = {}
103
104 # Add any paths that contain custom static files (such as style sheets) here,
105 # relative to this directory. They are copied after the builtin static files,
106 # so a file named "default.css" will overwrite the builtin "default.css".
107 html_static_path = ["_static"]
108
109 html_css_files = ["css/toggle.css", "css/dark.css"]
110
111 html_js_files = ["js/toggle.js"]
112
113 # Custom sidebar templates, must be a dictionary that maps document names
114 # to template names.
115 #
116 # The default sidebars (for documents that don't match any pattern) are
117 # defined by theme itself. Builtin themes are using these templates by
118 # default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
119 # 'searchbox.html']``.
120 #
121 # html_sidebars = {}
122
123
124 # -- Options for HTMLHelp output ------------------------------------------
125
126 # Output file base name for HTML help builder.
127 htmlhelp_basename = "Vyperdoc"
128
129
130 # -- Options for LaTeX output ---------------------------------------------
131
132 latex_elements = {
133 # The paper size ('letterpaper' or 'a4paper').
134 #
135 # 'papersize': 'letterpaper',
136 # The font size ('10pt', '11pt' or '12pt').
137 #
138 # 'pointsize': '10pt',
139 # Additional stuff for the LaTeX preamble.
140 #
141 # 'preamble': '',
142 # Latex figure (float) alignment
143 #
144 # 'figure_align': 'htbp',
145 }
146
147 # Grouping the document tree into LaTeX files. List of tuples
148 # (source start file, target name, title,
149 # author, documentclass [howto, manual, or own class]).
150 latex_documents = [
151 (
152 master_doc,
153 "Vyper.tex",
154 "Vyper Documentation",
155 "Vyper Team (originally created by Vitalik Buterin)",
156 "manual",
157 ),
158 ]
159
160
161 # -- Options for manual page output ---------------------------------------
162
163 # One entry per manual page. List of tuples
164 # (source start file, name, description, authors, manual section).
165 man_pages = [(master_doc, "vyper", "Vyper Documentation", [author], 1)]
166
167
168 # -- Options for Texinfo output -------------------------------------------
169
170 # Grouping the document tree into Texinfo files. List of tuples
171 # (source start file, target name, title, author,
172 # dir menu entry, description, category)
173 texinfo_documents = [
174 (
175 master_doc,
176 "Vyper",
177 "Vyper Documentation",
178 author,
179 "Vyper",
180 "One line description of project.",
181 "Miscellaneous",
182 ),
183 ]
184
185 source_parsers = {
186 ".md": CommonMarkParser,
187 }
188
189 source_suffix = [".rst", ".md"]
190
191 intersphinx_mapping = {
192 "brownie": ("https://eth-brownie.readthedocs.io/en/stable", None),
193 "pytest": ("https://docs.pytest.org/en/latest/", None),
194 "python": ("https://docs.python.org/3.8/", None),
195 }
196
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -49,11 +49,10 @@
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
-# source_suffix = ['.rst', '.md']
-source_suffix = ".rst"
+source_suffix = [".rst", ".md"]
# The master toctree document.
-master_doc = "index"
+master_doc = "toctree"
# General information about the project.
project = "Vyper"
@@ -110,6 +109,8 @@
html_js_files = ["js/toggle.js"]
+html_logo = "vyper-logo-transparent.svg"
+
# Custom sidebar templates, must be a dictionary that maps document names
# to template names.
#
@@ -129,7 +130,7 @@
# -- Options for LaTeX output ---------------------------------------------
-latex_elements = {
+latex_elements: dict = {
# The paper size ('letterpaper' or 'a4paper').
#
# 'papersize': 'letterpaper',
@@ -186,8 +187,6 @@
".md": CommonMarkParser,
}
-source_suffix = [".rst", ".md"]
-
intersphinx_mapping = {
"brownie": ("https://eth-brownie.readthedocs.io/en/stable", None),
"pytest": ("https://docs.pytest.org/en/latest/", None),
|
{"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -49,11 +49,10 @@\n # The suffix(es) of source filenames.\n # You can specify multiple suffix as a list of string:\n #\n-# source_suffix = ['.rst', '.md']\n-source_suffix = \".rst\"\n+source_suffix = [\".rst\", \".md\"]\n \n # The master toctree document.\n-master_doc = \"index\"\n+master_doc = \"toctree\"\n \n # General information about the project.\n project = \"Vyper\"\n@@ -110,6 +109,8 @@\n \n html_js_files = [\"js/toggle.js\"]\n \n+html_logo = \"vyper-logo-transparent.svg\"\n+\n # Custom sidebar templates, must be a dictionary that maps document names\n # to template names.\n #\n@@ -129,7 +130,7 @@\n \n # -- Options for LaTeX output ---------------------------------------------\n \n-latex_elements = {\n+latex_elements: dict = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n@@ -186,8 +187,6 @@\n \".md\": CommonMarkParser,\n }\n \n-source_suffix = [\".rst\", \".md\"]\n-\n intersphinx_mapping = {\n \"brownie\": (\"https://eth-brownie.readthedocs.io/en/stable\", None),\n \"pytest\": (\"https://docs.pytest.org/en/latest/\", None),\n", "issue": "Documentation Updates Megaissue\nWhile working on #1915 I've run into some areas where the documentation is lacking. This issue is a list of topics that I think need work. It may change over time.\r\n\r\n- [x] `public` and `constant` as methods applied to storage variables\r\n- [x] `self`\r\n- [x] assignment\r\n- [x] statements, expressions, control structure\r\n- [x] scoping rules\r\n- [x] `for` loops\r\n- [x] tuples\r\n- [x] contract objects\r\n- [ ] memory layout of data types\r\n- [ ] pass-by-reference / pass-by-value\r\n- [ ] abi format\r\n- [x] arithmetic functions (should be moved from types to builtin functions)\r\n- [x] allowable literals for each type\r\n- [x] examples for each of the builtin functions\r\n- [x] `__init__` method\n", "before_files": [{"content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n#\n# Vyper documentation build configuration file, created by\n# sphinx-quickstart on Wed Jul 26 11:18:29 2017.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n# import os\n# import sys\n# sys.path.insert(0, os.path.abspath('.'))\nfrom recommonmark.parser import CommonMarkParser\n\n# TO DO - Create and Implement Vyper Lexer\n# def setup(sphinx):\n# sys.path.insert(0, os.path.abspath('./utils'))\n# from SolidityLexer import SolidityLexer\n# sphinx.add_lexer('Python', SolidityLexer())\n\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.intersphinx\",\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = \".rst\"\n\n# The master toctree document.\nmaster_doc = \"index\"\n\n# General information about the project.\nproject = \"Vyper\"\ncopyright = \"2017-2020 CC-BY-4.0 Vyper Team\"\nauthor = \"Vyper Team (originally created by Vitalik Buterin)\"\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = \"\"\n# The full version, including alpha/beta/rc tags.\nrelease = \"\"\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = \"python\"\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This patterns also effect to html_static_path and html_extra_path\nexclude_patterns = [\"_build\", \"Thumbs.db\", \".DS_Store\"]\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = \"sphinx\"\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = False\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"sphinx_rtd_theme\"\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\n# html_theme_options = {}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\nhtml_css_files = [\"css/toggle.css\", \"css/dark.css\"]\n\nhtml_js_files = [\"js/toggle.js\"]\n\n# Custom sidebar templates, must be a dictionary that maps document names\n# to template names.\n#\n# The default sidebars (for documents that don't match any pattern) are\n# defined by theme itself. Builtin themes are using these templates by\n# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',\n# 'searchbox.html']``.\n#\n# html_sidebars = {}\n\n\n# -- Options for HTMLHelp output ------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = \"Vyperdoc\"\n\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. 
List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (\n master_doc,\n \"Vyper.tex\",\n \"Vyper Documentation\",\n \"Vyper Team (originally created by Vitalik Buterin)\",\n \"manual\",\n ),\n]\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(master_doc, \"vyper\", \"Vyper Documentation\", [author], 1)]\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (\n master_doc,\n \"Vyper\",\n \"Vyper Documentation\",\n author,\n \"Vyper\",\n \"One line description of project.\",\n \"Miscellaneous\",\n ),\n]\n\nsource_parsers = {\n \".md\": CommonMarkParser,\n}\n\nsource_suffix = [\".rst\", \".md\"]\n\nintersphinx_mapping = {\n \"brownie\": (\"https://eth-brownie.readthedocs.io/en/stable\", None),\n \"pytest\": (\"https://docs.pytest.org/en/latest/\", None),\n \"python\": (\"https://docs.python.org/3.8/\", None),\n}\n", "path": "docs/conf.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n#\n# Vyper documentation build configuration file, created by\n# sphinx-quickstart on Wed Jul 26 11:18:29 2017.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n# import os\n# import sys\n# sys.path.insert(0, os.path.abspath('.'))\nfrom recommonmark.parser import CommonMarkParser\n\n# TO DO - Create and Implement Vyper Lexer\n# def setup(sphinx):\n# sys.path.insert(0, os.path.abspath('./utils'))\n# from SolidityLexer import SolidityLexer\n# sphinx.add_lexer('Python', SolidityLexer())\n\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.intersphinx\",\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\nsource_suffix = [\".rst\", \".md\"]\n\n# The master toctree document.\nmaster_doc = \"toctree\"\n\n# General information about the project.\nproject = \"Vyper\"\ncopyright = \"2017-2020 CC-BY-4.0 Vyper Team\"\nauthor = \"Vyper Team (originally created by Vitalik Buterin)\"\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = \"\"\n# The full version, including alpha/beta/rc tags.\nrelease = \"\"\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = \"python\"\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This patterns also effect to html_static_path and html_extra_path\nexclude_patterns = [\"_build\", \"Thumbs.db\", \".DS_Store\"]\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = \"sphinx\"\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = False\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"sphinx_rtd_theme\"\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\n# html_theme_options = {}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\nhtml_css_files = [\"css/toggle.css\", \"css/dark.css\"]\n\nhtml_js_files = [\"js/toggle.js\"]\n\nhtml_logo = \"vyper-logo-transparent.svg\"\n\n# Custom sidebar templates, must be a dictionary that maps document names\n# to template names.\n#\n# The default sidebars (for documents that don't match any pattern) are\n# defined by theme itself. Builtin themes are using these templates by\n# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',\n# 'searchbox.html']``.\n#\n# html_sidebars = {}\n\n\n# -- Options for HTMLHelp output ------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = \"Vyperdoc\"\n\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements: dict = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. 
List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (\n master_doc,\n \"Vyper.tex\",\n \"Vyper Documentation\",\n \"Vyper Team (originally created by Vitalik Buterin)\",\n \"manual\",\n ),\n]\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(master_doc, \"vyper\", \"Vyper Documentation\", [author], 1)]\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (\n master_doc,\n \"Vyper\",\n \"Vyper Documentation\",\n author,\n \"Vyper\",\n \"One line description of project.\",\n \"Miscellaneous\",\n ),\n]\n\nsource_parsers = {\n \".md\": CommonMarkParser,\n}\n\nintersphinx_mapping = {\n \"brownie\": (\"https://eth-brownie.readthedocs.io/en/stable\", None),\n \"pytest\": (\"https://docs.pytest.org/en/latest/\", None),\n \"python\": (\"https://docs.python.org/3.8/\", None),\n}\n", "path": "docs/conf.py"}]}
| 2,382 | 325 |
gh_patches_debug_33084
|
rasdani/github-patches
|
git_diff
|
dj-stripe__dj-stripe-1095
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Webhook is not syncing TaxRate model
**Describe the bug**
Whenever I create or modify an existing TaxRate, the webhooks (signals tax_rate.created and tax_rate.updated) are triggered but the changes are not reflected in the database.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a new TaxRate object using the Django shell:
```
import stripe
stripe.TaxRate.create(display_name='name', description='desc', jurisdiction='ES', percentage=21.0, inclusive=False)
```
2. Modify the created TaxRate object using the Stripe Dashboard
**Expected behavior**
Changes should be reflected in the database.
A possible workaround is running: `./manage.py djstripe_sync_models TaxRate`
**Environment**
- dj-stripe version: 2.2.1
- Your Stripe account's default API version: 2019-11-05
- Database: Postgres 10.10
- Python version: 3.6
- Django version: 2.2.7
**Can you reproduce the issue with the latest version of master?**
Yes
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `djstripe/event_handlers.py`
Content:
```
1 """
2 Webhook event handlers for the various models
3
4 Stripe docs for Events: https://stripe.com/docs/api/events
5 Stripe docs for Webhooks: https://stripe.com/docs/webhooks
6
7 TODO: Implement webhook event handlers for all the models that need to
8 respond to webhook events.
9
10 NOTE:
11 Event data is not guaranteed to be in the correct API version format.
12 See #116. When writing a webhook handler, make sure to first
13 re-retrieve the object you wish to process.
14
15 """
16 import logging
17
18 from . import models, webhooks
19 from .enums import SourceType
20 from .utils import convert_tstamp
21
22 logger = logging.getLogger(__name__)
23
24
25 @webhooks.handler("customer")
26 def customer_webhook_handler(event):
27 """Handle updates to customer objects.
28
29 First determines the crud_type and then handles the event if a customer
30 exists locally.
31 As customers are tied to local users, djstripe will not create customers that
32 do not already exist locally.
33
34 Docs and an example customer webhook response:
35 https://stripe.com/docs/api#customer_object
36 """
37 if event.customer:
38 # As customers are tied to local users, djstripe will not create
39 # customers that do not already exist locally.
40 _handle_crud_like_event(
41 target_cls=models.Customer, event=event, crud_exact=True, crud_valid=True
42 )
43
44
45 @webhooks.handler("customer.discount")
46 def customer_discount_webhook_handler(event):
47 """Handle updates to customer discount objects.
48
49 Docs: https://stripe.com/docs/api#discounts
50
51 Because there is no concept of a "Discount" model in dj-stripe (due to the
52 lack of a stripe id on them), this is a little different to the other
53 handlers.
54 """
55
56 crud_type = CrudType.determine(event=event)
57 discount_data = event.data.get("object", {})
58 coupon_data = discount_data.get("coupon", {})
59 customer = event.customer
60
61 if crud_type.created or crud_type.updated:
62 coupon, _ = _handle_crud_like_event(
63 target_cls=models.Coupon,
64 event=event,
65 data=coupon_data,
66 id=coupon_data.get("id"),
67 )
68 coupon_start = discount_data.get("start")
69 coupon_end = discount_data.get("end")
70 else:
71 coupon = None
72 coupon_start = None
73 coupon_end = None
74
75 customer.coupon = coupon
76 customer.coupon_start = convert_tstamp(coupon_start)
77 customer.coupon_end = convert_tstamp(coupon_end)
78 customer.save()
79
80
81 @webhooks.handler("customer.source")
82 def customer_source_webhook_handler(event):
83 """Handle updates to customer payment-source objects.
84
85 Docs: https://stripe.com/docs/api#customer_object-sources.
86 """
87 customer_data = event.data.get("object", {})
88 source_type = customer_data.get("object", {})
89
90 # TODO: handle other types of sources
91 # (https://stripe.com/docs/api#customer_object-sources)
92 if source_type == SourceType.card:
93 if event.verb.endswith("deleted") and customer_data:
94 # On customer.source.deleted, we do not delete the object,
95 # we merely unlink it.
96 # customer = Customer.objects.get(id=customer_data["id"])
97 # NOTE: for now, customer.sources still points to Card
98 # Also, https://github.com/dj-stripe/dj-stripe/issues/576
99 models.Card.objects.filter(id=customer_data.get("id", "")).delete()
100 models.DjstripePaymentMethod.objects.filter(
101 id=customer_data.get("id", "")
102 ).delete()
103 else:
104 _handle_crud_like_event(target_cls=models.Card, event=event)
105
106
107 @webhooks.handler("customer.subscription")
108 def customer_subscription_webhook_handler(event):
109 """Handle updates to customer subscription objects.
110
111 Docs an example subscription webhook response:
112 https://stripe.com/docs/api#subscription_object
113 """
114
115 # customer.subscription.deleted doesn't actually delete the subscription
116 # on the stripe side, it updates it to canceled status, so override
117 # crud_type to update to match.
118 crud_type = CrudType.determine(event=event)
119 if crud_type.deleted:
120 crud_type = CrudType(updated=True)
121 _handle_crud_like_event(
122 target_cls=models.Subscription, event=event, crud_type=crud_type
123 )
124
125
126 @webhooks.handler("payment_method")
127 def payment_method_handler(event):
128 """
129 Handle updates to payment_method objects
130 :param event:
131 :return:
132
133 Docs for:
134 - payment_method: https://stripe.com/docs/api/payment_methods
135 """
136 id_ = event.data.get("object", {}).get("id", None)
137
138 if (
139 event.parts == ["payment_method", "detached"]
140 and id_
141 and id_.startswith("card_")
142 ):
143 # Special case to handle a quirk in stripe's wrapping of legacy "card" objects
144 # with payment_methods - card objects are deleted on detach, so treat this as
145 # a delete event
146 _handle_crud_like_event(
147 target_cls=models.PaymentMethod,
148 event=event,
149 crud_type=CrudType(deleted=True),
150 )
151 else:
152 _handle_crud_like_event(target_cls=models.PaymentMethod, event=event)
153
154
155 @webhooks.handler(
156 "transfer",
157 "charge",
158 "coupon",
159 "invoice",
160 "invoiceitem",
161 "payment_intent",
162 "plan",
163 "product",
164 "setup_intent",
165 "source",
166 )
167 def other_object_webhook_handler(event):
168 """
169 Handle updates to transfer, charge, coupon, invoice, invoiceitem, payment_intent,
170 plan, product, setup_intent and source objects.
171
172 Docs for:
173 - charge: https://stripe.com/docs/api#charges
174 - coupon: https://stripe.com/docs/api#coupons
175 - invoice: https://stripe.com/docs/api#invoices
176 - invoiceitem: https://stripe.com/docs/api#invoiceitems
177 - plan: https://stripe.com/docs/api#plans
178 - product: https://stripe.com/docs/api#products
179 - source: https://stripe.com/docs/api#sources
180 - payment_intent: https://stripe.com/docs/api/payment_intents
181 """
182
183 if event.parts[:2] == ["charge", "dispute"]:
184 # Do not attempt to handle charge.dispute.* events.
185 # We do not have a Dispute model yet.
186 target_cls = models.Dispute
187 else:
188 target_cls = {
189 "charge": models.Charge,
190 "coupon": models.Coupon,
191 "invoice": models.Invoice,
192 "invoiceitem": models.InvoiceItem,
193 "payment_intent": models.PaymentIntent,
194 "plan": models.Plan,
195 "product": models.Product,
196 "transfer": models.Transfer,
197 "setup_intent": models.SetupIntent,
198 "source": models.Source,
199 }.get(event.category)
200
201 _handle_crud_like_event(target_cls=target_cls, event=event)
202
203
204 #
205 # Helpers
206 #
207
208
209 class CrudType(object):
210 """Helper object to determine CRUD-like event state."""
211
212 created = False
213 updated = False
214 deleted = False
215
216 def __init__(self, **kwargs):
217 """Set attributes."""
218 for k, v in kwargs.items():
219 setattr(self, k, v)
220
221 @property
222 def valid(self):
223 """Return True if this is a CRUD-like event."""
224 return self.created or self.updated or self.deleted
225
226 @classmethod
227 def determine(cls, event, verb=None, exact=False):
228 """
229 Determine if the event verb is a crud_type (without the 'R') event.
230
231 :param event:
232 :type event: models.Event
233 :param verb: The event verb to examine.
234 :type verb: str
235 :param exact: If True, match crud_type to event verb string exactly.
236 :type exact: bool
237 :returns: The CrudType state object.
238 :rtype: CrudType
239 """
240 verb = verb or event.verb
241
242 def check(crud_type_event):
243 if exact:
244 return verb == crud_type_event
245 else:
246 return verb.endswith(crud_type_event)
247
248 created = updated = deleted = False
249
250 if check("updated"):
251 updated = True
252 elif check("created"):
253 created = True
254 elif check("deleted"):
255 deleted = True
256
257 return cls(created=created, updated=updated, deleted=deleted)
258
259
260 def _handle_crud_like_event(
261 target_cls,
262 event,
263 data=None,
264 verb=None,
265 id=None,
266 customer=None,
267 crud_type=None,
268 crud_exact=False,
269 crud_valid=False,
270 ):
271 """
272 Helper to process crud_type-like events for objects.
273
274 Non-deletes (creates, updates and "anything else" events) are treated as
275 update_or_create events - The object will be retrieved locally, then it is
276 synchronised with the Stripe API for parity.
277
278 Deletes only occur for delete events and cause the object to be deleted
279 from the local database, if it existed. If it doesn't exist then it is
280 ignored (but the event processing still succeeds).
281
282 :param target_cls: The djstripe model being handled.
283 :type target_cls: Type[models.StripeModel]
284 :param event: The event object
285 :type event: models.Event
286 :param data: The event object data (defaults to ``event.data``).
287 :param verb: The event verb (defaults to ``event.verb``).
288 :type verb: str
289 :param id: The object Stripe ID (defaults to ``object.id``).
290 :type id: str
291 :param customer: The customer object (defaults to ``event.customer``).
292 :param crud_type: The CrudType object (determined by default).
293 :param crud_exact: If True, match verb against CRUD type exactly.
294 :param crud_valid: If True, CRUD type must match valid type.
295 :returns: The object (if any) and the event CrudType.
296 :rtype: Tuple[models.StripeModel, CrudType]
297 """
298 data = data or event.data
299 id = id or data.get("object", {}).get("id", None)
300
301 if not id:
302 # We require an object when applying CRUD-like events, so if there's
303 # no ID the event is ignored/dropped. This happens in events such as
304 # invoice.upcoming, which refer to a future (non-existant) invoice.
305 logger.debug(
306 "Ignoring %r Stripe event without object ID: %r", event.type, event
307 )
308 return
309
310 verb = verb or event.verb
311 customer = customer or event.customer
312 crud_type = crud_type or CrudType.determine(
313 event=event, verb=verb, exact=crud_exact
314 )
315 obj = None
316
317 if crud_valid and not crud_type.valid:
318 logger.debug(
319 "Ignoring %r Stripe event without valid CRUD type: %r", event.type, event
320 )
321 return
322
323 if crud_type.deleted:
324 qs = target_cls.objects.filter(id=id)
325 if target_cls is models.Customer and qs.exists():
326 qs.get().purge()
327 else:
328 obj = target_cls.objects.filter(id=id).delete()
329 else:
330 # Any other event type (creates, updates, etc.) - This can apply to
331 # verbs that aren't strictly CRUD but Stripe do intend an update. Such
332 # as invoice.payment_failed.
333 kwargs = {"id": id}
334 if hasattr(target_cls, "customer"):
335 kwargs["customer"] = customer
336 data = target_cls(**kwargs).api_retrieve()
337 obj = target_cls.sync_from_stripe_data(data)
338
339 return obj, crud_type
340
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/djstripe/event_handlers.py b/djstripe/event_handlers.py
--- a/djstripe/event_handlers.py
+++ b/djstripe/event_handlers.py
@@ -153,7 +153,6 @@
@webhooks.handler(
- "transfer",
"charge",
"coupon",
"invoice",
@@ -163,21 +162,26 @@
"product",
"setup_intent",
"source",
+ "tax_rate",
+ "transfer",
)
def other_object_webhook_handler(event):
"""
- Handle updates to transfer, charge, coupon, invoice, invoiceitem, payment_intent,
- plan, product, setup_intent and source objects.
+ Handle updates to charge, coupon, invoice, invoiceitem, payment_intent,
+ plan, product, setup_intent, source, tax_rate and transfer objects.
Docs for:
- - charge: https://stripe.com/docs/api#charges
- - coupon: https://stripe.com/docs/api#coupons
- - invoice: https://stripe.com/docs/api#invoices
- - invoiceitem: https://stripe.com/docs/api#invoiceitems
- - plan: https://stripe.com/docs/api#plans
- - product: https://stripe.com/docs/api#products
- - source: https://stripe.com/docs/api#sources
+ - charge: https://stripe.com/docs/api/charges
+ - coupon: https://stripe.com/docs/api/coupons
+ - invoice: https://stripe.com/docs/api/invoices
+ - invoiceitem: https://stripe.com/docs/api/invoiceitems
- payment_intent: https://stripe.com/docs/api/payment_intents
+ - plan: https://stripe.com/docs/api/plans
+ - product: https://stripe.com/docs/api/products
+ - setup_intent: https://stripe.com/docs/api/setup_intents
+ - source: https://stripe.com/docs/api/sources
+ - tax_rate: https://stripe.com/docs/api/tax_rates/
+ - transfer: https://stripe.com/docs/api/transfers
"""
if event.parts[:2] == ["charge", "dispute"]:
@@ -196,6 +200,7 @@
"transfer": models.Transfer,
"setup_intent": models.SetupIntent,
"source": models.Source,
+ "tax_rate": models.TaxRate,
}.get(event.category)
_handle_crud_like_event(target_cls=target_cls, event=event)
|
{"golden_diff": "diff --git a/djstripe/event_handlers.py b/djstripe/event_handlers.py\n--- a/djstripe/event_handlers.py\n+++ b/djstripe/event_handlers.py\n@@ -153,7 +153,6 @@\n \n \n @webhooks.handler(\n- \"transfer\",\n \"charge\",\n \"coupon\",\n \"invoice\",\n@@ -163,21 +162,26 @@\n \"product\",\n \"setup_intent\",\n \"source\",\n+ \"tax_rate\",\n+ \"transfer\",\n )\n def other_object_webhook_handler(event):\n \"\"\"\n- Handle updates to transfer, charge, coupon, invoice, invoiceitem, payment_intent,\n- plan, product, setup_intent and source objects.\n+ Handle updates to charge, coupon, invoice, invoiceitem, payment_intent,\n+ plan, product, setup_intent, source, tax_rate and transfer objects.\n \n Docs for:\n- - charge: https://stripe.com/docs/api#charges\n- - coupon: https://stripe.com/docs/api#coupons\n- - invoice: https://stripe.com/docs/api#invoices\n- - invoiceitem: https://stripe.com/docs/api#invoiceitems\n- - plan: https://stripe.com/docs/api#plans\n- - product: https://stripe.com/docs/api#products\n- - source: https://stripe.com/docs/api#sources\n+ - charge: https://stripe.com/docs/api/charges\n+ - coupon: https://stripe.com/docs/api/coupons\n+ - invoice: https://stripe.com/docs/api/invoices\n+ - invoiceitem: https://stripe.com/docs/api/invoiceitems\n - payment_intent: https://stripe.com/docs/api/payment_intents\n+ - plan: https://stripe.com/docs/api/plans\n+ - product: https://stripe.com/docs/api/products\n+ - setup_intent: https://stripe.com/docs/api/setup_intents\n+ - source: https://stripe.com/docs/api/sources\n+ - tax_rate: https://stripe.com/docs/api/tax_rates/\n+ - transfer: https://stripe.com/docs/api/transfers\n \"\"\"\n \n if event.parts[:2] == [\"charge\", \"dispute\"]:\n@@ -196,6 +200,7 @@\n \"transfer\": models.Transfer,\n \"setup_intent\": models.SetupIntent,\n \"source\": models.Source,\n+ \"tax_rate\": models.TaxRate,\n }.get(event.category)\n \n _handle_crud_like_event(target_cls=target_cls, event=event)\n", "issue": "Webhook is not syncing TaxRate model\n**Describe the bug**\r\nWhenever I create or modify an existing TaxRate, the webhooks (signals tax_rate.created and tax_rate.updated) are triggered but the changes are not reflected in the database.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n\r\n1. Create a new TaxRate object using the Django shell:\r\n\r\n```\r\nimport stripe\r\nstripe.TaxRate.create(display_name='name', description='desc', jurisdiction='ES', percentage=21.0, inclusive=False)\r\n```\r\n\r\n2. Modify the created TaxRate object using the Stripe Dashboard\r\n\r\n**Expected behavior**\r\nChanges should be reflected in the database.\r\n\r\nA possible workaround is running: `./manage.py djstripe_sync_models TaxRate`\r\n\r\n**Environment**\r\n- dj-stripe version: 2.2.1\r\n- Your Stripe account's default API version: 2019-11-05\r\n- Database: Postgres 10.10\r\n- Python version: 3.6\r\n- Django version: 2.2.7\r\n\r\n**Can you reproduce the issue with the latest version of master?**\r\n\r\nYes\r\n\n", "before_files": [{"content": "\"\"\"\nWebhook event handlers for the various models\n\nStripe docs for Events: https://stripe.com/docs/api/events\nStripe docs for Webhooks: https://stripe.com/docs/webhooks\n\nTODO: Implement webhook event handlers for all the models that need to\n respond to webhook events.\n\nNOTE:\n Event data is not guaranteed to be in the correct API version format.\n See #116. When writing a webhook handler, make sure to first\n re-retrieve the object you wish to process.\n\n\"\"\"\nimport logging\n\nfrom . 
import models, webhooks\nfrom .enums import SourceType\nfrom .utils import convert_tstamp\n\nlogger = logging.getLogger(__name__)\n\n\[email protected](\"customer\")\ndef customer_webhook_handler(event):\n \"\"\"Handle updates to customer objects.\n\n First determines the crud_type and then handles the event if a customer\n exists locally.\n As customers are tied to local users, djstripe will not create customers that\n do not already exist locally.\n\n Docs and an example customer webhook response:\n https://stripe.com/docs/api#customer_object\n \"\"\"\n if event.customer:\n # As customers are tied to local users, djstripe will not create\n # customers that do not already exist locally.\n _handle_crud_like_event(\n target_cls=models.Customer, event=event, crud_exact=True, crud_valid=True\n )\n\n\[email protected](\"customer.discount\")\ndef customer_discount_webhook_handler(event):\n \"\"\"Handle updates to customer discount objects.\n\n Docs: https://stripe.com/docs/api#discounts\n\n Because there is no concept of a \"Discount\" model in dj-stripe (due to the\n lack of a stripe id on them), this is a little different to the other\n handlers.\n \"\"\"\n\n crud_type = CrudType.determine(event=event)\n discount_data = event.data.get(\"object\", {})\n coupon_data = discount_data.get(\"coupon\", {})\n customer = event.customer\n\n if crud_type.created or crud_type.updated:\n coupon, _ = _handle_crud_like_event(\n target_cls=models.Coupon,\n event=event,\n data=coupon_data,\n id=coupon_data.get(\"id\"),\n )\n coupon_start = discount_data.get(\"start\")\n coupon_end = discount_data.get(\"end\")\n else:\n coupon = None\n coupon_start = None\n coupon_end = None\n\n customer.coupon = coupon\n customer.coupon_start = convert_tstamp(coupon_start)\n customer.coupon_end = convert_tstamp(coupon_end)\n customer.save()\n\n\[email protected](\"customer.source\")\ndef customer_source_webhook_handler(event):\n \"\"\"Handle updates to customer payment-source objects.\n\n Docs: https://stripe.com/docs/api#customer_object-sources.\n \"\"\"\n customer_data = event.data.get(\"object\", {})\n source_type = customer_data.get(\"object\", {})\n\n # TODO: handle other types of sources\n # (https://stripe.com/docs/api#customer_object-sources)\n if source_type == SourceType.card:\n if event.verb.endswith(\"deleted\") and customer_data:\n # On customer.source.deleted, we do not delete the object,\n # we merely unlink it.\n # customer = Customer.objects.get(id=customer_data[\"id\"])\n # NOTE: for now, customer.sources still points to Card\n # Also, https://github.com/dj-stripe/dj-stripe/issues/576\n models.Card.objects.filter(id=customer_data.get(\"id\", \"\")).delete()\n models.DjstripePaymentMethod.objects.filter(\n id=customer_data.get(\"id\", \"\")\n ).delete()\n else:\n _handle_crud_like_event(target_cls=models.Card, event=event)\n\n\[email protected](\"customer.subscription\")\ndef customer_subscription_webhook_handler(event):\n \"\"\"Handle updates to customer subscription objects.\n\n Docs an example subscription webhook response:\n https://stripe.com/docs/api#subscription_object\n \"\"\"\n\n # customer.subscription.deleted doesn't actually delete the subscription\n # on the stripe side, it updates it to canceled status, so override\n # crud_type to update to match.\n crud_type = CrudType.determine(event=event)\n if crud_type.deleted:\n crud_type = CrudType(updated=True)\n _handle_crud_like_event(\n target_cls=models.Subscription, event=event, crud_type=crud_type\n )\n\n\[email protected](\"payment_method\")\ndef 
payment_method_handler(event):\n \"\"\"\n Handle updates to payment_method objects\n :param event:\n :return:\n\n Docs for:\n - payment_method: https://stripe.com/docs/api/payment_methods\n \"\"\"\n id_ = event.data.get(\"object\", {}).get(\"id\", None)\n\n if (\n event.parts == [\"payment_method\", \"detached\"]\n and id_\n and id_.startswith(\"card_\")\n ):\n # Special case to handle a quirk in stripe's wrapping of legacy \"card\" objects\n # with payment_methods - card objects are deleted on detach, so treat this as\n # a delete event\n _handle_crud_like_event(\n target_cls=models.PaymentMethod,\n event=event,\n crud_type=CrudType(deleted=True),\n )\n else:\n _handle_crud_like_event(target_cls=models.PaymentMethod, event=event)\n\n\[email protected](\n \"transfer\",\n \"charge\",\n \"coupon\",\n \"invoice\",\n \"invoiceitem\",\n \"payment_intent\",\n \"plan\",\n \"product\",\n \"setup_intent\",\n \"source\",\n)\ndef other_object_webhook_handler(event):\n \"\"\"\n Handle updates to transfer, charge, coupon, invoice, invoiceitem, payment_intent,\n plan, product, setup_intent and source objects.\n\n Docs for:\n - charge: https://stripe.com/docs/api#charges\n - coupon: https://stripe.com/docs/api#coupons\n - invoice: https://stripe.com/docs/api#invoices\n - invoiceitem: https://stripe.com/docs/api#invoiceitems\n - plan: https://stripe.com/docs/api#plans\n - product: https://stripe.com/docs/api#products\n - source: https://stripe.com/docs/api#sources\n - payment_intent: https://stripe.com/docs/api/payment_intents\n \"\"\"\n\n if event.parts[:2] == [\"charge\", \"dispute\"]:\n # Do not attempt to handle charge.dispute.* events.\n # We do not have a Dispute model yet.\n target_cls = models.Dispute\n else:\n target_cls = {\n \"charge\": models.Charge,\n \"coupon\": models.Coupon,\n \"invoice\": models.Invoice,\n \"invoiceitem\": models.InvoiceItem,\n \"payment_intent\": models.PaymentIntent,\n \"plan\": models.Plan,\n \"product\": models.Product,\n \"transfer\": models.Transfer,\n \"setup_intent\": models.SetupIntent,\n \"source\": models.Source,\n }.get(event.category)\n\n _handle_crud_like_event(target_cls=target_cls, event=event)\n\n\n#\n# Helpers\n#\n\n\nclass CrudType(object):\n \"\"\"Helper object to determine CRUD-like event state.\"\"\"\n\n created = False\n updated = False\n deleted = False\n\n def __init__(self, **kwargs):\n \"\"\"Set attributes.\"\"\"\n for k, v in kwargs.items():\n setattr(self, k, v)\n\n @property\n def valid(self):\n \"\"\"Return True if this is a CRUD-like event.\"\"\"\n return self.created or self.updated or self.deleted\n\n @classmethod\n def determine(cls, event, verb=None, exact=False):\n \"\"\"\n Determine if the event verb is a crud_type (without the 'R') event.\n\n :param event:\n :type event: models.Event\n :param verb: The event verb to examine.\n :type verb: str\n :param exact: If True, match crud_type to event verb string exactly.\n :type exact: bool\n :returns: The CrudType state object.\n :rtype: CrudType\n \"\"\"\n verb = verb or event.verb\n\n def check(crud_type_event):\n if exact:\n return verb == crud_type_event\n else:\n return verb.endswith(crud_type_event)\n\n created = updated = deleted = False\n\n if check(\"updated\"):\n updated = True\n elif check(\"created\"):\n created = True\n elif check(\"deleted\"):\n deleted = True\n\n return cls(created=created, updated=updated, deleted=deleted)\n\n\ndef _handle_crud_like_event(\n target_cls,\n event,\n data=None,\n verb=None,\n id=None,\n customer=None,\n crud_type=None,\n crud_exact=False,\n 
crud_valid=False,\n):\n \"\"\"\n Helper to process crud_type-like events for objects.\n\n Non-deletes (creates, updates and \"anything else\" events) are treated as\n update_or_create events - The object will be retrieved locally, then it is\n synchronised with the Stripe API for parity.\n\n Deletes only occur for delete events and cause the object to be deleted\n from the local database, if it existed. If it doesn't exist then it is\n ignored (but the event processing still succeeds).\n\n :param target_cls: The djstripe model being handled.\n :type target_cls: Type[models.StripeModel]\n :param event: The event object\n :type event: models.Event\n :param data: The event object data (defaults to ``event.data``).\n :param verb: The event verb (defaults to ``event.verb``).\n :type verb: str\n :param id: The object Stripe ID (defaults to ``object.id``).\n :type id: str\n :param customer: The customer object (defaults to ``event.customer``).\n :param crud_type: The CrudType object (determined by default).\n :param crud_exact: If True, match verb against CRUD type exactly.\n :param crud_valid: If True, CRUD type must match valid type.\n :returns: The object (if any) and the event CrudType.\n :rtype: Tuple[models.StripeModel, CrudType]\n \"\"\"\n data = data or event.data\n id = id or data.get(\"object\", {}).get(\"id\", None)\n\n if not id:\n # We require an object when applying CRUD-like events, so if there's\n # no ID the event is ignored/dropped. This happens in events such as\n # invoice.upcoming, which refer to a future (non-existant) invoice.\n logger.debug(\n \"Ignoring %r Stripe event without object ID: %r\", event.type, event\n )\n return\n\n verb = verb or event.verb\n customer = customer or event.customer\n crud_type = crud_type or CrudType.determine(\n event=event, verb=verb, exact=crud_exact\n )\n obj = None\n\n if crud_valid and not crud_type.valid:\n logger.debug(\n \"Ignoring %r Stripe event without valid CRUD type: %r\", event.type, event\n )\n return\n\n if crud_type.deleted:\n qs = target_cls.objects.filter(id=id)\n if target_cls is models.Customer and qs.exists():\n qs.get().purge()\n else:\n obj = target_cls.objects.filter(id=id).delete()\n else:\n # Any other event type (creates, updates, etc.) - This can apply to\n # verbs that aren't strictly CRUD but Stripe do intend an update. Such\n # as invoice.payment_failed.\n kwargs = {\"id\": id}\n if hasattr(target_cls, \"customer\"):\n kwargs[\"customer\"] = customer\n data = target_cls(**kwargs).api_retrieve()\n obj = target_cls.sync_from_stripe_data(data)\n\n return obj, crud_type\n", "path": "djstripe/event_handlers.py"}], "after_files": [{"content": "\"\"\"\nWebhook event handlers for the various models\n\nStripe docs for Events: https://stripe.com/docs/api/events\nStripe docs for Webhooks: https://stripe.com/docs/webhooks\n\nTODO: Implement webhook event handlers for all the models that need to\n respond to webhook events.\n\nNOTE:\n Event data is not guaranteed to be in the correct API version format.\n See #116. When writing a webhook handler, make sure to first\n re-retrieve the object you wish to process.\n\n\"\"\"\nimport logging\n\nfrom . 
import models, webhooks\nfrom .enums import SourceType\nfrom .utils import convert_tstamp\n\nlogger = logging.getLogger(__name__)\n\n\[email protected](\"customer\")\ndef customer_webhook_handler(event):\n \"\"\"Handle updates to customer objects.\n\n First determines the crud_type and then handles the event if a customer\n exists locally.\n As customers are tied to local users, djstripe will not create customers that\n do not already exist locally.\n\n Docs and an example customer webhook response:\n https://stripe.com/docs/api#customer_object\n \"\"\"\n if event.customer:\n # As customers are tied to local users, djstripe will not create\n # customers that do not already exist locally.\n _handle_crud_like_event(\n target_cls=models.Customer, event=event, crud_exact=True, crud_valid=True\n )\n\n\[email protected](\"customer.discount\")\ndef customer_discount_webhook_handler(event):\n \"\"\"Handle updates to customer discount objects.\n\n Docs: https://stripe.com/docs/api#discounts\n\n Because there is no concept of a \"Discount\" model in dj-stripe (due to the\n lack of a stripe id on them), this is a little different to the other\n handlers.\n \"\"\"\n\n crud_type = CrudType.determine(event=event)\n discount_data = event.data.get(\"object\", {})\n coupon_data = discount_data.get(\"coupon\", {})\n customer = event.customer\n\n if crud_type.created or crud_type.updated:\n coupon, _ = _handle_crud_like_event(\n target_cls=models.Coupon,\n event=event,\n data=coupon_data,\n id=coupon_data.get(\"id\"),\n )\n coupon_start = discount_data.get(\"start\")\n coupon_end = discount_data.get(\"end\")\n else:\n coupon = None\n coupon_start = None\n coupon_end = None\n\n customer.coupon = coupon\n customer.coupon_start = convert_tstamp(coupon_start)\n customer.coupon_end = convert_tstamp(coupon_end)\n customer.save()\n\n\[email protected](\"customer.source\")\ndef customer_source_webhook_handler(event):\n \"\"\"Handle updates to customer payment-source objects.\n\n Docs: https://stripe.com/docs/api#customer_object-sources.\n \"\"\"\n customer_data = event.data.get(\"object\", {})\n source_type = customer_data.get(\"object\", {})\n\n # TODO: handle other types of sources\n # (https://stripe.com/docs/api#customer_object-sources)\n if source_type == SourceType.card:\n if event.verb.endswith(\"deleted\") and customer_data:\n # On customer.source.deleted, we do not delete the object,\n # we merely unlink it.\n # customer = Customer.objects.get(id=customer_data[\"id\"])\n # NOTE: for now, customer.sources still points to Card\n # Also, https://github.com/dj-stripe/dj-stripe/issues/576\n models.Card.objects.filter(id=customer_data.get(\"id\", \"\")).delete()\n models.DjstripePaymentMethod.objects.filter(\n id=customer_data.get(\"id\", \"\")\n ).delete()\n else:\n _handle_crud_like_event(target_cls=models.Card, event=event)\n\n\[email protected](\"customer.subscription\")\ndef customer_subscription_webhook_handler(event):\n \"\"\"Handle updates to customer subscription objects.\n\n Docs an example subscription webhook response:\n https://stripe.com/docs/api#subscription_object\n \"\"\"\n\n # customer.subscription.deleted doesn't actually delete the subscription\n # on the stripe side, it updates it to canceled status, so override\n # crud_type to update to match.\n crud_type = CrudType.determine(event=event)\n if crud_type.deleted:\n crud_type = CrudType(updated=True)\n _handle_crud_like_event(\n target_cls=models.Subscription, event=event, crud_type=crud_type\n )\n\n\[email protected](\"payment_method\")\ndef 
payment_method_handler(event):\n \"\"\"\n Handle updates to payment_method objects\n :param event:\n :return:\n\n Docs for:\n - payment_method: https://stripe.com/docs/api/payment_methods\n \"\"\"\n id_ = event.data.get(\"object\", {}).get(\"id\", None)\n\n if (\n event.parts == [\"payment_method\", \"detached\"]\n and id_\n and id_.startswith(\"card_\")\n ):\n # Special case to handle a quirk in stripe's wrapping of legacy \"card\" objects\n # with payment_methods - card objects are deleted on detach, so treat this as\n # a delete event\n _handle_crud_like_event(\n target_cls=models.PaymentMethod,\n event=event,\n crud_type=CrudType(deleted=True),\n )\n else:\n _handle_crud_like_event(target_cls=models.PaymentMethod, event=event)\n\n\[email protected](\n \"charge\",\n \"coupon\",\n \"invoice\",\n \"invoiceitem\",\n \"payment_intent\",\n \"plan\",\n \"product\",\n \"setup_intent\",\n \"source\",\n \"tax_rate\",\n \"transfer\",\n)\ndef other_object_webhook_handler(event):\n \"\"\"\n Handle updates to charge, coupon, invoice, invoiceitem, payment_intent,\n plan, product, setup_intent, source, tax_rate and transfer objects.\n\n Docs for:\n - charge: https://stripe.com/docs/api/charges\n - coupon: https://stripe.com/docs/api/coupons\n - invoice: https://stripe.com/docs/api/invoices\n - invoiceitem: https://stripe.com/docs/api/invoiceitems\n - payment_intent: https://stripe.com/docs/api/payment_intents\n - plan: https://stripe.com/docs/api/plans\n - product: https://stripe.com/docs/api/products\n - setup_intent: https://stripe.com/docs/api/setup_intents\n - source: https://stripe.com/docs/api/sources\n - tax_rate: https://stripe.com/docs/api/tax_rates/\n - transfer: https://stripe.com/docs/api/transfers\n \"\"\"\n\n if event.parts[:2] == [\"charge\", \"dispute\"]:\n # Do not attempt to handle charge.dispute.* events.\n # We do not have a Dispute model yet.\n target_cls = models.Dispute\n else:\n target_cls = {\n \"charge\": models.Charge,\n \"coupon\": models.Coupon,\n \"invoice\": models.Invoice,\n \"invoiceitem\": models.InvoiceItem,\n \"payment_intent\": models.PaymentIntent,\n \"plan\": models.Plan,\n \"product\": models.Product,\n \"transfer\": models.Transfer,\n \"setup_intent\": models.SetupIntent,\n \"source\": models.Source,\n \"tax_rate\": models.TaxRate,\n }.get(event.category)\n\n _handle_crud_like_event(target_cls=target_cls, event=event)\n\n\n#\n# Helpers\n#\n\n\nclass CrudType(object):\n \"\"\"Helper object to determine CRUD-like event state.\"\"\"\n\n created = False\n updated = False\n deleted = False\n\n def __init__(self, **kwargs):\n \"\"\"Set attributes.\"\"\"\n for k, v in kwargs.items():\n setattr(self, k, v)\n\n @property\n def valid(self):\n \"\"\"Return True if this is a CRUD-like event.\"\"\"\n return self.created or self.updated or self.deleted\n\n @classmethod\n def determine(cls, event, verb=None, exact=False):\n \"\"\"\n Determine if the event verb is a crud_type (without the 'R') event.\n\n :param event:\n :type event: models.Event\n :param verb: The event verb to examine.\n :type verb: str\n :param exact: If True, match crud_type to event verb string exactly.\n :type exact: bool\n :returns: The CrudType state object.\n :rtype: CrudType\n \"\"\"\n verb = verb or event.verb\n\n def check(crud_type_event):\n if exact:\n return verb == crud_type_event\n else:\n return verb.endswith(crud_type_event)\n\n created = updated = deleted = False\n\n if check(\"updated\"):\n updated = True\n elif check(\"created\"):\n created = True\n elif check(\"deleted\"):\n deleted = 
True\n\n return cls(created=created, updated=updated, deleted=deleted)\n\n\ndef _handle_crud_like_event(\n target_cls,\n event,\n data=None,\n verb=None,\n id=None,\n customer=None,\n crud_type=None,\n crud_exact=False,\n crud_valid=False,\n):\n \"\"\"\n Helper to process crud_type-like events for objects.\n\n Non-deletes (creates, updates and \"anything else\" events) are treated as\n update_or_create events - The object will be retrieved locally, then it is\n synchronised with the Stripe API for parity.\n\n Deletes only occur for delete events and cause the object to be deleted\n from the local database, if it existed. If it doesn't exist then it is\n ignored (but the event processing still succeeds).\n\n :param target_cls: The djstripe model being handled.\n :type target_cls: Type[models.StripeModel]\n :param event: The event object\n :type event: models.Event\n :param data: The event object data (defaults to ``event.data``).\n :param verb: The event verb (defaults to ``event.verb``).\n :type verb: str\n :param id: The object Stripe ID (defaults to ``object.id``).\n :type id: str\n :param customer: The customer object (defaults to ``event.customer``).\n :param crud_type: The CrudType object (determined by default).\n :param crud_exact: If True, match verb against CRUD type exactly.\n :param crud_valid: If True, CRUD type must match valid type.\n :returns: The object (if any) and the event CrudType.\n :rtype: Tuple[models.StripeModel, CrudType]\n \"\"\"\n data = data or event.data\n id = id or data.get(\"object\", {}).get(\"id\", None)\n\n if not id:\n # We require an object when applying CRUD-like events, so if there's\n # no ID the event is ignored/dropped. This happens in events such as\n # invoice.upcoming, which refer to a future (non-existant) invoice.\n logger.debug(\n \"Ignoring %r Stripe event without object ID: %r\", event.type, event\n )\n return\n\n verb = verb or event.verb\n customer = customer or event.customer\n crud_type = crud_type or CrudType.determine(\n event=event, verb=verb, exact=crud_exact\n )\n obj = None\n\n if crud_valid and not crud_type.valid:\n logger.debug(\n \"Ignoring %r Stripe event without valid CRUD type: %r\", event.type, event\n )\n return\n\n if crud_type.deleted:\n qs = target_cls.objects.filter(id=id)\n if target_cls is models.Customer and qs.exists():\n qs.get().purge()\n else:\n obj = target_cls.objects.filter(id=id).delete()\n else:\n # Any other event type (creates, updates, etc.) - This can apply to\n # verbs that aren't strictly CRUD but Stripe do intend an update. Such\n # as invoice.payment_failed.\n kwargs = {\"id\": id}\n if hasattr(target_cls, \"customer\"):\n kwargs[\"customer\"] = customer\n data = target_cls(**kwargs).api_retrieve()\n obj = target_cls.sync_from_stripe_data(data)\n\n return obj, crud_type\n", "path": "djstripe/event_handlers.py"}]}
| 3,945 | 543 |
gh_patches_debug_27499
|
rasdani/github-patches
|
git_diff
|
gammapy__gammapy-4863
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Just importing gammapy raises a warning about ray support being experimental
**Gammapy version**
```
Gammapy support for parallelisation with ray is still a prototype and is not fully functional.
System:
python_executable : /home/maxnoe/.local/conda/envs/gammapy-dev/bin/python3.9
python_version : 3.9.16
machine : x86_64
system : Linux
Gammapy package:
version : 1.2.dev201+g514451881.d20230627
path : /home/maxnoe/Projects/gammapy/gammapy
Other packages:
numpy : 1.25.0
scipy : 1.11.0
astropy : 5.3
regions : 0.7
click : 8.1.3
yaml : 6.0
IPython : 8.14.0
jupyterlab : 3.5.3
matplotlib : 3.7.1
pandas : 2.0.2
healpy : 1.16.2
iminuit : 2.22.0
sherpa : 4.15.1
naima : 0.10.0
emcee : 3.1.4
corner : 2.2.2
ray : 2.5.1
Gammapy environment variables:
GAMMAPY_DATA : /home/maxnoe/Projects/gammapy/gammapy-datasets/dev
```
**Bug description**
Just importing a subpackage of gammapy, without doing anything else, raises a warning about ray support being experimental.
I am not doing anything with ray, I just setup the dev environment and imported things:
```
❯ python -c 'import gammapy.datasets'
Gammapy support for parallelisation with ray is still a prototype and is not fully functional.
❯ python -c 'import gammapy.makers'
Gammapy support for parallelisation with ray is still a prototype and is not fully functional.
```
**Expected behavior**
No warnings about things I don't actually use.
**To Reproduce**
See above, dev environment and the imports.
**Other information**
--- END ISSUE ---
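The warning appears on bare import because the module in question builds its backend table at module scope, calling the ray factory (and its `log.warning`) while Python is still importing the file. A minimal sketch of that pattern and of the lazy alternative, using only the standard library and hypothetical names (not the actual gammapy code):

```python
# import_side_effect_demo.py -- hypothetical module illustrating the pattern.
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger(__name__)


def get_ray_backend():
    # Side effect happens whenever this factory is *called*.
    log.warning("ray support is still a prototype")
    return object()


# Eager: the call runs at import time, so `import import_side_effect_demo`
# alone prints the warning.
EAGER_BACKENDS = {"ray": get_ray_backend()}

# Lazy: store the callable and invoke it only when a caller asks for it.
LAZY_BACKENDS = {"ray": get_ray_backend}
```

With the lazy table, `LAZY_BACKENDS["ray"]()` defers the warning until the backend is actually requested, which is the direction the golden diff below takes.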
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gammapy/utils/parallel.py`
Content:
```
1 # Licensed under a 3-clause BSD style license - see LICENSE.rst
2 """Multiprocessing and multithreading setup"""
3 import importlib
4 import logging
5 import multiprocessing
6 from enum import Enum
7 from gammapy.utils.pbar import progress_bar
8
9 log = logging.getLogger(__name__)
10
11
12 class ParallelBackendEnum(Enum):
13 """Enum for parallel backend"""
14
15 multiprocessing = "multiprocessing"
16 ray = "ray"
17
18 @classmethod
19 def from_str(cls, value):
20 """Get enum from string"""
21 if value is None:
22 value = BACKEND_DEFAULT
23
24 if value == "ray" and not is_ray_available():
25 log.warning("Ray is not installed, falling back to multiprocessing backend")
26 value = "multiprocessing"
27
28 return cls(value)
29
30
31 class PoolMethodEnum(Enum):
32 """Enum for pool method"""
33
34 starmap = "starmap"
35 apply_async = "apply_async"
36
37
38 BACKEND_DEFAULT = ParallelBackendEnum.multiprocessing
39 N_JOBS_DEFAULT = 1
40
41
42 def get_multiprocessing_ray():
43 """Get multiprocessing module for ray backend"""
44 import ray.util.multiprocessing as multiprocessing
45
46 log.warning(
47 "Gammapy support for parallelisation with ray is still a prototype and is not fully functional."
48 )
49 return multiprocessing
50
51
52 def is_ray_initialized():
53 """Check if ray is initialized"""
54 try:
55 from ray import is_initialized
56
57 return is_initialized()
58 except ModuleNotFoundError:
59 return False
60
61
62 def is_ray_available():
63 """Check if ray is available"""
64 try:
65 importlib.import_module("ray")
66 return True
67 except ModuleNotFoundError:
68 return False
69
70
71 class ParallelMixin:
72 """Mixin class to handle parallel processing"""
73
74 @property
75 def n_jobs(self):
76 """Number of jobs (int)"""
77 # TODO: this is somewhat unusual behaviour. It deviates from a normal default value handling
78 if self._n_jobs is None:
79 return N_JOBS_DEFAULT
80
81 return self._n_jobs
82
83 @n_jobs.setter
84 def n_jobs(self, value):
85 """Number of jobs setter (int)"""
86 if not isinstance(value, (int, type(None))):
87 raise ValueError(
88 f"Invalid type: {value!r}, and integer or None is expected."
89 )
90
91 self._n_jobs = value
92
93 @property
94 def parallel_backend(self):
95 """Parallel backend (str)"""
96 if self._parallel_backend is None:
97 return BACKEND_DEFAULT
98
99 return self._parallel_backend
100
101 @parallel_backend.setter
102 def parallel_backend(self, value):
103 """Parallel backend setter (str)"""
104 self._parallel_backend = ParallelBackendEnum.from_str(value).value
105
106
107 def run_multiprocessing(
108 func,
109 inputs,
110 backend=None,
111 pool_kwargs=None,
112 method="starmap",
113 method_kwargs=None,
114 task_name="",
115 ):
116 """Run function in a loop or in Parallel
117
118 Notes
119 -----
120 The progress bar can be displayed for this function.
121
122 Parameters
123 ----------
124 func : function
125 Function to run
126 inputs : list
127 List of arguments to pass to the function
128 backend : {'multiprocessing', 'ray'}
129 Backend to use.
130 pool_kwargs : dict
131 Keyword arguments passed to the pool. The number of processes is limited
132 to the number of physical CPUs.
133 method : {'starmap', 'apply_async'}
134 Pool method to use.
135 method_kwargs : dict
136 Keyword arguments passed to the method
137 task_name : str
138 Name of the task to display in the progress bar
139 """
140 backend = ParallelBackendEnum.from_str(backend)
141
142 if method_kwargs is None:
143 method_kwargs = {}
144
145 if pool_kwargs is None:
146 pool_kwargs = {}
147
148 processes = pool_kwargs.get("processes", N_JOBS_DEFAULT)
149
150 multiprocessing = PARALLEL_BACKEND_MODULES[backend]
151
152 if backend == ParallelBackendEnum.multiprocessing:
153 cpu_count = multiprocessing.cpu_count()
154
155 if processes > cpu_count:
156 log.info(f"Limiting number of processes from {processes} to {cpu_count}")
157 processes = cpu_count
158
159 if multiprocessing.current_process().name != "MainProcess":
160 # subprocesses cannot have childs
161 processes = 1
162 # TODO: check for ray
163
164 if processes == 1:
165 return run_loop(
166 func=func, inputs=inputs, method_kwargs=method_kwargs, task_name=task_name
167 )
168
169 if backend == ParallelBackendEnum.ray:
170 address = "auto" if is_ray_initialized() else None
171 pool_kwargs.setdefault("ray_address", address)
172
173 log.info(f"Using {processes} processes to compute {task_name}")
174
175 with multiprocessing.Pool(**pool_kwargs) as pool:
176 pool_func = POOL_METHODS[PoolMethodEnum(method)]
177 results = pool_func(
178 pool=pool,
179 func=func,
180 inputs=inputs,
181 method_kwargs=method_kwargs,
182 task_name=task_name,
183 )
184
185 return results
186
187
188 def run_loop(func, inputs, method_kwargs=None, task_name=""):
189 """Loop over inputs an run function"""
190 results = []
191
192 callback = method_kwargs.get("callback", None)
193
194 for arguments in progress_bar(inputs, desc=task_name):
195 result = func(*arguments)
196
197 if callback is not None:
198 result = callback(result)
199
200 results.append(result)
201
202 return results
203
204
205 def run_pool_star_map(pool, func, inputs, method_kwargs=None, task_name=""):
206 """Run function in parallel"""
207 return pool.starmap(func, progress_bar(inputs, desc=task_name), **method_kwargs)
208
209
210 def run_pool_async(pool, func, inputs, method_kwargs=None, task_name=""):
211 """Run function in parallel async"""
212 results = []
213
214 for arguments in progress_bar(inputs, desc=task_name):
215 result = pool.apply_async(func, arguments, **method_kwargs)
216 results.append(result)
217 # wait async run is done
218 [result.wait() for result in results]
219 return results
220
221
222 POOL_METHODS = {
223 PoolMethodEnum.starmap: run_pool_star_map,
224 PoolMethodEnum.apply_async: run_pool_async,
225 }
226
227 PARALLEL_BACKEND_MODULES = {
228 ParallelBackendEnum.multiprocessing: multiprocessing,
229 }
230
231 if is_ray_available():
232 PARALLEL_BACKEND_MODULES[ParallelBackendEnum.ray] = get_multiprocessing_ray()
233
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/gammapy/utils/parallel.py b/gammapy/utils/parallel.py
--- a/gammapy/utils/parallel.py
+++ b/gammapy/utils/parallel.py
@@ -2,7 +2,6 @@
"""Multiprocessing and multithreading setup"""
import importlib
import logging
-import multiprocessing
from enum import Enum
from gammapy.utils.pbar import progress_bar
@@ -39,6 +38,13 @@
N_JOBS_DEFAULT = 1
+def get_multiprocessing():
+ """Get multiprocessing module"""
+ import multiprocessing
+
+ return multiprocessing
+
+
def get_multiprocessing_ray():
"""Get multiprocessing module for ray backend"""
import ray.util.multiprocessing as multiprocessing
@@ -147,7 +153,7 @@
processes = pool_kwargs.get("processes", N_JOBS_DEFAULT)
- multiprocessing = PARALLEL_BACKEND_MODULES[backend]
+ multiprocessing = PARALLEL_BACKEND_MODULES[backend]()
if backend == ParallelBackendEnum.multiprocessing:
cpu_count = multiprocessing.cpu_count()
@@ -225,8 +231,6 @@
}
PARALLEL_BACKEND_MODULES = {
- ParallelBackendEnum.multiprocessing: multiprocessing,
+ ParallelBackendEnum.multiprocessing: get_multiprocessing,
+ ParallelBackendEnum.ray: get_multiprocessing_ray,
}
-
-if is_ray_available():
- PARALLEL_BACKEND_MODULES[ParallelBackendEnum.ray] = get_multiprocessing_ray()
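With the patch, the table stores factories instead of modules, so nothing ray-related (and no warning) runs until a backend is looked up and called. A rough usage sketch, assuming a patched gammapy install (hypothetical harness, not part of the repository):

```python
# Hypothetical check: the lookup now yields a factory that is invoked lazily.
from gammapy.utils.parallel import PARALLEL_BACKEND_MODULES, ParallelBackendEnum

mp = PARALLEL_BACKEND_MODULES[ParallelBackendEnum.multiprocessing]()
print(mp.cpu_count())  # plain multiprocessing; ray is only touched if requested
```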
|
{"golden_diff": "diff --git a/gammapy/utils/parallel.py b/gammapy/utils/parallel.py\n--- a/gammapy/utils/parallel.py\n+++ b/gammapy/utils/parallel.py\n@@ -2,7 +2,6 @@\n \"\"\"Multiprocessing and multithreading setup\"\"\"\n import importlib\n import logging\n-import multiprocessing\n from enum import Enum\n from gammapy.utils.pbar import progress_bar\n \n@@ -39,6 +38,13 @@\n N_JOBS_DEFAULT = 1\n \n \n+def get_multiprocessing():\n+ \"\"\"Get multiprocessing module\"\"\"\n+ import multiprocessing\n+\n+ return multiprocessing\n+\n+\n def get_multiprocessing_ray():\n \"\"\"Get multiprocessing module for ray backend\"\"\"\n import ray.util.multiprocessing as multiprocessing\n@@ -147,7 +153,7 @@\n \n processes = pool_kwargs.get(\"processes\", N_JOBS_DEFAULT)\n \n- multiprocessing = PARALLEL_BACKEND_MODULES[backend]\n+ multiprocessing = PARALLEL_BACKEND_MODULES[backend]()\n \n if backend == ParallelBackendEnum.multiprocessing:\n cpu_count = multiprocessing.cpu_count()\n@@ -225,8 +231,6 @@\n }\n \n PARALLEL_BACKEND_MODULES = {\n- ParallelBackendEnum.multiprocessing: multiprocessing,\n+ ParallelBackendEnum.multiprocessing: get_multiprocessing,\n+ ParallelBackendEnum.ray: get_multiprocessing_ray,\n }\n-\n-if is_ray_available():\n- PARALLEL_BACKEND_MODULES[ParallelBackendEnum.ray] = get_multiprocessing_ray()\n", "issue": "Just importing gammapy raises a warning about ray support being experimental\n**Gammapy version**\r\n\r\n```\r\nGammapy support for parallelisation with ray is still a prototype and is not fully functional.\r\n\r\nSystem:\r\n\r\n\tpython_executable : /home/maxnoe/.local/conda/envs/gammapy-dev/bin/python3.9 \r\n\tpython_version : 3.9.16 \r\n\tmachine : x86_64 \r\n\tsystem : Linux \r\n\r\n\r\nGammapy package:\r\n\r\n\tversion : 1.2.dev201+g514451881.d20230627 \r\n\tpath : /home/maxnoe/Projects/gammapy/gammapy \r\n\r\n\r\nOther packages:\r\n\r\n\tnumpy : 1.25.0 \r\n\tscipy : 1.11.0 \r\n\tastropy : 5.3 \r\n\tregions : 0.7 \r\n\tclick : 8.1.3 \r\n\tyaml : 6.0 \r\n\tIPython : 8.14.0 \r\n\tjupyterlab : 3.5.3 \r\n\tmatplotlib : 3.7.1 \r\n\tpandas : 2.0.2 \r\n\thealpy : 1.16.2 \r\n\timinuit : 2.22.0 \r\n\tsherpa : 4.15.1 \r\n\tnaima : 0.10.0 \r\n\temcee : 3.1.4 \r\n\tcorner : 2.2.2 \r\n\tray : 2.5.1 \r\n\r\n\r\nGammapy environment variables:\r\n\r\n\tGAMMAPY_DATA : /home/maxnoe/Projects/gammapy/gammapy-datasets/dev \r\n```\r\n\r\n**Bug description**\r\n\r\nJust importing a subpackage of gammapy, without doing anything else, raises a warning about ray support being experimental.\r\n\r\nI am not doing anything with ray, I just setup the dev environment and imported things:\r\n\r\n```\r\n\u276f python -c 'import gammapy.datasets'\r\nGammapy support for parallelisation with ray is still a prototype and is not fully functional.\r\n\u276f python -c 'import gammapy.makers'\r\nGammapy support for parallelisation with ray is still a prototype and is not fully functional.\r\n```\r\n\r\n**Expected behavior**\r\n\r\nNo warnings about things I don't actually use.\r\n\r\n\r\n**To Reproduce**\r\nSee above, dev environment and the imports.\r\n\r\n**Other information**\r\n\r\n\n", "before_files": [{"content": "# Licensed under a 3-clause BSD style license - see LICENSE.rst\n\"\"\"Multiprocessing and multithreading setup\"\"\"\nimport importlib\nimport logging\nimport multiprocessing\nfrom enum import Enum\nfrom gammapy.utils.pbar import progress_bar\n\nlog = logging.getLogger(__name__)\n\n\nclass ParallelBackendEnum(Enum):\n \"\"\"Enum for parallel backend\"\"\"\n\n multiprocessing = \"multiprocessing\"\n 
ray = \"ray\"\n\n @classmethod\n def from_str(cls, value):\n \"\"\"Get enum from string\"\"\"\n if value is None:\n value = BACKEND_DEFAULT\n\n if value == \"ray\" and not is_ray_available():\n log.warning(\"Ray is not installed, falling back to multiprocessing backend\")\n value = \"multiprocessing\"\n\n return cls(value)\n\n\nclass PoolMethodEnum(Enum):\n \"\"\"Enum for pool method\"\"\"\n\n starmap = \"starmap\"\n apply_async = \"apply_async\"\n\n\nBACKEND_DEFAULT = ParallelBackendEnum.multiprocessing\nN_JOBS_DEFAULT = 1\n\n\ndef get_multiprocessing_ray():\n \"\"\"Get multiprocessing module for ray backend\"\"\"\n import ray.util.multiprocessing as multiprocessing\n\n log.warning(\n \"Gammapy support for parallelisation with ray is still a prototype and is not fully functional.\"\n )\n return multiprocessing\n\n\ndef is_ray_initialized():\n \"\"\"Check if ray is initialized\"\"\"\n try:\n from ray import is_initialized\n\n return is_initialized()\n except ModuleNotFoundError:\n return False\n\n\ndef is_ray_available():\n \"\"\"Check if ray is available\"\"\"\n try:\n importlib.import_module(\"ray\")\n return True\n except ModuleNotFoundError:\n return False\n\n\nclass ParallelMixin:\n \"\"\"Mixin class to handle parallel processing\"\"\"\n\n @property\n def n_jobs(self):\n \"\"\"Number of jobs (int)\"\"\"\n # TODO: this is somewhat unusual behaviour. It deviates from a normal default value handling\n if self._n_jobs is None:\n return N_JOBS_DEFAULT\n\n return self._n_jobs\n\n @n_jobs.setter\n def n_jobs(self, value):\n \"\"\"Number of jobs setter (int)\"\"\"\n if not isinstance(value, (int, type(None))):\n raise ValueError(\n f\"Invalid type: {value!r}, and integer or None is expected.\"\n )\n\n self._n_jobs = value\n\n @property\n def parallel_backend(self):\n \"\"\"Parallel backend (str)\"\"\"\n if self._parallel_backend is None:\n return BACKEND_DEFAULT\n\n return self._parallel_backend\n\n @parallel_backend.setter\n def parallel_backend(self, value):\n \"\"\"Parallel backend setter (str)\"\"\"\n self._parallel_backend = ParallelBackendEnum.from_str(value).value\n\n\ndef run_multiprocessing(\n func,\n inputs,\n backend=None,\n pool_kwargs=None,\n method=\"starmap\",\n method_kwargs=None,\n task_name=\"\",\n):\n \"\"\"Run function in a loop or in Parallel\n\n Notes\n -----\n The progress bar can be displayed for this function.\n\n Parameters\n ----------\n func : function\n Function to run\n inputs : list\n List of arguments to pass to the function\n backend : {'multiprocessing', 'ray'}\n Backend to use.\n pool_kwargs : dict\n Keyword arguments passed to the pool. 
The number of processes is limited\n to the number of physical CPUs.\n method : {'starmap', 'apply_async'}\n Pool method to use.\n method_kwargs : dict\n Keyword arguments passed to the method\n task_name : str\n Name of the task to display in the progress bar\n \"\"\"\n backend = ParallelBackendEnum.from_str(backend)\n\n if method_kwargs is None:\n method_kwargs = {}\n\n if pool_kwargs is None:\n pool_kwargs = {}\n\n processes = pool_kwargs.get(\"processes\", N_JOBS_DEFAULT)\n\n multiprocessing = PARALLEL_BACKEND_MODULES[backend]\n\n if backend == ParallelBackendEnum.multiprocessing:\n cpu_count = multiprocessing.cpu_count()\n\n if processes > cpu_count:\n log.info(f\"Limiting number of processes from {processes} to {cpu_count}\")\n processes = cpu_count\n\n if multiprocessing.current_process().name != \"MainProcess\":\n # subprocesses cannot have childs\n processes = 1\n # TODO: check for ray\n\n if processes == 1:\n return run_loop(\n func=func, inputs=inputs, method_kwargs=method_kwargs, task_name=task_name\n )\n\n if backend == ParallelBackendEnum.ray:\n address = \"auto\" if is_ray_initialized() else None\n pool_kwargs.setdefault(\"ray_address\", address)\n\n log.info(f\"Using {processes} processes to compute {task_name}\")\n\n with multiprocessing.Pool(**pool_kwargs) as pool:\n pool_func = POOL_METHODS[PoolMethodEnum(method)]\n results = pool_func(\n pool=pool,\n func=func,\n inputs=inputs,\n method_kwargs=method_kwargs,\n task_name=task_name,\n )\n\n return results\n\n\ndef run_loop(func, inputs, method_kwargs=None, task_name=\"\"):\n \"\"\"Loop over inputs an run function\"\"\"\n results = []\n\n callback = method_kwargs.get(\"callback\", None)\n\n for arguments in progress_bar(inputs, desc=task_name):\n result = func(*arguments)\n\n if callback is not None:\n result = callback(result)\n\n results.append(result)\n\n return results\n\n\ndef run_pool_star_map(pool, func, inputs, method_kwargs=None, task_name=\"\"):\n \"\"\"Run function in parallel\"\"\"\n return pool.starmap(func, progress_bar(inputs, desc=task_name), **method_kwargs)\n\n\ndef run_pool_async(pool, func, inputs, method_kwargs=None, task_name=\"\"):\n \"\"\"Run function in parallel async\"\"\"\n results = []\n\n for arguments in progress_bar(inputs, desc=task_name):\n result = pool.apply_async(func, arguments, **method_kwargs)\n results.append(result)\n # wait async run is done\n [result.wait() for result in results]\n return results\n\n\nPOOL_METHODS = {\n PoolMethodEnum.starmap: run_pool_star_map,\n PoolMethodEnum.apply_async: run_pool_async,\n}\n\nPARALLEL_BACKEND_MODULES = {\n ParallelBackendEnum.multiprocessing: multiprocessing,\n}\n\nif is_ray_available():\n PARALLEL_BACKEND_MODULES[ParallelBackendEnum.ray] = get_multiprocessing_ray()\n", "path": "gammapy/utils/parallel.py"}], "after_files": [{"content": "# Licensed under a 3-clause BSD style license - see LICENSE.rst\n\"\"\"Multiprocessing and multithreading setup\"\"\"\nimport importlib\nimport logging\nfrom enum import Enum\nfrom gammapy.utils.pbar import progress_bar\n\nlog = logging.getLogger(__name__)\n\n\nclass ParallelBackendEnum(Enum):\n \"\"\"Enum for parallel backend\"\"\"\n\n multiprocessing = \"multiprocessing\"\n ray = \"ray\"\n\n @classmethod\n def from_str(cls, value):\n \"\"\"Get enum from string\"\"\"\n if value is None:\n value = BACKEND_DEFAULT\n\n if value == \"ray\" and not is_ray_available():\n log.warning(\"Ray is not installed, falling back to multiprocessing backend\")\n value = \"multiprocessing\"\n\n return cls(value)\n\n\nclass 
PoolMethodEnum(Enum):\n \"\"\"Enum for pool method\"\"\"\n\n starmap = \"starmap\"\n apply_async = \"apply_async\"\n\n\nBACKEND_DEFAULT = ParallelBackendEnum.multiprocessing\nN_JOBS_DEFAULT = 1\n\n\ndef get_multiprocessing():\n \"\"\"Get multiprocessing module\"\"\"\n import multiprocessing\n\n return multiprocessing\n\n\ndef get_multiprocessing_ray():\n \"\"\"Get multiprocessing module for ray backend\"\"\"\n import ray.util.multiprocessing as multiprocessing\n\n log.warning(\n \"Gammapy support for parallelisation with ray is still a prototype and is not fully functional.\"\n )\n return multiprocessing\n\n\ndef is_ray_initialized():\n \"\"\"Check if ray is initialized\"\"\"\n try:\n from ray import is_initialized\n\n return is_initialized()\n except ModuleNotFoundError:\n return False\n\n\ndef is_ray_available():\n \"\"\"Check if ray is available\"\"\"\n try:\n importlib.import_module(\"ray\")\n return True\n except ModuleNotFoundError:\n return False\n\n\nclass ParallelMixin:\n \"\"\"Mixin class to handle parallel processing\"\"\"\n\n @property\n def n_jobs(self):\n \"\"\"Number of jobs (int)\"\"\"\n # TODO: this is somewhat unusual behaviour. It deviates from a normal default value handling\n if self._n_jobs is None:\n return N_JOBS_DEFAULT\n\n return self._n_jobs\n\n @n_jobs.setter\n def n_jobs(self, value):\n \"\"\"Number of jobs setter (int)\"\"\"\n if not isinstance(value, (int, type(None))):\n raise ValueError(\n f\"Invalid type: {value!r}, and integer or None is expected.\"\n )\n\n self._n_jobs = value\n\n @property\n def parallel_backend(self):\n \"\"\"Parallel backend (str)\"\"\"\n if self._parallel_backend is None:\n return BACKEND_DEFAULT\n\n return self._parallel_backend\n\n @parallel_backend.setter\n def parallel_backend(self, value):\n \"\"\"Parallel backend setter (str)\"\"\"\n self._parallel_backend = ParallelBackendEnum.from_str(value).value\n\n\ndef run_multiprocessing(\n func,\n inputs,\n backend=None,\n pool_kwargs=None,\n method=\"starmap\",\n method_kwargs=None,\n task_name=\"\",\n):\n \"\"\"Run function in a loop or in Parallel\n\n Notes\n -----\n The progress bar can be displayed for this function.\n\n Parameters\n ----------\n func : function\n Function to run\n inputs : list\n List of arguments to pass to the function\n backend : {'multiprocessing', 'ray'}\n Backend to use.\n pool_kwargs : dict\n Keyword arguments passed to the pool. 
The number of processes is limited\n to the number of physical CPUs.\n method : {'starmap', 'apply_async'}\n Pool method to use.\n method_kwargs : dict\n Keyword arguments passed to the method\n task_name : str\n Name of the task to display in the progress bar\n \"\"\"\n backend = ParallelBackendEnum.from_str(backend)\n\n if method_kwargs is None:\n method_kwargs = {}\n\n if pool_kwargs is None:\n pool_kwargs = {}\n\n processes = pool_kwargs.get(\"processes\", N_JOBS_DEFAULT)\n\n multiprocessing = PARALLEL_BACKEND_MODULES[backend]()\n\n if backend == ParallelBackendEnum.multiprocessing:\n cpu_count = multiprocessing.cpu_count()\n\n if processes > cpu_count:\n log.info(f\"Limiting number of processes from {processes} to {cpu_count}\")\n processes = cpu_count\n\n if multiprocessing.current_process().name != \"MainProcess\":\n # subprocesses cannot have childs\n processes = 1\n # TODO: check for ray\n\n if processes == 1:\n return run_loop(\n func=func, inputs=inputs, method_kwargs=method_kwargs, task_name=task_name\n )\n\n if backend == ParallelBackendEnum.ray:\n address = \"auto\" if is_ray_initialized() else None\n pool_kwargs.setdefault(\"ray_address\", address)\n\n log.info(f\"Using {processes} processes to compute {task_name}\")\n\n with multiprocessing.Pool(**pool_kwargs) as pool:\n pool_func = POOL_METHODS[PoolMethodEnum(method)]\n results = pool_func(\n pool=pool,\n func=func,\n inputs=inputs,\n method_kwargs=method_kwargs,\n task_name=task_name,\n )\n\n return results\n\n\ndef run_loop(func, inputs, method_kwargs=None, task_name=\"\"):\n \"\"\"Loop over inputs an run function\"\"\"\n results = []\n\n callback = method_kwargs.get(\"callback\", None)\n\n for arguments in progress_bar(inputs, desc=task_name):\n result = func(*arguments)\n\n if callback is not None:\n result = callback(result)\n\n results.append(result)\n\n return results\n\n\ndef run_pool_star_map(pool, func, inputs, method_kwargs=None, task_name=\"\"):\n \"\"\"Run function in parallel\"\"\"\n return pool.starmap(func, progress_bar(inputs, desc=task_name), **method_kwargs)\n\n\ndef run_pool_async(pool, func, inputs, method_kwargs=None, task_name=\"\"):\n \"\"\"Run function in parallel async\"\"\"\n results = []\n\n for arguments in progress_bar(inputs, desc=task_name):\n result = pool.apply_async(func, arguments, **method_kwargs)\n results.append(result)\n # wait async run is done\n [result.wait() for result in results]\n return results\n\n\nPOOL_METHODS = {\n PoolMethodEnum.starmap: run_pool_star_map,\n PoolMethodEnum.apply_async: run_pool_async,\n}\n\nPARALLEL_BACKEND_MODULES = {\n ParallelBackendEnum.multiprocessing: get_multiprocessing,\n ParallelBackendEnum.ray: get_multiprocessing_ray,\n}\n", "path": "gammapy/utils/parallel.py"}]}
| 2,783 | 320 |
gh_patches_debug_35887
|
rasdani/github-patches
|
git_diff
|
facebookresearch__xformers-308
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cannot pickle nystrom based model
When I `torch.save` my model with `nystrom` attention I get the following pickle error:
```py
AttributeError: Can't pickle local object 'get_avg_pool.<locals>.avg_pool'
```
I believe coming from this function:
https://github.com/facebookresearch/xformers/blob/9232b2d27a775a43173e2fd86c03251ab64f7ede/xformers/components/attention/nystrom.py#L60
--- END ISSUE ---
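The traceback is the standard pickle limitation: functions defined inside another function are serialized by qualified name and cannot be resolved at module level. A minimal standalone sketch of the failure and of the class-based alternative, using only the standard library (hypothetical names, unrelated to the xformers classes):

```python
import pickle


def get_adder(n: int):
    def adder(x):  # local object: pickle cannot resolve 'get_adder.<locals>.adder'
        return x + n
    return adder


class Adder:  # module-level class with state: picklable
    def __init__(self, n: int):
        self.n = n

    def __call__(self, x):
        return x + self.n


pickle.dumps(Adder(3))  # works
try:
    pickle.dumps(get_adder(3))
except (AttributeError, pickle.PicklingError) as err:
    print(err)  # e.g. "Can't pickle local object 'get_adder.<locals>.adder'"
```

Since `torch.save` uses pickle under the hood, any module holding such a closure as an attribute fails the same way.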
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `xformers/components/attention/nystrom.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.
2 #
3 # This source code is licensed under the BSD license found in the
4 # LICENSE file in the root directory of this source tree.
5
6
7 import logging
8 from dataclasses import dataclass
9 from typing import Optional
10
11 import torch
12 import torch.nn as nn
13
14 from xformers.components.attention import Attention, AttentionConfig, register_attention
15 from xformers.components.attention.core import (
16 scaled_dot_product_attention,
17 scaled_query_key_softmax,
18 )
19 from xformers.components.attention.utils import (
20 bool_mask_to_additive,
21 iterative_pinv,
22 reshape_key_padding_mask,
23 )
24
25
26 @dataclass
27 class NystromSelfAttentionConfig(AttentionConfig):
28 """
29 num_heads Number of heads.
30 num_landmarks Number of landmarks to use for softmax approximation. 64 often sufficient for a good
31 approximation according to https://arxiv.org/pdf/2102.03902.pdf.
32 causal Apply a causal mask, in that the attention cannot be applied to the future.
33 use_razavi_pinverse If true, use iterative method from (Razavi et al. 2014) to approximate the Moore-Penrose
34 inverse, otherwise use standard torch inverse.
35 pinverse_original_init True if using original initialization when calculating Moore-Penrose pseudo inverse using
36 method from (Razavi et al. 2014).
37 False if using exact coefficient computation (leads to faster convergence).
38 inv_iterations Number of iterations for calculating the Moore-Penrose pseudo inverse.
39 v_skip_connection A module that will take V as input and will be added as a skip connection to the
40 softmax approximation. A skip connection is added in the paper to help with training.
41 conv_kernel_size Kernel size for convolution optionally added to help in training.
42 If v_skip_connection is not specified, this will be used to define the default
43 depth wise convolution used as a skip connection.
44 If both conv_kernel_size and v_skip_connection are None, no skip connection will
45 be added.
46 landmark_pooling Which module to use when computing landmarks. Default is AdaptiveAvgPool2d.
47 """
48
49 num_heads: int
50 num_landmarks: Optional[int]
51 landmark_pooling: Optional[nn.Module]
52 causal: Optional[bool]
53 pinverse_original_init: Optional[bool]
54 inv_iterations: Optional[int]
55 v_skip_connection: Optional[nn.Module]
56 conv_kernel_size: Optional[int]
57 use_razavi_pinverse: Optional[bool]
58
59
60 def get_avg_pool(n: int):
61 def avg_pool(x: torch.Tensor):
62 # Average independently for every segment in the sequence dimension
63 seq_len = x.shape[1]
64 head_dim = x.shape[2]
65 segments = seq_len // n
66
67 # Dimensions are a match
68 if seq_len % n == 0:
69 return x.reshape(
70 -1,
71 n,
72 segments,
73 head_dim,
74 ).mean(dim=-2)
75
76 # Handle the last segment boundary being off
77 n_round = n - seq_len % n
78
79 x_avg_round = (
80 x[:, : n_round * segments, :]
81 .reshape(-1, n_round, segments, head_dim)
82 .mean(dim=-2)
83 )
84 x_avg_off = (
85 x[:, n_round * segments :, :]
86 .reshape(-1, n - n_round, segments + 1, head_dim)
87 .mean(dim=-2)
88 )
89 return torch.cat((x_avg_round, x_avg_off), dim=-2)
90
91 return avg_pool
92
93
94 @register_attention("nystrom", NystromSelfAttentionConfig)
95 class NystromAttention(Attention):
96 # TODO: update defaults for use_razavi_pinverse and inv_iterations
97 def __init__(
98 self,
99 dropout: float,
100 num_heads: int,
101 num_landmarks: int = 64,
102 landmark_pooling: Optional[nn.Module] = None,
103 causal: bool = False,
104 use_razavi_pinverse: bool = True,
105 pinverse_original_init: bool = False,
106 inv_iterations: int = 6, # recommended default in paper was 6.
107 v_skip_connection: Optional[nn.Module] = None,
108 conv_kernel_size: Optional[int] = None,
109 *args,
110 **kwargs,
111 ):
112 """
113 Nystrom attention mechanism, from Nystromformer_.
114 ::
115
116 "A Nystrom-based Algorithm for Approximating Self-Attention."
117 Xiong, Y., Zeng, Z., Chakraborty, R., Tan, M., Fung, G., Li, Y., Singh, V. (2021)
118
119 Reference codebase: https://github.com/mlpen/Nystromformer
120
121 .. _Nystromformer: https://arxiv.org/pdf/2102.03902.pdf
122
123 """
124 super().__init__()
125 # merged key padding mask and attention mask is not accepted
126 self.requires_separate_masks = True
127 self.num_landmarks = num_landmarks
128 # TODO: should be able to not have to pass in num_heads
129 self.num_heads = num_heads
130 self.use_razavi_pinverse = use_razavi_pinverse
131 self.pinverse_original_init = pinverse_original_init
132 self.inv_iterations = inv_iterations
133 self.attn_drop = nn.Dropout(dropout)
134 self.skip_connection = v_skip_connection
135 self.causal = causal
136
137 if self.skip_connection is None and conv_kernel_size is not None:
138 self.skip_connection = nn.Conv2d(
139 in_channels=self.num_heads,
140 out_channels=self.num_heads,
141 kernel_size=(conv_kernel_size, 1),
142 padding=(conv_kernel_size // 2, 0),
143 bias=False,
144 groups=self.num_heads,
145 )
146
147 if landmark_pooling is not None:
148 self.landmark_pooling = landmark_pooling
149 else:
150 self.landmark_pooling = get_avg_pool(self.num_landmarks)
151
152 # Optional lower triangular masks for causal attention
153 self.causal_mask_1: Optional[torch.Tensor] = None
154 self.causal_mask_2: Optional[torch.Tensor] = None
155 self.causal_mask_3: Optional[torch.Tensor] = None
156
157 # This attention does not support attention masks
158 self.supports_attention_mask = False
159 self.supports_key_padding_mask = True
160
161 def forward(
162 self,
163 q: torch.Tensor,
164 k: torch.Tensor,
165 v: torch.Tensor,
166 key_padding_mask: Optional[torch.Tensor] = None,
167 *args,
168 **kwargs,
169 ):
170 r"""
171 key_padding_mask Only a key padding mask is accepted here. The size must be (batch size, sequence length) or
172 (batch size * num_heads, 1, sequence length). If dimensions are not correct, the mask will
173 be ignored. An additive mask is expected, meaning float values using "-inf" to mask values
174 """
175
176 batched_dim = k.size(0)
177 seq_len = k.size(-2)
178 tt = {"dtype": q.dtype, "device": q.device}
179
180 if key_padding_mask is not None:
181 if key_padding_mask.dtype == torch.bool:
182 logging.warning(
183 "Bool mask found, but an additive mask is expected. Converting but this is slow"
184 )
185 key_padding_mask = bool_mask_to_additive(key_padding_mask)
186
187 if key_padding_mask.ndim == 2:
188 key_padding_mask = reshape_key_padding_mask(
189 key_padding_mask, batched_dim
190 )
191
192 assert key_padding_mask.size() == (batched_dim, 1, seq_len), (
193 f"key_padding_mask has invalid dimensions {key_padding_mask.size()}."
194 f" Must have dimensions {batched_dim, 1, seq_len} or (batch_size, {seq_len})."
195 )
196
197 if self.num_landmarks >= seq_len:
198 mask: Optional[torch.Tensor] = None
199
200 if self.causal:
201 mask = self._triu_mask(batched_dim, seq_len, seq_len, **tt)
202
203 if key_padding_mask is not None:
204 mask = key_padding_mask if mask is None else mask + key_padding_mask
205
206 x = scaled_dot_product_attention(q=q, k=k, v=v, att_mask=mask)
207
208 else:
209 q_landmarks = self.landmark_pooling(q)
210 k_landmarks = self.landmark_pooling(k)
211
212 if self.causal and (
213 self.causal_mask_1 is None
214 or (batched_dim, seq_len, self.num_landmarks)
215 != self.causal_mask_1.size()
216 ):
217 self.causal_mask_1 = self._triu_mask(
218 batched_dim, seq_len, self.num_landmarks, **tt
219 )
220 self.causal_mask_2 = self._triu_mask(
221 batched_dim, self.num_landmarks, self.num_landmarks, **tt
222 )
223 self.causal_mask_3 = self._triu_mask(
224 batched_dim, self.num_landmarks, seq_len, **tt
225 )
226
227 mask_1: Optional[torch.Tensor] = self.causal_mask_1
228 mask_2: Optional[torch.Tensor] = self.causal_mask_2
229 mask_3: Optional[torch.Tensor] = self.causal_mask_3
230 if key_padding_mask is not None:
231 mask_1 = (
232 key_padding_mask.transpose(-2, -1)
233 if mask_1 is None
234 else mask_1 + key_padding_mask.transpose(-2, -1)
235 )
236 mask_3 = (
237 key_padding_mask if mask_3 is None else mask_3 + key_padding_mask
238 )
239
240 kernel_1 = scaled_query_key_softmax(q=q, k=k_landmarks, att_mask=mask_1)
241 kernel_2 = scaled_query_key_softmax(
242 q=q_landmarks, k=k_landmarks, att_mask=mask_2
243 )
244 kernel_3 = scaled_dot_product_attention(
245 q=q_landmarks, k=k, v=v, att_mask=mask_3
246 )
247
248 kernel_2_inv = (
249 iterative_pinv(
250 kernel_2, self.inv_iterations, self.pinverse_original_init
251 )
252 if self.use_razavi_pinverse
253 else torch.linalg.pinv(kernel_2)
254 )
255
256 x = torch.matmul(
257 torch.matmul(
258 kernel_1,
259 kernel_2_inv,
260 ),
261 kernel_3,
262 )
263
264 if self.skip_connection:
265 # Assumption here is that v is 3D.
266 v_conv = self.skip_connection(
267 v.reshape(-1, self.num_heads, v.size(-2), v.size(-1))
268 )
269 x += v_conv.reshape(-1, v_conv.size(-2), v_conv.size(-1))
270 x = self.attn_drop(x)
271 return x
272
273 def _triu_mask(self, dim_1: int, dim_2: int, dim_3: int, **kwargs) -> torch.Tensor:
274 device = kwargs["device"]
275 dtype = kwargs["dtype"]
276
277 return torch.triu(
278 torch.ones(dim_2, dim_3, dtype=dtype, device=device) * float("-inf"),
279 diagonal=1,
280 ).expand(
281 dim_1, -1, -1
282 ) # micro optim, save memory on the batch dimension
283
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/xformers/components/attention/nystrom.py b/xformers/components/attention/nystrom.py
--- a/xformers/components/attention/nystrom.py
+++ b/xformers/components/attention/nystrom.py
@@ -57,24 +57,29 @@
use_razavi_pinverse: Optional[bool]
-def get_avg_pool(n: int):
- def avg_pool(x: torch.Tensor):
+class AvgPool(nn.Module):
+ def __init__(self, n: int):
+ super().__init__()
+ self.n = n
+
+ def forward(self, x: torch.Tensor):
# Average independently for every segment in the sequence dimension
seq_len = x.shape[1]
head_dim = x.shape[2]
- segments = seq_len // n
+ segments = seq_len // self.n
+ assert segments > 0, "num_landmarks should be smaller than the sequence length"
# Dimensions are a match
- if seq_len % n == 0:
+ if seq_len % self.n == 0:
return x.reshape(
-1,
- n,
+ self.n,
segments,
head_dim,
).mean(dim=-2)
# Handle the last segment boundary being off
- n_round = n - seq_len % n
+ n_round = self.n - seq_len % self.n
x_avg_round = (
x[:, : n_round * segments, :]
@@ -83,13 +88,11 @@
)
x_avg_off = (
x[:, n_round * segments :, :]
- .reshape(-1, n - n_round, segments + 1, head_dim)
+ .reshape(-1, self.n - n_round, segments + 1, head_dim)
.mean(dim=-2)
)
return torch.cat((x_avg_round, x_avg_off), dim=-2)
- return avg_pool
-
@register_attention("nystrom", NystromSelfAttentionConfig)
class NystromAttention(Attention):
@@ -147,7 +150,7 @@
if landmark_pooling is not None:
self.landmark_pooling = landmark_pooling
else:
- self.landmark_pooling = get_avg_pool(self.num_landmarks)
+ self.landmark_pooling = AvgPool(n=self.num_landmarks)
# Optional lower triangular masks for causal attention
self.causal_mask_1: Optional[torch.Tensor] = None
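Because `AvgPool` is now a module-level `nn.Module` holding `n` as state, the attention block no longer carries an unpicklable closure. A rough smoke test, assuming a patched xformers checkout is importable (hypothetical, not part of the repository's test suite):

```python
import pickle

import torch
from xformers.components.attention.nystrom import NystromAttention

attn = NystromAttention(dropout=0.0, num_heads=4, num_landmarks=8)
pickle.dumps(attn)             # previously raised the closure error
torch.save(attn, "nystrom.pt")
```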
|
{"golden_diff": "diff --git a/xformers/components/attention/nystrom.py b/xformers/components/attention/nystrom.py\n--- a/xformers/components/attention/nystrom.py\n+++ b/xformers/components/attention/nystrom.py\n@@ -57,24 +57,29 @@\n use_razavi_pinverse: Optional[bool]\n \n \n-def get_avg_pool(n: int):\n- def avg_pool(x: torch.Tensor):\n+class AvgPool(nn.Module):\n+ def __init__(self, n: int):\n+ super().__init__()\n+ self.n = n\n+\n+ def forward(self, x: torch.Tensor):\n # Average independently for every segment in the sequence dimension\n seq_len = x.shape[1]\n head_dim = x.shape[2]\n- segments = seq_len // n\n+ segments = seq_len // self.n\n+ assert segments > 0, \"num_landmarks should be smaller than the sequence length\"\n \n # Dimensions are a match\n- if seq_len % n == 0:\n+ if seq_len % self.n == 0:\n return x.reshape(\n -1,\n- n,\n+ self.n,\n segments,\n head_dim,\n ).mean(dim=-2)\n \n # Handle the last segment boundary being off\n- n_round = n - seq_len % n\n+ n_round = self.n - seq_len % self.n\n \n x_avg_round = (\n x[:, : n_round * segments, :]\n@@ -83,13 +88,11 @@\n )\n x_avg_off = (\n x[:, n_round * segments :, :]\n- .reshape(-1, n - n_round, segments + 1, head_dim)\n+ .reshape(-1, self.n - n_round, segments + 1, head_dim)\n .mean(dim=-2)\n )\n return torch.cat((x_avg_round, x_avg_off), dim=-2)\n \n- return avg_pool\n-\n \n @register_attention(\"nystrom\", NystromSelfAttentionConfig)\n class NystromAttention(Attention):\n@@ -147,7 +150,7 @@\n if landmark_pooling is not None:\n self.landmark_pooling = landmark_pooling\n else:\n- self.landmark_pooling = get_avg_pool(self.num_landmarks)\n+ self.landmark_pooling = AvgPool(n=self.num_landmarks)\n \n # Optional lower triangular masks for causal attention\n self.causal_mask_1: Optional[torch.Tensor] = None\n", "issue": "Cannot pickle nystrom based model\nWhen I `torch.save` my model with `nystrom` attention I get the following pickle error:\r\n\r\n```py\r\nAttributeError: Can't pickle local object 'get_avg_pool.<locals>.avg_pool'\r\n```\r\n\r\nI believe coming from this function:\r\n\r\nhttps://github.com/facebookresearch/xformers/blob/9232b2d27a775a43173e2fd86c03251ab64f7ede/xformers/components/attention/nystrom.py#L60\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.\n#\n# This source code is licensed under the BSD license found in the\n# LICENSE file in the root directory of this source tree.\n\n\nimport logging\nfrom dataclasses import dataclass\nfrom typing import Optional\n\nimport torch\nimport torch.nn as nn\n\nfrom xformers.components.attention import Attention, AttentionConfig, register_attention\nfrom xformers.components.attention.core import (\n scaled_dot_product_attention,\n scaled_query_key_softmax,\n)\nfrom xformers.components.attention.utils import (\n bool_mask_to_additive,\n iterative_pinv,\n reshape_key_padding_mask,\n)\n\n\n@dataclass\nclass NystromSelfAttentionConfig(AttentionConfig):\n \"\"\"\n num_heads Number of heads.\n num_landmarks Number of landmarks to use for softmax approximation. 64 often sufficient for a good\n approximation according to https://arxiv.org/pdf/2102.03902.pdf.\n causal Apply a causal mask, in that the attention cannot be applied to the future.\n use_razavi_pinverse If true, use iterative method from (Razavi et al. 
2014) to approximate the Moore-Penrose\n inverse, otherwise use standard torch inverse.\n pinverse_original_init True if using original initialization when calculating Moore-Penrose pseudo inverse using\n method from (Razavi et al. 2014).\n False if using exact coefficient computation (leads to faster convergence).\n inv_iterations Number of iterations for calculating the Moore-Penrose pseudo inverse.\n v_skip_connection A module that will take V as input and will be added as a skip connection to the\n softmax approximation. A skip connection is added in the paper to help with training.\n conv_kernel_size Kernel size for convolution optionally added to help in training.\n If v_skip_connection is not specified, this will be used to define the default\n depth wise convolution used as a skip connection.\n If both conv_kernel_size and v_skip_connection are None, no skip connection will\n be added.\n landmark_pooling Which module to use when computing landmarks. Default is AdaptiveAvgPool2d.\n \"\"\"\n\n num_heads: int\n num_landmarks: Optional[int]\n landmark_pooling: Optional[nn.Module]\n causal: Optional[bool]\n pinverse_original_init: Optional[bool]\n inv_iterations: Optional[int]\n v_skip_connection: Optional[nn.Module]\n conv_kernel_size: Optional[int]\n use_razavi_pinverse: Optional[bool]\n\n\ndef get_avg_pool(n: int):\n def avg_pool(x: torch.Tensor):\n # Average independently for every segment in the sequence dimension\n seq_len = x.shape[1]\n head_dim = x.shape[2]\n segments = seq_len // n\n\n # Dimensions are a match\n if seq_len % n == 0:\n return x.reshape(\n -1,\n n,\n segments,\n head_dim,\n ).mean(dim=-2)\n\n # Handle the last segment boundary being off\n n_round = n - seq_len % n\n\n x_avg_round = (\n x[:, : n_round * segments, :]\n .reshape(-1, n_round, segments, head_dim)\n .mean(dim=-2)\n )\n x_avg_off = (\n x[:, n_round * segments :, :]\n .reshape(-1, n - n_round, segments + 1, head_dim)\n .mean(dim=-2)\n )\n return torch.cat((x_avg_round, x_avg_off), dim=-2)\n\n return avg_pool\n\n\n@register_attention(\"nystrom\", NystromSelfAttentionConfig)\nclass NystromAttention(Attention):\n # TODO: update defaults for use_razavi_pinverse and inv_iterations\n def __init__(\n self,\n dropout: float,\n num_heads: int,\n num_landmarks: int = 64,\n landmark_pooling: Optional[nn.Module] = None,\n causal: bool = False,\n use_razavi_pinverse: bool = True,\n pinverse_original_init: bool = False,\n inv_iterations: int = 6, # recommended default in paper was 6.\n v_skip_connection: Optional[nn.Module] = None,\n conv_kernel_size: Optional[int] = None,\n *args,\n **kwargs,\n ):\n \"\"\"\n Nystrom attention mechanism, from Nystromformer_.\n ::\n\n \"A Nystrom-based Algorithm for Approximating Self-Attention.\"\n Xiong, Y., Zeng, Z., Chakraborty, R., Tan, M., Fung, G., Li, Y., Singh, V. (2021)\n\n Reference codebase: https://github.com/mlpen/Nystromformer\n\n .. 
_Nystromformer: https://arxiv.org/pdf/2102.03902.pdf\n\n \"\"\"\n super().__init__()\n # merged key padding mask and attention mask is not accepted\n self.requires_separate_masks = True\n self.num_landmarks = num_landmarks\n # TODO: should be able to not have to pass in num_heads\n self.num_heads = num_heads\n self.use_razavi_pinverse = use_razavi_pinverse\n self.pinverse_original_init = pinverse_original_init\n self.inv_iterations = inv_iterations\n self.attn_drop = nn.Dropout(dropout)\n self.skip_connection = v_skip_connection\n self.causal = causal\n\n if self.skip_connection is None and conv_kernel_size is not None:\n self.skip_connection = nn.Conv2d(\n in_channels=self.num_heads,\n out_channels=self.num_heads,\n kernel_size=(conv_kernel_size, 1),\n padding=(conv_kernel_size // 2, 0),\n bias=False,\n groups=self.num_heads,\n )\n\n if landmark_pooling is not None:\n self.landmark_pooling = landmark_pooling\n else:\n self.landmark_pooling = get_avg_pool(self.num_landmarks)\n\n # Optional lower triangular masks for causal attention\n self.causal_mask_1: Optional[torch.Tensor] = None\n self.causal_mask_2: Optional[torch.Tensor] = None\n self.causal_mask_3: Optional[torch.Tensor] = None\n\n # This attention does not support attention masks\n self.supports_attention_mask = False\n self.supports_key_padding_mask = True\n\n def forward(\n self,\n q: torch.Tensor,\n k: torch.Tensor,\n v: torch.Tensor,\n key_padding_mask: Optional[torch.Tensor] = None,\n *args,\n **kwargs,\n ):\n r\"\"\"\n key_padding_mask Only a key padding mask is accepted here. The size must be (batch size, sequence length) or\n (batch size * num_heads, 1, sequence length). If dimensions are not correct, the mask will\n be ignored. An additive mask is expected, meaning float values using \"-inf\" to mask values\n \"\"\"\n\n batched_dim = k.size(0)\n seq_len = k.size(-2)\n tt = {\"dtype\": q.dtype, \"device\": q.device}\n\n if key_padding_mask is not None:\n if key_padding_mask.dtype == torch.bool:\n logging.warning(\n \"Bool mask found, but an additive mask is expected. 
Converting but this is slow\"\n )\n key_padding_mask = bool_mask_to_additive(key_padding_mask)\n\n if key_padding_mask.ndim == 2:\n key_padding_mask = reshape_key_padding_mask(\n key_padding_mask, batched_dim\n )\n\n assert key_padding_mask.size() == (batched_dim, 1, seq_len), (\n f\"key_padding_mask has invalid dimensions {key_padding_mask.size()}.\"\n f\" Must have dimensions {batched_dim, 1, seq_len} or (batch_size, {seq_len}).\"\n )\n\n if self.num_landmarks >= seq_len:\n mask: Optional[torch.Tensor] = None\n\n if self.causal:\n mask = self._triu_mask(batched_dim, seq_len, seq_len, **tt)\n\n if key_padding_mask is not None:\n mask = key_padding_mask if mask is None else mask + key_padding_mask\n\n x = scaled_dot_product_attention(q=q, k=k, v=v, att_mask=mask)\n\n else:\n q_landmarks = self.landmark_pooling(q)\n k_landmarks = self.landmark_pooling(k)\n\n if self.causal and (\n self.causal_mask_1 is None\n or (batched_dim, seq_len, self.num_landmarks)\n != self.causal_mask_1.size()\n ):\n self.causal_mask_1 = self._triu_mask(\n batched_dim, seq_len, self.num_landmarks, **tt\n )\n self.causal_mask_2 = self._triu_mask(\n batched_dim, self.num_landmarks, self.num_landmarks, **tt\n )\n self.causal_mask_3 = self._triu_mask(\n batched_dim, self.num_landmarks, seq_len, **tt\n )\n\n mask_1: Optional[torch.Tensor] = self.causal_mask_1\n mask_2: Optional[torch.Tensor] = self.causal_mask_2\n mask_3: Optional[torch.Tensor] = self.causal_mask_3\n if key_padding_mask is not None:\n mask_1 = (\n key_padding_mask.transpose(-2, -1)\n if mask_1 is None\n else mask_1 + key_padding_mask.transpose(-2, -1)\n )\n mask_3 = (\n key_padding_mask if mask_3 is None else mask_3 + key_padding_mask\n )\n\n kernel_1 = scaled_query_key_softmax(q=q, k=k_landmarks, att_mask=mask_1)\n kernel_2 = scaled_query_key_softmax(\n q=q_landmarks, k=k_landmarks, att_mask=mask_2\n )\n kernel_3 = scaled_dot_product_attention(\n q=q_landmarks, k=k, v=v, att_mask=mask_3\n )\n\n kernel_2_inv = (\n iterative_pinv(\n kernel_2, self.inv_iterations, self.pinverse_original_init\n )\n if self.use_razavi_pinverse\n else torch.linalg.pinv(kernel_2)\n )\n\n x = torch.matmul(\n torch.matmul(\n kernel_1,\n kernel_2_inv,\n ),\n kernel_3,\n )\n\n if self.skip_connection:\n # Assumption here is that v is 3D.\n v_conv = self.skip_connection(\n v.reshape(-1, self.num_heads, v.size(-2), v.size(-1))\n )\n x += v_conv.reshape(-1, v_conv.size(-2), v_conv.size(-1))\n x = self.attn_drop(x)\n return x\n\n def _triu_mask(self, dim_1: int, dim_2: int, dim_3: int, **kwargs) -> torch.Tensor:\n device = kwargs[\"device\"]\n dtype = kwargs[\"dtype\"]\n\n return torch.triu(\n torch.ones(dim_2, dim_3, dtype=dtype, device=device) * float(\"-inf\"),\n diagonal=1,\n ).expand(\n dim_1, -1, -1\n ) # micro optim, save memory on the batch dimension\n", "path": "xformers/components/attention/nystrom.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. 
All rights reserved.\n#\n# This source code is licensed under the BSD license found in the\n# LICENSE file in the root directory of this source tree.\n\n\nimport logging\nfrom dataclasses import dataclass\nfrom typing import Optional\n\nimport torch\nimport torch.nn as nn\n\nfrom xformers.components.attention import Attention, AttentionConfig, register_attention\nfrom xformers.components.attention.core import (\n scaled_dot_product_attention,\n scaled_query_key_softmax,\n)\nfrom xformers.components.attention.utils import (\n bool_mask_to_additive,\n iterative_pinv,\n reshape_key_padding_mask,\n)\n\n\n@dataclass\nclass NystromSelfAttentionConfig(AttentionConfig):\n \"\"\"\n num_heads Number of heads.\n num_landmarks Number of landmarks to use for softmax approximation. 64 often sufficient for a good\n approximation according to https://arxiv.org/pdf/2102.03902.pdf.\n causal Apply a causal mask, in that the attention cannot be applied to the future.\n use_razavi_pinverse If true, use iterative method from (Razavi et al. 2014) to approximate the Moore-Penrose\n inverse, otherwise use standard torch inverse.\n pinverse_original_init True if using original initialization when calculating Moore-Penrose pseudo inverse using\n method from (Razavi et al. 2014).\n False if using exact coefficient computation (leads to faster convergence).\n inv_iterations Number of iterations for calculating the Moore-Penrose pseudo inverse.\n v_skip_connection A module that will take V as input and will be added as a skip connection to the\n softmax approximation. A skip connection is added in the paper to help with training.\n conv_kernel_size Kernel size for convolution optionally added to help in training.\n If v_skip_connection is not specified, this will be used to define the default\n depth wise convolution used as a skip connection.\n If both conv_kernel_size and v_skip_connection are None, no skip connection will\n be added.\n landmark_pooling Which module to use when computing landmarks. 
Default is AdaptiveAvgPool2d.\n \"\"\"\n\n num_heads: int\n num_landmarks: Optional[int]\n landmark_pooling: Optional[nn.Module]\n causal: Optional[bool]\n pinverse_original_init: Optional[bool]\n inv_iterations: Optional[int]\n v_skip_connection: Optional[nn.Module]\n conv_kernel_size: Optional[int]\n use_razavi_pinverse: Optional[bool]\n\n\nclass AvgPool(nn.Module):\n def __init__(self, n: int):\n super().__init__()\n self.n = n\n\n def forward(self, x: torch.Tensor):\n # Average independently for every segment in the sequence dimension\n seq_len = x.shape[1]\n head_dim = x.shape[2]\n segments = seq_len // self.n\n assert segments > 0, \"num_landmarks should be smaller than the sequence length\"\n\n # Dimensions are a match\n if seq_len % self.n == 0:\n return x.reshape(\n -1,\n self.n,\n segments,\n head_dim,\n ).mean(dim=-2)\n\n # Handle the last segment boundary being off\n n_round = self.n - seq_len % self.n\n\n x_avg_round = (\n x[:, : n_round * segments, :]\n .reshape(-1, n_round, segments, head_dim)\n .mean(dim=-2)\n )\n x_avg_off = (\n x[:, n_round * segments :, :]\n .reshape(-1, self.n - n_round, segments + 1, head_dim)\n .mean(dim=-2)\n )\n return torch.cat((x_avg_round, x_avg_off), dim=-2)\n\n\n@register_attention(\"nystrom\", NystromSelfAttentionConfig)\nclass NystromAttention(Attention):\n # TODO: update defaults for use_razavi_pinverse and inv_iterations\n def __init__(\n self,\n dropout: float,\n num_heads: int,\n num_landmarks: int = 64,\n landmark_pooling: Optional[nn.Module] = None,\n causal: bool = False,\n use_razavi_pinverse: bool = True,\n pinverse_original_init: bool = False,\n inv_iterations: int = 6, # recommended default in paper was 6.\n v_skip_connection: Optional[nn.Module] = None,\n conv_kernel_size: Optional[int] = None,\n *args,\n **kwargs,\n ):\n \"\"\"\n Nystrom attention mechanism, from Nystromformer_.\n ::\n\n \"A Nystrom-based Algorithm for Approximating Self-Attention.\"\n Xiong, Y., Zeng, Z., Chakraborty, R., Tan, M., Fung, G., Li, Y., Singh, V. (2021)\n\n Reference codebase: https://github.com/mlpen/Nystromformer\n\n .. 
_Nystromformer: https://arxiv.org/pdf/2102.03902.pdf\n\n \"\"\"\n super().__init__()\n # merged key padding mask and attention mask is not accepted\n self.requires_separate_masks = True\n self.num_landmarks = num_landmarks\n # TODO: should be able to not have to pass in num_heads\n self.num_heads = num_heads\n self.use_razavi_pinverse = use_razavi_pinverse\n self.pinverse_original_init = pinverse_original_init\n self.inv_iterations = inv_iterations\n self.attn_drop = nn.Dropout(dropout)\n self.skip_connection = v_skip_connection\n self.causal = causal\n\n if self.skip_connection is None and conv_kernel_size is not None:\n self.skip_connection = nn.Conv2d(\n in_channels=self.num_heads,\n out_channels=self.num_heads,\n kernel_size=(conv_kernel_size, 1),\n padding=(conv_kernel_size // 2, 0),\n bias=False,\n groups=self.num_heads,\n )\n\n if landmark_pooling is not None:\n self.landmark_pooling = landmark_pooling\n else:\n self.landmark_pooling = AvgPool(n=self.num_landmarks)\n\n # Optional lower triangular masks for causal attention\n self.causal_mask_1: Optional[torch.Tensor] = None\n self.causal_mask_2: Optional[torch.Tensor] = None\n self.causal_mask_3: Optional[torch.Tensor] = None\n\n # This attention does not support attention masks\n self.supports_attention_mask = False\n self.supports_key_padding_mask = True\n\n def forward(\n self,\n q: torch.Tensor,\n k: torch.Tensor,\n v: torch.Tensor,\n key_padding_mask: Optional[torch.Tensor] = None,\n *args,\n **kwargs,\n ):\n r\"\"\"\n key_padding_mask Only a key padding mask is accepted here. The size must be (batch size, sequence length) or\n (batch size * num_heads, 1, sequence length). If dimensions are not correct, the mask will\n be ignored. An additive mask is expected, meaning float values using \"-inf\" to mask values\n \"\"\"\n\n batched_dim = k.size(0)\n seq_len = k.size(-2)\n tt = {\"dtype\": q.dtype, \"device\": q.device}\n\n if key_padding_mask is not None:\n if key_padding_mask.dtype == torch.bool:\n logging.warning(\n \"Bool mask found, but an additive mask is expected. 
Converting but this is slow\"\n )\n key_padding_mask = bool_mask_to_additive(key_padding_mask)\n\n if key_padding_mask.ndim == 2:\n key_padding_mask = reshape_key_padding_mask(\n key_padding_mask, batched_dim\n )\n\n assert key_padding_mask.size() == (batched_dim, 1, seq_len), (\n f\"key_padding_mask has invalid dimensions {key_padding_mask.size()}.\"\n f\" Must have dimensions {batched_dim, 1, seq_len} or (batch_size, {seq_len}).\"\n )\n\n if self.num_landmarks >= seq_len:\n mask: Optional[torch.Tensor] = None\n\n if self.causal:\n mask = self._triu_mask(batched_dim, seq_len, seq_len, **tt)\n\n if key_padding_mask is not None:\n mask = key_padding_mask if mask is None else mask + key_padding_mask\n\n x = scaled_dot_product_attention(q=q, k=k, v=v, att_mask=mask)\n\n else:\n q_landmarks = self.landmark_pooling(q)\n k_landmarks = self.landmark_pooling(k)\n\n if self.causal and (\n self.causal_mask_1 is None\n or (batched_dim, seq_len, self.num_landmarks)\n != self.causal_mask_1.size()\n ):\n self.causal_mask_1 = self._triu_mask(\n batched_dim, seq_len, self.num_landmarks, **tt\n )\n self.causal_mask_2 = self._triu_mask(\n batched_dim, self.num_landmarks, self.num_landmarks, **tt\n )\n self.causal_mask_3 = self._triu_mask(\n batched_dim, self.num_landmarks, seq_len, **tt\n )\n\n mask_1: Optional[torch.Tensor] = self.causal_mask_1\n mask_2: Optional[torch.Tensor] = self.causal_mask_2\n mask_3: Optional[torch.Tensor] = self.causal_mask_3\n if key_padding_mask is not None:\n mask_1 = (\n key_padding_mask.transpose(-2, -1)\n if mask_1 is None\n else mask_1 + key_padding_mask.transpose(-2, -1)\n )\n mask_3 = (\n key_padding_mask if mask_3 is None else mask_3 + key_padding_mask\n )\n\n kernel_1 = scaled_query_key_softmax(q=q, k=k_landmarks, att_mask=mask_1)\n kernel_2 = scaled_query_key_softmax(\n q=q_landmarks, k=k_landmarks, att_mask=mask_2\n )\n kernel_3 = scaled_dot_product_attention(\n q=q_landmarks, k=k, v=v, att_mask=mask_3\n )\n\n kernel_2_inv = (\n iterative_pinv(\n kernel_2, self.inv_iterations, self.pinverse_original_init\n )\n if self.use_razavi_pinverse\n else torch.linalg.pinv(kernel_2)\n )\n\n x = torch.matmul(\n torch.matmul(\n kernel_1,\n kernel_2_inv,\n ),\n kernel_3,\n )\n\n if self.skip_connection:\n # Assumption here is that v is 3D.\n v_conv = self.skip_connection(\n v.reshape(-1, self.num_heads, v.size(-2), v.size(-1))\n )\n x += v_conv.reshape(-1, v_conv.size(-2), v_conv.size(-1))\n x = self.attn_drop(x)\n return x\n\n def _triu_mask(self, dim_1: int, dim_2: int, dim_3: int, **kwargs) -> torch.Tensor:\n device = kwargs[\"device\"]\n dtype = kwargs[\"dtype\"]\n\n return torch.triu(\n torch.ones(dim_2, dim_3, dtype=dtype, device=device) * float(\"-inf\"),\n diagonal=1,\n ).expand(\n dim_1, -1, -1\n ) # micro optim, save memory on the batch dimension\n", "path": "xformers/components/attention/nystrom.py"}]}
| 3,618 | 552 |
gh_patches_debug_9777
|
rasdani/github-patches
|
git_diff
|
kivy__kivy-1397
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
simplelistadapter should accept objects inheriting from list or tuple
I'd find it useful if it were possible to extend the list object that I pass to the simplelistadapter, but an exception is raised.
Reproduce:
``` python
from kivy.adapters.simplelistadapter import SimpleListAdapter
class ExtendedList(list):
pass
list_adapter = SimpleListAdapter(data=ExtendedList())
```
A solution:
In kivy/adapters/simplelistadapter.py
``` python
47 if type(kwargs['data']) not in (tuple, list):
48 raise Exception('list adapter: data must be a tuple or list')
```
May be replaced by:
``` python
if not isinstance(kwargs['data'], list) and not isinstance(kwargs['data'], tuple):
```
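
For illustration, a minimal, self-contained check (demonstration code only, not taken from Kivy itself) showing why the `type(...)` comparison rejects subclasses while `isinstance` accepts them:

``` python
# Standalone demonstration -- no Kivy import needed.
class ExtendedList(list):
    pass

data = ExtendedList()

# The current check compares the exact type, so subclasses fail:
print(type(data) in (tuple, list))      # False -> the adapter raises an exception

# isinstance() also matches subclasses, which is the behaviour the issue asks for:
print(isinstance(data, (list, tuple)))  # True
```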
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `kivy/adapters/simplelistadapter.py`
Content:
```
1 '''
2 SimpleListAdapter
3 =================
4
5 .. versionadded:: 1.5
6
7 .. warning::
8
9 This code is still experimental, and its API is subject to change in a
10 future version.
11
12 The :class:`~kivy.adapters.simplelistadapter.SimpleListAdapter` is used for
13 basic lists. For example, it can be used for displaying a list of read-only
14 strings that do not require user interaction.
15
16 '''
17
18 __all__ = ('SimpleListAdapter', )
19
20 from kivy.adapters.adapter import Adapter
21 from kivy.properties import ListProperty
22 from kivy.lang import Builder
23
24
25 class SimpleListAdapter(Adapter):
26 '''A :class:`~kivy.adapters.simplelistadapter.SimpleListAdapter` is an
27 adapter around a Python list.
28
29 From :class:`~kivy.adapters.adapter.Adapter`, the
30 :class:`~kivy.adapters.simplelistadapter.ListAdapter` gets cls, template,
31 and args_converter properties.
32 '''
33
34 data = ListProperty([])
35 '''The data list property contains a list of objects (which can be strings)
36 that will be used directly if no args_converter function is provided. If
37 there is an args_converter, the data objects will be passed to it for
38 instantiating the item view class instances.
39
40 :data:`data` is a :class:`~kivy.properties.ListProperty` and
41 defaults to [].
42 '''
43
44 def __init__(self, **kwargs):
45 if 'data' not in kwargs:
46 raise Exception('list adapter: input must include data argument')
47 if type(kwargs['data']) not in (tuple, list):
48 raise Exception('list adapter: data must be a tuple or list')
49 super(SimpleListAdapter, self).__init__(**kwargs)
50
51 def get_count(self):
52 return len(self.data)
53
54 def get_data_item(self, index):
55 if index < 0 or index >= len(self.data):
56 return None
57 return self.data[index]
58
59 # Returns a view instance for an item.
60 def get_view(self, index):
61 item = self.get_data_item(index)
62
63 if item is None:
64 return None
65
66 item_args = self.args_converter(index, item)
67
68 if self.cls:
69 instance = self.cls(**item_args)
70 return instance
71 else:
72 return Builder.template(self.template, **item_args)
73
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/kivy/adapters/simplelistadapter.py b/kivy/adapters/simplelistadapter.py
--- a/kivy/adapters/simplelistadapter.py
+++ b/kivy/adapters/simplelistadapter.py
@@ -44,7 +44,8 @@
def __init__(self, **kwargs):
if 'data' not in kwargs:
raise Exception('list adapter: input must include data argument')
- if type(kwargs['data']) not in (tuple, list):
+ if not isinstance(kwargs['data'], list) and \
+ not isinstance(kwargs['data'], tuple):
raise Exception('list adapter: data must be a tuple or list')
super(SimpleListAdapter, self).__init__(**kwargs)
|
{"golden_diff": "diff --git a/kivy/adapters/simplelistadapter.py b/kivy/adapters/simplelistadapter.py\n--- a/kivy/adapters/simplelistadapter.py\n+++ b/kivy/adapters/simplelistadapter.py\n@@ -44,7 +44,8 @@\n def __init__(self, **kwargs):\n if 'data' not in kwargs:\n raise Exception('list adapter: input must include data argument')\n- if type(kwargs['data']) not in (tuple, list):\n+ if not isinstance(kwargs['data'], list) and \\\n+ not isinstance(kwargs['data'], tuple):\n raise Exception('list adapter: data must be a tuple or list')\n super(SimpleListAdapter, self).__init__(**kwargs)\n", "issue": "simplelistadapter should accept objects inheriting from list or tuple\nI'll found it usefull if it was possible to extend the list object that I pass to the simplelistadapter, but an exception is raised.\n\nReproduce :\n\n``` python\nfrom kivy.adapters.simplelistadapter import SimpleListAdapter\nclass ExtendedList(list):\n pass\n\nlist_adapter = SimpleListAdapter(data=ExtendedList())\n```\n\nA solution :\nIn kivy/adapters/simplelistadapter.py\n\n``` python\n 47 if type(kwargs['data']) not in (tuple, list): \n 48 raise Exception('list adapter: data must be a tuple or list') \n```\n\nMay be replaced by:\n\n``` python\nif not isinstance(kwargs['data'], list) and not isinstance(kwargs['data'], tuple)\n```\n\n", "before_files": [{"content": "'''\nSimpleListAdapter\n=================\n\n.. versionadded:: 1.5\n\n.. warning::\n\n This code is still experimental, and its API is subject to change in a\n future version.\n\nThe :class:`~kivy.adapters.simplelistadapter.SimpleListAdapter` is used for\nbasic lists. For example, it can be used for displaying a list of read-only\nstrings that do not require user interaction.\n\n'''\n\n__all__ = ('SimpleListAdapter', )\n\nfrom kivy.adapters.adapter import Adapter\nfrom kivy.properties import ListProperty\nfrom kivy.lang import Builder\n\n\nclass SimpleListAdapter(Adapter):\n '''A :class:`~kivy.adapters.simplelistadapter.SimpleListAdapter` is an\n adapter around a Python list.\n\n From :class:`~kivy.adapters.adapter.Adapter`, the\n :class:`~kivy.adapters.simplelistadapter.ListAdapter` gets cls, template,\n and args_converter properties.\n '''\n\n data = ListProperty([])\n '''The data list property contains a list of objects (which can be strings)\n that will be used directly if no args_converter function is provided. If\n there is an args_converter, the data objects will be passed to it for\n instantiating the item view class instances.\n\n :data:`data` is a :class:`~kivy.properties.ListProperty` and\n defaults to [].\n '''\n\n def __init__(self, **kwargs):\n if 'data' not in kwargs:\n raise Exception('list adapter: input must include data argument')\n if type(kwargs['data']) not in (tuple, list):\n raise Exception('list adapter: data must be a tuple or list')\n super(SimpleListAdapter, self).__init__(**kwargs)\n\n def get_count(self):\n return len(self.data)\n\n def get_data_item(self, index):\n if index < 0 or index >= len(self.data):\n return None\n return self.data[index]\n\n # Returns a view instance for an item.\n def get_view(self, index):\n item = self.get_data_item(index)\n\n if item is None:\n return None\n\n item_args = self.args_converter(index, item)\n\n if self.cls:\n instance = self.cls(**item_args)\n return instance\n else:\n return Builder.template(self.template, **item_args)\n", "path": "kivy/adapters/simplelistadapter.py"}], "after_files": [{"content": "'''\nSimpleListAdapter\n=================\n\n.. versionadded:: 1.5\n\n.. 
warning::\n\n This code is still experimental, and its API is subject to change in a\n future version.\n\nThe :class:`~kivy.adapters.simplelistadapter.SimpleListAdapter` is used for\nbasic lists. For example, it can be used for displaying a list of read-only\nstrings that do not require user interaction.\n\n'''\n\n__all__ = ('SimpleListAdapter', )\n\nfrom kivy.adapters.adapter import Adapter\nfrom kivy.properties import ListProperty\nfrom kivy.lang import Builder\n\n\nclass SimpleListAdapter(Adapter):\n '''A :class:`~kivy.adapters.simplelistadapter.SimpleListAdapter` is an\n adapter around a Python list.\n\n From :class:`~kivy.adapters.adapter.Adapter`, the\n :class:`~kivy.adapters.simplelistadapter.ListAdapter` gets cls, template,\n and args_converter properties.\n '''\n\n data = ListProperty([])\n '''The data list property contains a list of objects (which can be strings)\n that will be used directly if no args_converter function is provided. If\n there is an args_converter, the data objects will be passed to it for\n instantiating the item view class instances.\n\n :data:`data` is a :class:`~kivy.properties.ListProperty` and\n defaults to [].\n '''\n\n def __init__(self, **kwargs):\n if 'data' not in kwargs:\n raise Exception('list adapter: input must include data argument')\n if not isinstance(kwargs['data'], list) and \\\n not isinstance(kwargs['data'], tuple):\n raise Exception('list adapter: data must be a tuple or list')\n super(SimpleListAdapter, self).__init__(**kwargs)\n\n def get_count(self):\n return len(self.data)\n\n def get_data_item(self, index):\n if index < 0 or index >= len(self.data):\n return None\n return self.data[index]\n\n # Returns a view instance for an item.\n def get_view(self, index):\n item = self.get_data_item(index)\n\n if item is None:\n return None\n\n item_args = self.args_converter(index, item)\n\n if self.cls:\n instance = self.cls(**item_args)\n return instance\n else:\n return Builder.template(self.template, **item_args)\n", "path": "kivy/adapters/simplelistadapter.py"}]}
| 1,054 | 154 |
gh_patches_debug_26995
|
rasdani/github-patches
|
git_diff
|
WordPress__openverse-api-411
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add Sentry to API
## Problem
<!-- Describe a problem solved by this feature; or delete the section entirely. -->
We don't have any visibility into the API service. Sentry would be a good and easy first step.
## Description
<!-- Describe the feature and how it solves the problem. -->
Let's add Sentry. Long term we have goals of adding other monitoring, but Sentry is a good and easy first step.
## Additional context
<!-- Add any other context about the feature here; or delete the section entirely. -->
## Implementation
<!-- Replace the [ ] with [x] to check the box. -->
- [x] 🙋 I would be interested in implementing this feature.
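
For context, a minimal sketch of how the Sentry SDK is commonly initialised in a Django settings module; the DSN, sample rate, and environment values below are placeholders, not the project's actual configuration:

```python
# Hypothetical sketch -- the real configuration belongs in the project's Django settings module.
import sentry_sdk
from sentry_sdk.integrations.django import DjangoIntegration

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder DSN
    integrations=[DjangoIntegration()],
    traces_sample_rate=1.0,   # capture all transactions; tune down for production traffic
    send_default_pii=False,   # do not send user PII by default
    environment="production",  # placeholder environment name
)
```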
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `openverse_api/catalog/settings.py`
Content:
```
1 """
2 Django settings for catalog project.
3
4 Generated by 'django-admin startproject' using Django 2.0.5.
5
6 For more information on this file, see
7 https://docs.djangoproject.com/en/2.0/topics/settings/
8
9 For the full list of settings and their values, see
10 https://docs.djangoproject.com/en/2.0/ref/settings/
11 """
12
13 from pathlib import Path
14 from socket import gethostbyname, gethostname
15
16 from decouple import config
17
18
19 # Build paths inside the project like this: BASE_DIR.join('dir', 'subdir'...)
20 BASE_DIR = Path(__file__).resolve().parent.parent
21
22 # Where to collect static files in production/development deployments
23 STATIC_ROOT = "/var/api_static_content/static"
24
25 # Logo uploads
26 MEDIA_ROOT = "/var/api_media/"
27 MEDIA_URL = "/media/"
28
29 # Quick-start development settings - unsuitable for production
30 # See https://docs.djangoproject.com/en/2.0/howto/deployment/checklist/
31
32 # SECURITY WARNING: keep the secret key used in production secret!
33 SECRET_KEY = config("DJANGO_SECRET_KEY") # required
34
35 # SECURITY WARNING: don't run with debug turned on in production!
36 DEBUG = config("DJANGO_DEBUG_ENABLED", default=False, cast=bool)
37
38 ALLOWED_HOSTS = [
39 "api-dev.openverse.engineering",
40 "api.openverse.engineering",
41 gethostname(),
42 gethostbyname(gethostname()),
43 ]
44
45 if lb_url := config("LOAD_BALANCER_URL", default=""):
46 ALLOWED_HOSTS.append(lb_url)
47
48 if DEBUG:
49 ALLOWED_HOSTS += [
50 "localhost",
51 "127.0.0.1",
52 "0.0.0.0",
53 ]
54
55 # Domains that shortened links may point to
56 SHORT_URL_WHITELIST = {
57 "api-dev.openverse.engineering",
58 "api.openverse.engineering",
59 "localhost:8000",
60 }
61 SHORT_URL_PATH_WHITELIST = ["/v1/list", "/v1/images/"]
62
63 USE_S3 = config("USE_S3", default=False, cast=bool)
64
65 # Application definition
66
67 INSTALLED_APPS = [
68 "catalog",
69 "catalog.api",
70 "drf_yasg",
71 "django.contrib.admin",
72 "django.contrib.auth",
73 "django.contrib.contenttypes",
74 "django.contrib.sessions",
75 "django.contrib.messages",
76 "django.contrib.staticfiles",
77 "oauth2_provider",
78 "rest_framework",
79 "corsheaders",
80 "sslserver",
81 ]
82
83 if USE_S3:
84 DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
85 AWS_STORAGE_BUCKET_NAME = config("LOGOS_BUCKET", default="openverse_api-logos-prod")
86 AWS_S3_SIGNATURE_VERSION = "s3v4"
87 INSTALLED_APPS.append("storages")
88
89 MIDDLEWARE = [
90 "django.middleware.security.SecurityMiddleware",
91 "django.contrib.sessions.middleware.SessionMiddleware",
92 "corsheaders.middleware.CorsMiddleware",
93 "django.middleware.common.CommonMiddleware",
94 "django.middleware.csrf.CsrfViewMiddleware",
95 "django.contrib.auth.middleware.AuthenticationMiddleware",
96 "django.contrib.messages.middleware.MessageMiddleware",
97 "django.middleware.clickjacking.XFrameOptionsMiddleware",
98 "oauth2_provider.middleware.OAuth2TokenMiddleware",
99 ]
100
101 SWAGGER_SETTINGS = {"SECURITY_DEFINITIONS": {}}
102
103 OAUTH2_PROVIDER = {
104 "SCOPES": {
105 "read": "Read scope",
106 "write": "Write scope",
107 }
108 }
109
110 OAUTH2_PROVIDER_APPLICATION_MODEL = "api.ThrottledApplication"
111
112 REST_FRAMEWORK = {
113 "DEFAULT_AUTHENTICATION_CLASSES": (
114 "oauth2_provider.contrib.rest_framework.OAuth2Authentication",
115 ),
116 "DEFAULT_VERSIONING_CLASS": "rest_framework.versioning.URLPathVersioning",
117 "DEFAULT_RENDERER_CLASSES": (
118 "rest_framework.renderers.JSONRenderer",
119 "rest_framework.renderers.BrowsableAPIRenderer",
120 "rest_framework_xml.renderers.XMLRenderer",
121 ),
122 "DEFAULT_THROTTLE_CLASSES": (
123 "catalog.api.utils.throttle.BurstRateThrottle",
124 "catalog.api.utils.throttle.SustainedRateThrottle",
125 "catalog.api.utils.throttle.OAuth2IdThrottleSustainedRate",
126 "catalog.api.utils.throttle.OAuth2IdThrottleBurstRate",
127 "catalog.api.utils.throttle.EnhancedOAuth2IdThrottleSustainedRate",
128 "catalog.api.utils.throttle.EnhancedOAuth2IdThrottleBurstRate",
129 ),
130 "DEFAULT_THROTTLE_RATES": {
131 "anon_burst": "60/min",
132 "anon_sustained": "5000/day",
133 "oauth2_client_credentials_sustained": "10000/day",
134 "oauth2_client_credentials_burst": "100/min",
135 "enhanced_oauth2_client_credentials_sustained": "20000/day",
136 "enhanced_oauth2_client_credentials_burst": "200/min",
137 },
138 "EXCEPTION_HANDLER": "catalog.api.utils.exceptions.exception_handler",
139 }
140
141 if config("DISABLE_GLOBAL_THROTTLING", default=True, cast=bool):
142 del REST_FRAMEWORK["DEFAULT_THROTTLE_RATES"]
143 del REST_FRAMEWORK["DEFAULT_THROTTLE_CLASSES"]
144
145 REDIS_HOST = config("REDIS_HOST", default="localhost")
146 REDIS_PORT = config("REDIS_PORT", default=6379, cast=int)
147 REDIS_PASSWORD = config("REDIS_PASSWORD", default="")
148 CACHES = {
149 # Site cache writes to 'default'
150 "default": {
151 "BACKEND": "django_redis.cache.RedisCache",
152 "LOCATION": f"redis://{REDIS_HOST}:{REDIS_PORT}/0",
153 "OPTIONS": {
154 "CLIENT_CLASS": "django_redis.client.DefaultClient",
155 },
156 },
157 # For rapidly changing stats that we don't want to hammer the database with
158 "traffic_stats": {
159 "BACKEND": "django_redis.cache.RedisCache",
160 "LOCATION": f"redis://{REDIS_HOST}:{REDIS_PORT}/1",
161 "OPTIONS": {
162 "CLIENT_CLASS": "django_redis.client.DefaultClient",
163 },
164 },
165 # For ensuring consistency among multiple Django workers and servers.
166 # Used by Redlock.
167 "locks": {
168 "BACKEND": "django_redis.cache.RedisCache",
169 "LOCATION": f"redis://{REDIS_HOST}:{REDIS_PORT}/2",
170 "OPTIONS": {
171 "CLIENT_CLASS": "django_redis.client.DefaultClient",
172 },
173 },
174 }
175
176 # Produce CC-hosted thumbnails dynamically through a proxy.
177 THUMBNAIL_PROXY_URL = config("THUMBNAIL_PROXY_URL", default="http://localhost:8222")
178
179 THUMBNAIL_WIDTH_PX = 600
180
181 AUTHENTICATION_BACKENDS = (
182 "oauth2_provider.backends.OAuth2Backend",
183 "django.contrib.auth.backends.ModelBackend",
184 )
185
186 ROOT_URLCONF = "catalog.urls"
187
188 TEMPLATES = [
189 {
190 "BACKEND": "django.template.backends.django.DjangoTemplates",
191 "DIRS": [BASE_DIR.joinpath("catalog", "templates")],
192 "APP_DIRS": True,
193 "OPTIONS": {
194 "context_processors": [
195 "django.template.context_processors.debug",
196 "django.template.context_processors.request",
197 "django.contrib.auth.context_processors.auth",
198 "django.contrib.messages.context_processors.messages",
199 ],
200 },
201 },
202 ]
203
204 WSGI_APPLICATION = "catalog.wsgi.application"
205
206 # Database
207 # https://docs.djangoproject.com/en/2.0/ref/settings/#databases
208
209 DATABASES = {
210 "default": {
211 "ENGINE": "django.db.backends.postgresql",
212 "HOST": config("DJANGO_DATABASE_HOST", default="localhost"),
213 "PORT": config("DJANGO_DATABASE_PORT", default=5432, cast=int),
214 "USER": config("DJANGO_DATABASE_USER", default="deploy"),
215 "PASSWORD": config("DJANGO_DATABASE_PASSWORD", default="deploy"),
216 "NAME": config("DJANGO_DATABASE_NAME", default="openledger"),
217 },
218 "upstream": {
219 "ENGINE": "django.db.backends.postgresql",
220 "HOST": config("UPSTREAM_DATABASE_HOST", default="localhost"),
221 "PORT": config("UPSTREAM_DATABASE_PORT", default=5433, cast=int),
222 "USER": config("UPSTREAM_DATABASE_USER", default="deploy"),
223 "PASSWORD": config("UPSTREAM_DATABASE_PASSWORD", default="deploy"),
224 "NAME": config("UPSTREAM_DATABASE_NAME", default="openledger"),
225 },
226 }
227
228 # Password validation
229 # https://docs.djangoproject.com/en/2.0/ref/settings/#auth-password-validators
230
231 AUTH_PASSWORD_VALIDATORS = [
232 {
233 "NAME": "django.contrib.auth.password_validation"
234 ".UserAttributeSimilarityValidator",
235 },
236 {
237 "NAME": "django.contrib.auth.password_validation" ".MinimumLengthValidator",
238 },
239 {
240 "NAME": "django.contrib.auth.password_validation" ".CommonPasswordValidator",
241 },
242 {
243 "NAME": "django.contrib.auth.password_validation" ".NumericPasswordValidator",
244 },
245 ]
246
247 LOGGING = {
248 "version": 1,
249 "disable_existing_loggers": False,
250 "handlers": {
251 "console": {
252 "level": "INFO",
253 "class": "logging.StreamHandler",
254 },
255 },
256 "loggers": {
257 "django": {
258 "handlers": ["console"],
259 "level": "INFO",
260 "propagate": True,
261 },
262 # root logger
263 "": {
264 "level": "INFO",
265 "handlers": ["console"],
266 },
267 },
268 }
269
270 # Internationalization
271 # https://docs.djangoproject.com/en/2.0/topics/i18n/
272
273 LANGUAGE_CODE = "en-us"
274
275 TIME_ZONE = "UTC"
276
277 USE_I18N = True
278
279 USE_L10N = True
280
281 USE_TZ = True
282
283 # Static files (CSS, JavaScript, Images)
284 # https://docs.djangoproject.com/en/2.0/howto/static-files/
285
286 STATIC_URL = "/static/"
287
288 # Allow anybody to access the API from any domain
289 CORS_ORIGIN_ALLOW_ALL = True
290
291 # The version of the API. We follow the semantic version specification.
292 API_VERSION = config("SEMANTIC_VERSION", default="Version not specified")
293
294 # The contact email of the Openverse team
295 CONTACT_EMAIL = config("CONTACT_EMAIL", default="[email protected]")
296
297 WATERMARK_ENABLED = config("WATERMARK_ENABLED", default=False, cast=bool)
298
299 ELASTICSEARCH_URL = config("ELASTICSEARCH_URL", default="localhost")
300 ELASTICSEARCH_PORT = config("ELASTICSEARCH_PORT", default=9200, cast=int)
301 ELASTICSEARCH_AWS_REGION = config("ELASTICSEARCH_AWS_REGION", default="us-east-1")
302
303 # Additional settings for dev/prod environments
304 AWS_ACCESS_KEY_ID = config("AWS_ACCESS_KEY_ID", default="")
305 AWS_SECRET_ACCESS_KEY = config("AWS_SECRET_ACCESS_KEY", default="")
306
307 EMAIL_SENDER = config("EMAIL_SENDER", default="")
308 EMAIL_HOST = config("EMAIL_HOST", default="")
309 EMAIL_PORT = config("EMAIL_PORT", default=25, cast=int)
310 EMAIL_HOST_USER = config("EMAIL_HOST_USER", default="")
311 EMAIL_HOST_PASSWORD = config("EMAIL_HOST_PASSWORD", default="")
312 EMAIL_SUBJECT_PREFIX = "[noreply]"
313 EMAIL_USE_TLS = True
314
315 if EMAIL_HOST_USER or EMAIL_HOST_PASSWORD:
316 EMAIL_BACKEND = "django.core.mail.backends.smtp.EmailBackend"
317 else:
318 EMAIL_BACKEND = "django.core.mail.backends.console.EmailBackend"
319
320 # Log full Elasticsearch response
321 VERBOSE_ES_RESPONSE = config("DEBUG_SCORES", default=False, cast=bool)
322
323 # Whether to boost results by authority and popularity
324 USE_RANK_FEATURES = config("USE_RANK_FEATURES", default=True, cast=bool)
325
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/openverse_api/catalog/settings.py b/openverse_api/catalog/settings.py
--- a/openverse_api/catalog/settings.py
+++ b/openverse_api/catalog/settings.py
@@ -13,7 +13,9 @@
from pathlib import Path
from socket import gethostbyname, gethostname
+import sentry_sdk
from decouple import config
+from sentry_sdk.integrations.django import DjangoIntegration
# Build paths inside the project like this: BASE_DIR.join('dir', 'subdir'...)
@@ -35,6 +37,8 @@
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = config("DJANGO_DEBUG_ENABLED", default=False, cast=bool)
+PYTHON_ENV = config("PYTHON_ENV", default="production")
+
ALLOWED_HOSTS = [
"api-dev.openverse.engineering",
"api.openverse.engineering",
@@ -322,3 +326,18 @@
# Whether to boost results by authority and popularity
USE_RANK_FEATURES = config("USE_RANK_FEATURES", default=True, cast=bool)
+
+SENTRY_DSN = config(
+ "SENTRY_DSN",
+ default="https://[email protected]/6107216",
+)
+SENTRY_SAMPLE_RATE = config("SENTRY_SAMPLE_RATE", default=1.0, cast=float)
+
+if not DEBUG:
+ sentry_sdk.init(
+ dsn=SENTRY_DSN,
+ integrations=[DjangoIntegration()],
+ traces_sample_rate=SENTRY_SAMPLE_RATE,
+ send_default_pii=False,
+ environment=PYTHON_ENV,
+ )
|
{"golden_diff": "diff --git a/openverse_api/catalog/settings.py b/openverse_api/catalog/settings.py\n--- a/openverse_api/catalog/settings.py\n+++ b/openverse_api/catalog/settings.py\n@@ -13,7 +13,9 @@\n from pathlib import Path\n from socket import gethostbyname, gethostname\n \n+import sentry_sdk\n from decouple import config\n+from sentry_sdk.integrations.django import DjangoIntegration\n \n \n # Build paths inside the project like this: BASE_DIR.join('dir', 'subdir'...)\n@@ -35,6 +37,8 @@\n # SECURITY WARNING: don't run with debug turned on in production!\n DEBUG = config(\"DJANGO_DEBUG_ENABLED\", default=False, cast=bool)\n \n+PYTHON_ENV = config(\"PYTHON_ENV\", default=\"production\")\n+\n ALLOWED_HOSTS = [\n \"api-dev.openverse.engineering\",\n \"api.openverse.engineering\",\n@@ -322,3 +326,18 @@\n \n # Whether to boost results by authority and popularity\n USE_RANK_FEATURES = config(\"USE_RANK_FEATURES\", default=True, cast=bool)\n+\n+SENTRY_DSN = config(\n+ \"SENTRY_DSN\",\n+ default=\"https://[email protected]/6107216\",\n+)\n+SENTRY_SAMPLE_RATE = config(\"SENTRY_SAMPLE_RATE\", default=1.0, cast=float)\n+\n+if not DEBUG:\n+ sentry_sdk.init(\n+ dsn=SENTRY_DSN,\n+ integrations=[DjangoIntegration()],\n+ traces_sample_rate=SENTRY_SAMPLE_RATE,\n+ send_default_pii=False,\n+ environment=PYTHON_ENV,\n+ )\n", "issue": "Add Sentry to API\n## Problem\r\n<!-- Describe a problem solved by this feature; or delete the section entirely. -->\r\nWe don't have any visibility into the API service. Sentry would be a good and easy first step.\r\n\r\n## Description\r\n<!-- Describe the feature and how it solves the problem. -->\r\nLet's add Sentry. Long term we have goals of adding other monitoring but Sentry is a good and easy first step.\r\n\r\n## Additional context\r\n<!-- Add any other context about the feature here; or delete the section entirely. -->\r\n\r\n## Implementation\r\n<!-- Replace the [ ] with [x] to check the box. 
-->\r\n- [x] \ud83d\ude4b I would be interested in implementing this feature.\r\n\n", "before_files": [{"content": "\"\"\"\nDjango settings for catalog project.\n\nGenerated by 'django-admin startproject' using Django 2.0.5.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/2.0/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/2.0/ref/settings/\n\"\"\"\n\nfrom pathlib import Path\nfrom socket import gethostbyname, gethostname\n\nfrom decouple import config\n\n\n# Build paths inside the project like this: BASE_DIR.join('dir', 'subdir'...)\nBASE_DIR = Path(__file__).resolve().parent.parent\n\n# Where to collect static files in production/development deployments\nSTATIC_ROOT = \"/var/api_static_content/static\"\n\n# Logo uploads\nMEDIA_ROOT = \"/var/api_media/\"\nMEDIA_URL = \"/media/\"\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/2.0/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = config(\"DJANGO_SECRET_KEY\") # required\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = config(\"DJANGO_DEBUG_ENABLED\", default=False, cast=bool)\n\nALLOWED_HOSTS = [\n \"api-dev.openverse.engineering\",\n \"api.openverse.engineering\",\n gethostname(),\n gethostbyname(gethostname()),\n]\n\nif lb_url := config(\"LOAD_BALANCER_URL\", default=\"\"):\n ALLOWED_HOSTS.append(lb_url)\n\nif DEBUG:\n ALLOWED_HOSTS += [\n \"localhost\",\n \"127.0.0.1\",\n \"0.0.0.0\",\n ]\n\n# Domains that shortened links may point to\nSHORT_URL_WHITELIST = {\n \"api-dev.openverse.engineering\",\n \"api.openverse.engineering\",\n \"localhost:8000\",\n}\nSHORT_URL_PATH_WHITELIST = [\"/v1/list\", \"/v1/images/\"]\n\nUSE_S3 = config(\"USE_S3\", default=False, cast=bool)\n\n# Application definition\n\nINSTALLED_APPS = [\n \"catalog\",\n \"catalog.api\",\n \"drf_yasg\",\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"oauth2_provider\",\n \"rest_framework\",\n \"corsheaders\",\n \"sslserver\",\n]\n\nif USE_S3:\n DEFAULT_FILE_STORAGE = \"storages.backends.s3boto3.S3Boto3Storage\"\n AWS_STORAGE_BUCKET_NAME = config(\"LOGOS_BUCKET\", default=\"openverse_api-logos-prod\")\n AWS_S3_SIGNATURE_VERSION = \"s3v4\"\n INSTALLED_APPS.append(\"storages\")\n\nMIDDLEWARE = [\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"corsheaders.middleware.CorsMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n \"oauth2_provider.middleware.OAuth2TokenMiddleware\",\n]\n\nSWAGGER_SETTINGS = {\"SECURITY_DEFINITIONS\": {}}\n\nOAUTH2_PROVIDER = {\n \"SCOPES\": {\n \"read\": \"Read scope\",\n \"write\": \"Write scope\",\n }\n}\n\nOAUTH2_PROVIDER_APPLICATION_MODEL = \"api.ThrottledApplication\"\n\nREST_FRAMEWORK = {\n \"DEFAULT_AUTHENTICATION_CLASSES\": (\n \"oauth2_provider.contrib.rest_framework.OAuth2Authentication\",\n ),\n \"DEFAULT_VERSIONING_CLASS\": \"rest_framework.versioning.URLPathVersioning\",\n \"DEFAULT_RENDERER_CLASSES\": (\n \"rest_framework.renderers.JSONRenderer\",\n 
\"rest_framework.renderers.BrowsableAPIRenderer\",\n \"rest_framework_xml.renderers.XMLRenderer\",\n ),\n \"DEFAULT_THROTTLE_CLASSES\": (\n \"catalog.api.utils.throttle.BurstRateThrottle\",\n \"catalog.api.utils.throttle.SustainedRateThrottle\",\n \"catalog.api.utils.throttle.OAuth2IdThrottleSustainedRate\",\n \"catalog.api.utils.throttle.OAuth2IdThrottleBurstRate\",\n \"catalog.api.utils.throttle.EnhancedOAuth2IdThrottleSustainedRate\",\n \"catalog.api.utils.throttle.EnhancedOAuth2IdThrottleBurstRate\",\n ),\n \"DEFAULT_THROTTLE_RATES\": {\n \"anon_burst\": \"60/min\",\n \"anon_sustained\": \"5000/day\",\n \"oauth2_client_credentials_sustained\": \"10000/day\",\n \"oauth2_client_credentials_burst\": \"100/min\",\n \"enhanced_oauth2_client_credentials_sustained\": \"20000/day\",\n \"enhanced_oauth2_client_credentials_burst\": \"200/min\",\n },\n \"EXCEPTION_HANDLER\": \"catalog.api.utils.exceptions.exception_handler\",\n}\n\nif config(\"DISABLE_GLOBAL_THROTTLING\", default=True, cast=bool):\n del REST_FRAMEWORK[\"DEFAULT_THROTTLE_RATES\"]\n del REST_FRAMEWORK[\"DEFAULT_THROTTLE_CLASSES\"]\n\nREDIS_HOST = config(\"REDIS_HOST\", default=\"localhost\")\nREDIS_PORT = config(\"REDIS_PORT\", default=6379, cast=int)\nREDIS_PASSWORD = config(\"REDIS_PASSWORD\", default=\"\")\nCACHES = {\n # Site cache writes to 'default'\n \"default\": {\n \"BACKEND\": \"django_redis.cache.RedisCache\",\n \"LOCATION\": f\"redis://{REDIS_HOST}:{REDIS_PORT}/0\",\n \"OPTIONS\": {\n \"CLIENT_CLASS\": \"django_redis.client.DefaultClient\",\n },\n },\n # For rapidly changing stats that we don't want to hammer the database with\n \"traffic_stats\": {\n \"BACKEND\": \"django_redis.cache.RedisCache\",\n \"LOCATION\": f\"redis://{REDIS_HOST}:{REDIS_PORT}/1\",\n \"OPTIONS\": {\n \"CLIENT_CLASS\": \"django_redis.client.DefaultClient\",\n },\n },\n # For ensuring consistency among multiple Django workers and servers.\n # Used by Redlock.\n \"locks\": {\n \"BACKEND\": \"django_redis.cache.RedisCache\",\n \"LOCATION\": f\"redis://{REDIS_HOST}:{REDIS_PORT}/2\",\n \"OPTIONS\": {\n \"CLIENT_CLASS\": \"django_redis.client.DefaultClient\",\n },\n },\n}\n\n# Produce CC-hosted thumbnails dynamically through a proxy.\nTHUMBNAIL_PROXY_URL = config(\"THUMBNAIL_PROXY_URL\", default=\"http://localhost:8222\")\n\nTHUMBNAIL_WIDTH_PX = 600\n\nAUTHENTICATION_BACKENDS = (\n \"oauth2_provider.backends.OAuth2Backend\",\n \"django.contrib.auth.backends.ModelBackend\",\n)\n\nROOT_URLCONF = \"catalog.urls\"\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [BASE_DIR.joinpath(\"catalog\", \"templates\")],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n ],\n },\n },\n]\n\nWSGI_APPLICATION = \"catalog.wsgi.application\"\n\n# Database\n# https://docs.djangoproject.com/en/2.0/ref/settings/#databases\n\nDATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.postgresql\",\n \"HOST\": config(\"DJANGO_DATABASE_HOST\", default=\"localhost\"),\n \"PORT\": config(\"DJANGO_DATABASE_PORT\", default=5432, cast=int),\n \"USER\": config(\"DJANGO_DATABASE_USER\", default=\"deploy\"),\n \"PASSWORD\": config(\"DJANGO_DATABASE_PASSWORD\", default=\"deploy\"),\n \"NAME\": config(\"DJANGO_DATABASE_NAME\", default=\"openledger\"),\n },\n \"upstream\": {\n \"ENGINE\": 
\"django.db.backends.postgresql\",\n \"HOST\": config(\"UPSTREAM_DATABASE_HOST\", default=\"localhost\"),\n \"PORT\": config(\"UPSTREAM_DATABASE_PORT\", default=5433, cast=int),\n \"USER\": config(\"UPSTREAM_DATABASE_USER\", default=\"deploy\"),\n \"PASSWORD\": config(\"UPSTREAM_DATABASE_PASSWORD\", default=\"deploy\"),\n \"NAME\": config(\"UPSTREAM_DATABASE_NAME\", default=\"openledger\"),\n },\n}\n\n# Password validation\n# https://docs.djangoproject.com/en/2.0/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": \"django.contrib.auth.password_validation\"\n \".UserAttributeSimilarityValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation\" \".MinimumLengthValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation\" \".CommonPasswordValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation\" \".NumericPasswordValidator\",\n },\n]\n\nLOGGING = {\n \"version\": 1,\n \"disable_existing_loggers\": False,\n \"handlers\": {\n \"console\": {\n \"level\": \"INFO\",\n \"class\": \"logging.StreamHandler\",\n },\n },\n \"loggers\": {\n \"django\": {\n \"handlers\": [\"console\"],\n \"level\": \"INFO\",\n \"propagate\": True,\n },\n # root logger\n \"\": {\n \"level\": \"INFO\",\n \"handlers\": [\"console\"],\n },\n },\n}\n\n# Internationalization\n# https://docs.djangoproject.com/en/2.0/topics/i18n/\n\nLANGUAGE_CODE = \"en-us\"\n\nTIME_ZONE = \"UTC\"\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/2.0/howto/static-files/\n\nSTATIC_URL = \"/static/\"\n\n# Allow anybody to access the API from any domain\nCORS_ORIGIN_ALLOW_ALL = True\n\n# The version of the API. We follow the semantic version specification.\nAPI_VERSION = config(\"SEMANTIC_VERSION\", default=\"Version not specified\")\n\n# The contact email of the Openverse team\nCONTACT_EMAIL = config(\"CONTACT_EMAIL\", default=\"[email protected]\")\n\nWATERMARK_ENABLED = config(\"WATERMARK_ENABLED\", default=False, cast=bool)\n\nELASTICSEARCH_URL = config(\"ELASTICSEARCH_URL\", default=\"localhost\")\nELASTICSEARCH_PORT = config(\"ELASTICSEARCH_PORT\", default=9200, cast=int)\nELASTICSEARCH_AWS_REGION = config(\"ELASTICSEARCH_AWS_REGION\", default=\"us-east-1\")\n\n# Additional settings for dev/prod environments\nAWS_ACCESS_KEY_ID = config(\"AWS_ACCESS_KEY_ID\", default=\"\")\nAWS_SECRET_ACCESS_KEY = config(\"AWS_SECRET_ACCESS_KEY\", default=\"\")\n\nEMAIL_SENDER = config(\"EMAIL_SENDER\", default=\"\")\nEMAIL_HOST = config(\"EMAIL_HOST\", default=\"\")\nEMAIL_PORT = config(\"EMAIL_PORT\", default=25, cast=int)\nEMAIL_HOST_USER = config(\"EMAIL_HOST_USER\", default=\"\")\nEMAIL_HOST_PASSWORD = config(\"EMAIL_HOST_PASSWORD\", default=\"\")\nEMAIL_SUBJECT_PREFIX = \"[noreply]\"\nEMAIL_USE_TLS = True\n\nif EMAIL_HOST_USER or EMAIL_HOST_PASSWORD:\n EMAIL_BACKEND = \"django.core.mail.backends.smtp.EmailBackend\"\nelse:\n EMAIL_BACKEND = \"django.core.mail.backends.console.EmailBackend\"\n\n# Log full Elasticsearch response\nVERBOSE_ES_RESPONSE = config(\"DEBUG_SCORES\", default=False, cast=bool)\n\n# Whether to boost results by authority and popularity\nUSE_RANK_FEATURES = config(\"USE_RANK_FEATURES\", default=True, cast=bool)\n", "path": "openverse_api/catalog/settings.py"}], "after_files": [{"content": "\"\"\"\nDjango settings for catalog project.\n\nGenerated by 'django-admin startproject' using Django 2.0.5.\n\nFor more information on this file, 
see\nhttps://docs.djangoproject.com/en/2.0/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/2.0/ref/settings/\n\"\"\"\n\nfrom pathlib import Path\nfrom socket import gethostbyname, gethostname\n\nimport sentry_sdk\nfrom decouple import config\nfrom sentry_sdk.integrations.django import DjangoIntegration\n\n\n# Build paths inside the project like this: BASE_DIR.join('dir', 'subdir'...)\nBASE_DIR = Path(__file__).resolve().parent.parent\n\n# Where to collect static files in production/development deployments\nSTATIC_ROOT = \"/var/api_static_content/static\"\n\n# Logo uploads\nMEDIA_ROOT = \"/var/api_media/\"\nMEDIA_URL = \"/media/\"\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/2.0/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = config(\"DJANGO_SECRET_KEY\") # required\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = config(\"DJANGO_DEBUG_ENABLED\", default=False, cast=bool)\n\nPYTHON_ENV = config(\"PYTHON_ENV\", default=\"production\")\n\nALLOWED_HOSTS = [\n \"api-dev.openverse.engineering\",\n \"api.openverse.engineering\",\n gethostname(),\n gethostbyname(gethostname()),\n]\n\nif lb_url := config(\"LOAD_BALANCER_URL\", default=\"\"):\n ALLOWED_HOSTS.append(lb_url)\n\nif DEBUG:\n ALLOWED_HOSTS += [\n \"localhost\",\n \"127.0.0.1\",\n \"0.0.0.0\",\n ]\n\n# Domains that shortened links may point to\nSHORT_URL_WHITELIST = {\n \"api-dev.openverse.engineering\",\n \"api.openverse.engineering\",\n \"localhost:8000\",\n}\nSHORT_URL_PATH_WHITELIST = [\"/v1/list\", \"/v1/images/\"]\n\nUSE_S3 = config(\"USE_S3\", default=False, cast=bool)\n\n# Application definition\n\nINSTALLED_APPS = [\n \"catalog\",\n \"catalog.api\",\n \"drf_yasg\",\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"oauth2_provider\",\n \"rest_framework\",\n \"corsheaders\",\n \"sslserver\",\n]\n\nif USE_S3:\n DEFAULT_FILE_STORAGE = \"storages.backends.s3boto3.S3Boto3Storage\"\n AWS_STORAGE_BUCKET_NAME = config(\"LOGOS_BUCKET\", default=\"openverse_api-logos-prod\")\n AWS_S3_SIGNATURE_VERSION = \"s3v4\"\n INSTALLED_APPS.append(\"storages\")\n\nMIDDLEWARE = [\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"corsheaders.middleware.CorsMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n \"oauth2_provider.middleware.OAuth2TokenMiddleware\",\n]\n\nSWAGGER_SETTINGS = {\"SECURITY_DEFINITIONS\": {}}\n\nOAUTH2_PROVIDER = {\n \"SCOPES\": {\n \"read\": \"Read scope\",\n \"write\": \"Write scope\",\n }\n}\n\nOAUTH2_PROVIDER_APPLICATION_MODEL = \"api.ThrottledApplication\"\n\nREST_FRAMEWORK = {\n \"DEFAULT_AUTHENTICATION_CLASSES\": (\n \"oauth2_provider.contrib.rest_framework.OAuth2Authentication\",\n ),\n \"DEFAULT_VERSIONING_CLASS\": \"rest_framework.versioning.URLPathVersioning\",\n \"DEFAULT_RENDERER_CLASSES\": (\n \"rest_framework.renderers.JSONRenderer\",\n \"rest_framework.renderers.BrowsableAPIRenderer\",\n \"rest_framework_xml.renderers.XMLRenderer\",\n ),\n 
\"DEFAULT_THROTTLE_CLASSES\": (\n \"catalog.api.utils.throttle.BurstRateThrottle\",\n \"catalog.api.utils.throttle.SustainedRateThrottle\",\n \"catalog.api.utils.throttle.OAuth2IdThrottleSustainedRate\",\n \"catalog.api.utils.throttle.OAuth2IdThrottleBurstRate\",\n \"catalog.api.utils.throttle.EnhancedOAuth2IdThrottleSustainedRate\",\n \"catalog.api.utils.throttle.EnhancedOAuth2IdThrottleBurstRate\",\n ),\n \"DEFAULT_THROTTLE_RATES\": {\n \"anon_burst\": \"60/min\",\n \"anon_sustained\": \"5000/day\",\n \"oauth2_client_credentials_sustained\": \"10000/day\",\n \"oauth2_client_credentials_burst\": \"100/min\",\n \"enhanced_oauth2_client_credentials_sustained\": \"20000/day\",\n \"enhanced_oauth2_client_credentials_burst\": \"200/min\",\n },\n \"EXCEPTION_HANDLER\": \"catalog.api.utils.exceptions.exception_handler\",\n}\n\nif config(\"DISABLE_GLOBAL_THROTTLING\", default=True, cast=bool):\n del REST_FRAMEWORK[\"DEFAULT_THROTTLE_RATES\"]\n del REST_FRAMEWORK[\"DEFAULT_THROTTLE_CLASSES\"]\n\nREDIS_HOST = config(\"REDIS_HOST\", default=\"localhost\")\nREDIS_PORT = config(\"REDIS_PORT\", default=6379, cast=int)\nREDIS_PASSWORD = config(\"REDIS_PASSWORD\", default=\"\")\nCACHES = {\n # Site cache writes to 'default'\n \"default\": {\n \"BACKEND\": \"django_redis.cache.RedisCache\",\n \"LOCATION\": f\"redis://{REDIS_HOST}:{REDIS_PORT}/0\",\n \"OPTIONS\": {\n \"CLIENT_CLASS\": \"django_redis.client.DefaultClient\",\n },\n },\n # For rapidly changing stats that we don't want to hammer the database with\n \"traffic_stats\": {\n \"BACKEND\": \"django_redis.cache.RedisCache\",\n \"LOCATION\": f\"redis://{REDIS_HOST}:{REDIS_PORT}/1\",\n \"OPTIONS\": {\n \"CLIENT_CLASS\": \"django_redis.client.DefaultClient\",\n },\n },\n # For ensuring consistency among multiple Django workers and servers.\n # Used by Redlock.\n \"locks\": {\n \"BACKEND\": \"django_redis.cache.RedisCache\",\n \"LOCATION\": f\"redis://{REDIS_HOST}:{REDIS_PORT}/2\",\n \"OPTIONS\": {\n \"CLIENT_CLASS\": \"django_redis.client.DefaultClient\",\n },\n },\n}\n\n# Produce CC-hosted thumbnails dynamically through a proxy.\nTHUMBNAIL_PROXY_URL = config(\"THUMBNAIL_PROXY_URL\", default=\"http://localhost:8222\")\n\nTHUMBNAIL_WIDTH_PX = 600\n\nAUTHENTICATION_BACKENDS = (\n \"oauth2_provider.backends.OAuth2Backend\",\n \"django.contrib.auth.backends.ModelBackend\",\n)\n\nROOT_URLCONF = \"catalog.urls\"\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [BASE_DIR.joinpath(\"catalog\", \"templates\")],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n ],\n },\n },\n]\n\nWSGI_APPLICATION = \"catalog.wsgi.application\"\n\n# Database\n# https://docs.djangoproject.com/en/2.0/ref/settings/#databases\n\nDATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.postgresql\",\n \"HOST\": config(\"DJANGO_DATABASE_HOST\", default=\"localhost\"),\n \"PORT\": config(\"DJANGO_DATABASE_PORT\", default=5432, cast=int),\n \"USER\": config(\"DJANGO_DATABASE_USER\", default=\"deploy\"),\n \"PASSWORD\": config(\"DJANGO_DATABASE_PASSWORD\", default=\"deploy\"),\n \"NAME\": config(\"DJANGO_DATABASE_NAME\", default=\"openledger\"),\n },\n \"upstream\": {\n \"ENGINE\": \"django.db.backends.postgresql\",\n \"HOST\": config(\"UPSTREAM_DATABASE_HOST\", default=\"localhost\"),\n \"PORT\": 
config(\"UPSTREAM_DATABASE_PORT\", default=5433, cast=int),\n \"USER\": config(\"UPSTREAM_DATABASE_USER\", default=\"deploy\"),\n \"PASSWORD\": config(\"UPSTREAM_DATABASE_PASSWORD\", default=\"deploy\"),\n \"NAME\": config(\"UPSTREAM_DATABASE_NAME\", default=\"openledger\"),\n },\n}\n\n# Password validation\n# https://docs.djangoproject.com/en/2.0/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": \"django.contrib.auth.password_validation\"\n \".UserAttributeSimilarityValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation\" \".MinimumLengthValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation\" \".CommonPasswordValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation\" \".NumericPasswordValidator\",\n },\n]\n\nLOGGING = {\n \"version\": 1,\n \"disable_existing_loggers\": False,\n \"handlers\": {\n \"console\": {\n \"level\": \"INFO\",\n \"class\": \"logging.StreamHandler\",\n },\n },\n \"loggers\": {\n \"django\": {\n \"handlers\": [\"console\"],\n \"level\": \"INFO\",\n \"propagate\": True,\n },\n # root logger\n \"\": {\n \"level\": \"INFO\",\n \"handlers\": [\"console\"],\n },\n },\n}\n\n# Internationalization\n# https://docs.djangoproject.com/en/2.0/topics/i18n/\n\nLANGUAGE_CODE = \"en-us\"\n\nTIME_ZONE = \"UTC\"\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/2.0/howto/static-files/\n\nSTATIC_URL = \"/static/\"\n\n# Allow anybody to access the API from any domain\nCORS_ORIGIN_ALLOW_ALL = True\n\n# The version of the API. We follow the semantic version specification.\nAPI_VERSION = config(\"SEMANTIC_VERSION\", default=\"Version not specified\")\n\n# The contact email of the Openverse team\nCONTACT_EMAIL = config(\"CONTACT_EMAIL\", default=\"[email protected]\")\n\nWATERMARK_ENABLED = config(\"WATERMARK_ENABLED\", default=False, cast=bool)\n\nELASTICSEARCH_URL = config(\"ELASTICSEARCH_URL\", default=\"localhost\")\nELASTICSEARCH_PORT = config(\"ELASTICSEARCH_PORT\", default=9200, cast=int)\nELASTICSEARCH_AWS_REGION = config(\"ELASTICSEARCH_AWS_REGION\", default=\"us-east-1\")\n\n# Additional settings for dev/prod environments\nAWS_ACCESS_KEY_ID = config(\"AWS_ACCESS_KEY_ID\", default=\"\")\nAWS_SECRET_ACCESS_KEY = config(\"AWS_SECRET_ACCESS_KEY\", default=\"\")\n\nEMAIL_SENDER = config(\"EMAIL_SENDER\", default=\"\")\nEMAIL_HOST = config(\"EMAIL_HOST\", default=\"\")\nEMAIL_PORT = config(\"EMAIL_PORT\", default=25, cast=int)\nEMAIL_HOST_USER = config(\"EMAIL_HOST_USER\", default=\"\")\nEMAIL_HOST_PASSWORD = config(\"EMAIL_HOST_PASSWORD\", default=\"\")\nEMAIL_SUBJECT_PREFIX = \"[noreply]\"\nEMAIL_USE_TLS = True\n\nif EMAIL_HOST_USER or EMAIL_HOST_PASSWORD:\n EMAIL_BACKEND = \"django.core.mail.backends.smtp.EmailBackend\"\nelse:\n EMAIL_BACKEND = \"django.core.mail.backends.console.EmailBackend\"\n\n# Log full Elasticsearch response\nVERBOSE_ES_RESPONSE = config(\"DEBUG_SCORES\", default=False, cast=bool)\n\n# Whether to boost results by authority and popularity\nUSE_RANK_FEATURES = config(\"USE_RANK_FEATURES\", default=True, cast=bool)\n\nSENTRY_DSN = config(\n \"SENTRY_DSN\",\n default=\"https://[email protected]/6107216\",\n)\nSENTRY_SAMPLE_RATE = config(\"SENTRY_SAMPLE_RATE\", default=1.0, cast=float)\n\nif not DEBUG:\n sentry_sdk.init(\n dsn=SENTRY_DSN,\n integrations=[DjangoIntegration()],\n traces_sample_rate=SENTRY_SAMPLE_RATE,\n send_default_pii=False,\n environment=PYTHON_ENV,\n )\n", "path": 
"openverse_api/catalog/settings.py"}]}
| 3,774 | 392 |
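A note on the `CACHES` block in the settings excerpt above: it maps three separate Redis databases to named aliases, and application code selects one by alias. A minimal sketch of that usage follows — the `record_view` helper and key names are invented for illustration; only the alias names come from the settings.

```python
# Sketch only: assumes a Django project configured with the CACHES aliases
# shown above ("default", "traffic_stats", "locks") backed by django-redis.
from django.core.cache import caches

def record_view(identifier: str) -> None:
    stats = caches["traffic_stats"]      # Redis DB 1: rapidly changing counters
    stats.add(f"views:{identifier}", 0)  # create the key if it does not exist yet
    stats.incr(f"views:{identifier}")    # then increment it
```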
gh_patches_debug_24859
|
rasdani/github-patches
|
git_diff
|
zulip__zulip-16242
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Enable translations for hotspots subsystem
There are unused translations in the hotspots subsystem that could be enabled, since finished translations are already available. At the moment there is a mix of English and the configured user language.
Affected file: zerver/lib/hotspots.py
Example (mixed English/German):

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `zerver/lib/hotspots.py`
Content:
```
1 # See https://zulip.readthedocs.io/en/latest/subsystems/hotspots.html
2 # for documentation on this subsystem.
3 from typing import Dict, List
4
5 from django.conf import settings
6 from django.utils.translation import ugettext as _
7
8 from zerver.models import UserHotspot, UserProfile
9
10 ALL_HOTSPOTS: Dict[str, Dict[str, str]] = {
11 'intro_reply': {
12 'title': _('Reply to a message'),
13 'description': _('Click anywhere on a message to reply.'),
14 },
15 'intro_streams': {
16 'title': _('Catch up on a stream'),
17 'description': _('Messages sent to a stream are seen by everyone subscribed '
18 'to that stream. Try clicking on one of the stream links below.'),
19 },
20 'intro_topics': {
21 'title': _('Topics'),
22 'description': _('Every message has a topic. Topics keep conversations '
23 'easy to follow, and make it easy to reply to conversations that start '
24 'while you are offline.'),
25 },
26 'intro_gear': {
27 'title': _('Settings'),
28 'description': _('Go to Settings to configure your '
29 'notifications and display settings.'),
30 },
31 'intro_compose': {
32 'title': _('Compose'),
33 'description': _('Click here to start a new conversation. Pick a topic '
34 '(2-3 words is best), and give it a go!'),
35 },
36 }
37
38 def get_next_hotspots(user: UserProfile) -> List[Dict[str, object]]:
39 # For manual testing, it can be convenient to set
40 # ALWAYS_SEND_ALL_HOTSPOTS=True in `zproject/dev_settings.py` to
41 # make it easy to click on all of the hotspots. Note that
42 # ALWAYS_SEND_ALL_HOTSPOTS has some bugs; see ReadTheDocs (link
43 # above) for details.
44 if settings.ALWAYS_SEND_ALL_HOTSPOTS:
45 return [{
46 'name': hotspot,
47 'title': ALL_HOTSPOTS[hotspot]['title'],
48 'description': ALL_HOTSPOTS[hotspot]['description'],
49 'delay': 0,
50 } for hotspot in ALL_HOTSPOTS]
51
52 if user.tutorial_status == UserProfile.TUTORIAL_FINISHED:
53 return []
54
55 seen_hotspots = frozenset(UserHotspot.objects.filter(user=user).values_list('hotspot', flat=True))
56 for hotspot in ['intro_reply', 'intro_streams', 'intro_topics', 'intro_gear', 'intro_compose']:
57 if hotspot not in seen_hotspots:
58 return [{
59 'name': hotspot,
60 'title': ALL_HOTSPOTS[hotspot]['title'],
61 'description': ALL_HOTSPOTS[hotspot]['description'],
62 'delay': 0.5,
63 }]
64
65 user.tutorial_status = UserProfile.TUTORIAL_FINISHED
66 user.save(update_fields=['tutorial_status'])
67 return []
68
69 def copy_hotpots(source_profile: UserProfile, target_profile: UserProfile) -> None:
70 for userhotspot in frozenset(UserHotspot.objects.filter(user=source_profile)):
71 UserHotspot.objects.create(user=target_profile, hotspot=userhotspot.hotspot,
72 timestamp=userhotspot.timestamp)
73
74 target_profile.tutorial_status = source_profile.tutorial_status
75 target_profile.onboarding_steps = source_profile.onboarding_steps
76 target_profile.save(update_fields=['tutorial_status', 'onboarding_steps'])
77
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/zerver/lib/hotspots.py b/zerver/lib/hotspots.py
--- a/zerver/lib/hotspots.py
+++ b/zerver/lib/hotspots.py
@@ -3,7 +3,7 @@
from typing import Dict, List
from django.conf import settings
-from django.utils.translation import ugettext as _
+from django.utils.translation import ugettext_lazy as _
from zerver.models import UserHotspot, UserProfile
@@ -44,8 +44,8 @@
if settings.ALWAYS_SEND_ALL_HOTSPOTS:
return [{
'name': hotspot,
- 'title': ALL_HOTSPOTS[hotspot]['title'],
- 'description': ALL_HOTSPOTS[hotspot]['description'],
+ 'title': str(ALL_HOTSPOTS[hotspot]['title']),
+ 'description': str(ALL_HOTSPOTS[hotspot]['description']),
'delay': 0,
} for hotspot in ALL_HOTSPOTS]
@@ -57,8 +57,8 @@
if hotspot not in seen_hotspots:
return [{
'name': hotspot,
- 'title': ALL_HOTSPOTS[hotspot]['title'],
- 'description': ALL_HOTSPOTS[hotspot]['description'],
+ 'title': str(ALL_HOTSPOTS[hotspot]['title']),
+ 'description': str(ALL_HOTSPOTS[hotspot]['description']),
'delay': 0.5,
}]
|
{"golden_diff": "diff --git a/zerver/lib/hotspots.py b/zerver/lib/hotspots.py\n--- a/zerver/lib/hotspots.py\n+++ b/zerver/lib/hotspots.py\n@@ -3,7 +3,7 @@\n from typing import Dict, List\n \n from django.conf import settings\n-from django.utils.translation import ugettext as _\n+from django.utils.translation import ugettext_lazy as _\n \n from zerver.models import UserHotspot, UserProfile\n \n@@ -44,8 +44,8 @@\n if settings.ALWAYS_SEND_ALL_HOTSPOTS:\n return [{\n 'name': hotspot,\n- 'title': ALL_HOTSPOTS[hotspot]['title'],\n- 'description': ALL_HOTSPOTS[hotspot]['description'],\n+ 'title': str(ALL_HOTSPOTS[hotspot]['title']),\n+ 'description': str(ALL_HOTSPOTS[hotspot]['description']),\n 'delay': 0,\n } for hotspot in ALL_HOTSPOTS]\n \n@@ -57,8 +57,8 @@\n if hotspot not in seen_hotspots:\n return [{\n 'name': hotspot,\n- 'title': ALL_HOTSPOTS[hotspot]['title'],\n- 'description': ALL_HOTSPOTS[hotspot]['description'],\n+ 'title': str(ALL_HOTSPOTS[hotspot]['title']),\n+ 'description': str(ALL_HOTSPOTS[hotspot]['description']),\n 'delay': 0.5,\n }]\n", "issue": "Enable translations for hotspots subsystem\nThere are unused translations at the hotspots subsystem, which could be enabled due to finished and available translations. At the moment there is a mix of English and the configured user language.\r\n\r\nAffected file: zerver/lib/hotspots.py\r\n\r\nExample (mixed English/German):\r\n\r\n\n", "before_files": [{"content": "# See https://zulip.readthedocs.io/en/latest/subsystems/hotspots.html\n# for documentation on this subsystem.\nfrom typing import Dict, List\n\nfrom django.conf import settings\nfrom django.utils.translation import ugettext as _\n\nfrom zerver.models import UserHotspot, UserProfile\n\nALL_HOTSPOTS: Dict[str, Dict[str, str]] = {\n 'intro_reply': {\n 'title': _('Reply to a message'),\n 'description': _('Click anywhere on a message to reply.'),\n },\n 'intro_streams': {\n 'title': _('Catch up on a stream'),\n 'description': _('Messages sent to a stream are seen by everyone subscribed '\n 'to that stream. Try clicking on one of the stream links below.'),\n },\n 'intro_topics': {\n 'title': _('Topics'),\n 'description': _('Every message has a topic. Topics keep conversations '\n 'easy to follow, and make it easy to reply to conversations that start '\n 'while you are offline.'),\n },\n 'intro_gear': {\n 'title': _('Settings'),\n 'description': _('Go to Settings to configure your '\n 'notifications and display settings.'),\n },\n 'intro_compose': {\n 'title': _('Compose'),\n 'description': _('Click here to start a new conversation. Pick a topic '\n '(2-3 words is best), and give it a go!'),\n },\n}\n\ndef get_next_hotspots(user: UserProfile) -> List[Dict[str, object]]:\n # For manual testing, it can be convenient to set\n # ALWAYS_SEND_ALL_HOTSPOTS=True in `zproject/dev_settings.py` to\n # make it easy to click on all of the hotspots. 
Note that\n # ALWAYS_SEND_ALL_HOTSPOTS has some bugs; see ReadTheDocs (link\n # above) for details.\n if settings.ALWAYS_SEND_ALL_HOTSPOTS:\n return [{\n 'name': hotspot,\n 'title': ALL_HOTSPOTS[hotspot]['title'],\n 'description': ALL_HOTSPOTS[hotspot]['description'],\n 'delay': 0,\n } for hotspot in ALL_HOTSPOTS]\n\n if user.tutorial_status == UserProfile.TUTORIAL_FINISHED:\n return []\n\n seen_hotspots = frozenset(UserHotspot.objects.filter(user=user).values_list('hotspot', flat=True))\n for hotspot in ['intro_reply', 'intro_streams', 'intro_topics', 'intro_gear', 'intro_compose']:\n if hotspot not in seen_hotspots:\n return [{\n 'name': hotspot,\n 'title': ALL_HOTSPOTS[hotspot]['title'],\n 'description': ALL_HOTSPOTS[hotspot]['description'],\n 'delay': 0.5,\n }]\n\n user.tutorial_status = UserProfile.TUTORIAL_FINISHED\n user.save(update_fields=['tutorial_status'])\n return []\n\ndef copy_hotpots(source_profile: UserProfile, target_profile: UserProfile) -> None:\n for userhotspot in frozenset(UserHotspot.objects.filter(user=source_profile)):\n UserHotspot.objects.create(user=target_profile, hotspot=userhotspot.hotspot,\n timestamp=userhotspot.timestamp)\n\n target_profile.tutorial_status = source_profile.tutorial_status\n target_profile.onboarding_steps = source_profile.onboarding_steps\n target_profile.save(update_fields=['tutorial_status', 'onboarding_steps'])\n", "path": "zerver/lib/hotspots.py"}], "after_files": [{"content": "# See https://zulip.readthedocs.io/en/latest/subsystems/hotspots.html\n# for documentation on this subsystem.\nfrom typing import Dict, List\n\nfrom django.conf import settings\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom zerver.models import UserHotspot, UserProfile\n\nALL_HOTSPOTS: Dict[str, Dict[str, str]] = {\n 'intro_reply': {\n 'title': _('Reply to a message'),\n 'description': _('Click anywhere on a message to reply.'),\n },\n 'intro_streams': {\n 'title': _('Catch up on a stream'),\n 'description': _('Messages sent to a stream are seen by everyone subscribed '\n 'to that stream. Try clicking on one of the stream links below.'),\n },\n 'intro_topics': {\n 'title': _('Topics'),\n 'description': _('Every message has a topic. Topics keep conversations '\n 'easy to follow, and make it easy to reply to conversations that start '\n 'while you are offline.'),\n },\n 'intro_gear': {\n 'title': _('Settings'),\n 'description': _('Go to Settings to configure your '\n 'notifications and display settings.'),\n },\n 'intro_compose': {\n 'title': _('Compose'),\n 'description': _('Click here to start a new conversation. Pick a topic '\n '(2-3 words is best), and give it a go!'),\n },\n}\n\ndef get_next_hotspots(user: UserProfile) -> List[Dict[str, object]]:\n # For manual testing, it can be convenient to set\n # ALWAYS_SEND_ALL_HOTSPOTS=True in `zproject/dev_settings.py` to\n # make it easy to click on all of the hotspots. 
Note that\n # ALWAYS_SEND_ALL_HOTSPOTS has some bugs; see ReadTheDocs (link\n # above) for details.\n if settings.ALWAYS_SEND_ALL_HOTSPOTS:\n return [{\n 'name': hotspot,\n 'title': str(ALL_HOTSPOTS[hotspot]['title']),\n 'description': str(ALL_HOTSPOTS[hotspot]['description']),\n 'delay': 0,\n } for hotspot in ALL_HOTSPOTS]\n\n if user.tutorial_status == UserProfile.TUTORIAL_FINISHED:\n return []\n\n seen_hotspots = frozenset(UserHotspot.objects.filter(user=user).values_list('hotspot', flat=True))\n for hotspot in ['intro_reply', 'intro_streams', 'intro_topics', 'intro_gear', 'intro_compose']:\n if hotspot not in seen_hotspots:\n return [{\n 'name': hotspot,\n 'title': str(ALL_HOTSPOTS[hotspot]['title']),\n 'description': str(ALL_HOTSPOTS[hotspot]['description']),\n 'delay': 0.5,\n }]\n\n user.tutorial_status = UserProfile.TUTORIAL_FINISHED\n user.save(update_fields=['tutorial_status'])\n return []\n\ndef copy_hotpots(source_profile: UserProfile, target_profile: UserProfile) -> None:\n for userhotspot in frozenset(UserHotspot.objects.filter(user=source_profile)):\n UserHotspot.objects.create(user=target_profile, hotspot=userhotspot.hotspot,\n timestamp=userhotspot.timestamp)\n\n target_profile.tutorial_status = source_profile.tutorial_status\n target_profile.onboarding_steps = source_profile.onboarding_steps\n target_profile.save(update_fields=['tutorial_status', 'onboarding_steps'])\n", "path": "zerver/lib/hotspots.py"}]}
| 1,247 | 314 |
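A note on the fix above: `ugettext` performs the lookup at import time, so module-level constants such as `ALL_HOTSPOTS` get translated once, before any user's language is known, whereas `ugettext_lazy` defers the lookup until the value is coerced with `str()` inside a request. The sketch below illustrates that difference under an assumed, fully configured Django project; the `TITLE` constant and `title_for` helper are illustrative, not Zulip code.

```python
# Assumes Django settings with USE_I18N enabled and compiled translations.
from django.utils import translation
from django.utils.translation import ugettext_lazy as _

TITLE = _("Reply to a message")   # stored as a lazy proxy, not a plain str

def title_for(language_code: str) -> str:
    with translation.override(language_code):
        return str(TITLE)         # the actual translation happens here
```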
gh_patches_debug_13021
|
rasdani/github-patches
|
git_diff
|
mkdocs__mkdocs-173
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update PyPI description
At the moment I wouldn't be tempted if I first saw this page.
https://pypi.python.org/pypi/mkdocs
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 from __future__ import print_function
5 from setuptools import setup
6 import re
7 import os
8 import sys
9
10
11 name = 'mkdocs'
12 package = 'mkdocs'
13 description = 'In progress.'
14 url = 'http://www.mkdocs.org'
15 author = 'Tom Christie'
16 author_email = '[email protected]'
17 license = 'BSD'
18 install_requires = [
19 'Jinja2>=2.7.1',
20 'Markdown>=2.3.1,<2.5',
21 'PyYAML>=3.10',
22 'watchdog>=0.7.0',
23 'ghp-import>=0.4.1'
24 ]
25
26 long_description = """Work in progress."""
27
28
29 def get_version(package):
30 """
31 Return package version as listed in `__version__` in `init.py`.
32 """
33 init_py = open(os.path.join(package, '__init__.py')).read()
34 return re.search("^__version__ = ['\"]([^'\"]+)['\"]", init_py, re.MULTILINE).group(1)
35
36
37 def get_packages(package):
38 """
39 Return root package and all sub-packages.
40 """
41 return [dirpath
42 for dirpath, dirnames, filenames in os.walk(package)
43 if os.path.exists(os.path.join(dirpath, '__init__.py'))]
44
45
46 def get_package_data(package):
47 """
48 Return all files under the root package, that are not in a
49 package themselves.
50 """
51 walk = [(dirpath.replace(package + os.sep, '', 1), filenames)
52 for dirpath, dirnames, filenames in os.walk(package)
53 if not os.path.exists(os.path.join(dirpath, '__init__.py'))]
54
55 filepaths = []
56 for base, filenames in walk:
57 filepaths.extend([os.path.join(base, filename)
58 for filename in filenames])
59 return {package: filepaths}
60
61
62 if sys.argv[-1] == 'publish':
63 os.system("python setup.py sdist upload")
64 args = {'version': get_version(package)}
65 print("You probably want to also tag the version now:")
66 print(" git tag -a %(version)s -m 'version %(version)s'" % args)
67 print(" git push --tags")
68 sys.exit()
69
70
71 setup(
72 name=name,
73 version=get_version(package),
74 url=url,
75 license=license,
76 description=description,
77 long_description=long_description,
78 author=author,
79 author_email=author_email,
80 packages=get_packages(package),
81 package_data=get_package_data(package),
82 install_requires=install_requires,
83 entry_points={
84 'console_scripts': [
85 'mkdocs = mkdocs.main:run_main',
86 ],
87 },
88 classifiers=[
89 'Development Status :: 5 - Production/Stable',
90 'Environment :: Console',
91 'Environment :: Web Environment',
92 'Intended Audience :: Developers',
93 'License :: OSI Approved :: BSD License',
94 'Operating System :: OS Independent',
95 'Programming Language :: Python',
96 'Programming Language :: Python :: 2',
97 'Programming Language :: Python :: 2.6',
98 'Programming Language :: Python :: 2.7',
99 'Programming Language :: Python :: 3',
100 'Programming Language :: Python :: 3.3',
101 'Programming Language :: Python :: 3.4',
102 'Topic :: Documentation',
103 'Topic :: Text Processing',
104 ]
105 )
106
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -10,7 +10,7 @@
name = 'mkdocs'
package = 'mkdocs'
-description = 'In progress.'
+description = 'Project documentation with Markdown.'
url = 'http://www.mkdocs.org'
author = 'Tom Christie'
author_email = '[email protected]'
@@ -23,7 +23,12 @@
'ghp-import>=0.4.1'
]
-long_description = """Work in progress."""
+long_description = (
+ "MkDocs is a fast, simple and downright gorgeous static site generator "
+ "that's geared towards building project documentation. Documentation "
+ "source files are written in Markdown, and configured with a single YAML "
+ "configuration file."
+)
def get_version(package):
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -10,7 +10,7 @@\n \n name = 'mkdocs'\n package = 'mkdocs'\n-description = 'In progress.'\n+description = 'Project documentation with Markdown.'\n url = 'http://www.mkdocs.org'\n author = 'Tom Christie'\n author_email = '[email protected]'\n@@ -23,7 +23,12 @@\n 'ghp-import>=0.4.1'\n ]\n \n-long_description = \"\"\"Work in progress.\"\"\"\n+long_description = (\n+ \"MkDocs is a fast, simple and downright gorgeous static site generator \"\n+ \"that's geared towards building project documentation. Documentation \"\n+ \"source files are written in Markdown, and configured with a single YAML \"\n+ \"configuration file.\"\n+)\n \n \n def get_version(package):\n", "issue": "Update PyPI description\nAt the moment I wouldn't be tempted if I first seen this page.\n\nhttps://pypi.python.org/pypi/mkdocs\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nfrom __future__ import print_function\nfrom setuptools import setup\nimport re\nimport os\nimport sys\n\n\nname = 'mkdocs'\npackage = 'mkdocs'\ndescription = 'In progress.'\nurl = 'http://www.mkdocs.org'\nauthor = 'Tom Christie'\nauthor_email = '[email protected]'\nlicense = 'BSD'\ninstall_requires = [\n 'Jinja2>=2.7.1',\n 'Markdown>=2.3.1,<2.5',\n 'PyYAML>=3.10',\n 'watchdog>=0.7.0',\n 'ghp-import>=0.4.1'\n]\n\nlong_description = \"\"\"Work in progress.\"\"\"\n\n\ndef get_version(package):\n \"\"\"\n Return package version as listed in `__version__` in `init.py`.\n \"\"\"\n init_py = open(os.path.join(package, '__init__.py')).read()\n return re.search(\"^__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", init_py, re.MULTILINE).group(1)\n\n\ndef get_packages(package):\n \"\"\"\n Return root package and all sub-packages.\n \"\"\"\n return [dirpath\n for dirpath, dirnames, filenames in os.walk(package)\n if os.path.exists(os.path.join(dirpath, '__init__.py'))]\n\n\ndef get_package_data(package):\n \"\"\"\n Return all files under the root package, that are not in a\n package themselves.\n \"\"\"\n walk = [(dirpath.replace(package + os.sep, '', 1), filenames)\n for dirpath, dirnames, filenames in os.walk(package)\n if not os.path.exists(os.path.join(dirpath, '__init__.py'))]\n\n filepaths = []\n for base, filenames in walk:\n filepaths.extend([os.path.join(base, filename)\n for filename in filenames])\n return {package: filepaths}\n\n\nif sys.argv[-1] == 'publish':\n os.system(\"python setup.py sdist upload\")\n args = {'version': get_version(package)}\n print(\"You probably want to also tag the version now:\")\n print(\" git tag -a %(version)s -m 'version %(version)s'\" % args)\n print(\" git push --tags\")\n sys.exit()\n\n\nsetup(\n name=name,\n version=get_version(package),\n url=url,\n license=license,\n description=description,\n long_description=long_description,\n author=author,\n author_email=author_email,\n packages=get_packages(package),\n package_data=get_package_data(package),\n install_requires=install_requires,\n entry_points={\n 'console_scripts': [\n 'mkdocs = mkdocs.main:run_main',\n ],\n },\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python 
:: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Topic :: Documentation',\n 'Topic :: Text Processing',\n ]\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nfrom __future__ import print_function\nfrom setuptools import setup\nimport re\nimport os\nimport sys\n\n\nname = 'mkdocs'\npackage = 'mkdocs'\ndescription = 'Project documentation with Markdown.'\nurl = 'http://www.mkdocs.org'\nauthor = 'Tom Christie'\nauthor_email = '[email protected]'\nlicense = 'BSD'\ninstall_requires = [\n 'Jinja2>=2.7.1',\n 'Markdown>=2.3.1,<2.5',\n 'PyYAML>=3.10',\n 'watchdog>=0.7.0',\n 'ghp-import>=0.4.1'\n]\n\nlong_description = (\n \"MkDocs is a fast, simple and downright gorgeous static site generator \"\n \"that's geared towards building project documentation. Documentation \"\n \"source files are written in Markdown, and configured with a single YAML \"\n \"configuration file.\"\n)\n\n\ndef get_version(package):\n \"\"\"\n Return package version as listed in `__version__` in `init.py`.\n \"\"\"\n init_py = open(os.path.join(package, '__init__.py')).read()\n return re.search(\"^__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", init_py, re.MULTILINE).group(1)\n\n\ndef get_packages(package):\n \"\"\"\n Return root package and all sub-packages.\n \"\"\"\n return [dirpath\n for dirpath, dirnames, filenames in os.walk(package)\n if os.path.exists(os.path.join(dirpath, '__init__.py'))]\n\n\ndef get_package_data(package):\n \"\"\"\n Return all files under the root package, that are not in a\n package themselves.\n \"\"\"\n walk = [(dirpath.replace(package + os.sep, '', 1), filenames)\n for dirpath, dirnames, filenames in os.walk(package)\n if not os.path.exists(os.path.join(dirpath, '__init__.py'))]\n\n filepaths = []\n for base, filenames in walk:\n filepaths.extend([os.path.join(base, filename)\n for filename in filenames])\n return {package: filepaths}\n\n\nif sys.argv[-1] == 'publish':\n os.system(\"python setup.py sdist upload\")\n args = {'version': get_version(package)}\n print(\"You probably want to also tag the version now:\")\n print(\" git tag -a %(version)s -m 'version %(version)s'\" % args)\n print(\" git push --tags\")\n sys.exit()\n\n\nsetup(\n name=name,\n version=get_version(package),\n url=url,\n license=license,\n description=description,\n long_description=long_description,\n author=author,\n author_email=author_email,\n packages=get_packages(package),\n package_data=get_package_data(package),\n install_requires=install_requires,\n entry_points={\n 'console_scripts': [\n 'mkdocs = mkdocs.main:run_main',\n ],\n },\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Topic :: Documentation',\n 'Topic :: Text Processing',\n ]\n)\n", "path": "setup.py"}]}
| 1,234 | 189 |
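A note on the fix above: the new `long_description` uses implicit concatenation of adjacent string literals inside parentheses, so it is one ordinary string rather than a tuple. The snippet below only demonstrates that behaviour; the asserts are illustrative.

```python
# Adjacent string literals are joined at compile time into a single str.
long_description = (
    "MkDocs is a fast, simple and downright gorgeous static site generator "
    "that's geared towards building project documentation."
)
assert isinstance(long_description, str)
assert "generator that's geared" in long_description
```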
gh_patches_debug_3389
|
rasdani/github-patches
|
git_diff
|
freedomofpress__securedrop-5011
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Please make the rqrequeue service quieter
## Description
The rqrequeue service feels compelled to report that it has nothing to do, resulting in an endless stream of "No interrupted jobs found in started job registry." messages. This is not helpful during normal operations, and annoying during development.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `securedrop/worker.py`
Content:
```
1 import logging
2 import os
3 from typing import Optional, List
4
5 from redis import Redis
6 from rq.queue import Queue
7 from rq.worker import Worker, WorkerStatus
8 from rq.exceptions import InvalidJobOperation, NoSuchJobError
9 from rq.registry import StartedJobRegistry
10
11 from sdconfig import config
12
13
14 def create_queue(name=None, timeout=3600):
15 # type: (str, int) -> Queue
16 """
17 Create an rq ``Queue`` named ``name`` with default timeout ``timeout``.
18
19 If ``name`` is omitted, ``config.RQ_WORKER_NAME`` is used.
20 """
21 if name is None:
22 name = config.RQ_WORKER_NAME
23 q = Queue(name=name, connection=Redis(), default_timeout=timeout)
24 return q
25
26
27 def rq_workers(queue=None):
28 # type: (Queue) -> List[Worker]
29 """
30 Returns the list of current rq ``Worker``s.
31 """
32
33 return Worker.all(connection=Redis(), queue=queue)
34
35
36 def worker_for_job(job_id):
37 # type: (str) -> Optional[Worker]
38 """
39 If the job is being run, return its ``Worker``.
40 """
41 for worker in rq_workers():
42 # If the worker process no longer exists, skip it. From "man 2
43 # kill": "If sig is 0, then no signal is sent, but existence
44 # and permission checks are still performed; this can be used
45 # to check for the existence of a process ID or process group
46 # ID that the caller is permitted to signal."
47 try:
48 os.kill(worker.pid, 0)
49 except OSError:
50 continue
51
52 # If it's running and working on the given job, return it.
53 if worker.state == WorkerStatus.BUSY and job_id == worker.get_current_job_id():
54 return worker
55 return None
56
57
58 def requeue_interrupted_jobs(queue_name=None):
59 # type: (str) -> None
60 """
61 Requeues jobs found in the given queue's started job registry.
62
63 Only restarts those that aren't already queued or being run.
64
65 When rq starts a job, it records it in the queue's started job
66 registry. If the server is rebooted before the job completes, the
67 job is not automatically restarted from the information in the
68 registry. For tasks like secure deletion of files, this means that
69 information thought to be deleted is still present in the case of
70 seizure or compromise. We have manage.py tasks to clean such files
71 up, but this utility attempts to reduce the need for manual
72 intervention by automatically resuming interrupted jobs.
73
74 This function is predicated on a risky assumption: that all jobs
75 are idempotent. At time of writing, we use rq for securely
76 deleting submission files and hashing submissions for the ETag
77 header. Both of these can be safely repeated. If we add rq tasks
78 that cannot, this function should be improved to omit those.
79 """
80 queue = create_queue(queue_name)
81 started_job_registry = StartedJobRegistry(queue=queue)
82
83 queued_job_ids = queue.get_job_ids()
84 logging.debug("queued jobs: {}".format(queued_job_ids))
85 started_job_ids = started_job_registry.get_job_ids()
86 logging.debug("started jobs: {}".format(started_job_ids))
87 job_ids = [j for j in started_job_ids if j not in queued_job_ids]
88 logging.debug("candidate job ids: {}".format(job_ids))
89
90 if not job_ids:
91 logging.info("No interrupted jobs found in started job registry.")
92
93 for job_id in job_ids:
94 logging.debug("Considering job %s", job_id)
95 try:
96 job = started_job_registry.job_class.fetch(job_id, started_job_registry.connection)
97 except NoSuchJobError as e:
98 logging.error(
99 "Could not find details for job %s: %s", job_id, e
100 )
101 continue
102
103 logging.debug(
104 "Job %s enqueued at %s, started at %s", job_id, job.enqueued_at, job.started_at
105 )
106
107 worker = worker_for_job(job_id)
108 if worker:
109 logging.info(
110 "Skipping job %s, which is already being run by worker %s", job_id, worker.key
111 )
112 continue
113
114 logging.info("Requeuing job %s", job)
115
116 try:
117 started_job_registry.remove(job)
118 except InvalidJobOperation as e:
119 logging.error("Could not remove job %s from started job registry: %s", job, e)
120 continue
121
122 try:
123 queue.enqueue_job(job)
124 logging.debug("Job now enqueued at %s, started at %s", job.enqueued_at, job.started_at)
125 except Exception as e:
126 logging.error("Could not requeue job %s: %s", job, e)
127 continue
128
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/securedrop/worker.py b/securedrop/worker.py
--- a/securedrop/worker.py
+++ b/securedrop/worker.py
@@ -88,7 +88,7 @@
logging.debug("candidate job ids: {}".format(job_ids))
if not job_ids:
- logging.info("No interrupted jobs found in started job registry.")
+ logging.debug("No interrupted jobs found in started job registry.")
for job_id in job_ids:
logging.debug("Considering job %s", job_id)
|
{"golden_diff": "diff --git a/securedrop/worker.py b/securedrop/worker.py\n--- a/securedrop/worker.py\n+++ b/securedrop/worker.py\n@@ -88,7 +88,7 @@\n logging.debug(\"candidate job ids: {}\".format(job_ids))\n \n if not job_ids:\n- logging.info(\"No interrupted jobs found in started job registry.\")\n+ logging.debug(\"No interrupted jobs found in started job registry.\")\n \n for job_id in job_ids:\n logging.debug(\"Considering job %s\", job_id)\n", "issue": "Please make the rqrequeue service quieter\n## Description\r\n\r\nThe rqrequeue service feels compelled to report that it has nothing to do, resulting in an endless stream of \"No interrupted jobs found in started job registry.\" messages. This is not helpful during normal operations, and annoying during development.\n", "before_files": [{"content": "import logging\nimport os\nfrom typing import Optional, List\n\nfrom redis import Redis\nfrom rq.queue import Queue\nfrom rq.worker import Worker, WorkerStatus\nfrom rq.exceptions import InvalidJobOperation, NoSuchJobError\nfrom rq.registry import StartedJobRegistry\n\nfrom sdconfig import config\n\n\ndef create_queue(name=None, timeout=3600):\n # type: (str, int) -> Queue\n \"\"\"\n Create an rq ``Queue`` named ``name`` with default timeout ``timeout``.\n\n If ``name`` is omitted, ``config.RQ_WORKER_NAME`` is used.\n \"\"\"\n if name is None:\n name = config.RQ_WORKER_NAME\n q = Queue(name=name, connection=Redis(), default_timeout=timeout)\n return q\n\n\ndef rq_workers(queue=None):\n # type: (Queue) -> List[Worker]\n \"\"\"\n Returns the list of current rq ``Worker``s.\n \"\"\"\n\n return Worker.all(connection=Redis(), queue=queue)\n\n\ndef worker_for_job(job_id):\n # type: (str) -> Optional[Worker]\n \"\"\"\n If the job is being run, return its ``Worker``.\n \"\"\"\n for worker in rq_workers():\n # If the worker process no longer exists, skip it. From \"man 2\n # kill\": \"If sig is 0, then no signal is sent, but existence\n # and permission checks are still performed; this can be used\n # to check for the existence of a process ID or process group\n # ID that the caller is permitted to signal.\"\n try:\n os.kill(worker.pid, 0)\n except OSError:\n continue\n\n # If it's running and working on the given job, return it.\n if worker.state == WorkerStatus.BUSY and job_id == worker.get_current_job_id():\n return worker\n return None\n\n\ndef requeue_interrupted_jobs(queue_name=None):\n # type: (str) -> None\n \"\"\"\n Requeues jobs found in the given queue's started job registry.\n\n Only restarts those that aren't already queued or being run.\n\n When rq starts a job, it records it in the queue's started job\n registry. If the server is rebooted before the job completes, the\n job is not automatically restarted from the information in the\n registry. For tasks like secure deletion of files, this means that\n information thought to be deleted is still present in the case of\n seizure or compromise. We have manage.py tasks to clean such files\n up, but this utility attempts to reduce the need for manual\n intervention by automatically resuming interrupted jobs.\n\n This function is predicated on a risky assumption: that all jobs\n are idempotent. At time of writing, we use rq for securely\n deleting submission files and hashing submissions for the ETag\n header. Both of these can be safely repeated. 
If we add rq tasks\n that cannot, this function should be improved to omit those.\n \"\"\"\n queue = create_queue(queue_name)\n started_job_registry = StartedJobRegistry(queue=queue)\n\n queued_job_ids = queue.get_job_ids()\n logging.debug(\"queued jobs: {}\".format(queued_job_ids))\n started_job_ids = started_job_registry.get_job_ids()\n logging.debug(\"started jobs: {}\".format(started_job_ids))\n job_ids = [j for j in started_job_ids if j not in queued_job_ids]\n logging.debug(\"candidate job ids: {}\".format(job_ids))\n\n if not job_ids:\n logging.info(\"No interrupted jobs found in started job registry.\")\n\n for job_id in job_ids:\n logging.debug(\"Considering job %s\", job_id)\n try:\n job = started_job_registry.job_class.fetch(job_id, started_job_registry.connection)\n except NoSuchJobError as e:\n logging.error(\n \"Could not find details for job %s: %s\", job_id, e\n )\n continue\n\n logging.debug(\n \"Job %s enqueued at %s, started at %s\", job_id, job.enqueued_at, job.started_at\n )\n\n worker = worker_for_job(job_id)\n if worker:\n logging.info(\n \"Skipping job %s, which is already being run by worker %s\", job_id, worker.key\n )\n continue\n\n logging.info(\"Requeuing job %s\", job)\n\n try:\n started_job_registry.remove(job)\n except InvalidJobOperation as e:\n logging.error(\"Could not remove job %s from started job registry: %s\", job, e)\n continue\n\n try:\n queue.enqueue_job(job)\n logging.debug(\"Job now enqueued at %s, started at %s\", job.enqueued_at, job.started_at)\n except Exception as e:\n logging.error(\"Could not requeue job %s: %s\", job, e)\n continue\n", "path": "securedrop/worker.py"}], "after_files": [{"content": "import logging\nimport os\nfrom typing import Optional, List\n\nfrom redis import Redis\nfrom rq.queue import Queue\nfrom rq.worker import Worker, WorkerStatus\nfrom rq.exceptions import InvalidJobOperation, NoSuchJobError\nfrom rq.registry import StartedJobRegistry\n\nfrom sdconfig import config\n\n\ndef create_queue(name=None, timeout=3600):\n # type: (str, int) -> Queue\n \"\"\"\n Create an rq ``Queue`` named ``name`` with default timeout ``timeout``.\n\n If ``name`` is omitted, ``config.RQ_WORKER_NAME`` is used.\n \"\"\"\n if name is None:\n name = config.RQ_WORKER_NAME\n q = Queue(name=name, connection=Redis(), default_timeout=timeout)\n return q\n\n\ndef rq_workers(queue=None):\n # type: (Queue) -> List[Worker]\n \"\"\"\n Returns the list of current rq ``Worker``s.\n \"\"\"\n\n return Worker.all(connection=Redis(), queue=queue)\n\n\ndef worker_for_job(job_id):\n # type: (str) -> Optional[Worker]\n \"\"\"\n If the job is being run, return its ``Worker``.\n \"\"\"\n for worker in rq_workers():\n # If the worker process no longer exists, skip it. From \"man 2\n # kill\": \"If sig is 0, then no signal is sent, but existence\n # and permission checks are still performed; this can be used\n # to check for the existence of a process ID or process group\n # ID that the caller is permitted to signal.\"\n try:\n os.kill(worker.pid, 0)\n except OSError:\n continue\n\n # If it's running and working on the given job, return it.\n if worker.state == WorkerStatus.BUSY and job_id == worker.get_current_job_id():\n return worker\n return None\n\n\ndef requeue_interrupted_jobs(queue_name=None):\n # type: (str) -> None\n \"\"\"\n Requeues jobs found in the given queue's started job registry.\n\n Only restarts those that aren't already queued or being run.\n\n When rq starts a job, it records it in the queue's started job\n registry. 
If the server is rebooted before the job completes, the\n job is not automatically restarted from the information in the\n registry. For tasks like secure deletion of files, this means that\n information thought to be deleted is still present in the case of\n seizure or compromise. We have manage.py tasks to clean such files\n up, but this utility attempts to reduce the need for manual\n intervention by automatically resuming interrupted jobs.\n\n This function is predicated on a risky assumption: that all jobs\n are idempotent. At time of writing, we use rq for securely\n deleting submission files and hashing submissions for the ETag\n header. Both of these can be safely repeated. If we add rq tasks\n that cannot, this function should be improved to omit those.\n \"\"\"\n queue = create_queue(queue_name)\n started_job_registry = StartedJobRegistry(queue=queue)\n\n queued_job_ids = queue.get_job_ids()\n logging.debug(\"queued jobs: {}\".format(queued_job_ids))\n started_job_ids = started_job_registry.get_job_ids()\n logging.debug(\"started jobs: {}\".format(started_job_ids))\n job_ids = [j for j in started_job_ids if j not in queued_job_ids]\n logging.debug(\"candidate job ids: {}\".format(job_ids))\n\n if not job_ids:\n logging.debug(\"No interrupted jobs found in started job registry.\")\n\n for job_id in job_ids:\n logging.debug(\"Considering job %s\", job_id)\n try:\n job = started_job_registry.job_class.fetch(job_id, started_job_registry.connection)\n except NoSuchJobError as e:\n logging.error(\n \"Could not find details for job %s: %s\", job_id, e\n )\n continue\n\n logging.debug(\n \"Job %s enqueued at %s, started at %s\", job_id, job.enqueued_at, job.started_at\n )\n\n worker = worker_for_job(job_id)\n if worker:\n logging.info(\n \"Skipping job %s, which is already being run by worker %s\", job_id, worker.key\n )\n continue\n\n logging.info(\"Requeuing job %s\", job)\n\n try:\n started_job_registry.remove(job)\n except InvalidJobOperation as e:\n logging.error(\"Could not remove job %s from started job registry: %s\", job, e)\n continue\n\n try:\n queue.enqueue_job(job)\n logging.debug(\"Job now enqueued at %s, started at %s\", job.enqueued_at, job.started_at)\n except Exception as e:\n logging.error(\"Could not requeue job %s: %s\", job, e)\n continue\n", "path": "securedrop/worker.py"}]}
| 1,645 | 117 |
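A note on the fix above: it relies on standard logging level filtering, so once the service's logger is configured at INFO the "nothing to do" message is dropped while real requeue events still appear. A self-contained sketch follows; the logger name and job id are made up.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("securedrop.worker")

log.debug("No interrupted jobs found in started job registry.")  # filtered out at INFO
log.info("Requeuing job %s", "example-job-id")                    # still emitted
```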
gh_patches_debug_25831
|
rasdani/github-patches
|
git_diff
|
larq__larq-93
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Docs: Add links to source code
This is really handy if people want to understand what's going on behind the scenes or want to implement more advanced stuff
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `generate_api_docs.py`
Content:
```
1 """https://github.com/NiklasRosenstein/pydoc-markdown/blob/master/pydocmd/__main__.py"""
2
3 import os
4 import sys
5 import yaml
6
7 from pydocmd.document import Index
8 from pydocmd.imp import dir_object
9 from pydocmd.loader import PythonLoader
10 from pydocmd.preprocessor import Preprocessor
11
12
13 with open("apidocs.yml", "r") as stream:
14 api_structure = yaml.safe_load(stream)
15
16 # Build the index and document structure first, we load the actual
17 # docstrings at a later point.
18 print("Building index...")
19 index = Index()
20
21
22 def add_sections(doc, object_names, depth=1):
23 if isinstance(object_names, list):
24 [add_sections(doc, x, depth) for x in object_names]
25 elif isinstance(object_names, dict):
26 for key, subsections in object_names.items():
27 add_sections(doc, key, depth)
28 add_sections(doc, subsections, depth + 1)
29 elif isinstance(object_names, str):
30 # Check how many levels of recursion we should be going.
31 expand_depth = len(object_names)
32 object_names = object_names.rstrip("+")
33 expand_depth -= len(object_names)
34
35 def create_sections(name, level):
36 if level > expand_depth:
37 return
38 index.new_section(doc, name, depth=depth + level, header_type="markdown")
39 for sub in dir_object(name, "line", False):
40 sub = name + "." + sub
41 create_sections(sub, level + 1)
42
43 create_sections(object_names, 0)
44 else:
45 raise RuntimeError(object_names)
46
47
48 # Make sure that we can find modules from the current working directory,
49 # and have them take precedence over installed modules.
50 sys.path.insert(0, ".")
51
52 for pages in api_structure:
53 for fname, object_names in pages.items():
54 doc = index.new_document(fname)
55 add_sections(doc, object_names)
56
57 loader = PythonLoader({})
58 preproc = Preprocessor({})
59
60 preproc.link_lookup = {}
61 for file, doc in index.documents.items():
62 for section in doc.sections:
63 preproc.link_lookup[section.identifier] = file
64 # Load the docstrings and fill the sections.
65 print("Started generating documentation...")
66 for doc in index.documents.values():
67 for section in filter(lambda s: s.identifier, doc.sections):
68 loader.load_section(section)
69 preproc.preprocess_section(section)
70
71 # Write out all the generated documents.
72 os.makedirs(os.path.join("docs", "api"), exist_ok=True)
73 for fname, doc in index.documents.items():
74 with open(os.path.join("docs", "api", fname), "w") as fp:
75 for section in doc.sections:
76 section.render(fp)
77
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/generate_api_docs.py b/generate_api_docs.py
--- a/generate_api_docs.py
+++ b/generate_api_docs.py
@@ -1,5 +1,6 @@
"""https://github.com/NiklasRosenstein/pydoc-markdown/blob/master/pydocmd/__main__.py"""
+import inspect
import os
import sys
import yaml
@@ -10,6 +11,23 @@
from pydocmd.preprocessor import Preprocessor
+def callable_to_source_link(obj, scope):
+ path = scope.__file__.lstrip(".")
+ source = inspect.getsourcelines(obj)
+ line = source[-1] + 1 if source[0][0].startswith("@") else source[-1]
+ link = f"https://github.com/plumerai/larq/blob/master{path}#L{line}"
+ return f'<a class="headerlink code-link" style="float:right;" href="{link}" title="Source Code"></a>'
+
+
+class PythonLoaderWithSource(PythonLoader):
+ def load_section(self, section):
+ super().load_section(section)
+ obj = section.loader_context["obj"]
+ if callable(obj):
+ scope = section.loader_context["scope"]
+ section.title += callable_to_source_link(obj, scope)
+
+
with open("apidocs.yml", "r") as stream:
api_structure = yaml.safe_load(stream)
@@ -54,7 +72,7 @@
doc = index.new_document(fname)
add_sections(doc, object_names)
-loader = PythonLoader({})
+loader = PythonLoaderWithSource({})
preproc = Preprocessor({})
preproc.link_lookup = {}
|
{"golden_diff": "diff --git a/generate_api_docs.py b/generate_api_docs.py\n--- a/generate_api_docs.py\n+++ b/generate_api_docs.py\n@@ -1,5 +1,6 @@\n \"\"\"https://github.com/NiklasRosenstein/pydoc-markdown/blob/master/pydocmd/__main__.py\"\"\"\n \n+import inspect\n import os\n import sys\n import yaml\n@@ -10,6 +11,23 @@\n from pydocmd.preprocessor import Preprocessor\n \n \n+def callable_to_source_link(obj, scope):\n+ path = scope.__file__.lstrip(\".\")\n+ source = inspect.getsourcelines(obj)\n+ line = source[-1] + 1 if source[0][0].startswith(\"@\") else source[-1]\n+ link = f\"https://github.com/plumerai/larq/blob/master{path}#L{line}\"\n+ return f'<a class=\"headerlink code-link\" style=\"float:right;\" href=\"{link}\" title=\"Source Code\"></a>'\n+\n+\n+class PythonLoaderWithSource(PythonLoader):\n+ def load_section(self, section):\n+ super().load_section(section)\n+ obj = section.loader_context[\"obj\"]\n+ if callable(obj):\n+ scope = section.loader_context[\"scope\"]\n+ section.title += callable_to_source_link(obj, scope)\n+\n+\n with open(\"apidocs.yml\", \"r\") as stream:\n api_structure = yaml.safe_load(stream)\n \n@@ -54,7 +72,7 @@\n doc = index.new_document(fname)\n add_sections(doc, object_names)\n \n-loader = PythonLoader({})\n+loader = PythonLoaderWithSource({})\n preproc = Preprocessor({})\n \n preproc.link_lookup = {}\n", "issue": "Docs: Add links to source code\nThis is really handy if people want to understand what's going on behind the scenes or want to implement more advanced stuff\n", "before_files": [{"content": "\"\"\"https://github.com/NiklasRosenstein/pydoc-markdown/blob/master/pydocmd/__main__.py\"\"\"\n\nimport os\nimport sys\nimport yaml\n\nfrom pydocmd.document import Index\nfrom pydocmd.imp import dir_object\nfrom pydocmd.loader import PythonLoader\nfrom pydocmd.preprocessor import Preprocessor\n\n\nwith open(\"apidocs.yml\", \"r\") as stream:\n api_structure = yaml.safe_load(stream)\n\n# Build the index and document structure first, we load the actual\n# docstrings at a later point.\nprint(\"Building index...\")\nindex = Index()\n\n\ndef add_sections(doc, object_names, depth=1):\n if isinstance(object_names, list):\n [add_sections(doc, x, depth) for x in object_names]\n elif isinstance(object_names, dict):\n for key, subsections in object_names.items():\n add_sections(doc, key, depth)\n add_sections(doc, subsections, depth + 1)\n elif isinstance(object_names, str):\n # Check how many levels of recursion we should be going.\n expand_depth = len(object_names)\n object_names = object_names.rstrip(\"+\")\n expand_depth -= len(object_names)\n\n def create_sections(name, level):\n if level > expand_depth:\n return\n index.new_section(doc, name, depth=depth + level, header_type=\"markdown\")\n for sub in dir_object(name, \"line\", False):\n sub = name + \".\" + sub\n create_sections(sub, level + 1)\n\n create_sections(object_names, 0)\n else:\n raise RuntimeError(object_names)\n\n\n# Make sure that we can find modules from the current working directory,\n# and have them take precedence over installed modules.\nsys.path.insert(0, \".\")\n\nfor pages in api_structure:\n for fname, object_names in pages.items():\n doc = index.new_document(fname)\n add_sections(doc, object_names)\n\nloader = PythonLoader({})\npreproc = Preprocessor({})\n\npreproc.link_lookup = {}\nfor file, doc in index.documents.items():\n for section in doc.sections:\n preproc.link_lookup[section.identifier] = file\n# Load the docstrings and fill the sections.\nprint(\"Started generating 
documentation...\")\nfor doc in index.documents.values():\n for section in filter(lambda s: s.identifier, doc.sections):\n loader.load_section(section)\n preproc.preprocess_section(section)\n\n# Write out all the generated documents.\nos.makedirs(os.path.join(\"docs\", \"api\"), exist_ok=True)\nfor fname, doc in index.documents.items():\n with open(os.path.join(\"docs\", \"api\", fname), \"w\") as fp:\n for section in doc.sections:\n section.render(fp)\n", "path": "generate_api_docs.py"}], "after_files": [{"content": "\"\"\"https://github.com/NiklasRosenstein/pydoc-markdown/blob/master/pydocmd/__main__.py\"\"\"\n\nimport inspect\nimport os\nimport sys\nimport yaml\n\nfrom pydocmd.document import Index\nfrom pydocmd.imp import dir_object\nfrom pydocmd.loader import PythonLoader\nfrom pydocmd.preprocessor import Preprocessor\n\n\ndef callable_to_source_link(obj, scope):\n path = scope.__file__.lstrip(\".\")\n source = inspect.getsourcelines(obj)\n line = source[-1] + 1 if source[0][0].startswith(\"@\") else source[-1]\n link = f\"https://github.com/plumerai/larq/blob/master{path}#L{line}\"\n return f'<a class=\"headerlink code-link\" style=\"float:right;\" href=\"{link}\" title=\"Source Code\"></a>'\n\n\nclass PythonLoaderWithSource(PythonLoader):\n def load_section(self, section):\n super().load_section(section)\n obj = section.loader_context[\"obj\"]\n if callable(obj):\n scope = section.loader_context[\"scope\"]\n section.title += callable_to_source_link(obj, scope)\n\n\nwith open(\"apidocs.yml\", \"r\") as stream:\n api_structure = yaml.safe_load(stream)\n\n# Build the index and document structure first, we load the actual\n# docstrings at a later point.\nprint(\"Building index...\")\nindex = Index()\n\n\ndef add_sections(doc, object_names, depth=1):\n if isinstance(object_names, list):\n [add_sections(doc, x, depth) for x in object_names]\n elif isinstance(object_names, dict):\n for key, subsections in object_names.items():\n add_sections(doc, key, depth)\n add_sections(doc, subsections, depth + 1)\n elif isinstance(object_names, str):\n # Check how many levels of recursion we should be going.\n expand_depth = len(object_names)\n object_names = object_names.rstrip(\"+\")\n expand_depth -= len(object_names)\n\n def create_sections(name, level):\n if level > expand_depth:\n return\n index.new_section(doc, name, depth=depth + level, header_type=\"markdown\")\n for sub in dir_object(name, \"line\", False):\n sub = name + \".\" + sub\n create_sections(sub, level + 1)\n\n create_sections(object_names, 0)\n else:\n raise RuntimeError(object_names)\n\n\n# Make sure that we can find modules from the current working directory,\n# and have them take precedence over installed modules.\nsys.path.insert(0, \".\")\n\nfor pages in api_structure:\n for fname, object_names in pages.items():\n doc = index.new_document(fname)\n add_sections(doc, object_names)\n\nloader = PythonLoaderWithSource({})\npreproc = Preprocessor({})\n\npreproc.link_lookup = {}\nfor file, doc in index.documents.items():\n for section in doc.sections:\n preproc.link_lookup[section.identifier] = file\n# Load the docstrings and fill the sections.\nprint(\"Started generating documentation...\")\nfor doc in index.documents.values():\n for section in filter(lambda s: s.identifier, doc.sections):\n loader.load_section(section)\n preproc.preprocess_section(section)\n\n# Write out all the generated documents.\nos.makedirs(os.path.join(\"docs\", \"api\"), exist_ok=True)\nfor fname, doc in index.documents.items():\n with 
open(os.path.join(\"docs\", \"api\", fname), \"w\") as fp:\n for section in doc.sections:\n section.render(fp)\n", "path": "generate_api_docs.py"}]}
| 1,006 | 367 |
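A note on the fix above: `inspect.getsourcelines` returns `(source_lines, starting_line_number)`, and for a decorated callable the starting line is the decorator itself, which is why the patch bumps the line by one when the first source line starts with `@`. The sketch below reproduces that calculation; the file path in the printed URL is hypothetical.

```python
import inspect

def example():            # stand-in for a documented API callable
    return 42

lines, start = inspect.getsourcelines(example)
line = start + 1 if lines[0].startswith("@") else start
print(f"https://github.com/plumerai/larq/blob/master/example.py#L{line}")
```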
gh_patches_debug_39768
|
rasdani/github-patches
|
git_diff
|
bridgecrewio__checkov-4204
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
checkov skips all K8S standard policies if one or more custom policy is specified in --checks
**Description**
When using checkov to verify a Kubernetes manifest (a single file with several objects: deployments, configmaps, etc.) against a list of checks (using the --check parameter), checkov verifies only the first check and appears to skip all other checks in the provided list.
**Examples**
The [manifests are available here](https://gist.github.com/previ/cf193061c767f18be7616dd52739adb0#file-k8s-manifest-yaml)
The [parameters available in the log](https://gist.github.com/previ/cf193061c767f18be7616dd52739adb0#file-full_log_debug-log-L33)
**Version (please complete the following information):**
- Checkov Version 2.2.232
**Additional context**
The [full log, LOG_DEVEL=DEBUG, is available here](https://gist.github.com/previ/cf193061c767f18be7616dd52739adb0#file-full_log_debug-log)
The custom policies yaml files are available [here](https://gist.github.com/previ/cf193061c767f18be7616dd52739adb0#file-k8s_pvc_gov01-yaml) and [here](https://gist.github.com/previ/cf193061c767f18be7616dd52739adb0#file-k8s_sts_gov01-yaml)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `checkov/kubernetes/checks/resource/base_registry.py`
Content:
```
1 from __future__ import annotations
2
3 from typing import Any, TYPE_CHECKING
4
5 from checkov.common.checks.base_check_registry import BaseCheckRegistry
6
7 if TYPE_CHECKING:
8 from checkov.common.checks.base_check import BaseCheck
9 from checkov.common.typing import _SkippedCheck, _CheckResult
10 from checkov.runner_filter import RunnerFilter
11
12
13 class Registry(BaseCheckRegistry):
14 def __init__(self, report_type: str) -> None:
15 super().__init__(report_type)
16
17 def extract_entity_details(self, entity: dict[str, Any]) -> tuple[str, dict[str, Any]]: # type:ignore[override]
18 kind = entity.get("kind") or ""
19 conf = entity
20 return kind, conf
21
22 def scan(
23 self,
24 scanned_file: str,
25 entity: dict[str, Any],
26 skipped_checks: list[_SkippedCheck],
27 runner_filter: RunnerFilter,
28 report_type: str | None = None,
29 ) -> dict[BaseCheck, _CheckResult]:
30 (entity_type, entity_configuration) = self.extract_entity_details(entity)
31 results = {}
32 checks = self.get_checks(entity_type)
33 for check in checks:
34 skip_info: "_SkippedCheck" = {}
35 if skipped_checks:
36 if check.id in [x['id'] for x in skipped_checks]:
37 skip_info = [x for x in skipped_checks if x['id'] == check.id][0]
38
39 if self._should_run_scan(check, entity_configuration, runner_filter, self.report_type):
40 self.logger.debug("Running check: {} on file {}".format(check.name, scanned_file))
41
42 result = check.run(scanned_file=scanned_file, entity_configuration=entity_configuration,
43 entity_name=entity_type, entity_type=entity_type, skip_info=skip_info)
44 results[check] = result
45 return results
46
47 @staticmethod
48 def _should_run_scan(
49 check: BaseCheck, entity_configuration: dict[str, Any], runner_filter: RunnerFilter, report_type: str
50 ) -> bool:
51 check_id_allowlist = runner_filter.checks
52 check_id_denylist = runner_filter.skip_checks
53 if check_id_allowlist or runner_filter.check_threshold:
54 # Allow list provides namespace-only allows, check-only allows, or both
55 # If namespaces not specified, all namespaces are scanned
56 # If checks not specified, all checks are scanned
57 run_check = False
58 allowed_namespaces = [string for string in check_id_allowlist if ("CKV_" not in string and "BC_" not in string)]
59 if not any(("CKV_" in check or "BC_" in check) for check in check_id_allowlist) and not runner_filter.check_threshold:
60 if "metadata" in entity_configuration and "namespace" in entity_configuration["metadata"]:
61 if entity_configuration["metadata"]["namespace"] in allowed_namespaces:
62 run_check = True
63 elif "parent_metadata" in entity_configuration and "namespace" in entity_configuration["parent_metadata"]:
64 if entity_configuration["parent_metadata"]["namespace"] in allowed_namespaces:
65 run_check = True
66 else:
67 if "default" in allowed_namespaces:
68 run_check = True
69 else:
70 if runner_filter.should_run_check(check=check, report_type=report_type):
71 if allowed_namespaces:
72 # Check if namespace in allowed namespaces
73 if "metadata" in entity_configuration and "namespace" in entity_configuration["metadata"]:
74 if entity_configuration["metadata"]["namespace"] in allowed_namespaces:
75 run_check = True
76 elif "parent_metadata" in entity_configuration and "namespace" in entity_configuration["parent_metadata"]:
77 if entity_configuration["parent_metadata"]["namespace"] in allowed_namespaces:
78 run_check = True
79 else:
80 if "default" in allowed_namespaces:
81 run_check = True
82 else:
83 # No namespaces to filter
84 run_check = True
85 if run_check:
86 return True
87 elif check_id_denylist or runner_filter.skip_check_threshold or runner_filter.use_enforcement_rules:
88 namespace_skip = False
89 if "metadata" in entity_configuration and "namespace" in entity_configuration["metadata"]:
90 if entity_configuration["metadata"]["namespace"] in check_id_denylist:
91 namespace_skip = True
92 elif "parent_metadata" in entity_configuration and "namespace" in entity_configuration["parent_metadata"]:
93 if entity_configuration["parent_metadata"]["namespace"] in check_id_denylist:
94 namespace_skip = True
95 else:
96 if "default" in check_id_denylist:
97 namespace_skip = True
98 if runner_filter.should_run_check(check=check, report_type=report_type) and not namespace_skip:
99 return True
100 else:
101 return True
102 return False
103
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/checkov/kubernetes/checks/resource/base_registry.py b/checkov/kubernetes/checks/resource/base_registry.py
--- a/checkov/kubernetes/checks/resource/base_registry.py
+++ b/checkov/kubernetes/checks/resource/base_registry.py
@@ -54,35 +54,27 @@
# Allow list provides namespace-only allows, check-only allows, or both
# If namespaces not specified, all namespaces are scanned
# If checks not specified, all checks are scanned
- run_check = False
- allowed_namespaces = [string for string in check_id_allowlist if ("CKV_" not in string and "BC_" not in string)]
- if not any(("CKV_" in check or "BC_" in check) for check in check_id_allowlist) and not runner_filter.check_threshold:
+
+ if any("_" in check_id for check_id in check_id_allowlist) or runner_filter.check_threshold:
+ # a Kubernetes namespace can't have an '_' in its name,
+ # therefore we assume it is a built-in or custom check
+ if not runner_filter.should_run_check(check=check, report_type=report_type):
+ return False
+
+ allowed_namespaces = [check_id for check_id in check_id_allowlist if "_" not in check_id]
+ if allowed_namespaces:
+ # Check if namespace in allowed namespaces
if "metadata" in entity_configuration and "namespace" in entity_configuration["metadata"]:
if entity_configuration["metadata"]["namespace"] in allowed_namespaces:
- run_check = True
+ return True
elif "parent_metadata" in entity_configuration and "namespace" in entity_configuration["parent_metadata"]:
if entity_configuration["parent_metadata"]["namespace"] in allowed_namespaces:
- run_check = True
+ return True
else:
if "default" in allowed_namespaces:
- run_check = True
+ return True
else:
- if runner_filter.should_run_check(check=check, report_type=report_type):
- if allowed_namespaces:
- # Check if namespace in allowed namespaces
- if "metadata" in entity_configuration and "namespace" in entity_configuration["metadata"]:
- if entity_configuration["metadata"]["namespace"] in allowed_namespaces:
- run_check = True
- elif "parent_metadata" in entity_configuration and "namespace" in entity_configuration["parent_metadata"]:
- if entity_configuration["parent_metadata"]["namespace"] in allowed_namespaces:
- run_check = True
- else:
- if "default" in allowed_namespaces:
- run_check = True
- else:
- # No namespaces to filter
- run_check = True
- if run_check:
+ # No namespaces to filter
return True
elif check_id_denylist or runner_filter.skip_check_threshold or runner_filter.use_enforcement_rules:
namespace_skip = False
|
{"golden_diff": "diff --git a/checkov/kubernetes/checks/resource/base_registry.py b/checkov/kubernetes/checks/resource/base_registry.py\n--- a/checkov/kubernetes/checks/resource/base_registry.py\n+++ b/checkov/kubernetes/checks/resource/base_registry.py\n@@ -54,35 +54,27 @@\n # Allow list provides namespace-only allows, check-only allows, or both\n # If namespaces not specified, all namespaces are scanned\n # If checks not specified, all checks are scanned\n- run_check = False\n- allowed_namespaces = [string for string in check_id_allowlist if (\"CKV_\" not in string and \"BC_\" not in string)]\n- if not any((\"CKV_\" in check or \"BC_\" in check) for check in check_id_allowlist) and not runner_filter.check_threshold:\n+\n+ if any(\"_\" in check_id for check_id in check_id_allowlist) or runner_filter.check_threshold:\n+ # a Kubernetes namespace can't have an '_' in its name,\n+ # therefore we assume it is a built-in or custom check\n+ if not runner_filter.should_run_check(check=check, report_type=report_type):\n+ return False\n+\n+ allowed_namespaces = [check_id for check_id in check_id_allowlist if \"_\" not in check_id]\n+ if allowed_namespaces:\n+ # Check if namespace in allowed namespaces\n if \"metadata\" in entity_configuration and \"namespace\" in entity_configuration[\"metadata\"]:\n if entity_configuration[\"metadata\"][\"namespace\"] in allowed_namespaces:\n- run_check = True\n+ return True\n elif \"parent_metadata\" in entity_configuration and \"namespace\" in entity_configuration[\"parent_metadata\"]:\n if entity_configuration[\"parent_metadata\"][\"namespace\"] in allowed_namespaces:\n- run_check = True\n+ return True\n else:\n if \"default\" in allowed_namespaces:\n- run_check = True\n+ return True\n else:\n- if runner_filter.should_run_check(check=check, report_type=report_type):\n- if allowed_namespaces:\n- # Check if namespace in allowed namespaces\n- if \"metadata\" in entity_configuration and \"namespace\" in entity_configuration[\"metadata\"]:\n- if entity_configuration[\"metadata\"][\"namespace\"] in allowed_namespaces:\n- run_check = True\n- elif \"parent_metadata\" in entity_configuration and \"namespace\" in entity_configuration[\"parent_metadata\"]:\n- if entity_configuration[\"parent_metadata\"][\"namespace\"] in allowed_namespaces:\n- run_check = True\n- else:\n- if \"default\" in allowed_namespaces:\n- run_check = True\n- else:\n- # No namespaces to filter\n- run_check = True\n- if run_check:\n+ # No namespaces to filter\n return True\n elif check_id_denylist or runner_filter.skip_check_threshold or runner_filter.use_enforcement_rules:\n namespace_skip = False\n", "issue": "checkov skips all K8S standard policies if one or more custom policy is specified in --checks\n**Description**\r\nUsing checkov to verify a kubernetes manifests (a single file with several objects: deployments, configmaps, etc) against a list of checks (so using the --check parameter), checkov verifies only the first check, and appears to skip all others checks in the provided list.\r\n\r\n**Examples**\r\nThe [manifests are available here](https://gist.github.com/previ/cf193061c767f18be7616dd52739adb0#file-k8s-manifest-yaml)\r\nThe [parameters available in the log](https://gist.github.com/previ/cf193061c767f18be7616dd52739adb0#file-full_log_debug-log-L33)\r\n\r\n**Version (please complete the following information):**\r\n - Checkov Version 2.2.232\r\n\r\n**Additional context**\r\nThe [full log, LOG_DEVEL=DEBUG, is available 
here](https://gist.github.com/previ/cf193061c767f18be7616dd52739adb0#file-full_log_debug-log)\r\nThe custom policies yaml files are available [here](https://gist.github.com/previ/cf193061c767f18be7616dd52739adb0#file-k8s_pvc_gov01-yaml) and [here](https://gist.github.com/previ/cf193061c767f18be7616dd52739adb0#file-k8s_sts_gov01-yaml)\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import Any, TYPE_CHECKING\n\nfrom checkov.common.checks.base_check_registry import BaseCheckRegistry\n\nif TYPE_CHECKING:\n from checkov.common.checks.base_check import BaseCheck\n from checkov.common.typing import _SkippedCheck, _CheckResult\n from checkov.runner_filter import RunnerFilter\n\n\nclass Registry(BaseCheckRegistry):\n def __init__(self, report_type: str) -> None:\n super().__init__(report_type)\n\n def extract_entity_details(self, entity: dict[str, Any]) -> tuple[str, dict[str, Any]]: # type:ignore[override]\n kind = entity.get(\"kind\") or \"\"\n conf = entity\n return kind, conf\n\n def scan(\n self,\n scanned_file: str,\n entity: dict[str, Any],\n skipped_checks: list[_SkippedCheck],\n runner_filter: RunnerFilter,\n report_type: str | None = None,\n ) -> dict[BaseCheck, _CheckResult]:\n (entity_type, entity_configuration) = self.extract_entity_details(entity)\n results = {}\n checks = self.get_checks(entity_type)\n for check in checks:\n skip_info: \"_SkippedCheck\" = {}\n if skipped_checks:\n if check.id in [x['id'] for x in skipped_checks]:\n skip_info = [x for x in skipped_checks if x['id'] == check.id][0]\n\n if self._should_run_scan(check, entity_configuration, runner_filter, self.report_type):\n self.logger.debug(\"Running check: {} on file {}\".format(check.name, scanned_file))\n\n result = check.run(scanned_file=scanned_file, entity_configuration=entity_configuration,\n entity_name=entity_type, entity_type=entity_type, skip_info=skip_info)\n results[check] = result\n return results\n\n @staticmethod\n def _should_run_scan(\n check: BaseCheck, entity_configuration: dict[str, Any], runner_filter: RunnerFilter, report_type: str\n ) -> bool:\n check_id_allowlist = runner_filter.checks\n check_id_denylist = runner_filter.skip_checks\n if check_id_allowlist or runner_filter.check_threshold:\n # Allow list provides namespace-only allows, check-only allows, or both\n # If namespaces not specified, all namespaces are scanned\n # If checks not specified, all checks are scanned\n run_check = False\n allowed_namespaces = [string for string in check_id_allowlist if (\"CKV_\" not in string and \"BC_\" not in string)]\n if not any((\"CKV_\" in check or \"BC_\" in check) for check in check_id_allowlist) and not runner_filter.check_threshold:\n if \"metadata\" in entity_configuration and \"namespace\" in entity_configuration[\"metadata\"]:\n if entity_configuration[\"metadata\"][\"namespace\"] in allowed_namespaces:\n run_check = True\n elif \"parent_metadata\" in entity_configuration and \"namespace\" in entity_configuration[\"parent_metadata\"]:\n if entity_configuration[\"parent_metadata\"][\"namespace\"] in allowed_namespaces:\n run_check = True\n else:\n if \"default\" in allowed_namespaces:\n run_check = True\n else:\n if runner_filter.should_run_check(check=check, report_type=report_type):\n if allowed_namespaces:\n # Check if namespace in allowed namespaces\n if \"metadata\" in entity_configuration and \"namespace\" in entity_configuration[\"metadata\"]:\n if entity_configuration[\"metadata\"][\"namespace\"] in allowed_namespaces:\n run_check = True\n elif 
\"parent_metadata\" in entity_configuration and \"namespace\" in entity_configuration[\"parent_metadata\"]:\n if entity_configuration[\"parent_metadata\"][\"namespace\"] in allowed_namespaces:\n run_check = True\n else:\n if \"default\" in allowed_namespaces:\n run_check = True\n else:\n # No namespaces to filter\n run_check = True\n if run_check:\n return True\n elif check_id_denylist or runner_filter.skip_check_threshold or runner_filter.use_enforcement_rules:\n namespace_skip = False\n if \"metadata\" in entity_configuration and \"namespace\" in entity_configuration[\"metadata\"]:\n if entity_configuration[\"metadata\"][\"namespace\"] in check_id_denylist:\n namespace_skip = True\n elif \"parent_metadata\" in entity_configuration and \"namespace\" in entity_configuration[\"parent_metadata\"]:\n if entity_configuration[\"parent_metadata\"][\"namespace\"] in check_id_denylist:\n namespace_skip = True\n else:\n if \"default\" in check_id_denylist:\n namespace_skip = True\n if runner_filter.should_run_check(check=check, report_type=report_type) and not namespace_skip:\n return True\n else:\n return True\n return False\n", "path": "checkov/kubernetes/checks/resource/base_registry.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom typing import Any, TYPE_CHECKING\n\nfrom checkov.common.checks.base_check_registry import BaseCheckRegistry\n\nif TYPE_CHECKING:\n from checkov.common.checks.base_check import BaseCheck\n from checkov.common.typing import _SkippedCheck, _CheckResult\n from checkov.runner_filter import RunnerFilter\n\n\nclass Registry(BaseCheckRegistry):\n def __init__(self, report_type: str) -> None:\n super().__init__(report_type)\n\n def extract_entity_details(self, entity: dict[str, Any]) -> tuple[str, dict[str, Any]]: # type:ignore[override]\n kind = entity.get(\"kind\") or \"\"\n conf = entity\n return kind, conf\n\n def scan(\n self,\n scanned_file: str,\n entity: dict[str, Any],\n skipped_checks: list[_SkippedCheck],\n runner_filter: RunnerFilter,\n report_type: str | None = None,\n ) -> dict[BaseCheck, _CheckResult]:\n (entity_type, entity_configuration) = self.extract_entity_details(entity)\n results = {}\n checks = self.get_checks(entity_type)\n for check in checks:\n skip_info: \"_SkippedCheck\" = {}\n if skipped_checks:\n if check.id in [x['id'] for x in skipped_checks]:\n skip_info = [x for x in skipped_checks if x['id'] == check.id][0]\n\n if self._should_run_scan(check, entity_configuration, runner_filter, self.report_type):\n self.logger.debug(\"Running check: {} on file {}\".format(check.name, scanned_file))\n\n result = check.run(scanned_file=scanned_file, entity_configuration=entity_configuration,\n entity_name=entity_type, entity_type=entity_type, skip_info=skip_info)\n results[check] = result\n return results\n\n @staticmethod\n def _should_run_scan(\n check: BaseCheck, entity_configuration: dict[str, Any], runner_filter: RunnerFilter, report_type: str\n ) -> bool:\n check_id_allowlist = runner_filter.checks\n check_id_denylist = runner_filter.skip_checks\n if check_id_allowlist or runner_filter.check_threshold:\n # Allow list provides namespace-only allows, check-only allows, or both\n # If namespaces not specified, all namespaces are scanned\n # If checks not specified, all checks are scanned\n\n if any(\"_\" in check_id for check_id in check_id_allowlist) or runner_filter.check_threshold:\n # a Kubernetes namespace can't have an '_' in its name,\n # therefore we assume it is a built-in or custom check\n if not 
runner_filter.should_run_check(check=check, report_type=report_type):\n return False\n\n allowed_namespaces = [check_id for check_id in check_id_allowlist if \"_\" not in check_id]\n if allowed_namespaces:\n # Check if namespace in allowed namespaces\n if \"metadata\" in entity_configuration and \"namespace\" in entity_configuration[\"metadata\"]:\n if entity_configuration[\"metadata\"][\"namespace\"] in allowed_namespaces:\n return True\n elif \"parent_metadata\" in entity_configuration and \"namespace\" in entity_configuration[\"parent_metadata\"]:\n if entity_configuration[\"parent_metadata\"][\"namespace\"] in allowed_namespaces:\n return True\n else:\n if \"default\" in allowed_namespaces:\n return True\n else:\n # No namespaces to filter\n return True\n elif check_id_denylist or runner_filter.skip_check_threshold or runner_filter.use_enforcement_rules:\n namespace_skip = False\n if \"metadata\" in entity_configuration and \"namespace\" in entity_configuration[\"metadata\"]:\n if entity_configuration[\"metadata\"][\"namespace\"] in check_id_denylist:\n namespace_skip = True\n elif \"parent_metadata\" in entity_configuration and \"namespace\" in entity_configuration[\"parent_metadata\"]:\n if entity_configuration[\"parent_metadata\"][\"namespace\"] in check_id_denylist:\n namespace_skip = True\n else:\n if \"default\" in check_id_denylist:\n namespace_skip = True\n if runner_filter.should_run_check(check=check, report_type=report_type) and not namespace_skip:\n return True\n else:\n return True\n return False\n", "path": "checkov/kubernetes/checks/resource/base_registry.py"}]}
| 1,867 | 619 |
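A brief aside on the record above: the golden diff hinges on the observation, stated in its own comment, that a Kubernetes namespace name can never contain an underscore, while Checkov check IDs (e.g. `CKV_K8S_21`) always do, so allow-list entries can be split on that character alone. The snippet below is only a minimal standalone sketch of that splitting heuristic — `split_allowlist` and the sample inputs are hypothetical names chosen for illustration, not Checkov code or its API.

```python
# Illustrative sketch only -- not part of Checkov. It mimics the heuristic used in
# the golden diff above: allow-list entries containing "_" are treated as check IDs,
# everything else as Kubernetes namespaces (which cannot contain "_").
def split_allowlist(allowlist):
    check_ids = [entry for entry in allowlist if "_" in entry]
    namespaces = [entry for entry in allowlist if "_" not in entry]
    return check_ids, namespaces


if __name__ == "__main__":
    # Hypothetical mixed allow list: one custom check ID and two namespaces.
    check_ids, namespaces = split_allowlist(["CKV_K8S_21", "kube-system", "default"])
    assert check_ids == ["CKV_K8S_21"]
    assert namespaces == ["kube-system", "default"]
    print(check_ids, namespaces)
```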
gh_patches_debug_13684
|
rasdani/github-patches
|
git_diff
|
freqtrade__freqtrade-2565
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Freqtrade bot cannot run - Error - cannot find pairlist
## Step 1: Have you search for this issue before posting it?
Yes
## Step 2: Describe your environment
* Operating system: Ubuntu 18.04.3 (LTS) x64
* Python Version: Python 2.7.15 (`python -V`)
* CCXT version: _____ (`pip freeze | grep ccxt`)
* Branch: Develop
* Last Commit ID: cab748588cdb5dfa9f89fe917b70235e32f6e373
## Step 3: Describe the problem:
I am not able to run the freqtrade bot in live mode. An Error always appear where it is trying to find the pairlist even though the pairlist is already defined in config.json
### Steps to reproduce:
1. Execute the command - source .env/bin/activate; freqtrade trade --logfile freqtrade.log --strategy AnyStrategy
### Observed Results:
* What happened?
The freqtrade bot did not run. Error message 'freqtrade - ERROR - No Pairlist defined!' appeared
* What did you expect to happen?
The freqtrade bot to run
### Relevant code exceptions or logs:
2019-11-24 08:45:33,554 - freqtrade.loggers - INFO - Verbosity set to 0
2019-11-24 08:45:33,554 - freqtrade.configuration.configuration - INFO - Dry run is disabled
2019-11-24 08:45:33,555 - freqtrade.configuration.configuration - INFO - Using DB: "sqlite:///tradesv3.sqlite"
2019-11-24 08:45:33,555 - freqtrade.configuration.configuration - INFO - Using max_open_trades: 15 ...
2019-11-24 08:45:33,555 - freqtrade.configuration.configuration - INFO - Using user-data directory: /root/freqtrade/user_data ...
2019-11-24 08:45:33,556 - freqtrade.configuration.configuration - INFO - Using data directory: /root/freqtrade/user_data/data/binance ...
2019-11-24 08:45:33,556 - freqtrade.configuration.check_exchange - INFO - Checking exchange...
2019-11-24 08:45:33,556 - freqtrade.configuration.check_exchange - INFO - Exchange "binance" is officially supported by the Freqtrade development team.
2019-11-24 08:45:33,556 - freqtrade.configuration.configuration - INFO - Using pairlist from configuration.
2019-11-24 08:45:33,557 - freqtrade.freqtradebot - INFO - Starting freqtrade develop-cab74858
2019-11-24 08:45:33,579 - root - INFO - Generating grammar tables from /usr/lib/python3.6/lib2to3/Grammar.txt
2019-11-24 08:45:33,598 - root - INFO - Generating grammar tables from /usr/lib/python3.6/lib2to3/PatternGrammar.txt
2019-11-24 08:45:33,933 - freqtrade.resolvers.iresolver - INFO - Using resolved strategy AnyStrategy from '/root/freqtrade/user_data/strategies/AnyStrategy.py'...
2019-11-24 08:45:33,933 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'ticker_interval' with value in config file: 5m.
2019-11-24 08:45:33,934 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'stoploss' with value in config file: -0.99.
2019-11-24 08:45:33,934 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'trailing_stop' with value in config file: False.
2019-11-24 08:45:33,934 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'stake_currency' with value in config file: BTC.
2019-11-24 08:45:33,934 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'stake_amount' with value in config file: 0.0005.
2019-11-24 08:45:33,934 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'use_sell_signal' with value in config file: True.
2019-11-24 08:45:33,934 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'sell_profit_only' with value in config file: False.
2019-11-24 08:45:33,934 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'ignore_roi_if_buy_signal' with value in config file: False.
2019-11-24 08:45:33,934 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using minimal_roi: {'60': 0.01, '30': 0.03, '20': 0.04, '0': 0.05}
2019-11-24 08:45:33,935 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using ticker_interval: 5m
2019-11-24 08:45:33,935 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using stoploss: -0.99
2019-11-24 08:45:33,935 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using trailing_stop: False
2019-11-24 08:45:33,935 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using trailing_stop_positive_offset: 0.0
2019-11-24 08:45:33,935 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using trailing_only_offset_is_reached: False
2019-11-24 08:45:33,935 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using process_only_new_candles: False
2019-11-24 08:45:33,935 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using order_types: {'buy': 'limit', 'sell': 'limit', 'stoploss': 'limit', 'stoploss_on_exchange': False, 'stoploss_on_exchange_interval': 60}
2019-11-24 08:45:33,935 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using order_time_in_force: {'buy': 'gtc', 'sell': 'gtc'}
2019-11-24 08:45:33,935 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using stake_currency: BTC
2019-11-24 08:45:33,935 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using stake_amount: 0.0005
2019-11-24 08:45:33,936 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using startup_candle_count: 0
2019-11-24 08:45:33,936 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using use_sell_signal: True
2019-11-24 08:45:33,936 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using sell_profit_only: False
2019-11-24 08:45:33,936 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using ignore_roi_if_buy_signal: False
2019-11-24 08:45:33,936 - freqtrade.exchange.exchange - INFO - Applying additional ccxt config: {'enableRateLimit': True}
2019-11-24 08:45:33,939 - freqtrade.exchange.exchange - INFO - Applying additional ccxt config: {'enableRateLimit': True, 'rateLimit': 500}
2019-11-24 08:45:33,942 - freqtrade.exchange.exchange - INFO - Using Exchange "Binance"
2019-11-24 08:45:34,297 - freqtrade.resolvers.exchange_resolver - INFO - Using resolved exchange 'Binance'...
2019-11-24 08:45:34,528 - freqtrade.wallets - INFO - Wallets synced.
2019-11-24 08:45:34,528 - freqtrade - ERROR - No Pairlist defined!
-- config.json
{
"max_open_trades": 15,
"stake_currency": "BTC",
"stake_amount": 0.0005,
"fiat_display_currency": "PHP",
"dry_run": false,
"trailing_stop": false,
"unfilledtimeout": {
"buy": 10,
"sell": 30
},
"ticker_interval": "5m",
"stoploss": -0.99,
"bid_strategy": {
"ask_last_balance": 0.0,
"use_order_book": false,
"order_book_top": 1,
"check_depth_of_market": {
"enabled": false,
"bids_to_ask_delta": 1
}
},
"ask_strategy":{
"use_order_book": false,
"order_book_min": 1,
"order_book_max": 9,
"use_sell_signal": true,
"sell_profit_only": false,
"ignore_roi_if_buy_signal": false
},
"exchange": {
"name": "binance",
"key": "<binance key>",
"secret": "<binace secret>",
"ccxt_config": {"enableRateLimit": true},
"ccxt_async_config": {
"enableRateLimit": true,
"rateLimit": 500
},
"pair_whitelist": [
"ETH/BTC",
"LTC/BTC",
"ETC/BTC",
"DASH/BTC",
"ZEC/BTC",
"XLM/BTC",
"POWR/BTC",
"ADA/BTC",
"XMR/BTC",
"BNB/BTC",
"WTC/BTC",
"TRX/BTC",
"EOS/BTC",
"XVG/BTC",
"BAT/BTC",
"STORJ/BTC",
"QTUM/BTC",
"WAVES/BTC",
"XRP/BTC",
"LSK/BTC",
"NEO/BTC",
"LINK/BTC",
"ONT/BTC",
"XEM/BTC",
"VET/BTC",
"ICX/BTC",
"HOT/BTC"
],
"pair_blacklist": [
"DOGE/BTC"
]
},
"edge": {
"enabled": false,
"process_throttle_secs": 3600,
"calculate_since_number_of_days": 7,
"capital_available_percentage": 0.5,
"allowed_risk": 0.01,
"stoploss_range_min": -0.01,
"stoploss_range_max": -0.1,
"stoploss_range_step": -0.01,
"minimum_winrate": 0.60,
"minimum_expectancy": 0.20,
"min_trade_number": 10,
"max_trade_duration_minute": 1440,
"remove_pumps": false
},
"telegram": {
"enabled": true,
"token": "<telegram token>",
"chat_id": "<telegram chat id>"
},
"initial_state": "running",
"forcebuy_enable": false,
"internals": {
"process_throttle_secs": 5
}
}
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `freqtrade/configuration/deprecated_settings.py`
Content:
```
1 """
2 Functions to handle deprecated settings
3 """
4
5 import logging
6 from typing import Any, Dict
7
8 from freqtrade import OperationalException
9
10
11 logger = logging.getLogger(__name__)
12
13
14 def check_conflicting_settings(config: Dict[str, Any],
15 section1: str, name1: str,
16 section2: str, name2: str):
17 section1_config = config.get(section1, {})
18 section2_config = config.get(section2, {})
19 if name1 in section1_config and name2 in section2_config:
20 raise OperationalException(
21 f"Conflicting settings `{section1}.{name1}` and `{section2}.{name2}` "
22 "(DEPRECATED) detected in the configuration file. "
23 "This deprecated setting will be removed in the next versions of Freqtrade. "
24 f"Please delete it from your configuration and use the `{section1}.{name1}` "
25 "setting instead."
26 )
27
28
29 def process_deprecated_setting(config: Dict[str, Any],
30 section1: str, name1: str,
31 section2: str, name2: str):
32 section2_config = config.get(section2, {})
33
34 if name2 in section2_config:
35 logger.warning(
36 "DEPRECATED: "
37 f"The `{section2}.{name2}` setting is deprecated and "
38 "will be removed in the next versions of Freqtrade. "
39 f"Please use the `{section1}.{name1}` setting in your configuration instead."
40 )
41 section1_config = config.get(section1, {})
42 section1_config[name1] = section2_config[name2]
43
44
45 def process_temporary_deprecated_settings(config: Dict[str, Any]) -> None:
46
47 check_conflicting_settings(config, 'ask_strategy', 'use_sell_signal',
48 'experimental', 'use_sell_signal')
49 check_conflicting_settings(config, 'ask_strategy', 'sell_profit_only',
50 'experimental', 'sell_profit_only')
51 check_conflicting_settings(config, 'ask_strategy', 'ignore_roi_if_buy_signal',
52 'experimental', 'ignore_roi_if_buy_signal')
53
54 process_deprecated_setting(config, 'ask_strategy', 'use_sell_signal',
55 'experimental', 'use_sell_signal')
56 process_deprecated_setting(config, 'ask_strategy', 'sell_profit_only',
57 'experimental', 'sell_profit_only')
58 process_deprecated_setting(config, 'ask_strategy', 'ignore_roi_if_buy_signal',
59 'experimental', 'ignore_roi_if_buy_signal')
60
61 if config.get('pairlist', {}).get("method") == 'VolumePairList':
62 logger.warning(
63 "DEPRECATED: "
64 f"Using VolumePairList in pairlist is deprecated and must be moved to pairlists. "
65 "Please refer to the docs on configuration details")
66 pl = {'method': 'VolumePairList'}
67 pl.update(config.get('pairlist', {}).get('config'))
68 config['pairlists'].append(pl)
69
70 if config.get('pairlist', {}).get('config', {}).get('precision_filter'):
71 logger.warning(
72 "DEPRECATED: "
73 f"Using precision_filter setting is deprecated and has been replaced by"
74 "PrecisionFilter. Please refer to the docs on configuration details")
75 config['pairlists'].append({'method': 'PrecisionFilter'})
76
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/freqtrade/configuration/deprecated_settings.py b/freqtrade/configuration/deprecated_settings.py
--- a/freqtrade/configuration/deprecated_settings.py
+++ b/freqtrade/configuration/deprecated_settings.py
@@ -58,6 +58,13 @@
process_deprecated_setting(config, 'ask_strategy', 'ignore_roi_if_buy_signal',
'experimental', 'ignore_roi_if_buy_signal')
+ if not config.get('pairlists') and not config.get('pairlists'):
+ config['pairlists'] = [{'method': 'StaticPairList'}]
+ logger.warning(
+ "DEPRECATED: "
+ "Pairlists must be defined explicitly in the future."
+ "Defaulting to StaticPairList for now.")
+
if config.get('pairlist', {}).get("method") == 'VolumePairList':
logger.warning(
"DEPRECATED: "
|
{"golden_diff": "diff --git a/freqtrade/configuration/deprecated_settings.py b/freqtrade/configuration/deprecated_settings.py\n--- a/freqtrade/configuration/deprecated_settings.py\n+++ b/freqtrade/configuration/deprecated_settings.py\n@@ -58,6 +58,13 @@\n process_deprecated_setting(config, 'ask_strategy', 'ignore_roi_if_buy_signal',\n 'experimental', 'ignore_roi_if_buy_signal')\n \n+ if not config.get('pairlists') and not config.get('pairlists'):\n+ config['pairlists'] = [{'method': 'StaticPairList'}]\n+ logger.warning(\n+ \"DEPRECATED: \"\n+ \"Pairlists must be defined explicitly in the future.\"\n+ \"Defaulting to StaticPairList for now.\")\n+\n if config.get('pairlist', {}).get(\"method\") == 'VolumePairList':\n logger.warning(\n \"DEPRECATED: \"\n", "issue": "Freqtrade bot cannot run - Error - cannot find pairlist\n## Step 1: Have you search for this issue before posting it?\r\nYes\r\n\r\n## Step 2: Describe your environment\r\n\r\n * Operating system: Ubuntu 18.04.3 (LTS) x64\r\n * Python Version: Python 2.7.15 (`python -V`)\r\n * CCXT version: _____ (`pip freeze | grep ccxt`)\r\n * Branch: Develop\r\n * Last Commit ID: cab748588cdb5dfa9f89fe917b70235e32f6e373\r\n \r\n## Step 3: Describe the problem:\r\n\r\nI am not able to run the freqtrade bot in live mode. An Error always appear where it is trying to find the pairlist even though the pairlist is already defined in config.json\r\n\r\n### Steps to reproduce:\r\n\r\n 1. Execute the command - source .env/bin/activate; freqtrade trade --logfile freqtrade.log --strategy AnyStrategy\r\n\r\n### Observed Results:\r\n\r\n * What happened?\r\n The freqtrade bot did not run. Error message 'freqtrade - ERROR - No Pairlist defined!' appeared\r\n * What did you expect to happen?\r\n The freqtrade bot to run\r\n\r\n### Relevant code exceptions or logs:\r\n2019-11-24 08:45:33,554 - freqtrade.loggers - INFO - Verbosity set to 0\r\n2019-11-24 08:45:33,554 - freqtrade.configuration.configuration - INFO - Dry run is disabled\r\n2019-11-24 08:45:33,555 - freqtrade.configuration.configuration - INFO - Using DB: \"sqlite:///tradesv3.sqlite\"\r\n2019-11-24 08:45:33,555 - freqtrade.configuration.configuration - INFO - Using max_open_trades: 15 ...\r\n2019-11-24 08:45:33,555 - freqtrade.configuration.configuration - INFO - Using user-data directory: /root/freqtrade/user_data ...\r\n2019-11-24 08:45:33,556 - freqtrade.configuration.configuration - INFO - Using data directory: /root/freqtrade/user_data/data/binance ...\r\n2019-11-24 08:45:33,556 - freqtrade.configuration.check_exchange - INFO - Checking exchange...\r\n2019-11-24 08:45:33,556 - freqtrade.configuration.check_exchange - INFO - Exchange \"binance\" is officially supported by the Freqtrade development team.\r\n2019-11-24 08:45:33,556 - freqtrade.configuration.configuration - INFO - Using pairlist from configuration.\r\n2019-11-24 08:45:33,557 - freqtrade.freqtradebot - INFO - Starting freqtrade develop-cab74858\r\n2019-11-24 08:45:33,579 - root - INFO - Generating grammar tables from /usr/lib/python3.6/lib2to3/Grammar.txt\r\n2019-11-24 08:45:33,598 - root - INFO - Generating grammar tables from /usr/lib/python3.6/lib2to3/PatternGrammar.txt\r\n2019-11-24 08:45:33,933 - freqtrade.resolvers.iresolver - INFO - Using resolved strategy AnyStrategy from '/root/freqtrade/user_data/strategies/AnyStrategy.py'...\r\n2019-11-24 08:45:33,933 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'ticker_interval' with value in config file: 5m.\r\n2019-11-24 08:45:33,934 - 
freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'stoploss' with value in config file: -0.99.\r\n2019-11-24 08:45:33,934 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'trailing_stop' with value in config file: False.\r\n2019-11-24 08:45:33,934 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'stake_currency' with value in config file: BTC.\r\n2019-11-24 08:45:33,934 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'stake_amount' with value in config file: 0.0005.\r\n2019-11-24 08:45:33,934 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'use_sell_signal' with value in config file: True.\r\n2019-11-24 08:45:33,934 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'sell_profit_only' with value in config file: False.\r\n2019-11-24 08:45:33,934 - freqtrade.resolvers.strategy_resolver - INFO - Override strategy 'ignore_roi_if_buy_signal' with value in config file: False.\r\n2019-11-24 08:45:33,934 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using minimal_roi: {'60': 0.01, '30': 0.03, '20': 0.04, '0': 0.05}\r\n2019-11-24 08:45:33,935 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using ticker_interval: 5m\r\n2019-11-24 08:45:33,935 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using stoploss: -0.99\r\n2019-11-24 08:45:33,935 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using trailing_stop: False\r\n2019-11-24 08:45:33,935 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using trailing_stop_positive_offset: 0.0\r\n2019-11-24 08:45:33,935 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using trailing_only_offset_is_reached: False\r\n2019-11-24 08:45:33,935 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using process_only_new_candles: False\r\n2019-11-24 08:45:33,935 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using order_types: {'buy': 'limit', 'sell': 'limit', 'stoploss': 'limit', 'stoploss_on_exchange': False, 'stoploss_on_exchange_interval': 60}\r\n2019-11-24 08:45:33,935 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using order_time_in_force: {'buy': 'gtc', 'sell': 'gtc'}\r\n2019-11-24 08:45:33,935 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using stake_currency: BTC\r\n2019-11-24 08:45:33,935 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using stake_amount: 0.0005\r\n2019-11-24 08:45:33,936 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using startup_candle_count: 0\r\n2019-11-24 08:45:33,936 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using use_sell_signal: True\r\n2019-11-24 08:45:33,936 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using sell_profit_only: False\r\n2019-11-24 08:45:33,936 - freqtrade.resolvers.strategy_resolver - INFO - Strategy using ignore_roi_if_buy_signal: False\r\n2019-11-24 08:45:33,936 - freqtrade.exchange.exchange - INFO - Applying additional ccxt config: {'enableRateLimit': True}\r\n2019-11-24 08:45:33,939 - freqtrade.exchange.exchange - INFO - Applying additional ccxt config: {'enableRateLimit': True, 'rateLimit': 500}\r\n2019-11-24 08:45:33,942 - freqtrade.exchange.exchange - INFO - Using Exchange \"Binance\"\r\n2019-11-24 08:45:34,297 - freqtrade.resolvers.exchange_resolver - INFO - Using resolved exchange 'Binance'...\r\n2019-11-24 08:45:34,528 - freqtrade.wallets - INFO - Wallets synced.\r\n2019-11-24 08:45:34,528 - freqtrade - ERROR - No Pairlist defined!\r\n\r\n-- 
config.json\r\n\r\n{\r\n \"max_open_trades\": 15,\r\n \"stake_currency\": \"BTC\",\r\n \"stake_amount\": 0.0005,\r\n \"fiat_display_currency\": \"PHP\",\r\n \"dry_run\": false,\r\n \"trailing_stop\": false,\r\n \"unfilledtimeout\": {\r\n \"buy\": 10,\r\n \"sell\": 30\r\n },\r\n \"ticker_interval\": \"5m\",\r\n \"stoploss\": -0.99,\r\n \"bid_strategy\": {\r\n \"ask_last_balance\": 0.0,\r\n \"use_order_book\": false,\r\n \"order_book_top\": 1,\r\n \"check_depth_of_market\": {\r\n \"enabled\": false,\r\n \"bids_to_ask_delta\": 1\r\n }\r\n },\r\n \"ask_strategy\":{\r\n \"use_order_book\": false,\r\n \"order_book_min\": 1,\r\n \"order_book_max\": 9,\r\n \"use_sell_signal\": true,\r\n \"sell_profit_only\": false,\r\n \"ignore_roi_if_buy_signal\": false\r\n },\r\n \"exchange\": {\r\n \"name\": \"binance\",\r\n \"key\": \"<binance key>\",\r\n \"secret\": \"<binace secret>\",\r\n \"ccxt_config\": {\"enableRateLimit\": true},\r\n \"ccxt_async_config\": {\r\n \"enableRateLimit\": true,\r\n \"rateLimit\": 500\r\n },\r\n \"pair_whitelist\": [\r\n \"ETH/BTC\",\r\n \"LTC/BTC\",\r\n \"ETC/BTC\",\r\n \"DASH/BTC\",\r\n \"ZEC/BTC\",\r\n \"XLM/BTC\",\r\n \"POWR/BTC\",\r\n \"ADA/BTC\",\r\n \"XMR/BTC\",\r\n \"BNB/BTC\",\r\n \"WTC/BTC\",\r\n \"TRX/BTC\",\r\n \"EOS/BTC\",\r\n \"XVG/BTC\",\r\n \"BAT/BTC\",\r\n \"STORJ/BTC\",\r\n \"QTUM/BTC\",\r\n \"WAVES/BTC\",\r\n \"XRP/BTC\",\r\n \"LSK/BTC\",\r\n \"NEO/BTC\",\r\n \"LINK/BTC\",\r\n \"ONT/BTC\",\r\n \"XEM/BTC\",\r\n\t\t\t\"VET/BTC\",\r\n \"ICX/BTC\",\r\n \"HOT/BTC\"\r\n ],\r\n \"pair_blacklist\": [\r\n \"DOGE/BTC\"\r\n ]\r\n },\r\n \"edge\": {\r\n \"enabled\": false,\r\n \"process_throttle_secs\": 3600,\r\n \"calculate_since_number_of_days\": 7,\r\n \"capital_available_percentage\": 0.5,\r\n \"allowed_risk\": 0.01,\r\n \"stoploss_range_min\": -0.01,\r\n \"stoploss_range_max\": -0.1,\r\n \"stoploss_range_step\": -0.01,\r\n \"minimum_winrate\": 0.60,\r\n \"minimum_expectancy\": 0.20,\r\n \"min_trade_number\": 10,\r\n \"max_trade_duration_minute\": 1440,\r\n \"remove_pumps\": false\r\n },\r\n \"telegram\": {\r\n \"enabled\": true,\r\n \"token\": \"<telegram token>\",\r\n \"chat_id\": \"<telegram chat id>\"\r\n },\r\n \"initial_state\": \"running\",\r\n \"forcebuy_enable\": false,\r\n \"internals\": {\r\n \"process_throttle_secs\": 5\r\n }\r\n}\r\n\n", "before_files": [{"content": "\"\"\"\nFunctions to handle deprecated settings\n\"\"\"\n\nimport logging\nfrom typing import Any, Dict\n\nfrom freqtrade import OperationalException\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef check_conflicting_settings(config: Dict[str, Any],\n section1: str, name1: str,\n section2: str, name2: str):\n section1_config = config.get(section1, {})\n section2_config = config.get(section2, {})\n if name1 in section1_config and name2 in section2_config:\n raise OperationalException(\n f\"Conflicting settings `{section1}.{name1}` and `{section2}.{name2}` \"\n \"(DEPRECATED) detected in the configuration file. \"\n \"This deprecated setting will be removed in the next versions of Freqtrade. \"\n f\"Please delete it from your configuration and use the `{section1}.{name1}` \"\n \"setting instead.\"\n )\n\n\ndef process_deprecated_setting(config: Dict[str, Any],\n section1: str, name1: str,\n section2: str, name2: str):\n section2_config = config.get(section2, {})\n\n if name2 in section2_config:\n logger.warning(\n \"DEPRECATED: \"\n f\"The `{section2}.{name2}` setting is deprecated and \"\n \"will be removed in the next versions of Freqtrade. 
\"\n f\"Please use the `{section1}.{name1}` setting in your configuration instead.\"\n )\n section1_config = config.get(section1, {})\n section1_config[name1] = section2_config[name2]\n\n\ndef process_temporary_deprecated_settings(config: Dict[str, Any]) -> None:\n\n check_conflicting_settings(config, 'ask_strategy', 'use_sell_signal',\n 'experimental', 'use_sell_signal')\n check_conflicting_settings(config, 'ask_strategy', 'sell_profit_only',\n 'experimental', 'sell_profit_only')\n check_conflicting_settings(config, 'ask_strategy', 'ignore_roi_if_buy_signal',\n 'experimental', 'ignore_roi_if_buy_signal')\n\n process_deprecated_setting(config, 'ask_strategy', 'use_sell_signal',\n 'experimental', 'use_sell_signal')\n process_deprecated_setting(config, 'ask_strategy', 'sell_profit_only',\n 'experimental', 'sell_profit_only')\n process_deprecated_setting(config, 'ask_strategy', 'ignore_roi_if_buy_signal',\n 'experimental', 'ignore_roi_if_buy_signal')\n\n if config.get('pairlist', {}).get(\"method\") == 'VolumePairList':\n logger.warning(\n \"DEPRECATED: \"\n f\"Using VolumePairList in pairlist is deprecated and must be moved to pairlists. \"\n \"Please refer to the docs on configuration details\")\n pl = {'method': 'VolumePairList'}\n pl.update(config.get('pairlist', {}).get('config'))\n config['pairlists'].append(pl)\n\n if config.get('pairlist', {}).get('config', {}).get('precision_filter'):\n logger.warning(\n \"DEPRECATED: \"\n f\"Using precision_filter setting is deprecated and has been replaced by\"\n \"PrecisionFilter. Please refer to the docs on configuration details\")\n config['pairlists'].append({'method': 'PrecisionFilter'})\n", "path": "freqtrade/configuration/deprecated_settings.py"}], "after_files": [{"content": "\"\"\"\nFunctions to handle deprecated settings\n\"\"\"\n\nimport logging\nfrom typing import Any, Dict\n\nfrom freqtrade import OperationalException\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef check_conflicting_settings(config: Dict[str, Any],\n section1: str, name1: str,\n section2: str, name2: str):\n section1_config = config.get(section1, {})\n section2_config = config.get(section2, {})\n if name1 in section1_config and name2 in section2_config:\n raise OperationalException(\n f\"Conflicting settings `{section1}.{name1}` and `{section2}.{name2}` \"\n \"(DEPRECATED) detected in the configuration file. \"\n \"This deprecated setting will be removed in the next versions of Freqtrade. \"\n f\"Please delete it from your configuration and use the `{section1}.{name1}` \"\n \"setting instead.\"\n )\n\n\ndef process_deprecated_setting(config: Dict[str, Any],\n section1: str, name1: str,\n section2: str, name2: str):\n section2_config = config.get(section2, {})\n\n if name2 in section2_config:\n logger.warning(\n \"DEPRECATED: \"\n f\"The `{section2}.{name2}` setting is deprecated and \"\n \"will be removed in the next versions of Freqtrade. 
\"\n f\"Please use the `{section1}.{name1}` setting in your configuration instead.\"\n )\n section1_config = config.get(section1, {})\n section1_config[name1] = section2_config[name2]\n\n\ndef process_temporary_deprecated_settings(config: Dict[str, Any]) -> None:\n\n check_conflicting_settings(config, 'ask_strategy', 'use_sell_signal',\n 'experimental', 'use_sell_signal')\n check_conflicting_settings(config, 'ask_strategy', 'sell_profit_only',\n 'experimental', 'sell_profit_only')\n check_conflicting_settings(config, 'ask_strategy', 'ignore_roi_if_buy_signal',\n 'experimental', 'ignore_roi_if_buy_signal')\n\n process_deprecated_setting(config, 'ask_strategy', 'use_sell_signal',\n 'experimental', 'use_sell_signal')\n process_deprecated_setting(config, 'ask_strategy', 'sell_profit_only',\n 'experimental', 'sell_profit_only')\n process_deprecated_setting(config, 'ask_strategy', 'ignore_roi_if_buy_signal',\n 'experimental', 'ignore_roi_if_buy_signal')\n\n if not config.get('pairlists') and not config.get('pairlists'):\n config['pairlists'] = [{'method': 'StaticPairList'}]\n logger.warning(\n \"DEPRECATED: \"\n \"Pairlists must be defined explicitly in the future.\"\n \"Defaulting to StaticPairList for now.\")\n\n if config.get('pairlist', {}).get(\"method\") == 'VolumePairList':\n logger.warning(\n \"DEPRECATED: \"\n f\"Using VolumePairList in pairlist is deprecated and must be moved to pairlists. \"\n \"Please refer to the docs on configuration details\")\n pl = {'method': 'VolumePairList'}\n pl.update(config.get('pairlist', {}).get('config'))\n config['pairlists'].append(pl)\n\n if config.get('pairlist', {}).get('config', {}).get('precision_filter'):\n logger.warning(\n \"DEPRECATED: \"\n f\"Using precision_filter setting is deprecated and has been replaced by\"\n \"PrecisionFilter. Please refer to the docs on configuration details\")\n config['pairlists'].append({'method': 'PrecisionFilter'})\n", "path": "freqtrade/configuration/deprecated_settings.py"}]}
| 4,038 | 189 |
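As a side note on the record above: the golden diff makes the deprecated-settings pass fall back to a `StaticPairList` entry when no `pairlists` section is present, which is what the reporter's configuration was missing. The snippet below is a hedged, standalone sketch of that defaulting behaviour on a plain dict — `apply_default_pairlist` and the sample config are assumptions for illustration, not freqtrade's actual configuration code.

```python
# Standalone sketch (not freqtrade code): default a missing "pairlists" entry to
# StaticPairList, mirroring the behaviour introduced by the golden diff above.
def apply_default_pairlist(config):
    if not config.get("pairlists"):
        config["pairlists"] = [{"method": "StaticPairList"}]
    return config


if __name__ == "__main__":
    # Hypothetical minimal config that only defines an exchange whitelist.
    cfg = {"exchange": {"pair_whitelist": ["ETH/BTC", "LTC/BTC"]}}
    apply_default_pairlist(cfg)
    assert cfg["pairlists"] == [{"method": "StaticPairList"}]
    print(cfg["pairlists"])
```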
gh_patches_debug_7481
|
rasdani/github-patches
|
git_diff
|
frappe__frappe-3268
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
AttributeError: 'SystemSettings' object has no attribute 'enable_password_policy' during `bench restore`
Hello,
I ran `bench update` then tried to restore a backup and this error starts popping up.
It seems it might have come in from 7ccbbce5720bf16d5d3cc94c627e22ef0541e53b
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `frappe/utils/scheduler.py`
Content:
```
1 # Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors
2 # MIT License. See license.txt
3 """
4 Events:
5 always
6 daily
7 monthly
8 weekly
9 """
10
11 from __future__ import unicode_literals, print_function
12
13 import frappe
14 import json
15 import schedule
16 import time
17 import MySQLdb
18 import frappe.utils
19 from frappe.utils import get_sites
20 from datetime import datetime
21 from background_jobs import enqueue, get_jobs, queue_timeout
22 from frappe.limits import has_expired
23 from frappe.utils.data import get_datetime, now_datetime
24 from frappe.core.doctype.user.user import STANDARD_USERS
25 from frappe.installer import update_site_config
26
27 DATETIME_FORMAT = '%Y-%m-%d %H:%M:%S'
28
29 def start_scheduler():
30 '''Run enqueue_events_for_all_sites every 2 minutes (default).
31 Specify scheduler_interval in seconds in common_site_config.json'''
32
33 interval = frappe.get_conf().scheduler_interval or 240
34 schedule.every(interval).seconds.do(enqueue_events_for_all_sites)
35
36 while True:
37 schedule.run_pending()
38 time.sleep(1)
39
40 def enqueue_events_for_all_sites():
41 '''Loop through sites and enqueue events that are not already queued'''
42 with frappe.init_site():
43 jobs_per_site = get_jobs()
44 sites = get_sites()
45
46 for site in sites:
47 try:
48 enqueue_events_for_site(site=site, queued_jobs=jobs_per_site[site])
49 except:
50 # it should try to enqueue other sites
51 print(frappe.get_traceback())
52
53 def enqueue_events_for_site(site, queued_jobs):
54 try:
55 frappe.init(site=site)
56 if frappe.local.conf.maintenance_mode:
57 return
58
59 if frappe.local.conf.pause_scheduler:
60 return
61
62 frappe.connect()
63 if is_scheduler_disabled():
64 return
65
66 enqueue_events(site=site, queued_jobs=queued_jobs)
67
68 frappe.logger(__name__).debug('Queued events for site {0}'.format(site))
69
70 except:
71 frappe.logger(__name__).error('Exception in Enqueue Events for Site {0}'.format(site) +
72 '\n' + frappe.get_traceback())
73 raise
74
75 finally:
76 frappe.destroy()
77
78 def enqueue_events(site, queued_jobs):
79 nowtime = frappe.utils.now_datetime()
80 last = frappe.db.get_value('System Settings', 'System Settings', 'scheduler_last_event')
81
82 # set scheduler last event
83 frappe.db.set_value('System Settings', 'System Settings',
84 'scheduler_last_event', nowtime.strftime(DATETIME_FORMAT),
85 update_modified=False)
86 frappe.db.commit()
87
88 out = []
89 if last:
90 last = datetime.strptime(last, DATETIME_FORMAT)
91 out = enqueue_applicable_events(site, nowtime, last, queued_jobs)
92
93 return '\n'.join(out)
94
95 def enqueue_applicable_events(site, nowtime, last, queued_jobs=()):
96 nowtime_str = nowtime.strftime(DATETIME_FORMAT)
97 out = []
98
99 enabled_events = get_enabled_scheduler_events()
100
101 def trigger_if_enabled(site, event):
102 if event in enabled_events:
103 trigger(site, event, queued_jobs)
104 _log(event)
105
106 def _log(event):
107 out.append("{time} - {event} - queued".format(time=nowtime_str, event=event))
108
109 if nowtime.day != last.day:
110 # if first task of the day execute daily tasks
111 trigger_if_enabled(site, "daily")
112 trigger_if_enabled(site, "daily_long")
113
114 if nowtime.month != last.month:
115 trigger_if_enabled(site, "monthly")
116 trigger_if_enabled(site, "monthly_long")
117
118 if nowtime.weekday()==0:
119 trigger_if_enabled(site, "weekly")
120 trigger_if_enabled(site, "weekly_long")
121
122 if "all" not in enabled_events:
123 trigger(site, "all", queued_jobs)
124
125 if "hourly" not in enabled_events:
126 trigger(site, "hourly", queued_jobs)
127
128 if nowtime.hour != last.hour:
129 trigger_if_enabled(site, "hourly")
130 trigger_if_enabled(site, "hourly_long")
131
132 if "all" not in enabled_events:
133 trigger(site, "all", queued_jobs)
134
135 trigger_if_enabled(site, "all")
136
137 return out
138
139 def trigger(site, event, queued_jobs=(), now=False):
140 """trigger method in hooks.scheduler_events"""
141 queue = 'long' if event.endswith('_long') else 'short'
142 timeout = queue_timeout[queue]
143 if not queued_jobs and not now:
144 queued_jobs = get_jobs(site=site, queue=queue)
145
146 if frappe.flags.in_test:
147 frappe.flags.ran_schedulers.append(event)
148
149 events = get_scheduler_events(event)
150 if not events:
151 return
152
153 for handler in events:
154 if not now:
155 if handler not in queued_jobs:
156 enqueue(handler, queue, timeout, event)
157 else:
158 scheduler_task(site=site, event=event, handler=handler, now=True)
159
160 def get_scheduler_events(event):
161 '''Get scheduler events from hooks and integrations'''
162 scheduler_events = frappe.cache().get_value('scheduler_events')
163 if not scheduler_events:
164 scheduler_events = frappe.get_hooks("scheduler_events")
165 frappe.cache().set_value('scheduler_events', scheduler_events)
166
167 return scheduler_events.get(event) or []
168
169 def log(method, message=None):
170 """log error in patch_log"""
171 message = frappe.utils.cstr(message) + "\n" if message else ""
172 message += frappe.get_traceback()
173
174 if not (frappe.db and frappe.db._conn):
175 frappe.connect()
176
177 frappe.db.rollback()
178 frappe.db.begin()
179
180 d = frappe.new_doc("Error Log")
181 d.method = method
182 d.error = message
183 d.insert(ignore_permissions=True)
184
185 frappe.db.commit()
186
187 return message
188
189 def get_enabled_scheduler_events():
190 if 'enabled_events' in frappe.flags:
191 return frappe.flags.enabled_events
192
193 enabled_events = frappe.db.get_global("enabled_scheduler_events")
194 if enabled_events:
195 if isinstance(enabled_events, basestring):
196 enabled_events = json.loads(enabled_events)
197
198 return enabled_events
199
200 return ["all", "hourly", "hourly_long", "daily", "daily_long",
201 "weekly", "weekly_long", "monthly", "monthly_long"]
202
203 def is_scheduler_disabled():
204 if frappe.conf.disable_scheduler:
205 return True
206
207 return not frappe.utils.cint(frappe.db.get_single_value("System Settings", "enable_scheduler"))
208
209 def toggle_scheduler(enable):
210 ss = frappe.get_doc("System Settings")
211 ss.enable_scheduler = 1 if enable else 0
212 ss.flags.ignore_mandatory = True
213 ss.flags.ignore_permissions = True
214 ss.save()
215
216 def enable_scheduler():
217 toggle_scheduler(True)
218
219 def disable_scheduler():
220 toggle_scheduler(False)
221
222 def get_errors(from_date, to_date, limit):
223 errors = frappe.db.sql("""select modified, method, error from `tabError Log`
224 where date(modified) between %s and %s
225 and error not like '%%[Errno 110] Connection timed out%%'
226 order by modified limit %s""", (from_date, to_date, limit), as_dict=True)
227 return ["""<p>Time: {modified}</p><pre><code>Method: {method}\n{error}</code></pre>""".format(**e)
228 for e in errors]
229
230 def get_error_report(from_date=None, to_date=None, limit=10):
231 from frappe.utils import get_url, now_datetime, add_days
232
233 if not from_date:
234 from_date = add_days(now_datetime().date(), -1)
235 if not to_date:
236 to_date = add_days(now_datetime().date(), -1)
237
238 errors = get_errors(from_date, to_date, limit)
239
240 if errors:
241 return 1, """<h4>Error Logs (max {limit}):</h4>
242 <p>URL: <a href="{url}" target="_blank">{url}</a></p><hr>{errors}""".format(
243 limit=limit, url=get_url(), errors="<hr>".join(errors))
244 else:
245 return 0, "<p>No error logs</p>"
246
247 def scheduler_task(site, event, handler, now=False):
248 '''This is a wrapper function that runs a hooks.scheduler_events method'''
249 frappe.logger(__name__).info('running {handler} for {site} for event: {event}'.format(handler=handler, site=site, event=event))
250 try:
251 if not now:
252 frappe.connect(site=site)
253
254 frappe.flags.in_scheduler = True
255 frappe.get_attr(handler)()
256
257 except Exception:
258 frappe.db.rollback()
259 traceback = log(handler, "Method: {event}, Handler: {handler}".format(event=event, handler=handler))
260 frappe.logger(__name__).error(traceback)
261 raise
262
263 else:
264 frappe.db.commit()
265
266 frappe.logger(__name__).info('ran {handler} for {site} for event: {event}'.format(handler=handler, site=site, event=event))
267
268
269 def reset_enabled_scheduler_events(login_manager):
270 if login_manager.info.user_type == "System User":
271 try:
272 frappe.db.set_global('enabled_scheduler_events', None)
273 except MySQLdb.OperationalError as e:
274 if e.args[0]==1205:
275 frappe.log_error(frappe.get_traceback(), "Error in reset_enabled_scheduler_events")
276 else:
277 raise
278 else:
279 is_dormant = frappe.conf.get('dormant')
280 if is_dormant:
281 update_site_config('dormant', 'None')
282
283 def disable_scheduler_on_expiry():
284 if has_expired():
285 disable_scheduler()
286
287 def restrict_scheduler_events_if_dormant():
288 if is_dormant():
289 restrict_scheduler_events()
290 update_site_config('dormant', True)
291
292 def restrict_scheduler_events(*args, **kwargs):
293 val = json.dumps(["hourly", "hourly_long", "daily", "daily_long", "weekly", "weekly_long", "monthly", "monthly_long"])
294 frappe.db.set_global('enabled_scheduler_events', val)
295
296 def is_dormant(since = 345600):
297 last_active = get_datetime(get_last_active())
298 # Get now without tz info
299 now = now_datetime().replace(tzinfo=None)
300 time_since_last_active = now - last_active
301 if time_since_last_active.total_seconds() > since: # 4 days
302 return True
303 return False
304
305 def get_last_active():
306 return frappe.db.sql("""select max(ifnull(last_active, "2000-01-01 00:00:00")) from `tabUser`
307 where user_type = 'System User' and name not in ({standard_users})"""\
308 .format(standard_users=", ".join(["%s"]*len(STANDARD_USERS))),
309 STANDARD_USERS)[0][0]
310
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/frappe/utils/scheduler.py b/frappe/utils/scheduler.py
--- a/frappe/utils/scheduler.py
+++ b/frappe/utils/scheduler.py
@@ -207,11 +207,7 @@
return not frappe.utils.cint(frappe.db.get_single_value("System Settings", "enable_scheduler"))
def toggle_scheduler(enable):
- ss = frappe.get_doc("System Settings")
- ss.enable_scheduler = 1 if enable else 0
- ss.flags.ignore_mandatory = True
- ss.flags.ignore_permissions = True
- ss.save()
+ frappe.db.set_value("System Settings", None, "enable_scheduler", 1 if enable else 0)
def enable_scheduler():
toggle_scheduler(True)
|
{"golden_diff": "diff --git a/frappe/utils/scheduler.py b/frappe/utils/scheduler.py\n--- a/frappe/utils/scheduler.py\n+++ b/frappe/utils/scheduler.py\n@@ -207,11 +207,7 @@\n \treturn not frappe.utils.cint(frappe.db.get_single_value(\"System Settings\", \"enable_scheduler\"))\n \n def toggle_scheduler(enable):\n-\tss = frappe.get_doc(\"System Settings\")\n-\tss.enable_scheduler = 1 if enable else 0\n-\tss.flags.ignore_mandatory = True\n-\tss.flags.ignore_permissions = True\n-\tss.save()\n+\tfrappe.db.set_value(\"System Settings\", None, \"enable_scheduler\", 1 if enable else 0)\n \n def enable_scheduler():\n \ttoggle_scheduler(True)\n", "issue": "AttributeError: 'SystemSettings' object has no attribute 'enable_password_policy' during `bench restore`\nHello,\r\nI ran `bench update` then tried to restore a backup and this error starts popping up.\r\n\r\nIt seems it might have come in from 7ccbbce5720bf16d5d3cc94c627e22ef0541e53b\n", "before_files": [{"content": "# Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors\n# MIT License. See license.txt\n\"\"\"\nEvents:\n\talways\n\tdaily\n\tmonthly\n\tweekly\n\"\"\"\n\nfrom __future__ import unicode_literals, print_function\n\nimport frappe\nimport json\nimport schedule\nimport time\nimport MySQLdb\nimport frappe.utils\nfrom frappe.utils import get_sites\nfrom datetime import datetime\nfrom background_jobs import enqueue, get_jobs, queue_timeout\nfrom frappe.limits import has_expired\nfrom frappe.utils.data import get_datetime, now_datetime\nfrom frappe.core.doctype.user.user import STANDARD_USERS\nfrom frappe.installer import update_site_config\n\nDATETIME_FORMAT = '%Y-%m-%d %H:%M:%S'\n\ndef start_scheduler():\n\t'''Run enqueue_events_for_all_sites every 2 minutes (default).\n\tSpecify scheduler_interval in seconds in common_site_config.json'''\n\n\tinterval = frappe.get_conf().scheduler_interval or 240\n\tschedule.every(interval).seconds.do(enqueue_events_for_all_sites)\n\n\twhile True:\n\t\tschedule.run_pending()\n\t\ttime.sleep(1)\n\ndef enqueue_events_for_all_sites():\n\t'''Loop through sites and enqueue events that are not already queued'''\n\twith frappe.init_site():\n\t\tjobs_per_site = get_jobs()\n\t\tsites = get_sites()\n\n\tfor site in sites:\n\t\ttry:\n\t\t\tenqueue_events_for_site(site=site, queued_jobs=jobs_per_site[site])\n\t\texcept:\n\t\t\t# it should try to enqueue other sites\n\t\t\tprint(frappe.get_traceback())\n\ndef enqueue_events_for_site(site, queued_jobs):\n\ttry:\n\t\tfrappe.init(site=site)\n\t\tif frappe.local.conf.maintenance_mode:\n\t\t\treturn\n\n\t\tif frappe.local.conf.pause_scheduler:\n\t\t\treturn\n\n\t\tfrappe.connect()\n\t\tif is_scheduler_disabled():\n\t\t\treturn\n\n\t\tenqueue_events(site=site, queued_jobs=queued_jobs)\n\n\t\tfrappe.logger(__name__).debug('Queued events for site {0}'.format(site))\n\n\texcept:\n\t\tfrappe.logger(__name__).error('Exception in Enqueue Events for Site {0}'.format(site) +\n\t\t\t'\\n' + frappe.get_traceback())\n\t\traise\n\n\tfinally:\n\t\tfrappe.destroy()\n\ndef enqueue_events(site, queued_jobs):\n\tnowtime = frappe.utils.now_datetime()\n\tlast = frappe.db.get_value('System Settings', 'System Settings', 'scheduler_last_event')\n\n\t# set scheduler last event\n\tfrappe.db.set_value('System Settings', 'System Settings',\n\t\t'scheduler_last_event', nowtime.strftime(DATETIME_FORMAT),\n\t\tupdate_modified=False)\n\tfrappe.db.commit()\n\n\tout = []\n\tif last:\n\t\tlast = datetime.strptime(last, DATETIME_FORMAT)\n\t\tout = enqueue_applicable_events(site, nowtime, last, 
queued_jobs)\n\n\treturn '\\n'.join(out)\n\ndef enqueue_applicable_events(site, nowtime, last, queued_jobs=()):\n\tnowtime_str = nowtime.strftime(DATETIME_FORMAT)\n\tout = []\n\n\tenabled_events = get_enabled_scheduler_events()\n\n\tdef trigger_if_enabled(site, event):\n\t\tif event in enabled_events:\n\t\t\ttrigger(site, event, queued_jobs)\n\t\t\t_log(event)\n\n\tdef _log(event):\n\t\tout.append(\"{time} - {event} - queued\".format(time=nowtime_str, event=event))\n\n\tif nowtime.day != last.day:\n\t\t# if first task of the day execute daily tasks\n\t\ttrigger_if_enabled(site, \"daily\")\n\t\ttrigger_if_enabled(site, \"daily_long\")\n\n\t\tif nowtime.month != last.month:\n\t\t\ttrigger_if_enabled(site, \"monthly\")\n\t\t\ttrigger_if_enabled(site, \"monthly_long\")\n\n\t\tif nowtime.weekday()==0:\n\t\t\ttrigger_if_enabled(site, \"weekly\")\n\t\t\ttrigger_if_enabled(site, \"weekly_long\")\n\n\t\tif \"all\" not in enabled_events:\n\t\t\ttrigger(site, \"all\", queued_jobs)\n\n\t\tif \"hourly\" not in enabled_events:\n\t\t\ttrigger(site, \"hourly\", queued_jobs)\n\n\tif nowtime.hour != last.hour:\n\t\ttrigger_if_enabled(site, \"hourly\")\n\t\ttrigger_if_enabled(site, \"hourly_long\")\n\n\t\tif \"all\" not in enabled_events:\n\t\t\ttrigger(site, \"all\", queued_jobs)\n\n\ttrigger_if_enabled(site, \"all\")\n\n\treturn out\n\ndef trigger(site, event, queued_jobs=(), now=False):\n\t\"\"\"trigger method in hooks.scheduler_events\"\"\"\n\tqueue = 'long' if event.endswith('_long') else 'short'\n\ttimeout = queue_timeout[queue]\n\tif not queued_jobs and not now:\n\t\tqueued_jobs = get_jobs(site=site, queue=queue)\n\n\tif frappe.flags.in_test:\n\t\tfrappe.flags.ran_schedulers.append(event)\n\n\tevents = get_scheduler_events(event)\n\tif not events:\n\t\treturn\n\n\tfor handler in events:\n\t\tif not now:\n\t\t\tif handler not in queued_jobs:\n\t\t\t\tenqueue(handler, queue, timeout, event)\n\t\telse:\n\t\t\tscheduler_task(site=site, event=event, handler=handler, now=True)\n\ndef get_scheduler_events(event):\n\t'''Get scheduler events from hooks and integrations'''\n\tscheduler_events = frappe.cache().get_value('scheduler_events')\n\tif not scheduler_events:\n\t\tscheduler_events = frappe.get_hooks(\"scheduler_events\")\n\t\tfrappe.cache().set_value('scheduler_events', scheduler_events)\n\n\treturn scheduler_events.get(event) or []\n\ndef log(method, message=None):\n\t\"\"\"log error in patch_log\"\"\"\n\tmessage = frappe.utils.cstr(message) + \"\\n\" if message else \"\"\n\tmessage += frappe.get_traceback()\n\n\tif not (frappe.db and frappe.db._conn):\n\t\tfrappe.connect()\n\n\tfrappe.db.rollback()\n\tfrappe.db.begin()\n\n\td = frappe.new_doc(\"Error Log\")\n\td.method = method\n\td.error = message\n\td.insert(ignore_permissions=True)\n\n\tfrappe.db.commit()\n\n\treturn message\n\ndef get_enabled_scheduler_events():\n\tif 'enabled_events' in frappe.flags:\n\t\treturn frappe.flags.enabled_events\n\n\tenabled_events = frappe.db.get_global(\"enabled_scheduler_events\")\n\tif enabled_events:\n\t\tif isinstance(enabled_events, basestring):\n\t\t\tenabled_events = json.loads(enabled_events)\n\n\t\treturn enabled_events\n\n\treturn [\"all\", \"hourly\", \"hourly_long\", \"daily\", \"daily_long\",\n\t\t\"weekly\", \"weekly_long\", \"monthly\", \"monthly_long\"]\n\ndef is_scheduler_disabled():\n\tif frappe.conf.disable_scheduler:\n\t\treturn True\n\n\treturn not frappe.utils.cint(frappe.db.get_single_value(\"System Settings\", \"enable_scheduler\"))\n\ndef toggle_scheduler(enable):\n\tss = 
frappe.get_doc(\"System Settings\")\n\tss.enable_scheduler = 1 if enable else 0\n\tss.flags.ignore_mandatory = True\n\tss.flags.ignore_permissions = True\n\tss.save()\n\ndef enable_scheduler():\n\ttoggle_scheduler(True)\n\ndef disable_scheduler():\n\ttoggle_scheduler(False)\n\ndef get_errors(from_date, to_date, limit):\n\terrors = frappe.db.sql(\"\"\"select modified, method, error from `tabError Log`\n\t\twhere date(modified) between %s and %s\n\t\tand error not like '%%[Errno 110] Connection timed out%%'\n\t\torder by modified limit %s\"\"\", (from_date, to_date, limit), as_dict=True)\n\treturn [\"\"\"<p>Time: {modified}</p><pre><code>Method: {method}\\n{error}</code></pre>\"\"\".format(**e)\n\t\tfor e in errors]\n\ndef get_error_report(from_date=None, to_date=None, limit=10):\n\tfrom frappe.utils import get_url, now_datetime, add_days\n\n\tif not from_date:\n\t\tfrom_date = add_days(now_datetime().date(), -1)\n\tif not to_date:\n\t\tto_date = add_days(now_datetime().date(), -1)\n\n\terrors = get_errors(from_date, to_date, limit)\n\n\tif errors:\n\t\treturn 1, \"\"\"<h4>Error Logs (max {limit}):</h4>\n\t\t\t<p>URL: <a href=\"{url}\" target=\"_blank\">{url}</a></p><hr>{errors}\"\"\".format(\n\t\t\tlimit=limit, url=get_url(), errors=\"<hr>\".join(errors))\n\telse:\n\t\treturn 0, \"<p>No error logs</p>\"\n\ndef scheduler_task(site, event, handler, now=False):\n\t'''This is a wrapper function that runs a hooks.scheduler_events method'''\n\tfrappe.logger(__name__).info('running {handler} for {site} for event: {event}'.format(handler=handler, site=site, event=event))\n\ttry:\n\t\tif not now:\n\t\t\tfrappe.connect(site=site)\n\n\t\tfrappe.flags.in_scheduler = True\n\t\tfrappe.get_attr(handler)()\n\n\texcept Exception:\n\t\tfrappe.db.rollback()\n\t\ttraceback = log(handler, \"Method: {event}, Handler: {handler}\".format(event=event, handler=handler))\n\t\tfrappe.logger(__name__).error(traceback)\n\t\traise\n\n\telse:\n\t\tfrappe.db.commit()\n\n\tfrappe.logger(__name__).info('ran {handler} for {site} for event: {event}'.format(handler=handler, site=site, event=event))\n\n\ndef reset_enabled_scheduler_events(login_manager):\n\tif login_manager.info.user_type == \"System User\":\n\t\ttry:\n\t\t\tfrappe.db.set_global('enabled_scheduler_events', None)\n\t\texcept MySQLdb.OperationalError as e:\n\t\t\tif e.args[0]==1205:\n\t\t\t\tfrappe.log_error(frappe.get_traceback(), \"Error in reset_enabled_scheduler_events\")\n\t\t\telse:\n\t\t\t\traise\n\t\telse:\n\t\t\tis_dormant = frappe.conf.get('dormant')\n\t\t\tif is_dormant:\n\t\t\t\tupdate_site_config('dormant', 'None')\n\ndef disable_scheduler_on_expiry():\n\tif has_expired():\n\t\tdisable_scheduler()\n\ndef restrict_scheduler_events_if_dormant():\n\tif is_dormant():\n\t\trestrict_scheduler_events()\n\t\tupdate_site_config('dormant', True)\n\ndef restrict_scheduler_events(*args, **kwargs):\n\tval = json.dumps([\"hourly\", \"hourly_long\", \"daily\", \"daily_long\", \"weekly\", \"weekly_long\", \"monthly\", \"monthly_long\"])\n\tfrappe.db.set_global('enabled_scheduler_events', val)\n\ndef is_dormant(since = 345600):\n\tlast_active = get_datetime(get_last_active())\n\t# Get now without tz info\n\tnow = now_datetime().replace(tzinfo=None)\n\ttime_since_last_active = now - last_active\n\tif time_since_last_active.total_seconds() > since: # 4 days\n\t\treturn True\n\treturn False\n\ndef get_last_active():\n\treturn frappe.db.sql(\"\"\"select max(ifnull(last_active, \"2000-01-01 00:00:00\")) from `tabUser`\n\t\twhere user_type = 'System User' and name not in 
({standard_users})\"\"\"\\\n\t\t.format(standard_users=\", \".join([\"%s\"]*len(STANDARD_USERS))),\n\t\tSTANDARD_USERS)[0][0]\n", "path": "frappe/utils/scheduler.py"}], "after_files": [{"content": "# Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors\n# MIT License. See license.txt\n\"\"\"\nEvents:\n\talways\n\tdaily\n\tmonthly\n\tweekly\n\"\"\"\n\nfrom __future__ import unicode_literals, print_function\n\nimport frappe\nimport json\nimport schedule\nimport time\nimport MySQLdb\nimport frappe.utils\nfrom frappe.utils import get_sites\nfrom datetime import datetime\nfrom background_jobs import enqueue, get_jobs, queue_timeout\nfrom frappe.limits import has_expired\nfrom frappe.utils.data import get_datetime, now_datetime\nfrom frappe.core.doctype.user.user import STANDARD_USERS\nfrom frappe.installer import update_site_config\n\nDATETIME_FORMAT = '%Y-%m-%d %H:%M:%S'\n\ndef start_scheduler():\n\t'''Run enqueue_events_for_all_sites every 2 minutes (default).\n\tSpecify scheduler_interval in seconds in common_site_config.json'''\n\n\tinterval = frappe.get_conf().scheduler_interval or 240\n\tschedule.every(interval).seconds.do(enqueue_events_for_all_sites)\n\n\twhile True:\n\t\tschedule.run_pending()\n\t\ttime.sleep(1)\n\ndef enqueue_events_for_all_sites():\n\t'''Loop through sites and enqueue events that are not already queued'''\n\twith frappe.init_site():\n\t\tjobs_per_site = get_jobs()\n\t\tsites = get_sites()\n\n\tfor site in sites:\n\t\ttry:\n\t\t\tenqueue_events_for_site(site=site, queued_jobs=jobs_per_site[site])\n\t\texcept:\n\t\t\t# it should try to enqueue other sites\n\t\t\tprint(frappe.get_traceback())\n\ndef enqueue_events_for_site(site, queued_jobs):\n\ttry:\n\t\tfrappe.init(site=site)\n\t\tif frappe.local.conf.maintenance_mode:\n\t\t\treturn\n\n\t\tif frappe.local.conf.pause_scheduler:\n\t\t\treturn\n\n\t\tfrappe.connect()\n\t\tif is_scheduler_disabled():\n\t\t\treturn\n\n\t\tenqueue_events(site=site, queued_jobs=queued_jobs)\n\n\t\tfrappe.logger(__name__).debug('Queued events for site {0}'.format(site))\n\n\texcept:\n\t\tfrappe.logger(__name__).error('Exception in Enqueue Events for Site {0}'.format(site) +\n\t\t\t'\\n' + frappe.get_traceback())\n\t\traise\n\n\tfinally:\n\t\tfrappe.destroy()\n\ndef enqueue_events(site, queued_jobs):\n\tnowtime = frappe.utils.now_datetime()\n\tlast = frappe.db.get_value('System Settings', 'System Settings', 'scheduler_last_event')\n\n\t# set scheduler last event\n\tfrappe.db.set_value('System Settings', 'System Settings',\n\t\t'scheduler_last_event', nowtime.strftime(DATETIME_FORMAT),\n\t\tupdate_modified=False)\n\tfrappe.db.commit()\n\n\tout = []\n\tif last:\n\t\tlast = datetime.strptime(last, DATETIME_FORMAT)\n\t\tout = enqueue_applicable_events(site, nowtime, last, queued_jobs)\n\n\treturn '\\n'.join(out)\n\ndef enqueue_applicable_events(site, nowtime, last, queued_jobs=()):\n\tnowtime_str = nowtime.strftime(DATETIME_FORMAT)\n\tout = []\n\n\tenabled_events = get_enabled_scheduler_events()\n\n\tdef trigger_if_enabled(site, event):\n\t\tif event in enabled_events:\n\t\t\ttrigger(site, event, queued_jobs)\n\t\t\t_log(event)\n\n\tdef _log(event):\n\t\tout.append(\"{time} - {event} - queued\".format(time=nowtime_str, event=event))\n\n\tif nowtime.day != last.day:\n\t\t# if first task of the day execute daily tasks\n\t\ttrigger_if_enabled(site, \"daily\")\n\t\ttrigger_if_enabled(site, \"daily_long\")\n\n\t\tif nowtime.month != last.month:\n\t\t\ttrigger_if_enabled(site, \"monthly\")\n\t\t\ttrigger_if_enabled(site, 
\"monthly_long\")\n\n\t\tif nowtime.weekday()==0:\n\t\t\ttrigger_if_enabled(site, \"weekly\")\n\t\t\ttrigger_if_enabled(site, \"weekly_long\")\n\n\t\tif \"all\" not in enabled_events:\n\t\t\ttrigger(site, \"all\", queued_jobs)\n\n\t\tif \"hourly\" not in enabled_events:\n\t\t\ttrigger(site, \"hourly\", queued_jobs)\n\n\tif nowtime.hour != last.hour:\n\t\ttrigger_if_enabled(site, \"hourly\")\n\t\ttrigger_if_enabled(site, \"hourly_long\")\n\n\t\tif \"all\" not in enabled_events:\n\t\t\ttrigger(site, \"all\", queued_jobs)\n\n\ttrigger_if_enabled(site, \"all\")\n\n\treturn out\n\ndef trigger(site, event, queued_jobs=(), now=False):\n\t\"\"\"trigger method in hooks.scheduler_events\"\"\"\n\tqueue = 'long' if event.endswith('_long') else 'short'\n\ttimeout = queue_timeout[queue]\n\tif not queued_jobs and not now:\n\t\tqueued_jobs = get_jobs(site=site, queue=queue)\n\n\tif frappe.flags.in_test:\n\t\tfrappe.flags.ran_schedulers.append(event)\n\n\tevents = get_scheduler_events(event)\n\tif not events:\n\t\treturn\n\n\tfor handler in events:\n\t\tif not now:\n\t\t\tif handler not in queued_jobs:\n\t\t\t\tenqueue(handler, queue, timeout, event)\n\t\telse:\n\t\t\tscheduler_task(site=site, event=event, handler=handler, now=True)\n\ndef get_scheduler_events(event):\n\t'''Get scheduler events from hooks and integrations'''\n\tscheduler_events = frappe.cache().get_value('scheduler_events')\n\tif not scheduler_events:\n\t\tscheduler_events = frappe.get_hooks(\"scheduler_events\")\n\t\tfrappe.cache().set_value('scheduler_events', scheduler_events)\n\n\treturn scheduler_events.get(event) or []\n\ndef log(method, message=None):\n\t\"\"\"log error in patch_log\"\"\"\n\tmessage = frappe.utils.cstr(message) + \"\\n\" if message else \"\"\n\tmessage += frappe.get_traceback()\n\n\tif not (frappe.db and frappe.db._conn):\n\t\tfrappe.connect()\n\n\tfrappe.db.rollback()\n\tfrappe.db.begin()\n\n\td = frappe.new_doc(\"Error Log\")\n\td.method = method\n\td.error = message\n\td.insert(ignore_permissions=True)\n\n\tfrappe.db.commit()\n\n\treturn message\n\ndef get_enabled_scheduler_events():\n\tif 'enabled_events' in frappe.flags:\n\t\treturn frappe.flags.enabled_events\n\n\tenabled_events = frappe.db.get_global(\"enabled_scheduler_events\")\n\tif enabled_events:\n\t\tif isinstance(enabled_events, basestring):\n\t\t\tenabled_events = json.loads(enabled_events)\n\n\t\treturn enabled_events\n\n\treturn [\"all\", \"hourly\", \"hourly_long\", \"daily\", \"daily_long\",\n\t\t\"weekly\", \"weekly_long\", \"monthly\", \"monthly_long\"]\n\ndef is_scheduler_disabled():\n\tif frappe.conf.disable_scheduler:\n\t\treturn True\n\n\treturn not frappe.utils.cint(frappe.db.get_single_value(\"System Settings\", \"enable_scheduler\"))\n\ndef toggle_scheduler(enable):\n\tfrappe.db.set_value(\"System Settings\", None, \"enable_scheduler\", 1 if enable else 0)\n\ndef enable_scheduler():\n\ttoggle_scheduler(True)\n\ndef disable_scheduler():\n\ttoggle_scheduler(False)\n\ndef get_errors(from_date, to_date, limit):\n\terrors = frappe.db.sql(\"\"\"select modified, method, error from `tabError Log`\n\t\twhere date(modified) between %s and %s\n\t\tand error not like '%%[Errno 110] Connection timed out%%'\n\t\torder by modified limit %s\"\"\", (from_date, to_date, limit), as_dict=True)\n\treturn [\"\"\"<p>Time: {modified}</p><pre><code>Method: {method}\\n{error}</code></pre>\"\"\".format(**e)\n\t\tfor e in errors]\n\ndef get_error_report(from_date=None, to_date=None, limit=10):\n\tfrom frappe.utils import get_url, now_datetime, add_days\n\n\tif not 
from_date:\n\t\tfrom_date = add_days(now_datetime().date(), -1)\n\tif not to_date:\n\t\tto_date = add_days(now_datetime().date(), -1)\n\n\terrors = get_errors(from_date, to_date, limit)\n\n\tif errors:\n\t\treturn 1, \"\"\"<h4>Error Logs (max {limit}):</h4>\n\t\t\t<p>URL: <a href=\"{url}\" target=\"_blank\">{url}</a></p><hr>{errors}\"\"\".format(\n\t\t\tlimit=limit, url=get_url(), errors=\"<hr>\".join(errors))\n\telse:\n\t\treturn 0, \"<p>No error logs</p>\"\n\ndef scheduler_task(site, event, handler, now=False):\n\t'''This is a wrapper function that runs a hooks.scheduler_events method'''\n\tfrappe.logger(__name__).info('running {handler} for {site} for event: {event}'.format(handler=handler, site=site, event=event))\n\ttry:\n\t\tif not now:\n\t\t\tfrappe.connect(site=site)\n\n\t\tfrappe.flags.in_scheduler = True\n\t\tfrappe.get_attr(handler)()\n\n\texcept Exception:\n\t\tfrappe.db.rollback()\n\t\ttraceback = log(handler, \"Method: {event}, Handler: {handler}\".format(event=event, handler=handler))\n\t\tfrappe.logger(__name__).error(traceback)\n\t\traise\n\n\telse:\n\t\tfrappe.db.commit()\n\n\tfrappe.logger(__name__).info('ran {handler} for {site} for event: {event}'.format(handler=handler, site=site, event=event))\n\n\ndef reset_enabled_scheduler_events(login_manager):\n\tif login_manager.info.user_type == \"System User\":\n\t\ttry:\n\t\t\tfrappe.db.set_global('enabled_scheduler_events', None)\n\t\texcept MySQLdb.OperationalError as e:\n\t\t\tif e.args[0]==1205:\n\t\t\t\tfrappe.log_error(frappe.get_traceback(), \"Error in reset_enabled_scheduler_events\")\n\t\t\telse:\n\t\t\t\traise\n\t\telse:\n\t\t\tis_dormant = frappe.conf.get('dormant')\n\t\t\tif is_dormant:\n\t\t\t\tupdate_site_config('dormant', 'None')\n\ndef disable_scheduler_on_expiry():\n\tif has_expired():\n\t\tdisable_scheduler()\n\ndef restrict_scheduler_events_if_dormant():\n\tif is_dormant():\n\t\trestrict_scheduler_events()\n\t\tupdate_site_config('dormant', True)\n\ndef restrict_scheduler_events(*args, **kwargs):\n\tval = json.dumps([\"hourly\", \"hourly_long\", \"daily\", \"daily_long\", \"weekly\", \"weekly_long\", \"monthly\", \"monthly_long\"])\n\tfrappe.db.set_global('enabled_scheduler_events', val)\n\ndef is_dormant(since = 345600):\n\tlast_active = get_datetime(get_last_active())\n\t# Get now without tz info\n\tnow = now_datetime().replace(tzinfo=None)\n\ttime_since_last_active = now - last_active\n\tif time_since_last_active.total_seconds() > since: # 4 days\n\t\treturn True\n\treturn False\n\ndef get_last_active():\n\treturn frappe.db.sql(\"\"\"select max(ifnull(last_active, \"2000-01-01 00:00:00\")) from `tabUser`\n\t\twhere user_type = 'System User' and name not in ({standard_users})\"\"\"\\\n\t\t.format(standard_users=\", \".join([\"%s\"]*len(STANDARD_USERS))),\n\t\tSTANDARD_USERS)[0][0]\n", "path": "frappe/utils/scheduler.py"}]}
| 3,641 | 161 |
gh_patches_debug_17733
|
rasdani/github-patches
|
git_diff
|
holoviz__panel-5243
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unify JupyterLite Install Instructions
### Description
Instructions on installing Panel in a (Pyodide-based) JupyterLite environment are presently both outdated and broken.
There are two different outdated scripts (for Panel versions <1.0.0) at:
- [ ] [Setting up JupyterLite](https://panel.holoviz.org/how_to/wasm/jupyterlite.html#optimized-wheels-optional)
- [ ] [Installing Panel in the browser](https://panel.holoviz.org/how_to/wasm/standalone.html#pyodide)
If I try to install those into a JupyterLite Pyodide environment, I get:
```
await micropip.install("https://cdn.holoviz.org/panel/0.14.0/wheels/panel-0.14.0-py3-none-any.whl", keep_going=True)
```
```
ValueError: Can't fetch wheel from 'https://cdn.holoviz.org/panel/0.14.0/wheels/panel-0.14.0-py3-none-any.whl'.
One common reason for this is when the server blocks Cross-Origin Resource Sharing (CORS).
Check if the server is sending the correct 'Access-Control-Allow-Origin' header.
```
On the other hand, if I try to install the Bokeh and Panel `py3-none-any` wheels directly from pip, I get an error related to python packages that have not yet been compiled for WASM:
```
micropip.install("https://files.pythonhosted.org/packages/56/98/da78cec88a7c47b761c9b3a18677b5508ef17417184396b3d1361fc811f1/bokeh-3.2.0-py3-none-any.whl", keep_going=True)
```
```
File /lib/python3.11/site-packages/micropip/_micropip.py:580, in install(requirements, keep_going, deps, credentials, pre)
578 if transaction.failed:
579 failed_requirements = ", ".join([f"'{req}'" for req in transaction.failed])
--> 580 raise ValueError(
581 f"Can't find a pure Python 3 wheel for: {failed_requirements}\n"
582 f"See: {FAQ_URLS['cant_find_wheel']}\n"
583 )
585 wheel_promises = []
586 # Install built-in packages
ValueError: Can't find a pure Python 3 wheel for: 'contourpy>=1', 'tornado>=5.1'
See: https://pyodide.org/en/stable/usage/faq.html#micropip-can-t-find-a-pure-python-wheel
```
```
micropip.install("https://files.pythonhosted.org/packages/90/a3/cc9cfdf1b18e5456a0ebd9370baa0a5d58501b4904fa3b3d1ecccbdbd1a2/panel-1.1.1-py2.py3-none-any.whl", keep_going=True)
```
```
File /lib/python3.11/site-packages/micropip/_micropip.py:580, in install(requirements, keep_going, deps, credentials, pre)
578 if transaction.failed:
579 failed_requirements = ", ".join([f"'{req}'" for req in transaction.failed])
--> 580 raise ValueError(
581 f"Can't find a pure Python 3 wheel for: {failed_requirements}\n"
582 f"See: {FAQ_URLS['cant_find_wheel']}\n"
583 )
585 wheel_promises = []
586 # Install built-in packages
ValueError: Can't find a pure Python 3 wheel for: 'contourpy>=1', 'tornado>=5.1'
See: https://pyodide.org/en/stable/usage/faq.html#micropip-can-t-find-a-pure-python-wheel
```
#### Describe the solution you'd like
A working, unified script pointing to the latest version of Panel and Bokeh.
#### Describe alternatives you've considered
N/A
#### Additional context
Try the installation for yourself at https://jupyterlite.readthedocs.io/en/latest/_static/lab/index.html
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `doc/conf.py`
Content:
```
1 import json
2 import os
3 import pathlib
4
5 import param
6
7 param.parameterized.docstring_signature = False
8 param.parameterized.docstring_describe_params = False
9
10 from nbsite.shared_conf import *
11
12 project = 'Panel'
13 authors = 'Panel contributors'
14 copyright_years['start_year'] = '2019'
15 copyright = copyright_fmt.format(**copyright_years)
16 description = 'High-level dashboarding for python visualization libraries'
17
18 import panel
19
20 from panel.io.convert import BOKEH_VERSION, MINIMUM_VERSIONS, PY_VERSION
21 from panel.io.resources import CDN_DIST
22
23 PANEL_ROOT = pathlib.Path(panel.__file__).parent
24
25 version = release = base_version(panel.__version__)
26 js_version = json.loads((PANEL_ROOT / 'package.json').read_text())['version']
27
28 is_dev = any(ext in version for ext in ('a', 'b', 'rc'))
29
30 # For the interactivity warning box created by nbsite to point to the right
31 # git tag instead of the default i.e. main.
32 os.environ['BRANCH'] = f"v{release}"
33
34 html_static_path += ['_static']
35
36 html_css_files += [
37 'css/custom.css',
38 ]
39
40 html_theme = "pydata_sphinx_theme"
41 html_favicon = "_static/icons/favicon.ico"
42
43 html_theme_options = {
44 "logo": {
45 "image_light": "_static/logo_horizontal_light_theme.png",
46 "image_dark": "_static/logo_horizontal_dark_theme.png",
47 },
48 "github_url": "https://github.com/holoviz/panel",
49 "icon_links": [
50 {
51 "name": "Twitter",
52 "url": "https://twitter.com/Panel_Org",
53 "icon": "fa-brands fa-twitter-square",
54 },
55 {
56 "name": "Discourse",
57 "url": "https://discourse.holoviz.org/c/panel/5",
58 "icon": "fa-brands fa-discourse",
59 },
60 {
61 "name": "Discord",
62 "url": "https://discord.gg/UXdtYyGVQX",
63 "icon": "fa-brands fa-discord",
64 },
65 ],
66 "analytics": {"google_analytics_id": "G-L0C8PGT2LM"},
67 "pygment_light_style": "material",
68 "pygment_dark_style": "material",
69 "header_links_before_dropdown": 5,
70 'secondary_sidebar_items': [
71 "github-stars-button",
72 "panelitelink",
73 "page-toc",
74 ],
75 }
76
77 extensions += [
78 'sphinx.ext.napoleon',
79 'nbsite.gallery',
80 'sphinx_copybutton',
81 'nbsite.pyodide'
82 ]
83 napoleon_numpy_docstring = True
84
85 myst_enable_extensions = ["colon_fence", "deflist"]
86
87 gallery_endpoint = 'panel-gallery-dev' if is_dev else 'panel-gallery'
88 gallery_url = f'https://{gallery_endpoint}.pyviz.demo.anaconda.com'
89 jlite_url = 'https://pyviz-dev.github.io/panelite-dev' if is_dev else 'https://panelite.holoviz.org'
90 pyodide_url = 'https://pyviz-dev.github.io/panel/pyodide' if is_dev else 'https://panel.holoviz.org/pyodide'
91
92 nbsite_gallery_conf = {
93 'github_org': 'holoviz',
94 'github_project': 'panel',
95 'galleries': {
96 'reference': {
97 'title': 'Component Gallery',
98 'sections': [
99 'panes',
100 'layouts',
101 'templates',
102 'global',
103 'indicators',
104 'widgets',
105 ],
106 'titles': {
107 'Vega': 'Altair & Vega',
108 'DeckGL': 'PyDeck & Deck.gl',
109 'ECharts': 'PyEcharts & ECharts',
110 'IPyWidget': 'ipywidgets'
111 },
112 'as_pyodide': True,
113 'normalize_titles': False
114 }
115 },
116 'thumbnail_url': 'https://assets.holoviz.org/panel/thumbnails',
117 'deployment_url': gallery_url,
118 'jupyterlite_url': jlite_url,
119 }
120
121 if panel.__version__ != version and (PANEL_ROOT / 'dist' / 'wheels').is_dir():
122 py_version = panel.__version__.replace("-dirty", "")
123 panel_req = f'./wheels/panel-{py_version}-py3-none-any.whl'
124 bokeh_req = f'./wheels/bokeh-{BOKEH_VERSION}-py3-none-any.whl'
125 else:
126 panel_req = f'{CDN_DIST}wheels/panel-{PY_VERSION}-py3-none-any.whl'
127 bokeh_req = f'{CDN_DIST}wheels/bokeh-{BOKEH_VERSION}-py3-none-any.whl'
128
129 def get_requirements():
130 with open('pyodide_dependencies.json') as deps:
131 dependencies = json.load(deps)
132 requirements = {}
133 for src, deps in dependencies.items():
134 if deps is None:
135 continue
136 src = src.replace('.ipynb', '').replace('.md', '')
137 for name, min_version in MINIMUM_VERSIONS.items():
138 if any(name in req for req in deps):
139 deps = [f'{name}>={min_version}' if name in req else req for req in deps]
140 requirements[src] = deps
141 return requirements
142
143 nbsite_pyodide_conf = {
144 'PYODIDE_URL': 'https://cdn.jsdelivr.net/pyodide/v0.23.1/full/pyodide.js',
145 'requirements': [bokeh_req, panel_req, 'pyodide-http'],
146 'requires': get_requirements()
147 }
148
149 templates_path += [
150 '_templates'
151 ]
152
153 html_context.update({
154 "last_release": f"v{release}",
155 "github_user": "holoviz",
156 "github_repo": "panel",
157 "default_mode": "light",
158 "panelite_endpoint": jlite_url,
159 "gallery_url": gallery_url,
160 "pyodide_url": pyodide_url
161 })
162
163 nbbuild_patterns_to_take_along = ["simple.html", "*.json", "json_*"]
164
165 # Override the Sphinx default title that appends `documentation`
166 html_title = f'{project} v{version}'
167
168
169 # Patching GridItemCardDirective to be able to substitute the domain name
170 # in the link option.
171 from sphinx_design.cards import CardDirective
172 from sphinx_design.grids import GridItemCardDirective
173
174 orig_grid_run = GridItemCardDirective.run
175
176 def patched_grid_run(self):
177 app = self.state.document.settings.env.app
178 existing_link = self.options.get('link')
179 domain = getattr(app.config, 'grid_item_link_domain', None)
180 if self.has_content:
181 self.content.replace('|gallery-endpoint|', domain)
182 if existing_link and domain:
183 new_link = existing_link.replace('|gallery-endpoint|', domain)
184 self.options['link'] = new_link
185 return list(orig_grid_run(self))
186
187 GridItemCardDirective.run = patched_grid_run
188
189 orig_card_run = CardDirective.run
190
191 def patched_card_run(self):
192 app = self.state.document.settings.env.app
193 existing_link = self.options.get('link')
194 domain = getattr(app.config, 'grid_item_link_domain', None)
195 if existing_link and domain:
196 new_link = existing_link.replace('|gallery-endpoint|', domain)
197 self.options['link'] = new_link
198 return orig_card_run(self)
199
200 CardDirective.run = patched_card_run
201
202 def setup(app) -> None:
203 try:
204 from nbsite.paramdoc import param_formatter, param_skip
205 app.connect('autodoc-process-docstring', param_formatter)
206 app.connect('autodoc-skip-member', param_skip)
207 except ImportError:
208 print('no param_formatter (no param?)')
209
210 nbbuild.setup(app)
211 app.add_config_value('grid_item_link_domain', '', 'html')
212
213 grid_item_link_domain = gallery_endpoint
214
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/doc/conf.py b/doc/conf.py
--- a/doc/conf.py
+++ b/doc/conf.py
@@ -199,6 +199,19 @@
CardDirective.run = patched_card_run
+def update_versions(app, docname, source):
+ # Inspired by: https://stackoverflow.com/questions/8821511
+ version_replace = {
+ "{{PANEL_VERSION}}" : PY_VERSION,
+ "{{BOKEH_VERSION}}" : BOKEH_VERSION,
+ "{{PYSCRIPT_VERSION}}" : "2022.12.1",
+ "{{PYODIDE_VERSION}}" : "0.23.4",
+ }
+
+ for old, new in version_replace.items():
+ source[0] = source[0].replace(old, new)
+
+
def setup(app) -> None:
try:
from nbsite.paramdoc import param_formatter, param_skip
@@ -207,6 +220,7 @@
except ImportError:
print('no param_formatter (no param?)')
+ app.connect('source-read', update_versions)
nbbuild.setup(app)
app.add_config_value('grid_item_link_domain', '', 'html')
|
{"golden_diff": "diff --git a/doc/conf.py b/doc/conf.py\n--- a/doc/conf.py\n+++ b/doc/conf.py\n@@ -199,6 +199,19 @@\n \n CardDirective.run = patched_card_run\n \n+def update_versions(app, docname, source):\n+ # Inspired by: https://stackoverflow.com/questions/8821511\n+ version_replace = {\n+ \"{{PANEL_VERSION}}\" : PY_VERSION,\n+ \"{{BOKEH_VERSION}}\" : BOKEH_VERSION,\n+ \"{{PYSCRIPT_VERSION}}\" : \"2022.12.1\",\n+ \"{{PYODIDE_VERSION}}\" : \"0.23.4\",\n+ }\n+\n+ for old, new in version_replace.items():\n+ source[0] = source[0].replace(old, new)\n+\n+\n def setup(app) -> None:\n try:\n from nbsite.paramdoc import param_formatter, param_skip\n@@ -207,6 +220,7 @@\n except ImportError:\n print('no param_formatter (no param?)')\n \n+ app.connect('source-read', update_versions)\n nbbuild.setup(app)\n app.add_config_value('grid_item_link_domain', '', 'html')\n", "issue": "Unify JupyterLite Install Instructions\n### Description\r\n\r\nInstructions on installing Panel in a (Pyodide-based) JupyterLite environment are presently both outdated and broken.\r\n\r\nThere are two different outdated scripts (for Panel versions <1.0.0) at:\r\n\r\n- [ ] [Setting up JupyterLite](https://panel.holoviz.org/how_to/wasm/jupyterlite.html#optimized-wheels-optional)\r\n- [ ] [Installing Panel in the browser](https://panel.holoviz.org/how_to/wasm/standalone.html#pyodide)\r\n\r\nIf I try to install those into a JupyterLite Pyodide environment, I get:\r\n\r\n```\r\nawait micropip.install(\"https://cdn.holoviz.org/panel/0.14.0/wheels/panel-0.14.0-py3-none-any.whl\", keep_going=True)\r\n```\r\n\r\n```\r\nValueError: Can't fetch wheel from 'https://cdn.holoviz.org/panel/0.14.0/wheels/panel-0.14.0-py3-none-any.whl'.\r\nOne common reason for this is when the server blocks Cross-Origin Resource Sharing (CORS).\r\nCheck if the server is sending the correct 'Access-Control-Allow-Origin' header.\r\n```\r\n\r\nOn the other hand, if I try to install the Bokeh and Panel `py3-none-any` wheels directly from pip, I get an error related to python packages that have not yet been compiled for WASM:\r\n\r\n```\r\nmicropip.install(\"https://files.pythonhosted.org/packages/56/98/da78cec88a7c47b761c9b3a18677b5508ef17417184396b3d1361fc811f1/bokeh-3.2.0-py3-none-any.whl\", keep_going=True)\r\n```\r\n\r\n```\r\nFile /lib/python3.11/site-packages/micropip/_micropip.py:580, in install(requirements, keep_going, deps, credentials, pre)\r\n 578 if transaction.failed:\r\n 579 failed_requirements = \", \".join([f\"'{req}'\" for req in transaction.failed])\r\n--> 580 raise ValueError(\r\n 581 f\"Can't find a pure Python 3 wheel for: {failed_requirements}\\n\"\r\n 582 f\"See: {FAQ_URLS['cant_find_wheel']}\\n\"\r\n 583 )\r\n 585 wheel_promises = []\r\n 586 # Install built-in packages\r\n\r\nValueError: Can't find a pure Python 3 wheel for: 'contourpy>=1', 'tornado>=5.1'\r\nSee: https://pyodide.org/en/stable/usage/faq.html#micropip-can-t-find-a-pure-python-wheel\r\n```\r\n\r\n```\r\nmicropip.install(\"https://files.pythonhosted.org/packages/90/a3/cc9cfdf1b18e5456a0ebd9370baa0a5d58501b4904fa3b3d1ecccbdbd1a2/panel-1.1.1-py2.py3-none-any.whl\", keep_going=True)\r\n```\r\n\r\n```\r\nFile /lib/python3.11/site-packages/micropip/_micropip.py:580, in install(requirements, keep_going, deps, credentials, pre)\r\n 578 if transaction.failed:\r\n 579 failed_requirements = \", \".join([f\"'{req}'\" for req in transaction.failed])\r\n--> 580 raise ValueError(\r\n 581 f\"Can't find a pure Python 3 wheel for: {failed_requirements}\\n\"\r\n 582 f\"See: 
{FAQ_URLS['cant_find_wheel']}\\n\"\r\n 583 )\r\n 585 wheel_promises = []\r\n 586 # Install built-in packages\r\n\r\nValueError: Can't find a pure Python 3 wheel for: 'contourpy>=1', 'tornado>=5.1'\r\nSee: https://pyodide.org/en/stable/usage/faq.html#micropip-can-t-find-a-pure-python-wheel\r\n```\r\n\r\n\r\n#### Describe the solution you'd like\r\n\r\nA working, unified script pointing to the latest version of Panel and Bokeh.\r\n\r\n#### Describe alternatives you've considered\r\n\r\nN/A\r\n\r\n#### Additional context\r\n\r\nTry the installation for yourself at https://jupyterlite.readthedocs.io/en/latest/_static/lab/index.html\r\n\n", "before_files": [{"content": "import json\nimport os\nimport pathlib\n\nimport param\n\nparam.parameterized.docstring_signature = False\nparam.parameterized.docstring_describe_params = False\n\nfrom nbsite.shared_conf import *\n\nproject = 'Panel'\nauthors = 'Panel contributors'\ncopyright_years['start_year'] = '2019'\ncopyright = copyright_fmt.format(**copyright_years)\ndescription = 'High-level dashboarding for python visualization libraries'\n\nimport panel\n\nfrom panel.io.convert import BOKEH_VERSION, MINIMUM_VERSIONS, PY_VERSION\nfrom panel.io.resources import CDN_DIST\n\nPANEL_ROOT = pathlib.Path(panel.__file__).parent\n\nversion = release = base_version(panel.__version__)\njs_version = json.loads((PANEL_ROOT / 'package.json').read_text())['version']\n\nis_dev = any(ext in version for ext in ('a', 'b', 'rc'))\n\n# For the interactivity warning box created by nbsite to point to the right\n# git tag instead of the default i.e. main.\nos.environ['BRANCH'] = f\"v{release}\"\n\nhtml_static_path += ['_static']\n\nhtml_css_files += [\n 'css/custom.css',\n]\n\nhtml_theme = \"pydata_sphinx_theme\"\nhtml_favicon = \"_static/icons/favicon.ico\"\n\nhtml_theme_options = {\n \"logo\": {\n \"image_light\": \"_static/logo_horizontal_light_theme.png\",\n \"image_dark\": \"_static/logo_horizontal_dark_theme.png\",\n },\n \"github_url\": \"https://github.com/holoviz/panel\",\n \"icon_links\": [\n {\n \"name\": \"Twitter\",\n \"url\": \"https://twitter.com/Panel_Org\",\n \"icon\": \"fa-brands fa-twitter-square\",\n },\n {\n \"name\": \"Discourse\",\n \"url\": \"https://discourse.holoviz.org/c/panel/5\",\n \"icon\": \"fa-brands fa-discourse\",\n },\n {\n \"name\": \"Discord\",\n \"url\": \"https://discord.gg/UXdtYyGVQX\",\n \"icon\": \"fa-brands fa-discord\",\n },\n ],\n \"analytics\": {\"google_analytics_id\": \"G-L0C8PGT2LM\"},\n \"pygment_light_style\": \"material\",\n \"pygment_dark_style\": \"material\",\n \"header_links_before_dropdown\": 5,\n 'secondary_sidebar_items': [\n \"github-stars-button\",\n \"panelitelink\",\n \"page-toc\",\n ],\n}\n\nextensions += [\n 'sphinx.ext.napoleon',\n 'nbsite.gallery',\n 'sphinx_copybutton',\n 'nbsite.pyodide'\n]\nnapoleon_numpy_docstring = True\n\nmyst_enable_extensions = [\"colon_fence\", \"deflist\"]\n\ngallery_endpoint = 'panel-gallery-dev' if is_dev else 'panel-gallery'\ngallery_url = f'https://{gallery_endpoint}.pyviz.demo.anaconda.com'\njlite_url = 'https://pyviz-dev.github.io/panelite-dev' if is_dev else 'https://panelite.holoviz.org'\npyodide_url = 'https://pyviz-dev.github.io/panel/pyodide' if is_dev else 'https://panel.holoviz.org/pyodide'\n\nnbsite_gallery_conf = {\n 'github_org': 'holoviz',\n 'github_project': 'panel',\n 'galleries': {\n 'reference': {\n 'title': 'Component Gallery',\n 'sections': [\n 'panes',\n 'layouts',\n 'templates',\n 'global',\n 'indicators',\n 'widgets',\n ],\n 'titles': {\n 'Vega': 'Altair & 
Vega',\n 'DeckGL': 'PyDeck & Deck.gl',\n 'ECharts': 'PyEcharts & ECharts',\n 'IPyWidget': 'ipywidgets'\n },\n 'as_pyodide': True,\n 'normalize_titles': False\n }\n },\n 'thumbnail_url': 'https://assets.holoviz.org/panel/thumbnails',\n 'deployment_url': gallery_url,\n 'jupyterlite_url': jlite_url,\n}\n\nif panel.__version__ != version and (PANEL_ROOT / 'dist' / 'wheels').is_dir():\n py_version = panel.__version__.replace(\"-dirty\", \"\")\n panel_req = f'./wheels/panel-{py_version}-py3-none-any.whl'\n bokeh_req = f'./wheels/bokeh-{BOKEH_VERSION}-py3-none-any.whl'\nelse:\n panel_req = f'{CDN_DIST}wheels/panel-{PY_VERSION}-py3-none-any.whl'\n bokeh_req = f'{CDN_DIST}wheels/bokeh-{BOKEH_VERSION}-py3-none-any.whl'\n\ndef get_requirements():\n with open('pyodide_dependencies.json') as deps:\n dependencies = json.load(deps)\n requirements = {}\n for src, deps in dependencies.items():\n if deps is None:\n continue\n src = src.replace('.ipynb', '').replace('.md', '')\n for name, min_version in MINIMUM_VERSIONS.items():\n if any(name in req for req in deps):\n deps = [f'{name}>={min_version}' if name in req else req for req in deps]\n requirements[src] = deps\n return requirements\n\nnbsite_pyodide_conf = {\n 'PYODIDE_URL': 'https://cdn.jsdelivr.net/pyodide/v0.23.1/full/pyodide.js',\n 'requirements': [bokeh_req, panel_req, 'pyodide-http'],\n 'requires': get_requirements()\n}\n\ntemplates_path += [\n '_templates'\n]\n\nhtml_context.update({\n \"last_release\": f\"v{release}\",\n \"github_user\": \"holoviz\",\n \"github_repo\": \"panel\",\n \"default_mode\": \"light\",\n \"panelite_endpoint\": jlite_url,\n \"gallery_url\": gallery_url,\n \"pyodide_url\": pyodide_url\n})\n\nnbbuild_patterns_to_take_along = [\"simple.html\", \"*.json\", \"json_*\"]\n\n# Override the Sphinx default title that appends `documentation`\nhtml_title = f'{project} v{version}'\n\n\n# Patching GridItemCardDirective to be able to substitute the domain name\n# in the link option.\nfrom sphinx_design.cards import CardDirective\nfrom sphinx_design.grids import GridItemCardDirective\n\norig_grid_run = GridItemCardDirective.run\n\ndef patched_grid_run(self):\n app = self.state.document.settings.env.app\n existing_link = self.options.get('link')\n domain = getattr(app.config, 'grid_item_link_domain', None)\n if self.has_content:\n self.content.replace('|gallery-endpoint|', domain)\n if existing_link and domain:\n new_link = existing_link.replace('|gallery-endpoint|', domain)\n self.options['link'] = new_link\n return list(orig_grid_run(self))\n\nGridItemCardDirective.run = patched_grid_run\n\norig_card_run = CardDirective.run\n\ndef patched_card_run(self):\n app = self.state.document.settings.env.app\n existing_link = self.options.get('link')\n domain = getattr(app.config, 'grid_item_link_domain', None)\n if existing_link and domain:\n new_link = existing_link.replace('|gallery-endpoint|', domain)\n self.options['link'] = new_link\n return orig_card_run(self)\n\nCardDirective.run = patched_card_run\n\ndef setup(app) -> None:\n try:\n from nbsite.paramdoc import param_formatter, param_skip\n app.connect('autodoc-process-docstring', param_formatter)\n app.connect('autodoc-skip-member', param_skip)\n except ImportError:\n print('no param_formatter (no param?)')\n\n nbbuild.setup(app)\n app.add_config_value('grid_item_link_domain', '', 'html')\n\ngrid_item_link_domain = gallery_endpoint\n", "path": "doc/conf.py"}], "after_files": [{"content": "import json\nimport os\nimport pathlib\n\nimport param\n\nparam.parameterized.docstring_signature 
= False\nparam.parameterized.docstring_describe_params = False\n\nfrom nbsite.shared_conf import *\n\nproject = 'Panel'\nauthors = 'Panel contributors'\ncopyright_years['start_year'] = '2019'\ncopyright = copyright_fmt.format(**copyright_years)\ndescription = 'High-level dashboarding for python visualization libraries'\n\nimport panel\n\nfrom panel.io.convert import BOKEH_VERSION, MINIMUM_VERSIONS, PY_VERSION\nfrom panel.io.resources import CDN_DIST\n\nPANEL_ROOT = pathlib.Path(panel.__file__).parent\n\nversion = release = base_version(panel.__version__)\njs_version = json.loads((PANEL_ROOT / 'package.json').read_text())['version']\n\nis_dev = any(ext in version for ext in ('a', 'b', 'rc'))\n\n# For the interactivity warning box created by nbsite to point to the right\n# git tag instead of the default i.e. main.\nos.environ['BRANCH'] = f\"v{release}\"\n\nhtml_static_path += ['_static']\n\nhtml_css_files += [\n 'css/custom.css',\n]\n\nhtml_theme = \"pydata_sphinx_theme\"\nhtml_favicon = \"_static/icons/favicon.ico\"\n\nhtml_theme_options = {\n \"logo\": {\n \"image_light\": \"_static/logo_horizontal_light_theme.png\",\n \"image_dark\": \"_static/logo_horizontal_dark_theme.png\",\n },\n \"github_url\": \"https://github.com/holoviz/panel\",\n \"icon_links\": [\n {\n \"name\": \"Twitter\",\n \"url\": \"https://twitter.com/Panel_Org\",\n \"icon\": \"fa-brands fa-twitter-square\",\n },\n {\n \"name\": \"Discourse\",\n \"url\": \"https://discourse.holoviz.org/c/panel/5\",\n \"icon\": \"fa-brands fa-discourse\",\n },\n {\n \"name\": \"Discord\",\n \"url\": \"https://discord.gg/UXdtYyGVQX\",\n \"icon\": \"fa-brands fa-discord\",\n },\n ],\n \"analytics\": {\"google_analytics_id\": \"G-L0C8PGT2LM\"},\n \"pygment_light_style\": \"material\",\n \"pygment_dark_style\": \"material\",\n \"header_links_before_dropdown\": 5,\n 'secondary_sidebar_items': [\n \"github-stars-button\",\n \"panelitelink\",\n \"page-toc\",\n ],\n}\n\nextensions += [\n 'sphinx.ext.napoleon',\n 'nbsite.gallery',\n 'sphinx_copybutton',\n 'nbsite.pyodide'\n]\nnapoleon_numpy_docstring = True\n\nmyst_enable_extensions = [\"colon_fence\", \"deflist\"]\n\ngallery_endpoint = 'panel-gallery-dev' if is_dev else 'panel-gallery'\ngallery_url = f'https://{gallery_endpoint}.pyviz.demo.anaconda.com'\njlite_url = 'https://pyviz-dev.github.io/panelite-dev' if is_dev else 'https://panelite.holoviz.org'\npyodide_url = 'https://pyviz-dev.github.io/panel/pyodide' if is_dev else 'https://panel.holoviz.org/pyodide'\n\nnbsite_gallery_conf = {\n 'github_org': 'holoviz',\n 'github_project': 'panel',\n 'galleries': {\n 'reference': {\n 'title': 'Component Gallery',\n 'sections': [\n 'panes',\n 'layouts',\n 'templates',\n 'global',\n 'indicators',\n 'widgets',\n ],\n 'titles': {\n 'Vega': 'Altair & Vega',\n 'DeckGL': 'PyDeck & Deck.gl',\n 'ECharts': 'PyEcharts & ECharts',\n 'IPyWidget': 'ipywidgets'\n },\n 'as_pyodide': True,\n 'normalize_titles': False\n }\n },\n 'thumbnail_url': 'https://assets.holoviz.org/panel/thumbnails',\n 'deployment_url': gallery_url,\n 'jupyterlite_url': jlite_url,\n}\n\nif panel.__version__ != version and (PANEL_ROOT / 'dist' / 'wheels').is_dir():\n py_version = panel.__version__.replace(\"-dirty\", \"\")\n panel_req = f'./wheels/panel-{py_version}-py3-none-any.whl'\n bokeh_req = f'./wheels/bokeh-{BOKEH_VERSION}-py3-none-any.whl'\nelse:\n panel_req = f'{CDN_DIST}wheels/panel-{PY_VERSION}-py3-none-any.whl'\n bokeh_req = f'{CDN_DIST}wheels/bokeh-{BOKEH_VERSION}-py3-none-any.whl'\n\ndef get_requirements():\n with 
open('pyodide_dependencies.json') as deps:\n dependencies = json.load(deps)\n requirements = {}\n for src, deps in dependencies.items():\n if deps is None:\n continue\n src = src.replace('.ipynb', '').replace('.md', '')\n for name, min_version in MINIMUM_VERSIONS.items():\n if any(name in req for req in deps):\n deps = [f'{name}>={min_version}' if name in req else req for req in deps]\n requirements[src] = deps\n return requirements\n\nnbsite_pyodide_conf = {\n 'PYODIDE_URL': 'https://cdn.jsdelivr.net/pyodide/v0.23.1/full/pyodide.js',\n 'requirements': [bokeh_req, panel_req, 'pyodide-http'],\n 'requires': get_requirements()\n}\n\ntemplates_path += [\n '_templates'\n]\n\nhtml_context.update({\n \"last_release\": f\"v{release}\",\n \"github_user\": \"holoviz\",\n \"github_repo\": \"panel\",\n \"default_mode\": \"light\",\n \"panelite_endpoint\": jlite_url,\n \"gallery_url\": gallery_url,\n \"pyodide_url\": pyodide_url\n})\n\nnbbuild_patterns_to_take_along = [\"simple.html\", \"*.json\", \"json_*\"]\n\n# Override the Sphinx default title that appends `documentation`\nhtml_title = f'{project} v{version}'\n\n\n# Patching GridItemCardDirective to be able to substitute the domain name\n# in the link option.\nfrom sphinx_design.cards import CardDirective\nfrom sphinx_design.grids import GridItemCardDirective\n\norig_grid_run = GridItemCardDirective.run\n\ndef patched_grid_run(self):\n app = self.state.document.settings.env.app\n existing_link = self.options.get('link')\n domain = getattr(app.config, 'grid_item_link_domain', None)\n if self.has_content:\n self.content.replace('|gallery-endpoint|', domain)\n if existing_link and domain:\n new_link = existing_link.replace('|gallery-endpoint|', domain)\n self.options['link'] = new_link\n return list(orig_grid_run(self))\n\nGridItemCardDirective.run = patched_grid_run\n\norig_card_run = CardDirective.run\n\ndef patched_card_run(self):\n app = self.state.document.settings.env.app\n existing_link = self.options.get('link')\n domain = getattr(app.config, 'grid_item_link_domain', None)\n if existing_link and domain:\n new_link = existing_link.replace('|gallery-endpoint|', domain)\n self.options['link'] = new_link\n return orig_card_run(self)\n\nCardDirective.run = patched_card_run\n\ndef update_versions(app, docname, source):\n # Inspired by: https://stackoverflow.com/questions/8821511\n version_replace = {\n \"{{PANEL_VERSION}}\" : PY_VERSION,\n \"{{BOKEH_VERSION}}\" : BOKEH_VERSION,\n \"{{PYSCRIPT_VERSION}}\" : \"2022.12.1\",\n \"{{PYODIDE_VERSION}}\" : \"0.23.4\",\n }\n\n for old, new in version_replace.items():\n source[0] = source[0].replace(old, new)\n\n\ndef setup(app) -> None:\n try:\n from nbsite.paramdoc import param_formatter, param_skip\n app.connect('autodoc-process-docstring', param_formatter)\n app.connect('autodoc-skip-member', param_skip)\n except ImportError:\n print('no param_formatter (no param?)')\n\n app.connect('source-read', update_versions)\n nbbuild.setup(app)\n app.add_config_value('grid_item_link_domain', '', 'html')\n\ngrid_item_link_domain = gallery_endpoint\n", "path": "doc/conf.py"}]}
| 3,502 | 264 |
gh_patches_debug_9300
|
rasdani/github-patches
|
git_diff
|
ietf-tools__datatracker-4546
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Many tests failing with YangModelExtractor error
### Describe the issue
Starting with [test run 790](https://github.com/ietf-tools/datatracker/actions/runs/3172961553), a few dozen tests are failing with the error
```
AttributeError: 'YangModuleExtractor' object has no attribute 'extract_yang_model'
```
The last passing test run was a couple hours before the release of [xym 0.6.0](https://pypi.org/project/xym/0.6.0/#history), so it seems likely this is related.
### Code of Conduct
- [X] I agree to follow the [IETF's Code of Conduct](https://github.com/ietf-tools/.github/blob/main/CODE_OF_CONDUCT.md)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `ietf/submit/checkers.py`
Content:
```
1 # Copyright The IETF Trust 2016-2020, All Rights Reserved
2 # -*- coding: utf-8 -*-
3
4
5 import io
6 import os
7 import re
8 import shutil
9 import sys
10 import tempfile
11
12 from xym import xym
13 from django.conf import settings
14
15 import debug # pyflakes:ignore
16
17 from ietf.utils.log import log, assertion
18 from ietf.utils.models import VersionInfo
19 from ietf.utils.pipe import pipe
20 from ietf.utils.test_runner import set_coverage_checking
21
22 class DraftSubmissionChecker(object):
23 name = ""
24
25 def check_file_txt(self, text):
26 "Run checks on a text file"
27 raise NotImplementedError
28
29 def check_file_xml(self, xml):
30 "Run checks on an xml file"
31 raise NotImplementedError
32
33 def check_fragment_txt(self, text):
34 "Run checks on a fragment from a text file"
35 raise NotImplementedError
36
37 def check_fragment_xml(self, xml):
38 "Run checks on a fragment from an xml file"
39 raise NotImplementedError
40
41
42 class DraftIdnitsChecker(object):
43 """
44 Draft checker class for idnits. Idnits can only handle whole text files,
45 so only check_file_txt() is defined; check_file_xml and check_fragment_*
46 methods are undefined.
47
48 Furthermore, idnits doesn't provide an error code or line-by-line errors,
49 so a bit of massage is needed in order to return the expected failure flag.
50 """
51 name = "idnits check"
52
53 # start using this when we provide more in the way of warnings during
54 # submission checking:
55 # symbol = '<span class="bi bi-check-square"></span>'
56 # symbol = u'<span class="large">\ua17d</span>' # Yi syllable 'nit'
57 # symbol = u'<span class="large">\ub2e1</span>' # Hangul syllable 'nit'
58
59 symbol = ""
60
61 def __init__(self, options=["--submitcheck", "--nitcount", ]):
62 assert isinstance(options, list)
63 if not "--nitcount" in options:
64 options.append("--nitcount")
65 self.options = ' '.join(options)
66
67 def check_file_txt(self, path):
68 """
69 Run an idnits check, and return a passed/failed indication, a message,
70 and error and warning messages.
71
72 Error and warning list items are tuples:
73 (line_number, line_text, message)
74 """
75 items = []
76 errors = 0
77 warnings = 0
78 errstart = [' ** ', ' ~~ ']
79 warnstart = [' == ', ' -- ']
80
81
82 cmd = "%s %s %s" % (settings.IDSUBMIT_IDNITS_BINARY, self.options, path)
83 code, out, err = pipe(cmd)
84 out = out.decode('utf-8')
85 err = err.decode('utf-8')
86 if code != 0 or out == "":
87 message = "idnits error: %s:\n Error %s: %s" %( cmd, code, err)
88 log(message)
89 passed = False
90
91 else:
92 message = out
93 if re.search(r"\s+Summary:\s+0\s+|No nits found", out):
94 passed = True
95 else:
96 passed = False
97
98 item = ""
99 for line in message.splitlines():
100 if line[:5] in (errstart + warnstart):
101 item = line.rstrip()
102 elif line.strip() == "" and item:
103 tuple = (None, None, item)
104 items.append(tuple)
105 if item[:5] in errstart:
106 errors += 1
107 elif item[:5] in warnstart:
108 warnings += 1
109 else:
110 raise RuntimeError("Unexpected state in idnits checker: item: %s, line: %s" % (item, line))
111 item = ""
112 elif item and line.strip() != "":
113 item += " " + line.strip()
114 else:
115 pass
116 info = {'checker': self.name, 'items': [], 'code': {}}
117
118 return passed, message, errors, warnings, info
119
120 class DraftYangChecker(object):
121
122 name = "yang validation"
123 symbol = '<i class="bi bi-yin-yang"></i>'
124
125 def check_file_txt(self, path):
126 name = os.path.basename(path)
127 workdir = tempfile.mkdtemp()
128 model_name_re = r'^[A-Za-z_][A-Za-z0-9_.-]*(@\d\d\d\d-\d\d-\d\d)?\.yang$'
129 errors = 0
130 warnings = 0
131 message = ""
132 results = []
133 passed = True # Used by the submission tool. Yang checks always pass.
134 model_list = []
135 info = {'checker': self.name, 'items': [], 'code': {}}
136
137 extractor = xym.YangModuleExtractor(path, workdir, strict=True, strict_examples=False, debug_level=1)
138 if not os.path.exists(path):
139 return None, "%s: No such file or directory: '%s'"%(name.capitalize(), path), errors, warnings, info
140 with open(path) as file:
141 out = ""
142 err = ""
143 code = 0
144 try:
145 # This places the yang models as files in workdir
146 saved_stdout = sys.stdout
147 saved_stderr = sys.stderr
148 sys.stdout = io.StringIO()
149 sys.stderr = io.StringIO()
150 extractor.extract_yang_model(file.readlines())
151 model_list = extractor.get_extracted_models(False, True)
152 out = sys.stdout.getvalue()
153 err = sys.stderr.getvalue()
154 # signature change in xym:
155 except Exception as exc:
156 sys.stdout = saved_stdout
157 sys.stderr = saved_stderr
158 msg = "Exception when running xym on %s: %s" % (name, exc)
159 log(msg)
160 raise
161 return None, msg, 0, 0, info
162 finally:
163 sys.stdout = saved_stdout
164 sys.stderr = saved_stderr
165 if not model_list:
166 # Found no yang models, don't deliver any YangChecker result
167 return None, "", 0, 0, info
168
169 for m in model_list:
170 if not re.search(model_name_re, m):
171 code += 1
172 err += "Error: Bad extracted model name: '%s'\n" % m
173 if len(set(model_list)) != len(model_list):
174 code += 1
175 err += "Error: Multiple models with the same name:\n %s\n" % ("\n ".join(model_list))
176
177 model_list = list(set(model_list))
178
179 command = "xym"
180 cmd_version = VersionInfo.objects.get(command=command).version
181 message = "%s:\n%s\n\n" % (cmd_version, out.replace('\n\n','\n').strip() if code == 0 else err)
182
183 results.append({
184 "name": name,
185 "passed": passed,
186 "message": message,
187 "warnings": 0,
188 "errors": code,
189 "items": [],
190 })
191
192 for model in model_list:
193 path = os.path.join(workdir, model)
194 message = ""
195 passed = True
196 errors = 0
197 warnings = 0
198 items = []
199 modpath = ':'.join([
200 workdir,
201 settings.SUBMIT_YANG_RFC_MODEL_DIR,
202 settings.SUBMIT_YANG_DRAFT_MODEL_DIR,
203 settings.SUBMIT_YANG_IANA_MODEL_DIR,
204 settings.SUBMIT_YANG_CATALOG_MODEL_DIR,
205 ])
206 if os.path.exists(path):
207 with io.open(path) as file:
208 text = file.readlines()
209 # pyang
210 cmd_template = settings.SUBMIT_PYANG_COMMAND
211 command = [ w for w in cmd_template.split() if not '=' in w ][0]
212 cmd_version = VersionInfo.objects.get(command=command).version
213 cmd = cmd_template.format(libs=modpath, model=path)
214 venv_path = os.environ.get('VIRTUAL_ENV') or os.path.join(os.getcwd(), 'env')
215 venv_bin = os.path.join(venv_path, 'bin')
216 if not venv_bin in os.environ.get('PATH', '').split(':'):
217 os.environ['PATH'] = os.environ.get('PATH', '') + ":" + venv_bin
218 code, out, err = pipe(cmd)
219 out = out.decode('utf-8')
220 err = err.decode('utf-8')
221 if code > 0 or len(err.strip()) > 0 :
222 error_lines = err.splitlines()
223 assertion('len(error_lines) > 0')
224 for line in error_lines:
225 if line.strip():
226 try:
227 fn, lnum, msg = line.split(':', 2)
228 lnum = int(lnum)
229 if fn == model and (lnum-1) in range(len(text)):
230 line = text[lnum-1].rstrip()
231 else:
232 line = None
233 items.append((lnum, line, msg))
234 if 'error: ' in msg:
235 errors += 1
236 if 'warning: ' in msg:
237 warnings += 1
238 except ValueError:
239 pass
240 #passed = passed and code == 0 # For the submission tool. Yang checks always pass
241 message += "%s: %s:\n%s\n" % (cmd_version, cmd_template, out+"No validation errors\n" if (code == 0 and len(err) == 0) else out+err)
242
243 # yanglint
244 set_coverage_checking(False) # we can't count the following as it may or may not be run, depending on setup
245 if settings.SUBMIT_YANGLINT_COMMAND and os.path.exists(settings.YANGLINT_BINARY):
246 cmd_template = settings.SUBMIT_YANGLINT_COMMAND
247 command = [ w for w in cmd_template.split() if not '=' in w ][0]
248 cmd_version = VersionInfo.objects.get(command=command).version
249 cmd = cmd_template.format(model=path, rfclib=settings.SUBMIT_YANG_RFC_MODEL_DIR, tmplib=workdir,
250 draftlib=settings.SUBMIT_YANG_DRAFT_MODEL_DIR, ianalib=settings.SUBMIT_YANG_IANA_MODEL_DIR,
251 cataloglib=settings.SUBMIT_YANG_CATALOG_MODEL_DIR, )
252 code, out, err = pipe(cmd)
253 out = out.decode('utf-8')
254 err = err.decode('utf-8')
255 if code > 0 or len(err.strip()) > 0:
256 err_lines = err.splitlines()
257 for line in err_lines:
258 if line.strip():
259 try:
260 if 'err : ' in line:
261 errors += 1
262 if 'warn: ' in line:
263 warnings += 1
264 except ValueError:
265 pass
266 #passed = passed and code == 0 # For the submission tool. Yang checks always pass
267 message += "%s: %s:\n%s\n" % (cmd_version, cmd_template, out+"No validation errors\n" if (code == 0 and len(err) == 0) else out+err)
268 set_coverage_checking(True)
269 else:
270 errors += 1
271 message += "No such file: %s\nPossible mismatch between extracted xym file name and returned module name?\n" % (path)
272
273 dest = os.path.join(settings.SUBMIT_YANG_DRAFT_MODEL_DIR, model)
274 shutil.move(path, dest)
275
276 # summary result
277 results.append({
278 "name": model,
279 "passed": passed,
280 "message": message,
281 "warnings": warnings,
282 "errors": errors,
283 "items": items,
284 })
285
286
287 shutil.rmtree(workdir)
288
289 passed = all( res["passed"] for res in results )
290 message = "\n".join([ "\n".join([res['name']+':', res["message"]]) for res in results ])
291 errors = sum(res["errors"] for res in results )
292 warnings = sum(res["warnings"] for res in results )
293 items = [ e for res in results for e in res["items"] ]
294 info['items'] = items
295 info['code']['yang'] = model_list
296 return passed, message, errors, warnings, info
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/ietf/submit/checkers.py b/ietf/submit/checkers.py
--- a/ietf/submit/checkers.py
+++ b/ietf/submit/checkers.py
@@ -147,7 +147,7 @@
saved_stderr = sys.stderr
sys.stdout = io.StringIO()
sys.stderr = io.StringIO()
- extractor.extract_yang_model(file.readlines())
+ extractor.extract_yang_model_text(file.read())
model_list = extractor.get_extracted_models(False, True)
out = sys.stdout.getvalue()
err = sys.stderr.getvalue()
|
{"golden_diff": "diff --git a/ietf/submit/checkers.py b/ietf/submit/checkers.py\n--- a/ietf/submit/checkers.py\n+++ b/ietf/submit/checkers.py\n@@ -147,7 +147,7 @@\n saved_stderr = sys.stderr\n sys.stdout = io.StringIO()\n sys.stderr = io.StringIO()\n- extractor.extract_yang_model(file.readlines())\n+ extractor.extract_yang_model_text(file.read())\n model_list = extractor.get_extracted_models(False, True)\n out = sys.stdout.getvalue()\n err = sys.stderr.getvalue()\n", "issue": "Many tests failing with YangModelExtractor error\n### Describe the issue\n\nStarting with [test run 790](https://github.com/ietf-tools/datatracker/actions/runs/3172961553), a few dozen tests are failing with the error\r\n```\r\nAttributeError: 'YangModuleExtractor' object has no attribute 'extract_yang_model'\r\n```\r\nThe last passing test run was a couple hours before the release of [xym 0.6.0](https://pypi.org/project/xym/0.6.0/#history), so it seems likely this is related.\n\n### Code of Conduct\n\n- [X] I agree to follow the [IETF's Code of Conduct](https://github.com/ietf-tools/.github/blob/main/CODE_OF_CONDUCT.md)\n", "before_files": [{"content": "# Copyright The IETF Trust 2016-2020, All Rights Reserved\n# -*- coding: utf-8 -*-\n\n\nimport io\nimport os\nimport re\nimport shutil\nimport sys\nimport tempfile\n\nfrom xym import xym\nfrom django.conf import settings\n\nimport debug # pyflakes:ignore\n\nfrom ietf.utils.log import log, assertion\nfrom ietf.utils.models import VersionInfo\nfrom ietf.utils.pipe import pipe\nfrom ietf.utils.test_runner import set_coverage_checking\n\nclass DraftSubmissionChecker(object):\n name = \"\"\n\n def check_file_txt(self, text):\n \"Run checks on a text file\"\n raise NotImplementedError\n\n def check_file_xml(self, xml):\n \"Run checks on an xml file\"\n raise NotImplementedError\n\n def check_fragment_txt(self, text):\n \"Run checks on a fragment from a text file\"\n raise NotImplementedError\n\n def check_fragment_xml(self, xml):\n \"Run checks on a fragment from an xml file\"\n raise NotImplementedError\n\n\nclass DraftIdnitsChecker(object):\n \"\"\"\n Draft checker class for idnits. 
Idnits can only handle whole text files,\n so only check_file_txt() is defined; check_file_xml and check_fragment_*\n methods are undefined.\n\n Furthermore, idnits doesn't provide an error code or line-by-line errors,\n so a bit of massage is needed in order to return the expected failure flag.\n \"\"\"\n name = \"idnits check\"\n\n # start using this when we provide more in the way of warnings during\n # submission checking:\n # symbol = '<span class=\"bi bi-check-square\"></span>'\n # symbol = u'<span class=\"large\">\\ua17d</span>' # Yi syllable 'nit'\n # symbol = u'<span class=\"large\">\\ub2e1</span>' # Hangul syllable 'nit'\n\n symbol = \"\"\n\n def __init__(self, options=[\"--submitcheck\", \"--nitcount\", ]):\n assert isinstance(options, list)\n if not \"--nitcount\" in options:\n options.append(\"--nitcount\")\n self.options = ' '.join(options)\n\n def check_file_txt(self, path):\n \"\"\"\n Run an idnits check, and return a passed/failed indication, a message,\n and error and warning messages.\n\n Error and warning list items are tuples:\n (line_number, line_text, message)\n \"\"\"\n items = []\n errors = 0\n warnings = 0\n errstart = [' ** ', ' ~~ ']\n warnstart = [' == ', ' -- ']\n \n\n cmd = \"%s %s %s\" % (settings.IDSUBMIT_IDNITS_BINARY, self.options, path)\n code, out, err = pipe(cmd)\n out = out.decode('utf-8')\n err = err.decode('utf-8')\n if code != 0 or out == \"\":\n message = \"idnits error: %s:\\n Error %s: %s\" %( cmd, code, err)\n log(message)\n passed = False\n \n else:\n message = out\n if re.search(r\"\\s+Summary:\\s+0\\s+|No nits found\", out):\n passed = True\n else:\n passed = False\n\n item = \"\"\n for line in message.splitlines():\n if line[:5] in (errstart + warnstart):\n item = line.rstrip()\n elif line.strip() == \"\" and item:\n tuple = (None, None, item)\n items.append(tuple)\n if item[:5] in errstart:\n errors += 1\n elif item[:5] in warnstart:\n warnings += 1\n else:\n raise RuntimeError(\"Unexpected state in idnits checker: item: %s, line: %s\" % (item, line))\n item = \"\"\n elif item and line.strip() != \"\":\n item += \" \" + line.strip()\n else:\n pass\n info = {'checker': self.name, 'items': [], 'code': {}}\n\n return passed, message, errors, warnings, info\n\nclass DraftYangChecker(object):\n\n name = \"yang validation\"\n symbol = '<i class=\"bi bi-yin-yang\"></i>'\n\n def check_file_txt(self, path):\n name = os.path.basename(path)\n workdir = tempfile.mkdtemp()\n model_name_re = r'^[A-Za-z_][A-Za-z0-9_.-]*(@\\d\\d\\d\\d-\\d\\d-\\d\\d)?\\.yang$'\n errors = 0\n warnings = 0\n message = \"\"\n results = []\n passed = True # Used by the submission tool. 
Yang checks always pass.\n model_list = []\n info = {'checker': self.name, 'items': [], 'code': {}}\n\n extractor = xym.YangModuleExtractor(path, workdir, strict=True, strict_examples=False, debug_level=1)\n if not os.path.exists(path):\n return None, \"%s: No such file or directory: '%s'\"%(name.capitalize(), path), errors, warnings, info\n with open(path) as file:\n out = \"\"\n err = \"\"\n code = 0\n try:\n # This places the yang models as files in workdir\n saved_stdout = sys.stdout\n saved_stderr = sys.stderr\n sys.stdout = io.StringIO()\n sys.stderr = io.StringIO()\n extractor.extract_yang_model(file.readlines())\n model_list = extractor.get_extracted_models(False, True)\n out = sys.stdout.getvalue()\n err = sys.stderr.getvalue()\n # signature change in xym:\n except Exception as exc:\n sys.stdout = saved_stdout\n sys.stderr = saved_stderr\n msg = \"Exception when running xym on %s: %s\" % (name, exc)\n log(msg)\n raise\n return None, msg, 0, 0, info\n finally:\n sys.stdout = saved_stdout\n sys.stderr = saved_stderr\n if not model_list:\n # Found no yang models, don't deliver any YangChecker result\n return None, \"\", 0, 0, info\n\n for m in model_list:\n if not re.search(model_name_re, m):\n code += 1\n err += \"Error: Bad extracted model name: '%s'\\n\" % m\n if len(set(model_list)) != len(model_list):\n code += 1\n err += \"Error: Multiple models with the same name:\\n %s\\n\" % (\"\\n \".join(model_list))\n\n model_list = list(set(model_list))\n\n command = \"xym\"\n cmd_version = VersionInfo.objects.get(command=command).version\n message = \"%s:\\n%s\\n\\n\" % (cmd_version, out.replace('\\n\\n','\\n').strip() if code == 0 else err)\n\n results.append({\n \"name\": name,\n \"passed\": passed,\n \"message\": message,\n \"warnings\": 0,\n \"errors\": code,\n \"items\": [],\n })\n\n for model in model_list:\n path = os.path.join(workdir, model)\n message = \"\"\n passed = True\n errors = 0\n warnings = 0\n items = []\n modpath = ':'.join([\n workdir,\n settings.SUBMIT_YANG_RFC_MODEL_DIR,\n settings.SUBMIT_YANG_DRAFT_MODEL_DIR,\n settings.SUBMIT_YANG_IANA_MODEL_DIR,\n settings.SUBMIT_YANG_CATALOG_MODEL_DIR,\n ])\n if os.path.exists(path):\n with io.open(path) as file:\n text = file.readlines()\n # pyang\n cmd_template = settings.SUBMIT_PYANG_COMMAND\n command = [ w for w in cmd_template.split() if not '=' in w ][0]\n cmd_version = VersionInfo.objects.get(command=command).version\n cmd = cmd_template.format(libs=modpath, model=path)\n venv_path = os.environ.get('VIRTUAL_ENV') or os.path.join(os.getcwd(), 'env')\n venv_bin = os.path.join(venv_path, 'bin')\n if not venv_bin in os.environ.get('PATH', '').split(':'):\n os.environ['PATH'] = os.environ.get('PATH', '') + \":\" + venv_bin\n code, out, err = pipe(cmd)\n out = out.decode('utf-8')\n err = err.decode('utf-8')\n if code > 0 or len(err.strip()) > 0 :\n error_lines = err.splitlines()\n assertion('len(error_lines) > 0')\n for line in error_lines:\n if line.strip():\n try:\n fn, lnum, msg = line.split(':', 2)\n lnum = int(lnum)\n if fn == model and (lnum-1) in range(len(text)):\n line = text[lnum-1].rstrip()\n else:\n line = None\n items.append((lnum, line, msg))\n if 'error: ' in msg:\n errors += 1\n if 'warning: ' in msg:\n warnings += 1\n except ValueError:\n pass\n #passed = passed and code == 0 # For the submission tool. 
Yang checks always pass\n message += \"%s: %s:\\n%s\\n\" % (cmd_version, cmd_template, out+\"No validation errors\\n\" if (code == 0 and len(err) == 0) else out+err)\n\n # yanglint\n set_coverage_checking(False) # we can't count the following as it may or may not be run, depending on setup\n if settings.SUBMIT_YANGLINT_COMMAND and os.path.exists(settings.YANGLINT_BINARY):\n cmd_template = settings.SUBMIT_YANGLINT_COMMAND\n command = [ w for w in cmd_template.split() if not '=' in w ][0]\n cmd_version = VersionInfo.objects.get(command=command).version\n cmd = cmd_template.format(model=path, rfclib=settings.SUBMIT_YANG_RFC_MODEL_DIR, tmplib=workdir,\n draftlib=settings.SUBMIT_YANG_DRAFT_MODEL_DIR, ianalib=settings.SUBMIT_YANG_IANA_MODEL_DIR,\n cataloglib=settings.SUBMIT_YANG_CATALOG_MODEL_DIR, )\n code, out, err = pipe(cmd)\n out = out.decode('utf-8')\n err = err.decode('utf-8')\n if code > 0 or len(err.strip()) > 0:\n err_lines = err.splitlines()\n for line in err_lines:\n if line.strip():\n try:\n if 'err : ' in line:\n errors += 1\n if 'warn: ' in line:\n warnings += 1\n except ValueError:\n pass\n #passed = passed and code == 0 # For the submission tool. Yang checks always pass\n message += \"%s: %s:\\n%s\\n\" % (cmd_version, cmd_template, out+\"No validation errors\\n\" if (code == 0 and len(err) == 0) else out+err)\n set_coverage_checking(True)\n else:\n errors += 1\n message += \"No such file: %s\\nPossible mismatch between extracted xym file name and returned module name?\\n\" % (path)\n\n dest = os.path.join(settings.SUBMIT_YANG_DRAFT_MODEL_DIR, model)\n shutil.move(path, dest)\n\n # summary result\n results.append({\n \"name\": model,\n \"passed\": passed,\n \"message\": message,\n \"warnings\": warnings,\n \"errors\": errors,\n \"items\": items,\n })\n\n\n shutil.rmtree(workdir)\n\n passed = all( res[\"passed\"] for res in results )\n message = \"\\n\".join([ \"\\n\".join([res['name']+':', res[\"message\"]]) for res in results ])\n errors = sum(res[\"errors\"] for res in results )\n warnings = sum(res[\"warnings\"] for res in results )\n items = [ e for res in results for e in res[\"items\"] ]\n info['items'] = items\n info['code']['yang'] = model_list\n return passed, message, errors, warnings, info", "path": "ietf/submit/checkers.py"}], "after_files": [{"content": "# Copyright The IETF Trust 2016-2020, All Rights Reserved\n# -*- coding: utf-8 -*-\n\n\nimport io\nimport os\nimport re\nimport shutil\nimport sys\nimport tempfile\n\nfrom xym import xym\nfrom django.conf import settings\n\nimport debug # pyflakes:ignore\n\nfrom ietf.utils.log import log, assertion\nfrom ietf.utils.models import VersionInfo\nfrom ietf.utils.pipe import pipe\nfrom ietf.utils.test_runner import set_coverage_checking\n\nclass DraftSubmissionChecker(object):\n name = \"\"\n\n def check_file_txt(self, text):\n \"Run checks on a text file\"\n raise NotImplementedError\n\n def check_file_xml(self, xml):\n \"Run checks on an xml file\"\n raise NotImplementedError\n\n def check_fragment_txt(self, text):\n \"Run checks on a fragment from a text file\"\n raise NotImplementedError\n\n def check_fragment_xml(self, xml):\n \"Run checks on a fragment from an xml file\"\n raise NotImplementedError\n\n\nclass DraftIdnitsChecker(object):\n \"\"\"\n Draft checker class for idnits. 
Idnits can only handle whole text files,\n so only check_file_txt() is defined; check_file_xml and check_fragment_*\n methods are undefined.\n\n Furthermore, idnits doesn't provide an error code or line-by-line errors,\n so a bit of massage is needed in order to return the expected failure flag.\n \"\"\"\n name = \"idnits check\"\n\n # start using this when we provide more in the way of warnings during\n # submission checking:\n # symbol = '<span class=\"bi bi-check-square\"></span>'\n # symbol = u'<span class=\"large\">\\ua17d</span>' # Yi syllable 'nit'\n # symbol = u'<span class=\"large\">\\ub2e1</span>' # Hangul syllable 'nit'\n\n symbol = \"\"\n\n def __init__(self, options=[\"--submitcheck\", \"--nitcount\", ]):\n assert isinstance(options, list)\n if not \"--nitcount\" in options:\n options.append(\"--nitcount\")\n self.options = ' '.join(options)\n\n def check_file_txt(self, path):\n \"\"\"\n Run an idnits check, and return a passed/failed indication, a message,\n and error and warning messages.\n\n Error and warning list items are tuples:\n (line_number, line_text, message)\n \"\"\"\n items = []\n errors = 0\n warnings = 0\n errstart = [' ** ', ' ~~ ']\n warnstart = [' == ', ' -- ']\n \n\n cmd = \"%s %s %s\" % (settings.IDSUBMIT_IDNITS_BINARY, self.options, path)\n code, out, err = pipe(cmd)\n out = out.decode('utf-8')\n err = err.decode('utf-8')\n if code != 0 or out == \"\":\n message = \"idnits error: %s:\\n Error %s: %s\" %( cmd, code, err)\n log(message)\n passed = False\n \n else:\n message = out\n if re.search(r\"\\s+Summary:\\s+0\\s+|No nits found\", out):\n passed = True\n else:\n passed = False\n\n item = \"\"\n for line in message.splitlines():\n if line[:5] in (errstart + warnstart):\n item = line.rstrip()\n elif line.strip() == \"\" and item:\n tuple = (None, None, item)\n items.append(tuple)\n if item[:5] in errstart:\n errors += 1\n elif item[:5] in warnstart:\n warnings += 1\n else:\n raise RuntimeError(\"Unexpected state in idnits checker: item: %s, line: %s\" % (item, line))\n item = \"\"\n elif item and line.strip() != \"\":\n item += \" \" + line.strip()\n else:\n pass\n info = {'checker': self.name, 'items': [], 'code': {}}\n\n return passed, message, errors, warnings, info\n\nclass DraftYangChecker(object):\n\n name = \"yang validation\"\n symbol = '<i class=\"bi bi-yin-yang\"></i>'\n\n def check_file_txt(self, path):\n name = os.path.basename(path)\n workdir = tempfile.mkdtemp()\n model_name_re = r'^[A-Za-z_][A-Za-z0-9_.-]*(@\\d\\d\\d\\d-\\d\\d-\\d\\d)?\\.yang$'\n errors = 0\n warnings = 0\n message = \"\"\n results = []\n passed = True # Used by the submission tool. 
Yang checks always pass.\n model_list = []\n info = {'checker': self.name, 'items': [], 'code': {}}\n\n extractor = xym.YangModuleExtractor(path, workdir, strict=True, strict_examples=False, debug_level=1)\n if not os.path.exists(path):\n return None, \"%s: No such file or directory: '%s'\"%(name.capitalize(), path), errors, warnings, info\n with open(path) as file:\n out = \"\"\n err = \"\"\n code = 0\n try:\n # This places the yang models as files in workdir\n saved_stdout = sys.stdout\n saved_stderr = sys.stderr\n sys.stdout = io.StringIO()\n sys.stderr = io.StringIO()\n extractor.extract_yang_model_text(file.read())\n model_list = extractor.get_extracted_models(False, True)\n out = sys.stdout.getvalue()\n err = sys.stderr.getvalue()\n # signature change in xym:\n except Exception as exc:\n sys.stdout = saved_stdout\n sys.stderr = saved_stderr\n msg = \"Exception when running xym on %s: %s\" % (name, exc)\n log(msg)\n raise\n return None, msg, 0, 0, info\n finally:\n sys.stdout = saved_stdout\n sys.stderr = saved_stderr\n if not model_list:\n # Found no yang models, don't deliver any YangChecker result\n return None, \"\", 0, 0, info\n\n for m in model_list:\n if not re.search(model_name_re, m):\n code += 1\n err += \"Error: Bad extracted model name: '%s'\\n\" % m\n if len(set(model_list)) != len(model_list):\n code += 1\n err += \"Error: Multiple models with the same name:\\n %s\\n\" % (\"\\n \".join(model_list))\n\n model_list = list(set(model_list))\n\n command = \"xym\"\n cmd_version = VersionInfo.objects.get(command=command).version\n message = \"%s:\\n%s\\n\\n\" % (cmd_version, out.replace('\\n\\n','\\n').strip() if code == 0 else err)\n\n results.append({\n \"name\": name,\n \"passed\": passed,\n \"message\": message,\n \"warnings\": 0,\n \"errors\": code,\n \"items\": [],\n })\n\n for model in model_list:\n path = os.path.join(workdir, model)\n message = \"\"\n passed = True\n errors = 0\n warnings = 0\n items = []\n modpath = ':'.join([\n workdir,\n settings.SUBMIT_YANG_RFC_MODEL_DIR,\n settings.SUBMIT_YANG_DRAFT_MODEL_DIR,\n settings.SUBMIT_YANG_IANA_MODEL_DIR,\n settings.SUBMIT_YANG_CATALOG_MODEL_DIR,\n ])\n if os.path.exists(path):\n with io.open(path) as file:\n text = file.readlines()\n # pyang\n cmd_template = settings.SUBMIT_PYANG_COMMAND\n command = [ w for w in cmd_template.split() if not '=' in w ][0]\n cmd_version = VersionInfo.objects.get(command=command).version\n cmd = cmd_template.format(libs=modpath, model=path)\n venv_path = os.environ.get('VIRTUAL_ENV') or os.path.join(os.getcwd(), 'env')\n venv_bin = os.path.join(venv_path, 'bin')\n if not venv_bin in os.environ.get('PATH', '').split(':'):\n os.environ['PATH'] = os.environ.get('PATH', '') + \":\" + venv_bin\n code, out, err = pipe(cmd)\n out = out.decode('utf-8')\n err = err.decode('utf-8')\n if code > 0 or len(err.strip()) > 0 :\n error_lines = err.splitlines()\n assertion('len(error_lines) > 0')\n for line in error_lines:\n if line.strip():\n try:\n fn, lnum, msg = line.split(':', 2)\n lnum = int(lnum)\n if fn == model and (lnum-1) in range(len(text)):\n line = text[lnum-1].rstrip()\n else:\n line = None\n items.append((lnum, line, msg))\n if 'error: ' in msg:\n errors += 1\n if 'warning: ' in msg:\n warnings += 1\n except ValueError:\n pass\n #passed = passed and code == 0 # For the submission tool. 
Yang checks always pass\n message += \"%s: %s:\\n%s\\n\" % (cmd_version, cmd_template, out+\"No validation errors\\n\" if (code == 0 and len(err) == 0) else out+err)\n\n # yanglint\n set_coverage_checking(False) # we can't count the following as it may or may not be run, depending on setup\n if settings.SUBMIT_YANGLINT_COMMAND and os.path.exists(settings.YANGLINT_BINARY):\n cmd_template = settings.SUBMIT_YANGLINT_COMMAND\n command = [ w for w in cmd_template.split() if not '=' in w ][0]\n cmd_version = VersionInfo.objects.get(command=command).version\n cmd = cmd_template.format(model=path, rfclib=settings.SUBMIT_YANG_RFC_MODEL_DIR, tmplib=workdir,\n draftlib=settings.SUBMIT_YANG_DRAFT_MODEL_DIR, ianalib=settings.SUBMIT_YANG_IANA_MODEL_DIR,\n cataloglib=settings.SUBMIT_YANG_CATALOG_MODEL_DIR, )\n code, out, err = pipe(cmd)\n out = out.decode('utf-8')\n err = err.decode('utf-8')\n if code > 0 or len(err.strip()) > 0:\n err_lines = err.splitlines()\n for line in err_lines:\n if line.strip():\n try:\n if 'err : ' in line:\n errors += 1\n if 'warn: ' in line:\n warnings += 1\n except ValueError:\n pass\n #passed = passed and code == 0 # For the submission tool. Yang checks always pass\n message += \"%s: %s:\\n%s\\n\" % (cmd_version, cmd_template, out+\"No validation errors\\n\" if (code == 0 and len(err) == 0) else out+err)\n set_coverage_checking(True)\n else:\n errors += 1\n message += \"No such file: %s\\nPossible mismatch between extracted xym file name and returned module name?\\n\" % (path)\n\n dest = os.path.join(settings.SUBMIT_YANG_DRAFT_MODEL_DIR, model)\n shutil.move(path, dest)\n\n # summary result\n results.append({\n \"name\": model,\n \"passed\": passed,\n \"message\": message,\n \"warnings\": warnings,\n \"errors\": errors,\n \"items\": items,\n })\n\n\n shutil.rmtree(workdir)\n\n passed = all( res[\"passed\"] for res in results )\n message = \"\\n\".join([ \"\\n\".join([res['name']+':', res[\"message\"]]) for res in results ])\n errors = sum(res[\"errors\"] for res in results )\n warnings = sum(res[\"warnings\"] for res in results )\n items = [ e for res in results for e in res[\"items\"] ]\n info['items'] = items\n info['code']['yang'] = model_list\n return passed, message, errors, warnings, info", "path": "ietf/submit/checkers.py"}]}
| 3,882 | 125 |
gh_patches_debug_38033
|
rasdani/github-patches
|
git_diff
|
google__clusterfuzz-1524
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support authentication with Cloud IAP
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/appengine/libs/auth.py`
Content:
```
1 # Copyright 2019 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """Authentication helpers."""
15
16 import collections
17
18 from firebase_admin import auth
19 from google.cloud import ndb
20 import webapp2
21
22 from base import utils
23 from config import local_config
24 from datastore import data_types
25 from metrics import logs
26 from system import environment
27
28 User = collections.namedtuple('User', ['email'])
29
30
31 class AuthError(Exception):
32 """Auth error."""
33
34
35 def auth_domain():
36 """Get the auth domain."""
37 domain = local_config.ProjectConfig().get('firebase.auth_domain')
38 if domain:
39 return domain
40
41 return utils.get_application_id() + '.firebaseapp.com'
42
43
44 def is_current_user_admin():
45 """Returns whether or not the current logged in user is an admin."""
46 if environment.is_local_development():
47 return True
48
49 user = get_current_user()
50 if not user:
51 return False
52
53 key = ndb.Key(data_types.Admin, user.email)
54 return bool(key.get())
55
56
57 def get_current_user():
58 """Get the current logged in user, or None."""
59 if environment.is_local_development():
60 return User('user@localhost')
61
62 loas_user = environment.get_value('LOAS_PEER_USERNAME')
63 if loas_user:
64 return User(loas_user + '@google.com')
65
66 current_request = get_current_request()
67 oauth_email = getattr(current_request, '_oauth_email', None)
68 if oauth_email:
69 return User(oauth_email)
70
71 cached_email = getattr(current_request, '_cached_email', None)
72 if cached_email:
73 return User(cached_email)
74
75 session_cookie = get_session_cookie()
76 if not session_cookie:
77 return None
78
79 try:
80 decoded_claims = decode_claims(get_session_cookie())
81 except AuthError:
82 logs.log_warn('Invalid session cookie.')
83 return None
84
85 if not decoded_claims.get('email_verified'):
86 return None
87
88 email = decoded_claims.get('email')
89 if not email:
90 return None
91
92 # We cache the email for this request if we've validated the user to make
93 # subsequent get_current_user() calls fast.
94 setattr(current_request, '_cached_email', email)
95 return User(email)
96
97
98 def create_session_cookie(id_token, expires_in):
99 """Create a new session cookie."""
100 try:
101 return auth.create_session_cookie(id_token, expires_in=expires_in)
102 except auth.AuthError:
103 raise AuthError('Failed to create session cookie.')
104
105
106 def get_current_request():
107 """Get the current request."""
108 return webapp2.get_request()
109
110
111 def get_session_cookie():
112 """Get the current session cookie."""
113 return get_current_request().cookies.get('session')
114
115
116 def revoke_session_cookie(session_cookie):
117 """Revoke a session cookie."""
118 decoded_claims = decode_claims(session_cookie)
119 auth.revoke_refresh_tokens(decoded_claims['sub'])
120
121
122 def decode_claims(session_cookie):
123 """Decode the claims for the current session cookie."""
124 try:
125 return auth.verify_session_cookie(session_cookie, check_revoked=True)
126 except (ValueError, auth.AuthError):
127 raise AuthError('Invalid session cookie.')
128
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/appengine/libs/auth.py b/src/appengine/libs/auth.py
--- a/src/appengine/libs/auth.py
+++ b/src/appengine/libs/auth.py
@@ -13,12 +13,17 @@
# limitations under the License.
"""Authentication helpers."""
+from builtins import str
import collections
+import jwt
from firebase_admin import auth
from google.cloud import ndb
+from googleapiclient.discovery import build
+import requests
import webapp2
+from base import memoize
from base import utils
from config import local_config
from datastore import data_types
@@ -54,6 +59,68 @@
return bool(key.get())
+@memoize.wrap(memoize.FifoInMemory(1))
+def _project_number_from_id(project_id):
+ """Get the project number from project ID."""
+ resource_manager = build('cloudresourcemanager', 'v1')
+ result = resource_manager.projects().get(projectId=project_id).execute()
+ if 'projectNumber' not in result:
+ raise AuthError('Failed to get project number.')
+
+ return result['projectNumber']
+
+
+@memoize.wrap(memoize.FifoInMemory(1))
+def _get_iap_key(key_id):
+ """Retrieves a public key from the list published by Identity-Aware Proxy,
+ re-fetching the key file if necessary.
+ """
+ resp = requests.get('https://www.gstatic.com/iap/verify/public_key')
+ if resp.status_code != 200:
+ raise AuthError('Unable to fetch IAP keys: {} / {} / {}'.format(
+ resp.status_code, resp.headers, resp.text))
+
+ result = resp.json()
+ key = result.get(key_id)
+ if not key:
+ raise AuthError('Key {!r} not found'.format(key_id))
+
+ return key
+
+
+def _validate_iap_jwt(iap_jwt):
+ """Validate JWT assertion."""
+ project_id = utils.get_application_id()
+ expected_audience = '/projects/{}/apps/{}'.format(
+ _project_number_from_id(project_id), project_id)
+
+ try:
+ key_id = jwt.get_unverified_header(iap_jwt).get('kid')
+ if not key_id:
+ raise AuthError('No key ID.')
+
+ key = _get_iap_key(key_id)
+ decoded_jwt = jwt.decode(
+ iap_jwt,
+ key,
+ algorithms=['ES256'],
+ issuer='https://cloud.google.com/iap',
+ audience=expected_audience)
+ return decoded_jwt['email']
+ except (jwt.exceptions.InvalidTokenError,
+ requests.exceptions.RequestException) as e:
+ raise AuthError('JWT assertion decode error: ' + str(e))
+
+
+def get_iap_email(current_request):
+ """Get Cloud IAP email."""
+ jwt_assertion = current_request.headers.get('X-Goog-IAP-JWT-Assertion')
+ if not jwt_assertion:
+ return None
+
+ return _validate_iap_jwt(jwt_assertion)
+
+
def get_current_user():
"""Get the current logged in user, or None."""
if environment.is_local_development():
@@ -64,6 +131,10 @@
return User(loas_user + '@google.com')
current_request = get_current_request()
+ iap_email = get_iap_email(current_request)
+ if iap_email:
+ return User(iap_email)
+
oauth_email = getattr(current_request, '_oauth_email', None)
if oauth_email:
return User(oauth_email)
|
{"golden_diff": "diff --git a/src/appengine/libs/auth.py b/src/appengine/libs/auth.py\n--- a/src/appengine/libs/auth.py\n+++ b/src/appengine/libs/auth.py\n@@ -13,12 +13,17 @@\n # limitations under the License.\n \"\"\"Authentication helpers.\"\"\"\n \n+from builtins import str\n import collections\n+import jwt\n \n from firebase_admin import auth\n from google.cloud import ndb\n+from googleapiclient.discovery import build\n+import requests\n import webapp2\n \n+from base import memoize\n from base import utils\n from config import local_config\n from datastore import data_types\n@@ -54,6 +59,68 @@\n return bool(key.get())\n \n \[email protected](memoize.FifoInMemory(1))\n+def _project_number_from_id(project_id):\n+ \"\"\"Get the project number from project ID.\"\"\"\n+ resource_manager = build('cloudresourcemanager', 'v1')\n+ result = resource_manager.projects().get(projectId=project_id).execute()\n+ if 'projectNumber' not in result:\n+ raise AuthError('Failed to get project number.')\n+\n+ return result['projectNumber']\n+\n+\[email protected](memoize.FifoInMemory(1))\n+def _get_iap_key(key_id):\n+ \"\"\"Retrieves a public key from the list published by Identity-Aware Proxy,\n+ re-fetching the key file if necessary.\n+ \"\"\"\n+ resp = requests.get('https://www.gstatic.com/iap/verify/public_key')\n+ if resp.status_code != 200:\n+ raise AuthError('Unable to fetch IAP keys: {} / {} / {}'.format(\n+ resp.status_code, resp.headers, resp.text))\n+\n+ result = resp.json()\n+ key = result.get(key_id)\n+ if not key:\n+ raise AuthError('Key {!r} not found'.format(key_id))\n+\n+ return key\n+\n+\n+def _validate_iap_jwt(iap_jwt):\n+ \"\"\"Validate JWT assertion.\"\"\"\n+ project_id = utils.get_application_id()\n+ expected_audience = '/projects/{}/apps/{}'.format(\n+ _project_number_from_id(project_id), project_id)\n+\n+ try:\n+ key_id = jwt.get_unverified_header(iap_jwt).get('kid')\n+ if not key_id:\n+ raise AuthError('No key ID.')\n+\n+ key = _get_iap_key(key_id)\n+ decoded_jwt = jwt.decode(\n+ iap_jwt,\n+ key,\n+ algorithms=['ES256'],\n+ issuer='https://cloud.google.com/iap',\n+ audience=expected_audience)\n+ return decoded_jwt['email']\n+ except (jwt.exceptions.InvalidTokenError,\n+ requests.exceptions.RequestException) as e:\n+ raise AuthError('JWT assertion decode error: ' + str(e))\n+\n+\n+def get_iap_email(current_request):\n+ \"\"\"Get Cloud IAP email.\"\"\"\n+ jwt_assertion = current_request.headers.get('X-Goog-IAP-JWT-Assertion')\n+ if not jwt_assertion:\n+ return None\n+\n+ return _validate_iap_jwt(jwt_assertion)\n+\n+\n def get_current_user():\n \"\"\"Get the current logged in user, or None.\"\"\"\n if environment.is_local_development():\n@@ -64,6 +131,10 @@\n return User(loas_user + '@google.com')\n \n current_request = get_current_request()\n+ iap_email = get_iap_email(current_request)\n+ if iap_email:\n+ return User(iap_email)\n+\n oauth_email = getattr(current_request, '_oauth_email', None)\n if oauth_email:\n return User(oauth_email)\n", "issue": "Support authentication with Cloud IAP\n\n", "before_files": [{"content": "# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 
implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Authentication helpers.\"\"\"\n\nimport collections\n\nfrom firebase_admin import auth\nfrom google.cloud import ndb\nimport webapp2\n\nfrom base import utils\nfrom config import local_config\nfrom datastore import data_types\nfrom metrics import logs\nfrom system import environment\n\nUser = collections.namedtuple('User', ['email'])\n\n\nclass AuthError(Exception):\n \"\"\"Auth error.\"\"\"\n\n\ndef auth_domain():\n \"\"\"Get the auth domain.\"\"\"\n domain = local_config.ProjectConfig().get('firebase.auth_domain')\n if domain:\n return domain\n\n return utils.get_application_id() + '.firebaseapp.com'\n\n\ndef is_current_user_admin():\n \"\"\"Returns whether or not the current logged in user is an admin.\"\"\"\n if environment.is_local_development():\n return True\n\n user = get_current_user()\n if not user:\n return False\n\n key = ndb.Key(data_types.Admin, user.email)\n return bool(key.get())\n\n\ndef get_current_user():\n \"\"\"Get the current logged in user, or None.\"\"\"\n if environment.is_local_development():\n return User('user@localhost')\n\n loas_user = environment.get_value('LOAS_PEER_USERNAME')\n if loas_user:\n return User(loas_user + '@google.com')\n\n current_request = get_current_request()\n oauth_email = getattr(current_request, '_oauth_email', None)\n if oauth_email:\n return User(oauth_email)\n\n cached_email = getattr(current_request, '_cached_email', None)\n if cached_email:\n return User(cached_email)\n\n session_cookie = get_session_cookie()\n if not session_cookie:\n return None\n\n try:\n decoded_claims = decode_claims(get_session_cookie())\n except AuthError:\n logs.log_warn('Invalid session cookie.')\n return None\n\n if not decoded_claims.get('email_verified'):\n return None\n\n email = decoded_claims.get('email')\n if not email:\n return None\n\n # We cache the email for this request if we've validated the user to make\n # subsequent get_current_user() calls fast.\n setattr(current_request, '_cached_email', email)\n return User(email)\n\n\ndef create_session_cookie(id_token, expires_in):\n \"\"\"Create a new session cookie.\"\"\"\n try:\n return auth.create_session_cookie(id_token, expires_in=expires_in)\n except auth.AuthError:\n raise AuthError('Failed to create session cookie.')\n\n\ndef get_current_request():\n \"\"\"Get the current request.\"\"\"\n return webapp2.get_request()\n\n\ndef get_session_cookie():\n \"\"\"Get the current session cookie.\"\"\"\n return get_current_request().cookies.get('session')\n\n\ndef revoke_session_cookie(session_cookie):\n \"\"\"Revoke a session cookie.\"\"\"\n decoded_claims = decode_claims(session_cookie)\n auth.revoke_refresh_tokens(decoded_claims['sub'])\n\n\ndef decode_claims(session_cookie):\n \"\"\"Decode the claims for the current session cookie.\"\"\"\n try:\n return auth.verify_session_cookie(session_cookie, check_revoked=True)\n except (ValueError, auth.AuthError):\n raise AuthError('Invalid session cookie.')\n", "path": "src/appengine/libs/auth.py"}], "after_files": [{"content": "# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT 
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Authentication helpers.\"\"\"\n\nfrom builtins import str\nimport collections\nimport jwt\n\nfrom firebase_admin import auth\nfrom google.cloud import ndb\nfrom googleapiclient.discovery import build\nimport requests\nimport webapp2\n\nfrom base import memoize\nfrom base import utils\nfrom config import local_config\nfrom datastore import data_types\nfrom metrics import logs\nfrom system import environment\n\nUser = collections.namedtuple('User', ['email'])\n\n\nclass AuthError(Exception):\n \"\"\"Auth error.\"\"\"\n\n\ndef auth_domain():\n \"\"\"Get the auth domain.\"\"\"\n domain = local_config.ProjectConfig().get('firebase.auth_domain')\n if domain:\n return domain\n\n return utils.get_application_id() + '.firebaseapp.com'\n\n\ndef is_current_user_admin():\n \"\"\"Returns whether or not the current logged in user is an admin.\"\"\"\n if environment.is_local_development():\n return True\n\n user = get_current_user()\n if not user:\n return False\n\n key = ndb.Key(data_types.Admin, user.email)\n return bool(key.get())\n\n\[email protected](memoize.FifoInMemory(1))\ndef _project_number_from_id(project_id):\n \"\"\"Get the project number from project ID.\"\"\"\n resource_manager = build('cloudresourcemanager', 'v1')\n result = resource_manager.projects().get(projectId=project_id).execute()\n if 'projectNumber' not in result:\n raise AuthError('Failed to get project number.')\n\n return result['projectNumber']\n\n\[email protected](memoize.FifoInMemory(1))\ndef _get_iap_key(key_id):\n \"\"\"Retrieves a public key from the list published by Identity-Aware Proxy,\n re-fetching the key file if necessary.\n \"\"\"\n resp = requests.get('https://www.gstatic.com/iap/verify/public_key')\n if resp.status_code != 200:\n raise AuthError('Unable to fetch IAP keys: {} / {} / {}'.format(\n resp.status_code, resp.headers, resp.text))\n\n result = resp.json()\n key = result.get(key_id)\n if not key:\n raise AuthError('Key {!r} not found'.format(key_id))\n\n return key\n\n\ndef _validate_iap_jwt(iap_jwt):\n \"\"\"Validate JWT assertion.\"\"\"\n project_id = utils.get_application_id()\n expected_audience = '/projects/{}/apps/{}'.format(\n _project_number_from_id(project_id), project_id)\n\n try:\n key_id = jwt.get_unverified_header(iap_jwt).get('kid')\n if not key_id:\n raise AuthError('No key ID.')\n\n key = _get_iap_key(key_id)\n decoded_jwt = jwt.decode(\n iap_jwt,\n key,\n algorithms=['ES256'],\n issuer='https://cloud.google.com/iap',\n audience=expected_audience)\n return decoded_jwt['email']\n except (jwt.exceptions.InvalidTokenError,\n requests.exceptions.RequestException) as e:\n raise AuthError('JWT assertion decode error: ' + str(e))\n\n\ndef get_iap_email(current_request):\n \"\"\"Get Cloud IAP email.\"\"\"\n jwt_assertion = current_request.headers.get('X-Goog-IAP-JWT-Assertion')\n if not jwt_assertion:\n return None\n\n return _validate_iap_jwt(jwt_assertion)\n\n\ndef get_current_user():\n \"\"\"Get the current logged in user, or None.\"\"\"\n if environment.is_local_development():\n return User('user@localhost')\n\n loas_user = environment.get_value('LOAS_PEER_USERNAME')\n if loas_user:\n return User(loas_user + '@google.com')\n\n current_request = get_current_request()\n iap_email = get_iap_email(current_request)\n if iap_email:\n return User(iap_email)\n\n oauth_email = getattr(current_request, '_oauth_email', None)\n 
if oauth_email:\n return User(oauth_email)\n\n cached_email = getattr(current_request, '_cached_email', None)\n if cached_email:\n return User(cached_email)\n\n session_cookie = get_session_cookie()\n if not session_cookie:\n return None\n\n try:\n decoded_claims = decode_claims(get_session_cookie())\n except AuthError:\n logs.log_warn('Invalid session cookie.')\n return None\n\n if not decoded_claims.get('email_verified'):\n return None\n\n email = decoded_claims.get('email')\n if not email:\n return None\n\n # We cache the email for this request if we've validated the user to make\n # subsequent get_current_user() calls fast.\n setattr(current_request, '_cached_email', email)\n return User(email)\n\n\ndef create_session_cookie(id_token, expires_in):\n \"\"\"Create a new session cookie.\"\"\"\n try:\n return auth.create_session_cookie(id_token, expires_in=expires_in)\n except auth.AuthError:\n raise AuthError('Failed to create session cookie.')\n\n\ndef get_current_request():\n \"\"\"Get the current request.\"\"\"\n return webapp2.get_request()\n\n\ndef get_session_cookie():\n \"\"\"Get the current session cookie.\"\"\"\n return get_current_request().cookies.get('session')\n\n\ndef revoke_session_cookie(session_cookie):\n \"\"\"Revoke a session cookie.\"\"\"\n decoded_claims = decode_claims(session_cookie)\n auth.revoke_refresh_tokens(decoded_claims['sub'])\n\n\ndef decode_claims(session_cookie):\n \"\"\"Decode the claims for the current session cookie.\"\"\"\n try:\n return auth.verify_session_cookie(session_cookie, check_revoked=True)\n except (ValueError, auth.AuthError):\n raise AuthError('Invalid session cookie.')\n", "path": "src/appengine/libs/auth.py"}]}
| 1,335 | 810 |
gh_patches_debug_15672
|
rasdani/github-patches
|
git_diff
|
spack__spack-20794
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unable to concretize with Clingo when libyogrt is part of dependency tree
<!-- Explain, in a clear and concise way, the command you ran and the result you were trying to achieve.
Example: "I ran `spack find` to list all the installed packages and ..." -->
### Steps to reproduce the issue
Any of the above result in the same error:
```console
$ spack spec -I libyogrt
$ spack spec -I scr # SCR depends on libyogrt
$ spack spec -I axom # axom depends on SCR
$ spack spec -I macsio # macsio depends on SCR
...
```
### Error Message
<!-- If Spack reported an error, provide the error message. If it did not report an error but the output appears incorrect, provide the incorrect output. If there was no error message and no output but the result is incorrect, describe how it does not match what you expect. -->
```console
Concretized
--------------------------------
==> Error: invalid values for variant "scheduler" in package "libyogrt": ['lsf']
```
I imagine this is because https://github.com/spack/spack/blob/c22141f444861abeaee297a3d92696e9ae94a509/var/spack/repos/builtin/packages/libyogrt/package.py#L39
references an invalid value of the 'scheduler` variant:
https://github.com/spack/spack/blob/c22141f444861abeaee297a3d92696e9ae94a509/var/spack/repos/builtin/packages/libyogrt/package.py#L36
Adding `lsf` to the possible values for `scheduler` fixes the issue, but I am not sure that this fix is correct.
### Information on your system
* **Spack:** 0.16.0
* **Python:** 3.7.2
* **Platform:** linux-rhel7-power9le
* **Concretizer:** clingo
### Additional information
<!-- These boxes can be checked by replacing [ ] with [x] or by clicking them after submitting the issue. -->
- [x] I have run `spack debug report` and reported the version of Spack/Python/Platform
- [x] I have searched the issues of this repo and believe this is not a duplicate
- [x] I have run the failing commands in debug mode and reported the output
<!-- We encourage you to try, as much as possible, to reduce your problem to the minimal example that still reproduces the issue. That would help us a lot in fixing it quickly and effectively!
If you want to ask a question about the tool (how to use it, what it can currently do, etc.), try the `#general` channel on our Slack first. We have a welcoming community and chances are you'll get your reply faster and without opening an issue.
Other than that, thanks for taking the time to contribute to Spack! -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `var/spack/repos/builtin/packages/libyogrt/package.py`
Content:
```
1 # Copyright 2013-2021 Lawrence Livermore National Security, LLC and other
2 # Spack Project Developers. See the top-level COPYRIGHT file for details.
3 #
4 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
5
6 from spack import *
7
8
9 class Libyogrt(AutotoolsPackage):
10 """Your One Get Remaining Time Library."""
11
12 homepage = "https://github.com/LLNL/libyogrt"
13 url = "https://github.com/LLNL/libyogrt/releases/download/1.21/libyogrt-1.21.tar.gz"
14
15 version('1.24', sha256='36695030e72b24b1f22bfcfe42bfd1d3c87f9c0eea5e94ce0120782581ea522f')
16 version('1.23', sha256='c95e7a6be29c0d1ac1b673b0ba1d4e5781981722f93d0da99ae62ff3b5f35b5f')
17 version('1.22', sha256='38e7d1ea3fa030f0169197aa96cde9f01caa595a590764ef1cb2ae07379cb711')
18 version('1.21', sha256='5f8f0942d35ee4e418273e478e632210b3fa648dcb6a2e6a92c6ba4213cdc362')
19 version('1.20-7', sha256='735e9d6fa572e239ccc73e11c84b4583338b24df0fa91c48e8bc038d882003f7')
20 version('1.20-6', sha256='ba5a2e202f995cf7ae3bf87b451943733e760ede02ca172f712cbf2eea693222')
21 version('1.20-5', sha256='1e41bc656abffb121145264bc898421c3f355d3be35f1711b7b5e3ffe7effdd9')
22 version('1.20-4', sha256='0858a729068b272d4047d79f6a5187cdbd427bdfec64db4e143524b4789a06c5')
23 version('1.20-3', sha256='61a8f28f452aef0e09d700dbaaffd91ae3855f7ac221c7ebe478a028df635e31')
24 version('1.20-2', sha256='bf22a82ab3bfede780be3fb6c132cc354234f8d57d3cccd58fe594f074ed7f95')
25
26 # libyogrt supports the following schedulers:
27 # lcrm, lsf, moab, slurm, AIX+slurm
28
29 # however, only slurm exists in spack
30 # libyogrt's build system is smart enough to detect the system scheduler
31 # the slurm option here connects to a spack-installed slurm
32 # if/when other schedulers have spack packages, they can be added
33
34 variant('scheduler', default='system',
35 description="Select scheduler integration",
36 values=['system', 'slurm'], multi=False)
37 depends_on('slurm', when='scheduler=slurm')
38
39 conflicts('scheduler=lsf', when='@:1.22')
40
41 variant('static', default='False',
42 description="build static library")
43
44 def url_for_version(self, version):
45 if version < Version(1.21):
46 return "https://github.com/LLNL/libyogrt/archive/%s.tar.gz" % version
47 else:
48 return "https://github.com/LLNL/libyogrt/releases/download/{0}/libyogrt-{0}.tar.gz".format(version)
49
50 def configure_args(self):
51 args = []
52
53 sched = self.spec.variants['scheduler'].value
54 if sched != "system":
55 args.append('--with-%s=%s' % (sched, self.spec[sched].prefix))
56
57 if '+static' in self.spec:
58 args.append('--enable-static=yes')
59
60 return args
61
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/var/spack/repos/builtin/packages/libyogrt/package.py b/var/spack/repos/builtin/packages/libyogrt/package.py
--- a/var/spack/repos/builtin/packages/libyogrt/package.py
+++ b/var/spack/repos/builtin/packages/libyogrt/package.py
@@ -34,13 +34,11 @@
variant('scheduler', default='system',
description="Select scheduler integration",
values=['system', 'slurm'], multi=False)
- depends_on('slurm', when='scheduler=slurm')
-
- conflicts('scheduler=lsf', when='@:1.22')
-
variant('static', default='False',
description="build static library")
+ depends_on('slurm', when='scheduler=slurm')
+
def url_for_version(self, version):
if version < Version(1.21):
return "https://github.com/LLNL/libyogrt/archive/%s.tar.gz" % version
|
{"golden_diff": "diff --git a/var/spack/repos/builtin/packages/libyogrt/package.py b/var/spack/repos/builtin/packages/libyogrt/package.py\n--- a/var/spack/repos/builtin/packages/libyogrt/package.py\n+++ b/var/spack/repos/builtin/packages/libyogrt/package.py\n@@ -34,13 +34,11 @@\n variant('scheduler', default='system',\n description=\"Select scheduler integration\",\n values=['system', 'slurm'], multi=False)\n- depends_on('slurm', when='scheduler=slurm')\n-\n- conflicts('scheduler=lsf', when='@:1.22')\n-\n variant('static', default='False',\n description=\"build static library\")\n \n+ depends_on('slurm', when='scheduler=slurm')\n+\n def url_for_version(self, version):\n if version < Version(1.21):\n return \"https://github.com/LLNL/libyogrt/archive/%s.tar.gz\" % version\n", "issue": "Unable to concretize with Clingo when libyogrt is part of dependency tree\n<!-- Explain, in a clear and concise way, the command you ran and the result you were trying to achieve.\r\nExample: \"I ran `spack find` to list all the installed packages and ...\" -->\r\n\r\n### Steps to reproduce the issue\r\nAny of the above result in the same error:\r\n```console\r\n$ spack spec -I libyogrt\r\n$ spack spec -I scr # SCR depends on libyogrt\r\n$ spack spec -I axom # axom depends on SCR\r\n$ spack spec -I macsio # macsio depends on SCR\r\n...\r\n```\r\n\r\n### Error Message\r\n\r\n<!-- If Spack reported an error, provide the error message. If it did not report an error but the output appears incorrect, provide the incorrect output. If there was no error message and no output but the result is incorrect, describe how it does not match what you expect. -->\r\n```console\r\nConcretized\r\n--------------------------------\r\n==> Error: invalid values for variant \"scheduler\" in package \"libyogrt\": ['lsf']\r\n```\r\n\r\nI imagine this is because https://github.com/spack/spack/blob/c22141f444861abeaee297a3d92696e9ae94a509/var/spack/repos/builtin/packages/libyogrt/package.py#L39\r\n\r\nreferences an invalid value of the 'scheduler` variant:\r\nhttps://github.com/spack/spack/blob/c22141f444861abeaee297a3d92696e9ae94a509/var/spack/repos/builtin/packages/libyogrt/package.py#L36\r\n\r\nAdding `lsf` to the possible values for `scheduler` fixes the issue, but I am not sure that this fix is correct.\r\n\r\n### Information on your system\r\n\r\n* **Spack:** 0.16.0\r\n* **Python:** 3.7.2\r\n* **Platform:** linux-rhel7-power9le\r\n* **Concretizer:** clingo\r\n\r\n\r\n### Additional information\r\n\r\n<!-- These boxes can be checked by replacing [ ] with [x] or by clicking them after submitting the issue. -->\r\n- [x] I have run `spack debug report` and reported the version of Spack/Python/Platform\r\n- [x] I have searched the issues of this repo and believe this is not a duplicate\r\n- [x] I have run the failing commands in debug mode and reported the output\r\n\r\n<!-- We encourage you to try, as much as possible, to reduce your problem to the minimal example that still reproduces the issue. That would help us a lot in fixing it quickly and effectively!\r\n\r\nIf you want to ask a question about the tool (how to use it, what it can currently do, etc.), try the `#general` channel on our Slack first. We have a welcoming community and chances are you'll get your reply faster and without opening an issue.\r\n\r\nOther than that, thanks for taking the time to contribute to Spack! -->\r\n\n", "before_files": [{"content": "# Copyright 2013-2021 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. 
See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\nfrom spack import *\n\n\nclass Libyogrt(AutotoolsPackage):\n \"\"\"Your One Get Remaining Time Library.\"\"\"\n\n homepage = \"https://github.com/LLNL/libyogrt\"\n url = \"https://github.com/LLNL/libyogrt/releases/download/1.21/libyogrt-1.21.tar.gz\"\n\n version('1.24', sha256='36695030e72b24b1f22bfcfe42bfd1d3c87f9c0eea5e94ce0120782581ea522f')\n version('1.23', sha256='c95e7a6be29c0d1ac1b673b0ba1d4e5781981722f93d0da99ae62ff3b5f35b5f')\n version('1.22', sha256='38e7d1ea3fa030f0169197aa96cde9f01caa595a590764ef1cb2ae07379cb711')\n version('1.21', sha256='5f8f0942d35ee4e418273e478e632210b3fa648dcb6a2e6a92c6ba4213cdc362')\n version('1.20-7', sha256='735e9d6fa572e239ccc73e11c84b4583338b24df0fa91c48e8bc038d882003f7')\n version('1.20-6', sha256='ba5a2e202f995cf7ae3bf87b451943733e760ede02ca172f712cbf2eea693222')\n version('1.20-5', sha256='1e41bc656abffb121145264bc898421c3f355d3be35f1711b7b5e3ffe7effdd9')\n version('1.20-4', sha256='0858a729068b272d4047d79f6a5187cdbd427bdfec64db4e143524b4789a06c5')\n version('1.20-3', sha256='61a8f28f452aef0e09d700dbaaffd91ae3855f7ac221c7ebe478a028df635e31')\n version('1.20-2', sha256='bf22a82ab3bfede780be3fb6c132cc354234f8d57d3cccd58fe594f074ed7f95')\n\n # libyogrt supports the following schedulers:\n # lcrm, lsf, moab, slurm, AIX+slurm\n\n # however, only slurm exists in spack\n # libyogrt's build system is smart enough to detect the system scheduler\n # the slurm option here connects to a spack-installed slurm\n # if/when other schedulers have spack packages, they can be added\n\n variant('scheduler', default='system',\n description=\"Select scheduler integration\",\n values=['system', 'slurm'], multi=False)\n depends_on('slurm', when='scheduler=slurm')\n\n conflicts('scheduler=lsf', when='@:1.22')\n\n variant('static', default='False',\n description=\"build static library\")\n\n def url_for_version(self, version):\n if version < Version(1.21):\n return \"https://github.com/LLNL/libyogrt/archive/%s.tar.gz\" % version\n else:\n return \"https://github.com/LLNL/libyogrt/releases/download/{0}/libyogrt-{0}.tar.gz\".format(version)\n\n def configure_args(self):\n args = []\n\n sched = self.spec.variants['scheduler'].value\n if sched != \"system\":\n args.append('--with-%s=%s' % (sched, self.spec[sched].prefix))\n\n if '+static' in self.spec:\n args.append('--enable-static=yes')\n\n return args\n", "path": "var/spack/repos/builtin/packages/libyogrt/package.py"}], "after_files": [{"content": "# Copyright 2013-2021 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. 
See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\nfrom spack import *\n\n\nclass Libyogrt(AutotoolsPackage):\n \"\"\"Your One Get Remaining Time Library.\"\"\"\n\n homepage = \"https://github.com/LLNL/libyogrt\"\n url = \"https://github.com/LLNL/libyogrt/releases/download/1.21/libyogrt-1.21.tar.gz\"\n\n version('1.24', sha256='36695030e72b24b1f22bfcfe42bfd1d3c87f9c0eea5e94ce0120782581ea522f')\n version('1.23', sha256='c95e7a6be29c0d1ac1b673b0ba1d4e5781981722f93d0da99ae62ff3b5f35b5f')\n version('1.22', sha256='38e7d1ea3fa030f0169197aa96cde9f01caa595a590764ef1cb2ae07379cb711')\n version('1.21', sha256='5f8f0942d35ee4e418273e478e632210b3fa648dcb6a2e6a92c6ba4213cdc362')\n version('1.20-7', sha256='735e9d6fa572e239ccc73e11c84b4583338b24df0fa91c48e8bc038d882003f7')\n version('1.20-6', sha256='ba5a2e202f995cf7ae3bf87b451943733e760ede02ca172f712cbf2eea693222')\n version('1.20-5', sha256='1e41bc656abffb121145264bc898421c3f355d3be35f1711b7b5e3ffe7effdd9')\n version('1.20-4', sha256='0858a729068b272d4047d79f6a5187cdbd427bdfec64db4e143524b4789a06c5')\n version('1.20-3', sha256='61a8f28f452aef0e09d700dbaaffd91ae3855f7ac221c7ebe478a028df635e31')\n version('1.20-2', sha256='bf22a82ab3bfede780be3fb6c132cc354234f8d57d3cccd58fe594f074ed7f95')\n\n # libyogrt supports the following schedulers:\n # lcrm, lsf, moab, slurm, AIX+slurm\n\n # however, only slurm exists in spack\n # libyogrt's build system is smart enough to detect the system scheduler\n # the slurm option here connects to a spack-installed slurm\n # if/when other schedulers have spack packages, they can be added\n\n variant('scheduler', default='system',\n description=\"Select scheduler integration\",\n values=['system', 'slurm'], multi=False)\n variant('static', default='False',\n description=\"build static library\")\n\n depends_on('slurm', when='scheduler=slurm')\n\n def url_for_version(self, version):\n if version < Version(1.21):\n return \"https://github.com/LLNL/libyogrt/archive/%s.tar.gz\" % version\n else:\n return \"https://github.com/LLNL/libyogrt/releases/download/{0}/libyogrt-{0}.tar.gz\".format(version)\n\n def configure_args(self):\n args = []\n\n sched = self.spec.variants['scheduler'].value\n if sched != \"system\":\n args.append('--with-%s=%s' % (sched, self.spec[sched].prefix))\n\n if '+static' in self.spec:\n args.append('--enable-static=yes')\n\n return args\n", "path": "var/spack/repos/builtin/packages/libyogrt/package.py"}]}
| 2,222 | 217 |
gh_patches_debug_31328
|
rasdani/github-patches
|
git_diff
|
ResonantGeoData__ResonantGeoData-466
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
STAC serializer output Band info is incorrect
Figure out where this is coming from:
```
'assets': {
'image-15030': {
'href': 'http://storage.googleapis.com/gcp-public-data-sentinel-2/tiles/17/S/MS/S2A_MSIL1C_20210302T161201_N0209_R140_T17SMS_20210302T200521.SAFE/GRANULE/L1C_T17SMS_A029738_20210302T161751/IMG_DATA/T17SMS_20210302T161201_B01.jp2',
'title': 'GRANULE/L1C_T17SMS_A029738_20210302T161751/IMG_DATA/T17SMS_20210302T161201_B01.jp2',
'eo:bands': [{'name': 'B1'}],
'roles': ['data'],
},
'image-15041': {
'href': 'http://storage.googleapis.com/gcp-public-data-sentinel-2/tiles/17/S/MS/S2A_MSIL1C_20210302T161201_N0209_R140_T17SMS_20210302T200521.SAFE/GRANULE/L1C_T17SMS_A029738_20210302T161751/IMG_DATA/T17SMS_20210302T161201_B02.jp2',
'title': 'GRANULE/L1C_T17SMS_A029738_20210302T161751/IMG_DATA/T17SMS_20210302T161201_B02.jp2',
'eo:bands': [{'name': 'B1'}],
'roles': ['data'],
},
```
Note that both have `[{'name': 'B1'}]` which is incorrect.
First we need to make sure the `BandMeta` fields are correct then see where this breaks in the serializer
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `django-rgd-imagery/rgd_imagery/serializers/stac.py`
Content:
```
1 import json
2
3 import dateutil.parser
4 from django.contrib.gis.geos import Polygon
5 from django.db import transaction
6 from pyproj import CRS
7 import pystac
8 from rest_framework import serializers
9 from rgd.models import ChecksumFile, FileSourceType
10 from rgd.utility import get_or_create_no_commit
11
12 from .. import models
13
14
15 class STACRasterSerializer(serializers.BaseSerializer):
16 def to_internal_value(self, data):
17 # item = pystac.Item.from_dict(data)
18 # errors = item.validate()
19 # if errors:
20 # raise serializers.ValidationError(errors)
21 return data
22
23 def to_representation(self, instance: models.RasterMeta) -> dict:
24 item = pystac.Item(
25 id=instance.pk,
26 geometry=json.loads(instance.footprint.json),
27 bbox=instance.extent,
28 datetime=(instance.acquisition_date or instance.modified or instance.created),
29 properties=dict(
30 datetime=str(instance.acquisition_date),
31 platform=instance.instrumentation,
32 ),
33 )
34 # 'proj' extension
35 item.ext.enable('projection')
36 item.ext.projection.apply(
37 epsg=CRS.from_proj4(instance.crs).to_epsg(),
38 transform=instance.transform,
39 )
40 # 'eo' extension
41 item.ext.enable('eo')
42 item.ext.eo.apply(cloud_cover=instance.cloud_cover, bands=[])
43 # Add assets
44 for image in instance.parent_raster.image_set.images.all():
45 if image.file.type != FileSourceType.URL:
46 # TODO: we need fix this
47 raise ValueError('Files must point to valid URL resources, not internal storage.')
48 asset = pystac.Asset(
49 href=image.file.get_url(),
50 title=image.file.name,
51 roles=[
52 'data',
53 ],
54 )
55 item.ext.eo.set_bands(
56 bands=[
57 pystac.extensions.eo.Band.create(
58 name=f'B{bandmeta.band_number}',
59 description=bandmeta.description,
60 )
61 for bandmeta in image.bandmeta_set.all()
62 ],
63 asset=asset,
64 )
65 item.add_asset(f'image-{image.pk}', asset)
66
67 for ancillary_file in instance.parent_raster.ancillary_files.all():
68 asset = pystac.Asset(
69 href=ancillary_file.get_url(),
70 title=ancillary_file.name,
71 roles=[
72 'metadata',
73 ],
74 )
75 item.add_asset(f'ancillary-{ancillary_file.pk}', asset)
76
77 return item.to_dict()
78
79 @transaction.atomic
80 def create(self, data):
81 item = pystac.Item.from_dict(data)
82 image_ids, ancillary = [], []
83 single_asset = False
84 if len(item.assets) == 1:
85 single_asset = True
86 for name in item.assets:
87 asset = item.assets[name]
88 checksum_file, _ = ChecksumFile.objects.get_or_create(
89 type=FileSourceType.URL,
90 url=asset.href,
91 )
92 if single_asset or (asset.roles and 'data' in asset.roles):
93 image, _ = models.Image.objects.get_or_create(file=checksum_file)
94 image_ids.append(image.pk)
95 else:
96 ancillary.append(checksum_file)
97
98 image_set, image_set_created = models.get_or_create_image_set(
99 image_ids, defaults=dict(name=item.id)
100 )
101
102 raster, raster_created = get_or_create_no_commit(
103 models.Raster, image_set=image_set, defaults=dict(name=item.id)
104 )
105 raster.skip_signal = True
106 raster.save()
107 [raster.ancillary_files.add(af) for af in ancillary]
108 raster.save()
109
110 outline = Polygon(
111 (
112 [item.bbox[0], item.bbox[1]],
113 [item.bbox[0], item.bbox[3]],
114 [item.bbox[2], item.bbox[3]],
115 [item.bbox[2], item.bbox[1]],
116 [item.bbox[0], item.bbox[1]],
117 )
118 )
119
120 raster_meta = dict(
121 footprint=json.dumps(item.geometry),
122 crs=f'+init=epsg:{item.ext.projection.epsg}',
123 cloud_cover=item.ext.eo.cloud_cover,
124 transform=item.ext.projection.transform,
125 extent=item.bbox,
126 origin=(item.bbox[0], item.bbox[1]),
127 resolution=(0, 0), # TODO: fix
128 outline=outline,
129 acquisition_date=dateutil.parser.isoparser().isoparse(item.properties['datetime']),
130 instrumentation=item.properties['platform'],
131 )
132
133 if raster_created:
134 instance = models.RasterMeta(**raster_meta)
135 instance.parent_raster = raster
136 else:
137 models.RasterMeta.objects.filter(parent_raster=raster).update(**raster_meta)
138 instance = models.RasterMeta.objects.get(parent_raster=raster)
139 instance.save()
140
141 return instance
142
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/django-rgd-imagery/rgd_imagery/serializers/stac.py b/django-rgd-imagery/rgd_imagery/serializers/stac.py
--- a/django-rgd-imagery/rgd_imagery/serializers/stac.py
+++ b/django-rgd-imagery/rgd_imagery/serializers/stac.py
@@ -41,6 +41,7 @@
item.ext.enable('eo')
item.ext.eo.apply(cloud_cover=instance.cloud_cover, bands=[])
# Add assets
+ band_num = 0
for image in instance.parent_raster.image_set.images.all():
if image.file.type != FileSourceType.URL:
# TODO: we need fix this
@@ -52,17 +53,27 @@
'data',
],
)
- item.ext.eo.set_bands(
- bands=[
+ if image.imagemeta.number_of_bands == 1:
+ bands = [
+ pystac.extensions.eo.Band.create(
+ name=image.file.name,
+ description=image.bandmeta_set.first().description,
+ )
+ ]
+ else:
+ bands = [
pystac.extensions.eo.Band.create(
- name=f'B{bandmeta.band_number}',
+ name=f'B{bandmeta.band_number + band_num}',
description=bandmeta.description,
)
for bandmeta in image.bandmeta_set.all()
- ],
+ ]
+ item.ext.eo.set_bands(
+ bands=bands,
asset=asset,
)
item.add_asset(f'image-{image.pk}', asset)
+ band_num += image.imagemeta.number_of_bands
for ancillary_file in instance.parent_raster.ancillary_files.all():
asset = pystac.Asset(
|
{"golden_diff": "diff --git a/django-rgd-imagery/rgd_imagery/serializers/stac.py b/django-rgd-imagery/rgd_imagery/serializers/stac.py\n--- a/django-rgd-imagery/rgd_imagery/serializers/stac.py\n+++ b/django-rgd-imagery/rgd_imagery/serializers/stac.py\n@@ -41,6 +41,7 @@\n item.ext.enable('eo')\n item.ext.eo.apply(cloud_cover=instance.cloud_cover, bands=[])\n # Add assets\n+ band_num = 0\n for image in instance.parent_raster.image_set.images.all():\n if image.file.type != FileSourceType.URL:\n # TODO: we need fix this\n@@ -52,17 +53,27 @@\n 'data',\n ],\n )\n- item.ext.eo.set_bands(\n- bands=[\n+ if image.imagemeta.number_of_bands == 1:\n+ bands = [\n+ pystac.extensions.eo.Band.create(\n+ name=image.file.name,\n+ description=image.bandmeta_set.first().description,\n+ )\n+ ]\n+ else:\n+ bands = [\n pystac.extensions.eo.Band.create(\n- name=f'B{bandmeta.band_number}',\n+ name=f'B{bandmeta.band_number + band_num}',\n description=bandmeta.description,\n )\n for bandmeta in image.bandmeta_set.all()\n- ],\n+ ]\n+ item.ext.eo.set_bands(\n+ bands=bands,\n asset=asset,\n )\n item.add_asset(f'image-{image.pk}', asset)\n+ band_num += image.imagemeta.number_of_bands\n \n for ancillary_file in instance.parent_raster.ancillary_files.all():\n asset = pystac.Asset(\n", "issue": "STAC serializer output Band info is incorrect\nFigure out where this is coming from:\r\n\r\n```\r\n'assets': {\r\n 'image-15030': {\r\n 'href': 'http://storage.googleapis.com/gcp-public-data-sentinel-2/tiles/17/S/MS/S2A_MSIL1C_20210302T161201_N0209_R140_T17SMS_20210302T200521.SAFE/GRANULE/L1C_T17SMS_A029738_20210302T161751/IMG_DATA/T17SMS_20210302T161201_B01.jp2',\r\n 'title': 'GRANULE/L1C_T17SMS_A029738_20210302T161751/IMG_DATA/T17SMS_20210302T161201_B01.jp2',\r\n 'eo:bands': [{'name': 'B1'}],\r\n 'roles': ['data'],\r\n },\r\n 'image-15041': {\r\n 'href': 'http://storage.googleapis.com/gcp-public-data-sentinel-2/tiles/17/S/MS/S2A_MSIL1C_20210302T161201_N0209_R140_T17SMS_20210302T200521.SAFE/GRANULE/L1C_T17SMS_A029738_20210302T161751/IMG_DATA/T17SMS_20210302T161201_B02.jp2',\r\n 'title': 'GRANULE/L1C_T17SMS_A029738_20210302T161751/IMG_DATA/T17SMS_20210302T161201_B02.jp2',\r\n 'eo:bands': [{'name': 'B1'}],\r\n 'roles': ['data'],\r\n },\r\n```\r\n\r\nNote that both have `[{'name': 'B1'}]` which is incorrect.\r\n\r\nFirst we need to make sure the `BandMeta` fields are correct then see where this breaks in the serializer\n", "before_files": [{"content": "import json\n\nimport dateutil.parser\nfrom django.contrib.gis.geos import Polygon\nfrom django.db import transaction\nfrom pyproj import CRS\nimport pystac\nfrom rest_framework import serializers\nfrom rgd.models import ChecksumFile, FileSourceType\nfrom rgd.utility import get_or_create_no_commit\n\nfrom .. 
import models\n\n\nclass STACRasterSerializer(serializers.BaseSerializer):\n def to_internal_value(self, data):\n # item = pystac.Item.from_dict(data)\n # errors = item.validate()\n # if errors:\n # raise serializers.ValidationError(errors)\n return data\n\n def to_representation(self, instance: models.RasterMeta) -> dict:\n item = pystac.Item(\n id=instance.pk,\n geometry=json.loads(instance.footprint.json),\n bbox=instance.extent,\n datetime=(instance.acquisition_date or instance.modified or instance.created),\n properties=dict(\n datetime=str(instance.acquisition_date),\n platform=instance.instrumentation,\n ),\n )\n # 'proj' extension\n item.ext.enable('projection')\n item.ext.projection.apply(\n epsg=CRS.from_proj4(instance.crs).to_epsg(),\n transform=instance.transform,\n )\n # 'eo' extension\n item.ext.enable('eo')\n item.ext.eo.apply(cloud_cover=instance.cloud_cover, bands=[])\n # Add assets\n for image in instance.parent_raster.image_set.images.all():\n if image.file.type != FileSourceType.URL:\n # TODO: we need fix this\n raise ValueError('Files must point to valid URL resources, not internal storage.')\n asset = pystac.Asset(\n href=image.file.get_url(),\n title=image.file.name,\n roles=[\n 'data',\n ],\n )\n item.ext.eo.set_bands(\n bands=[\n pystac.extensions.eo.Band.create(\n name=f'B{bandmeta.band_number}',\n description=bandmeta.description,\n )\n for bandmeta in image.bandmeta_set.all()\n ],\n asset=asset,\n )\n item.add_asset(f'image-{image.pk}', asset)\n\n for ancillary_file in instance.parent_raster.ancillary_files.all():\n asset = pystac.Asset(\n href=ancillary_file.get_url(),\n title=ancillary_file.name,\n roles=[\n 'metadata',\n ],\n )\n item.add_asset(f'ancillary-{ancillary_file.pk}', asset)\n\n return item.to_dict()\n\n @transaction.atomic\n def create(self, data):\n item = pystac.Item.from_dict(data)\n image_ids, ancillary = [], []\n single_asset = False\n if len(item.assets) == 1:\n single_asset = True\n for name in item.assets:\n asset = item.assets[name]\n checksum_file, _ = ChecksumFile.objects.get_or_create(\n type=FileSourceType.URL,\n url=asset.href,\n )\n if single_asset or (asset.roles and 'data' in asset.roles):\n image, _ = models.Image.objects.get_or_create(file=checksum_file)\n image_ids.append(image.pk)\n else:\n ancillary.append(checksum_file)\n\n image_set, image_set_created = models.get_or_create_image_set(\n image_ids, defaults=dict(name=item.id)\n )\n\n raster, raster_created = get_or_create_no_commit(\n models.Raster, image_set=image_set, defaults=dict(name=item.id)\n )\n raster.skip_signal = True\n raster.save()\n [raster.ancillary_files.add(af) for af in ancillary]\n raster.save()\n\n outline = Polygon(\n (\n [item.bbox[0], item.bbox[1]],\n [item.bbox[0], item.bbox[3]],\n [item.bbox[2], item.bbox[3]],\n [item.bbox[2], item.bbox[1]],\n [item.bbox[0], item.bbox[1]],\n )\n )\n\n raster_meta = dict(\n footprint=json.dumps(item.geometry),\n crs=f'+init=epsg:{item.ext.projection.epsg}',\n cloud_cover=item.ext.eo.cloud_cover,\n transform=item.ext.projection.transform,\n extent=item.bbox,\n origin=(item.bbox[0], item.bbox[1]),\n resolution=(0, 0), # TODO: fix\n outline=outline,\n acquisition_date=dateutil.parser.isoparser().isoparse(item.properties['datetime']),\n instrumentation=item.properties['platform'],\n )\n\n if raster_created:\n instance = models.RasterMeta(**raster_meta)\n instance.parent_raster = raster\n else:\n models.RasterMeta.objects.filter(parent_raster=raster).update(**raster_meta)\n instance = 
models.RasterMeta.objects.get(parent_raster=raster)\n instance.save()\n\n return instance\n", "path": "django-rgd-imagery/rgd_imagery/serializers/stac.py"}], "after_files": [{"content": "import json\n\nimport dateutil.parser\nfrom django.contrib.gis.geos import Polygon\nfrom django.db import transaction\nfrom pyproj import CRS\nimport pystac\nfrom rest_framework import serializers\nfrom rgd.models import ChecksumFile, FileSourceType\nfrom rgd.utility import get_or_create_no_commit\n\nfrom .. import models\n\n\nclass STACRasterSerializer(serializers.BaseSerializer):\n def to_internal_value(self, data):\n # item = pystac.Item.from_dict(data)\n # errors = item.validate()\n # if errors:\n # raise serializers.ValidationError(errors)\n return data\n\n def to_representation(self, instance: models.RasterMeta) -> dict:\n item = pystac.Item(\n id=instance.pk,\n geometry=json.loads(instance.footprint.json),\n bbox=instance.extent,\n datetime=(instance.acquisition_date or instance.modified or instance.created),\n properties=dict(\n datetime=str(instance.acquisition_date),\n platform=instance.instrumentation,\n ),\n )\n # 'proj' extension\n item.ext.enable('projection')\n item.ext.projection.apply(\n epsg=CRS.from_proj4(instance.crs).to_epsg(),\n transform=instance.transform,\n )\n # 'eo' extension\n item.ext.enable('eo')\n item.ext.eo.apply(cloud_cover=instance.cloud_cover, bands=[])\n # Add assets\n band_num = 0\n for image in instance.parent_raster.image_set.images.all():\n if image.file.type != FileSourceType.URL:\n # TODO: we need fix this\n raise ValueError('Files must point to valid URL resources, not internal storage.')\n asset = pystac.Asset(\n href=image.file.get_url(),\n title=image.file.name,\n roles=[\n 'data',\n ],\n )\n if image.imagemeta.number_of_bands == 1:\n bands = [\n pystac.extensions.eo.Band.create(\n name=image.file.name,\n description=image.bandmeta_set.first().description,\n )\n ]\n else:\n bands = [\n pystac.extensions.eo.Band.create(\n name=f'B{bandmeta.band_number + band_num}',\n description=bandmeta.description,\n )\n for bandmeta in image.bandmeta_set.all()\n ]\n item.ext.eo.set_bands(\n bands=bands,\n asset=asset,\n )\n item.add_asset(f'image-{image.pk}', asset)\n band_num += image.imagemeta.number_of_bands\n\n for ancillary_file in instance.parent_raster.ancillary_files.all():\n asset = pystac.Asset(\n href=ancillary_file.get_url(),\n title=ancillary_file.name,\n roles=[\n 'metadata',\n ],\n )\n item.add_asset(f'ancillary-{ancillary_file.pk}', asset)\n\n return item.to_dict()\n\n @transaction.atomic\n def create(self, data):\n item = pystac.Item.from_dict(data)\n image_ids, ancillary = [], []\n single_asset = False\n if len(item.assets) == 1:\n single_asset = True\n for name in item.assets:\n asset = item.assets[name]\n checksum_file, _ = ChecksumFile.objects.get_or_create(\n type=FileSourceType.URL,\n url=asset.href,\n )\n if single_asset or (asset.roles and 'data' in asset.roles):\n image, _ = models.Image.objects.get_or_create(file=checksum_file)\n image_ids.append(image.pk)\n else:\n ancillary.append(checksum_file)\n\n image_set, image_set_created = models.get_or_create_image_set(\n image_ids, defaults=dict(name=item.id)\n )\n\n raster, raster_created = get_or_create_no_commit(\n models.Raster, image_set=image_set, defaults=dict(name=item.id)\n )\n raster.skip_signal = True\n raster.save()\n [raster.ancillary_files.add(af) for af in ancillary]\n raster.save()\n\n outline = Polygon(\n (\n [item.bbox[0], item.bbox[1]],\n [item.bbox[0], item.bbox[3]],\n [item.bbox[2], 
item.bbox[3]],\n [item.bbox[2], item.bbox[1]],\n [item.bbox[0], item.bbox[1]],\n )\n )\n\n raster_meta = dict(\n footprint=json.dumps(item.geometry),\n crs=f'+init=epsg:{item.ext.projection.epsg}',\n cloud_cover=item.ext.eo.cloud_cover,\n transform=item.ext.projection.transform,\n extent=item.bbox,\n origin=(item.bbox[0], item.bbox[1]),\n resolution=(0, 0), # TODO: fix\n outline=outline,\n acquisition_date=dateutil.parser.isoparser().isoparse(item.properties['datetime']),\n instrumentation=item.properties['platform'],\n )\n\n if raster_created:\n instance = models.RasterMeta(**raster_meta)\n instance.parent_raster = raster\n else:\n models.RasterMeta.objects.filter(parent_raster=raster).update(**raster_meta)\n instance = models.RasterMeta.objects.get(parent_raster=raster)\n instance.save()\n\n return instance\n", "path": "django-rgd-imagery/rgd_imagery/serializers/stac.py"}]}
| 2,179 | 405 |
gh_patches_debug_25186
|
rasdani/github-patches
|
git_diff
|
deis__deis-347
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Vagrant provider repeatedly errors on formation if node dir is deleted
Needs to be more robust in some error cases such as this one:
1) Provision a controller but somehow forget to add _deis-controler_ to the admins group, despite all documentation and fuschia-colored warnings at the command-line
2) Create a formation and scale it upward, e.g. `deis nodes:scale form1 runtime=2`
3) Try to scale down the formation, get an appropriate error about "couldn't remove chef node"
4) All subsequent formation commands--including destroy!--will fail when trying to access the local vagrant node dir, which apparently was removed in step 3).
This shouldn't happen often, but it can and I think ignoring this error at least in the case of `deis formations:destroy` would provide a way out of this dead end.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `provider/vagrant.py`
Content:
```
1 """
2 Deis cloud provider implementation for local vagrant setups.
3 """
4
5 from __future__ import unicode_literals
6
7 from api.ssh import exec_ssh, connect_ssh
8
9 import json
10 import logging
11 import string
12 import subprocess
13 import uuid
14
15 from api.models import Layer
16 from api.models import Node
17
18 logger = logging.getLogger(__name__)
19
20 # Collect details for connecting to the host machine
21 try:
22 HOST_NODES_DIR = open('/home/vagrant/.host_nodes_dir').read().strip()
23 PKEY = open('/home/vagrant/.ssh/id_rsa').read()
24 except IOError as err:
25 logger.warn(err)
26
27
28 def seed_flavors():
29 """Seed the database with default flavors for vagrant.
30
31 :rtype: list of dicts containing flavor data
32 """
33 flavors = []
34 for m in ['512', '1024', '2048']:
35 flavors.append({
36 'id': "vagrant-{}".format(m),
37 'provider': 'vagrant',
38 'params': json.dumps({
39 'memory': m
40 })
41 })
42 return flavors
43
44
45 def build_layer(layer):
46 """
47 Build a layer.
48
49 :param layer: a dict containing formation, id, params, and creds info
50 """
51
52 # This can also be done with `deis layers:update` now.
53 layer_ = Layer.objects.get(id=layer['id'], formation__id=layer['formation'])
54 layer_.ssh_username = 'vagrant'
55 layer_.save()
56
57
58 def destroy_layer(layer):
59 """
60 Destroy a layer.
61
62 :param layer: a dict containing formation, id, params, and creds info
63 """
64 pass
65
66
67 def build_node(node):
68 """
69 Build a node.
70
71 :param node: a dict containing formation, layer, params, and creds info.
72 :rtype: a tuple of (provider_id, fully_qualified_domain_name, metadata)
73 """
74
75 # Can't use the vagrant UUID because it's not booted yet
76 uid = str(uuid.uuid1())
77
78 # Create a new Vagrantfile from a template
79 node['params'].setdefault('memory', '512')
80 template = open('/opt/deis/controller/contrib/vagrant/nodes_vagrantfile_template.rb')
81 raw = string.Template(template.read())
82 result = raw.substitute({
83 'id': uid,
84 'ipaddress': '192.168.61.' + str(Node.objects.all().count() + 100),
85 'memory': node['params']['memory']
86 })
87
88 # Make a folder for the VM with its own Vagrantfile. Vagrant will then create a .vagrant folder
89 # there too when it first gets booted.
90 node_dir = HOST_NODES_DIR + '/' + uid
91 mkdir = 'mkdir -p ' + node_dir
92 cp_tpl = 'echo "' + result.replace('"', '\\"') + '" > ' + node_dir + '/Vagrantfile'
93 _host_ssh(commands=[mkdir, cp_tpl], creds=node['creds'])
94
95 # Boot the VM
96 _run_vagrant_command(uid, args=['up'], creds=node['creds'])
97
98 # Copy the layer's public SSH key to the VM so that the Controller can access it.
99 _run_vagrant_command(
100 uid,
101 args=[
102 'ssh',
103 '-c',
104 '"echo \\"' + node['ssh_public_key'] + '\\" >> /home/vagrant/.ssh/authorized_keys"'
105 ],
106 creds=node['creds'],
107 )
108
109 provider_id = uid
110 fqdn = provider_id
111 if not fqdn.endswith('.local'):
112 fqdn += '.local' # hostname is broadcast via avahi-daemon
113 metadata = {
114 'id': uid,
115 'fqdn': fqdn,
116 'flavor': node['params']['memory']
117 }
118 return provider_id, fqdn, metadata
119
120
121 def destroy_node(node):
122 """
123 Destroy a node.
124
125 :param node: a dict containing a node's provider_id, params, and creds
126 """
127
128 # This is useful if node creation failed. So that there's a record in the DB, but it has no
129 # ID associated with it.
130 if node['provider_id'] is None:
131 return
132
133 # Shut the VM down and destroy it
134 _run_vagrant_command(node['provider_id'], args=['destroy', '--force'], creds=node['creds'])
135 node_dir = HOST_NODES_DIR + '/' + node['provider_id']
136
137 # Sanity check before `rm -rf`
138 if 'contrib/vagrant' not in node_dir:
139 raise RuntimeError("Aborted node destruction: attempting to 'rm -rf' unexpected directory")
140
141 # Completely remove the folder that contained the VM
142 rm_vagrantfile = 'rm ' + node_dir + '/Vagrantfile'
143 rm_node_dir = 'rm -rf ' + node_dir
144 _host_ssh(commands=[rm_vagrantfile, rm_node_dir], creds=node['creds'])
145
146
147 def _run_vagrant_command(node_id, args=[], creds={}):
148 """
149 args: A tuple of arguments to a vagrant command line.
150 e.g. ['up', 'my_vm_name', '--no-provision']
151 """
152
153 cd = 'cd ' + HOST_NODES_DIR + '/' + node_id
154 command = ['vagrant'] + [arg for arg in args if arg is not None]
155 return _host_ssh(commands=[cd, ' '.join(command)], creds=creds)
156
157
158 def _host_ssh(creds={}, commands=[]):
159 """
160 Connect to the host machine. Namely the user's local machine.
161 """
162 if creds == {}:
163 raise RuntimeError("No credentials provided to _host_ssh()")
164 command = ' && '.join(commands)
165
166 # First check if we can access the host machine. It's likely that their
167 # IP address changes every time they request a DHCP lease.
168 # TODO: Find a way of passing this error onto the CLI client.
169 try:
170 subprocess.check_call([
171 'nc', '-z', '-w2', creds['host'], '22'
172 ], stderr=subprocess.PIPE)
173 except subprocess.CalledProcessError:
174 raise RuntimeError("Couldn't ping port 22 at host with IP " + creds['host'])
175
176 ssh = connect_ssh(creds['user'], creds['host'], 22, PKEY, timeout=120)
177 result, status = exec_ssh(ssh, command)
178 if status > 0:
179 raise RuntimeError(
180 'SSH to Vagrant host error: ' + result.decode('utf-8') +
181 'Command: ' + command.decode('utf-8'))
182 return result
183
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/provider/vagrant.py b/provider/vagrant.py
--- a/provider/vagrant.py
+++ b/provider/vagrant.py
@@ -131,17 +131,25 @@
return
# Shut the VM down and destroy it
- _run_vagrant_command(node['provider_id'], args=['destroy', '--force'], creds=node['creds'])
- node_dir = HOST_NODES_DIR + '/' + node['provider_id']
-
- # Sanity check before `rm -rf`
- if 'contrib/vagrant' not in node_dir:
- raise RuntimeError("Aborted node destruction: attempting to 'rm -rf' unexpected directory")
-
- # Completely remove the folder that contained the VM
- rm_vagrantfile = 'rm ' + node_dir + '/Vagrantfile'
- rm_node_dir = 'rm -rf ' + node_dir
- _host_ssh(commands=[rm_vagrantfile, rm_node_dir], creds=node['creds'])
+ try:
+ _run_vagrant_command(node['provider_id'], args=['destroy', '--force'], creds=node['creds'])
+ node_dir = HOST_NODES_DIR + '/' + node['provider_id']
+
+ # Sanity check before `rm -rf`
+ if 'contrib/vagrant' not in node_dir:
+ raise RuntimeError(
+ "Aborted node destruction: attempting to 'rm -rf' unexpected directory")
+
+ # Completely remove the folder that contained the VM
+ rm_vagrantfile = 'rm ' + node_dir + '/Vagrantfile'
+ rm_node_dir = 'rm -rf ' + node_dir
+ _host_ssh(commands=[rm_vagrantfile, rm_node_dir], creds=node['creds'])
+ except RuntimeError as err:
+ # If we couldn't cd to the node dir, just log that as a warning
+ if 'No such file or directory' in str(err):
+ logger.warn(err)
+ else:
+ raise
def _run_vagrant_command(node_id, args=[], creds={}):
|
{"golden_diff": "diff --git a/provider/vagrant.py b/provider/vagrant.py\n--- a/provider/vagrant.py\n+++ b/provider/vagrant.py\n@@ -131,17 +131,25 @@\n return\n \n # Shut the VM down and destroy it\n- _run_vagrant_command(node['provider_id'], args=['destroy', '--force'], creds=node['creds'])\n- node_dir = HOST_NODES_DIR + '/' + node['provider_id']\n-\n- # Sanity check before `rm -rf`\n- if 'contrib/vagrant' not in node_dir:\n- raise RuntimeError(\"Aborted node destruction: attempting to 'rm -rf' unexpected directory\")\n-\n- # Completely remove the folder that contained the VM\n- rm_vagrantfile = 'rm ' + node_dir + '/Vagrantfile'\n- rm_node_dir = 'rm -rf ' + node_dir\n- _host_ssh(commands=[rm_vagrantfile, rm_node_dir], creds=node['creds'])\n+ try:\n+ _run_vagrant_command(node['provider_id'], args=['destroy', '--force'], creds=node['creds'])\n+ node_dir = HOST_NODES_DIR + '/' + node['provider_id']\n+\n+ # Sanity check before `rm -rf`\n+ if 'contrib/vagrant' not in node_dir:\n+ raise RuntimeError(\n+ \"Aborted node destruction: attempting to 'rm -rf' unexpected directory\")\n+\n+ # Completely remove the folder that contained the VM\n+ rm_vagrantfile = 'rm ' + node_dir + '/Vagrantfile'\n+ rm_node_dir = 'rm -rf ' + node_dir\n+ _host_ssh(commands=[rm_vagrantfile, rm_node_dir], creds=node['creds'])\n+ except RuntimeError as err:\n+ # If we couldn't cd to the node dir, just log that as a warning\n+ if 'No such file or directory' in str(err):\n+ logger.warn(err)\n+ else:\n+ raise\n \n \n def _run_vagrant_command(node_id, args=[], creds={}):\n", "issue": "Vagrant provider repeatedly errors on formation if node dir is deleted\nNeeds to be more robust in some error cases such as this one:\n1) Provision a controller but somehow forget to add _deis-controler_ to the admins group, despite all documentation and fuschia-colored warnings at the command-line\n2) Create a formation and scale it upward, e.g. 
`deis nodes:scale form1 runtime=2`\n3) Try to scale down the formation, get an appropriate error about \"couldn't remove chef node\"\n4) All subsequent formation commands--including destroy!--will fail when trying to access the local vagrant node dir, which apparently was removed in step 3).\n\nThis shouldn't happen often, but it can and I think ignoring this error at least in the case of `deis formations:destroy` would provide a way out of this dead end.\n\n", "before_files": [{"content": "\"\"\"\nDeis cloud provider implementation for local vagrant setups.\n\"\"\"\n\nfrom __future__ import unicode_literals\n\nfrom api.ssh import exec_ssh, connect_ssh\n\nimport json\nimport logging\nimport string\nimport subprocess\nimport uuid\n\nfrom api.models import Layer\nfrom api.models import Node\n\nlogger = logging.getLogger(__name__)\n\n# Collect details for connecting to the host machine\ntry:\n HOST_NODES_DIR = open('/home/vagrant/.host_nodes_dir').read().strip()\n PKEY = open('/home/vagrant/.ssh/id_rsa').read()\nexcept IOError as err:\n logger.warn(err)\n\n\ndef seed_flavors():\n \"\"\"Seed the database with default flavors for vagrant.\n\n :rtype: list of dicts containing flavor data\n \"\"\"\n flavors = []\n for m in ['512', '1024', '2048']:\n flavors.append({\n 'id': \"vagrant-{}\".format(m),\n 'provider': 'vagrant',\n 'params': json.dumps({\n 'memory': m\n })\n })\n return flavors\n\n\ndef build_layer(layer):\n \"\"\"\n Build a layer.\n\n :param layer: a dict containing formation, id, params, and creds info\n \"\"\"\n\n # This can also be done with `deis layers:update` now.\n layer_ = Layer.objects.get(id=layer['id'], formation__id=layer['formation'])\n layer_.ssh_username = 'vagrant'\n layer_.save()\n\n\ndef destroy_layer(layer):\n \"\"\"\n Destroy a layer.\n\n :param layer: a dict containing formation, id, params, and creds info\n \"\"\"\n pass\n\n\ndef build_node(node):\n \"\"\"\n Build a node.\n\n :param node: a dict containing formation, layer, params, and creds info.\n :rtype: a tuple of (provider_id, fully_qualified_domain_name, metadata)\n \"\"\"\n\n # Can't use the vagrant UUID because it's not booted yet\n uid = str(uuid.uuid1())\n\n # Create a new Vagrantfile from a template\n node['params'].setdefault('memory', '512')\n template = open('/opt/deis/controller/contrib/vagrant/nodes_vagrantfile_template.rb')\n raw = string.Template(template.read())\n result = raw.substitute({\n 'id': uid,\n 'ipaddress': '192.168.61.' + str(Node.objects.all().count() + 100),\n 'memory': node['params']['memory']\n })\n\n # Make a folder for the VM with its own Vagrantfile. 
Vagrant will then create a .vagrant folder\n # there too when it first gets booted.\n node_dir = HOST_NODES_DIR + '/' + uid\n mkdir = 'mkdir -p ' + node_dir\n cp_tpl = 'echo \"' + result.replace('\"', '\\\\\"') + '\" > ' + node_dir + '/Vagrantfile'\n _host_ssh(commands=[mkdir, cp_tpl], creds=node['creds'])\n\n # Boot the VM\n _run_vagrant_command(uid, args=['up'], creds=node['creds'])\n\n # Copy the layer's public SSH key to the VM so that the Controller can access it.\n _run_vagrant_command(\n uid,\n args=[\n 'ssh',\n '-c',\n '\"echo \\\\\"' + node['ssh_public_key'] + '\\\\\" >> /home/vagrant/.ssh/authorized_keys\"'\n ],\n creds=node['creds'],\n )\n\n provider_id = uid\n fqdn = provider_id\n if not fqdn.endswith('.local'):\n fqdn += '.local' # hostname is broadcast via avahi-daemon\n metadata = {\n 'id': uid,\n 'fqdn': fqdn,\n 'flavor': node['params']['memory']\n }\n return provider_id, fqdn, metadata\n\n\ndef destroy_node(node):\n \"\"\"\n Destroy a node.\n\n :param node: a dict containing a node's provider_id, params, and creds\n \"\"\"\n\n # This is useful if node creation failed. So that there's a record in the DB, but it has no\n # ID associated with it.\n if node['provider_id'] is None:\n return\n\n # Shut the VM down and destroy it\n _run_vagrant_command(node['provider_id'], args=['destroy', '--force'], creds=node['creds'])\n node_dir = HOST_NODES_DIR + '/' + node['provider_id']\n\n # Sanity check before `rm -rf`\n if 'contrib/vagrant' not in node_dir:\n raise RuntimeError(\"Aborted node destruction: attempting to 'rm -rf' unexpected directory\")\n\n # Completely remove the folder that contained the VM\n rm_vagrantfile = 'rm ' + node_dir + '/Vagrantfile'\n rm_node_dir = 'rm -rf ' + node_dir\n _host_ssh(commands=[rm_vagrantfile, rm_node_dir], creds=node['creds'])\n\n\ndef _run_vagrant_command(node_id, args=[], creds={}):\n \"\"\"\n args: A tuple of arguments to a vagrant command line.\n e.g. ['up', 'my_vm_name', '--no-provision']\n \"\"\"\n\n cd = 'cd ' + HOST_NODES_DIR + '/' + node_id\n command = ['vagrant'] + [arg for arg in args if arg is not None]\n return _host_ssh(commands=[cd, ' '.join(command)], creds=creds)\n\n\ndef _host_ssh(creds={}, commands=[]):\n \"\"\"\n Connect to the host machine. Namely the user's local machine.\n \"\"\"\n if creds == {}:\n raise RuntimeError(\"No credentials provided to _host_ssh()\")\n command = ' && '.join(commands)\n\n # First check if we can access the host machine. 
It's likely that their\n # IP address changes every time they request a DHCP lease.\n # TODO: Find a way of passing this error onto the CLI client.\n try:\n subprocess.check_call([\n 'nc', '-z', '-w2', creds['host'], '22'\n ], stderr=subprocess.PIPE)\n except subprocess.CalledProcessError:\n raise RuntimeError(\"Couldn't ping port 22 at host with IP \" + creds['host'])\n\n ssh = connect_ssh(creds['user'], creds['host'], 22, PKEY, timeout=120)\n result, status = exec_ssh(ssh, command)\n if status > 0:\n raise RuntimeError(\n 'SSH to Vagrant host error: ' + result.decode('utf-8') +\n 'Command: ' + command.decode('utf-8'))\n return result\n", "path": "provider/vagrant.py"}], "after_files": [{"content": "\"\"\"\nDeis cloud provider implementation for local vagrant setups.\n\"\"\"\n\nfrom __future__ import unicode_literals\n\nfrom api.ssh import exec_ssh, connect_ssh\n\nimport json\nimport logging\nimport string\nimport subprocess\nimport uuid\n\nfrom api.models import Layer\nfrom api.models import Node\n\nlogger = logging.getLogger(__name__)\n\n# Collect details for connecting to the host machine\ntry:\n HOST_NODES_DIR = open('/home/vagrant/.host_nodes_dir').read().strip()\n PKEY = open('/home/vagrant/.ssh/id_rsa').read()\nexcept IOError as err:\n logger.warn(err)\n\n\ndef seed_flavors():\n \"\"\"Seed the database with default flavors for vagrant.\n\n :rtype: list of dicts containing flavor data\n \"\"\"\n flavors = []\n for m in ['512', '1024', '2048']:\n flavors.append({\n 'id': \"vagrant-{}\".format(m),\n 'provider': 'vagrant',\n 'params': json.dumps({\n 'memory': m\n })\n })\n return flavors\n\n\ndef build_layer(layer):\n \"\"\"\n Build a layer.\n\n :param layer: a dict containing formation, id, params, and creds info\n \"\"\"\n\n # This can also be done with `deis layers:update` now.\n layer_ = Layer.objects.get(id=layer['id'], formation__id=layer['formation'])\n layer_.ssh_username = 'vagrant'\n layer_.save()\n\n\ndef destroy_layer(layer):\n \"\"\"\n Destroy a layer.\n\n :param layer: a dict containing formation, id, params, and creds info\n \"\"\"\n pass\n\n\ndef build_node(node):\n \"\"\"\n Build a node.\n\n :param node: a dict containing formation, layer, params, and creds info.\n :rtype: a tuple of (provider_id, fully_qualified_domain_name, metadata)\n \"\"\"\n\n # Can't use the vagrant UUID because it's not booted yet\n uid = str(uuid.uuid1())\n\n # Create a new Vagrantfile from a template\n node['params'].setdefault('memory', '512')\n template = open('/opt/deis/controller/contrib/vagrant/nodes_vagrantfile_template.rb')\n raw = string.Template(template.read())\n result = raw.substitute({\n 'id': uid,\n 'ipaddress': '192.168.61.' + str(Node.objects.all().count() + 100),\n 'memory': node['params']['memory']\n })\n\n # Make a folder for the VM with its own Vagrantfile. 
Vagrant will then create a .vagrant folder\n # there too when it first gets booted.\n node_dir = HOST_NODES_DIR + '/' + uid\n mkdir = 'mkdir -p ' + node_dir\n cp_tpl = 'echo \"' + result.replace('\"', '\\\\\"') + '\" > ' + node_dir + '/Vagrantfile'\n _host_ssh(commands=[mkdir, cp_tpl], creds=node['creds'])\n\n # Boot the VM\n _run_vagrant_command(uid, args=['up'], creds=node['creds'])\n\n # Copy the layer's public SSH key to the VM so that the Controller can access it.\n _run_vagrant_command(\n uid,\n args=[\n 'ssh',\n '-c',\n '\"echo \\\\\"' + node['ssh_public_key'] + '\\\\\" >> /home/vagrant/.ssh/authorized_keys\"'\n ],\n creds=node['creds'],\n )\n\n provider_id = uid\n fqdn = provider_id\n if not fqdn.endswith('.local'):\n fqdn += '.local' # hostname is broadcast via avahi-daemon\n metadata = {\n 'id': uid,\n 'fqdn': fqdn,\n 'flavor': node['params']['memory']\n }\n return provider_id, fqdn, metadata\n\n\ndef destroy_node(node):\n \"\"\"\n Destroy a node.\n\n :param node: a dict containing a node's provider_id, params, and creds\n \"\"\"\n\n # This is useful if node creation failed. So that there's a record in the DB, but it has no\n # ID associated with it.\n if node['provider_id'] is None:\n return\n\n # Shut the VM down and destroy it\n try:\n _run_vagrant_command(node['provider_id'], args=['destroy', '--force'], creds=node['creds'])\n node_dir = HOST_NODES_DIR + '/' + node['provider_id']\n\n # Sanity check before `rm -rf`\n if 'contrib/vagrant' not in node_dir:\n raise RuntimeError(\n \"Aborted node destruction: attempting to 'rm -rf' unexpected directory\")\n\n # Completely remove the folder that contained the VM\n rm_vagrantfile = 'rm ' + node_dir + '/Vagrantfile'\n rm_node_dir = 'rm -rf ' + node_dir\n _host_ssh(commands=[rm_vagrantfile, rm_node_dir], creds=node['creds'])\n except RuntimeError as err:\n # If we couldn't cd to the node dir, just log that as a warning\n if 'No such file or directory' in str(err):\n logger.warn(err)\n else:\n raise\n\n\ndef _run_vagrant_command(node_id, args=[], creds={}):\n \"\"\"\n args: A tuple of arguments to a vagrant command line.\n e.g. ['up', 'my_vm_name', '--no-provision']\n \"\"\"\n\n cd = 'cd ' + HOST_NODES_DIR + '/' + node_id\n command = ['vagrant'] + [arg for arg in args if arg is not None]\n return _host_ssh(commands=[cd, ' '.join(command)], creds=creds)\n\n\ndef _host_ssh(creds={}, commands=[]):\n \"\"\"\n Connect to the host machine. Namely the user's local machine.\n \"\"\"\n if creds == {}:\n raise RuntimeError(\"No credentials provided to _host_ssh()\")\n command = ' && '.join(commands)\n\n # First check if we can access the host machine. It's likely that their\n # IP address changes every time they request a DHCP lease.\n # TODO: Find a way of passing this error onto the CLI client.\n try:\n subprocess.check_call([\n 'nc', '-z', '-w2', creds['host'], '22'\n ], stderr=subprocess.PIPE)\n except subprocess.CalledProcessError:\n raise RuntimeError(\"Couldn't ping port 22 at host with IP \" + creds['host'])\n\n ssh = connect_ssh(creds['user'], creds['host'], 22, PKEY, timeout=120)\n result, status = exec_ssh(ssh, command)\n if status > 0:\n raise RuntimeError(\n 'SSH to Vagrant host error: ' + result.decode('utf-8') +\n 'Command: ' + command.decode('utf-8'))\n return result\n", "path": "provider/vagrant.py"}]}
| 2,317 | 440 |
gh_patches_debug_41065
|
rasdani/github-patches
|
git_diff
|
sktime__sktime-1665
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] `test_fit_does_not_overwrite_hyper_params[FeatureUnion]` failing
Update: the failure has been silenced in the tests to enable refactor work on CI/CD, but the bug is still there.
To reproduce, the test should be run manually, or with the silencing disabled (`tests._config.EXCLUDED_TESTS`)
---
**Describe the bug**
In the refactored CI pipeline based on github actions #1620, and is blocking the PR.
The test `test_fit_does_not_overwrite_hyper_params[FeatureUnion]` from `tests/test_all_estimators.py` fails on linux with python3.6-3.9 and macos 3.6-3.9 with the error below.
Curiously the test are passing in CI pipelines currently on `main` branch.
```
____________ test_fit_does_not_overwrite_hyper_params[FeatureUnion] ____________
[gw0] darwin -- Python 3.7.12 /Users/runner/hostedtoolcache/Python/3.7.12/x64/bin/python
estimator_instance = FeatureUnion(n_jobs=None, preserve_dataframe=True,
transformer_list=[('transformer1',
... with_std=True)))],
transformer_weights=None)
def test_fit_does_not_overwrite_hyper_params(estimator_instance):
"""Check that we do not overwrite hyper-parameters in fit."""
estimator = estimator_instance
set_random_state(estimator)
# Make a physical copy of the original estimator parameters before fitting.
params = estimator.get_params()
original_params = deepcopy(params)
# Fit the model
fit_args = _make_args(estimator, "fit")
estimator.fit(*fit_args)
# Compare the state of the model parameters with the original parameters
new_params = estimator.get_params()
for param_name, original_value in original_params.items():
new_value = new_params[param_name]
# We should never change or mutate the internal state of input
# parameters by default. To check this we use the joblib.hash function
# that introspects recursively any subobjects to compute a checksum.
# The only exception to this rule of immutable constructor parameters
# is possible RandomState instance but in this check we explicitly
# fixed the random_state params recursively to be integer seeds.
> assert joblib.hash(new_value) == joblib.hash(original_value), (
"Estimator %s should not change or mutate "
" the parameter %s from %s to %s during fit."
% (estimator.__class__.__name__, param_name, original_value, new_value)
)
E AssertionError: Estimator FeatureUnion should not change or mutate the parameter transformer_list from [('transformer1', SeriesToSeriesRowTransformer(check_transformer=False,
E transformer=StandardScaler(copy=True,
E with_mean=True,
E with_std=True))), ('transformer2', SeriesToSeriesRowTransformer(check_transformer=False,
E transformer=StandardScaler(copy=True,
E with_mean=True,
E with_std=True)))] to [('transformer1', SeriesToSeriesRowTransformer(check_transformer=False,
E transformer=StandardScaler(copy=True,
E with_mean=True,
E with_std=True))), ('transformer2', SeriesToSeriesRowTransformer(check_transformer=False,
E transformer=StandardScaler(copy=True,
E with_mean=True,
E with_std=True)))] during fit.
E assert '7f94d1fc7e1f...888be251ce7b2' == 'b03f493febd2...c60681b4af6e4'
E - b03f493febd2f1d6da1c60681b4af6e4
E + 7f94d1fc7e1f285e1e5888be251ce7b2
estimator = FeatureUnion(n_jobs=None, preserve_dataframe=True,
transformer_list=[('transformer1',
... with_std=True)))],
transformer_weights=None)
estimator_instance = FeatureUnion(n_jobs=None, preserve_dataframe=True,
transformer_list=[('transformer1',
... with_std=True)))],
transformer_weights=None)
fit_args = ( var_0
0 0 -0.116020
1 0.343339
2 -0.464066
3...
1 0 ...0
7 1
8 0
9 0
10 0
11 0
12 1
13 1
14 1
15 1
16 0
17 0
18 1
19 1
dtype: int64)
new_params = {'n_jobs': None, 'preserve_dataframe': True, 'transformer1': SeriesToSeriesRowTransformer(check_transformer=False,
... with_std=True)), 'transformer1__check_transformer': False, ...}
new_value = [('transformer1', SeriesToSeriesRowTransformer(check_transformer=False,
transformer=Stand... with_mean=True,
with_std=True)))]
original_params = {'n_jobs': None, 'preserve_dataframe': True, 'transformer1': SeriesToSeriesRowTransformer(check_transformer=False,
... with_std=True)), 'transformer1__check_transformer': False, ...}
original_value = [('transformer1', SeriesToSeriesRowTransformer(check_transformer=False,
transformer=Stand... with_mean=True,
with_std=True)))]
param_name = 'transformer_list'
params = {'n_jobs': None, 'preserve_dataframe': True, 'transformer1': SeriesToSeriesRowTransformer(check_transformer=False,
... with_std=True)), 'transformer1__check_transformer': False, ...}
```
**To Reproduce**
Run the test with:
```bash
pytest sktime/tests/test_all_estimators.py
```
**Expected behavior**
Test passes
**Additional context**
**Versions**
See github actions under #1620
<!--
Please run the following code snippet and paste the output here:
from sktime import show_versions; show_versions()
-->
</details>
<!-- Thanks for contributing! -->
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sktime/series_as_features/compose/_pipeline.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 import numpy as np
3 import pandas as pd
4 from joblib import Parallel, delayed
5 from scipy import sparse
6 from sklearn.pipeline import FeatureUnion as _FeatureUnion
7 from sklearn.pipeline import _fit_transform_one, _transform_one
8
9 from sktime.transformations.base import _PanelToPanelTransformer
10
11 __all__ = ["FeatureUnion"]
12 __author__ = ["Markus Löning"]
13
14
15 class FeatureUnion(_FeatureUnion, _PanelToPanelTransformer):
16 """Concatenates results of multiple transformer objects.
17
18 This estimator applies a list of transformer objects in parallel to the
19 input data, then concatenates the results. This is useful to combine
20 several feature extraction mechanisms into a single transformer.
21 Parameters of the transformations may be set using its name and the
22 parameter name separated by a '__'. A transformer may be replaced entirely by
23 setting the parameter with its name to another transformer,
24 or removed by setting to 'drop' or ``None``.
25
26 Parameters
27 ----------
28 transformer_list : list of (string, transformer) tuples
29 List of transformer objects to be applied to the data. The first
30 half of each tuple is the name of the transformer.
31 n_jobs : int or None, optional (default=None)
32 Number of jobs to run in parallel.
33 ``None`` means 1 unless in a :obj:`joblib.parallel_backend`
34 context.
35 ``-1`` means using all processors.
36 transformer_weights : dict, optional
37 Multiplicative weights for features per transformer.
38 Keys are transformer names, values the weights.
39 preserve_dataframe : bool
40 Save constructed dataframe.
41 """
42
43 _required_parameters = ["transformer_list"]
44
45 def __init__(
46 self,
47 transformer_list,
48 n_jobs=None,
49 transformer_weights=None,
50 preserve_dataframe=True,
51 ):
52 self.preserve_dataframe = preserve_dataframe
53 super(FeatureUnion, self).__init__(
54 transformer_list, n_jobs=n_jobs, transformer_weights=transformer_weights
55 )
56
57 # We need to add is-fitted state when inheriting from scikit-learn
58 self._is_fitted = False
59
60 def fit_transform(self, X, y=None, **fit_params):
61 """Fit all transformations, transform the data and concatenate results.
62
63 Parameters
64 ----------
65 X : pandas DataFrame
66 Input data to be transformed.
67 y : pandas Series, shape (n_samples, ...), optional
68 Targets for supervised learning.
69
70 Returns
71 -------
72 Xt : pandas DataFrame
73 hstack of results of transformations. sum_n_components is the
74 sum of n_components (output dimension) over transformations.
75 """
76 self._validate_transformers()
77 result = Parallel(n_jobs=self.n_jobs)(
78 delayed(_fit_transform_one)(trans, X, y, weight, **fit_params)
79 for name, trans, weight in self._iter()
80 )
81
82 if not result:
83 # All transformations are None
84 return np.zeros((X.shape[0], 0))
85
86 Xs, transformers = zip(*result)
87 self._update_transformer_list(transformers)
88
89 Xs = self._hstack(list(Xs))
90 self._is_fitted = True
91 return Xs
92
93 def fit(self, X, y=None, **fit_params):
94 """Fit parameters."""
95 super(FeatureUnion, self).fit(X, y, **fit_params)
96 self._is_fitted = True
97 return self
98
99 def transform(self, X):
100 """Transform X separately by each transformer, concatenate results.
101
102 Parameters
103 ----------
104 X : pandas DataFrame
105 Input data to be transformed.
106
107 Returns
108 -------
109 Xt : pandas DataFrame
110 hstack of results of transformations. sum_n_components is the
111 sum of n_components (output dimension) over transformations.
112 """
113 self.check_is_fitted()
114 Xs = Parallel(n_jobs=self.n_jobs)(
115 delayed(_transform_one)(trans, X, None, weight)
116 for name, trans, weight in self._iter()
117 )
118
119 if not Xs:
120 # All transformations are None
121 return np.zeros((X.shape[0], 0))
122
123 else:
124 return self._hstack(list(Xs))
125
126 def _hstack(self, Xs):
127 """
128 Stacks X horizontally.
129
130 Supports input types (X): list of
131 numpy arrays, sparse arrays and DataFrames.
132 """
133 if any(sparse.issparse(f) for f in Xs):
134 Xs = sparse.hstack(Xs).tocsr()
135
136 types = set(type(X) for X in Xs)
137 if self.preserve_dataframe and (pd.Series in types or pd.DataFrame in types):
138 return pd.concat(Xs, axis=1)
139
140 else:
141 return np.hstack(Xs)
142
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/sktime/series_as_features/compose/_pipeline.py b/sktime/series_as_features/compose/_pipeline.py
--- a/sktime/series_as_features/compose/_pipeline.py
+++ b/sktime/series_as_features/compose/_pipeline.py
@@ -1,10 +1,9 @@
# -*- coding: utf-8 -*-
+
import numpy as np
import pandas as pd
-from joblib import Parallel, delayed
from scipy import sparse
from sklearn.pipeline import FeatureUnion as _FeatureUnion
-from sklearn.pipeline import _fit_transform_one, _transform_one
from sktime.transformations.base import _PanelToPanelTransformer
@@ -57,71 +56,20 @@
# We need to add is-fitted state when inheriting from scikit-learn
self._is_fitted = False
- def fit_transform(self, X, y=None, **fit_params):
- """Fit all transformations, transform the data and concatenate results.
-
- Parameters
- ----------
- X : pandas DataFrame
- Input data to be transformed.
- y : pandas Series, shape (n_samples, ...), optional
- Targets for supervised learning.
-
- Returns
- -------
- Xt : pandas DataFrame
- hstack of results of transformations. sum_n_components is the
- sum of n_components (output dimension) over transformations.
- """
- self._validate_transformers()
- result = Parallel(n_jobs=self.n_jobs)(
- delayed(_fit_transform_one)(trans, X, y, weight, **fit_params)
- for name, trans, weight in self._iter()
- )
-
- if not result:
- # All transformations are None
- return np.zeros((X.shape[0], 0))
-
- Xs, transformers = zip(*result)
- self._update_transformer_list(transformers)
-
- Xs = self._hstack(list(Xs))
- self._is_fitted = True
- return Xs
-
def fit(self, X, y=None, **fit_params):
"""Fit parameters."""
- super(FeatureUnion, self).fit(X, y, **fit_params)
+ super().fit(X, y, **fit_params)
self._is_fitted = True
return self
def transform(self, X):
- """Transform X separately by each transformer, concatenate results.
-
- Parameters
- ----------
- X : pandas DataFrame
- Input data to be transformed.
-
- Returns
- -------
- Xt : pandas DataFrame
- hstack of results of transformations. sum_n_components is the
- sum of n_components (output dimension) over transformations.
- """
+ """Transform X separately by each transformer, concatenate results."""
self.check_is_fitted()
- Xs = Parallel(n_jobs=self.n_jobs)(
- delayed(_transform_one)(trans, X, None, weight)
- for name, trans, weight in self._iter()
- )
-
- if not Xs:
- # All transformations are None
- return np.zeros((X.shape[0], 0))
+ return super().transform(X)
- else:
- return self._hstack(list(Xs))
+ def fit_transform(self, X, y, **fit_params):
+ """Transform X separately by each transformer, concatenate results."""
+ return self.fit(X, y, **fit_params).transform(X)
def _hstack(self, Xs):
"""
@@ -133,7 +81,7 @@
if any(sparse.issparse(f) for f in Xs):
Xs = sparse.hstack(Xs).tocsr()
- types = set(type(X) for X in Xs)
+ types = {type(X) for X in Xs}
if self.preserve_dataframe and (pd.Series in types or pd.DataFrame in types):
return pd.concat(Xs, axis=1)
|
{"golden_diff": "diff --git a/sktime/series_as_features/compose/_pipeline.py b/sktime/series_as_features/compose/_pipeline.py\n--- a/sktime/series_as_features/compose/_pipeline.py\n+++ b/sktime/series_as_features/compose/_pipeline.py\n@@ -1,10 +1,9 @@\n # -*- coding: utf-8 -*-\n+\n import numpy as np\n import pandas as pd\n-from joblib import Parallel, delayed\n from scipy import sparse\n from sklearn.pipeline import FeatureUnion as _FeatureUnion\n-from sklearn.pipeline import _fit_transform_one, _transform_one\n \n from sktime.transformations.base import _PanelToPanelTransformer\n \n@@ -57,71 +56,20 @@\n # We need to add is-fitted state when inheriting from scikit-learn\n self._is_fitted = False\n \n- def fit_transform(self, X, y=None, **fit_params):\n- \"\"\"Fit all transformations, transform the data and concatenate results.\n-\n- Parameters\n- ----------\n- X : pandas DataFrame\n- Input data to be transformed.\n- y : pandas Series, shape (n_samples, ...), optional\n- Targets for supervised learning.\n-\n- Returns\n- -------\n- Xt : pandas DataFrame\n- hstack of results of transformations. sum_n_components is the\n- sum of n_components (output dimension) over transformations.\n- \"\"\"\n- self._validate_transformers()\n- result = Parallel(n_jobs=self.n_jobs)(\n- delayed(_fit_transform_one)(trans, X, y, weight, **fit_params)\n- for name, trans, weight in self._iter()\n- )\n-\n- if not result:\n- # All transformations are None\n- return np.zeros((X.shape[0], 0))\n-\n- Xs, transformers = zip(*result)\n- self._update_transformer_list(transformers)\n-\n- Xs = self._hstack(list(Xs))\n- self._is_fitted = True\n- return Xs\n-\n def fit(self, X, y=None, **fit_params):\n \"\"\"Fit parameters.\"\"\"\n- super(FeatureUnion, self).fit(X, y, **fit_params)\n+ super().fit(X, y, **fit_params)\n self._is_fitted = True\n return self\n \n def transform(self, X):\n- \"\"\"Transform X separately by each transformer, concatenate results.\n-\n- Parameters\n- ----------\n- X : pandas DataFrame\n- Input data to be transformed.\n-\n- Returns\n- -------\n- Xt : pandas DataFrame\n- hstack of results of transformations. 
sum_n_components is the\n- sum of n_components (output dimension) over transformations.\n- \"\"\"\n+ \"\"\"Transform X separately by each transformer, concatenate results.\"\"\"\n self.check_is_fitted()\n- Xs = Parallel(n_jobs=self.n_jobs)(\n- delayed(_transform_one)(trans, X, None, weight)\n- for name, trans, weight in self._iter()\n- )\n-\n- if not Xs:\n- # All transformations are None\n- return np.zeros((X.shape[0], 0))\n+ return super().transform(X)\n \n- else:\n- return self._hstack(list(Xs))\n+ def fit_transform(self, X, y, **fit_params):\n+ \"\"\"Transform X separately by each transformer, concatenate results.\"\"\"\n+ return self.fit(X, y, **fit_params).transform(X)\n \n def _hstack(self, Xs):\n \"\"\"\n@@ -133,7 +81,7 @@\n if any(sparse.issparse(f) for f in Xs):\n Xs = sparse.hstack(Xs).tocsr()\n \n- types = set(type(X) for X in Xs)\n+ types = {type(X) for X in Xs}\n if self.preserve_dataframe and (pd.Series in types or pd.DataFrame in types):\n return pd.concat(Xs, axis=1)\n", "issue": "[BUG] `test_fit_does_not_overwrite_hyper_params[FeatureUnion]` failing\nUpdate: the failure has been silenced in the tests to enable refactor work on CI/CD, but the bug is still there.\r\nTo reproduce, the test should be run manually, or with the silencing disabled (`tests._config.EXCLUDED_TESTS`)\r\n\r\n---\r\n\r\n**Describe the bug**\r\n\r\nIn the refactored CI pipeline based on github actions #1620, and is blocking the PR.\r\n\r\nThe test `test_fit_does_not_overwrite_hyper_params[FeatureUnion]` from `tests/test_all_estimators.py` fails on linux with python3.6-3.9 and macos 3.6-3.9 with the error below.\r\n\r\nCuriously the test are passing in CI pipelines currently on `main` branch.\r\n\r\n```\r\n____________ test_fit_does_not_overwrite_hyper_params[FeatureUnion] ____________\r\n[gw0] darwin -- Python 3.7.12 /Users/runner/hostedtoolcache/Python/3.7.12/x64/bin/python\r\n\r\nestimator_instance = FeatureUnion(n_jobs=None, preserve_dataframe=True,\r\n transformer_list=[('transformer1',\r\n ... with_std=True)))],\r\n transformer_weights=None)\r\n\r\n def test_fit_does_not_overwrite_hyper_params(estimator_instance):\r\n \"\"\"Check that we do not overwrite hyper-parameters in fit.\"\"\"\r\n estimator = estimator_instance\r\n set_random_state(estimator)\r\n \r\n # Make a physical copy of the original estimator parameters before fitting.\r\n params = estimator.get_params()\r\n original_params = deepcopy(params)\r\n \r\n # Fit the model\r\n fit_args = _make_args(estimator, \"fit\")\r\n estimator.fit(*fit_args)\r\n \r\n # Compare the state of the model parameters with the original parameters\r\n new_params = estimator.get_params()\r\n for param_name, original_value in original_params.items():\r\n new_value = new_params[param_name]\r\n \r\n # We should never change or mutate the internal state of input\r\n # parameters by default. 
To check this we use the joblib.hash function\r\n # that introspects recursively any subobjects to compute a checksum.\r\n # The only exception to this rule of immutable constructor parameters\r\n # is possible RandomState instance but in this check we explicitly\r\n # fixed the random_state params recursively to be integer seeds.\r\n> assert joblib.hash(new_value) == joblib.hash(original_value), (\r\n \"Estimator %s should not change or mutate \"\r\n \" the parameter %s from %s to %s during fit.\"\r\n % (estimator.__class__.__name__, param_name, original_value, new_value)\r\n )\r\nE AssertionError: Estimator FeatureUnion should not change or mutate the parameter transformer_list from [('transformer1', SeriesToSeriesRowTransformer(check_transformer=False,\r\nE transformer=StandardScaler(copy=True,\r\nE with_mean=True,\r\nE with_std=True))), ('transformer2', SeriesToSeriesRowTransformer(check_transformer=False,\r\nE transformer=StandardScaler(copy=True,\r\nE with_mean=True,\r\nE with_std=True)))] to [('transformer1', SeriesToSeriesRowTransformer(check_transformer=False,\r\nE transformer=StandardScaler(copy=True,\r\nE with_mean=True,\r\nE with_std=True))), ('transformer2', SeriesToSeriesRowTransformer(check_transformer=False,\r\nE transformer=StandardScaler(copy=True,\r\nE with_mean=True,\r\nE with_std=True)))] during fit.\r\nE assert '7f94d1fc7e1f...888be251ce7b2' == 'b03f493febd2...c60681b4af6e4'\r\nE - b03f493febd2f1d6da1c60681b4af6e4\r\nE + 7f94d1fc7e1f285e1e5888be251ce7b2\r\n\r\nestimator = FeatureUnion(n_jobs=None, preserve_dataframe=True,\r\n transformer_list=[('transformer1',\r\n ... with_std=True)))],\r\n transformer_weights=None)\r\nestimator_instance = FeatureUnion(n_jobs=None, preserve_dataframe=True,\r\n transformer_list=[('transformer1',\r\n ... with_std=True)))],\r\n transformer_weights=None)\r\nfit_args = ( var_0\r\n0 0 -0.116020\r\n1 0.343339\r\n2 -0.464066\r\n3...\r\n1 0 ...0\r\n7 1\r\n8 0\r\n9 0\r\n10 0\r\n11 0\r\n12 1\r\n13 1\r\n14 1\r\n15 1\r\n16 0\r\n17 0\r\n18 1\r\n19 1\r\ndtype: int64)\r\nnew_params = {'n_jobs': None, 'preserve_dataframe': True, 'transformer1': SeriesToSeriesRowTransformer(check_transformer=False,\r\n ... with_std=True)), 'transformer1__check_transformer': False, ...}\r\nnew_value = [('transformer1', SeriesToSeriesRowTransformer(check_transformer=False,\r\n transformer=Stand... with_mean=True,\r\n with_std=True)))]\r\noriginal_params = {'n_jobs': None, 'preserve_dataframe': True, 'transformer1': SeriesToSeriesRowTransformer(check_transformer=False,\r\n ... with_std=True)), 'transformer1__check_transformer': False, ...}\r\noriginal_value = [('transformer1', SeriesToSeriesRowTransformer(check_transformer=False,\r\n transformer=Stand... with_mean=True,\r\n with_std=True)))]\r\nparam_name = 'transformer_list'\r\nparams = {'n_jobs': None, 'preserve_dataframe': True, 'transformer1': SeriesToSeriesRowTransformer(check_transformer=False,\r\n ... with_std=True)), 'transformer1__check_transformer': False, ...}\r\n```\r\n\r\n**To Reproduce**\r\n\r\nRun the test with:\r\n\r\n```bash\r\npytest sktime/tests/test_all_estimators.py\r\n```\r\n\r\n**Expected behavior**\r\n\r\nTest passes\r\n\r\n**Additional context**\r\n\r\n**Versions**\r\n\r\nSee github actions under #1620 \r\n\r\n<!--\r\nPlease run the following code snippet and paste the output here:\r\n \r\nfrom sktime import show_versions; show_versions()\r\n-->\r\n\r\n</details>\r\n\r\n<!-- Thanks for contributing! 
-->\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport numpy as np\nimport pandas as pd\nfrom joblib import Parallel, delayed\nfrom scipy import sparse\nfrom sklearn.pipeline import FeatureUnion as _FeatureUnion\nfrom sklearn.pipeline import _fit_transform_one, _transform_one\n\nfrom sktime.transformations.base import _PanelToPanelTransformer\n\n__all__ = [\"FeatureUnion\"]\n__author__ = [\"Markus L\u00f6ning\"]\n\n\nclass FeatureUnion(_FeatureUnion, _PanelToPanelTransformer):\n \"\"\"Concatenates results of multiple transformer objects.\n\n This estimator applies a list of transformer objects in parallel to the\n input data, then concatenates the results. This is useful to combine\n several feature extraction mechanisms into a single transformer.\n Parameters of the transformations may be set using its name and the\n parameter name separated by a '__'. A transformer may be replaced entirely by\n setting the parameter with its name to another transformer,\n or removed by setting to 'drop' or ``None``.\n\n Parameters\n ----------\n transformer_list : list of (string, transformer) tuples\n List of transformer objects to be applied to the data. The first\n half of each tuple is the name of the transformer.\n n_jobs : int or None, optional (default=None)\n Number of jobs to run in parallel.\n ``None`` means 1 unless in a :obj:`joblib.parallel_backend`\n context.\n ``-1`` means using all processors.\n transformer_weights : dict, optional\n Multiplicative weights for features per transformer.\n Keys are transformer names, values the weights.\n preserve_dataframe : bool\n Save constructed dataframe.\n \"\"\"\n\n _required_parameters = [\"transformer_list\"]\n\n def __init__(\n self,\n transformer_list,\n n_jobs=None,\n transformer_weights=None,\n preserve_dataframe=True,\n ):\n self.preserve_dataframe = preserve_dataframe\n super(FeatureUnion, self).__init__(\n transformer_list, n_jobs=n_jobs, transformer_weights=transformer_weights\n )\n\n # We need to add is-fitted state when inheriting from scikit-learn\n self._is_fitted = False\n\n def fit_transform(self, X, y=None, **fit_params):\n \"\"\"Fit all transformations, transform the data and concatenate results.\n\n Parameters\n ----------\n X : pandas DataFrame\n Input data to be transformed.\n y : pandas Series, shape (n_samples, ...), optional\n Targets for supervised learning.\n\n Returns\n -------\n Xt : pandas DataFrame\n hstack of results of transformations. sum_n_components is the\n sum of n_components (output dimension) over transformations.\n \"\"\"\n self._validate_transformers()\n result = Parallel(n_jobs=self.n_jobs)(\n delayed(_fit_transform_one)(trans, X, y, weight, **fit_params)\n for name, trans, weight in self._iter()\n )\n\n if not result:\n # All transformations are None\n return np.zeros((X.shape[0], 0))\n\n Xs, transformers = zip(*result)\n self._update_transformer_list(transformers)\n\n Xs = self._hstack(list(Xs))\n self._is_fitted = True\n return Xs\n\n def fit(self, X, y=None, **fit_params):\n \"\"\"Fit parameters.\"\"\"\n super(FeatureUnion, self).fit(X, y, **fit_params)\n self._is_fitted = True\n return self\n\n def transform(self, X):\n \"\"\"Transform X separately by each transformer, concatenate results.\n\n Parameters\n ----------\n X : pandas DataFrame\n Input data to be transformed.\n\n Returns\n -------\n Xt : pandas DataFrame\n hstack of results of transformations. 
sum_n_components is the\n sum of n_components (output dimension) over transformations.\n \"\"\"\n self.check_is_fitted()\n Xs = Parallel(n_jobs=self.n_jobs)(\n delayed(_transform_one)(trans, X, None, weight)\n for name, trans, weight in self._iter()\n )\n\n if not Xs:\n # All transformations are None\n return np.zeros((X.shape[0], 0))\n\n else:\n return self._hstack(list(Xs))\n\n def _hstack(self, Xs):\n \"\"\"\n Stacks X horizontally.\n\n Supports input types (X): list of\n numpy arrays, sparse arrays and DataFrames.\n \"\"\"\n if any(sparse.issparse(f) for f in Xs):\n Xs = sparse.hstack(Xs).tocsr()\n\n types = set(type(X) for X in Xs)\n if self.preserve_dataframe and (pd.Series in types or pd.DataFrame in types):\n return pd.concat(Xs, axis=1)\n\n else:\n return np.hstack(Xs)\n", "path": "sktime/series_as_features/compose/_pipeline.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport numpy as np\nimport pandas as pd\nfrom scipy import sparse\nfrom sklearn.pipeline import FeatureUnion as _FeatureUnion\n\nfrom sktime.transformations.base import _PanelToPanelTransformer\n\n__all__ = [\"FeatureUnion\"]\n__author__ = [\"Markus L\u00f6ning\"]\n\n\nclass FeatureUnion(_FeatureUnion, _PanelToPanelTransformer):\n \"\"\"Concatenates results of multiple transformer objects.\n\n This estimator applies a list of transformer objects in parallel to the\n input data, then concatenates the results. This is useful to combine\n several feature extraction mechanisms into a single transformer.\n Parameters of the transformations may be set using its name and the\n parameter name separated by a '__'. A transformer may be replaced entirely by\n setting the parameter with its name to another transformer,\n or removed by setting to 'drop' or ``None``.\n\n Parameters\n ----------\n transformer_list : list of (string, transformer) tuples\n List of transformer objects to be applied to the data. 
The first\n half of each tuple is the name of the transformer.\n n_jobs : int or None, optional (default=None)\n Number of jobs to run in parallel.\n ``None`` means 1 unless in a :obj:`joblib.parallel_backend`\n context.\n ``-1`` means using all processors.\n transformer_weights : dict, optional\n Multiplicative weights for features per transformer.\n Keys are transformer names, values the weights.\n preserve_dataframe : bool\n Save constructed dataframe.\n \"\"\"\n\n _required_parameters = [\"transformer_list\"]\n\n def __init__(\n self,\n transformer_list,\n n_jobs=None,\n transformer_weights=None,\n preserve_dataframe=True,\n ):\n self.preserve_dataframe = preserve_dataframe\n super(FeatureUnion, self).__init__(\n transformer_list, n_jobs=n_jobs, transformer_weights=transformer_weights\n )\n\n # We need to add is-fitted state when inheriting from scikit-learn\n self._is_fitted = False\n\n def fit(self, X, y=None, **fit_params):\n \"\"\"Fit parameters.\"\"\"\n super().fit(X, y, **fit_params)\n self._is_fitted = True\n return self\n\n def transform(self, X):\n \"\"\"Transform X separately by each transformer, concatenate results.\"\"\"\n self.check_is_fitted()\n return super().transform(X)\n\n def fit_transform(self, X, y, **fit_params):\n \"\"\"Transform X separately by each transformer, concatenate results.\"\"\"\n return self.fit(X, y, **fit_params).transform(X)\n\n def _hstack(self, Xs):\n \"\"\"\n Stacks X horizontally.\n\n Supports input types (X): list of\n numpy arrays, sparse arrays and DataFrames.\n \"\"\"\n if any(sparse.issparse(f) for f in Xs):\n Xs = sparse.hstack(Xs).tocsr()\n\n types = {type(X) for X in Xs}\n if self.preserve_dataframe and (pd.Series in types or pd.DataFrame in types):\n return pd.concat(Xs, axis=1)\n\n else:\n return np.hstack(Xs)\n", "path": "sktime/series_as_features/compose/_pipeline.py"}]}
| 2,988 | 868 |
gh_patches_debug_16788
|
rasdani/github-patches
|
git_diff
|
python-pillow__Pillow-5641
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Incorrect documentation for ImagePalette size parameter (and maybe not needed at all)
The documentation for the `ImagePalette` initializer in version 8.3.1 says the `palette` parameter must be "of length `size` times the number of colors in `mode`". Therefore, for an RGB image, I would expect `len(palette) == size * 3`. However, the code asserts that `len(palette) == size`, so I believe the code and documentation are inconsistent. (The same problem existed in 8.2.0 before some ImagePalette improvements were made, so this wasn't introduced with that change.)
Furthermore, it isn't clear to me that the `size` parameter is needed at all. It isn't stored on `self`, and the only place it's used in the initializer is to assert that its value is `0` or `len(palette)`, so it doesn't seem to provide any benefit. The only reason to keep it that I can think of is to maintain backwards compatibility with existing code that explicitly passes the parameter.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/PIL/ImagePalette.py`
Content:
```
1 #
2 # The Python Imaging Library.
3 # $Id$
4 #
5 # image palette object
6 #
7 # History:
8 # 1996-03-11 fl Rewritten.
9 # 1997-01-03 fl Up and running.
10 # 1997-08-23 fl Added load hack
11 # 2001-04-16 fl Fixed randint shadow bug in random()
12 #
13 # Copyright (c) 1997-2001 by Secret Labs AB
14 # Copyright (c) 1996-1997 by Fredrik Lundh
15 #
16 # See the README file for information on usage and redistribution.
17 #
18
19 import array
20
21 from . import GimpGradientFile, GimpPaletteFile, ImageColor, PaletteFile
22
23
24 class ImagePalette:
25 """
26 Color palette for palette mapped images
27
28 :param mode: The mode to use for the Palette. See:
29 :ref:`concept-modes`. Defaults to "RGB"
30 :param palette: An optional palette. If given, it must be a bytearray,
31 an array or a list of ints between 0-255. The list must be aligned
32 by channel (All R values must be contiguous in the list before G
33 and B values.) Defaults to 0 through 255 per channel.
34 :param size: An optional palette size. If given, an error is raised
35 if ``palette`` is not of equal length.
36 """
37
38 def __init__(self, mode="RGB", palette=None, size=0):
39 self.mode = mode
40 self.rawmode = None # if set, palette contains raw data
41 self.palette = palette or bytearray()
42 self.dirty = None
43 if size != 0 and size != len(self.palette):
44 raise ValueError("wrong palette size")
45
46 @property
47 def palette(self):
48 return self._palette
49
50 @palette.setter
51 def palette(self, palette):
52 self._palette = palette
53
54 mode_len = len(self.mode)
55 self.colors = {}
56 for i in range(0, len(self.palette), mode_len):
57 color = tuple(self.palette[i : i + mode_len])
58 if color in self.colors:
59 continue
60 self.colors[color] = i // mode_len
61
62 def copy(self):
63 new = ImagePalette()
64
65 new.mode = self.mode
66 new.rawmode = self.rawmode
67 if self.palette is not None:
68 new.palette = self.palette[:]
69 new.dirty = self.dirty
70
71 return new
72
73 def getdata(self):
74 """
75 Get palette contents in format suitable for the low-level
76 ``im.putpalette`` primitive.
77
78 .. warning:: This method is experimental.
79 """
80 if self.rawmode:
81 return self.rawmode, self.palette
82 return self.mode, self.tobytes()
83
84 def tobytes(self):
85 """Convert palette to bytes.
86
87 .. warning:: This method is experimental.
88 """
89 if self.rawmode:
90 raise ValueError("palette contains raw palette data")
91 if isinstance(self.palette, bytes):
92 return self.palette
93 arr = array.array("B", self.palette)
94 return arr.tobytes()
95
96 # Declare tostring as an alias for tobytes
97 tostring = tobytes
98
99 def getcolor(self, color, image=None):
100 """Given an rgb tuple, allocate palette entry.
101
102 .. warning:: This method is experimental.
103 """
104 if self.rawmode:
105 raise ValueError("palette contains raw palette data")
106 if isinstance(color, tuple):
107 if self.mode == "RGB":
108 if len(color) == 4 and color[3] == 255:
109 color = color[:3]
110 elif self.mode == "RGBA":
111 if len(color) == 3:
112 color += (255,)
113 try:
114 return self.colors[color]
115 except KeyError as e:
116 # allocate new color slot
117 if not isinstance(self.palette, bytearray):
118 self._palette = bytearray(self.palette)
119 index = len(self.palette) // 3
120 special_colors = ()
121 if image:
122 special_colors = (
123 image.info.get("background"),
124 image.info.get("transparency"),
125 )
126 while index in special_colors:
127 index += 1
128 if index >= 256:
129 if image:
130 # Search for an unused index
131 for i, count in reversed(list(enumerate(image.histogram()))):
132 if count == 0 and i not in special_colors:
133 index = i
134 break
135 if index >= 256:
136 raise ValueError("cannot allocate more than 256 colors") from e
137 self.colors[color] = index
138 if index * 3 < len(self.palette):
139 self._palette = (
140 self.palette[: index * 3]
141 + bytes(color)
142 + self.palette[index * 3 + 3 :]
143 )
144 else:
145 self._palette += bytes(color)
146 self.dirty = 1
147 return index
148 else:
149 raise ValueError(f"unknown color specifier: {repr(color)}")
150
151 def save(self, fp):
152 """Save palette to text file.
153
154 .. warning:: This method is experimental.
155 """
156 if self.rawmode:
157 raise ValueError("palette contains raw palette data")
158 if isinstance(fp, str):
159 fp = open(fp, "w")
160 fp.write("# Palette\n")
161 fp.write(f"# Mode: {self.mode}\n")
162 for i in range(256):
163 fp.write(f"{i}")
164 for j in range(i * len(self.mode), (i + 1) * len(self.mode)):
165 try:
166 fp.write(f" {self.palette[j]}")
167 except IndexError:
168 fp.write(" 0")
169 fp.write("\n")
170 fp.close()
171
172
173 # --------------------------------------------------------------------
174 # Internal
175
176
177 def raw(rawmode, data):
178 palette = ImagePalette()
179 palette.rawmode = rawmode
180 palette.palette = data
181 palette.dirty = 1
182 return palette
183
184
185 # --------------------------------------------------------------------
186 # Factories
187
188
189 def make_linear_lut(black, white):
190 lut = []
191 if black == 0:
192 for i in range(256):
193 lut.append(white * i // 255)
194 else:
195 raise NotImplementedError # FIXME
196 return lut
197
198
199 def make_gamma_lut(exp):
200 lut = []
201 for i in range(256):
202 lut.append(int(((i / 255.0) ** exp) * 255.0 + 0.5))
203 return lut
204
205
206 def negative(mode="RGB"):
207 palette = list(range(256))
208 palette.reverse()
209 return ImagePalette(mode, palette * len(mode))
210
211
212 def random(mode="RGB"):
213 from random import randint
214
215 palette = []
216 for i in range(256 * len(mode)):
217 palette.append(randint(0, 255))
218 return ImagePalette(mode, palette)
219
220
221 def sepia(white="#fff0c0"):
222 r, g, b = ImageColor.getrgb(white)
223 r = make_linear_lut(0, r)
224 g = make_linear_lut(0, g)
225 b = make_linear_lut(0, b)
226 return ImagePalette("RGB", r + g + b)
227
228
229 def wedge(mode="RGB"):
230 return ImagePalette(mode, list(range(256)) * len(mode))
231
232
233 def load(filename):
234
235 # FIXME: supports GIMP gradients only
236
237 with open(filename, "rb") as fp:
238
239 for paletteHandler in [
240 GimpPaletteFile.GimpPaletteFile,
241 GimpGradientFile.GimpGradientFile,
242 PaletteFile.PaletteFile,
243 ]:
244 try:
245 fp.seek(0)
246 lut = paletteHandler(fp).getpalette()
247 if lut:
248 break
249 except (SyntaxError, ValueError):
250 # import traceback
251 # traceback.print_exc()
252 pass
253 else:
254 raise OSError("cannot load palette")
255
256 return lut # data, rawmode
257
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/src/PIL/ImagePalette.py b/src/PIL/ImagePalette.py
--- a/src/PIL/ImagePalette.py
+++ b/src/PIL/ImagePalette.py
@@ -17,6 +17,7 @@
#
import array
+import warnings
from . import GimpGradientFile, GimpPaletteFile, ImageColor, PaletteFile
@@ -40,8 +41,14 @@
self.rawmode = None # if set, palette contains raw data
self.palette = palette or bytearray()
self.dirty = None
- if size != 0 and size != len(self.palette):
- raise ValueError("wrong palette size")
+ if size != 0:
+ warnings.warn(
+ "The size parameter is deprecated and will be removed in Pillow 10 "
+ "(2023-01-02).",
+ DeprecationWarning,
+ )
+ if size != len(self.palette):
+ raise ValueError("wrong palette size")
@property
def palette(self):
|
{"golden_diff": "diff --git a/src/PIL/ImagePalette.py b/src/PIL/ImagePalette.py\n--- a/src/PIL/ImagePalette.py\n+++ b/src/PIL/ImagePalette.py\n@@ -17,6 +17,7 @@\n #\n \n import array\n+import warnings\n \n from . import GimpGradientFile, GimpPaletteFile, ImageColor, PaletteFile\n \n@@ -40,8 +41,14 @@\n self.rawmode = None # if set, palette contains raw data\n self.palette = palette or bytearray()\n self.dirty = None\n- if size != 0 and size != len(self.palette):\n- raise ValueError(\"wrong palette size\")\n+ if size != 0:\n+ warnings.warn(\n+ \"The size parameter is deprecated and will be removed in Pillow 10 \"\n+ \"(2023-01-02).\",\n+ DeprecationWarning,\n+ )\n+ if size != len(self.palette):\n+ raise ValueError(\"wrong palette size\")\n \n @property\n def palette(self):\n", "issue": "Incorrect documentation for ImagePalette size parameter (and maybe not needed at all)\nThe documentation for the `ImagePalette` initializer in version 8.3.1 says the `palette` parameter must be \"of length `size` times the number of colors in `mode`\". Therefore, for an RGB image, I would expect `len(palette) == size * 3`. However, the code asserts that `len(palette) == size`, so I believe the code and documentation are inconsistent. (The same problem existed in 8.2.0 before some ImagePalette improvements were made, so this wasn't introduced with that change.)\r\n\r\nFurthermore, it isn't clear to me that the `size` parameter is needed at all. It isn't stored on `self`, and the only place it's used in the initializer is to assert that its value is `0` or `len(palette)`, so it doesn't seem to provide any benefit. The only reason to keep it that I can think of is to maintain backwards compatibility with existing code that explicitly passes the parameter.\n", "before_files": [{"content": "#\n# The Python Imaging Library.\n# $Id$\n#\n# image palette object\n#\n# History:\n# 1996-03-11 fl Rewritten.\n# 1997-01-03 fl Up and running.\n# 1997-08-23 fl Added load hack\n# 2001-04-16 fl Fixed randint shadow bug in random()\n#\n# Copyright (c) 1997-2001 by Secret Labs AB\n# Copyright (c) 1996-1997 by Fredrik Lundh\n#\n# See the README file for information on usage and redistribution.\n#\n\nimport array\n\nfrom . import GimpGradientFile, GimpPaletteFile, ImageColor, PaletteFile\n\n\nclass ImagePalette:\n \"\"\"\n Color palette for palette mapped images\n\n :param mode: The mode to use for the Palette. See:\n :ref:`concept-modes`. Defaults to \"RGB\"\n :param palette: An optional palette. If given, it must be a bytearray,\n an array or a list of ints between 0-255. The list must be aligned\n by channel (All R values must be contiguous in the list before G\n and B values.) Defaults to 0 through 255 per channel.\n :param size: An optional palette size. 
If given, an error is raised\n if ``palette`` is not of equal length.\n \"\"\"\n\n def __init__(self, mode=\"RGB\", palette=None, size=0):\n self.mode = mode\n self.rawmode = None # if set, palette contains raw data\n self.palette = palette or bytearray()\n self.dirty = None\n if size != 0 and size != len(self.palette):\n raise ValueError(\"wrong palette size\")\n\n @property\n def palette(self):\n return self._palette\n\n @palette.setter\n def palette(self, palette):\n self._palette = palette\n\n mode_len = len(self.mode)\n self.colors = {}\n for i in range(0, len(self.palette), mode_len):\n color = tuple(self.palette[i : i + mode_len])\n if color in self.colors:\n continue\n self.colors[color] = i // mode_len\n\n def copy(self):\n new = ImagePalette()\n\n new.mode = self.mode\n new.rawmode = self.rawmode\n if self.palette is not None:\n new.palette = self.palette[:]\n new.dirty = self.dirty\n\n return new\n\n def getdata(self):\n \"\"\"\n Get palette contents in format suitable for the low-level\n ``im.putpalette`` primitive.\n\n .. warning:: This method is experimental.\n \"\"\"\n if self.rawmode:\n return self.rawmode, self.palette\n return self.mode, self.tobytes()\n\n def tobytes(self):\n \"\"\"Convert palette to bytes.\n\n .. warning:: This method is experimental.\n \"\"\"\n if self.rawmode:\n raise ValueError(\"palette contains raw palette data\")\n if isinstance(self.palette, bytes):\n return self.palette\n arr = array.array(\"B\", self.palette)\n return arr.tobytes()\n\n # Declare tostring as an alias for tobytes\n tostring = tobytes\n\n def getcolor(self, color, image=None):\n \"\"\"Given an rgb tuple, allocate palette entry.\n\n .. warning:: This method is experimental.\n \"\"\"\n if self.rawmode:\n raise ValueError(\"palette contains raw palette data\")\n if isinstance(color, tuple):\n if self.mode == \"RGB\":\n if len(color) == 4 and color[3] == 255:\n color = color[:3]\n elif self.mode == \"RGBA\":\n if len(color) == 3:\n color += (255,)\n try:\n return self.colors[color]\n except KeyError as e:\n # allocate new color slot\n if not isinstance(self.palette, bytearray):\n self._palette = bytearray(self.palette)\n index = len(self.palette) // 3\n special_colors = ()\n if image:\n special_colors = (\n image.info.get(\"background\"),\n image.info.get(\"transparency\"),\n )\n while index in special_colors:\n index += 1\n if index >= 256:\n if image:\n # Search for an unused index\n for i, count in reversed(list(enumerate(image.histogram()))):\n if count == 0 and i not in special_colors:\n index = i\n break\n if index >= 256:\n raise ValueError(\"cannot allocate more than 256 colors\") from e\n self.colors[color] = index\n if index * 3 < len(self.palette):\n self._palette = (\n self.palette[: index * 3]\n + bytes(color)\n + self.palette[index * 3 + 3 :]\n )\n else:\n self._palette += bytes(color)\n self.dirty = 1\n return index\n else:\n raise ValueError(f\"unknown color specifier: {repr(color)}\")\n\n def save(self, fp):\n \"\"\"Save palette to text file.\n\n .. 
warning:: This method is experimental.\n \"\"\"\n if self.rawmode:\n raise ValueError(\"palette contains raw palette data\")\n if isinstance(fp, str):\n fp = open(fp, \"w\")\n fp.write(\"# Palette\\n\")\n fp.write(f\"# Mode: {self.mode}\\n\")\n for i in range(256):\n fp.write(f\"{i}\")\n for j in range(i * len(self.mode), (i + 1) * len(self.mode)):\n try:\n fp.write(f\" {self.palette[j]}\")\n except IndexError:\n fp.write(\" 0\")\n fp.write(\"\\n\")\n fp.close()\n\n\n# --------------------------------------------------------------------\n# Internal\n\n\ndef raw(rawmode, data):\n palette = ImagePalette()\n palette.rawmode = rawmode\n palette.palette = data\n palette.dirty = 1\n return palette\n\n\n# --------------------------------------------------------------------\n# Factories\n\n\ndef make_linear_lut(black, white):\n lut = []\n if black == 0:\n for i in range(256):\n lut.append(white * i // 255)\n else:\n raise NotImplementedError # FIXME\n return lut\n\n\ndef make_gamma_lut(exp):\n lut = []\n for i in range(256):\n lut.append(int(((i / 255.0) ** exp) * 255.0 + 0.5))\n return lut\n\n\ndef negative(mode=\"RGB\"):\n palette = list(range(256))\n palette.reverse()\n return ImagePalette(mode, palette * len(mode))\n\n\ndef random(mode=\"RGB\"):\n from random import randint\n\n palette = []\n for i in range(256 * len(mode)):\n palette.append(randint(0, 255))\n return ImagePalette(mode, palette)\n\n\ndef sepia(white=\"#fff0c0\"):\n r, g, b = ImageColor.getrgb(white)\n r = make_linear_lut(0, r)\n g = make_linear_lut(0, g)\n b = make_linear_lut(0, b)\n return ImagePalette(\"RGB\", r + g + b)\n\n\ndef wedge(mode=\"RGB\"):\n return ImagePalette(mode, list(range(256)) * len(mode))\n\n\ndef load(filename):\n\n # FIXME: supports GIMP gradients only\n\n with open(filename, \"rb\") as fp:\n\n for paletteHandler in [\n GimpPaletteFile.GimpPaletteFile,\n GimpGradientFile.GimpGradientFile,\n PaletteFile.PaletteFile,\n ]:\n try:\n fp.seek(0)\n lut = paletteHandler(fp).getpalette()\n if lut:\n break\n except (SyntaxError, ValueError):\n # import traceback\n # traceback.print_exc()\n pass\n else:\n raise OSError(\"cannot load palette\")\n\n return lut # data, rawmode\n", "path": "src/PIL/ImagePalette.py"}], "after_files": [{"content": "#\n# The Python Imaging Library.\n# $Id$\n#\n# image palette object\n#\n# History:\n# 1996-03-11 fl Rewritten.\n# 1997-01-03 fl Up and running.\n# 1997-08-23 fl Added load hack\n# 2001-04-16 fl Fixed randint shadow bug in random()\n#\n# Copyright (c) 1997-2001 by Secret Labs AB\n# Copyright (c) 1996-1997 by Fredrik Lundh\n#\n# See the README file for information on usage and redistribution.\n#\n\nimport array\nimport warnings\n\nfrom . import GimpGradientFile, GimpPaletteFile, ImageColor, PaletteFile\n\n\nclass ImagePalette:\n \"\"\"\n Color palette for palette mapped images\n\n :param mode: The mode to use for the Palette. See:\n :ref:`concept-modes`. Defaults to \"RGB\"\n :param palette: An optional palette. If given, it must be a bytearray,\n an array or a list of ints between 0-255. The list must be aligned\n by channel (All R values must be contiguous in the list before G\n and B values.) Defaults to 0 through 255 per channel.\n :param size: An optional palette size. 
If given, an error is raised\n if ``palette`` is not of equal length.\n \"\"\"\n\n def __init__(self, mode=\"RGB\", palette=None, size=0):\n self.mode = mode\n self.rawmode = None # if set, palette contains raw data\n self.palette = palette or bytearray()\n self.dirty = None\n if size != 0:\n warnings.warn(\n \"The size parameter is deprecated and will be removed in Pillow 10 \"\n \"(2023-01-02).\",\n DeprecationWarning,\n )\n if size != len(self.palette):\n raise ValueError(\"wrong palette size\")\n\n @property\n def palette(self):\n return self._palette\n\n @palette.setter\n def palette(self, palette):\n self._palette = palette\n\n mode_len = len(self.mode)\n self.colors = {}\n for i in range(0, len(self.palette), mode_len):\n color = tuple(self.palette[i : i + mode_len])\n if color in self.colors:\n continue\n self.colors[color] = i // mode_len\n\n def copy(self):\n new = ImagePalette()\n\n new.mode = self.mode\n new.rawmode = self.rawmode\n if self.palette is not None:\n new.palette = self.palette[:]\n new.dirty = self.dirty\n\n return new\n\n def getdata(self):\n \"\"\"\n Get palette contents in format suitable for the low-level\n ``im.putpalette`` primitive.\n\n .. warning:: This method is experimental.\n \"\"\"\n if self.rawmode:\n return self.rawmode, self.palette\n return self.mode, self.tobytes()\n\n def tobytes(self):\n \"\"\"Convert palette to bytes.\n\n .. warning:: This method is experimental.\n \"\"\"\n if self.rawmode:\n raise ValueError(\"palette contains raw palette data\")\n if isinstance(self.palette, bytes):\n return self.palette\n arr = array.array(\"B\", self.palette)\n return arr.tobytes()\n\n # Declare tostring as an alias for tobytes\n tostring = tobytes\n\n def getcolor(self, color, image=None):\n \"\"\"Given an rgb tuple, allocate palette entry.\n\n .. warning:: This method is experimental.\n \"\"\"\n if self.rawmode:\n raise ValueError(\"palette contains raw palette data\")\n if isinstance(color, tuple):\n if self.mode == \"RGB\":\n if len(color) == 4 and color[3] == 255:\n color = color[:3]\n elif self.mode == \"RGBA\":\n if len(color) == 3:\n color += (255,)\n try:\n return self.colors[color]\n except KeyError as e:\n # allocate new color slot\n if not isinstance(self.palette, bytearray):\n self._palette = bytearray(self.palette)\n index = len(self.palette) // 3\n special_colors = ()\n if image:\n special_colors = (\n image.info.get(\"background\"),\n image.info.get(\"transparency\"),\n )\n while index in special_colors:\n index += 1\n if index >= 256:\n if image:\n # Search for an unused index\n for i, count in reversed(list(enumerate(image.histogram()))):\n if count == 0 and i not in special_colors:\n index = i\n break\n if index >= 256:\n raise ValueError(\"cannot allocate more than 256 colors\") from e\n self.colors[color] = index\n if index * 3 < len(self.palette):\n self._palette = (\n self.palette[: index * 3]\n + bytes(color)\n + self.palette[index * 3 + 3 :]\n )\n else:\n self._palette += bytes(color)\n self.dirty = 1\n return index\n else:\n raise ValueError(f\"unknown color specifier: {repr(color)}\")\n\n def save(self, fp):\n \"\"\"Save palette to text file.\n\n .. 
warning:: This method is experimental.\n \"\"\"\n if self.rawmode:\n raise ValueError(\"palette contains raw palette data\")\n if isinstance(fp, str):\n fp = open(fp, \"w\")\n fp.write(\"# Palette\\n\")\n fp.write(f\"# Mode: {self.mode}\\n\")\n for i in range(256):\n fp.write(f\"{i}\")\n for j in range(i * len(self.mode), (i + 1) * len(self.mode)):\n try:\n fp.write(f\" {self.palette[j]}\")\n except IndexError:\n fp.write(\" 0\")\n fp.write(\"\\n\")\n fp.close()\n\n\n# --------------------------------------------------------------------\n# Internal\n\n\ndef raw(rawmode, data):\n palette = ImagePalette()\n palette.rawmode = rawmode\n palette.palette = data\n palette.dirty = 1\n return palette\n\n\n# --------------------------------------------------------------------\n# Factories\n\n\ndef make_linear_lut(black, white):\n lut = []\n if black == 0:\n for i in range(256):\n lut.append(white * i // 255)\n else:\n raise NotImplementedError # FIXME\n return lut\n\n\ndef make_gamma_lut(exp):\n lut = []\n for i in range(256):\n lut.append(int(((i / 255.0) ** exp) * 255.0 + 0.5))\n return lut\n\n\ndef negative(mode=\"RGB\"):\n palette = list(range(256))\n palette.reverse()\n return ImagePalette(mode, palette * len(mode))\n\n\ndef random(mode=\"RGB\"):\n from random import randint\n\n palette = []\n for i in range(256 * len(mode)):\n palette.append(randint(0, 255))\n return ImagePalette(mode, palette)\n\n\ndef sepia(white=\"#fff0c0\"):\n r, g, b = ImageColor.getrgb(white)\n r = make_linear_lut(0, r)\n g = make_linear_lut(0, g)\n b = make_linear_lut(0, b)\n return ImagePalette(\"RGB\", r + g + b)\n\n\ndef wedge(mode=\"RGB\"):\n return ImagePalette(mode, list(range(256)) * len(mode))\n\n\ndef load(filename):\n\n # FIXME: supports GIMP gradients only\n\n with open(filename, \"rb\") as fp:\n\n for paletteHandler in [\n GimpPaletteFile.GimpPaletteFile,\n GimpGradientFile.GimpGradientFile,\n PaletteFile.PaletteFile,\n ]:\n try:\n fp.seek(0)\n lut = paletteHandler(fp).getpalette()\n if lut:\n break\n except (SyntaxError, ValueError):\n # import traceback\n # traceback.print_exc()\n pass\n else:\n raise OSError(\"cannot load palette\")\n\n return lut # data, rawmode\n", "path": "src/PIL/ImagePalette.py"}]}
| 2,892 | 226 |
gh_patches_debug_14454
|
rasdani/github-patches
|
git_diff
|
microsoft__onnxscript-1472
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Optimizer fails on shape inference error over native_batch_norm
The optimizer fails for the attach model (so dort fails as well). It was obtained with the latest onnx, onnxscript and torch nightly.
[dump3bug.zip](https://github.com/microsoft/onnxscript/files/15106272/dump3bug.zip)
To replicate:
```python
import onnx
from onnxscript import optimizer
onx = onnx.load(model)
optimized = optimizer.optimize(onx)
```
It is coming from the following graph module.
```
graph():
%primals_7 : [num_users=1] = placeholder[target=primals_7]
%primals_1 : [num_users=1] = placeholder[target=primals_1]
%primals_2 : [num_users=1] = placeholder[target=primals_2]
%primals_3 : [num_users=1] = placeholder[target=primals_3]
%primals_4 : [num_users=1] = placeholder[target=primals_4]
%primals_5 : [num_users=1] = placeholder[target=primals_5]
%add : [num_users=2] = call_function[target=torch.ops.aten.add.Tensor](args = (%primals_7, %primals_1), kwargs = {})
%_native_batch_norm_legit_no_training : [num_users=1] = call_function[target=torch.ops.aten._native_batch_norm_legit_no_training.default](args = (%add, %primals_2, %primals_3, %primals_4, %primals_5, 0.1, 1e-05), kwargs = {})
%getitem : [num_users=1] = call_function[target=operator.getitem](args = (%_native_batch_norm_legit_no_training, 0), kwargs = {})
return (add, getitem)
```
Error:
```
Traceback (most recent call last):
File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "check_model.py", line 43, in <module>
optimized = optimizer.optimize(onx)
File "onnxscript/onnxscript/optimizer/__init__.py", line 61, in optimize
model = onnx.shape_inference.infer_shapes(
File "onnx/onnx/shape_inference.py", line 46, in infer_shapes
inferred_model_str = C.infer_shapes(
onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:_aten_native_batch_norm_inference_onnx, node name: _aten_native_batch_norm_inference_onnx_2): [ShapeInferenceError] Inferred shape and existing shape differ in dimension 0: (2) vs (0)
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `onnxscript/optimizer/__init__.py`
Content:
```
1 import logging
2 from typing import Any
3
4 import onnx
5 import onnx.shape_inference
6
7 from onnxscript import rewriter
8 from onnxscript.optimizer.constant_folding import fold_constants
9 from onnxscript.optimizer.copy_propagation import (
10 do_copy_propagation,
11 do_sequence_simplification,
12 )
13 from onnxscript.optimizer.remove_unused import remove_unused_nodes
14 from onnxscript.optimizer.remove_unused_function import remove_unused_functions
15 from onnxscript.optimizer.simple_function_folding import (
16 inline_functions_with_unused_outputs,
17 inline_simple_functions,
18 )
19 from onnxscript.rewriter import (
20 broadcast_to_matmul,
21 cast_constant_of_shape,
22 gemm_to_matmul_add,
23 no_op,
24 )
25
26 logger = logging.getLogger(__name__)
27
28
29 def optimize(
30 model: onnx.ModelProto,
31 num_iterations: int = 2,
32 *,
33 onnx_shape_inference: bool = True,
34 stop_if_no_change: bool = True,
35 external_data_folder: str = "",
36 **kwargs: Any,
37 ) -> onnx.ModelProto:
38 """Optimize the model. Perform optimizations and clean-ups such as constant folding, dead code elimination, etc.
39
40 Args:
41 model (onnx.ModelProto): The model to optimize.
42 num_iterations (int, optional): Number of iterations to perform.
43 onnx_shape_inference (bool, optional): Whether to perform onnx shape inference on the model.
44 Set this to False to turn off onnx shape inference, and rely on model carried shapes and types.
45 This is useful for models produced by PyTorch 2.2+ dynamo onnx exporter, where the model carries
46 the symbolic shapes recorded from dynamo tracing.
47 stop_if_no_change (bool, optional): Whether to stop if no change is detected.
48 external_data_folder (str, optional): The folder to store external data.
49 **kwargs: Additional keyword arguments. For BC purposes.
50 """
51 if kwargs.pop("function_aware_folding", None) is not None:
52 logger.warning(
53 "'function_aware_folding' is deprecated. 'optimize' now supports both fully inlined models and models with functions. "
54 "To achieve the same behavior as 'function_aware_folding=True' before, set 'onnx_shape_inference=False'. "
55 "This would turn off incremental onnx shape inference and rely on model carried shapes and types. "
56 "See 'onnx_shape_inference' for more details."
57 )
58 for _ in range(num_iterations):
59 if onnx_shape_inference:
60 if model.ByteSize() < 1024 * 1024 * 1024 * 2:
61 model = onnx.shape_inference.infer_shapes(
62 model, check_type=True, strict_mode=True, data_prop=True
63 )
64 else:
65 logger.warning(
66 "The model size is too large for full model shape inference. "
67 "Skipping this step."
68 )
69
70 inline_simple_functions(model)
71 modified = fold_constants(
72 model, external_data_folder, onnx_shape_inference=onnx_shape_inference
73 )
74
75 remove_unused_nodes(model)
76 inline_simple_functions(model)
77 remove_unused_functions(model)
78 inline_functions_with_unused_outputs(model)
79 # NOTE: This is general rewrite rules
80 model = rewriter.rewrite(
81 model,
82 pattern_rewrite_rules=[
83 *no_op.rules.rules, # TODO: merge this rule into constant folding?
84 *broadcast_to_matmul.rules.rules,
85 gemm_to_matmul_add.rule,
86 *cast_constant_of_shape.rules.rules,
87 ],
88 )
89 if stop_if_no_change and not modified:
90 logger.debug("Stopping after %d iterations.", _)
91 break
92
93 for node in model.graph.node:
94 logger.debug("Node %s::%s name %s.", node.domain, node.op_type, node.name)
95
96 for function in model.functions:
97 for node in function.node:
98 logger.debug(
99 "Function %s::%s node %s::%s name %s.",
100 function.domain,
101 function.name,
102 node.domain,
103 node.op_type,
104 node.name,
105 )
106
107 # do_sequence_simplification(model)
108 return model
109
110
111 __all__ = [
112 "fold_constants",
113 "remove_unused_nodes",
114 "optimize",
115 "do_copy_propagation",
116 "do_sequence_simplification",
117 ]
118
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/onnxscript/optimizer/__init__.py b/onnxscript/optimizer/__init__.py
--- a/onnxscript/optimizer/__init__.py
+++ b/onnxscript/optimizer/__init__.py
@@ -58,8 +58,12 @@
for _ in range(num_iterations):
if onnx_shape_inference:
if model.ByteSize() < 1024 * 1024 * 1024 * 2:
+ # NOTE: strict mode is disabled because it crashes on the models
+ # that have different shapes inferred from the model carried shapes.
+ # The case can be found in:
+ # https://github.com/microsoft/onnxscript/issues/1443
model = onnx.shape_inference.infer_shapes(
- model, check_type=True, strict_mode=True, data_prop=True
+ model, check_type=True, strict_mode=False, data_prop=True
)
else:
logger.warning(
|
{"golden_diff": "diff --git a/onnxscript/optimizer/__init__.py b/onnxscript/optimizer/__init__.py\n--- a/onnxscript/optimizer/__init__.py\n+++ b/onnxscript/optimizer/__init__.py\n@@ -58,8 +58,12 @@\n for _ in range(num_iterations):\n if onnx_shape_inference:\n if model.ByteSize() < 1024 * 1024 * 1024 * 2:\n+ # NOTE: strict mode is disabled because it crashes on the models\n+ # that have different shapes inferred from the model carried shapes.\n+ # The case can be found in:\n+ # https://github.com/microsoft/onnxscript/issues/1443\n model = onnx.shape_inference.infer_shapes(\n- model, check_type=True, strict_mode=True, data_prop=True\n+ model, check_type=True, strict_mode=False, data_prop=True\n )\n else:\n logger.warning(\n", "issue": "Optimizer fails on shape inference error over native_batch_norm\nThe optimizer fails for the attach model (so dort fails as well). It was obtained with the latest onnx, onnxscript and torch nightly.\r\n\r\n[dump3bug.zip](https://github.com/microsoft/onnxscript/files/15106272/dump3bug.zip)\r\n\r\nTo replicate:\r\n\r\n```python\r\nimport onnx\r\nfrom onnxscript import optimizer\r\nonx = onnx.load(model)\r\noptimized = optimizer.optimize(onx)\r\n```\r\n\r\nIt is coming from the following graph module.\r\n\r\n```\r\ngraph():\r\n %primals_7 : [num_users=1] = placeholder[target=primals_7]\r\n %primals_1 : [num_users=1] = placeholder[target=primals_1]\r\n %primals_2 : [num_users=1] = placeholder[target=primals_2]\r\n %primals_3 : [num_users=1] = placeholder[target=primals_3]\r\n %primals_4 : [num_users=1] = placeholder[target=primals_4]\r\n %primals_5 : [num_users=1] = placeholder[target=primals_5]\r\n %add : [num_users=2] = call_function[target=torch.ops.aten.add.Tensor](args = (%primals_7, %primals_1), kwargs = {})\r\n %_native_batch_norm_legit_no_training : [num_users=1] = call_function[target=torch.ops.aten._native_batch_norm_legit_no_training.default](args = (%add, %primals_2, %primals_3, %primals_4, %primals_5, 0.1, 1e-05), kwargs = {})\r\n %getitem : [num_users=1] = call_function[target=operator.getitem](args = (%_native_batch_norm_legit_no_training, 0), kwargs = {})\r\n return (add, getitem)\r\n```\r\n\r\nError:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.10/runpy.py\", line 196, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"/usr/lib/python3.10/runpy.py\", line 86, in _run_code\r\n exec(code, run_globals)\r\n File \"check_model.py\", line 43, in <module>\r\n optimized = optimizer.optimize(onx)\r\n File \"onnxscript/onnxscript/optimizer/__init__.py\", line 61, in optimize\r\n model = onnx.shape_inference.infer_shapes(\r\n File \"onnx/onnx/shape_inference.py\", line 46, in infer_shapes\r\n inferred_model_str = C.infer_shapes(\r\nonnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:_aten_native_batch_norm_inference_onnx, node name: _aten_native_batch_norm_inference_onnx_2): [ShapeInferenceError] Inferred shape and existing shape differ in dimension 0: (2) vs (0)\r\n```\n", "before_files": [{"content": "import logging\nfrom typing import Any\n\nimport onnx\nimport onnx.shape_inference\n\nfrom onnxscript import rewriter\nfrom onnxscript.optimizer.constant_folding import fold_constants\nfrom onnxscript.optimizer.copy_propagation import (\n do_copy_propagation,\n do_sequence_simplification,\n)\nfrom onnxscript.optimizer.remove_unused import remove_unused_nodes\nfrom onnxscript.optimizer.remove_unused_function import remove_unused_functions\nfrom 
onnxscript.optimizer.simple_function_folding import (\n inline_functions_with_unused_outputs,\n inline_simple_functions,\n)\nfrom onnxscript.rewriter import (\n broadcast_to_matmul,\n cast_constant_of_shape,\n gemm_to_matmul_add,\n no_op,\n)\n\nlogger = logging.getLogger(__name__)\n\n\ndef optimize(\n model: onnx.ModelProto,\n num_iterations: int = 2,\n *,\n onnx_shape_inference: bool = True,\n stop_if_no_change: bool = True,\n external_data_folder: str = \"\",\n **kwargs: Any,\n) -> onnx.ModelProto:\n \"\"\"Optimize the model. Perform optimizations and clean-ups such as constant folding, dead code elimination, etc.\n\n Args:\n model (onnx.ModelProto): The model to optimize.\n num_iterations (int, optional): Number of iterations to perform.\n onnx_shape_inference (bool, optional): Whether to perform onnx shape inference on the model.\n Set this to False to turn off onnx shape inference, and rely on model carried shapes and types.\n This is useful for models produced by PyTorch 2.2+ dynamo onnx exporter, where the model carries\n the symbolic shapes recorded from dynamo tracing.\n stop_if_no_change (bool, optional): Whether to stop if no change is detected.\n external_data_folder (str, optional): The folder to store external data.\n **kwargs: Additional keyword arguments. For BC purposes.\n \"\"\"\n if kwargs.pop(\"function_aware_folding\", None) is not None:\n logger.warning(\n \"'function_aware_folding' is deprecated. 'optimize' now supports both fully inlined models and models with functions. \"\n \"To achieve the same behavior as 'function_aware_folding=True' before, set 'onnx_shape_inference=False'. \"\n \"This would turn off incremental onnx shape inference and rely on model carried shapes and types. \"\n \"See 'onnx_shape_inference' for more details.\"\n )\n for _ in range(num_iterations):\n if onnx_shape_inference:\n if model.ByteSize() < 1024 * 1024 * 1024 * 2:\n model = onnx.shape_inference.infer_shapes(\n model, check_type=True, strict_mode=True, data_prop=True\n )\n else:\n logger.warning(\n \"The model size is too large for full model shape inference. 
\"\n \"Skipping this step.\"\n )\n\n inline_simple_functions(model)\n modified = fold_constants(\n model, external_data_folder, onnx_shape_inference=onnx_shape_inference\n )\n\n remove_unused_nodes(model)\n inline_simple_functions(model)\n remove_unused_functions(model)\n inline_functions_with_unused_outputs(model)\n # NOTE: This is general rewrite rules\n model = rewriter.rewrite(\n model,\n pattern_rewrite_rules=[\n *no_op.rules.rules, # TODO: merge this rule into constant folding?\n *broadcast_to_matmul.rules.rules,\n gemm_to_matmul_add.rule,\n *cast_constant_of_shape.rules.rules,\n ],\n )\n if stop_if_no_change and not modified:\n logger.debug(\"Stopping after %d iterations.\", _)\n break\n\n for node in model.graph.node:\n logger.debug(\"Node %s::%s name %s.\", node.domain, node.op_type, node.name)\n\n for function in model.functions:\n for node in function.node:\n logger.debug(\n \"Function %s::%s node %s::%s name %s.\",\n function.domain,\n function.name,\n node.domain,\n node.op_type,\n node.name,\n )\n\n # do_sequence_simplification(model)\n return model\n\n\n__all__ = [\n \"fold_constants\",\n \"remove_unused_nodes\",\n \"optimize\",\n \"do_copy_propagation\",\n \"do_sequence_simplification\",\n]\n", "path": "onnxscript/optimizer/__init__.py"}], "after_files": [{"content": "import logging\nfrom typing import Any\n\nimport onnx\nimport onnx.shape_inference\n\nfrom onnxscript import rewriter\nfrom onnxscript.optimizer.constant_folding import fold_constants\nfrom onnxscript.optimizer.copy_propagation import (\n do_copy_propagation,\n do_sequence_simplification,\n)\nfrom onnxscript.optimizer.remove_unused import remove_unused_nodes\nfrom onnxscript.optimizer.remove_unused_function import remove_unused_functions\nfrom onnxscript.optimizer.simple_function_folding import (\n inline_functions_with_unused_outputs,\n inline_simple_functions,\n)\nfrom onnxscript.rewriter import (\n broadcast_to_matmul,\n cast_constant_of_shape,\n gemm_to_matmul_add,\n no_op,\n)\n\nlogger = logging.getLogger(__name__)\n\n\ndef optimize(\n model: onnx.ModelProto,\n num_iterations: int = 2,\n *,\n onnx_shape_inference: bool = True,\n stop_if_no_change: bool = True,\n external_data_folder: str = \"\",\n **kwargs: Any,\n) -> onnx.ModelProto:\n \"\"\"Optimize the model. Perform optimizations and clean-ups such as constant folding, dead code elimination, etc.\n\n Args:\n model (onnx.ModelProto): The model to optimize.\n num_iterations (int, optional): Number of iterations to perform.\n onnx_shape_inference (bool, optional): Whether to perform onnx shape inference on the model.\n Set this to False to turn off onnx shape inference, and rely on model carried shapes and types.\n This is useful for models produced by PyTorch 2.2+ dynamo onnx exporter, where the model carries\n the symbolic shapes recorded from dynamo tracing.\n stop_if_no_change (bool, optional): Whether to stop if no change is detected.\n external_data_folder (str, optional): The folder to store external data.\n **kwargs: Additional keyword arguments. For BC purposes.\n \"\"\"\n if kwargs.pop(\"function_aware_folding\", None) is not None:\n logger.warning(\n \"'function_aware_folding' is deprecated. 'optimize' now supports both fully inlined models and models with functions. \"\n \"To achieve the same behavior as 'function_aware_folding=True' before, set 'onnx_shape_inference=False'. \"\n \"This would turn off incremental onnx shape inference and rely on model carried shapes and types. 
\"\n \"See 'onnx_shape_inference' for more details.\"\n )\n for _ in range(num_iterations):\n if onnx_shape_inference:\n if model.ByteSize() < 1024 * 1024 * 1024 * 2:\n # NOTE: strict mode is disabled because it crashes on the models\n # that have different shapes inferred from the model carried shapes.\n # The case can be found in:\n # https://github.com/microsoft/onnxscript/issues/1443\n model = onnx.shape_inference.infer_shapes(\n model, check_type=True, strict_mode=False, data_prop=True\n )\n else:\n logger.warning(\n \"The model size is too large for full model shape inference. \"\n \"Skipping this step.\"\n )\n\n inline_simple_functions(model)\n modified = fold_constants(\n model, external_data_folder, onnx_shape_inference=onnx_shape_inference\n )\n\n remove_unused_nodes(model)\n inline_simple_functions(model)\n remove_unused_functions(model)\n inline_functions_with_unused_outputs(model)\n # NOTE: This is general rewrite rules\n model = rewriter.rewrite(\n model,\n pattern_rewrite_rules=[\n *no_op.rules.rules, # TODO: merge this rule into constant folding?\n *broadcast_to_matmul.rules.rules,\n gemm_to_matmul_add.rule,\n *cast_constant_of_shape.rules.rules,\n ],\n )\n if stop_if_no_change and not modified:\n logger.debug(\"Stopping after %d iterations.\", _)\n break\n\n for node in model.graph.node:\n logger.debug(\"Node %s::%s name %s.\", node.domain, node.op_type, node.name)\n\n for function in model.functions:\n for node in function.node:\n logger.debug(\n \"Function %s::%s node %s::%s name %s.\",\n function.domain,\n function.name,\n node.domain,\n node.op_type,\n node.name,\n )\n\n # do_sequence_simplification(model)\n return model\n\n\n__all__ = [\n \"fold_constants\",\n \"remove_unused_nodes\",\n \"optimize\",\n \"do_copy_propagation\",\n \"do_sequence_simplification\",\n]\n", "path": "onnxscript/optimizer/__init__.py"}]}
| 2,109 | 217 |
gh_patches_debug_25819
|
rasdani/github-patches
|
git_diff
|
yt-dlp__yt-dlp-4312
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[bigo] Extractor returning invalid parameters
### Checklist
- [X] I'm reporting a broken site
- [X] I've verified that I'm running yt-dlp version **2022.06.22.1** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
_No response_
### Description
As of about 3 weeks ago, I now receive the following error on all live streams: `Bigo says: paramters invalid (code 1)`
### Verbose log
```shell
$ yt-dlp -vU -g https://www.bigo.tv/841947363
[debug] Command-line config: ['-vU', '-g', 'https://www.bigo.tv/841947363']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version 2022.06.22.1 [a86e01e]
[debug] Python version 3.10.4 (CPython 64bit) - macOS-12.4-arm64-arm-64bit
[debug] Checking exe version: ffmpeg -bsfs
[debug] Checking exe version: ffprobe -bsfs
[debug] exe versions: ffmpeg 5.0.1 (setts), ffprobe 5.0.1
[debug] Optional libraries: Cryptodome-3.14.1, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
[debug] Proxy map: {}
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: 2022.06.22.1, Current version: 2022.06.22.1
yt-dlp is up to date (2022.06.22.1)
[debug] [Bigo] Extracting URL: https://www.bigo.tv/841947363
[Bigo] 841947363: Downloading JSON metadata
ERROR: [Bigo] 841947363: Bigo says: paramters invalid (code 1)
File "/opt/homebrew/Cellar/yt-dlp/2022.6.22.1/libexec/lib/python3.10/site-packages/yt_dlp/extractor/common.py", line 647, in extract
ie_result = self._real_extract(url)
File "/opt/homebrew/Cellar/yt-dlp/2022.6.22.1/libexec/lib/python3.10/site-packages/yt_dlp/extractor/bigo.py", line 37, in _real_extract
raise ExtractorError(
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `yt_dlp/extractor/bigo.py`
Content:
```
1 from .common import InfoExtractor
2 from ..utils import ExtractorError, urlencode_postdata
3
4
5 class BigoIE(InfoExtractor):
6 _VALID_URL = r'https?://(?:www\.)?bigo\.tv/(?:[a-z]{2,}/)?(?P<id>[^/]+)'
7
8 _TESTS = [{
9 'url': 'https://www.bigo.tv/ja/221338632',
10 'info_dict': {
11 'id': '6576287577575737440',
12 'title': '土よ〜💁♂️ 休憩室/REST room',
13 'thumbnail': r're:https?://.+',
14 'uploader': '✨Shin💫',
15 'uploader_id': '221338632',
16 'is_live': True,
17 },
18 'skip': 'livestream',
19 }, {
20 'url': 'https://www.bigo.tv/th/Tarlerm1304',
21 'only_matching': True,
22 }, {
23 'url': 'https://bigo.tv/115976881',
24 'only_matching': True,
25 }]
26
27 def _real_extract(self, url):
28 user_id = self._match_id(url)
29
30 info_raw = self._download_json(
31 'https://bigo.tv/studio/getInternalStudioInfo',
32 user_id, data=urlencode_postdata({'siteId': user_id}))
33
34 if not isinstance(info_raw, dict):
35 raise ExtractorError('Received invalid JSON data')
36 if info_raw.get('code'):
37 raise ExtractorError(
38 'Bigo says: %s (code %s)' % (info_raw.get('msg'), info_raw.get('code')), expected=True)
39 info = info_raw.get('data') or {}
40
41 if not info.get('alive'):
42 raise ExtractorError('This user is offline.', expected=True)
43
44 return {
45 'id': info.get('roomId') or user_id,
46 'title': info.get('roomTopic') or info.get('nick_name') or user_id,
47 'formats': [{
48 'url': info.get('hls_src'),
49 'ext': 'mp4',
50 'protocol': 'm3u8',
51 }],
52 'thumbnail': info.get('snapshot'),
53 'uploader': info.get('nick_name'),
54 'uploader_id': user_id,
55 'is_live': True,
56 }
57
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/yt_dlp/extractor/bigo.py b/yt_dlp/extractor/bigo.py
--- a/yt_dlp/extractor/bigo.py
+++ b/yt_dlp/extractor/bigo.py
@@ -28,7 +28,7 @@
user_id = self._match_id(url)
info_raw = self._download_json(
- 'https://bigo.tv/studio/getInternalStudioInfo',
+ 'https://ta.bigo.tv/official_website/studio/getInternalStudioInfo',
user_id, data=urlencode_postdata({'siteId': user_id}))
if not isinstance(info_raw, dict):
@@ -41,14 +41,14 @@
if not info.get('alive'):
raise ExtractorError('This user is offline.', expected=True)
+ formats, subs = self._extract_m3u8_formats_and_subtitles(
+ info.get('hls_src'), user_id, 'mp4', 'm3u8')
+
return {
'id': info.get('roomId') or user_id,
'title': info.get('roomTopic') or info.get('nick_name') or user_id,
- 'formats': [{
- 'url': info.get('hls_src'),
- 'ext': 'mp4',
- 'protocol': 'm3u8',
- }],
+ 'formats': formats,
+ 'subtitles': subs,
'thumbnail': info.get('snapshot'),
'uploader': info.get('nick_name'),
'uploader_id': user_id,
|
{"golden_diff": "diff --git a/yt_dlp/extractor/bigo.py b/yt_dlp/extractor/bigo.py\n--- a/yt_dlp/extractor/bigo.py\n+++ b/yt_dlp/extractor/bigo.py\n@@ -28,7 +28,7 @@\n user_id = self._match_id(url)\n \n info_raw = self._download_json(\n- 'https://bigo.tv/studio/getInternalStudioInfo',\n+ 'https://ta.bigo.tv/official_website/studio/getInternalStudioInfo',\n user_id, data=urlencode_postdata({'siteId': user_id}))\n \n if not isinstance(info_raw, dict):\n@@ -41,14 +41,14 @@\n if not info.get('alive'):\n raise ExtractorError('This user is offline.', expected=True)\n \n+ formats, subs = self._extract_m3u8_formats_and_subtitles(\n+ info.get('hls_src'), user_id, 'mp4', 'm3u8')\n+\n return {\n 'id': info.get('roomId') or user_id,\n 'title': info.get('roomTopic') or info.get('nick_name') or user_id,\n- 'formats': [{\n- 'url': info.get('hls_src'),\n- 'ext': 'mp4',\n- 'protocol': 'm3u8',\n- }],\n+ 'formats': formats,\n+ 'subtitles': subs,\n 'thumbnail': info.get('snapshot'),\n 'uploader': info.get('nick_name'),\n 'uploader_id': user_id,\n", "issue": "[bigo] Extractor returning invalid parameters\n### Checklist\n\n- [X] I'm reporting a broken site\n- [X] I've verified that I'm running yt-dlp version **2022.06.22.1** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)\n- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. 
DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required\n\n### Region\n\n_No response_\n\n### Description\n\nAs of about 3 weeks ago, I now receive the following error on all live streams: `Bigo says: paramters invalid (code 1)`\n\n### Verbose log\n\n```shell\n$ yt-dlp -vU -g https://www.bigo.tv/841947363\r\n[debug] Command-line config: ['-vU', '-g', 'https://www.bigo.tv/841947363']\r\n[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version 2022.06.22.1 [a86e01e]\r\n[debug] Python version 3.10.4 (CPython 64bit) - macOS-12.4-arm64-arm-64bit\r\n[debug] Checking exe version: ffmpeg -bsfs\r\n[debug] Checking exe version: ffprobe -bsfs\r\n[debug] exe versions: ffmpeg 5.0.1 (setts), ffprobe 5.0.1\r\n[debug] Optional libraries: Cryptodome-3.14.1, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3\r\n[debug] Proxy map: {}\r\n[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest\r\nLatest version: 2022.06.22.1, Current version: 2022.06.22.1\r\nyt-dlp is up to date (2022.06.22.1)\r\n[debug] [Bigo] Extracting URL: https://www.bigo.tv/841947363\r\n[Bigo] 841947363: Downloading JSON metadata\r\nERROR: [Bigo] 841947363: Bigo says: paramters invalid (code 1)\r\n File \"/opt/homebrew/Cellar/yt-dlp/2022.6.22.1/libexec/lib/python3.10/site-packages/yt_dlp/extractor/common.py\", line 647, in extract\r\n ie_result = self._real_extract(url)\r\n File \"/opt/homebrew/Cellar/yt-dlp/2022.6.22.1/libexec/lib/python3.10/site-packages/yt_dlp/extractor/bigo.py\", line 37, in _real_extract\r\n raise ExtractorError(\n```\n\n", "before_files": [{"content": "from .common import InfoExtractor\nfrom ..utils import ExtractorError, urlencode_postdata\n\n\nclass BigoIE(InfoExtractor):\n _VALID_URL = r'https?://(?:www\\.)?bigo\\.tv/(?:[a-z]{2,}/)?(?P<id>[^/]+)'\n\n _TESTS = [{\n 'url': 'https://www.bigo.tv/ja/221338632',\n 'info_dict': {\n 'id': '6576287577575737440',\n 'title': '\u571f\u3088\u301c\ud83d\udc81\u200d\u2642\ufe0f \u4f11\u61a9\u5ba4/REST room',\n 'thumbnail': r're:https?://.+',\n 'uploader': '\u2728Shin\ud83d\udcab',\n 'uploader_id': '221338632',\n 'is_live': True,\n },\n 'skip': 'livestream',\n }, {\n 'url': 'https://www.bigo.tv/th/Tarlerm1304',\n 'only_matching': True,\n }, {\n 'url': 'https://bigo.tv/115976881',\n 'only_matching': True,\n }]\n\n def _real_extract(self, url):\n user_id = self._match_id(url)\n\n info_raw = self._download_json(\n 'https://bigo.tv/studio/getInternalStudioInfo',\n user_id, data=urlencode_postdata({'siteId': user_id}))\n\n if not isinstance(info_raw, dict):\n raise ExtractorError('Received invalid JSON data')\n if info_raw.get('code'):\n raise ExtractorError(\n 'Bigo says: %s (code %s)' % (info_raw.get('msg'), info_raw.get('code')), expected=True)\n info = info_raw.get('data') or {}\n\n if not info.get('alive'):\n raise ExtractorError('This user is offline.', expected=True)\n\n return {\n 'id': info.get('roomId') or user_id,\n 'title': info.get('roomTopic') or info.get('nick_name') or user_id,\n 'formats': [{\n 'url': info.get('hls_src'),\n 'ext': 'mp4',\n 'protocol': 'm3u8',\n }],\n 'thumbnail': info.get('snapshot'),\n 'uploader': info.get('nick_name'),\n 
'uploader_id': user_id,\n 'is_live': True,\n }\n", "path": "yt_dlp/extractor/bigo.py"}], "after_files": [{"content": "from .common import InfoExtractor\nfrom ..utils import ExtractorError, urlencode_postdata\n\n\nclass BigoIE(InfoExtractor):\n _VALID_URL = r'https?://(?:www\\.)?bigo\\.tv/(?:[a-z]{2,}/)?(?P<id>[^/]+)'\n\n _TESTS = [{\n 'url': 'https://www.bigo.tv/ja/221338632',\n 'info_dict': {\n 'id': '6576287577575737440',\n 'title': '\u571f\u3088\u301c\ud83d\udc81\u200d\u2642\ufe0f \u4f11\u61a9\u5ba4/REST room',\n 'thumbnail': r're:https?://.+',\n 'uploader': '\u2728Shin\ud83d\udcab',\n 'uploader_id': '221338632',\n 'is_live': True,\n },\n 'skip': 'livestream',\n }, {\n 'url': 'https://www.bigo.tv/th/Tarlerm1304',\n 'only_matching': True,\n }, {\n 'url': 'https://bigo.tv/115976881',\n 'only_matching': True,\n }]\n\n def _real_extract(self, url):\n user_id = self._match_id(url)\n\n info_raw = self._download_json(\n 'https://ta.bigo.tv/official_website/studio/getInternalStudioInfo',\n user_id, data=urlencode_postdata({'siteId': user_id}))\n\n if not isinstance(info_raw, dict):\n raise ExtractorError('Received invalid JSON data')\n if info_raw.get('code'):\n raise ExtractorError(\n 'Bigo says: %s (code %s)' % (info_raw.get('msg'), info_raw.get('code')), expected=True)\n info = info_raw.get('data') or {}\n\n if not info.get('alive'):\n raise ExtractorError('This user is offline.', expected=True)\n\n formats, subs = self._extract_m3u8_formats_and_subtitles(\n info.get('hls_src'), user_id, 'mp4', 'm3u8')\n\n return {\n 'id': info.get('roomId') or user_id,\n 'title': info.get('roomTopic') or info.get('nick_name') or user_id,\n 'formats': formats,\n 'subtitles': subs,\n 'thumbnail': info.get('snapshot'),\n 'uploader': info.get('nick_name'),\n 'uploader_id': user_id,\n 'is_live': True,\n }\n", "path": "yt_dlp/extractor/bigo.py"}]}
| 1,840 | 342 |
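
The patch above swaps the metadata call over to the `ta.bigo.tv/official_website` endpoint and hands HLS parsing to `_extract_m3u8_formats_and_subtitles`. As a rough standalone sketch of the same request flow — assuming the third-party `requests` package and only the response fields already visible in the extractor (`code`, `msg`, `data`, `alive`, `hls_src`) — the metadata fetch looks roughly like this:

```python
# Rough sketch only: `requests` is an assumed dependency here, and the field
# handling mirrors the extractor above rather than any documented Bigo API.
import requests


def fetch_bigo_studio_info(user_id: str) -> dict:
    # The patched endpoint; the old bare bigo.tv/studio/... path is what
    # started answering "paramters invalid (code 1)".
    resp = requests.post(
        "https://ta.bigo.tv/official_website/studio/getInternalStudioInfo",
        data={"siteId": user_id},
        timeout=10,
    )
    info_raw = resp.json()
    if not isinstance(info_raw, dict):
        raise ValueError("Received invalid JSON data")
    if info_raw.get("code"):
        raise ValueError(
            "Bigo says: %s (code %s)" % (info_raw.get("msg"), info_raw.get("code"))
        )
    return info_raw.get("data") or {}


if __name__ == "__main__":
    info = fetch_bigo_studio_info("841947363")
    print(info.get("hls_src") if info.get("alive") else "user is offline")
```
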
gh_patches_debug_28600
|
rasdani/github-patches
|
git_diff
|
zestedesavoir__zds-site-3822
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[beta][v20] Reading a notification does not invalidate the cache
Server: Beta
Version: v20-RC2/99bee1d
System: Mac OS X
Browser: 52.0.2743.116 (64-bit)
---
1. Generate a notification.
2. Read it from the site.
3. Fetch the list of notifications through the API.
4. Unless the 15-minute cache timeout has already expired, the notification is still marked as unread in the API response.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `zds/notification/api/views.py`
Content:
```
1 # coding: utf-8
2 from dry_rest_permissions.generics import DRYPermissions
3 from rest_framework import filters
4 from rest_framework.generics import ListAPIView
5 from rest_framework.permissions import IsAuthenticated
6 from rest_framework_extensions.cache.decorators import cache_response
7 from rest_framework_extensions.etag.decorators import etag
8 from rest_framework_extensions.key_constructor import bits
9 from rest_framework_extensions.key_constructor.constructors import DefaultKeyConstructor
10
11 from zds.api.bits import DJRF3xPaginationKeyBit
12 from zds.notification.api.serializers import NotificationSerializer
13 from zds.notification.models import Notification
14
15
16 class PagingNotificationListKeyConstructor(DefaultKeyConstructor):
17 pagination = DJRF3xPaginationKeyBit()
18 search = bits.QueryParamsKeyBit(['search', 'ordering', 'type'])
19 list_sql_query = bits.ListSqlQueryKeyBit()
20 unique_view_id = bits.UniqueViewIdKeyBit()
21 user = bits.UserKeyBit()
22
23
24 class NotificationListAPI(ListAPIView):
25 """
26 List of notification.
27 """
28
29 filter_backends = (filters.SearchFilter, filters.OrderingFilter)
30 search_fields = ('title',)
31 ordering_fields = ('pubdate', 'title',)
32 list_key_func = PagingNotificationListKeyConstructor()
33 serializer_class = NotificationSerializer
34 permission_classes = (IsAuthenticated, DRYPermissions,)
35
36 @etag(list_key_func)
37 @cache_response(key_func=list_key_func)
38 def get(self, request, *args, **kwargs):
39 """
40 Lists all notifications of a user.
41 ---
42
43 parameters:
44 - name: Authorization
45 description: Bearer token to make an authenticated request.
46 required: true
47 paramType: header
48 - name: page
49 description: Restricts output to the given page number.
50 required: false
51 paramType: query
52 - name: page_size
53 description: Sets the number of notifications per page.
54 required: false
55 paramType: query
56 - name: search
57 description: Filters by title.
58 required: false
59 paramType: query
60 - name: ordering
61 description: Sorts the results. You can order by (-)pubdate or (-)title.
62 paramType: query
63 - name: type
64 description: Filters by notification type.
65 paramType: query
66 - name: subscription_type
67 description: Filters by subscription type.
68 paramType: query
69 - name: expand
70 description: Returns an object instead of an identifier representing the given field.
71 required: false
72 paramType: query
73 responseMessages:
74 - code: 401
75 message: Not Authenticated
76 - code: 404
77 message: Not Found
78 """
79 return self.list(request, *args, **kwargs)
80
81 def get_queryset(self):
82 queryset = Notification.objects.get_notifications_of(self.request.user)
83 subscription_type = self.request.query_params.get('subscription_type', None)
84 if subscription_type:
85 queryset = queryset.filter(subscription__content_type__model=subscription_type)
86 _type = self.request.query_params.get('type', None)
87 if _type:
88 queryset = queryset.filter(content_type__model=_type)
89 return queryset
90
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/zds/notification/api/views.py b/zds/notification/api/views.py
--- a/zds/notification/api/views.py
+++ b/zds/notification/api/views.py
@@ -1,4 +1,8 @@
# coding: utf-8
+import datetime
+from django.core.cache import cache
+from django.db.models.signals import post_delete
+from django.db.models.signals import post_save
from dry_rest_permissions.generics import DRYPermissions
from rest_framework import filters
from rest_framework.generics import ListAPIView
@@ -8,7 +12,7 @@
from rest_framework_extensions.key_constructor import bits
from rest_framework_extensions.key_constructor.constructors import DefaultKeyConstructor
-from zds.api.bits import DJRF3xPaginationKeyBit
+from zds.api.bits import DJRF3xPaginationKeyBit, UpdatedAtKeyBit
from zds.notification.api.serializers import NotificationSerializer
from zds.notification.models import Notification
@@ -19,6 +23,15 @@
list_sql_query = bits.ListSqlQueryKeyBit()
unique_view_id = bits.UniqueViewIdKeyBit()
user = bits.UserKeyBit()
+ updated_at = UpdatedAtKeyBit('api_updated_notification')
+
+
+def change_api_notification_updated_at(sender=None, instance=None, *args, **kwargs):
+ cache.set('api_updated_notification', datetime.datetime.utcnow())
+
+
+post_save.connect(receiver=change_api_notification_updated_at, sender=Notification)
+post_delete.connect(receiver=change_api_notification_updated_at, sender=Notification)
class NotificationListAPI(ListAPIView):
|
{"golden_diff": "diff --git a/zds/notification/api/views.py b/zds/notification/api/views.py\n--- a/zds/notification/api/views.py\n+++ b/zds/notification/api/views.py\n@@ -1,4 +1,8 @@\n # coding: utf-8\n+import datetime\n+from django.core.cache import cache\n+from django.db.models.signals import post_delete\n+from django.db.models.signals import post_save\n from dry_rest_permissions.generics import DRYPermissions\n from rest_framework import filters\n from rest_framework.generics import ListAPIView\n@@ -8,7 +12,7 @@\n from rest_framework_extensions.key_constructor import bits\n from rest_framework_extensions.key_constructor.constructors import DefaultKeyConstructor\n \n-from zds.api.bits import DJRF3xPaginationKeyBit\n+from zds.api.bits import DJRF3xPaginationKeyBit, UpdatedAtKeyBit\n from zds.notification.api.serializers import NotificationSerializer\n from zds.notification.models import Notification\n \n@@ -19,6 +23,15 @@\n list_sql_query = bits.ListSqlQueryKeyBit()\n unique_view_id = bits.UniqueViewIdKeyBit()\n user = bits.UserKeyBit()\n+ updated_at = UpdatedAtKeyBit('api_updated_notification')\n+\n+\n+def change_api_notification_updated_at(sender=None, instance=None, *args, **kwargs):\n+ cache.set('api_updated_notification', datetime.datetime.utcnow())\n+\n+\n+post_save.connect(receiver=change_api_notification_updated_at, sender=Notification)\n+post_delete.connect(receiver=change_api_notification_updated_at, sender=Notification)\n \n \n class NotificationListAPI(ListAPIView):\n", "issue": "[beta][v20] Lire une notification n'invalide pas le cache\nServeur : Beta\nVersion : v20-RC2/99bee1d\nSyst\u00e8me : Mac OS X\nNavigateur : 52.0.2743.116 (64-bit)\n\n---\n1. G\u00e9n\u00e9rez une notification.\n2. Lisez l\u00e0 depuis le site.\n3. R\u00e9cup\u00e9rez la liste des notifications par l'API.\n4. 
Si le timeout de 15 minutes n'est pas pass\u00e9 par l\u00e0, la notification est toujours marqu\u00e9e comme non lue dans la r\u00e9ponse de l'API.\n\n", "before_files": [{"content": "# coding: utf-8\nfrom dry_rest_permissions.generics import DRYPermissions\nfrom rest_framework import filters\nfrom rest_framework.generics import ListAPIView\nfrom rest_framework.permissions import IsAuthenticated\nfrom rest_framework_extensions.cache.decorators import cache_response\nfrom rest_framework_extensions.etag.decorators import etag\nfrom rest_framework_extensions.key_constructor import bits\nfrom rest_framework_extensions.key_constructor.constructors import DefaultKeyConstructor\n\nfrom zds.api.bits import DJRF3xPaginationKeyBit\nfrom zds.notification.api.serializers import NotificationSerializer\nfrom zds.notification.models import Notification\n\n\nclass PagingNotificationListKeyConstructor(DefaultKeyConstructor):\n pagination = DJRF3xPaginationKeyBit()\n search = bits.QueryParamsKeyBit(['search', 'ordering', 'type'])\n list_sql_query = bits.ListSqlQueryKeyBit()\n unique_view_id = bits.UniqueViewIdKeyBit()\n user = bits.UserKeyBit()\n\n\nclass NotificationListAPI(ListAPIView):\n \"\"\"\n List of notification.\n \"\"\"\n\n filter_backends = (filters.SearchFilter, filters.OrderingFilter)\n search_fields = ('title',)\n ordering_fields = ('pubdate', 'title',)\n list_key_func = PagingNotificationListKeyConstructor()\n serializer_class = NotificationSerializer\n permission_classes = (IsAuthenticated, DRYPermissions,)\n\n @etag(list_key_func)\n @cache_response(key_func=list_key_func)\n def get(self, request, *args, **kwargs):\n \"\"\"\n Lists all notifications of a user.\n ---\n\n parameters:\n - name: Authorization\n description: Bearer token to make an authenticated request.\n required: true\n paramType: header\n - name: page\n description: Restricts output to the given page number.\n required: false\n paramType: query\n - name: page_size\n description: Sets the number of notifications per page.\n required: false\n paramType: query\n - name: search\n description: Filters by title.\n required: false\n paramType: query\n - name: ordering\n description: Sorts the results. 
You can order by (-)pubdate or (-)title.\n paramType: query\n - name: type\n description: Filters by notification type.\n paramType: query\n - name: subscription_type\n description: Filters by subscription type.\n paramType: query\n - name: expand\n description: Returns an object instead of an identifier representing the given field.\n required: false\n paramType: query\n responseMessages:\n - code: 401\n message: Not Authenticated\n - code: 404\n message: Not Found\n \"\"\"\n return self.list(request, *args, **kwargs)\n\n def get_queryset(self):\n queryset = Notification.objects.get_notifications_of(self.request.user)\n subscription_type = self.request.query_params.get('subscription_type', None)\n if subscription_type:\n queryset = queryset.filter(subscription__content_type__model=subscription_type)\n _type = self.request.query_params.get('type', None)\n if _type:\n queryset = queryset.filter(content_type__model=_type)\n return queryset\n", "path": "zds/notification/api/views.py"}], "after_files": [{"content": "# coding: utf-8\nimport datetime\nfrom django.core.cache import cache\nfrom django.db.models.signals import post_delete\nfrom django.db.models.signals import post_save\nfrom dry_rest_permissions.generics import DRYPermissions\nfrom rest_framework import filters\nfrom rest_framework.generics import ListAPIView\nfrom rest_framework.permissions import IsAuthenticated\nfrom rest_framework_extensions.cache.decorators import cache_response\nfrom rest_framework_extensions.etag.decorators import etag\nfrom rest_framework_extensions.key_constructor import bits\nfrom rest_framework_extensions.key_constructor.constructors import DefaultKeyConstructor\n\nfrom zds.api.bits import DJRF3xPaginationKeyBit, UpdatedAtKeyBit\nfrom zds.notification.api.serializers import NotificationSerializer\nfrom zds.notification.models import Notification\n\n\nclass PagingNotificationListKeyConstructor(DefaultKeyConstructor):\n pagination = DJRF3xPaginationKeyBit()\n search = bits.QueryParamsKeyBit(['search', 'ordering', 'type'])\n list_sql_query = bits.ListSqlQueryKeyBit()\n unique_view_id = bits.UniqueViewIdKeyBit()\n user = bits.UserKeyBit()\n updated_at = UpdatedAtKeyBit('api_updated_notification')\n\n\ndef change_api_notification_updated_at(sender=None, instance=None, *args, **kwargs):\n cache.set('api_updated_notification', datetime.datetime.utcnow())\n\n\npost_save.connect(receiver=change_api_notification_updated_at, sender=Notification)\npost_delete.connect(receiver=change_api_notification_updated_at, sender=Notification)\n\n\nclass NotificationListAPI(ListAPIView):\n \"\"\"\n List of notification.\n \"\"\"\n\n filter_backends = (filters.SearchFilter, filters.OrderingFilter)\n search_fields = ('title',)\n ordering_fields = ('pubdate', 'title',)\n list_key_func = PagingNotificationListKeyConstructor()\n serializer_class = NotificationSerializer\n permission_classes = (IsAuthenticated, DRYPermissions,)\n\n @etag(list_key_func)\n @cache_response(key_func=list_key_func)\n def get(self, request, *args, **kwargs):\n \"\"\"\n Lists all notifications of a user.\n ---\n\n parameters:\n - name: Authorization\n description: Bearer token to make an authenticated request.\n required: true\n paramType: header\n - name: page\n description: Restricts output to the given page number.\n required: false\n paramType: query\n - name: page_size\n description: Sets the number of notifications per page.\n required: false\n paramType: query\n - name: search\n description: Filters by title.\n required: false\n paramType: query\n - 
name: ordering\n description: Sorts the results. You can order by (-)pubdate or (-)title.\n paramType: query\n - name: type\n description: Filters by notification type.\n paramType: query\n - name: subscription_type\n description: Filters by subscription type.\n paramType: query\n - name: expand\n description: Returns an object instead of an identifier representing the given field.\n required: false\n paramType: query\n responseMessages:\n - code: 401\n message: Not Authenticated\n - code: 404\n message: Not Found\n \"\"\"\n return self.list(request, *args, **kwargs)\n\n def get_queryset(self):\n queryset = Notification.objects.get_notifications_of(self.request.user)\n subscription_type = self.request.query_params.get('subscription_type', None)\n if subscription_type:\n queryset = queryset.filter(subscription__content_type__model=subscription_type)\n _type = self.request.query_params.get('type', None)\n if _type:\n queryset = queryset.filter(content_type__model=_type)\n return queryset\n", "path": "zds/notification/api/views.py"}]}
| 1,246 | 332 |
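
The fix invalidates the cached notification list by bumping a timestamp on every `Notification` save or delete and folding that timestamp into the cache/ETag key through `UpdatedAtKeyBit`. That bit lives in `zds.api.bits` and is not shown in this record; the sketch below is a typical implementation of such a bit following the drf-extensions `KeyBitBase` pattern, so treat the class body as an assumption rather than the project's actual code.

```python
# Assumed implementation of UpdatedAtKeyBit (the real one sits in zds.api.bits
# and is not shown in this record); it derives part of the cache/ETag key from
# a cached timestamp, per the drf-extensions KeyBitBase pattern.
import datetime

from django.core.cache import cache
from rest_framework_extensions.key_constructor.bits import KeyBitBase


class UpdatedAtKeyBit(KeyBitBase):
    def __init__(self, cache_key, *args, **kwargs):
        self.cache_key = cache_key
        super().__init__(*args, **kwargs)

    def get_data(self, **kwargs):
        # Every Notification save/delete resets this timestamp (see the signal
        # receivers in the patch), so the derived key changes immediately and
        # stale cached responses stop being served.
        value = cache.get(self.cache_key, None)
        if not value:
            value = datetime.datetime.utcnow()
            cache.set(self.cache_key, value)
        return str(value)
```

With this bit in the key constructor, the `post_save`/`post_delete` receivers added by the patch change the ETag and cache key right away instead of waiting out the 15-minute timeout.
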
gh_patches_debug_35738
|
rasdani/github-patches
|
git_diff
|
microsoft__botbuilder-python-1682
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Increase streaming unit tests
reach parity with C# unit tests
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `libraries/botframework-streaming/botframework/streaming/receive_request.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 from typing import List
5
6 from botframework.streaming.payloads import ContentStream
7
8
9 class ReceiveRequest:
10 def __init__(
11 self, *, verb: str = None, path: str = None, streams: List[ContentStream]
12 ):
13 self.verb = verb
14 self.path = path
15 self.streams: List[ContentStream] = streams or []
16
17 async def read_body_as_str(self) -> str:
18 try:
19 content_stream = self.streams[0] if self.streams else None
20
21 if not content_stream:
22 # TODO: maybe raise an error
23 return ""
24
25 # TODO: encoding double check
26 stream = await content_stream.stream.read_until_end()
27 return bytes(stream).decode("utf-8-sig")
28 except Exception as error:
29 raise error
30
```
Path: `libraries/botframework-streaming/botframework/streaming/streaming_response.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 import json
5 from uuid import UUID, uuid4
6 from typing import List, Union
7
8 from msrest.serialization import Model
9 from botframework.streaming.payloads import ResponseMessageStream
10 from botframework.streaming.payloads.models import Serializable
11
12
13 class StreamingResponse:
14 def __init__(
15 self, *, status_code: int = None, streams: List[ResponseMessageStream] = None
16 ):
17 self.status_code = status_code
18 self.streams = streams
19
20 def add_stream(self, content: object, identifier: UUID = None):
21 if not content:
22 raise TypeError("content can't be None")
23
24 if self.streams is None:
25 self.streams: List[ResponseMessageStream] = []
26
27 self.streams.append(
28 ResponseMessageStream(id=identifier or uuid4(), content=content)
29 )
30
31 def set_body(self, body: Union[str, Serializable, Model]):
32 # TODO: verify if msrest.serialization.Model is necessary
33 if not body:
34 return
35
36 if isinstance(body, Serializable):
37 body = body.to_json()
38 elif isinstance(body, Model):
39 body = json.dumps(body.as_dict())
40
41 self.add_stream(list(body.encode()))
42
43 @staticmethod
44 def create_response(status_code: int, body: object) -> "StreamingResponse":
45 response = StreamingResponse(status_code=status_code)
46
47 if body:
48 response.add_stream(body)
49
50 return response
51
```
Path: `libraries/botframework-streaming/botframework/streaming/receive_response.py`
Content:
```
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 from typing import List, Union, Type
5
6 from msrest.serialization import Model
7 from botframework.streaming.payloads import ContentStream
8 from botframework.streaming.payloads.models import Serializable
9
10
11 class ReceiveResponse:
12 def __init__(self, status_code: int = None, streams: List[ContentStream] = None):
13 self.status_code = status_code
14 self.streams = streams
15
16 def read_body_as_json(
17 self, cls: Union[Type[Model], Type[Serializable]]
18 ) -> Union[Model, Serializable]:
19 try:
20 body_str = self.read_body_as_str()
21 body = None
22
23 if issubclass(cls, Serializable):
24 body = cls().from_json(body_str)
25 elif isinstance(cls, Model):
26 body = cls.deserialize(body_str)
27 return body
28 except Exception as error:
29 raise error
30
31 def read_body_as_str(self) -> str:
32 try:
33 content_stream = self.read_body()
34
35 if not content_stream:
36 return ""
37
38 # TODO: encoding double check
39 return content_stream.decode("utf8")
40 except Exception as error:
41 raise error
42
43 def read_body(self) -> bytes:
44 try:
45 content_stream = self.streams[0] if self.streams else None
46
47 if not content_stream:
48 return None
49
50 return bytes(content_stream.stream)
51 except Exception as error:
52 raise error
53
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/libraries/botframework-streaming/botframework/streaming/receive_request.py b/libraries/botframework-streaming/botframework/streaming/receive_request.py
--- a/libraries/botframework-streaming/botframework/streaming/receive_request.py
+++ b/libraries/botframework-streaming/botframework/streaming/receive_request.py
@@ -8,7 +8,7 @@
class ReceiveRequest:
def __init__(
- self, *, verb: str = None, path: str = None, streams: List[ContentStream]
+ self, *, verb: str = None, path: str = None, streams: List[ContentStream] = None
):
self.verb = verb
self.path = path
diff --git a/libraries/botframework-streaming/botframework/streaming/receive_response.py b/libraries/botframework-streaming/botframework/streaming/receive_response.py
--- a/libraries/botframework-streaming/botframework/streaming/receive_response.py
+++ b/libraries/botframework-streaming/botframework/streaming/receive_response.py
@@ -9,9 +9,9 @@
class ReceiveResponse:
- def __init__(self, status_code: int = None, streams: List[ContentStream] = None):
+ def __init__(self, status_code: int = 0, streams: List[ContentStream] = None):
self.status_code = status_code
- self.streams = streams
+ self.streams = streams or []
def read_body_as_json(
self, cls: Union[Type[Model], Type[Serializable]]
diff --git a/libraries/botframework-streaming/botframework/streaming/streaming_response.py b/libraries/botframework-streaming/botframework/streaming/streaming_response.py
--- a/libraries/botframework-streaming/botframework/streaming/streaming_response.py
+++ b/libraries/botframework-streaming/botframework/streaming/streaming_response.py
@@ -2,6 +2,7 @@
# Licensed under the MIT License.
import json
+from http import HTTPStatus
from uuid import UUID, uuid4
from typing import List, Union
@@ -12,7 +13,7 @@
class StreamingResponse:
def __init__(
- self, *, status_code: int = None, streams: List[ResponseMessageStream] = None
+ self, *, status_code: int = 0, streams: List[ResponseMessageStream] = None
):
self.status_code = status_code
self.streams = streams
@@ -48,3 +49,20 @@
response.add_stream(body)
return response
+
+ @staticmethod
+ def not_found(body: object = None) -> "StreamingResponse":
+ return StreamingResponse.create_response(HTTPStatus.NOT_FOUND, body)
+
+ @staticmethod
+ def forbidden(body: object = None) -> "StreamingResponse":
+ return StreamingResponse.create_response(HTTPStatus.FORBIDDEN, body)
+
+ # pylint: disable=invalid-name
+ @staticmethod
+ def ok(body: object = None) -> "StreamingResponse":
+ return StreamingResponse.create_response(HTTPStatus.OK, body)
+
+ @staticmethod
+ def internal_server_error(body: object = None) -> "StreamingResponse":
+ return StreamingResponse.create_response(HTTPStatus.INTERNAL_SERVER_ERROR, body)
|
{"golden_diff": "diff --git a/libraries/botframework-streaming/botframework/streaming/receive_request.py b/libraries/botframework-streaming/botframework/streaming/receive_request.py\n--- a/libraries/botframework-streaming/botframework/streaming/receive_request.py\n+++ b/libraries/botframework-streaming/botframework/streaming/receive_request.py\n@@ -8,7 +8,7 @@\n \n class ReceiveRequest:\n def __init__(\n- self, *, verb: str = None, path: str = None, streams: List[ContentStream]\n+ self, *, verb: str = None, path: str = None, streams: List[ContentStream] = None\n ):\n self.verb = verb\n self.path = path\ndiff --git a/libraries/botframework-streaming/botframework/streaming/receive_response.py b/libraries/botframework-streaming/botframework/streaming/receive_response.py\n--- a/libraries/botframework-streaming/botframework/streaming/receive_response.py\n+++ b/libraries/botframework-streaming/botframework/streaming/receive_response.py\n@@ -9,9 +9,9 @@\n \n \n class ReceiveResponse:\n- def __init__(self, status_code: int = None, streams: List[ContentStream] = None):\n+ def __init__(self, status_code: int = 0, streams: List[ContentStream] = None):\n self.status_code = status_code\n- self.streams = streams\n+ self.streams = streams or []\n \n def read_body_as_json(\n self, cls: Union[Type[Model], Type[Serializable]]\ndiff --git a/libraries/botframework-streaming/botframework/streaming/streaming_response.py b/libraries/botframework-streaming/botframework/streaming/streaming_response.py\n--- a/libraries/botframework-streaming/botframework/streaming/streaming_response.py\n+++ b/libraries/botframework-streaming/botframework/streaming/streaming_response.py\n@@ -2,6 +2,7 @@\n # Licensed under the MIT License.\n \n import json\n+from http import HTTPStatus\n from uuid import UUID, uuid4\n from typing import List, Union\n \n@@ -12,7 +13,7 @@\n \n class StreamingResponse:\n def __init__(\n- self, *, status_code: int = None, streams: List[ResponseMessageStream] = None\n+ self, *, status_code: int = 0, streams: List[ResponseMessageStream] = None\n ):\n self.status_code = status_code\n self.streams = streams\n@@ -48,3 +49,20 @@\n response.add_stream(body)\n \n return response\n+\n+ @staticmethod\n+ def not_found(body: object = None) -> \"StreamingResponse\":\n+ return StreamingResponse.create_response(HTTPStatus.NOT_FOUND, body)\n+\n+ @staticmethod\n+ def forbidden(body: object = None) -> \"StreamingResponse\":\n+ return StreamingResponse.create_response(HTTPStatus.FORBIDDEN, body)\n+\n+ # pylint: disable=invalid-name\n+ @staticmethod\n+ def ok(body: object = None) -> \"StreamingResponse\":\n+ return StreamingResponse.create_response(HTTPStatus.OK, body)\n+\n+ @staticmethod\n+ def internal_server_error(body: object = None) -> \"StreamingResponse\":\n+ return StreamingResponse.create_response(HTTPStatus.INTERNAL_SERVER_ERROR, body)\n", "issue": "Increase streaming unit tests\nreach parity with C# unit tests\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. 
All rights reserved.\n# Licensed under the MIT License.\n\nfrom typing import List\n\nfrom botframework.streaming.payloads import ContentStream\n\n\nclass ReceiveRequest:\n def __init__(\n self, *, verb: str = None, path: str = None, streams: List[ContentStream]\n ):\n self.verb = verb\n self.path = path\n self.streams: List[ContentStream] = streams or []\n\n async def read_body_as_str(self) -> str:\n try:\n content_stream = self.streams[0] if self.streams else None\n\n if not content_stream:\n # TODO: maybe raise an error\n return \"\"\n\n # TODO: encoding double check\n stream = await content_stream.stream.read_until_end()\n return bytes(stream).decode(\"utf-8-sig\")\n except Exception as error:\n raise error\n", "path": "libraries/botframework-streaming/botframework/streaming/receive_request.py"}, {"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\nimport json\nfrom uuid import UUID, uuid4\nfrom typing import List, Union\n\nfrom msrest.serialization import Model\nfrom botframework.streaming.payloads import ResponseMessageStream\nfrom botframework.streaming.payloads.models import Serializable\n\n\nclass StreamingResponse:\n def __init__(\n self, *, status_code: int = None, streams: List[ResponseMessageStream] = None\n ):\n self.status_code = status_code\n self.streams = streams\n\n def add_stream(self, content: object, identifier: UUID = None):\n if not content:\n raise TypeError(\"content can't be None\")\n\n if self.streams is None:\n self.streams: List[ResponseMessageStream] = []\n\n self.streams.append(\n ResponseMessageStream(id=identifier or uuid4(), content=content)\n )\n\n def set_body(self, body: Union[str, Serializable, Model]):\n # TODO: verify if msrest.serialization.Model is necessary\n if not body:\n return\n\n if isinstance(body, Serializable):\n body = body.to_json()\n elif isinstance(body, Model):\n body = json.dumps(body.as_dict())\n\n self.add_stream(list(body.encode()))\n\n @staticmethod\n def create_response(status_code: int, body: object) -> \"StreamingResponse\":\n response = StreamingResponse(status_code=status_code)\n\n if body:\n response.add_stream(body)\n\n return response\n", "path": "libraries/botframework-streaming/botframework/streaming/streaming_response.py"}, {"content": "# Copyright (c) Microsoft Corporation. 
All rights reserved.\n# Licensed under the MIT License.\n\nfrom typing import List, Union, Type\n\nfrom msrest.serialization import Model\nfrom botframework.streaming.payloads import ContentStream\nfrom botframework.streaming.payloads.models import Serializable\n\n\nclass ReceiveResponse:\n def __init__(self, status_code: int = None, streams: List[ContentStream] = None):\n self.status_code = status_code\n self.streams = streams\n\n def read_body_as_json(\n self, cls: Union[Type[Model], Type[Serializable]]\n ) -> Union[Model, Serializable]:\n try:\n body_str = self.read_body_as_str()\n body = None\n\n if issubclass(cls, Serializable):\n body = cls().from_json(body_str)\n elif isinstance(cls, Model):\n body = cls.deserialize(body_str)\n return body\n except Exception as error:\n raise error\n\n def read_body_as_str(self) -> str:\n try:\n content_stream = self.read_body()\n\n if not content_stream:\n return \"\"\n\n # TODO: encoding double check\n return content_stream.decode(\"utf8\")\n except Exception as error:\n raise error\n\n def read_body(self) -> bytes:\n try:\n content_stream = self.streams[0] if self.streams else None\n\n if not content_stream:\n return None\n\n return bytes(content_stream.stream)\n except Exception as error:\n raise error\n", "path": "libraries/botframework-streaming/botframework/streaming/receive_response.py"}], "after_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\nfrom typing import List\n\nfrom botframework.streaming.payloads import ContentStream\n\n\nclass ReceiveRequest:\n def __init__(\n self, *, verb: str = None, path: str = None, streams: List[ContentStream] = None\n ):\n self.verb = verb\n self.path = path\n self.streams: List[ContentStream] = streams or []\n\n async def read_body_as_str(self) -> str:\n try:\n content_stream = self.streams[0] if self.streams else None\n\n if not content_stream:\n # TODO: maybe raise an error\n return \"\"\n\n # TODO: encoding double check\n stream = await content_stream.stream.read_until_end()\n return bytes(stream).decode(\"utf-8-sig\")\n except Exception as error:\n raise error\n", "path": "libraries/botframework-streaming/botframework/streaming/receive_request.py"}, {"content": "# Copyright (c) Microsoft Corporation. 
All rights reserved.\n# Licensed under the MIT License.\n\nimport json\nfrom http import HTTPStatus\nfrom uuid import UUID, uuid4\nfrom typing import List, Union\n\nfrom msrest.serialization import Model\nfrom botframework.streaming.payloads import ResponseMessageStream\nfrom botframework.streaming.payloads.models import Serializable\n\n\nclass StreamingResponse:\n def __init__(\n self, *, status_code: int = 0, streams: List[ResponseMessageStream] = None\n ):\n self.status_code = status_code\n self.streams = streams\n\n def add_stream(self, content: object, identifier: UUID = None):\n if not content:\n raise TypeError(\"content can't be None\")\n\n if self.streams is None:\n self.streams: List[ResponseMessageStream] = []\n\n self.streams.append(\n ResponseMessageStream(id=identifier or uuid4(), content=content)\n )\n\n def set_body(self, body: Union[str, Serializable, Model]):\n # TODO: verify if msrest.serialization.Model is necessary\n if not body:\n return\n\n if isinstance(body, Serializable):\n body = body.to_json()\n elif isinstance(body, Model):\n body = json.dumps(body.as_dict())\n\n self.add_stream(list(body.encode()))\n\n @staticmethod\n def create_response(status_code: int, body: object) -> \"StreamingResponse\":\n response = StreamingResponse(status_code=status_code)\n\n if body:\n response.add_stream(body)\n\n return response\n\n @staticmethod\n def not_found(body: object = None) -> \"StreamingResponse\":\n return StreamingResponse.create_response(HTTPStatus.NOT_FOUND, body)\n\n @staticmethod\n def forbidden(body: object = None) -> \"StreamingResponse\":\n return StreamingResponse.create_response(HTTPStatus.FORBIDDEN, body)\n\n # pylint: disable=invalid-name\n @staticmethod\n def ok(body: object = None) -> \"StreamingResponse\":\n return StreamingResponse.create_response(HTTPStatus.OK, body)\n\n @staticmethod\n def internal_server_error(body: object = None) -> \"StreamingResponse\":\n return StreamingResponse.create_response(HTTPStatus.INTERNAL_SERVER_ERROR, body)\n", "path": "libraries/botframework-streaming/botframework/streaming/streaming_response.py"}, {"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\nfrom typing import List, Union, Type\n\nfrom msrest.serialization import Model\nfrom botframework.streaming.payloads import ContentStream\nfrom botframework.streaming.payloads.models import Serializable\n\n\nclass ReceiveResponse:\n def __init__(self, status_code: int = 0, streams: List[ContentStream] = None):\n self.status_code = status_code\n self.streams = streams or []\n\n def read_body_as_json(\n self, cls: Union[Type[Model], Type[Serializable]]\n ) -> Union[Model, Serializable]:\n try:\n body_str = self.read_body_as_str()\n body = None\n\n if issubclass(cls, Serializable):\n body = cls().from_json(body_str)\n elif isinstance(cls, Model):\n body = cls.deserialize(body_str)\n return body\n except Exception as error:\n raise error\n\n def read_body_as_str(self) -> str:\n try:\n content_stream = self.read_body()\n\n if not content_stream:\n return \"\"\n\n # TODO: encoding double check\n return content_stream.decode(\"utf8\")\n except Exception as error:\n raise error\n\n def read_body(self) -> bytes:\n try:\n content_stream = self.streams[0] if self.streams else None\n\n if not content_stream:\n return None\n\n return bytes(content_stream.stream)\n except Exception as error:\n raise error\n", "path": "libraries/botframework-streaming/botframework/streaming/receive_response.py"}]}
| 1,410 | 735 |
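
The streaming patch is mostly about friendlier defaults (`status_code = 0`, `streams or []`, an optional `streams` argument on `ReceiveRequest`) plus HTTP-status convenience constructors, which is what the additional unit tests can now exercise. A short usage sketch based only on the patched modules above:

```python
# Usage sketch relying only on the patched modules shown above.
from http import HTTPStatus

from botframework.streaming.receive_response import ReceiveResponse
from botframework.streaming.streaming_response import StreamingResponse

ok_response = StreamingResponse.ok(list(b"hello"))
assert ok_response.status_code == HTTPStatus.OK
assert len(ok_response.streams) == 1

missing = StreamingResponse.not_found()
assert missing.status_code == HTTPStatus.NOT_FOUND
assert missing.streams is None  # no body, so no stream was added

empty = ReceiveResponse()
assert empty.status_code == 0          # new default instead of None
assert empty.read_body_as_str() == ""  # streams now defaults to []
```
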
gh_patches_debug_14007
|
rasdani/github-patches
|
git_diff
|
pyqtgraph__pyqtgraph-1219
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CSV export broken
### Short description
CSV export fails when the plot name contains characters that the locale codec cannot encode.
### Code to reproduce
```python
from pyqtgraph.Qt import QtGui, QtCore
import numpy as np
import pyqtgraph as pg
#QtGui.QApplication.setGraphicsSystem('raster')
app = QtGui.QApplication([])
win = pg.GraphicsLayoutWidget(show=True, title="Basic plotting examples")
win.resize(1000,600)
win.setWindowTitle('pyqtgraph example: Plotting')
pg.setConfigOptions(antialias=True)
pw = win.addPlot(title="Scatter plot, axis labels, log scale")
pw.addLegend()
pw .plot(np.random.normal(size=100), pen=(255,0,0), name="\u00A0下加热体")
QtGui.QApplication.instance().exec_()
```
### Expected behavior
Export CSV Success
### Real behavior
Export CSV Failed
```
---------------------------------------------------------------------------
UnicodeEncodeError Traceback (most recent call last)
c:\program files\python37\lib\site-packages\pyqtgraph\exporters\Exporter.py in fileSaveFinished(self, fileName)
75 fileName = fileName + '.' + selectedExt.lstrip('.')
76
---> 77 self.export(fileName=fileName, **self.fileDialog.opts)
78
79 def getScene(self):
c:\program files\python37\lib\site-packages\pyqtgraph\exporters\CSVExporter.py in export(self, fileName)
58
59 with open(fileName, 'w') as fd:
---> 60 fd.write(sep.join(header) + '\n')
61 i = 0
62 numFormat = '%%0.%dg' % self.params['precision']
UnicodeEncodeError: 'gbk' codec can't encode character '\xa0' in position 1: illegal multibyte sequence
```
### Tested environment(s)
* PyQtGraph version: 0.11.0.dev0+g2203933
* Qt Python binding: PyQt5 5.13.2 Qt 5.13.2
* Python version: Python 3.7.5
* NumPy version: 1.17.4
* Operating system: Windows 7 X64
* Installation method: pip git+
### Additional context
I use "\u00A0" because I want to add some space before the label name in the legend.
Could the CSV export write with "utf-8" instead of "gbk"?
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pyqtgraph/exporters/CSVExporter.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 from ..Qt import QtGui, QtCore
3 from .Exporter import Exporter
4 from ..parametertree import Parameter
5 from .. import PlotItem
6
7 __all__ = ['CSVExporter']
8
9
10 class CSVExporter(Exporter):
11 Name = "CSV from plot data"
12 windows = []
13 def __init__(self, item):
14 Exporter.__init__(self, item)
15 self.params = Parameter(name='params', type='group', children=[
16 {'name': 'separator', 'type': 'list', 'value': 'comma', 'values': ['comma', 'tab']},
17 {'name': 'precision', 'type': 'int', 'value': 10, 'limits': [0, None]},
18 {'name': 'columnMode', 'type': 'list', 'values': ['(x,y) per plot', '(x,y,y,y) for all plots']}
19 ])
20
21 def parameters(self):
22 return self.params
23
24 def export(self, fileName=None):
25
26 if not isinstance(self.item, PlotItem):
27 raise Exception("Must have a PlotItem selected for CSV export.")
28
29 if fileName is None:
30 self.fileSaveDialog(filter=["*.csv", "*.tsv"])
31 return
32
33 data = []
34 header = []
35
36 appendAllX = self.params['columnMode'] == '(x,y) per plot'
37
38 for i, c in enumerate(self.item.curves):
39 cd = c.getData()
40 if cd[0] is None:
41 continue
42 data.append(cd)
43 if hasattr(c, 'implements') and c.implements('plotData') and c.name() is not None:
44 name = c.name().replace('"', '""') + '_'
45 xName, yName = '"'+name+'x"', '"'+name+'y"'
46 else:
47 xName = 'x%04d' % i
48 yName = 'y%04d' % i
49 if appendAllX or i == 0:
50 header.extend([xName, yName])
51 else:
52 header.extend([yName])
53
54 if self.params['separator'] == 'comma':
55 sep = ','
56 else:
57 sep = '\t'
58
59 with open(fileName, 'w') as fd:
60 fd.write(sep.join(header) + '\n')
61 i = 0
62 numFormat = '%%0.%dg' % self.params['precision']
63 numRows = max([len(d[0]) for d in data])
64 for i in range(numRows):
65 for j, d in enumerate(data):
66 # write x value if this is the first column, or if we want
67 # x for all rows
68 if appendAllX or j == 0:
69 if d is not None and i < len(d[0]):
70 fd.write(numFormat % d[0][i] + sep)
71 else:
72 fd.write(' %s' % sep)
73
74 # write y value
75 if d is not None and i < len(d[1]):
76 fd.write(numFormat % d[1][i] + sep)
77 else:
78 fd.write(' %s' % sep)
79 fd.write('\n')
80
81
82 CSVExporter.register()
83
84
85
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pyqtgraph/exporters/CSVExporter.py b/pyqtgraph/exporters/CSVExporter.py
--- a/pyqtgraph/exporters/CSVExporter.py
+++ b/pyqtgraph/exporters/CSVExporter.py
@@ -3,6 +3,7 @@
from .Exporter import Exporter
from ..parametertree import Parameter
from .. import PlotItem
+from ..python2_3 import asUnicode
__all__ = ['CSVExporter']
@@ -57,7 +58,7 @@
sep = '\t'
with open(fileName, 'w') as fd:
- fd.write(sep.join(header) + '\n')
+ fd.write(sep.join(map(asUnicode, header)) + '\n')
i = 0
numFormat = '%%0.%dg' % self.params['precision']
numRows = max([len(d[0]) for d in data])
|
{"golden_diff": "diff --git a/pyqtgraph/exporters/CSVExporter.py b/pyqtgraph/exporters/CSVExporter.py\n--- a/pyqtgraph/exporters/CSVExporter.py\n+++ b/pyqtgraph/exporters/CSVExporter.py\n@@ -3,6 +3,7 @@\n from .Exporter import Exporter\n from ..parametertree import Parameter\n from .. import PlotItem\n+from ..python2_3 import asUnicode\n \n __all__ = ['CSVExporter']\n \n@@ -57,7 +58,7 @@\n sep = '\\t'\n \n with open(fileName, 'w') as fd:\n- fd.write(sep.join(header) + '\\n')\n+ fd.write(sep.join(map(asUnicode, header)) + '\\n')\n i = 0\n numFormat = '%%0.%dg' % self.params['precision']\n numRows = max([len(d[0]) for d in data])\n", "issue": "CSV export broken\n### Short description\r\nExport CSV failed when the plot name has decode error characters.\r\n\r\n### Code to reproduce\r\n```python\r\nfrom pyqtgraph.Qt import QtGui, QtCore\r\nimport numpy as np\r\nimport pyqtgraph as pg\r\n\r\n#QtGui.QApplication.setGraphicsSystem('raster')\r\napp = QtGui.QApplication([])\r\nwin = pg.GraphicsLayoutWidget(show=True, title=\"Basic plotting examples\")\r\nwin.resize(1000,600)\r\nwin.setWindowTitle('pyqtgraph example: Plotting')\r\n\r\n\r\npg.setConfigOptions(antialias=True)\r\n\r\npw = win.addPlot(title=\"Scatter plot, axis labels, log scale\")\r\npw.addLegend()\r\npw .plot(np.random.normal(size=100), pen=(255,0,0), name=\"\\u00A0\u4e0b\u52a0\u70ed\u4f53\")\r\n\r\nQtGui.QApplication.instance().exec_()\r\n```\r\n\r\n### Expected behavior\r\nExport CSV Success\r\n\r\n### Real behavior\r\nExport CSV Failed\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nUnicodeEncodeError Traceback (most recent call last)\r\nc:\\program files\\python37\\lib\\site-packages\\pyqtgraph\\exporters\\Exporter.py in fileSaveFinished(self, fileName)\r\n 75 fileName = fileName + '.' + selectedExt.lstrip('.')\r\n 76\r\n---> 77 self.export(fileName=fileName, **self.fileDialog.opts)\r\n 78\r\n 79 def getScene(self):\r\n\r\nc:\\program files\\python37\\lib\\site-packages\\pyqtgraph\\exporters\\CSVExporter.py in export(self, fileName)\r\n 58\r\n 59 with open(fileName, 'w') as fd:\r\n---> 60 fd.write(sep.join(header) + '\\n')\r\n 61 i = 0\r\n 62 numFormat = '%%0.%dg' % self.params['precision']\r\n\r\nUnicodeEncodeError: 'gbk' codec can't encode character '\\xa0' in position 1: illegal multibyte sequence\r\n```\r\n\r\n### Tested environment(s)\r\n\r\n * PyQtGraph version: 0.11.0.dev0+g2203933\r\n * Qt Python binding: PyQt5 5.13.2 Qt 5.13.2\r\n * Python version: Python 3.7.5 \r\n * NumPy version: 1.17.4\r\n * Operating system: Windows 7 X64\r\n * Installation method: pip git+\r\n\r\n### Additional context\r\nI use \"\\u00A0\" because i want to add some space before label name in the legend.\r\nCould i use the csv export by \"utf-8\" but not \"gbk\" ?\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom ..Qt import QtGui, QtCore\nfrom .Exporter import Exporter\nfrom ..parametertree import Parameter\nfrom .. 
import PlotItem\n\n__all__ = ['CSVExporter']\n \n \nclass CSVExporter(Exporter):\n Name = \"CSV from plot data\"\n windows = []\n def __init__(self, item):\n Exporter.__init__(self, item)\n self.params = Parameter(name='params', type='group', children=[\n {'name': 'separator', 'type': 'list', 'value': 'comma', 'values': ['comma', 'tab']},\n {'name': 'precision', 'type': 'int', 'value': 10, 'limits': [0, None]},\n {'name': 'columnMode', 'type': 'list', 'values': ['(x,y) per plot', '(x,y,y,y) for all plots']}\n ])\n \n def parameters(self):\n return self.params\n \n def export(self, fileName=None):\n \n if not isinstance(self.item, PlotItem):\n raise Exception(\"Must have a PlotItem selected for CSV export.\")\n \n if fileName is None:\n self.fileSaveDialog(filter=[\"*.csv\", \"*.tsv\"])\n return\n\n data = []\n header = []\n\n appendAllX = self.params['columnMode'] == '(x,y) per plot'\n\n for i, c in enumerate(self.item.curves):\n cd = c.getData()\n if cd[0] is None:\n continue\n data.append(cd)\n if hasattr(c, 'implements') and c.implements('plotData') and c.name() is not None:\n name = c.name().replace('\"', '\"\"') + '_'\n xName, yName = '\"'+name+'x\"', '\"'+name+'y\"'\n else:\n xName = 'x%04d' % i\n yName = 'y%04d' % i\n if appendAllX or i == 0:\n header.extend([xName, yName])\n else:\n header.extend([yName])\n\n if self.params['separator'] == 'comma':\n sep = ','\n else:\n sep = '\\t'\n\n with open(fileName, 'w') as fd:\n fd.write(sep.join(header) + '\\n')\n i = 0\n numFormat = '%%0.%dg' % self.params['precision']\n numRows = max([len(d[0]) for d in data])\n for i in range(numRows):\n for j, d in enumerate(data):\n # write x value if this is the first column, or if we want\n # x for all rows\n if appendAllX or j == 0:\n if d is not None and i < len(d[0]):\n fd.write(numFormat % d[0][i] + sep)\n else:\n fd.write(' %s' % sep)\n\n # write y value\n if d is not None and i < len(d[1]):\n fd.write(numFormat % d[1][i] + sep)\n else:\n fd.write(' %s' % sep)\n fd.write('\\n')\n\n\nCSVExporter.register() \n \n \n", "path": "pyqtgraph/exporters/CSVExporter.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nfrom ..Qt import QtGui, QtCore\nfrom .Exporter import Exporter\nfrom ..parametertree import Parameter\nfrom .. 
import PlotItem\nfrom ..python2_3 import asUnicode\n\n__all__ = ['CSVExporter']\n \n \nclass CSVExporter(Exporter):\n Name = \"CSV from plot data\"\n windows = []\n def __init__(self, item):\n Exporter.__init__(self, item)\n self.params = Parameter(name='params', type='group', children=[\n {'name': 'separator', 'type': 'list', 'value': 'comma', 'values': ['comma', 'tab']},\n {'name': 'precision', 'type': 'int', 'value': 10, 'limits': [0, None]},\n {'name': 'columnMode', 'type': 'list', 'values': ['(x,y) per plot', '(x,y,y,y) for all plots']}\n ])\n \n def parameters(self):\n return self.params\n \n def export(self, fileName=None):\n \n if not isinstance(self.item, PlotItem):\n raise Exception(\"Must have a PlotItem selected for CSV export.\")\n \n if fileName is None:\n self.fileSaveDialog(filter=[\"*.csv\", \"*.tsv\"])\n return\n\n data = []\n header = []\n\n appendAllX = self.params['columnMode'] == '(x,y) per plot'\n\n for i, c in enumerate(self.item.curves):\n cd = c.getData()\n if cd[0] is None:\n continue\n data.append(cd)\n if hasattr(c, 'implements') and c.implements('plotData') and c.name() is not None:\n name = c.name().replace('\"', '\"\"') + '_'\n xName, yName = '\"'+name+'x\"', '\"'+name+'y\"'\n else:\n xName = 'x%04d' % i\n yName = 'y%04d' % i\n if appendAllX or i == 0:\n header.extend([xName, yName])\n else:\n header.extend([yName])\n\n if self.params['separator'] == 'comma':\n sep = ','\n else:\n sep = '\\t'\n\n with open(fileName, 'w') as fd:\n fd.write(sep.join(map(asUnicode, header)) + '\\n')\n i = 0\n numFormat = '%%0.%dg' % self.params['precision']\n numRows = max([len(d[0]) for d in data])\n for i in range(numRows):\n for j, d in enumerate(data):\n # write x value if this is the first column, or if we want\n # x for all rows\n if appendAllX or j == 0:\n if d is not None and i < len(d[0]):\n fd.write(numFormat % d[0][i] + sep)\n else:\n fd.write(' %s' % sep)\n\n # write y value\n if d is not None and i < len(d[1]):\n fd.write(numFormat % d[1][i] + sep)\n else:\n fd.write(' %s' % sep)\n fd.write('\\n')\n\n\nCSVExporter.register() \n \n \n", "path": "pyqtgraph/exporters/CSVExporter.py"}]}
| 1,690 | 197 |
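
The committed fix runs every header entry through `asUnicode` before joining. The traceback itself, though, comes from `open(fileName, 'w')` picking up the locale codec (gbk on the reporter's machine), so the underlying failure is easy to reproduce and to sidestep with an explicit encoding. The snippet below is a standalone illustration of that failure mode, not part of the pyqtgraph patch; the file name is a placeholder.

```python
# Standalone illustration of the failure (not the pyqtgraph patch itself).
# "export.csv" is just a placeholder file name.
header = ['"\u00a0下加热体_x"', '"\u00a0下加热体_y"']

try:
    # Mimics open(fileName, 'w') on a system whose locale codec is gbk:
    # U+00A0 has no gbk mapping, so the write raises UnicodeEncodeError.
    with open("export.csv", "w", encoding="gbk") as fd:
        fd.write(",".join(header) + "\n")
except UnicodeEncodeError as exc:
    print("locale codec cannot represent the header:", exc)

# An explicit encoding sidesteps the locale entirely.
with open("export.csv", "w", encoding="utf-8") as fd:
    fd.write(",".join(header) + "\n")
```
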
gh_patches_debug_36559
|
rasdani/github-patches
|
git_diff
|
svthalia__concrexit-2930
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add API endpoint for event slugs
### Is your feature request related to a problem? Please describe.
For the app we want to get events based on their slug, which is currently not possible.
### Describe the solution you'd like
Add an API endpoint for event slugs.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `website/events/api/v2/urls.py`
Content:
```
1 """Events app API v2 urls."""
2 from django.urls import path
3
4 from events.api.v2.views import (
5 EventDetailView,
6 EventListView,
7 EventRegistrationDetailView,
8 EventRegistrationFieldsView,
9 EventRegistrationsView,
10 ExternalEventDetailView,
11 ExternalEventListView,
12 MarkPresentAPIView,
13 )
14
15 app_name = "events"
16
17 urlpatterns = [
18 path("events/", EventListView.as_view(), name="events-list"),
19 path(
20 "events/<int:pk>/",
21 EventDetailView.as_view(),
22 name="event-detail",
23 ),
24 path(
25 "events/<int:pk>/registrations/",
26 EventRegistrationsView.as_view(),
27 name="event-registrations",
28 ),
29 path(
30 "events/<int:event_id>/registrations/<int:pk>/",
31 EventRegistrationDetailView.as_view(),
32 name="event-registration-detail",
33 ),
34 path(
35 "events/<int:event_id>/registrations/<int:registration_id>/fields/",
36 EventRegistrationFieldsView.as_view(),
37 name="event-registration-fields",
38 ),
39 path(
40 "events/<int:pk>/mark-present/<uuid:token>/",
41 MarkPresentAPIView.as_view(),
42 name="mark-present",
43 ),
44 path(
45 "events/external/", ExternalEventListView.as_view(), name="external-events-list"
46 ),
47 path(
48 "events/external/<int:pk>/",
49 ExternalEventDetailView.as_view(),
50 name="external-event-detail",
51 ),
52 ]
53
```
Path: `website/events/api/v2/serializers/event.py`
Content:
```
1 from rest_framework import serializers
2
3 from activemembers.api.v2.serializers.member_group import MemberGroupSerializer
4 from documents.api.v2.serializers.document import DocumentSerializer
5 from events import services
6 from events.api.v2.serializers.event_registration import EventRegistrationSerializer
7 from events.models import Event
8 from payments.api.v2.serializers.payment_amount import PaymentAmountSerializer
9 from thaliawebsite.api.v2.serializers import CleanedHTMLSerializer
10 from thaliawebsite.api.v2.serializers.cleaned_model_serializer import (
11 CleanedModelSerializer,
12 )
13 from utils.snippets import create_google_maps_url
14
15
16 class EventSerializer(CleanedModelSerializer):
17 """Serializer for events."""
18
19 class Meta:
20 model = Event
21 fields = (
22 "pk",
23 "title",
24 "description",
25 "caption",
26 "start",
27 "end",
28 "category",
29 "registration_start",
30 "registration_end",
31 "cancel_deadline",
32 "optional_registrations",
33 "location",
34 "price",
35 "fine",
36 "num_participants",
37 "max_participants",
38 "no_registration_message",
39 "registration_status",
40 "cancel_too_late_message",
41 "has_fields",
42 "food_event",
43 "maps_url",
44 "user_permissions",
45 "user_registration",
46 "organisers",
47 "documents",
48 )
49
50 description = CleanedHTMLSerializer()
51 organisers = MemberGroupSerializer(many=True)
52 user_registration = serializers.SerializerMethodField("_user_registration")
53 num_participants = serializers.SerializerMethodField("_num_participants")
54 maps_url = serializers.SerializerMethodField("_maps_url")
55 registration_status = serializers.SerializerMethodField("_registration_status")
56 price = PaymentAmountSerializer()
57 fine = PaymentAmountSerializer()
58 documents = DocumentSerializer(many=True)
59 user_permissions = serializers.SerializerMethodField("_user_permissions")
60
61 def _user_registration(self, instance: Event):
62 if self.context["request"].member and len(instance.member_registration) > 0:
63 registration = instance.member_registration[-1]
64 return EventRegistrationSerializer(
65 registration,
66 context=self.context,
67 fields=(
68 "pk",
69 "present",
70 "queue_position",
71 "is_cancelled",
72 "is_late_cancellation",
73 "date",
74 "payment",
75 ),
76 ).data
77 return None
78
79 def _registration_status(self, instance: Event):
80 if self.context["request"].member and len(instance.member_registration) > 0:
81 registration = instance.member_registration[-1]
82 else:
83 registration = None
84 status = services.registration_status(
85 instance, registration, self.context["request"].member
86 )
87 cancel_status = services.cancel_status(instance, registration)
88
89 status_str = services.registration_status_string(status, instance, registration)
90 cancel_str = services.cancel_info_string(instance, cancel_status, status)
91 if services.show_cancel_status(status) and cancel_str != "":
92 return f"{status_str} {cancel_str}"
93 return f"{status_str}"
94
95 def _num_participants(self, instance: Event):
96 if instance.max_participants:
97 return min(instance.participant_count, instance.max_participants)
98 return instance.participant_count
99
100 def _user_permissions(self, instance):
101 member = self.context["request"].member
102 return services.event_permissions(member, instance, registration_prefetch=True)
103
104 def _maps_url(self, instance):
105 return create_google_maps_url(instance.map_location, zoom=13, size="450x250")
106
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/website/events/api/v2/serializers/event.py b/website/events/api/v2/serializers/event.py
--- a/website/events/api/v2/serializers/event.py
+++ b/website/events/api/v2/serializers/event.py
@@ -1,4 +1,5 @@
from rest_framework import serializers
+from rest_framework.reverse import reverse
from activemembers.api.v2.serializers.member_group import MemberGroupSerializer
from documents.api.v2.serializers.document import DocumentSerializer
@@ -20,6 +21,8 @@
model = Event
fields = (
"pk",
+ "slug",
+ "url",
"title",
"description",
"caption",
@@ -57,6 +60,7 @@
fine = PaymentAmountSerializer()
documents = DocumentSerializer(many=True)
user_permissions = serializers.SerializerMethodField("_user_permissions")
+ url = serializers.SerializerMethodField("_url")
def _user_registration(self, instance: Event):
if self.context["request"].member and len(instance.member_registration) > 0:
@@ -101,5 +105,18 @@
member = self.context["request"].member
return services.event_permissions(member, instance, registration_prefetch=True)
+ def _url(self, instance: Event):
+ if instance.slug is None:
+ return reverse(
+ "events:event",
+ kwargs={"pk": instance.pk},
+ request=self.context["request"],
+ )
+ return reverse(
+ "events:event",
+ kwargs={"slug": instance.slug},
+ request=self.context["request"],
+ )
+
def _maps_url(self, instance):
return create_google_maps_url(instance.map_location, zoom=13, size="450x250")
diff --git a/website/events/api/v2/urls.py b/website/events/api/v2/urls.py
--- a/website/events/api/v2/urls.py
+++ b/website/events/api/v2/urls.py
@@ -21,6 +21,11 @@
EventDetailView.as_view(),
name="event-detail",
),
+ path(
+ "events/<slug:slug>/",
+ EventDetailView.as_view(lookup_field="slug"),
+ name="event-detail",
+ ),
path(
"events/<int:pk>/registrations/",
EventRegistrationsView.as_view(),
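To see what the patch buys a client, here is a minimal request against the new slug route (sketch only: the host, token and slug are placeholders, and the `/api/v2/` prefix is assumed from the urls module above):

```python
import requests  # illustrative client call; not part of the patch

BASE = "https://example.com/api/v2"                  # placeholder host
HEADERS = {"Authorization": "Token <api-token>"}     # placeholder credentials

# The detail view is now reachable by slug as well as by pk.
by_slug = requests.get(f"{BASE}/events/open-day-2023/", headers=HEADERS).json()
by_pk = requests.get(f"{BASE}/events/{by_slug['pk']}/", headers=HEADERS).json()
assert by_slug["url"] == by_pk["url"]                # `url` is the new SerializerMethodField
```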
|
{"golden_diff": "diff --git a/website/events/api/v2/serializers/event.py b/website/events/api/v2/serializers/event.py\n--- a/website/events/api/v2/serializers/event.py\n+++ b/website/events/api/v2/serializers/event.py\n@@ -1,4 +1,5 @@\n from rest_framework import serializers\n+from rest_framework.reverse import reverse\n \n from activemembers.api.v2.serializers.member_group import MemberGroupSerializer\n from documents.api.v2.serializers.document import DocumentSerializer\n@@ -20,6 +21,8 @@\n model = Event\n fields = (\n \"pk\",\n+ \"slug\",\n+ \"url\",\n \"title\",\n \"description\",\n \"caption\",\n@@ -57,6 +60,7 @@\n fine = PaymentAmountSerializer()\n documents = DocumentSerializer(many=True)\n user_permissions = serializers.SerializerMethodField(\"_user_permissions\")\n+ url = serializers.SerializerMethodField(\"_url\")\n \n def _user_registration(self, instance: Event):\n if self.context[\"request\"].member and len(instance.member_registration) > 0:\n@@ -101,5 +105,18 @@\n member = self.context[\"request\"].member\n return services.event_permissions(member, instance, registration_prefetch=True)\n \n+ def _url(self, instance: Event):\n+ if instance.slug is None:\n+ return reverse(\n+ \"events:event\",\n+ kwargs={\"pk\": instance.pk},\n+ request=self.context[\"request\"],\n+ )\n+ return reverse(\n+ \"events:event\",\n+ kwargs={\"slug\": instance.slug},\n+ request=self.context[\"request\"],\n+ )\n+\n def _maps_url(self, instance):\n return create_google_maps_url(instance.map_location, zoom=13, size=\"450x250\")\ndiff --git a/website/events/api/v2/urls.py b/website/events/api/v2/urls.py\n--- a/website/events/api/v2/urls.py\n+++ b/website/events/api/v2/urls.py\n@@ -21,6 +21,11 @@\n EventDetailView.as_view(),\n name=\"event-detail\",\n ),\n+ path(\n+ \"events/<slug:slug>/\",\n+ EventDetailView.as_view(lookup_field=\"slug\"),\n+ name=\"event-detail\",\n+ ),\n path(\n \"events/<int:pk>/registrations/\",\n EventRegistrationsView.as_view(),\n", "issue": "Add API endpoint for event slugs\n### Is your feature request related to a problem? 
Please describe.\r\nFor the app we want to get events based on their slug, this is currently not possible.\r\n\r\n### Describe the solution you'd like\r\nAdd an API endpoint for event slugs.\r\n\n", "before_files": [{"content": "\"\"\"Events app API v2 urls.\"\"\"\nfrom django.urls import path\n\nfrom events.api.v2.views import (\n EventDetailView,\n EventListView,\n EventRegistrationDetailView,\n EventRegistrationFieldsView,\n EventRegistrationsView,\n ExternalEventDetailView,\n ExternalEventListView,\n MarkPresentAPIView,\n)\n\napp_name = \"events\"\n\nurlpatterns = [\n path(\"events/\", EventListView.as_view(), name=\"events-list\"),\n path(\n \"events/<int:pk>/\",\n EventDetailView.as_view(),\n name=\"event-detail\",\n ),\n path(\n \"events/<int:pk>/registrations/\",\n EventRegistrationsView.as_view(),\n name=\"event-registrations\",\n ),\n path(\n \"events/<int:event_id>/registrations/<int:pk>/\",\n EventRegistrationDetailView.as_view(),\n name=\"event-registration-detail\",\n ),\n path(\n \"events/<int:event_id>/registrations/<int:registration_id>/fields/\",\n EventRegistrationFieldsView.as_view(),\n name=\"event-registration-fields\",\n ),\n path(\n \"events/<int:pk>/mark-present/<uuid:token>/\",\n MarkPresentAPIView.as_view(),\n name=\"mark-present\",\n ),\n path(\n \"events/external/\", ExternalEventListView.as_view(), name=\"external-events-list\"\n ),\n path(\n \"events/external/<int:pk>/\",\n ExternalEventDetailView.as_view(),\n name=\"external-event-detail\",\n ),\n]\n", "path": "website/events/api/v2/urls.py"}, {"content": "from rest_framework import serializers\n\nfrom activemembers.api.v2.serializers.member_group import MemberGroupSerializer\nfrom documents.api.v2.serializers.document import DocumentSerializer\nfrom events import services\nfrom events.api.v2.serializers.event_registration import EventRegistrationSerializer\nfrom events.models import Event\nfrom payments.api.v2.serializers.payment_amount import PaymentAmountSerializer\nfrom thaliawebsite.api.v2.serializers import CleanedHTMLSerializer\nfrom thaliawebsite.api.v2.serializers.cleaned_model_serializer import (\n CleanedModelSerializer,\n)\nfrom utils.snippets import create_google_maps_url\n\n\nclass EventSerializer(CleanedModelSerializer):\n \"\"\"Serializer for events.\"\"\"\n\n class Meta:\n model = Event\n fields = (\n \"pk\",\n \"title\",\n \"description\",\n \"caption\",\n \"start\",\n \"end\",\n \"category\",\n \"registration_start\",\n \"registration_end\",\n \"cancel_deadline\",\n \"optional_registrations\",\n \"location\",\n \"price\",\n \"fine\",\n \"num_participants\",\n \"max_participants\",\n \"no_registration_message\",\n \"registration_status\",\n \"cancel_too_late_message\",\n \"has_fields\",\n \"food_event\",\n \"maps_url\",\n \"user_permissions\",\n \"user_registration\",\n \"organisers\",\n \"documents\",\n )\n\n description = CleanedHTMLSerializer()\n organisers = MemberGroupSerializer(many=True)\n user_registration = serializers.SerializerMethodField(\"_user_registration\")\n num_participants = serializers.SerializerMethodField(\"_num_participants\")\n maps_url = serializers.SerializerMethodField(\"_maps_url\")\n registration_status = serializers.SerializerMethodField(\"_registration_status\")\n price = PaymentAmountSerializer()\n fine = PaymentAmountSerializer()\n documents = DocumentSerializer(many=True)\n user_permissions = serializers.SerializerMethodField(\"_user_permissions\")\n\n def _user_registration(self, instance: Event):\n if self.context[\"request\"].member and 
len(instance.member_registration) > 0:\n registration = instance.member_registration[-1]\n return EventRegistrationSerializer(\n registration,\n context=self.context,\n fields=(\n \"pk\",\n \"present\",\n \"queue_position\",\n \"is_cancelled\",\n \"is_late_cancellation\",\n \"date\",\n \"payment\",\n ),\n ).data\n return None\n\n def _registration_status(self, instance: Event):\n if self.context[\"request\"].member and len(instance.member_registration) > 0:\n registration = instance.member_registration[-1]\n else:\n registration = None\n status = services.registration_status(\n instance, registration, self.context[\"request\"].member\n )\n cancel_status = services.cancel_status(instance, registration)\n\n status_str = services.registration_status_string(status, instance, registration)\n cancel_str = services.cancel_info_string(instance, cancel_status, status)\n if services.show_cancel_status(status) and cancel_str != \"\":\n return f\"{status_str} {cancel_str}\"\n return f\"{status_str}\"\n\n def _num_participants(self, instance: Event):\n if instance.max_participants:\n return min(instance.participant_count, instance.max_participants)\n return instance.participant_count\n\n def _user_permissions(self, instance):\n member = self.context[\"request\"].member\n return services.event_permissions(member, instance, registration_prefetch=True)\n\n def _maps_url(self, instance):\n return create_google_maps_url(instance.map_location, zoom=13, size=\"450x250\")\n", "path": "website/events/api/v2/serializers/event.py"}], "after_files": [{"content": "\"\"\"Events app API v2 urls.\"\"\"\nfrom django.urls import path\n\nfrom events.api.v2.views import (\n EventDetailView,\n EventListView,\n EventRegistrationDetailView,\n EventRegistrationFieldsView,\n EventRegistrationsView,\n ExternalEventDetailView,\n ExternalEventListView,\n MarkPresentAPIView,\n)\n\napp_name = \"events\"\n\nurlpatterns = [\n path(\"events/\", EventListView.as_view(), name=\"events-list\"),\n path(\n \"events/<int:pk>/\",\n EventDetailView.as_view(),\n name=\"event-detail\",\n ),\n path(\n \"events/<slug:slug>/\",\n EventDetailView.as_view(lookup_field=\"slug\"),\n name=\"event-detail\",\n ),\n path(\n \"events/<int:pk>/registrations/\",\n EventRegistrationsView.as_view(),\n name=\"event-registrations\",\n ),\n path(\n \"events/<int:event_id>/registrations/<int:pk>/\",\n EventRegistrationDetailView.as_view(),\n name=\"event-registration-detail\",\n ),\n path(\n \"events/<int:event_id>/registrations/<int:registration_id>/fields/\",\n EventRegistrationFieldsView.as_view(),\n name=\"event-registration-fields\",\n ),\n path(\n \"events/<int:pk>/mark-present/<uuid:token>/\",\n MarkPresentAPIView.as_view(),\n name=\"mark-present\",\n ),\n path(\n \"events/external/\", ExternalEventListView.as_view(), name=\"external-events-list\"\n ),\n path(\n \"events/external/<int:pk>/\",\n ExternalEventDetailView.as_view(),\n name=\"external-event-detail\",\n ),\n]\n", "path": "website/events/api/v2/urls.py"}, {"content": "from rest_framework import serializers\nfrom rest_framework.reverse import reverse\n\nfrom activemembers.api.v2.serializers.member_group import MemberGroupSerializer\nfrom documents.api.v2.serializers.document import DocumentSerializer\nfrom events import services\nfrom events.api.v2.serializers.event_registration import EventRegistrationSerializer\nfrom events.models import Event\nfrom payments.api.v2.serializers.payment_amount import PaymentAmountSerializer\nfrom thaliawebsite.api.v2.serializers import CleanedHTMLSerializer\nfrom 
thaliawebsite.api.v2.serializers.cleaned_model_serializer import (\n CleanedModelSerializer,\n)\nfrom utils.snippets import create_google_maps_url\n\n\nclass EventSerializer(CleanedModelSerializer):\n \"\"\"Serializer for events.\"\"\"\n\n class Meta:\n model = Event\n fields = (\n \"pk\",\n \"slug\",\n \"url\",\n \"title\",\n \"description\",\n \"caption\",\n \"start\",\n \"end\",\n \"category\",\n \"registration_start\",\n \"registration_end\",\n \"cancel_deadline\",\n \"optional_registrations\",\n \"location\",\n \"price\",\n \"fine\",\n \"num_participants\",\n \"max_participants\",\n \"no_registration_message\",\n \"registration_status\",\n \"cancel_too_late_message\",\n \"has_fields\",\n \"food_event\",\n \"maps_url\",\n \"user_permissions\",\n \"user_registration\",\n \"organisers\",\n \"documents\",\n )\n\n description = CleanedHTMLSerializer()\n organisers = MemberGroupSerializer(many=True)\n user_registration = serializers.SerializerMethodField(\"_user_registration\")\n num_participants = serializers.SerializerMethodField(\"_num_participants\")\n maps_url = serializers.SerializerMethodField(\"_maps_url\")\n registration_status = serializers.SerializerMethodField(\"_registration_status\")\n price = PaymentAmountSerializer()\n fine = PaymentAmountSerializer()\n documents = DocumentSerializer(many=True)\n user_permissions = serializers.SerializerMethodField(\"_user_permissions\")\n url = serializers.SerializerMethodField(\"_url\")\n\n def _user_registration(self, instance: Event):\n if self.context[\"request\"].member and len(instance.member_registration) > 0:\n registration = instance.member_registration[-1]\n return EventRegistrationSerializer(\n registration,\n context=self.context,\n fields=(\n \"pk\",\n \"present\",\n \"queue_position\",\n \"is_cancelled\",\n \"is_late_cancellation\",\n \"date\",\n \"payment\",\n ),\n ).data\n return None\n\n def _registration_status(self, instance: Event):\n if self.context[\"request\"].member and len(instance.member_registration) > 0:\n registration = instance.member_registration[-1]\n else:\n registration = None\n status = services.registration_status(\n instance, registration, self.context[\"request\"].member\n )\n cancel_status = services.cancel_status(instance, registration)\n\n status_str = services.registration_status_string(status, instance, registration)\n cancel_str = services.cancel_info_string(instance, cancel_status, status)\n if services.show_cancel_status(status) and cancel_str != \"\":\n return f\"{status_str} {cancel_str}\"\n return f\"{status_str}\"\n\n def _num_participants(self, instance: Event):\n if instance.max_participants:\n return min(instance.participant_count, instance.max_participants)\n return instance.participant_count\n\n def _user_permissions(self, instance):\n member = self.context[\"request\"].member\n return services.event_permissions(member, instance, registration_prefetch=True)\n\n def _url(self, instance: Event):\n if instance.slug is None:\n return reverse(\n \"events:event\",\n kwargs={\"pk\": instance.pk},\n request=self.context[\"request\"],\n )\n return reverse(\n \"events:event\",\n kwargs={\"slug\": instance.slug},\n request=self.context[\"request\"],\n )\n\n def _maps_url(self, instance):\n return create_google_maps_url(instance.map_location, zoom=13, size=\"450x250\")\n", "path": "website/events/api/v2/serializers/event.py"}]}
| 1,691 | 524 |
gh_patches_debug_34713
|
rasdani/github-patches
|
git_diff
|
pyqtgraph__pyqtgraph-2971
|
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
FillBetweenItem has no way to change FillRule
<!-- In the following, please describe your issue in detail! -->
<!-- If some of the sections do not apply, just remove them. -->
### Short description
There is currently no way (at least that I have found) to change the fill rule for the QPainterPath in the FillBetweenItem. Being able to set it to winding would be very useful for certain cases.
### Code to reproduce
<!-- Please provide a minimal working example that reproduces the issue in the code block below.
Ideally, this should be a full example someone else could run without additional setup. -->
```python
import pyqtgraph as pg
from PySide2.QtWidgets import QApplication
win = pg.plot()
win.setWindowTitle('pyqtgraph example: FillBetweenItem')
win.setXRange(0, 1.5)
win.setYRange(0, 1.5)
x1=[0,1,1,0,0]
y1=[0,0,1,1,0]
x2=[0.5,1.5,1.5,0.5,0.5]
y2=[0.5,0.5,1.5,1.5,0.5]
curve1 = win.plot(x=x1, y=y1, pen='k')
curve2 = win.plot(x=x2, y=y2, pen='k')
brushes = [0.5, (100, 100, 255), 0.5]
fill = pg.FillBetweenItem(curve1, curve2,brush=(100,100,255))
win.addItem(fill)
## Start Qt event loop unless running in interactive mode or using pyside.
if __name__ == '__main__':
QApplication.instance().exec_()
```
### Expected behavior
Fill in the overlap
### Real behavior
Hole in the middle.
### Tested environment(s)
* PyQtGraph version: 0.12.1
* Qt Python binding: PyQt5 5.15.4 Qt 5.15.2
* Python version: 3.7.7
* NumPy version: 1.20.2
* Operating system: Windows 10
* Installation method: PIP
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pyqtgraph/examples/FillBetweenItem.py`
Content:
```
1 """
2 Demonstrates use of FillBetweenItem to fill the space between two plot curves.
3 """
4
5 import numpy as np
6
7 import pyqtgraph as pg
8 from pyqtgraph.Qt import QtCore
9
10 #FIXME: When running on Qt5, not as perfect as on Qt4
11
12 win = pg.plot()
13 win.setWindowTitle('pyqtgraph example: FillBetweenItem')
14 win.setXRange(-10, 10)
15 win.setYRange(-10, 10)
16
17 N = 200
18 x = np.linspace(-10, 10, N)
19 gauss = np.exp(-x**2 / 20.)
20 mn = mx = np.zeros(len(x))
21 curves = [win.plot(x=x, y=np.zeros(len(x)), pen='k') for i in range(4)]
22 brushes = [0.5, (100, 100, 255), 0.5]
23 fills = [pg.FillBetweenItem(curves[i], curves[i+1], brushes[i]) for i in range(3)]
24 for f in fills:
25 win.addItem(f)
26
27 def update():
28 global mx, mn, curves, gauss, x
29 a = 5 / abs(np.random.normal(loc=1, scale=0.2))
30 y1 = -np.abs(a*gauss + np.random.normal(size=len(x)))
31 y2 = np.abs(a*gauss + np.random.normal(size=len(x)))
32
33 s = 0.01
34 mn = np.where(y1<mn, y1, mn) * (1-s) + y1 * s
35 mx = np.where(y2>mx, y2, mx) * (1-s) + y2 * s
36 curves[0].setData(x, mn)
37 curves[1].setData(x, y1)
38 curves[2].setData(x, y2)
39 curves[3].setData(x, mx)
40
41
42 timer = QtCore.QTimer()
43 timer.timeout.connect(update)
44 timer.start(30)
45
46
47 if __name__ == '__main__':
48 pg.exec()
49
```
Path: `pyqtgraph/graphicsItems/FillBetweenItem.py`
Content:
```
1 from .. import functions as fn
2 from ..Qt import QtGui, QtWidgets
3 from .PlotCurveItem import PlotCurveItem
4 from .PlotDataItem import PlotDataItem
5
6 __all__ = ['FillBetweenItem']
7
8 class FillBetweenItem(QtWidgets.QGraphicsPathItem):
9 """
10 GraphicsItem filling the space between two PlotDataItems.
11 """
12 def __init__(self, curve1=None, curve2=None, brush=None, pen=None):
13 QtWidgets.QGraphicsPathItem.__init__(self)
14 self.curves = None
15 if curve1 is not None and curve2 is not None:
16 self.setCurves(curve1, curve2)
17 elif curve1 is not None or curve2 is not None:
18 raise Exception("Must specify two curves to fill between.")
19
20 if brush is not None:
21 self.setBrush(brush)
22 self.setPen(pen)
23 self.updatePath()
24
25 def setBrush(self, *args, **kwds):
26 """Change the fill brush. Acceps the same arguments as pg.mkBrush()"""
27 QtWidgets.QGraphicsPathItem.setBrush(self, fn.mkBrush(*args, **kwds))
28
29 def setPen(self, *args, **kwds):
30 QtWidgets.QGraphicsPathItem.setPen(self, fn.mkPen(*args, **kwds))
31
32 def setCurves(self, curve1, curve2):
33 """Set the curves to fill between.
34
35 Arguments must be instances of PlotDataItem or PlotCurveItem.
36
37 Added in version 0.9.9
38 """
39 if self.curves is not None:
40 for c in self.curves:
41 try:
42 c.sigPlotChanged.disconnect(self.curveChanged)
43 except (TypeError, RuntimeError):
44 pass
45
46 curves = [curve1, curve2]
47 for c in curves:
48 if not isinstance(c, PlotDataItem) and not isinstance(c, PlotCurveItem):
49 raise TypeError("Curves must be PlotDataItem or PlotCurveItem.")
50 self.curves = curves
51 curve1.sigPlotChanged.connect(self.curveChanged)
52 curve2.sigPlotChanged.connect(self.curveChanged)
53 self.setZValue(min(curve1.zValue(), curve2.zValue())-1)
54 self.curveChanged()
55
56 def curveChanged(self):
57 self.updatePath()
58
59 def updatePath(self):
60 if self.curves is None:
61 self.setPath(QtGui.QPainterPath())
62 return
63 paths = []
64 for c in self.curves:
65 if isinstance(c, PlotDataItem):
66 paths.append(c.curve.getPath())
67 elif isinstance(c, PlotCurveItem):
68 paths.append(c.getPath())
69
70 path = QtGui.QPainterPath()
71 transform = QtGui.QTransform()
72 ps1 = paths[0].toSubpathPolygons(transform)
73 ps2 = paths[1].toReversed().toSubpathPolygons(transform)
74 ps2.reverse()
75 if len(ps1) == 0 or len(ps2) == 0:
76 self.setPath(QtGui.QPainterPath())
77 return
78
79
80 for p1, p2 in zip(ps1, ps2):
81 path.addPolygon(p1 + p2)
82 self.setPath(path)
83
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/pyqtgraph/examples/FillBetweenItem.py b/pyqtgraph/examples/FillBetweenItem.py
--- a/pyqtgraph/examples/FillBetweenItem.py
+++ b/pyqtgraph/examples/FillBetweenItem.py
@@ -20,7 +20,8 @@
mn = mx = np.zeros(len(x))
curves = [win.plot(x=x, y=np.zeros(len(x)), pen='k') for i in range(4)]
brushes = [0.5, (100, 100, 255), 0.5]
-fills = [pg.FillBetweenItem(curves[i], curves[i+1], brushes[i]) for i in range(3)]
+fills = [pg.FillBetweenItem(curves[0], curves[3], brushes[0]),
+ pg.FillBetweenItem(curves[1], curves[2], brushes[1])]
for f in fills:
win.addItem(f)
diff --git a/pyqtgraph/graphicsItems/FillBetweenItem.py b/pyqtgraph/graphicsItems/FillBetweenItem.py
--- a/pyqtgraph/graphicsItems/FillBetweenItem.py
+++ b/pyqtgraph/graphicsItems/FillBetweenItem.py
@@ -23,7 +23,7 @@
self.updatePath()
def setBrush(self, *args, **kwds):
- """Change the fill brush. Acceps the same arguments as pg.mkBrush()"""
+ """Change the fill brush. Accepts the same arguments as pg.mkBrush()"""
QtWidgets.QGraphicsPathItem.setBrush(self, fn.mkBrush(*args, **kwds))
def setPen(self, *args, **kwds):
@@ -55,7 +55,6 @@
def curveChanged(self):
self.updatePath()
-
def updatePath(self):
if self.curves is None:
self.setPath(QtGui.QPainterPath())
@@ -69,14 +68,18 @@
path = QtGui.QPainterPath()
transform = QtGui.QTransform()
+
ps1 = paths[0].toSubpathPolygons(transform)
ps2 = paths[1].toReversed().toSubpathPolygons(transform)
ps2.reverse()
+
if len(ps1) == 0 or len(ps2) == 0:
self.setPath(QtGui.QPainterPath())
return
-
for p1, p2 in zip(ps1, ps2):
- path.addPolygon(p1 + p2)
+ intersection = p1.intersected(p2)
+ if not intersection.isEmpty():
+ path.addPolygon(intersection)
+ path.addPolygon(p1 + p2)
self.setPath(path)
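Note that the merged change closes the hole by also adding the intersection polygon rather than exposing Qt's fill rule. If the winding behaviour the issue asks for is really what you want, a small subclass is one possible route (sketch; the scoped enum spelling below is Qt6-style, while Qt5 bindings use `QtCore.Qt.WindingFill`):

```python
import pyqtgraph as pg
from pyqtgraph.Qt import QtCore, QtGui

class WindingFillBetweenItem(pg.FillBetweenItem):
    """FillBetweenItem variant whose path fills self-overlapping regions."""

    def updatePath(self):
        super().updatePath()                              # build the path as usual
        path = QtGui.QPainterPath(self.path())            # copy, then change the fill rule
        path.setFillRule(QtCore.Qt.FillRule.WindingFill)  # Qt5: QtCore.Qt.WindingFill
        self.setPath(path)
```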
|
{"golden_diff": "diff --git a/pyqtgraph/examples/FillBetweenItem.py b/pyqtgraph/examples/FillBetweenItem.py\n--- a/pyqtgraph/examples/FillBetweenItem.py\n+++ b/pyqtgraph/examples/FillBetweenItem.py\n@@ -20,7 +20,8 @@\n mn = mx = np.zeros(len(x))\n curves = [win.plot(x=x, y=np.zeros(len(x)), pen='k') for i in range(4)]\n brushes = [0.5, (100, 100, 255), 0.5]\n-fills = [pg.FillBetweenItem(curves[i], curves[i+1], brushes[i]) for i in range(3)]\n+fills = [pg.FillBetweenItem(curves[0], curves[3], brushes[0]),\n+ pg.FillBetweenItem(curves[1], curves[2], brushes[1])]\n for f in fills:\n win.addItem(f)\n \ndiff --git a/pyqtgraph/graphicsItems/FillBetweenItem.py b/pyqtgraph/graphicsItems/FillBetweenItem.py\n--- a/pyqtgraph/graphicsItems/FillBetweenItem.py\n+++ b/pyqtgraph/graphicsItems/FillBetweenItem.py\n@@ -23,7 +23,7 @@\n self.updatePath()\n \n def setBrush(self, *args, **kwds):\n- \"\"\"Change the fill brush. Acceps the same arguments as pg.mkBrush()\"\"\"\n+ \"\"\"Change the fill brush. Accepts the same arguments as pg.mkBrush()\"\"\"\n QtWidgets.QGraphicsPathItem.setBrush(self, fn.mkBrush(*args, **kwds))\n \n def setPen(self, *args, **kwds):\n@@ -55,7 +55,6 @@\n \n def curveChanged(self):\n self.updatePath()\n-\n def updatePath(self):\n if self.curves is None:\n self.setPath(QtGui.QPainterPath())\n@@ -69,14 +68,18 @@\n \n path = QtGui.QPainterPath()\n transform = QtGui.QTransform()\n+\n ps1 = paths[0].toSubpathPolygons(transform)\n ps2 = paths[1].toReversed().toSubpathPolygons(transform)\n ps2.reverse()\n+\n if len(ps1) == 0 or len(ps2) == 0:\n self.setPath(QtGui.QPainterPath())\n return\n \n- \n for p1, p2 in zip(ps1, ps2):\n- path.addPolygon(p1 + p2)\n+ intersection = p1.intersected(p2)\n+ if not intersection.isEmpty():\n+ path.addPolygon(intersection)\n+ path.addPolygon(p1 + p2) \n self.setPath(path)\n", "issue": "FillBetweenItem has no way to change FillRule\n<!-- In the following, please describe your issue in detail! -->\r\n<!-- If some of the sections do not apply, just remove them. -->\r\n\r\n### Short description\r\nThere is currently no way (at least that I have found) to change the fillrule for the painterpath in the FillBetweenItem. being able to set it to winding would be very useful for certain cases.\r\n\r\n### Code to reproduce\r\n<!-- Please provide a minimal working example that reproduces the issue in the code block below.\r\n Ideally, this should be a full example someone else could run without additional setup. 
-->\r\n```python\r\nimport pyqtgraph as pg\r\nfrom PySide2.QtWidgets import QApplication\r\n\r\nwin = pg.plot()\r\nwin.setWindowTitle('pyqtgraph example: FillBetweenItem')\r\nwin.setXRange(0, 1.5)\r\nwin.setYRange(0, 1.5)\r\n\r\nx1=[0,1,1,0,0]\r\ny1=[0,0,1,1,0]\r\nx2=[0.5,1.5,1.5,0.5,0.5]\r\ny2=[0.5,0.5,1.5,1.5,0.5]\r\ncurve1 = win.plot(x=x1, y=y1, pen='k')\r\ncurve2 = win.plot(x=x2, y=y2, pen='k')\r\nbrushes = [0.5, (100, 100, 255), 0.5]\r\nfill = pg.FillBetweenItem(curve1, curve2,brush=(100,100,255))\r\nwin.addItem(fill)\r\n\r\n## Start Qt event loop unless running in interactive mode or using pyside.\r\nif __name__ == '__main__':\r\n QApplication.instance().exec_()\r\n```\r\n\r\n### Expected behavior\r\nFill in the overlap\r\n\r\n### Real behavior\r\nHole in the middle.\r\n\r\n\r\n### Tested environment(s)\r\n\r\n * PyQtGraph version: 0.12.1\r\n * Qt Python binding: PyQt5 5.15.4 Qt 5.15.2\r\n * Python version: 3.7.7\r\n * NumPy version: 1.20.2\r\n * Operating system: Windows 10\r\n * Installation method: PIP\n", "before_files": [{"content": "\"\"\"\nDemonstrates use of FillBetweenItem to fill the space between two plot curves.\n\"\"\"\n\nimport numpy as np\n\nimport pyqtgraph as pg\nfrom pyqtgraph.Qt import QtCore\n\n#FIXME: When running on Qt5, not as perfect as on Qt4\n\nwin = pg.plot()\nwin.setWindowTitle('pyqtgraph example: FillBetweenItem')\nwin.setXRange(-10, 10)\nwin.setYRange(-10, 10)\n\nN = 200\nx = np.linspace(-10, 10, N)\ngauss = np.exp(-x**2 / 20.)\nmn = mx = np.zeros(len(x))\ncurves = [win.plot(x=x, y=np.zeros(len(x)), pen='k') for i in range(4)]\nbrushes = [0.5, (100, 100, 255), 0.5]\nfills = [pg.FillBetweenItem(curves[i], curves[i+1], brushes[i]) for i in range(3)]\nfor f in fills:\n win.addItem(f)\n\ndef update():\n global mx, mn, curves, gauss, x\n a = 5 / abs(np.random.normal(loc=1, scale=0.2))\n y1 = -np.abs(a*gauss + np.random.normal(size=len(x)))\n y2 = np.abs(a*gauss + np.random.normal(size=len(x)))\n \n s = 0.01\n mn = np.where(y1<mn, y1, mn) * (1-s) + y1 * s\n mx = np.where(y2>mx, y2, mx) * (1-s) + y2 * s\n curves[0].setData(x, mn)\n curves[1].setData(x, y1)\n curves[2].setData(x, y2)\n curves[3].setData(x, mx)\n \n\ntimer = QtCore.QTimer()\ntimer.timeout.connect(update)\ntimer.start(30)\n\n\nif __name__ == '__main__':\n pg.exec()\n", "path": "pyqtgraph/examples/FillBetweenItem.py"}, {"content": "from .. import functions as fn\nfrom ..Qt import QtGui, QtWidgets\nfrom .PlotCurveItem import PlotCurveItem\nfrom .PlotDataItem import PlotDataItem\n\n__all__ = ['FillBetweenItem']\n\nclass FillBetweenItem(QtWidgets.QGraphicsPathItem):\n \"\"\"\n GraphicsItem filling the space between two PlotDataItems.\n \"\"\"\n def __init__(self, curve1=None, curve2=None, brush=None, pen=None):\n QtWidgets.QGraphicsPathItem.__init__(self)\n self.curves = None\n if curve1 is not None and curve2 is not None:\n self.setCurves(curve1, curve2)\n elif curve1 is not None or curve2 is not None:\n raise Exception(\"Must specify two curves to fill between.\")\n\n if brush is not None:\n self.setBrush(brush)\n self.setPen(pen)\n self.updatePath()\n \n def setBrush(self, *args, **kwds):\n \"\"\"Change the fill brush. 
Acceps the same arguments as pg.mkBrush()\"\"\"\n QtWidgets.QGraphicsPathItem.setBrush(self, fn.mkBrush(*args, **kwds))\n \n def setPen(self, *args, **kwds):\n QtWidgets.QGraphicsPathItem.setPen(self, fn.mkPen(*args, **kwds))\n\n def setCurves(self, curve1, curve2):\n \"\"\"Set the curves to fill between.\n \n Arguments must be instances of PlotDataItem or PlotCurveItem.\n \n Added in version 0.9.9\n \"\"\"\n if self.curves is not None:\n for c in self.curves:\n try:\n c.sigPlotChanged.disconnect(self.curveChanged)\n except (TypeError, RuntimeError):\n pass\n\n curves = [curve1, curve2]\n for c in curves:\n if not isinstance(c, PlotDataItem) and not isinstance(c, PlotCurveItem):\n raise TypeError(\"Curves must be PlotDataItem or PlotCurveItem.\")\n self.curves = curves\n curve1.sigPlotChanged.connect(self.curveChanged)\n curve2.sigPlotChanged.connect(self.curveChanged)\n self.setZValue(min(curve1.zValue(), curve2.zValue())-1)\n self.curveChanged()\n\n def curveChanged(self):\n self.updatePath()\n\n def updatePath(self):\n if self.curves is None:\n self.setPath(QtGui.QPainterPath())\n return\n paths = []\n for c in self.curves:\n if isinstance(c, PlotDataItem):\n paths.append(c.curve.getPath())\n elif isinstance(c, PlotCurveItem):\n paths.append(c.getPath())\n\n path = QtGui.QPainterPath()\n transform = QtGui.QTransform()\n ps1 = paths[0].toSubpathPolygons(transform)\n ps2 = paths[1].toReversed().toSubpathPolygons(transform)\n ps2.reverse()\n if len(ps1) == 0 or len(ps2) == 0:\n self.setPath(QtGui.QPainterPath())\n return\n \n \n for p1, p2 in zip(ps1, ps2):\n path.addPolygon(p1 + p2)\n self.setPath(path)\n", "path": "pyqtgraph/graphicsItems/FillBetweenItem.py"}], "after_files": [{"content": "\"\"\"\nDemonstrates use of FillBetweenItem to fill the space between two plot curves.\n\"\"\"\n\nimport numpy as np\n\nimport pyqtgraph as pg\nfrom pyqtgraph.Qt import QtCore\n\n#FIXME: When running on Qt5, not as perfect as on Qt4\n\nwin = pg.plot()\nwin.setWindowTitle('pyqtgraph example: FillBetweenItem')\nwin.setXRange(-10, 10)\nwin.setYRange(-10, 10)\n\nN = 200\nx = np.linspace(-10, 10, N)\ngauss = np.exp(-x**2 / 20.)\nmn = mx = np.zeros(len(x))\ncurves = [win.plot(x=x, y=np.zeros(len(x)), pen='k') for i in range(4)]\nbrushes = [0.5, (100, 100, 255), 0.5]\nfills = [pg.FillBetweenItem(curves[0], curves[3], brushes[0]),\n pg.FillBetweenItem(curves[1], curves[2], brushes[1])]\nfor f in fills:\n win.addItem(f)\n\ndef update():\n global mx, mn, curves, gauss, x\n a = 5 / abs(np.random.normal(loc=1, scale=0.2))\n y1 = -np.abs(a*gauss + np.random.normal(size=len(x)))\n y2 = np.abs(a*gauss + np.random.normal(size=len(x)))\n \n s = 0.01\n mn = np.where(y1<mn, y1, mn) * (1-s) + y1 * s\n mx = np.where(y2>mx, y2, mx) * (1-s) + y2 * s\n curves[0].setData(x, mn)\n curves[1].setData(x, y1)\n curves[2].setData(x, y2)\n curves[3].setData(x, mx)\n \n\ntimer = QtCore.QTimer()\ntimer.timeout.connect(update)\ntimer.start(30)\n\n\nif __name__ == '__main__':\n pg.exec()\n", "path": "pyqtgraph/examples/FillBetweenItem.py"}, {"content": "from .. 
import functions as fn\nfrom ..Qt import QtGui, QtWidgets\nfrom .PlotCurveItem import PlotCurveItem\nfrom .PlotDataItem import PlotDataItem\n\n__all__ = ['FillBetweenItem']\n\nclass FillBetweenItem(QtWidgets.QGraphicsPathItem):\n \"\"\"\n GraphicsItem filling the space between two PlotDataItems.\n \"\"\"\n def __init__(self, curve1=None, curve2=None, brush=None, pen=None):\n QtWidgets.QGraphicsPathItem.__init__(self)\n self.curves = None\n if curve1 is not None and curve2 is not None:\n self.setCurves(curve1, curve2)\n elif curve1 is not None or curve2 is not None:\n raise Exception(\"Must specify two curves to fill between.\")\n\n if brush is not None:\n self.setBrush(brush)\n self.setPen(pen)\n self.updatePath()\n \n def setBrush(self, *args, **kwds):\n \"\"\"Change the fill brush. Accepts the same arguments as pg.mkBrush()\"\"\"\n QtWidgets.QGraphicsPathItem.setBrush(self, fn.mkBrush(*args, **kwds))\n \n def setPen(self, *args, **kwds):\n QtWidgets.QGraphicsPathItem.setPen(self, fn.mkPen(*args, **kwds))\n\n def setCurves(self, curve1, curve2):\n \"\"\"Set the curves to fill between.\n \n Arguments must be instances of PlotDataItem or PlotCurveItem.\n \n Added in version 0.9.9\n \"\"\"\n if self.curves is not None:\n for c in self.curves:\n try:\n c.sigPlotChanged.disconnect(self.curveChanged)\n except (TypeError, RuntimeError):\n pass\n\n curves = [curve1, curve2]\n for c in curves:\n if not isinstance(c, PlotDataItem) and not isinstance(c, PlotCurveItem):\n raise TypeError(\"Curves must be PlotDataItem or PlotCurveItem.\")\n self.curves = curves\n curve1.sigPlotChanged.connect(self.curveChanged)\n curve2.sigPlotChanged.connect(self.curveChanged)\n self.setZValue(min(curve1.zValue(), curve2.zValue())-1)\n self.curveChanged()\n\n def curveChanged(self):\n self.updatePath()\n def updatePath(self):\n if self.curves is None:\n self.setPath(QtGui.QPainterPath())\n return\n paths = []\n for c in self.curves:\n if isinstance(c, PlotDataItem):\n paths.append(c.curve.getPath())\n elif isinstance(c, PlotCurveItem):\n paths.append(c.getPath())\n\n path = QtGui.QPainterPath()\n transform = QtGui.QTransform()\n\n ps1 = paths[0].toSubpathPolygons(transform)\n ps2 = paths[1].toReversed().toSubpathPolygons(transform)\n ps2.reverse()\n\n if len(ps1) == 0 or len(ps2) == 0:\n self.setPath(QtGui.QPainterPath())\n return\n \n for p1, p2 in zip(ps1, ps2):\n intersection = p1.intersected(p2)\n if not intersection.isEmpty():\n path.addPolygon(intersection)\n path.addPolygon(p1 + p2) \n self.setPath(path)\n", "path": "pyqtgraph/graphicsItems/FillBetweenItem.py"}]}
| 2,131 | 582 |
gh_patches_debug_36712 | rasdani/github-patches | git_diff | goauthentik__authentik-4769 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
DNS Resolution in Expressions
**Is your feature request related to a problem? Please describe.**
I would like to be able to resolve a hostname in expressions. In particular, my docker container resolves `host.docker.internal` to the network address of the machine the container is running on. I would like to be able to check for requests from this hostname, but there is no way to detect this in expressions.
**Describe the solution you'd like**
I would like a function to be exposed in expressions that resolves a hostname to an address or list of addresses. Such a function could be implemented in `authentik/lib/expression/evaluator.py` like this:
```py
def expr_resolve_host(host: str) -> List[str]:
"""Resolve a hostname to a list of IP addresses."""
return [
sockaddr[0]
for family, type, proto, canonname, sockaddr in
socket.getaddrinfo(host, None)
]
```
**Describe alternatives you've considered**
It would be possible for me to set up the docker network statically, and then hardcode the address, but this is fragile, and I would prefer for Docker to manage its network allocation.
**Additional context**
I currently have an expression that checks for requests from my local/trusted networks. This works well, but if I access Authentik directly from the docker host, the source address is the "host gateway" address of the docker container's network, which is not in my trusted networks. (The host address of the docker internal network.) I can imagine this functionality would also be useful for getting the addresses of other containers the docker container is connected to.
At my scale and use case, it doesn't really matter, but it may be worth caching this information for larger installations. I'm not sure if `socket.getaddrinfo` does any caching of its own, but if it decides to do real DNS resolution it could be quite slow. If an expression is getting hit tens or hundreds of times a minute, this could be a serious issue. (This could alternatively be resolved with a local DNS caching server on the host.)
Any caching should be conservatively short, at most 60 seconds or so, since a change in address could also cause serious problems. A timeout may also be in order, since some use cases would prefer a fast empty response over an always-correct answer. Ideally, these would be configurable with function parameters so users can determine their own caching and timeout needs.
--- END ISSUE ---
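The caching concern above maps fairly directly onto `cachetools`, which is also what the accepted patch further down reaches for; a sketch with the issue's suggested 60-second TTL (the merged code uses 180 seconds):

```python
import socket
from cachetools import TLRUCache, cached

@cached(cache=TLRUCache(maxsize=32, ttu=lambda key, value, now: now + 60))
def resolve_host(host: str) -> list[str]:
    """Resolve host to a de-duplicated list of addresses, caching results for 60 s."""
    try:
        return sorted({info[4][0] for info in socket.getaddrinfo(host, None)})
    except OSError:
        return []
```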
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `authentik/lib/expression/evaluator.py`
Content:
```
1 """authentik expression policy evaluator"""
2 import re
3 from ipaddress import ip_address, ip_network
4 from textwrap import indent
5 from typing import Any, Iterable, Optional
6
7 from django.core.exceptions import FieldError
8 from django_otp import devices_for_user
9 from rest_framework.serializers import ValidationError
10 from sentry_sdk.hub import Hub
11 from sentry_sdk.tracing import Span
12 from structlog.stdlib import get_logger
13
14 from authentik.core.models import User
15 from authentik.events.models import Event
16 from authentik.lib.utils.http import get_http_session
17 from authentik.policies.types import PolicyRequest
18
19 LOGGER = get_logger()
20
21
22 class BaseEvaluator:
23 """Validate and evaluate python-based expressions"""
24
25 # Globals that can be used by function
26 _globals: dict[str, Any]
27 # Context passed as locals to exec()
28 _context: dict[str, Any]
29
30 # Filename used for exec
31 _filename: str
32
33 def __init__(self, filename: Optional[str] = None):
34 self._filename = filename if filename else "BaseEvaluator"
35 # update website/docs/expressions/_objects.md
36 # update website/docs/expressions/_functions.md
37 self._globals = {
38 "regex_match": BaseEvaluator.expr_regex_match,
39 "regex_replace": BaseEvaluator.expr_regex_replace,
40 "list_flatten": BaseEvaluator.expr_flatten,
41 "ak_is_group_member": BaseEvaluator.expr_is_group_member,
42 "ak_user_by": BaseEvaluator.expr_user_by,
43 "ak_user_has_authenticator": BaseEvaluator.expr_func_user_has_authenticator,
44 "ak_create_event": self.expr_event_create,
45 "ak_logger": get_logger(self._filename).bind(),
46 "requests": get_http_session(),
47 "ip_address": ip_address,
48 "ip_network": ip_network,
49 }
50 self._context = {}
51
52 @staticmethod
53 def expr_flatten(value: list[Any] | Any) -> Optional[Any]:
54 """Flatten `value` if its a list"""
55 if isinstance(value, list):
56 if len(value) < 1:
57 return None
58 return value[0]
59 return value
60
61 @staticmethod
62 def expr_regex_match(value: Any, regex: str) -> bool:
63 """Expression Filter to run re.search"""
64 return re.search(regex, value) is not None
65
66 @staticmethod
67 def expr_regex_replace(value: Any, regex: str, repl: str) -> str:
68 """Expression Filter to run re.sub"""
69 return re.sub(regex, repl, value)
70
71 @staticmethod
72 def expr_is_group_member(user: User, **group_filters) -> bool:
73 """Check if `user` is member of group with name `group_name`"""
74 return user.ak_groups.filter(**group_filters).exists()
75
76 @staticmethod
77 def expr_user_by(**filters) -> Optional[User]:
78 """Get user by filters"""
79 try:
80 users = User.objects.filter(**filters)
81 if users:
82 return users.first()
83 return None
84 except FieldError:
85 return None
86
87 @staticmethod
88 def expr_func_user_has_authenticator(user: User, device_type: Optional[str] = None) -> bool:
89 """Check if a user has any authenticator devices, optionally matching *device_type*"""
90 user_devices = devices_for_user(user)
91 if device_type:
92 for device in user_devices:
93 device_class = device.__class__.__name__.lower().replace("device", "")
94 if device_class == device_type:
95 return True
96 return False
97 return len(list(user_devices)) > 0
98
99 def expr_event_create(self, action: str, **kwargs):
100 """Create event with supplied data and try to extract as much relevant data
101 from the context"""
102 # If the result was a complex variable, we don't want to re-use it
103 self._context.pop("result", None)
104 self._context.pop("handler", None)
105 kwargs["context"] = self._context
106 event = Event.new(
107 action,
108 app=self._filename,
109 **kwargs,
110 )
111 if "request" in self._context and isinstance(self._context["request"], PolicyRequest):
112 policy_request: PolicyRequest = self._context["request"]
113 if policy_request.http_request:
114 event.from_http(policy_request)
115 return
116 event.save()
117
118 def wrap_expression(self, expression: str, params: Iterable[str]) -> str:
119 """Wrap expression in a function, call it, and save the result as `result`"""
120 handler_signature = ",".join(params)
121 full_expression = ""
122 full_expression += f"def handler({handler_signature}):\n"
123 full_expression += indent(expression, " ")
124 full_expression += f"\nresult = handler({handler_signature})"
125 return full_expression
126
127 def evaluate(self, expression_source: str) -> Any:
128 """Parse and evaluate expression. If the syntax is incorrect, a SyntaxError is raised.
129 If any exception is raised during execution, it is raised.
130 The result is returned without any type-checking."""
131 with Hub.current.start_span(op="authentik.lib.evaluator.evaluate") as span:
132 span: Span
133 span.description = self._filename
134 span.set_data("expression", expression_source)
135 param_keys = self._context.keys()
136 try:
137 ast_obj = compile(
138 self.wrap_expression(expression_source, param_keys),
139 self._filename,
140 "exec",
141 )
142 except (SyntaxError, ValueError) as exc:
143 self.handle_error(exc, expression_source)
144 raise exc
145 try:
146 _locals = self._context
147 # Yes this is an exec, yes it is potentially bad. Since we limit what variables are
148 # available here, and these policies can only be edited by admins, this is a risk
149 # we're willing to take.
150 # pylint: disable=exec-used
151 exec(ast_obj, self._globals, _locals) # nosec # noqa
152 result = _locals["result"]
153 except Exception as exc:
154 # So, this is a bit questionable. Essentially, we are edit the stacktrace
155 # so the user only sees information relevant to them
156 # and none of our surrounding error handling
157 exc.__traceback__ = exc.__traceback__.tb_next
158 self.handle_error(exc, expression_source)
159 raise exc
160 return result
161
162 def handle_error(self, exc: Exception, expression_source: str): # pragma: no cover
163 """Exception Handler"""
164 LOGGER.warning("Expression error", exc=exc)
165
166 def validate(self, expression: str) -> bool:
167 """Validate expression's syntax, raise ValidationError if Syntax is invalid"""
168 param_keys = self._context.keys()
169 try:
170 compile(
171 self.wrap_expression(expression, param_keys),
172 self._filename,
173 "exec",
174 )
175 return True
176 except (ValueError, SyntaxError) as exc:
177 raise ValidationError(f"Expression Syntax Error: {str(exc)}") from exc
178
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/authentik/lib/expression/evaluator.py b/authentik/lib/expression/evaluator.py
--- a/authentik/lib/expression/evaluator.py
+++ b/authentik/lib/expression/evaluator.py
@@ -1,9 +1,11 @@
"""authentik expression policy evaluator"""
import re
+import socket
from ipaddress import ip_address, ip_network
from textwrap import indent
from typing import Any, Iterable, Optional
+from cachetools import TLRUCache, cached
from django.core.exceptions import FieldError
from django_otp import devices_for_user
from rest_framework.serializers import ValidationError
@@ -41,6 +43,8 @@
"ak_is_group_member": BaseEvaluator.expr_is_group_member,
"ak_user_by": BaseEvaluator.expr_user_by,
"ak_user_has_authenticator": BaseEvaluator.expr_func_user_has_authenticator,
+ "resolve_dns": BaseEvaluator.expr_resolve_dns,
+ "reverse_dns": BaseEvaluator.expr_reverse_dns,
"ak_create_event": self.expr_event_create,
"ak_logger": get_logger(self._filename).bind(),
"requests": get_http_session(),
@@ -49,6 +53,39 @@
}
self._context = {}
+ @cached(cache=TLRUCache(maxsize=32, ttu=lambda key, value, now: now + 180))
+ @staticmethod
+ def expr_resolve_dns(host: str, ip_version: Optional[int] = None) -> list[str]:
+ """Resolve host to a list of IPv4 and/or IPv6 addresses."""
+ # Although it seems to be fine (raising OSError), docs warn
+ # against passing `None` for both the host and the port
+ # https://docs.python.org/3/library/socket.html#socket.getaddrinfo
+ host = host or ""
+
+ ip_list = []
+
+ family = 0
+ if ip_version == 4:
+ family = socket.AF_INET
+ if ip_version == 6:
+ family = socket.AF_INET6
+
+ try:
+ for ip_addr in socket.getaddrinfo(host, None, family=family):
+ ip_list.append(str(ip_addr[4][0]))
+ except OSError:
+ pass
+ return list(set(ip_list))
+
+ @cached(cache=TLRUCache(maxsize=32, ttu=lambda key, value, now: now + 180))
+ @staticmethod
+ def expr_reverse_dns(ip_addr: str) -> str:
+ """Perform a reverse DNS lookup."""
+ try:
+ return socket.getfqdn(ip_addr)
+ except OSError:
+ return ip_addr
+
@staticmethod
def expr_flatten(value: list[Any] | Any) -> Optional[Any]:
"""Flatten `value` if its a list"""
|
{"golden_diff": "diff --git a/authentik/lib/expression/evaluator.py b/authentik/lib/expression/evaluator.py\n--- a/authentik/lib/expression/evaluator.py\n+++ b/authentik/lib/expression/evaluator.py\n@@ -1,9 +1,11 @@\n \"\"\"authentik expression policy evaluator\"\"\"\n import re\n+import socket\n from ipaddress import ip_address, ip_network\n from textwrap import indent\n from typing import Any, Iterable, Optional\n \n+from cachetools import TLRUCache, cached\n from django.core.exceptions import FieldError\n from django_otp import devices_for_user\n from rest_framework.serializers import ValidationError\n@@ -41,6 +43,8 @@\n \"ak_is_group_member\": BaseEvaluator.expr_is_group_member,\n \"ak_user_by\": BaseEvaluator.expr_user_by,\n \"ak_user_has_authenticator\": BaseEvaluator.expr_func_user_has_authenticator,\n+ \"resolve_dns\": BaseEvaluator.expr_resolve_dns,\n+ \"reverse_dns\": BaseEvaluator.expr_reverse_dns,\n \"ak_create_event\": self.expr_event_create,\n \"ak_logger\": get_logger(self._filename).bind(),\n \"requests\": get_http_session(),\n@@ -49,6 +53,39 @@\n }\n self._context = {}\n \n+ @cached(cache=TLRUCache(maxsize=32, ttu=lambda key, value, now: now + 180))\n+ @staticmethod\n+ def expr_resolve_dns(host: str, ip_version: Optional[int] = None) -> list[str]:\n+ \"\"\"Resolve host to a list of IPv4 and/or IPv6 addresses.\"\"\"\n+ # Although it seems to be fine (raising OSError), docs warn\n+ # against passing `None` for both the host and the port\n+ # https://docs.python.org/3/library/socket.html#socket.getaddrinfo\n+ host = host or \"\"\n+\n+ ip_list = []\n+\n+ family = 0\n+ if ip_version == 4:\n+ family = socket.AF_INET\n+ if ip_version == 6:\n+ family = socket.AF_INET6\n+\n+ try:\n+ for ip_addr in socket.getaddrinfo(host, None, family=family):\n+ ip_list.append(str(ip_addr[4][0]))\n+ except OSError:\n+ pass\n+ return list(set(ip_list))\n+\n+ @cached(cache=TLRUCache(maxsize=32, ttu=lambda key, value, now: now + 180))\n+ @staticmethod\n+ def expr_reverse_dns(ip_addr: str) -> str:\n+ \"\"\"Perform a reverse DNS lookup.\"\"\"\n+ try:\n+ return socket.getfqdn(ip_addr)\n+ except OSError:\n+ return ip_addr\n+\n @staticmethod\n def expr_flatten(value: list[Any] | Any) -> Optional[Any]:\n \"\"\"Flatten `value` if its a list\"\"\"\n", "issue": "DNS Resolution in Expressions\n**Is your feature request related to a problem? Please describe.**\r\nI would like to be able to resolve a hostname in expressions. In particular, my docker container resolves `host.docker.internal` to network address of the machine the container is running on. I would like to be able to check for requests from this hostname, but there is no way to detect this in expressions.\r\n\r\n**Describe the solution you'd like**\r\nI would like a function to be exposed in expressions that resolves a hostname to an address or list of addresses. 
Such a function could be implemented in `authentik/lib/expression/evaluator.py` like this:\r\n```py\r\n def expr_resolve_host(host: str) -> List[str]:\r\n \"\"\"Resolve a hostname to a list of IP addresses.\"\"\"\r\n \r\n return [\r\n sockaddr[0]\r\n for family, type, proto, canonname, sockaddr in \r\n socket.getaddrinfo(host, None)\r\n ]\r\n```\r\n\r\n**Describe alternatives you've considered**\r\nIt would be possible for me to set up the docker network statically, and then hardcode the address, but this is fragile, and I would prefer for Docker to manage its network allocation.\r\n\r\n**Additional context**\r\nI currently have an expression that checks for requests for my local/trusted networks. This works well, but if I access Authentik directly from the docker host, the source address is the \"host gateway\" address of the docker container's network, which is not in my trusted networks. (The host address of the docker internal network.) I can imagine this functionality would also be useful for getting the addresses other containers the docker container is connected to.\r\n\r\nAt my scale and use case, it doesn't really matter, but it may be worth caching this information for larger installations. I'm not sure if `socket.getaddrinfo` does any caching of it's own, but if it decides to do real DNS resolution it could be quite slow. If an expression is getting hit tens or hundreds of times a minute this could be a serious issue. (This could alternatively be resolved with a local DNS caching server on the host.)\r\n\r\nAny caching should be conservatively short, at most 60 seconds or so, since an change in address could also cause serious problems. A timeout may also be in order, since some use cases would prefer a fast empty response over an always correct answer. 
Ideally, these would be configurable with function parameters so users can determine their own caching and timeout needs.\n", "before_files": [{"content": "\"\"\"authentik expression policy evaluator\"\"\"\nimport re\nfrom ipaddress import ip_address, ip_network\nfrom textwrap import indent\nfrom typing import Any, Iterable, Optional\n\nfrom django.core.exceptions import FieldError\nfrom django_otp import devices_for_user\nfrom rest_framework.serializers import ValidationError\nfrom sentry_sdk.hub import Hub\nfrom sentry_sdk.tracing import Span\nfrom structlog.stdlib import get_logger\n\nfrom authentik.core.models import User\nfrom authentik.events.models import Event\nfrom authentik.lib.utils.http import get_http_session\nfrom authentik.policies.types import PolicyRequest\n\nLOGGER = get_logger()\n\n\nclass BaseEvaluator:\n \"\"\"Validate and evaluate python-based expressions\"\"\"\n\n # Globals that can be used by function\n _globals: dict[str, Any]\n # Context passed as locals to exec()\n _context: dict[str, Any]\n\n # Filename used for exec\n _filename: str\n\n def __init__(self, filename: Optional[str] = None):\n self._filename = filename if filename else \"BaseEvaluator\"\n # update website/docs/expressions/_objects.md\n # update website/docs/expressions/_functions.md\n self._globals = {\n \"regex_match\": BaseEvaluator.expr_regex_match,\n \"regex_replace\": BaseEvaluator.expr_regex_replace,\n \"list_flatten\": BaseEvaluator.expr_flatten,\n \"ak_is_group_member\": BaseEvaluator.expr_is_group_member,\n \"ak_user_by\": BaseEvaluator.expr_user_by,\n \"ak_user_has_authenticator\": BaseEvaluator.expr_func_user_has_authenticator,\n \"ak_create_event\": self.expr_event_create,\n \"ak_logger\": get_logger(self._filename).bind(),\n \"requests\": get_http_session(),\n \"ip_address\": ip_address,\n \"ip_network\": ip_network,\n }\n self._context = {}\n\n @staticmethod\n def expr_flatten(value: list[Any] | Any) -> Optional[Any]:\n \"\"\"Flatten `value` if its a list\"\"\"\n if isinstance(value, list):\n if len(value) < 1:\n return None\n return value[0]\n return value\n\n @staticmethod\n def expr_regex_match(value: Any, regex: str) -> bool:\n \"\"\"Expression Filter to run re.search\"\"\"\n return re.search(regex, value) is not None\n\n @staticmethod\n def expr_regex_replace(value: Any, regex: str, repl: str) -> str:\n \"\"\"Expression Filter to run re.sub\"\"\"\n return re.sub(regex, repl, value)\n\n @staticmethod\n def expr_is_group_member(user: User, **group_filters) -> bool:\n \"\"\"Check if `user` is member of group with name `group_name`\"\"\"\n return user.ak_groups.filter(**group_filters).exists()\n\n @staticmethod\n def expr_user_by(**filters) -> Optional[User]:\n \"\"\"Get user by filters\"\"\"\n try:\n users = User.objects.filter(**filters)\n if users:\n return users.first()\n return None\n except FieldError:\n return None\n\n @staticmethod\n def expr_func_user_has_authenticator(user: User, device_type: Optional[str] = None) -> bool:\n \"\"\"Check if a user has any authenticator devices, optionally matching *device_type*\"\"\"\n user_devices = devices_for_user(user)\n if device_type:\n for device in user_devices:\n device_class = device.__class__.__name__.lower().replace(\"device\", \"\")\n if device_class == device_type:\n return True\n return False\n return len(list(user_devices)) > 0\n\n def expr_event_create(self, action: str, **kwargs):\n \"\"\"Create event with supplied data and try to extract as much relevant data\n from the context\"\"\"\n # If the result was a complex 
variable, we don't want to re-use it\n self._context.pop(\"result\", None)\n self._context.pop(\"handler\", None)\n kwargs[\"context\"] = self._context\n event = Event.new(\n action,\n app=self._filename,\n **kwargs,\n )\n if \"request\" in self._context and isinstance(self._context[\"request\"], PolicyRequest):\n policy_request: PolicyRequest = self._context[\"request\"]\n if policy_request.http_request:\n event.from_http(policy_request)\n return\n event.save()\n\n def wrap_expression(self, expression: str, params: Iterable[str]) -> str:\n \"\"\"Wrap expression in a function, call it, and save the result as `result`\"\"\"\n handler_signature = \",\".join(params)\n full_expression = \"\"\n full_expression += f\"def handler({handler_signature}):\\n\"\n full_expression += indent(expression, \" \")\n full_expression += f\"\\nresult = handler({handler_signature})\"\n return full_expression\n\n def evaluate(self, expression_source: str) -> Any:\n \"\"\"Parse and evaluate expression. If the syntax is incorrect, a SyntaxError is raised.\n If any exception is raised during execution, it is raised.\n The result is returned without any type-checking.\"\"\"\n with Hub.current.start_span(op=\"authentik.lib.evaluator.evaluate\") as span:\n span: Span\n span.description = self._filename\n span.set_data(\"expression\", expression_source)\n param_keys = self._context.keys()\n try:\n ast_obj = compile(\n self.wrap_expression(expression_source, param_keys),\n self._filename,\n \"exec\",\n )\n except (SyntaxError, ValueError) as exc:\n self.handle_error(exc, expression_source)\n raise exc\n try:\n _locals = self._context\n # Yes this is an exec, yes it is potentially bad. Since we limit what variables are\n # available here, and these policies can only be edited by admins, this is a risk\n # we're willing to take.\n # pylint: disable=exec-used\n exec(ast_obj, self._globals, _locals) # nosec # noqa\n result = _locals[\"result\"]\n except Exception as exc:\n # So, this is a bit questionable. 
Essentially, we are edit the stacktrace\n # so the user only sees information relevant to them\n # and none of our surrounding error handling\n exc.__traceback__ = exc.__traceback__.tb_next\n self.handle_error(exc, expression_source)\n raise exc\n return result\n\n def handle_error(self, exc: Exception, expression_source: str): # pragma: no cover\n \"\"\"Exception Handler\"\"\"\n LOGGER.warning(\"Expression error\", exc=exc)\n\n def validate(self, expression: str) -> bool:\n \"\"\"Validate expression's syntax, raise ValidationError if Syntax is invalid\"\"\"\n param_keys = self._context.keys()\n try:\n compile(\n self.wrap_expression(expression, param_keys),\n self._filename,\n \"exec\",\n )\n return True\n except (ValueError, SyntaxError) as exc:\n raise ValidationError(f\"Expression Syntax Error: {str(exc)}\") from exc\n", "path": "authentik/lib/expression/evaluator.py"}], "after_files": [{"content": "\"\"\"authentik expression policy evaluator\"\"\"\nimport re\nimport socket\nfrom ipaddress import ip_address, ip_network\nfrom textwrap import indent\nfrom typing import Any, Iterable, Optional\n\nfrom cachetools import TLRUCache, cached\nfrom django.core.exceptions import FieldError\nfrom django_otp import devices_for_user\nfrom rest_framework.serializers import ValidationError\nfrom sentry_sdk.hub import Hub\nfrom sentry_sdk.tracing import Span\nfrom structlog.stdlib import get_logger\n\nfrom authentik.core.models import User\nfrom authentik.events.models import Event\nfrom authentik.lib.utils.http import get_http_session\nfrom authentik.policies.types import PolicyRequest\n\nLOGGER = get_logger()\n\n\nclass BaseEvaluator:\n \"\"\"Validate and evaluate python-based expressions\"\"\"\n\n # Globals that can be used by function\n _globals: dict[str, Any]\n # Context passed as locals to exec()\n _context: dict[str, Any]\n\n # Filename used for exec\n _filename: str\n\n def __init__(self, filename: Optional[str] = None):\n self._filename = filename if filename else \"BaseEvaluator\"\n # update website/docs/expressions/_objects.md\n # update website/docs/expressions/_functions.md\n self._globals = {\n \"regex_match\": BaseEvaluator.expr_regex_match,\n \"regex_replace\": BaseEvaluator.expr_regex_replace,\n \"list_flatten\": BaseEvaluator.expr_flatten,\n \"ak_is_group_member\": BaseEvaluator.expr_is_group_member,\n \"ak_user_by\": BaseEvaluator.expr_user_by,\n \"ak_user_has_authenticator\": BaseEvaluator.expr_func_user_has_authenticator,\n \"resolve_dns\": BaseEvaluator.expr_resolve_dns,\n \"reverse_dns\": BaseEvaluator.expr_reverse_dns,\n \"ak_create_event\": self.expr_event_create,\n \"ak_logger\": get_logger(self._filename).bind(),\n \"requests\": get_http_session(),\n \"ip_address\": ip_address,\n \"ip_network\": ip_network,\n }\n self._context = {}\n\n @cached(cache=TLRUCache(maxsize=32, ttu=lambda key, value, now: now + 180))\n @staticmethod\n def expr_resolve_dns(host: str, ip_version: Optional[int] = None) -> list[str]:\n \"\"\"Resolve host to a list of IPv4 and/or IPv6 addresses.\"\"\"\n # Although it seems to be fine (raising OSError), docs warn\n # against passing `None` for both the host and the port\n # https://docs.python.org/3/library/socket.html#socket.getaddrinfo\n host = host or \"\"\n\n ip_list = []\n\n family = 0\n if ip_version == 4:\n family = socket.AF_INET\n if ip_version == 6:\n family = socket.AF_INET6\n\n try:\n for ip_addr in socket.getaddrinfo(host, None, family=family):\n ip_list.append(str(ip_addr[4][0]))\n except OSError:\n pass\n return list(set(ip_list))\n\n 
@cached(cache=TLRUCache(maxsize=32, ttu=lambda key, value, now: now + 180))\n @staticmethod\n def expr_reverse_dns(ip_addr: str) -> str:\n \"\"\"Perform a reverse DNS lookup.\"\"\"\n try:\n return socket.getfqdn(ip_addr)\n except OSError:\n return ip_addr\n\n @staticmethod\n def expr_flatten(value: list[Any] | Any) -> Optional[Any]:\n \"\"\"Flatten `value` if its a list\"\"\"\n if isinstance(value, list):\n if len(value) < 1:\n return None\n return value[0]\n return value\n\n @staticmethod\n def expr_regex_match(value: Any, regex: str) -> bool:\n \"\"\"Expression Filter to run re.search\"\"\"\n return re.search(regex, value) is not None\n\n @staticmethod\n def expr_regex_replace(value: Any, regex: str, repl: str) -> str:\n \"\"\"Expression Filter to run re.sub\"\"\"\n return re.sub(regex, repl, value)\n\n @staticmethod\n def expr_is_group_member(user: User, **group_filters) -> bool:\n \"\"\"Check if `user` is member of group with name `group_name`\"\"\"\n return user.ak_groups.filter(**group_filters).exists()\n\n @staticmethod\n def expr_user_by(**filters) -> Optional[User]:\n \"\"\"Get user by filters\"\"\"\n try:\n users = User.objects.filter(**filters)\n if users:\n return users.first()\n return None\n except FieldError:\n return None\n\n @staticmethod\n def expr_func_user_has_authenticator(user: User, device_type: Optional[str] = None) -> bool:\n \"\"\"Check if a user has any authenticator devices, optionally matching *device_type*\"\"\"\n user_devices = devices_for_user(user)\n if device_type:\n for device in user_devices:\n device_class = device.__class__.__name__.lower().replace(\"device\", \"\")\n if device_class == device_type:\n return True\n return False\n return len(list(user_devices)) > 0\n\n def expr_event_create(self, action: str, **kwargs):\n \"\"\"Create event with supplied data and try to extract as much relevant data\n from the context\"\"\"\n # If the result was a complex variable, we don't want to re-use it\n self._context.pop(\"result\", None)\n self._context.pop(\"handler\", None)\n kwargs[\"context\"] = self._context\n event = Event.new(\n action,\n app=self._filename,\n **kwargs,\n )\n if \"request\" in self._context and isinstance(self._context[\"request\"], PolicyRequest):\n policy_request: PolicyRequest = self._context[\"request\"]\n if policy_request.http_request:\n event.from_http(policy_request)\n return\n event.save()\n\n def wrap_expression(self, expression: str, params: Iterable[str]) -> str:\n \"\"\"Wrap expression in a function, call it, and save the result as `result`\"\"\"\n handler_signature = \",\".join(params)\n full_expression = \"\"\n full_expression += f\"def handler({handler_signature}):\\n\"\n full_expression += indent(expression, \" \")\n full_expression += f\"\\nresult = handler({handler_signature})\"\n return full_expression\n\n def evaluate(self, expression_source: str) -> Any:\n \"\"\"Parse and evaluate expression. 
If the syntax is incorrect, a SyntaxError is raised.\n If any exception is raised during execution, it is raised.\n The result is returned without any type-checking.\"\"\"\n with Hub.current.start_span(op=\"authentik.lib.evaluator.evaluate\") as span:\n span: Span\n span.description = self._filename\n span.set_data(\"expression\", expression_source)\n param_keys = self._context.keys()\n try:\n ast_obj = compile(\n self.wrap_expression(expression_source, param_keys),\n self._filename,\n \"exec\",\n )\n except (SyntaxError, ValueError) as exc:\n self.handle_error(exc, expression_source)\n raise exc\n try:\n _locals = self._context\n # Yes this is an exec, yes it is potentially bad. Since we limit what variables are\n # available here, and these policies can only be edited by admins, this is a risk\n # we're willing to take.\n # pylint: disable=exec-used\n exec(ast_obj, self._globals, _locals) # nosec # noqa\n result = _locals[\"result\"]\n except Exception as exc:\n # So, this is a bit questionable. Essentially, we are edit the stacktrace\n # so the user only sees information relevant to them\n # and none of our surrounding error handling\n exc.__traceback__ = exc.__traceback__.tb_next\n self.handle_error(exc, expression_source)\n raise exc\n return result\n\n def handle_error(self, exc: Exception, expression_source: str): # pragma: no cover\n \"\"\"Exception Handler\"\"\"\n LOGGER.warning(\"Expression error\", exc=exc)\n\n def validate(self, expression: str) -> bool:\n \"\"\"Validate expression's syntax, raise ValidationError if Syntax is invalid\"\"\"\n param_keys = self._context.keys()\n try:\n compile(\n self.wrap_expression(expression, param_keys),\n self._filename,\n \"exec\",\n )\n return True\n except (ValueError, SyntaxError) as exc:\n raise ValidationError(f\"Expression Syntax Error: {str(exc)}\") from exc\n", "path": "authentik/lib/expression/evaluator.py"}]}
| 2,694 | 632 |
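For the short-TTL caching the feature request above asks for, here is a dependency-free sketch of the idea (the merged change in the record uses `cachetools.TLRUCache`; the function name, 60-second default, and error handling below are assumptions for illustration only):

```
import socket
import time
from typing import Dict, List, Tuple

# host -> (timestamp of lookup, resolved addresses)
_CACHE: Dict[str, Tuple[float, List[str]]] = {}

def resolve_host(host: str, ttl: float = 60.0) -> List[str]:
    """Resolve `host` to IP addresses, caching results for `ttl` seconds."""
    now = time.monotonic()
    hit = _CACHE.get(host)
    if hit is not None and now - hit[0] < ttl:
        return hit[1]  # still fresh, skip the DNS round trip
    try:
        addrs = sorted({info[4][0] for info in socket.getaddrinfo(host, None)})
    except OSError:
        addrs = []  # resolution failure is treated as "no addresses"
    _CACHE[host] = (now, addrs)
    return addrs
```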
gh_patches_debug_6981 | rasdani/github-patches | git_diff | celery__celery-5752 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
DatabaseBackend._update_result() reads the wrong request property.
python 3.7
celery 4.4.0rc3
The stored result has a NULL value for the task name in my backend (MySQL), but it works well when I use Redis as my backend.
After I changed this in `backends/database/__init__.py` [135], altering 'task_name' to 'task', I get the correct task_name.
The 'name' in `backends/base.py` [706,717]
```
if self.app.conf.find_value_for_key('extended', 'result'):
if request:
request_meta = {
-> 'name': getattr(request, 'task', None),
'args': getattr(request, 'args', None),
'kwargs': getattr(request, 'kwargs', None),
'worker': getattr(request, 'hostname', None),
'retries': getattr(request, 'retries', None),
'queue': request.delivery_info.get('routing_key')
if hasattr(request, 'delivery_info') and
request.delivery_info else None
}
```
The 'name' in `backends/database/__init__.py` [129,148]
```
def _update_result(self, task, result, state, traceback=None,
request=None):
task.result = result
task.status = state
task.traceback = traceback
if self.app.conf.find_value_for_key('extended', 'result'):
- task.name = getattr(request, 'task_name', None)
+ task.name = getattr(request, 'task', None)
task.args = ensure_bytes(
self.encode(getattr(request, 'args', None))
)
task.kwargs = ensure_bytes(
self.encode(getattr(request, 'kwargs', None))
)
task.worker = getattr(request, 'hostname', None)
task.retries = getattr(request, 'retries', None)
task.queue = (
request.delivery_info.get("routing_key")
if hasattr(request, "delivery_info") and request.delivery_info
else None
)
```
--- END ISSUE ---
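A minimal sketch of the attribute mismatch described above; `SimpleNamespace` stands in for Celery's request context purely for illustration (both `backends/base.py` and the golden diff read the task name from the `task` attribute):

```
from types import SimpleNamespace

# Illustrative stand-in for the task request object.
request = SimpleNamespace(task="proj.tasks.add", args=(2, 2), kwargs={})

# What the database backend currently reads -> None, stored as NULL.
print(getattr(request, "task_name", None))
# What backends/base.py reads for the extended 'name' field -> 'proj.tasks.add'.
print(getattr(request, "task", None))
```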
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `celery/backends/database/__init__.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """SQLAlchemy result store backend."""
3 from __future__ import absolute_import, unicode_literals
4
5 import logging
6 from contextlib import contextmanager
7
8 from kombu.utils.encoding import ensure_bytes
9 from vine.utils import wraps
10
11 from celery import states
12 from celery.backends.base import BaseBackend
13 from celery.exceptions import ImproperlyConfigured
14 from celery.five import range
15 from celery.utils.time import maybe_timedelta
16
17 from .models import Task, TaskExtended, TaskSet
18 from .session import SessionManager
19
20 try:
21 from sqlalchemy.exc import DatabaseError, InvalidRequestError
22 from sqlalchemy.orm.exc import StaleDataError
23 except ImportError: # pragma: no cover
24 raise ImproperlyConfigured(
25 'The database result backend requires SQLAlchemy to be installed.'
26 'See https://pypi.org/project/SQLAlchemy/')
27
28 logger = logging.getLogger(__name__)
29
30 __all__ = ('DatabaseBackend',)
31
32
33 @contextmanager
34 def session_cleanup(session):
35 try:
36 yield
37 except Exception:
38 session.rollback()
39 raise
40 finally:
41 session.close()
42
43
44 def retry(fun):
45
46 @wraps(fun)
47 def _inner(*args, **kwargs):
48 max_retries = kwargs.pop('max_retries', 3)
49
50 for retries in range(max_retries):
51 try:
52 return fun(*args, **kwargs)
53 except (DatabaseError, InvalidRequestError, StaleDataError):
54 logger.warning(
55 'Failed operation %s. Retrying %s more times.',
56 fun.__name__, max_retries - retries - 1,
57 exc_info=True)
58 if retries + 1 >= max_retries:
59 raise
60
61 return _inner
62
63
64 class DatabaseBackend(BaseBackend):
65 """The database result backend."""
66
67 # ResultSet.iterate should sleep this much between each pool,
68 # to not bombard the database with queries.
69 subpolling_interval = 0.5
70
71 task_cls = Task
72 taskset_cls = TaskSet
73
74 def __init__(self, dburi=None, engine_options=None, url=None, **kwargs):
75 # The `url` argument was added later and is used by
76 # the app to set backend by url (celery.app.backends.by_url)
77 super(DatabaseBackend, self).__init__(expires_type=maybe_timedelta,
78 url=url, **kwargs)
79 conf = self.app.conf
80
81 if self.extended_result:
82 self.task_cls = TaskExtended
83
84 self.url = url or dburi or conf.database_url
85 self.engine_options = dict(
86 engine_options or {},
87 **conf.database_engine_options or {})
88 self.short_lived_sessions = kwargs.get(
89 'short_lived_sessions',
90 conf.database_short_lived_sessions)
91
92 tablenames = conf.database_table_names or {}
93 self.task_cls.__table__.name = tablenames.get('task',
94 'celery_taskmeta')
95 self.taskset_cls.__table__.name = tablenames.get('group',
96 'celery_tasksetmeta')
97
98 if not self.url:
99 raise ImproperlyConfigured(
100 'Missing connection string! Do you have the'
101 ' database_url setting set to a real value?')
102
103 @property
104 def extended_result(self):
105 return self.app.conf.find_value_for_key('extended', 'result')
106
107 def ResultSession(self, session_manager=SessionManager()):
108 return session_manager.session_factory(
109 dburi=self.url,
110 short_lived_sessions=self.short_lived_sessions,
111 **self.engine_options)
112
113 @retry
114 def _store_result(self, task_id, result, state, traceback=None,
115 request=None, **kwargs):
116 """Store return value and state of an executed task."""
117 session = self.ResultSession()
118 with session_cleanup(session):
119 task = list(session.query(self.task_cls).filter(self.task_cls.task_id == task_id))
120 task = task and task[0]
121 if not task:
122 task = self.task_cls(task_id)
123 session.add(task)
124 session.flush()
125
126 self._update_result(task, result, state, traceback=traceback, request=request)
127 session.commit()
128
129 def _update_result(self, task, result, state, traceback=None,
130 request=None):
131 task.result = result
132 task.status = state
133 task.traceback = traceback
134 if self.app.conf.find_value_for_key('extended', 'result'):
135 task.name = getattr(request, 'task_name', None)
136 task.args = ensure_bytes(
137 self.encode(getattr(request, 'args', None))
138 )
139 task.kwargs = ensure_bytes(
140 self.encode(getattr(request, 'kwargs', None))
141 )
142 task.worker = getattr(request, 'hostname', None)
143 task.retries = getattr(request, 'retries', None)
144 task.queue = (
145 request.delivery_info.get("routing_key")
146 if hasattr(request, "delivery_info") and request.delivery_info
147 else None
148 )
149
150 @retry
151 def _get_task_meta_for(self, task_id):
152 """Get task meta-data for a task by id."""
153 session = self.ResultSession()
154 with session_cleanup(session):
155 task = list(session.query(self.task_cls).filter(self.task_cls.task_id == task_id))
156 task = task and task[0]
157 if not task:
158 task = self.task_cls(task_id)
159 task.status = states.PENDING
160 task.result = None
161 data = task.to_dict()
162 if 'args' in data:
163 data['args'] = self.decode(data['args'])
164 if 'kwargs' in data:
165 data['kwargs'] = self.decode(data['kwargs'])
166 return self.meta_from_decoded(data)
167
168 @retry
169 def _save_group(self, group_id, result):
170 """Store the result of an executed group."""
171 session = self.ResultSession()
172 with session_cleanup(session):
173 group = self.taskset_cls(group_id, result)
174 session.add(group)
175 session.flush()
176 session.commit()
177 return result
178
179 @retry
180 def _restore_group(self, group_id):
181 """Get meta-data for group by id."""
182 session = self.ResultSession()
183 with session_cleanup(session):
184 group = session.query(self.taskset_cls).filter(
185 self.taskset_cls.taskset_id == group_id).first()
186 if group:
187 return group.to_dict()
188
189 @retry
190 def _delete_group(self, group_id):
191 """Delete meta-data for group by id."""
192 session = self.ResultSession()
193 with session_cleanup(session):
194 session.query(self.taskset_cls).filter(
195 self.taskset_cls.taskset_id == group_id).delete()
196 session.flush()
197 session.commit()
198
199 @retry
200 def _forget(self, task_id):
201 """Forget about result."""
202 session = self.ResultSession()
203 with session_cleanup(session):
204 session.query(self.task_cls).filter(self.task_cls.task_id == task_id).delete()
205 session.commit()
206
207 def cleanup(self):
208 """Delete expired meta-data."""
209 session = self.ResultSession()
210 expires = self.expires
211 now = self.app.now()
212 with session_cleanup(session):
213 session.query(self.task_cls).filter(
214 self.task_cls.date_done < (now - expires)).delete()
215 session.query(self.taskset_cls).filter(
216 self.taskset_cls.date_done < (now - expires)).delete()
217 session.commit()
218
219 def __reduce__(self, args=(), kwargs=None):
220 kwargs = {} if not kwargs else kwargs
221 kwargs.update(
222 {'dburi': self.url,
223 'expires': self.expires,
224 'engine_options': self.engine_options})
225 return super(DatabaseBackend, self).__reduce__(args, kwargs)
226
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/celery/backends/database/__init__.py b/celery/backends/database/__init__.py
--- a/celery/backends/database/__init__.py
+++ b/celery/backends/database/__init__.py
@@ -132,7 +132,7 @@
task.status = state
task.traceback = traceback
if self.app.conf.find_value_for_key('extended', 'result'):
- task.name = getattr(request, 'task_name', None)
+ task.name = getattr(request, 'task', None)
task.args = ensure_bytes(
self.encode(getattr(request, 'args', None))
)
|
{"golden_diff": "diff --git a/celery/backends/database/__init__.py b/celery/backends/database/__init__.py\n--- a/celery/backends/database/__init__.py\n+++ b/celery/backends/database/__init__.py\n@@ -132,7 +132,7 @@\n task.status = state\n task.traceback = traceback\n if self.app.conf.find_value_for_key('extended', 'result'):\n- task.name = getattr(request, 'task_name', None)\n+ task.name = getattr(request, 'task', None)\n task.args = ensure_bytes(\n self.encode(getattr(request, 'args', None))\n )\n", "issue": "DatabaseBackend._update_result() have an error property.\npython 3.7\r\ncelery 4.4.0rc3 \r\n\r\nThe result has an error value NULL for the name in my backend(mysql), but it's work well when I use redis as my backend.\r\n\r\nAfter I change this error in `backends/database/__init__.py` [135], alter 'task_name' to 'task', I get the correct task_name.\r\n\r\nThe 'name' in `backends/base.py` [706,717]\r\n```\r\n if self.app.conf.find_value_for_key('extended', 'result'):\r\n if request:\r\n request_meta = {\r\n-> 'name': getattr(request, 'task', None),\r\n 'args': getattr(request, 'args', None),\r\n 'kwargs': getattr(request, 'kwargs', None),\r\n 'worker': getattr(request, 'hostname', None),\r\n 'retries': getattr(request, 'retries', None),\r\n 'queue': request.delivery_info.get('routing_key')\r\n if hasattr(request, 'delivery_info') and\r\n request.delivery_info else None\r\n }\r\n```\r\nThe 'name' in `backends/database/__init__.py` [129,148]\r\n```\r\n def _update_result(self, task, result, state, traceback=None,\r\n request=None):\r\n task.result = result\r\n task.status = state\r\n task.traceback = traceback\r\n if self.app.conf.find_value_for_key('extended', 'result'):\r\n- task.name = getattr(request, 'task_name', None)\r\n+ task.name = getattr(request, 'task', None)\r\n task.args = ensure_bytes(\r\n self.encode(getattr(request, 'args', None))\r\n )\r\n task.kwargs = ensure_bytes(\r\n self.encode(getattr(request, 'kwargs', None))\r\n )\r\n task.worker = getattr(request, 'hostname', None)\r\n task.retries = getattr(request, 'retries', None)\r\n task.queue = (\r\n request.delivery_info.get(\"routing_key\")\r\n if hasattr(request, \"delivery_info\") and request.delivery_info\r\n else None\r\n )\r\n```\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"SQLAlchemy result store backend.\"\"\"\nfrom __future__ import absolute_import, unicode_literals\n\nimport logging\nfrom contextlib import contextmanager\n\nfrom kombu.utils.encoding import ensure_bytes\nfrom vine.utils import wraps\n\nfrom celery import states\nfrom celery.backends.base import BaseBackend\nfrom celery.exceptions import ImproperlyConfigured\nfrom celery.five import range\nfrom celery.utils.time import maybe_timedelta\n\nfrom .models import Task, TaskExtended, TaskSet\nfrom .session import SessionManager\n\ntry:\n from sqlalchemy.exc import DatabaseError, InvalidRequestError\n from sqlalchemy.orm.exc import StaleDataError\nexcept ImportError: # pragma: no cover\n raise ImproperlyConfigured(\n 'The database result backend requires SQLAlchemy to be installed.'\n 'See https://pypi.org/project/SQLAlchemy/')\n\nlogger = logging.getLogger(__name__)\n\n__all__ = ('DatabaseBackend',)\n\n\n@contextmanager\ndef session_cleanup(session):\n try:\n yield\n except Exception:\n session.rollback()\n raise\n finally:\n session.close()\n\n\ndef retry(fun):\n\n @wraps(fun)\n def _inner(*args, **kwargs):\n max_retries = kwargs.pop('max_retries', 3)\n\n for retries in range(max_retries):\n try:\n return fun(*args, **kwargs)\n 
except (DatabaseError, InvalidRequestError, StaleDataError):\n logger.warning(\n 'Failed operation %s. Retrying %s more times.',\n fun.__name__, max_retries - retries - 1,\n exc_info=True)\n if retries + 1 >= max_retries:\n raise\n\n return _inner\n\n\nclass DatabaseBackend(BaseBackend):\n \"\"\"The database result backend.\"\"\"\n\n # ResultSet.iterate should sleep this much between each pool,\n # to not bombard the database with queries.\n subpolling_interval = 0.5\n\n task_cls = Task\n taskset_cls = TaskSet\n\n def __init__(self, dburi=None, engine_options=None, url=None, **kwargs):\n # The `url` argument was added later and is used by\n # the app to set backend by url (celery.app.backends.by_url)\n super(DatabaseBackend, self).__init__(expires_type=maybe_timedelta,\n url=url, **kwargs)\n conf = self.app.conf\n\n if self.extended_result:\n self.task_cls = TaskExtended\n\n self.url = url or dburi or conf.database_url\n self.engine_options = dict(\n engine_options or {},\n **conf.database_engine_options or {})\n self.short_lived_sessions = kwargs.get(\n 'short_lived_sessions',\n conf.database_short_lived_sessions)\n\n tablenames = conf.database_table_names or {}\n self.task_cls.__table__.name = tablenames.get('task',\n 'celery_taskmeta')\n self.taskset_cls.__table__.name = tablenames.get('group',\n 'celery_tasksetmeta')\n\n if not self.url:\n raise ImproperlyConfigured(\n 'Missing connection string! Do you have the'\n ' database_url setting set to a real value?')\n\n @property\n def extended_result(self):\n return self.app.conf.find_value_for_key('extended', 'result')\n\n def ResultSession(self, session_manager=SessionManager()):\n return session_manager.session_factory(\n dburi=self.url,\n short_lived_sessions=self.short_lived_sessions,\n **self.engine_options)\n\n @retry\n def _store_result(self, task_id, result, state, traceback=None,\n request=None, **kwargs):\n \"\"\"Store return value and state of an executed task.\"\"\"\n session = self.ResultSession()\n with session_cleanup(session):\n task = list(session.query(self.task_cls).filter(self.task_cls.task_id == task_id))\n task = task and task[0]\n if not task:\n task = self.task_cls(task_id)\n session.add(task)\n session.flush()\n\n self._update_result(task, result, state, traceback=traceback, request=request)\n session.commit()\n\n def _update_result(self, task, result, state, traceback=None,\n request=None):\n task.result = result\n task.status = state\n task.traceback = traceback\n if self.app.conf.find_value_for_key('extended', 'result'):\n task.name = getattr(request, 'task_name', None)\n task.args = ensure_bytes(\n self.encode(getattr(request, 'args', None))\n )\n task.kwargs = ensure_bytes(\n self.encode(getattr(request, 'kwargs', None))\n )\n task.worker = getattr(request, 'hostname', None)\n task.retries = getattr(request, 'retries', None)\n task.queue = (\n request.delivery_info.get(\"routing_key\")\n if hasattr(request, \"delivery_info\") and request.delivery_info\n else None\n )\n\n @retry\n def _get_task_meta_for(self, task_id):\n \"\"\"Get task meta-data for a task by id.\"\"\"\n session = self.ResultSession()\n with session_cleanup(session):\n task = list(session.query(self.task_cls).filter(self.task_cls.task_id == task_id))\n task = task and task[0]\n if not task:\n task = self.task_cls(task_id)\n task.status = states.PENDING\n task.result = None\n data = task.to_dict()\n if 'args' in data:\n data['args'] = self.decode(data['args'])\n if 'kwargs' in data:\n data['kwargs'] = self.decode(data['kwargs'])\n return 
self.meta_from_decoded(data)\n\n @retry\n def _save_group(self, group_id, result):\n \"\"\"Store the result of an executed group.\"\"\"\n session = self.ResultSession()\n with session_cleanup(session):\n group = self.taskset_cls(group_id, result)\n session.add(group)\n session.flush()\n session.commit()\n return result\n\n @retry\n def _restore_group(self, group_id):\n \"\"\"Get meta-data for group by id.\"\"\"\n session = self.ResultSession()\n with session_cleanup(session):\n group = session.query(self.taskset_cls).filter(\n self.taskset_cls.taskset_id == group_id).first()\n if group:\n return group.to_dict()\n\n @retry\n def _delete_group(self, group_id):\n \"\"\"Delete meta-data for group by id.\"\"\"\n session = self.ResultSession()\n with session_cleanup(session):\n session.query(self.taskset_cls).filter(\n self.taskset_cls.taskset_id == group_id).delete()\n session.flush()\n session.commit()\n\n @retry\n def _forget(self, task_id):\n \"\"\"Forget about result.\"\"\"\n session = self.ResultSession()\n with session_cleanup(session):\n session.query(self.task_cls).filter(self.task_cls.task_id == task_id).delete()\n session.commit()\n\n def cleanup(self):\n \"\"\"Delete expired meta-data.\"\"\"\n session = self.ResultSession()\n expires = self.expires\n now = self.app.now()\n with session_cleanup(session):\n session.query(self.task_cls).filter(\n self.task_cls.date_done < (now - expires)).delete()\n session.query(self.taskset_cls).filter(\n self.taskset_cls.date_done < (now - expires)).delete()\n session.commit()\n\n def __reduce__(self, args=(), kwargs=None):\n kwargs = {} if not kwargs else kwargs\n kwargs.update(\n {'dburi': self.url,\n 'expires': self.expires,\n 'engine_options': self.engine_options})\n return super(DatabaseBackend, self).__reduce__(args, kwargs)\n", "path": "celery/backends/database/__init__.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"SQLAlchemy result store backend.\"\"\"\nfrom __future__ import absolute_import, unicode_literals\n\nimport logging\nfrom contextlib import contextmanager\n\nfrom kombu.utils.encoding import ensure_bytes\nfrom vine.utils import wraps\n\nfrom celery import states\nfrom celery.backends.base import BaseBackend\nfrom celery.exceptions import ImproperlyConfigured\nfrom celery.five import range\nfrom celery.utils.time import maybe_timedelta\n\nfrom .models import Task, TaskExtended, TaskSet\nfrom .session import SessionManager\n\ntry:\n from sqlalchemy.exc import DatabaseError, InvalidRequestError\n from sqlalchemy.orm.exc import StaleDataError\nexcept ImportError: # pragma: no cover\n raise ImproperlyConfigured(\n 'The database result backend requires SQLAlchemy to be installed.'\n 'See https://pypi.org/project/SQLAlchemy/')\n\nlogger = logging.getLogger(__name__)\n\n__all__ = ('DatabaseBackend',)\n\n\n@contextmanager\ndef session_cleanup(session):\n try:\n yield\n except Exception:\n session.rollback()\n raise\n finally:\n session.close()\n\n\ndef retry(fun):\n\n @wraps(fun)\n def _inner(*args, **kwargs):\n max_retries = kwargs.pop('max_retries', 3)\n\n for retries in range(max_retries):\n try:\n return fun(*args, **kwargs)\n except (DatabaseError, InvalidRequestError, StaleDataError):\n logger.warning(\n 'Failed operation %s. 
Retrying %s more times.',\n fun.__name__, max_retries - retries - 1,\n exc_info=True)\n if retries + 1 >= max_retries:\n raise\n\n return _inner\n\n\nclass DatabaseBackend(BaseBackend):\n \"\"\"The database result backend.\"\"\"\n\n # ResultSet.iterate should sleep this much between each pool,\n # to not bombard the database with queries.\n subpolling_interval = 0.5\n\n task_cls = Task\n taskset_cls = TaskSet\n\n def __init__(self, dburi=None, engine_options=None, url=None, **kwargs):\n # The `url` argument was added later and is used by\n # the app to set backend by url (celery.app.backends.by_url)\n super(DatabaseBackend, self).__init__(expires_type=maybe_timedelta,\n url=url, **kwargs)\n conf = self.app.conf\n\n if self.extended_result:\n self.task_cls = TaskExtended\n\n self.url = url or dburi or conf.database_url\n self.engine_options = dict(\n engine_options or {},\n **conf.database_engine_options or {})\n self.short_lived_sessions = kwargs.get(\n 'short_lived_sessions',\n conf.database_short_lived_sessions)\n\n tablenames = conf.database_table_names or {}\n self.task_cls.__table__.name = tablenames.get('task',\n 'celery_taskmeta')\n self.taskset_cls.__table__.name = tablenames.get('group',\n 'celery_tasksetmeta')\n\n if not self.url:\n raise ImproperlyConfigured(\n 'Missing connection string! Do you have the'\n ' database_url setting set to a real value?')\n\n @property\n def extended_result(self):\n return self.app.conf.find_value_for_key('extended', 'result')\n\n def ResultSession(self, session_manager=SessionManager()):\n return session_manager.session_factory(\n dburi=self.url,\n short_lived_sessions=self.short_lived_sessions,\n **self.engine_options)\n\n @retry\n def _store_result(self, task_id, result, state, traceback=None,\n request=None, **kwargs):\n \"\"\"Store return value and state of an executed task.\"\"\"\n session = self.ResultSession()\n with session_cleanup(session):\n task = list(session.query(self.task_cls).filter(self.task_cls.task_id == task_id))\n task = task and task[0]\n if not task:\n task = self.task_cls(task_id)\n session.add(task)\n session.flush()\n\n self._update_result(task, result, state, traceback=traceback, request=request)\n session.commit()\n\n def _update_result(self, task, result, state, traceback=None,\n request=None):\n task.result = result\n task.status = state\n task.traceback = traceback\n if self.app.conf.find_value_for_key('extended', 'result'):\n task.name = getattr(request, 'task', None)\n task.args = ensure_bytes(\n self.encode(getattr(request, 'args', None))\n )\n task.kwargs = ensure_bytes(\n self.encode(getattr(request, 'kwargs', None))\n )\n task.worker = getattr(request, 'hostname', None)\n task.retries = getattr(request, 'retries', None)\n task.queue = (\n request.delivery_info.get(\"routing_key\")\n if hasattr(request, \"delivery_info\") and request.delivery_info\n else None\n )\n\n @retry\n def _get_task_meta_for(self, task_id):\n \"\"\"Get task meta-data for a task by id.\"\"\"\n session = self.ResultSession()\n with session_cleanup(session):\n task = list(session.query(self.task_cls).filter(self.task_cls.task_id == task_id))\n task = task and task[0]\n if not task:\n task = self.task_cls(task_id)\n task.status = states.PENDING\n task.result = None\n data = task.to_dict()\n if 'args' in data:\n data['args'] = self.decode(data['args'])\n if 'kwargs' in data:\n data['kwargs'] = self.decode(data['kwargs'])\n return self.meta_from_decoded(data)\n\n @retry\n def _save_group(self, group_id, result):\n \"\"\"Store the result of an 
executed group.\"\"\"\n session = self.ResultSession()\n with session_cleanup(session):\n group = self.taskset_cls(group_id, result)\n session.add(group)\n session.flush()\n session.commit()\n return result\n\n @retry\n def _restore_group(self, group_id):\n \"\"\"Get meta-data for group by id.\"\"\"\n session = self.ResultSession()\n with session_cleanup(session):\n group = session.query(self.taskset_cls).filter(\n self.taskset_cls.taskset_id == group_id).first()\n if group:\n return group.to_dict()\n\n @retry\n def _delete_group(self, group_id):\n \"\"\"Delete meta-data for group by id.\"\"\"\n session = self.ResultSession()\n with session_cleanup(session):\n session.query(self.taskset_cls).filter(\n self.taskset_cls.taskset_id == group_id).delete()\n session.flush()\n session.commit()\n\n @retry\n def _forget(self, task_id):\n \"\"\"Forget about result.\"\"\"\n session = self.ResultSession()\n with session_cleanup(session):\n session.query(self.task_cls).filter(self.task_cls.task_id == task_id).delete()\n session.commit()\n\n def cleanup(self):\n \"\"\"Delete expired meta-data.\"\"\"\n session = self.ResultSession()\n expires = self.expires\n now = self.app.now()\n with session_cleanup(session):\n session.query(self.task_cls).filter(\n self.task_cls.date_done < (now - expires)).delete()\n session.query(self.taskset_cls).filter(\n self.taskset_cls.date_done < (now - expires)).delete()\n session.commit()\n\n def __reduce__(self, args=(), kwargs=None):\n kwargs = {} if not kwargs else kwargs\n kwargs.update(\n {'dburi': self.url,\n 'expires': self.expires,\n 'engine_options': self.engine_options})\n return super(DatabaseBackend, self).__reduce__(args, kwargs)\n", "path": "celery/backends/database/__init__.py"}]}
| 2,919 | 142 |
gh_patches_debug_8103 | rasdani/github-patches | git_diff | zulip__zulip-26647 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Internationalization of "Browse 1 more stream" and two more strings
<!-- Describe what you were expecting to see, what you saw instead, and steps to take in order to reproduce the buggy behavior. Screenshots can be helpful. -->
I am missing "Browse 1 more stream" https://github.com/zulip/zulip/blob/81bd63cb46273b8c94ef9e92c00893ed97110119/web/templates/subscribe_to_more_streams.hbs#L4
and "Browse # more streams" https://github.com/zulip/zulip/blob/81bd63cb46273b8c94ef9e92c00893ed97110119/web/templates/subscribe_to_more_streams.hbs#L9
in the internationalization files (translation.json or django.po).

Syntax above differs from syntax in
https://github.com/zulip/zulip/blob/81bd63cb46273b8c94ef9e92c00893ed97110119/web/templates/subscribe_to_more_streams.hbs#L14
which has an entry in translation.json.
I am also missing a translation of "Direct messages" in the first column of recent conversations. Maybe it is generated by the following code and is missing internationalization. https://github.com/zulip/zulip/blob/81bd63cb46273b8c94ef9e92c00893ed97110119/web/templates/recent_topic_row.hbs#L7
<!-- Check the box for the version of Zulip you are using (see https://zulip.com/help/view-zulip-version).-->
**Zulip Server and web app version:**
- [ ] Zulip Cloud (`*.zulipchat.com`)
- [x] Zulip Server 7.0+
- [ ] Zulip Server 6.0+
- [ ] Zulip Server 5.0 or older
- [ ] Other or not sure
--- END ISSUE ---
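The strings on lines 4 and 9 use Handlebars whitespace-control tildes, which the extraction regexes in `zerver/management/commands/makemessages.py` do not account for; a minimal sketch of the mismatch, using an illustrative template string rather than the exact `.hbs` markup:

```
import re

# Pattern currently used to extract the `t` helper, and a tilde-tolerant
# variant matching the fix in the golden diff later in this record.
old_t = re.compile(r'{{\s*t "([\s\S]*?)"\W*}}')
new_t = re.compile(r'{{~?\s*t "([\s\S]*?)"\W*~?}}')

line = '{{~t "Browse 1 more stream" ~}}'  # illustrative markup only

print(old_t.findall(line))  # [] -- the string never reaches translations.json
print(new_t.findall(line))  # ['Browse 1 more stream']
```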
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `zerver/management/commands/makemessages.py`
Content:
```
1 """
2 See https://zulip.readthedocs.io/en/latest/translating/internationalization.html
3 for background.
4
5 The contents of this file are taken from
6 https://github.com/niwinz/django-jinja/blob/master/django_jinja/management/commands/makemessages.py
7
8 Jinja2's i18n functionality is not exactly the same as Django's.
9 In particular, the tags names and their syntax are different:
10
11 1. The Django ``trans`` tag is replaced by a _() global.
12 2. The Django ``blocktrans`` tag is called ``trans``.
13
14 (1) isn't an issue, since the whole ``makemessages`` process is based on
15 converting the template tags to ``_()`` calls. However, (2) means that
16 those Jinja2 ``trans`` tags will not be picked up by Django's
17 ``makemessages`` command.
18
19 There aren't any nice solutions here. While Jinja2's i18n extension does
20 come with extraction capabilities built in, the code behind ``makemessages``
21 unfortunately isn't extensible, so we can:
22
23 * Duplicate the command + code behind it.
24 * Offer a separate command for Jinja2 extraction.
25 * Try to get Django to offer hooks into makemessages().
26 * Monkey-patch.
27
28 We are currently doing that last thing. It turns out there we are lucky
29 for once: It's simply a matter of extending two regular expressions.
30 Credit for the approach goes to:
31 https://stackoverflow.com/questions/2090717
32
33 """
34 import glob
35 import itertools
36 import json
37 import os
38 import re
39 import subprocess
40 from typing import Any, Collection, Dict, Iterator, List, Mapping
41
42 from django.core.management.base import CommandParser
43 from django.core.management.commands import makemessages
44 from django.template.base import BLOCK_TAG_END, BLOCK_TAG_START
45 from django.utils.translation import template
46
47 strip_whitespace_right = re.compile(
48 f"({BLOCK_TAG_START}-?\\s*(trans|pluralize).*?-{BLOCK_TAG_END})\\s+"
49 )
50 strip_whitespace_left = re.compile(
51 f"\\s+({BLOCK_TAG_START}-\\s*(endtrans|pluralize).*?-?{BLOCK_TAG_END})"
52 )
53
54 regexes = [
55 r"{{#tr}}([\s\S]*?)(?:{{/tr}}|{{#\*inline )", # '.' doesn't match '\n' by default
56 r'{{\s*t "([\s\S]*?)"\W*}}',
57 r"{{\s*t '([\s\S]*?)'\W*}}",
58 r'\(t "([\s\S]*?)"\)',
59 r'=\(t "([\s\S]*?)"\)(?=[^{]*}})',
60 r"=\(t '([\s\S]*?)'\)(?=[^{]*}})",
61 ]
62 tags = [
63 ("err_", "error"),
64 ]
65
66 frontend_compiled_regexes = [re.compile(regex) for regex in regexes]
67 multiline_js_comment = re.compile(r"/\*.*?\*/", re.DOTALL)
68 singleline_js_comment = re.compile("//.*?\n")
69
70
71 def strip_whitespaces(src: str) -> str:
72 src = strip_whitespace_left.sub("\\1", src)
73 src = strip_whitespace_right.sub("\\1", src)
74 return src
75
76
77 class Command(makemessages.Command):
78 xgettext_options = makemessages.Command.xgettext_options
79 for func, tag in tags:
80 xgettext_options += [f'--keyword={func}:1,"{tag}"']
81
82 def add_arguments(self, parser: CommandParser) -> None:
83 super().add_arguments(parser)
84 parser.add_argument(
85 "--frontend-source",
86 default="web/templates",
87 help="Name of the Handlebars template directory",
88 )
89 parser.add_argument(
90 "--frontend-output",
91 default="locale",
92 help="Name of the frontend messages output directory",
93 )
94 parser.add_argument(
95 "--frontend-namespace",
96 default="translations.json",
97 help="Namespace of the frontend locale file",
98 )
99
100 def handle(self, *args: Any, **options: Any) -> None:
101 self.handle_django_locales(*args, **options)
102 self.handle_frontend_locales(**options)
103
104 def handle_frontend_locales(
105 self,
106 *,
107 frontend_source: str,
108 frontend_output: str,
109 frontend_namespace: str,
110 locale: List[str],
111 exclude: List[str],
112 all: bool,
113 **options: Any,
114 ) -> None:
115 self.frontend_source = frontend_source
116 self.frontend_output = frontend_output
117 self.frontend_namespace = frontend_namespace
118 self.frontend_locale = locale
119 self.frontend_exclude = exclude
120 self.frontend_all = all
121
122 translation_strings = self.get_translation_strings()
123 self.write_translation_strings(translation_strings)
124
125 def handle_django_locales(self, *args: Any, **options: Any) -> None:
126 old_endblock_re = template.endblock_re
127 old_block_re = template.block_re
128 old_constant_re = template.constant_re
129
130 old_templatize = template.templatize
131 # Extend the regular expressions that are used to detect
132 # translation blocks with an "OR jinja-syntax" clause.
133 template.endblock_re = re.compile(
134 template.endblock_re.pattern + "|" + r"""^-?\s*endtrans\s*-?$"""
135 )
136 template.block_re = re.compile(
137 template.block_re.pattern + "|" + r"""^-?\s*trans(?:\s+(?!'|")(?=.*?=.*?)|\s*-?$)"""
138 )
139 template.plural_re = re.compile(
140 template.plural_re.pattern + "|" + r"""^-?\s*pluralize(?:\s+.+|-?$)"""
141 )
142 template.constant_re = re.compile(r"""_\(((?:".*?")|(?:'.*?')).*\)""")
143
144 def my_templatize(src: str, *args: Any, **kwargs: Any) -> str:
145 new_src = strip_whitespaces(src)
146 return old_templatize(new_src, *args, **kwargs)
147
148 template.templatize = my_templatize
149
150 try:
151 ignore_patterns = options.get("ignore_patterns", [])
152 ignore_patterns.append("docs/*")
153 ignore_patterns.append("templates/zerver/emails/custom/*")
154 ignore_patterns.append("var/*")
155 options["ignore_patterns"] = ignore_patterns
156 super().handle(*args, **options)
157 finally:
158 template.endblock_re = old_endblock_re
159 template.block_re = old_block_re
160 template.templatize = old_templatize
161 template.constant_re = old_constant_re
162
163 def extract_strings(self, data: str) -> List[str]:
164 translation_strings: List[str] = []
165 for regex in frontend_compiled_regexes:
166 for match in regex.findall(data):
167 match = match.strip()
168 match = " ".join(line.strip() for line in match.splitlines())
169 translation_strings.append(match)
170
171 return translation_strings
172
173 def ignore_javascript_comments(self, data: str) -> str:
174 # Removes multi line comments.
175 data = multiline_js_comment.sub("", data)
176 # Removes single line (//) comments.
177 data = singleline_js_comment.sub("", data)
178 return data
179
180 def get_translation_strings(self) -> List[str]:
181 translation_strings: List[str] = []
182 dirname = self.get_template_dir()
183
184 for dirpath, dirnames, filenames in os.walk(dirname):
185 for filename in [f for f in filenames if f.endswith(".hbs")]:
186 if filename.startswith("."):
187 continue
188 with open(os.path.join(dirpath, filename)) as reader:
189 data = reader.read()
190 translation_strings.extend(self.extract_strings(data))
191 for dirpath, dirnames, filenames in itertools.chain(
192 os.walk("web/src"), os.walk("web/shared/src")
193 ):
194 for filename in [f for f in filenames if f.endswith((".js", ".ts"))]:
195 if filename.startswith("."):
196 continue
197 with open(os.path.join(dirpath, filename)) as reader:
198 data = reader.read()
199 data = self.ignore_javascript_comments(data)
200 translation_strings.extend(self.extract_strings(data))
201
202 extracted = subprocess.check_output(
203 [
204 "node_modules/.bin/formatjs",
205 "extract",
206 "--additional-function-names=$t,$t_html",
207 "--format=simple",
208 "--ignore=**/*.d.ts",
209 "web/src/**/*.js",
210 "web/src/**/*.ts",
211 ]
212 )
213 translation_strings.extend(json.loads(extracted).values())
214
215 return list(set(translation_strings))
216
217 def get_template_dir(self) -> str:
218 return self.frontend_source
219
220 def get_namespace(self) -> str:
221 return self.frontend_namespace
222
223 def get_locales(self) -> Collection[str]:
224 locale = self.frontend_locale
225 exclude = self.frontend_exclude
226 process_all = self.frontend_all
227
228 # After calling super().handle(), default_locale_path gets set on self
229 # so that we can reuse it here.
230 default_locale_path = self.default_locale_path # type: ignore[attr-defined] # not in stubs
231 paths = glob.glob(f"{default_locale_path}/*")
232 all_locales = [os.path.basename(path) for path in paths if os.path.isdir(path)]
233
234 # Account for excluded locales
235 if process_all:
236 return all_locales
237 else:
238 locales = locale or all_locales
239 return set(locales) - set(exclude)
240
241 def get_base_path(self) -> str:
242 return self.frontend_output
243
244 def get_output_paths(self) -> Iterator[str]:
245 base_path = self.get_base_path()
246 locales = self.get_locales()
247 for path in [os.path.join(base_path, locale) for locale in locales]:
248 if not os.path.exists(path):
249 os.makedirs(path)
250
251 yield os.path.join(path, self.get_namespace())
252
253 def get_new_strings(
254 self, old_strings: Mapping[str, str], translation_strings: List[str], locale: str
255 ) -> Dict[str, str]:
256 """
257 Missing strings are removed, new strings are added and already
258 translated strings are not touched.
259 """
260 new_strings = {} # Dict[str, str]
261 for k in translation_strings:
262 if locale == "en":
263 # For English language, translation is equal to the key.
264 new_strings[k] = old_strings.get(k, k)
265 else:
266 new_strings[k] = old_strings.get(k, "")
267
268 return new_strings
269
270 def write_translation_strings(self, translation_strings: List[str]) -> None:
271 for locale, output_path in zip(self.get_locales(), self.get_output_paths()):
272 self.stdout.write(f"[frontend] processing locale {locale}")
273 try:
274 with open(output_path) as reader:
275 old_strings = json.load(reader)
276 except (OSError, ValueError):
277 old_strings = {}
278
279 new_strings = self.get_new_strings(old_strings, translation_strings, locale)
280 with open(output_path, "w") as writer:
281 json.dump(new_strings, writer, indent=2, sort_keys=True)
282
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/zerver/management/commands/makemessages.py b/zerver/management/commands/makemessages.py
--- a/zerver/management/commands/makemessages.py
+++ b/zerver/management/commands/makemessages.py
@@ -52,9 +52,9 @@
)
regexes = [
- r"{{#tr}}([\s\S]*?)(?:{{/tr}}|{{#\*inline )", # '.' doesn't match '\n' by default
- r'{{\s*t "([\s\S]*?)"\W*}}',
- r"{{\s*t '([\s\S]*?)'\W*}}",
+ r"{{~?#tr}}([\s\S]*?)(?:~?{{/tr}}|{{#\*inline )", # '.' doesn't match '\n' by default
+ r'{{~?\s*t "([\s\S]*?)"\W*~?}}',
+ r"{{~?\s*t '([\s\S]*?)'\W*~?}}",
r'\(t "([\s\S]*?)"\)',
r'=\(t "([\s\S]*?)"\)(?=[^{]*}})',
r"=\(t '([\s\S]*?)'\)(?=[^{]*}})",
|
{"golden_diff": "diff --git a/zerver/management/commands/makemessages.py b/zerver/management/commands/makemessages.py\n--- a/zerver/management/commands/makemessages.py\n+++ b/zerver/management/commands/makemessages.py\n@@ -52,9 +52,9 @@\n )\n \n regexes = [\n- r\"{{#tr}}([\\s\\S]*?)(?:{{/tr}}|{{#\\*inline )\", # '.' doesn't match '\\n' by default\n- r'{{\\s*t \"([\\s\\S]*?)\"\\W*}}',\n- r\"{{\\s*t '([\\s\\S]*?)'\\W*}}\",\n+ r\"{{~?#tr}}([\\s\\S]*?)(?:~?{{/tr}}|{{#\\*inline )\", # '.' doesn't match '\\n' by default\n+ r'{{~?\\s*t \"([\\s\\S]*?)\"\\W*~?}}',\n+ r\"{{~?\\s*t '([\\s\\S]*?)'\\W*~?}}\",\n r'\\(t \"([\\s\\S]*?)\"\\)',\n r'=\\(t \"([\\s\\S]*?)\"\\)(?=[^{]*}})',\n r\"=\\(t '([\\s\\S]*?)'\\)(?=[^{]*}})\",\n", "issue": "Internationalization of \"Browse 1 more stream\" and two more strings\n<!-- Describe what you were expecting to see, what you saw instead, and steps to take in order to reproduce the buggy behavior. Screenshots can be helpful. -->\r\nI am missing \"Browse 1 more stream\" https://github.com/zulip/zulip/blob/81bd63cb46273b8c94ef9e92c00893ed97110119/web/templates/subscribe_to_more_streams.hbs#L4\r\nand \"Browse # more streams\" https://github.com/zulip/zulip/blob/81bd63cb46273b8c94ef9e92c00893ed97110119/web/templates/subscribe_to_more_streams.hbs#L9\r\nin the internationalization files (translation.json or django.po).\r\n\r\n\r\n\r\nSyntax above differs from syntax in\r\nhttps://github.com/zulip/zulip/blob/81bd63cb46273b8c94ef9e92c00893ed97110119/web/templates/subscribe_to_more_streams.hbs#L14\r\nwhich has an entry in translation.json.\r\n\r\nI am also missing a translation of \"Direct messages\" in the first column of recent conversations. May be it is generated by the following code and missing internationalization. https://github.com/zulip/zulip/blob/81bd63cb46273b8c94ef9e92c00893ed97110119/web/templates/recent_topic_row.hbs#L7\r\n\r\n<!-- Check the box for the version of Zulip you are using (see https://zulip.com/help/view-zulip-version).-->\r\n\r\n**Zulip Server and web app version:**\r\n\r\n- [ ] Zulip Cloud (`*.zulipchat.com`)\r\n- [x] Zulip Server 7.0+\r\n- [ ] Zulip Server 6.0+\r\n- [ ] Zulip Server 5.0 or older\r\n- [ ] Other or not sure\r\n\n", "before_files": [{"content": "\"\"\"\nSee https://zulip.readthedocs.io/en/latest/translating/internationalization.html\nfor background.\n\nThe contents of this file are taken from\nhttps://github.com/niwinz/django-jinja/blob/master/django_jinja/management/commands/makemessages.py\n\nJinja2's i18n functionality is not exactly the same as Django's.\nIn particular, the tags names and their syntax are different:\n\n 1. The Django ``trans`` tag is replaced by a _() global.\n 2. The Django ``blocktrans`` tag is called ``trans``.\n\n(1) isn't an issue, since the whole ``makemessages`` process is based on\nconverting the template tags to ``_()`` calls. However, (2) means that\nthose Jinja2 ``trans`` tags will not be picked up by Django's\n``makemessages`` command.\n\nThere aren't any nice solutions here. While Jinja2's i18n extension does\ncome with extraction capabilities built in, the code behind ``makemessages``\nunfortunately isn't extensible, so we can:\n\n * Duplicate the command + code behind it.\n * Offer a separate command for Jinja2 extraction.\n * Try to get Django to offer hooks into makemessages().\n * Monkey-patch.\n\nWe are currently doing that last thing. 
It turns out there we are lucky\nfor once: It's simply a matter of extending two regular expressions.\nCredit for the approach goes to:\nhttps://stackoverflow.com/questions/2090717\n\n\"\"\"\nimport glob\nimport itertools\nimport json\nimport os\nimport re\nimport subprocess\nfrom typing import Any, Collection, Dict, Iterator, List, Mapping\n\nfrom django.core.management.base import CommandParser\nfrom django.core.management.commands import makemessages\nfrom django.template.base import BLOCK_TAG_END, BLOCK_TAG_START\nfrom django.utils.translation import template\n\nstrip_whitespace_right = re.compile(\n f\"({BLOCK_TAG_START}-?\\\\s*(trans|pluralize).*?-{BLOCK_TAG_END})\\\\s+\"\n)\nstrip_whitespace_left = re.compile(\n f\"\\\\s+({BLOCK_TAG_START}-\\\\s*(endtrans|pluralize).*?-?{BLOCK_TAG_END})\"\n)\n\nregexes = [\n r\"{{#tr}}([\\s\\S]*?)(?:{{/tr}}|{{#\\*inline )\", # '.' doesn't match '\\n' by default\n r'{{\\s*t \"([\\s\\S]*?)\"\\W*}}',\n r\"{{\\s*t '([\\s\\S]*?)'\\W*}}\",\n r'\\(t \"([\\s\\S]*?)\"\\)',\n r'=\\(t \"([\\s\\S]*?)\"\\)(?=[^{]*}})',\n r\"=\\(t '([\\s\\S]*?)'\\)(?=[^{]*}})\",\n]\ntags = [\n (\"err_\", \"error\"),\n]\n\nfrontend_compiled_regexes = [re.compile(regex) for regex in regexes]\nmultiline_js_comment = re.compile(r\"/\\*.*?\\*/\", re.DOTALL)\nsingleline_js_comment = re.compile(\"//.*?\\n\")\n\n\ndef strip_whitespaces(src: str) -> str:\n src = strip_whitespace_left.sub(\"\\\\1\", src)\n src = strip_whitespace_right.sub(\"\\\\1\", src)\n return src\n\n\nclass Command(makemessages.Command):\n xgettext_options = makemessages.Command.xgettext_options\n for func, tag in tags:\n xgettext_options += [f'--keyword={func}:1,\"{tag}\"']\n\n def add_arguments(self, parser: CommandParser) -> None:\n super().add_arguments(parser)\n parser.add_argument(\n \"--frontend-source\",\n default=\"web/templates\",\n help=\"Name of the Handlebars template directory\",\n )\n parser.add_argument(\n \"--frontend-output\",\n default=\"locale\",\n help=\"Name of the frontend messages output directory\",\n )\n parser.add_argument(\n \"--frontend-namespace\",\n default=\"translations.json\",\n help=\"Namespace of the frontend locale file\",\n )\n\n def handle(self, *args: Any, **options: Any) -> None:\n self.handle_django_locales(*args, **options)\n self.handle_frontend_locales(**options)\n\n def handle_frontend_locales(\n self,\n *,\n frontend_source: str,\n frontend_output: str,\n frontend_namespace: str,\n locale: List[str],\n exclude: List[str],\n all: bool,\n **options: Any,\n ) -> None:\n self.frontend_source = frontend_source\n self.frontend_output = frontend_output\n self.frontend_namespace = frontend_namespace\n self.frontend_locale = locale\n self.frontend_exclude = exclude\n self.frontend_all = all\n\n translation_strings = self.get_translation_strings()\n self.write_translation_strings(translation_strings)\n\n def handle_django_locales(self, *args: Any, **options: Any) -> None:\n old_endblock_re = template.endblock_re\n old_block_re = template.block_re\n old_constant_re = template.constant_re\n\n old_templatize = template.templatize\n # Extend the regular expressions that are used to detect\n # translation blocks with an \"OR jinja-syntax\" clause.\n template.endblock_re = re.compile(\n template.endblock_re.pattern + \"|\" + r\"\"\"^-?\\s*endtrans\\s*-?$\"\"\"\n )\n template.block_re = re.compile(\n template.block_re.pattern + \"|\" + r\"\"\"^-?\\s*trans(?:\\s+(?!'|\")(?=.*?=.*?)|\\s*-?$)\"\"\"\n )\n template.plural_re = re.compile(\n template.plural_re.pattern + \"|\" + 
r\"\"\"^-?\\s*pluralize(?:\\s+.+|-?$)\"\"\"\n )\n template.constant_re = re.compile(r\"\"\"_\\(((?:\".*?\")|(?:'.*?')).*\\)\"\"\")\n\n def my_templatize(src: str, *args: Any, **kwargs: Any) -> str:\n new_src = strip_whitespaces(src)\n return old_templatize(new_src, *args, **kwargs)\n\n template.templatize = my_templatize\n\n try:\n ignore_patterns = options.get(\"ignore_patterns\", [])\n ignore_patterns.append(\"docs/*\")\n ignore_patterns.append(\"templates/zerver/emails/custom/*\")\n ignore_patterns.append(\"var/*\")\n options[\"ignore_patterns\"] = ignore_patterns\n super().handle(*args, **options)\n finally:\n template.endblock_re = old_endblock_re\n template.block_re = old_block_re\n template.templatize = old_templatize\n template.constant_re = old_constant_re\n\n def extract_strings(self, data: str) -> List[str]:\n translation_strings: List[str] = []\n for regex in frontend_compiled_regexes:\n for match in regex.findall(data):\n match = match.strip()\n match = \" \".join(line.strip() for line in match.splitlines())\n translation_strings.append(match)\n\n return translation_strings\n\n def ignore_javascript_comments(self, data: str) -> str:\n # Removes multi line comments.\n data = multiline_js_comment.sub(\"\", data)\n # Removes single line (//) comments.\n data = singleline_js_comment.sub(\"\", data)\n return data\n\n def get_translation_strings(self) -> List[str]:\n translation_strings: List[str] = []\n dirname = self.get_template_dir()\n\n for dirpath, dirnames, filenames in os.walk(dirname):\n for filename in [f for f in filenames if f.endswith(\".hbs\")]:\n if filename.startswith(\".\"):\n continue\n with open(os.path.join(dirpath, filename)) as reader:\n data = reader.read()\n translation_strings.extend(self.extract_strings(data))\n for dirpath, dirnames, filenames in itertools.chain(\n os.walk(\"web/src\"), os.walk(\"web/shared/src\")\n ):\n for filename in [f for f in filenames if f.endswith((\".js\", \".ts\"))]:\n if filename.startswith(\".\"):\n continue\n with open(os.path.join(dirpath, filename)) as reader:\n data = reader.read()\n data = self.ignore_javascript_comments(data)\n translation_strings.extend(self.extract_strings(data))\n\n extracted = subprocess.check_output(\n [\n \"node_modules/.bin/formatjs\",\n \"extract\",\n \"--additional-function-names=$t,$t_html\",\n \"--format=simple\",\n \"--ignore=**/*.d.ts\",\n \"web/src/**/*.js\",\n \"web/src/**/*.ts\",\n ]\n )\n translation_strings.extend(json.loads(extracted).values())\n\n return list(set(translation_strings))\n\n def get_template_dir(self) -> str:\n return self.frontend_source\n\n def get_namespace(self) -> str:\n return self.frontend_namespace\n\n def get_locales(self) -> Collection[str]:\n locale = self.frontend_locale\n exclude = self.frontend_exclude\n process_all = self.frontend_all\n\n # After calling super().handle(), default_locale_path gets set on self\n # so that we can reuse it here.\n default_locale_path = self.default_locale_path # type: ignore[attr-defined] # not in stubs\n paths = glob.glob(f\"{default_locale_path}/*\")\n all_locales = [os.path.basename(path) for path in paths if os.path.isdir(path)]\n\n # Account for excluded locales\n if process_all:\n return all_locales\n else:\n locales = locale or all_locales\n return set(locales) - set(exclude)\n\n def get_base_path(self) -> str:\n return self.frontend_output\n\n def get_output_paths(self) -> Iterator[str]:\n base_path = self.get_base_path()\n locales = self.get_locales()\n for path in [os.path.join(base_path, locale) for locale in 
locales]:\n if not os.path.exists(path):\n os.makedirs(path)\n\n yield os.path.join(path, self.get_namespace())\n\n def get_new_strings(\n self, old_strings: Mapping[str, str], translation_strings: List[str], locale: str\n ) -> Dict[str, str]:\n \"\"\"\n Missing strings are removed, new strings are added and already\n translated strings are not touched.\n \"\"\"\n new_strings = {} # Dict[str, str]\n for k in translation_strings:\n if locale == \"en\":\n # For English language, translation is equal to the key.\n new_strings[k] = old_strings.get(k, k)\n else:\n new_strings[k] = old_strings.get(k, \"\")\n\n return new_strings\n\n def write_translation_strings(self, translation_strings: List[str]) -> None:\n for locale, output_path in zip(self.get_locales(), self.get_output_paths()):\n self.stdout.write(f\"[frontend] processing locale {locale}\")\n try:\n with open(output_path) as reader:\n old_strings = json.load(reader)\n except (OSError, ValueError):\n old_strings = {}\n\n new_strings = self.get_new_strings(old_strings, translation_strings, locale)\n with open(output_path, \"w\") as writer:\n json.dump(new_strings, writer, indent=2, sort_keys=True)\n", "path": "zerver/management/commands/makemessages.py"}], "after_files": [{"content": "\"\"\"\nSee https://zulip.readthedocs.io/en/latest/translating/internationalization.html\nfor background.\n\nThe contents of this file are taken from\nhttps://github.com/niwinz/django-jinja/blob/master/django_jinja/management/commands/makemessages.py\n\nJinja2's i18n functionality is not exactly the same as Django's.\nIn particular, the tags names and their syntax are different:\n\n 1. The Django ``trans`` tag is replaced by a _() global.\n 2. The Django ``blocktrans`` tag is called ``trans``.\n\n(1) isn't an issue, since the whole ``makemessages`` process is based on\nconverting the template tags to ``_()`` calls. However, (2) means that\nthose Jinja2 ``trans`` tags will not be picked up by Django's\n``makemessages`` command.\n\nThere aren't any nice solutions here. While Jinja2's i18n extension does\ncome with extraction capabilities built in, the code behind ``makemessages``\nunfortunately isn't extensible, so we can:\n\n * Duplicate the command + code behind it.\n * Offer a separate command for Jinja2 extraction.\n * Try to get Django to offer hooks into makemessages().\n * Monkey-patch.\n\nWe are currently doing that last thing. It turns out there we are lucky\nfor once: It's simply a matter of extending two regular expressions.\nCredit for the approach goes to:\nhttps://stackoverflow.com/questions/2090717\n\n\"\"\"\nimport glob\nimport itertools\nimport json\nimport os\nimport re\nimport subprocess\nfrom typing import Any, Collection, Dict, Iterator, List, Mapping\n\nfrom django.core.management.base import CommandParser\nfrom django.core.management.commands import makemessages\nfrom django.template.base import BLOCK_TAG_END, BLOCK_TAG_START\nfrom django.utils.translation import template\n\nstrip_whitespace_right = re.compile(\n f\"({BLOCK_TAG_START}-?\\\\s*(trans|pluralize).*?-{BLOCK_TAG_END})\\\\s+\"\n)\nstrip_whitespace_left = re.compile(\n f\"\\\\s+({BLOCK_TAG_START}-\\\\s*(endtrans|pluralize).*?-?{BLOCK_TAG_END})\"\n)\n\nregexes = [\n r\"{{~?#tr}}([\\s\\S]*?)(?:~?{{/tr}}|{{#\\*inline )\", # '.' 
doesn't match '\\n' by default\n r'{{~?\\s*t \"([\\s\\S]*?)\"\\W*~?}}',\n r\"{{~?\\s*t '([\\s\\S]*?)'\\W*~?}}\",\n r'\\(t \"([\\s\\S]*?)\"\\)',\n r'=\\(t \"([\\s\\S]*?)\"\\)(?=[^{]*}})',\n r\"=\\(t '([\\s\\S]*?)'\\)(?=[^{]*}})\",\n]\ntags = [\n (\"err_\", \"error\"),\n]\n\nfrontend_compiled_regexes = [re.compile(regex) for regex in regexes]\nmultiline_js_comment = re.compile(r\"/\\*.*?\\*/\", re.DOTALL)\nsingleline_js_comment = re.compile(\"//.*?\\n\")\n\n\ndef strip_whitespaces(src: str) -> str:\n src = strip_whitespace_left.sub(\"\\\\1\", src)\n src = strip_whitespace_right.sub(\"\\\\1\", src)\n return src\n\n\nclass Command(makemessages.Command):\n xgettext_options = makemessages.Command.xgettext_options\n for func, tag in tags:\n xgettext_options += [f'--keyword={func}:1,\"{tag}\"']\n\n def add_arguments(self, parser: CommandParser) -> None:\n super().add_arguments(parser)\n parser.add_argument(\n \"--frontend-source\",\n default=\"web/templates\",\n help=\"Name of the Handlebars template directory\",\n )\n parser.add_argument(\n \"--frontend-output\",\n default=\"locale\",\n help=\"Name of the frontend messages output directory\",\n )\n parser.add_argument(\n \"--frontend-namespace\",\n default=\"translations.json\",\n help=\"Namespace of the frontend locale file\",\n )\n\n def handle(self, *args: Any, **options: Any) -> None:\n self.handle_django_locales(*args, **options)\n self.handle_frontend_locales(**options)\n\n def handle_frontend_locales(\n self,\n *,\n frontend_source: str,\n frontend_output: str,\n frontend_namespace: str,\n locale: List[str],\n exclude: List[str],\n all: bool,\n **options: Any,\n ) -> None:\n self.frontend_source = frontend_source\n self.frontend_output = frontend_output\n self.frontend_namespace = frontend_namespace\n self.frontend_locale = locale\n self.frontend_exclude = exclude\n self.frontend_all = all\n\n translation_strings = self.get_translation_strings()\n self.write_translation_strings(translation_strings)\n\n def handle_django_locales(self, *args: Any, **options: Any) -> None:\n old_endblock_re = template.endblock_re\n old_block_re = template.block_re\n old_constant_re = template.constant_re\n\n old_templatize = template.templatize\n # Extend the regular expressions that are used to detect\n # translation blocks with an \"OR jinja-syntax\" clause.\n template.endblock_re = re.compile(\n template.endblock_re.pattern + \"|\" + r\"\"\"^-?\\s*endtrans\\s*-?$\"\"\"\n )\n template.block_re = re.compile(\n template.block_re.pattern + \"|\" + r\"\"\"^-?\\s*trans(?:\\s+(?!'|\")(?=.*?=.*?)|\\s*-?$)\"\"\"\n )\n template.plural_re = re.compile(\n template.plural_re.pattern + \"|\" + r\"\"\"^-?\\s*pluralize(?:\\s+.+|-?$)\"\"\"\n )\n template.constant_re = re.compile(r\"\"\"_\\(((?:\".*?\")|(?:'.*?')).*\\)\"\"\")\n\n def my_templatize(src: str, *args: Any, **kwargs: Any) -> str:\n new_src = strip_whitespaces(src)\n return old_templatize(new_src, *args, **kwargs)\n\n template.templatize = my_templatize\n\n try:\n ignore_patterns = options.get(\"ignore_patterns\", [])\n ignore_patterns.append(\"docs/*\")\n ignore_patterns.append(\"templates/zerver/emails/custom/*\")\n ignore_patterns.append(\"var/*\")\n options[\"ignore_patterns\"] = ignore_patterns\n super().handle(*args, **options)\n finally:\n template.endblock_re = old_endblock_re\n template.block_re = old_block_re\n template.templatize = old_templatize\n template.constant_re = old_constant_re\n\n def extract_strings(self, data: str) -> List[str]:\n translation_strings: List[str] = []\n for regex in 
frontend_compiled_regexes:\n for match in regex.findall(data):\n match = match.strip()\n match = \" \".join(line.strip() for line in match.splitlines())\n translation_strings.append(match)\n\n return translation_strings\n\n def ignore_javascript_comments(self, data: str) -> str:\n # Removes multi line comments.\n data = multiline_js_comment.sub(\"\", data)\n # Removes single line (//) comments.\n data = singleline_js_comment.sub(\"\", data)\n return data\n\n def get_translation_strings(self) -> List[str]:\n translation_strings: List[str] = []\n dirname = self.get_template_dir()\n\n for dirpath, dirnames, filenames in os.walk(dirname):\n for filename in [f for f in filenames if f.endswith(\".hbs\")]:\n if filename.startswith(\".\"):\n continue\n with open(os.path.join(dirpath, filename)) as reader:\n data = reader.read()\n translation_strings.extend(self.extract_strings(data))\n for dirpath, dirnames, filenames in itertools.chain(\n os.walk(\"web/src\"), os.walk(\"web/shared/src\")\n ):\n for filename in [f for f in filenames if f.endswith((\".js\", \".ts\"))]:\n if filename.startswith(\".\"):\n continue\n with open(os.path.join(dirpath, filename)) as reader:\n data = reader.read()\n data = self.ignore_javascript_comments(data)\n translation_strings.extend(self.extract_strings(data))\n\n extracted = subprocess.check_output(\n [\n \"node_modules/.bin/formatjs\",\n \"extract\",\n \"--additional-function-names=$t,$t_html\",\n \"--format=simple\",\n \"--ignore=**/*.d.ts\",\n \"web/src/**/*.js\",\n \"web/src/**/*.ts\",\n ]\n )\n translation_strings.extend(json.loads(extracted).values())\n\n return list(set(translation_strings))\n\n def get_template_dir(self) -> str:\n return self.frontend_source\n\n def get_namespace(self) -> str:\n return self.frontend_namespace\n\n def get_locales(self) -> Collection[str]:\n locale = self.frontend_locale\n exclude = self.frontend_exclude\n process_all = self.frontend_all\n\n # After calling super().handle(), default_locale_path gets set on self\n # so that we can reuse it here.\n default_locale_path = self.default_locale_path # type: ignore[attr-defined] # not in stubs\n paths = glob.glob(f\"{default_locale_path}/*\")\n all_locales = [os.path.basename(path) for path in paths if os.path.isdir(path)]\n\n # Account for excluded locales\n if process_all:\n return all_locales\n else:\n locales = locale or all_locales\n return set(locales) - set(exclude)\n\n def get_base_path(self) -> str:\n return self.frontend_output\n\n def get_output_paths(self) -> Iterator[str]:\n base_path = self.get_base_path()\n locales = self.get_locales()\n for path in [os.path.join(base_path, locale) for locale in locales]:\n if not os.path.exists(path):\n os.makedirs(path)\n\n yield os.path.join(path, self.get_namespace())\n\n def get_new_strings(\n self, old_strings: Mapping[str, str], translation_strings: List[str], locale: str\n ) -> Dict[str, str]:\n \"\"\"\n Missing strings are removed, new strings are added and already\n translated strings are not touched.\n \"\"\"\n new_strings = {} # Dict[str, str]\n for k in translation_strings:\n if locale == \"en\":\n # For English language, translation is equal to the key.\n new_strings[k] = old_strings.get(k, k)\n else:\n new_strings[k] = old_strings.get(k, \"\")\n\n return new_strings\n\n def write_translation_strings(self, translation_strings: List[str]) -> None:\n for locale, output_path in zip(self.get_locales(), self.get_output_paths()):\n self.stdout.write(f\"[frontend] processing locale {locale}\")\n try:\n with open(output_path) as 
reader:\n old_strings = json.load(reader)\n except (OSError, ValueError):\n old_strings = {}\n\n new_strings = self.get_new_strings(old_strings, translation_strings, locale)\n with open(output_path, \"w\") as writer:\n json.dump(new_strings, writer, indent=2, sort_keys=True)\n", "path": "zerver/management/commands/makemessages.py"}]}
| 3,955 | 293 |
gh_patches_debug_33090 | rasdani/github-patches | git_diff | psychopy__psychopy-947 |
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Polygon setEdges does not update the ShapeStim vertices
If I make a polygon object:
``` python
poly = visual.Polygon(win, edges=3, lineWidth=3, radius=3)
poly.draw()
win.flip()
```
and then want to change the shape on the fly in code, I would have thought I would do:
``` python
poly.setEdges(5)
poly.draw()
win.flip()
```
This doesn't actually change the shape that gets shown though, but the following code does:
``` python
poly.setEdges(5)
poly.setVertices(poly.vertices)
poly.draw()
win.flip()
```
I think this is because `poly.setEdges` calls `poly._calcVertices` which sets the `poly.vertices` attribute, but `poly.setEdges` doesn't pass the new array to the `poly.setVertices` method, which I gather is inherited from `ShapeStim`.
--- END ISSUE ---
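The diagnosis above points at a one-line omission: the `edges` setter recomputes the vertex array but never hands it to `ShapeStim`. A minimal sketch of the kind of change that would address this — mirroring what the `radius` setter in the same class already does — is shown below; this is an illustrative sketch against the file that follows, not necessarily the exact patch.
```python
# Inside psychopy/visual/polygon.py, class Polygon (sketch):
@attributeSetter
def edges(self, edges):
    """Int or float. Number of edges of the polygon. Floats are rounded to int.
    :ref:`Operations <attrib-operations>` supported."""
    self.__dict__['edges'] = edges
    self._calcVertices()
    # Push the freshly computed vertices through ShapeStim.setVertices so the
    # next draw() actually uses the new shape (as the radius setter already does).
    self.setVertices(self.vertices, log=False)
```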
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `psychopy/visual/polygon.py`
Content:
```
1
2 #!/usr/bin/env python2
3
4 '''Creates a regular polygon (triangles, pentagrams, ...)
5 as a special case of a :class:`~psychopy.visual.ShapeStim`'''
6
7 # Part of the PsychoPy library
8 # Copyright (C) 2015 Jonathan Peirce
9 # Distributed under the terms of the GNU General Public License (GPL).
10
11 import psychopy # so we can get the __path__
12
13 from psychopy.visual.shape import ShapeStim
14 from psychopy.tools.attributetools import attributeSetter, setAttribute
15
16 import numpy
17
18
19 class Polygon(ShapeStim):
20 """Creates a regular polygon (triangles, pentagrams, ...) as a special case of a :class:`~psychopy.visual.ShapeStim`
21
22 (New in version 1.72.00)
23 """
24 def __init__(self, win, edges=3, radius=.5, **kwargs):
25 """
26 Polygon accepts all input parameters that :class:`~psychopy.visual.ShapeStim` accepts, except for vertices and closeShape.
27 """
28 #what local vars are defined (these are the init params) for use by __repr__
29 self._initParams = dir()
30 self._initParams.remove('self')
31 #kwargs isn't a parameter, but a list of params
32 self._initParams.remove('kwargs')
33 self._initParams.extend(kwargs)
34 self.autoLog = False #but will be changed if needed at end of init
35 self.__dict__['edges'] = edges
36 self.radius = numpy.asarray(radius)
37 self._calcVertices()
38 kwargs['closeShape'] = True # Make sure nobody messes around here
39 kwargs['vertices'] = self.vertices
40 super(Polygon, self).__init__(win, **kwargs)
41
42 def _calcVertices(self):
43 d = numpy.pi*2/ self.edges
44 self.vertices = numpy.asarray([
45 numpy.asarray(
46 (numpy.sin(e*d), numpy.cos(e*d))
47 ) * self.radius
48 for e in xrange(int(round(self.edges)))
49 ])
50
51 @attributeSetter
52 def edges(self, edges):
53 """Int or float. Number of edges of the polygon. Floats are rounded to int.
54 :ref:`Operations <attrib-operations>` supported."""
55 self.__dict__['edges'] = edges
56 self._calcVertices()
57 def setEdges(self, edges, operation='', log=None):
58 """Usually you can use 'stim.attribute = value' syntax instead,
59 but use this method if you need to suppress the log message"""
60 setAttribute(self, 'edges', edges, log, operation)
61
62 @attributeSetter
63 def radius(self, radius):
64 """float, int, tuple, list or 2x1 array
65 Radius of the Polygon (distance from the center to the corners).
66 May be a -2tuple or list to stretch the polygon asymmetrically.
67
68 :ref:`Operations <attrib-operations>` supported.
69
70 Usually there's a setAttribute(value, log=False) method for each attribute. Use this if you want to disable logging."""
71 self.__dict__['radius'] = numpy.array(radius)
72 self._calcVertices()
73 self.setVertices(self.vertices, log=False)
74 def setRadius(self, radius, operation='', log=None):
75 """Usually you can use 'stim.attribute = value' syntax instead,
76 but use this method if you need to suppress the log message"""
77 setAttribute(self, 'radius', radius, log, operation)
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
|
diff --git a/psychopy/visual/polygon.py b/psychopy/visual/polygon.py
--- a/psychopy/visual/polygon.py
+++ b/psychopy/visual/polygon.py
@@ -47,13 +47,14 @@
) * self.radius
for e in xrange(int(round(self.edges)))
])
-
+
@attributeSetter
def edges(self, edges):
"""Int or float. Number of edges of the polygon. Floats are rounded to int.
:ref:`Operations <attrib-operations>` supported."""
self.__dict__['edges'] = edges
self._calcVertices()
+ self.setVertices(self.vertices, log=False)
def setEdges(self, edges, operation='', log=None):
"""Usually you can use 'stim.attribute = value' syntax instead,
but use this method if you need to suppress the log message"""
@@ -66,7 +67,7 @@
May be a -2tuple or list to stretch the polygon asymmetrically.
:ref:`Operations <attrib-operations>` supported.
-
+
Usually there's a setAttribute(value, log=False) method for each attribute. Use this if you want to disable logging."""
self.__dict__['radius'] = numpy.array(radius)
self._calcVertices()
@@ -74,4 +75,4 @@
def setRadius(self, radius, operation='', log=None):
"""Usually you can use 'stim.attribute = value' syntax instead,
but use this method if you need to suppress the log message"""
- setAttribute(self, 'radius', radius, log, operation)
\ No newline at end of file
+ setAttribute(self, 'radius', radius, log, operation)
|
{"golden_diff": "diff --git a/psychopy/visual/polygon.py b/psychopy/visual/polygon.py\n--- a/psychopy/visual/polygon.py\n+++ b/psychopy/visual/polygon.py\n@@ -47,13 +47,14 @@\n ) * self.radius\n for e in xrange(int(round(self.edges)))\n ])\n- \n+\n @attributeSetter\n def edges(self, edges):\n \"\"\"Int or float. Number of edges of the polygon. Floats are rounded to int.\n :ref:`Operations <attrib-operations>` supported.\"\"\"\n self.__dict__['edges'] = edges\n self._calcVertices()\n+ self.setVertices(self.vertices, log=False)\n def setEdges(self, edges, operation='', log=None):\n \"\"\"Usually you can use 'stim.attribute = value' syntax instead,\n but use this method if you need to suppress the log message\"\"\"\n@@ -66,7 +67,7 @@\n May be a -2tuple or list to stretch the polygon asymmetrically.\n \n :ref:`Operations <attrib-operations>` supported.\n- \n+\n Usually there's a setAttribute(value, log=False) method for each attribute. Use this if you want to disable logging.\"\"\"\n self.__dict__['radius'] = numpy.array(radius)\n self._calcVertices()\n@@ -74,4 +75,4 @@\n def setRadius(self, radius, operation='', log=None):\n \"\"\"Usually you can use 'stim.attribute = value' syntax instead,\n but use this method if you need to suppress the log message\"\"\"\n- setAttribute(self, 'radius', radius, log, operation)\n\\ No newline at end of file\n+ setAttribute(self, 'radius', radius, log, operation)\n", "issue": "Polygon setEdges does not update the ShapeStim vertices\nIf I make a polygon object:\n\n``` python\npoly = visual.Polygon(win, edges=3, lineWidth=3, radius=3)\npoly.draw()\nwin.flip()\n```\n\nand then want to change the shape on the fly in code, I would have though I would do:\n\n``` python\npoly.setEdges(5)\npoly.draw()\nwin.flip()\n```\n\nThis doesn't actually change the shape that gets shown though, but the following code does:\n\n``` python\npoly.setEdges(5)\npoly.setVertices(poly.vertices)\npoly.draw()\nwin.flip()\n```\n\nI think this is because `poly.setEdges` calls `poly._calcVertices` which sets the `poly.vertices` attribute, but `poly.setEdges` doesn't pass the new array to the `poly.setVertices` method, which I gather is inherited from `ShapeStim`.\n\n", "before_files": [{"content": "\n#!/usr/bin/env python2\n\n'''Creates a regular polygon (triangles, pentagrams, ...)\nas a special case of a :class:`~psychopy.visual.ShapeStim`'''\n\n# Part of the PsychoPy library\n# Copyright (C) 2015 Jonathan Peirce\n# Distributed under the terms of the GNU General Public License (GPL).\n\nimport psychopy # so we can get the __path__\n\nfrom psychopy.visual.shape import ShapeStim\nfrom psychopy.tools.attributetools import attributeSetter, setAttribute\n\nimport numpy\n\n\nclass Polygon(ShapeStim):\n \"\"\"Creates a regular polygon (triangles, pentagrams, ...) 
as a special case of a :class:`~psychopy.visual.ShapeStim`\n\n (New in version 1.72.00)\n \"\"\"\n def __init__(self, win, edges=3, radius=.5, **kwargs):\n \"\"\"\n Polygon accepts all input parameters that :class:`~psychopy.visual.ShapeStim` accepts, except for vertices and closeShape.\n \"\"\"\n #what local vars are defined (these are the init params) for use by __repr__\n self._initParams = dir()\n self._initParams.remove('self')\n #kwargs isn't a parameter, but a list of params\n self._initParams.remove('kwargs')\n self._initParams.extend(kwargs)\n self.autoLog = False #but will be changed if needed at end of init\n self.__dict__['edges'] = edges\n self.radius = numpy.asarray(radius)\n self._calcVertices()\n kwargs['closeShape'] = True # Make sure nobody messes around here\n kwargs['vertices'] = self.vertices\n super(Polygon, self).__init__(win, **kwargs)\n\n def _calcVertices(self):\n d = numpy.pi*2/ self.edges\n self.vertices = numpy.asarray([\n numpy.asarray(\n (numpy.sin(e*d), numpy.cos(e*d))\n ) * self.radius\n for e in xrange(int(round(self.edges)))\n ])\n \n @attributeSetter\n def edges(self, edges):\n \"\"\"Int or float. Number of edges of the polygon. Floats are rounded to int.\n :ref:`Operations <attrib-operations>` supported.\"\"\"\n self.__dict__['edges'] = edges\n self._calcVertices()\n def setEdges(self, edges, operation='', log=None):\n \"\"\"Usually you can use 'stim.attribute = value' syntax instead,\n but use this method if you need to suppress the log message\"\"\"\n setAttribute(self, 'edges', edges, log, operation)\n\n @attributeSetter\n def radius(self, radius):\n \"\"\"float, int, tuple, list or 2x1 array\n Radius of the Polygon (distance from the center to the corners).\n May be a -2tuple or list to stretch the polygon asymmetrically.\n\n :ref:`Operations <attrib-operations>` supported.\n \n Usually there's a setAttribute(value, log=False) method for each attribute. Use this if you want to disable logging.\"\"\"\n self.__dict__['radius'] = numpy.array(radius)\n self._calcVertices()\n self.setVertices(self.vertices, log=False)\n def setRadius(self, radius, operation='', log=None):\n \"\"\"Usually you can use 'stim.attribute = value' syntax instead,\n but use this method if you need to suppress the log message\"\"\"\n setAttribute(self, 'radius', radius, log, operation)", "path": "psychopy/visual/polygon.py"}], "after_files": [{"content": "\n#!/usr/bin/env python2\n\n'''Creates a regular polygon (triangles, pentagrams, ...)\nas a special case of a :class:`~psychopy.visual.ShapeStim`'''\n\n# Part of the PsychoPy library\n# Copyright (C) 2015 Jonathan Peirce\n# Distributed under the terms of the GNU General Public License (GPL).\n\nimport psychopy # so we can get the __path__\n\nfrom psychopy.visual.shape import ShapeStim\nfrom psychopy.tools.attributetools import attributeSetter, setAttribute\n\nimport numpy\n\n\nclass Polygon(ShapeStim):\n \"\"\"Creates a regular polygon (triangles, pentagrams, ...) 
as a special case of a :class:`~psychopy.visual.ShapeStim`\n\n (New in version 1.72.00)\n \"\"\"\n def __init__(self, win, edges=3, radius=.5, **kwargs):\n \"\"\"\n Polygon accepts all input parameters that :class:`~psychopy.visual.ShapeStim` accepts, except for vertices and closeShape.\n \"\"\"\n #what local vars are defined (these are the init params) for use by __repr__\n self._initParams = dir()\n self._initParams.remove('self')\n #kwargs isn't a parameter, but a list of params\n self._initParams.remove('kwargs')\n self._initParams.extend(kwargs)\n self.autoLog = False #but will be changed if needed at end of init\n self.__dict__['edges'] = edges\n self.radius = numpy.asarray(radius)\n self._calcVertices()\n kwargs['closeShape'] = True # Make sure nobody messes around here\n kwargs['vertices'] = self.vertices\n super(Polygon, self).__init__(win, **kwargs)\n\n def _calcVertices(self):\n d = numpy.pi*2/ self.edges\n self.vertices = numpy.asarray([\n numpy.asarray(\n (numpy.sin(e*d), numpy.cos(e*d))\n ) * self.radius\n for e in xrange(int(round(self.edges)))\n ])\n\n @attributeSetter\n def edges(self, edges):\n \"\"\"Int or float. Number of edges of the polygon. Floats are rounded to int.\n :ref:`Operations <attrib-operations>` supported.\"\"\"\n self.__dict__['edges'] = edges\n self._calcVertices()\n self.setVertices(self.vertices, log=False)\n def setEdges(self, edges, operation='', log=None):\n \"\"\"Usually you can use 'stim.attribute = value' syntax instead,\n but use this method if you need to suppress the log message\"\"\"\n setAttribute(self, 'edges', edges, log, operation)\n\n @attributeSetter\n def radius(self, radius):\n \"\"\"float, int, tuple, list or 2x1 array\n Radius of the Polygon (distance from the center to the corners).\n May be a -2tuple or list to stretch the polygon asymmetrically.\n\n :ref:`Operations <attrib-operations>` supported.\n\n Usually there's a setAttribute(value, log=False) method for each attribute. Use this if you want to disable logging.\"\"\"\n self.__dict__['radius'] = numpy.array(radius)\n self._calcVertices()\n self.setVertices(self.vertices, log=False)\n def setRadius(self, radius, operation='', log=None):\n \"\"\"Usually you can use 'stim.attribute = value' syntax instead,\n but use this method if you need to suppress the log message\"\"\"\n setAttribute(self, 'radius', radius, log, operation)\n", "path": "psychopy/visual/polygon.py"}]}
| 1,333 | 374 |