| problem_id (stringlengths 18-21) | source (stringclasses 1) | task_type (stringclasses 1) | in_source_id (stringlengths 13-54) | prompt (stringlengths 1.28k-64.2k) | golden_diff (stringlengths 166-811) | verification_info (stringlengths 604-118k) |
---|---|---|---|---|---|---|
gh_patches_debug_1100 | rasdani/github-patches | git_diff | ultrabug__py3status-2023 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Importing module from entry_point doesn't work the same as importing from .i3/py3status/
I've written a small http monitoring module that I want to call multiple times. This works if I put the module in `~/.i3/py3status`.
With a config block like:
```
...
order += "http_monitor apache"
order += "http_monitor medusa"
http_monitor 'apache' {
service_location = "http://host:81"
service_name = '🪶'
}
http_monitor 'medusa' {
service_location = "http://host:8081"
service_name = '🐍'
}
...
```
Working from `~/.i3/py3status/`, the py3status log shows:
```
2021-03-06 22:38:53 INFO modules include paths: [PosixPath('/home/j/.i3/py3status')]
2021-03-06 22:38:53 INFO available module from /home/j/.i3/py3status: http_monitor
2021-03-06 22:38:53 INFO loading module "http_monitor apache" from /home/j/.i3/py3status/http_monitor.py
2021-03-06 22:38:53 INFO loading module "http_monitor medusa" from /home/j/.i3/py3status/http_monitor.py
```
So this method has been working correctly for quite some time for me.
However, I wanted to package this as an Arch package (AUR), and for packages installed with `pacman` it's generally best practice to never put any files in the user's home directory.
So I figured I'd just convert my module to use the `entry_point` since this has worked for some of the other modules I've written for py3status and built Arch packages for. But I'm getting an error trying to pass it parameters when importing it this way.
```
2021-03-06 22:56:33 INFO available module from entry_point: http_monitor
2021-03-06 22:56:33 INFO Module `http_monitor apache` could not be loaded (unsupported operand type(s) for /: 'PosixPath' and 'type')
2021-03-06 22:56:33 INFO unsupported operand type(s) for /: 'PosixPath' and 'type'
2021-03-06 22:56:33 INFO Module `http_monitor medusa` could not be loaded (unsupported operand type(s) for /: 'PosixPath' and 'type')
2021-03-06 22:56:33 INFO unsupported operand type(s) for /: 'PosixPath' and 'type'
```
The module works correctly if I don't pass it a parameter using the `entry_point`, but then I can only have 1 instance of it running.
Any ideas 💭
--- END ISSUE ---
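The error above is consistent with a key mismatch in how configured instances are looked up: modules discovered from entry points are stored under their bare name (e.g. `http_monitor`), while the config produces instance names such as `http_monitor apache`. A minimal sketch of that mismatch, using a hypothetical stand-in class rather than py3status code:

```python
# Hypothetical sketch of the lookup mismatch (stand-in class, not py3status code).
class Py3status:
    pass


# Entry-point discovery keys modules by their bare name...
user_modules = {"http_monitor": ("entry_point", Py3status)}

# ...but the config produces instance names with a parameter appended.
for module in ["http_monitor apache", "http_monitor medusa"]:
    print(module, "->", user_modules.get(module))                # None: key mismatch
    print(module, "->", user_modules.get(module.split(" ")[0]))  # ('entry_point', <class ...>)
```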
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `py3status/core.py`
Content:
```
1 import pkg_resources
2 import sys
3 import time
4
5 from collections import deque
6 from json import dumps
7 from pathlib import Path
8 from pprint import pformat
9 from signal import signal, SIGTERM, SIGUSR1, SIGTSTP, SIGCONT
10 from subprocess import Popen
11 from threading import Event, Thread
12 from syslog import syslog, LOG_ERR, LOG_INFO, LOG_WARNING
13 from traceback import extract_tb, format_tb, format_stack
14
15 from py3status.command import CommandServer
16 from py3status.events import Events
17 from py3status.formatter import expand_color
18 from py3status.helpers import print_stderr
19 from py3status.i3status import I3status
20 from py3status.parse_config import process_config
21 from py3status.module import Module
22 from py3status.profiling import profile
23 from py3status.udev_monitor import UdevMonitor
24
25 LOG_LEVELS = {"error": LOG_ERR, "warning": LOG_WARNING, "info": LOG_INFO}
26
27 DBUS_LEVELS = {"error": "critical", "warning": "normal", "info": "low"}
28
29 CONFIG_SPECIAL_SECTIONS = [
30 ".group_extras",
31 ".module_groups",
32 "general",
33 "i3s_modules",
34 "on_click",
35 "order",
36 "py3_modules",
37 "py3status",
38 ]
39
40 ENTRY_POINT_NAME = "py3status"
41 ENTRY_POINT_KEY = "entry_point"
42
43
44 class Runner(Thread):
45 """
46 A Simple helper to run a module in a Thread so it is non-locking.
47 """
48
49 def __init__(self, module, py3_wrapper, module_name):
50 Thread.__init__(self)
51 self.daemon = True
52 self.module = module
53 self.module_name = module_name
54 self.py3_wrapper = py3_wrapper
55 self.start()
56
57 def run(self):
58 try:
59 self.module.run()
60 except: # noqa e722
61 self.py3_wrapper.report_exception("Runner")
62 # the module is no longer running so notify the timeout logic
63 if self.module_name:
64 self.py3_wrapper.timeout_finished.append(self.module_name)
65
66
67 class NoneSetting:
68 """
69 This class represents no setting in the config.
70 """
71
72 # this attribute is used to identify that this is a none setting
73 none_setting = True
74
75 def __len__(self):
76 return 0
77
78 def __repr__(self):
79 # this is for output via module_test
80 return "None"
81
82
83 class Task:
84 """
85 A simple task that can be run by the scheduler.
86 """
87
88 def run(self):
89 # F901 'raise NotImplemented' should be 'raise NotImplementedError'
90 raise NotImplemented() # noqa f901
91
92
93 class CheckI3StatusThread(Task):
94 """
95 Checks that the i3status thread is alive
96 """
97
98 def __init__(self, i3status_thread, py3_wrapper):
99 self.i3status_thread = i3status_thread
100 self.timeout_queue_add = py3_wrapper.timeout_queue_add
101 self.notify_user = py3_wrapper.notify_user
102
103 def run(self):
104 # check i3status thread
105 if not self.i3status_thread.is_alive():
106 err = self.i3status_thread.error
107 if not err:
108 err = "I3status died horribly."
109 self.notify_user(err)
110 else:
111 # check again in 5 seconds
112 self.timeout_queue_add(self, int(time.perf_counter()) + 5)
113
114
115 class ModuleRunner(Task):
116 """
117 Starts up a Module
118 """
119
120 def __init__(self, module):
121 self.module = module
122
123 def run(self):
124 self.module.start_module()
125
126
127 class Common:
128 """
129 This class is used to hold core functionality so that it can be shared more
130 easily. This allow us to run the module tests through the same code as
131 when we are running for real.
132 """
133
134 def __init__(self, py3_wrapper):
135 self.py3_wrapper = py3_wrapper
136 self.none_setting = NoneSetting()
137 self.config = py3_wrapper.config
138
139 def get_config_attribute(self, name, attribute):
140 """
141 Look for the attribute in the config. Start with the named module and
142 then walk up through any containing group and then try the general
143 section of the config.
144 """
145
146 # A user can set a param to None in the config to prevent a param
147 # being used. This is important when modules do something like
148 #
149 # color = self.py3.COLOR_MUTED or self.py3.COLOR_BAD
150 config = self.config["py3_config"]
151 param = config[name].get(attribute, self.none_setting)
152 if hasattr(param, "none_setting") and name in config[".module_groups"]:
153 for module in config[".module_groups"][name]:
154 if attribute in config.get(module, {}):
155 param = config[module].get(attribute)
156 break
157 if hasattr(param, "none_setting"):
158 # check py3status config section
159 param = config["py3status"].get(attribute, self.none_setting)
160 if hasattr(param, "none_setting"):
161 # check py3status general section
162 param = config["general"].get(attribute, self.none_setting)
163 if param and (attribute == "color" or attribute.startswith("color_")):
164 # check color value
165 param = expand_color(param.lower(), self.none_setting)
166 return param
167
168 def report_exception(self, msg, notify_user=True, level="error", error_frame=None):
169 """
170 Report details of an exception to the user.
171 This should only be called within an except: block Details of the
172 exception are reported eg filename, line number and exception type.
173
174 Because stack trace information outside of py3status or it's modules is
175 not helpful in actually finding and fixing the error, we try to locate
176 the first place that the exception affected our code.
177
178 Alternatively if the error occurs in a module via a Py3 call that
179 catches and reports the error then we receive an error_frame and use
180 that as the source of the error.
181
182 NOTE: msg should not end in a '.' for consistency.
183 """
184 # Get list of paths that our stack trace should be found in.
185 py3_paths = [Path(__file__).resolve()] + self.config["include_paths"]
186 traceback = None
187
188 try:
189 # We need to make sure to delete tb even if things go wrong.
190 exc_type, exc_obj, tb = sys.exc_info()
191 stack = extract_tb(tb)
192 error_str = f"{exc_type.__name__}: {exc_obj}\n"
193 traceback = [error_str]
194
195 if error_frame:
196 # The error occurred in a py3status module so the traceback
197 # should be made to appear correct. We caught the exception
198 # but make it look as though we did not.
199 traceback += format_stack(error_frame, 1) + format_tb(tb)
200 filename = Path(error_frame.f_code.co_filename).name
201 line_no = error_frame.f_lineno
202 else:
203 # This is a none module based error
204 traceback += format_tb(tb)
205 # Find first relevant trace in the stack.
206 # it should be in py3status or one of it's modules.
207 found = False
208 for item in reversed(stack):
209 filename = item[0]
210 for path in py3_paths:
211 if filename.startswith(path):
212 # Found a good trace
213 filename = item[0].name
214 line_no = item[1]
215 found = True
216 break
217 if found:
218 break
219 # all done! create our message.
220 msg = "{} ({}) {} line {}.".format(
221 msg, exc_type.__name__, filename, line_no
222 )
223 except: # noqa e722
224 # something went wrong report what we can.
225 msg = f"{msg}."
226 finally:
227 # delete tb!
228 del tb
229 # log the exception and notify user
230 self.py3_wrapper.log(msg, "warning")
231 if traceback:
232 # if debug is not in the config then we are at an early stage of
233 # running py3status and logging is not yet available so output the
234 # error to STDERR so it can be seen
235 if "debug" not in self.config:
236 print_stderr("\n".join(traceback))
237 elif self.config.get("log_file"):
238 self.py3_wrapper.log("".join(["Traceback\n"] + traceback))
239 if notify_user:
240 self.py3_wrapper.notify_user(msg, level=level)
241
242
243 class Py3statusWrapper:
244 """
245 This is the py3status wrapper.
246 """
247
248 def __init__(self, options):
249 """
250 Useful variables we'll need.
251 """
252 self.config = vars(options)
253 self.i3bar_running = True
254 self.last_refresh_ts = time.perf_counter()
255 self.lock = Event()
256 self.modules = {}
257 self.notified_messages = set()
258 self.options = options
259 self.output_modules = {}
260 self.py3_modules = []
261 self.running = True
262 self.update_queue = deque()
263 self.update_request = Event()
264
265 # shared code
266 self.common = Common(self)
267 self.get_config_attribute = self.common.get_config_attribute
268 self.report_exception = self.common.report_exception
269
270 # these are used to schedule module updates
271 self.timeout_add_queue = deque()
272 self.timeout_due = None
273 self.timeout_finished = deque()
274 self.timeout_keys = []
275 self.timeout_missed = {}
276 self.timeout_queue = {}
277 self.timeout_queue_lookup = {}
278 self.timeout_running = set()
279 self.timeout_update_due = deque()
280
281 def timeout_queue_add(self, item, cache_time=0):
282 """
283 Add a item to be run at a future time.
284 This must be a Module, I3statusModule or a Task
285 """
286 # add the info to the add queue. We do this so that actually adding
287 # the module is done in the core thread.
288 self.timeout_add_queue.append((item, cache_time))
289 # if the timeout_add_queue is not due to be processed until after this
290 # update request is due then trigger an update now.
291 if self.timeout_due is None or cache_time < self.timeout_due:
292 self.update_request.set()
293
294 def timeout_process_add_queue(self, module, cache_time):
295 """
296 Add a module to the timeout_queue if it is scheduled in the future or
297 if it is due for an update immediately just trigger that.
298
299 the timeout_queue is a dict with the scheduled time as the key and the
300 value is a list of module instance names due to be updated at that
301 point. An ordered list of keys is kept to allow easy checking of when
302 updates are due. A list is also kept of which modules are in the
303 update_queue to save having to search for modules in it unless needed.
304 """
305 # If already set to update do nothing
306 if module in self.timeout_update_due:
307 return
308
309 # remove if already in the queue
310 key = self.timeout_queue_lookup.get(module)
311 if key:
312 queue_item = self.timeout_queue[key]
313 queue_item.remove(module)
314 if not queue_item:
315 del self.timeout_queue[key]
316 self.timeout_keys.remove(key)
317
318 if cache_time == 0:
319 # if cache_time is 0 we can just trigger the module update
320 self.timeout_update_due.append(module)
321 self.timeout_queue_lookup[module] = None
322 else:
323 # add the module to the timeout queue
324 if cache_time not in self.timeout_keys:
325 self.timeout_queue[cache_time] = {module}
326 self.timeout_keys.append(cache_time)
327 # sort keys so earliest is first
328 self.timeout_keys.sort()
329
330 # when is next timeout due?
331 try:
332 self.timeout_due = self.timeout_keys[0]
333 except IndexError:
334 self.timeout_due = None
335 else:
336 self.timeout_queue[cache_time].add(module)
337 # note that the module is in the timeout_queue
338 self.timeout_queue_lookup[module] = cache_time
339
340 def timeout_queue_process(self):
341 """
342 Check the timeout_queue and set any due modules to update.
343 """
344 # process any items that need adding to the queue
345 while self.timeout_add_queue:
346 self.timeout_process_add_queue(*self.timeout_add_queue.popleft())
347 now = time.perf_counter()
348 due_timeouts = []
349 # find any due timeouts
350 for timeout in self.timeout_keys:
351 if timeout > now:
352 break
353 due_timeouts.append(timeout)
354
355 if due_timeouts:
356 # process them
357 for timeout in due_timeouts:
358 modules = self.timeout_queue[timeout]
359 # remove from the queue
360 del self.timeout_queue[timeout]
361 self.timeout_keys.remove(timeout)
362
363 for module in modules:
364 # module no longer in queue
365 del self.timeout_queue_lookup[module]
366 # tell module to update
367 self.timeout_update_due.append(module)
368
369 # when is next timeout due?
370 try:
371 self.timeout_due = self.timeout_keys[0]
372 except IndexError:
373 self.timeout_due = None
374
375 # process any finished modules.
376 # Now that the module has finished running it may have been marked to
377 # be triggered again. This is most likely to happen when events are
378 # being processed and the events are arriving much faster than the
379 # module can handle them. It is important as a module may handle
380 # events but not trigger the module update. If during the event the
381 # module is due to update the update is not actioned but it needs to be
382 # once the events have finished or else the module will no longer
383 # continue to update.
384 while self.timeout_finished:
385 module_name = self.timeout_finished.popleft()
386 self.timeout_running.discard(module_name)
387 if module_name in self.timeout_missed:
388 module = self.timeout_missed.pop(module_name)
389 self.timeout_update_due.append(module)
390
391 # run any modules that are due
392 while self.timeout_update_due:
393 module = self.timeout_update_due.popleft()
394 module_name = getattr(module, "module_full_name", None)
395 # if the module is running then we do not want to trigger it but
396 # instead wait till it has finished running and then trigger
397 if module_name and module_name in self.timeout_running:
398 self.timeout_missed[module_name] = module
399 else:
400 self.timeout_running.add(module_name)
401 Runner(module, self, module_name)
402
403 # we return how long till we next need to process the timeout_queue
404 if self.timeout_due is not None:
405 return self.timeout_due - time.perf_counter()
406
407 def gevent_monkey_patch_report(self):
408 """
409 Report effective gevent monkey patching on the logs.
410 """
411 try:
412 import gevent.socket
413 import socket
414
415 if gevent.socket.socket is socket.socket:
416 self.log("gevent monkey patching is active")
417 return True
418 else:
419 self.notify_user("gevent monkey patching failed.")
420 except ImportError:
421 self.notify_user("gevent is not installed, monkey patching failed.")
422 return False
423
424 def get_user_modules(self):
425 """Mapping from module name to relevant objects.
426
427 There are two ways of discovery and storage:
428 `include_paths` (no installation): include_path, f_name
429 `entry_point` (from installed package): "entry_point", <Py3Status class>
430
431 Modules of the same name from entry points shadow all other modules.
432 """
433 user_modules = self._get_path_based_modules()
434 user_modules.update(self._get_entry_point_based_modules())
435 return user_modules
436
437 def _get_path_based_modules(self):
438 """
439 Search configured include directories for user provided modules.
440
441 user_modules: {
442 'weather_yahoo': ('~/i3/py3status/', 'weather_yahoo.py')
443 }
444 """
445 user_modules = {}
446 for include_path in self.config["include_paths"]:
447 for f_name in sorted(include_path.iterdir()):
448 if f_name.suffix != ".py":
449 continue
450 module_name = f_name.stem
451 # do not overwrite modules if already found
452 if module_name in user_modules:
453 pass
454 user_modules[module_name] = (include_path, f_name)
455 self.log(f"available module from {include_path}: {module_name}")
456 return user_modules
457
458 def _get_entry_point_based_modules(self):
459 classes_from_entry_points = {}
460 for entry_point in pkg_resources.iter_entry_points(ENTRY_POINT_NAME):
461 try:
462 module = entry_point.load()
463 except Exception as err:
464 self.log(f"entry_point '{entry_point}' error: {err}")
465 continue
466 klass = getattr(module, Module.EXPECTED_CLASS, None)
467 if klass:
468 module_name = entry_point.module_name.split(".")[-1]
469 classes_from_entry_points[module_name] = (ENTRY_POINT_KEY, klass)
470 self.log(f"available module from {ENTRY_POINT_KEY}: {module_name}")
471 return classes_from_entry_points
472
473 def get_user_configured_modules(self):
474 """
475 Get a dict of all available and configured py3status modules
476 in the user's i3status.conf.
477
478 As we already have a convenient way of loading the module, we'll
479 populate the map with the Py3Status class right away
480 """
481 user_modules = {}
482 if not self.py3_modules:
483 return user_modules
484 for module_name, module_info in self.get_user_modules().items():
485 for module in self.py3_modules:
486 if module_name == module.split(" ")[0]:
487 source, item = module_info
488 user_modules[module_name] = (source, item)
489 return user_modules
490
491 def load_modules(self, modules_list, user_modules):
492 """
493 Load the given modules from the list (contains instance name) with
494 respect to the user provided modules dict.
495
496 modules_list: ['weather_yahoo paris', 'pewpew', 'net_rate']
497 user_modules: {
498 'weather_yahoo': ('/etc/py3status.d/', 'weather_yahoo.py'),
499 'pewpew': ('entry_point', <Py3Status class>),
500 }
501 """
502 for module in modules_list:
503 # ignore already provided modules (prevents double inclusion)
504 if module in self.modules:
505 continue
506 try:
507 instance = None
508 payload = user_modules.get(module)
509 if payload:
510 kind, Klass = payload
511 if kind == ENTRY_POINT_KEY:
512 instance = Klass()
513 my_m = Module(module, user_modules, self, instance=instance)
514 # only handle modules with available methods
515 if my_m.methods:
516 self.modules[module] = my_m
517 elif self.config["debug"]:
518 self.log(f'ignoring module "{module}" (no methods found)')
519 except Exception:
520 err = sys.exc_info()[1]
521 msg = f'Loading module "{module}" failed ({err}).'
522 self.report_exception(msg, level="warning")
523
524 def setup(self):
525 """
526 Setup py3status and spawn i3status/events/modules threads.
527 """
528
529 # SIGTSTP will be received from i3bar indicating that all output should
530 # stop and we should consider py3status suspended. It is however
531 # important that any processes using i3 ipc should continue to receive
532 # those events otherwise it can lead to a stall in i3.
533 signal(SIGTSTP, self.i3bar_stop)
534 # SIGCONT indicates output should be resumed.
535 signal(SIGCONT, self.i3bar_start)
536
537 # log py3status and python versions
538 self.log("=" * 8)
539 msg = "Starting py3status version {version} python {python_version}"
540 self.log(msg.format(**self.config))
541
542 try:
543 # if running from git then log the branch and last commit
544 # we do this by looking in the .git directory
545 git_path = Path(__file__).resolve().parent.parent / ".git"
546 # branch
547 with (git_path / "HEAD").open() as f:
548 out = f.readline()
549 branch = "/".join(out.strip().split("/")[2:])
550 self.log(f"git branch: {branch}")
551 # last commit
552 log_path = git_path / "logs" / "refs" / "heads" / branch
553 with log_path.open() as f:
554 out = f.readlines()[-1]
555 sha = out.split(" ")[1][:7]
556 msg = ":".join(out.strip().split("\t")[-1].split(":")[1:])
557 self.log(f"git commit: {sha}{msg}")
558 except: # noqa e722
559 pass
560
561 self.log("window manager: {}".format(self.config["wm_name"]))
562
563 if self.config["debug"]:
564 self.log(f"py3status started with config {self.config}")
565
566 if self.config["gevent"]:
567 self.is_gevent = self.gevent_monkey_patch_report()
568 else:
569 self.is_gevent = False
570
571 # read i3status.conf
572 config_path = self.config["i3status_config_path"]
573 self.log("config file: {}".format(self.config["i3status_config_path"]))
574 self.config["py3_config"] = process_config(config_path, self)
575
576 # read resources
577 if "resources" in str(self.config["py3_config"].values()):
578 from subprocess import check_output
579
580 resources = check_output(["xrdb", "-query"]).decode().splitlines()
581 self.config["resources"] = {
582 k: v.strip() for k, v in (x.split(":", 1) for x in resources)
583 }
584
585 # setup i3status thread
586 self.i3status_thread = I3status(self)
587
588 # If standalone or no i3status modules then use the mock i3status
589 # else start i3status thread.
590 i3s_modules = self.config["py3_config"]["i3s_modules"]
591 if self.config["standalone"] or not i3s_modules:
592 self.i3status_thread.mock()
593 i3s_mode = "mocked"
594 else:
595 for module in i3s_modules:
596 self.log(f"adding module {module}")
597 i3s_mode = "started"
598 self.i3status_thread.start()
599 while not self.i3status_thread.ready:
600 if not self.i3status_thread.is_alive():
601 # i3status is having a bad day, so tell the user what went
602 # wrong and do the best we can with just py3status modules.
603 err = self.i3status_thread.error
604 self.notify_user(err)
605 self.i3status_thread.mock()
606 i3s_mode = "mocked"
607 break
608 time.sleep(0.1)
609 if self.config["debug"]:
610 self.log(
611 "i3status thread {} with config {}".format(
612 i3s_mode, self.config["py3_config"]
613 )
614 )
615
616 # add i3status thread monitoring task
617 if i3s_mode == "started":
618 task = CheckI3StatusThread(self.i3status_thread, self)
619 self.timeout_queue_add(task)
620
621 # setup input events thread
622 self.events_thread = Events(self)
623 self.events_thread.daemon = True
624 self.events_thread.start()
625 if self.config["debug"]:
626 self.log("events thread started")
627
628 # initialise the command server
629 self.commands_thread = CommandServer(self)
630 self.commands_thread.daemon = True
631 self.commands_thread.start()
632 if self.config["debug"]:
633 self.log("commands thread started")
634
635 # initialize the udev monitor (lazy)
636 self.udev_monitor = UdevMonitor(self)
637
638 # suppress modules' output wrt issue #20
639 if not self.config["debug"]:
640 sys.stdout = Path("/dev/null").open("w")
641 sys.stderr = Path("/dev/null").open("w")
642
643 # get the list of py3status configured modules
644 self.py3_modules = self.config["py3_config"]["py3_modules"]
645
646 # get a dict of all user provided modules
647 self.log("modules include paths: {}".format(self.config["include_paths"]))
648 user_modules = self.get_user_configured_modules()
649 if self.config["debug"]:
650 self.log(f"user_modules={user_modules}")
651
652 if self.py3_modules:
653 # load and spawn i3status.conf configured modules threads
654 self.load_modules(self.py3_modules, user_modules)
655
656 def notify_user(
657 self,
658 msg,
659 level="error",
660 rate_limit=None,
661 module_name="",
662 icon=None,
663 title="py3status",
664 ):
665 """
666 Display notification to user via i3-nagbar or send-notify
667 We also make sure to log anything to keep trace of it.
668
669 NOTE: Message should end with a '.' for consistency.
670 """
671 dbus = self.config.get("dbus_notify")
672 if dbus:
673 # force msg, icon, title to be a string
674 title = f"{title}"
675 msg = f"{msg}"
676 if icon:
677 icon = f"{icon}"
678 else:
679 msg = f"py3status: {msg}"
680 if level != "info" and module_name == "":
681 fix_msg = "{} Please try to fix this and reload i3wm (Mod+Shift+R)"
682 msg = fix_msg.format(msg)
683 # Rate limiting. If rate limiting then we need to calculate the time
684 # period for which the message should not be repeated. We just use
685 # A simple chunked time model where a message cannot be repeated in a
686 # given time period. Messages can be repeated more frequently but must
687 # be in different time periods.
688
689 limit_key = ""
690 if rate_limit:
691 try:
692 limit_key = time.perf_counter() // rate_limit
693 except TypeError:
694 pass
695 # We use a hash to see if the message is being repeated. This is crude
696 # and imperfect but should work for our needs.
697 msg_hash = hash(f"{module_name}#{limit_key}#{msg}#{title}")
698 if msg_hash in self.notified_messages:
699 return
700 elif module_name:
701 log_msg = 'Module `{}` sent a notification. "{}: {}"'.format(
702 module_name, title, msg
703 )
704 self.log(log_msg, level)
705 else:
706 self.log(msg, level)
707 self.notified_messages.add(msg_hash)
708
709 try:
710 if dbus:
711 # fix any html entities
712 msg = msg.replace("&", "&")
713 msg = msg.replace("<", "<")
714 msg = msg.replace(">", ">")
715 cmd = ["notify-send"]
716 if icon:
717 cmd += ["-i", icon]
718 cmd += ["-u", DBUS_LEVELS.get(level, "normal"), "-t", "10000"]
719 cmd += [title, msg]
720 else:
721 py3_config = self.config.get("py3_config", {})
722 nagbar_font = py3_config.get("py3status", {}).get("nagbar_font")
723 wm_nag = self.config["wm"]["nag"]
724 cmd = [wm_nag, "-m", msg, "-t", level]
725 if nagbar_font:
726 cmd += ["-f", nagbar_font]
727 Popen(
728 cmd,
729 stdout=Path("/dev/null").open("w"),
730 stderr=Path("/dev/null").open("w"),
731 )
732 except Exception as err:
733 self.log(f"notify_user error: {err}")
734
735 def stop(self):
736 """
737 Set the Event lock, this will break all threads' loops.
738 """
739 self.running = False
740 # stop the command server
741 try:
742 self.commands_thread.kill()
743 except: # noqa e722
744 pass
745
746 try:
747 self.lock.set()
748 if self.config["debug"]:
749 self.log("lock set, exiting")
750 # run kill() method on all py3status modules
751 for module in self.modules.values():
752 module.kill()
753 except: # noqa e722
754 pass
755
756 def refresh_modules(self, module_string=None, exact=True):
757 """
758 Update modules.
759 if module_string is None all modules are refreshed
760 if module_string then modules with the exact name or those starting
761 with the given string depending on exact parameter will be refreshed.
762 If a module is an i3status one then we refresh i3status.
763 To prevent abuse, we rate limit this function to 100ms for full
764 refreshes.
765 """
766 if not module_string:
767 if time.perf_counter() > (self.last_refresh_ts + 0.1):
768 self.last_refresh_ts = time.perf_counter()
769 else:
770 # rate limiting
771 return
772 update_i3status = False
773 for name, module in self.output_modules.items():
774 if (
775 module_string is None
776 or (exact and name == module_string)
777 or (not exact and name.startswith(module_string))
778 ):
779 if module["type"] == "py3status":
780 if self.config["debug"]:
781 self.log(f"refresh py3status module {name}")
782 module["module"].force_update()
783 else:
784 if self.config["debug"]:
785 self.log(f"refresh i3status module {name}")
786 update_i3status = True
787 if update_i3status:
788 self.i3status_thread.refresh_i3status()
789
790 def sig_handler(self, signum, frame):
791 """
792 SIGUSR1 was received, the user asks for an immediate refresh of the bar
793 """
794 self.log("received USR1")
795 self.refresh_modules()
796
797 def terminate(self, signum, frame):
798 """
799 Received request to terminate (SIGTERM), exit nicely.
800 """
801 self.log("received SIGTERM")
802 raise KeyboardInterrupt()
803
804 def purge_module(self, module_name):
805 """
806 A module has been removed e.g. a module that had an error.
807 We need to find any containers and remove the module from them.
808 """
809 containers = self.config["py3_config"][".module_groups"]
810 containers_to_update = set()
811 if module_name in containers:
812 containers_to_update.update(set(containers[module_name]))
813 for container in containers_to_update:
814 try:
815 self.modules[container].module_class.items.remove(module_name)
816 except ValueError:
817 pass
818
819 def notify_update(self, update, urgent=False):
820 """
821 Name or list of names of modules that have updated.
822 """
823 if not isinstance(update, list):
824 update = [update]
825 self.update_queue.extend(update)
826
827 # find containers that use the modules that updated
828 containers = self.config["py3_config"][".module_groups"]
829 containers_to_update = set()
830 for item in update:
831 if item in containers:
832 containers_to_update.update(set(containers[item]))
833 # force containers to update
834 for container in containers_to_update:
835 container_module = self.output_modules.get(container)
836 if container_module:
837 # If the container registered a urgent_function then call it
838 # if this update is urgent.
839 if urgent and container_module.get("urgent_function"):
840 container_module["urgent_function"](update)
841 # If a container has registered a content_function we use that
842 # to see if the container needs to be updated.
843 # We only need to update containers if their active content has
844 # changed.
845 if container_module.get("content_function"):
846 if set(update) & container_module["content_function"]():
847 container_module["module"].force_update()
848 else:
849 # we don't know so just update.
850 container_module["module"].force_update()
851
852 # we need to update the output
853 if self.update_queue:
854 self.update_request.set()
855
856 def log(self, msg, level="info"):
857 """
858 log this information to syslog or user provided logfile.
859 """
860 if not self.config.get("log_file"):
861 # If level was given as a str then convert to actual level
862 level = LOG_LEVELS.get(level, level)
863 syslog(level, f"{msg}")
864 else:
865 # Binary mode so fs encoding setting is not an issue
866 with self.config["log_file"].open("ab") as f:
867 log_time = time.strftime("%Y-%m-%d %H:%M:%S")
868 # nice formatting of data structures using pretty print
869 if isinstance(msg, (dict, list, set, tuple)):
870 msg = pformat(msg)
871 # if multiline then start the data output on a fresh line
872 # to aid readability.
873 if "\n" in msg:
874 msg = "\n" + msg
875 out = f"{log_time} {level.upper()} {msg}\n"
876 try:
877 # Encode unicode strings to bytes
878 f.write(out.encode("utf-8"))
879 except (AttributeError, UnicodeDecodeError):
880 # Write any byte strings straight to log
881 f.write(out)
882
883 def create_output_modules(self):
884 """
885 Setup our output modules to allow easy updating of py3modules and
886 i3status modules allows the same module to be used multiple times.
887 """
888 py3_config = self.config["py3_config"]
889 i3modules = self.i3status_thread.i3modules
890 output_modules = self.output_modules
891 # position in the bar of the modules
892 positions = {}
893 for index, name in enumerate(py3_config["order"]):
894 if name not in positions:
895 positions[name] = []
896 positions[name].append(index)
897
898 # py3status modules
899 for name in self.modules:
900 if name not in output_modules:
901 output_modules[name] = {}
902 output_modules[name]["position"] = positions.get(name, [])
903 output_modules[name]["module"] = self.modules[name]
904 output_modules[name]["type"] = "py3status"
905 output_modules[name]["color"] = self.mappings_color.get(name)
906 # i3status modules
907 for name in i3modules:
908 if name not in output_modules:
909 output_modules[name] = {}
910 output_modules[name]["position"] = positions.get(name, [])
911 output_modules[name]["module"] = i3modules[name]
912 output_modules[name]["type"] = "i3status"
913 output_modules[name]["color"] = self.mappings_color.get(name)
914
915 self.output_modules = output_modules
916
917 def create_mappings(self, config):
918 """
919 Create any mappings needed for global substitutions eg. colors
920 """
921 mappings = {}
922 for name, cfg in config.items():
923 # Ignore special config sections.
924 if name in CONFIG_SPECIAL_SECTIONS:
925 continue
926 color = self.get_config_attribute(name, "color")
927 if hasattr(color, "none_setting"):
928 color = None
929 mappings[name] = color
930 # Store mappings for later use.
931 self.mappings_color = mappings
932
933 def process_module_output(self, module):
934 """
935 Process the output for a module and return a json string representing it.
936 Color processing occurs here.
937 """
938 outputs = module["module"].get_latest()
939 if self.config["py3_config"]["general"].get("colors") is False:
940 for output in outputs:
941 output.pop("color", None)
942 else:
943 color = module["color"]
944 if color:
945 for output in outputs:
946 # Color: substitute the config defined color
947 if "color" not in output:
948 output["color"] = color
949 # Create the json string output.
950 return ",".join(dumps(x) for x in outputs)
951
952 def i3bar_stop(self, signum, frame):
953 self.log("received SIGTSTP")
954 self.i3bar_running = False
955 # i3status should be stopped
956 self.i3status_thread.suspend_i3status()
957 self.sleep_modules()
958
959 def i3bar_start(self, signum, frame):
960 self.log("received SIGCONT")
961 self.i3bar_running = True
962 self.wake_modules()
963
964 def sleep_modules(self):
965 # Put all py3modules to sleep so they stop updating
966 for module in self.output_modules.values():
967 if module["type"] == "py3status":
968 module["module"].sleep()
969
970 def wake_modules(self):
971 # Wake up all py3modules.
972 for module in self.output_modules.values():
973 if module["type"] == "py3status":
974 module["module"].wake()
975
976 @profile
977 def run(self):
978 """
979 Main py3status loop, continuously read from i3status and modules
980 and output it to i3bar for displaying.
981 """
982 # SIGUSR1 forces a refresh of the bar both for py3status and i3status,
983 # this mimics the USR1 signal handling of i3status (see man i3status)
984 signal(SIGUSR1, self.sig_handler)
985 signal(SIGTERM, self.terminate)
986
987 # initialize usage variables
988 py3_config = self.config["py3_config"]
989
990 # prepare the color mappings
991 self.create_mappings(py3_config)
992
993 # self.output_modules needs to have been created before modules are
994 # started. This is so that modules can do things like register their
995 # content_function.
996 self.create_output_modules()
997
998 # start up all our modules
999 for module in self.modules.values():
1000 task = ModuleRunner(module)
1001 self.timeout_queue_add(task)
1002
1003 # this will be our output set to the correct length for the number of
1004 # items in the bar
1005 output = [None] * len(py3_config["order"])
1006
1007 write = sys.__stdout__.write
1008 flush = sys.__stdout__.flush
1009
1010 # start our output
1011 header = {
1012 "version": 1,
1013 "click_events": self.config["click_events"],
1014 "stop_signal": SIGTSTP,
1015 }
1016 write(dumps(header))
1017 write("\n[[]\n")
1018
1019 update_due = None
1020 # main loop
1021 while True:
1022 # process the timeout_queue and get interval till next update due
1023 update_due = self.timeout_queue_process()
1024
1025 # wait until an update is requested
1026 if self.update_request.wait(timeout=update_due):
1027 # event was set so clear it
1028 self.update_request.clear()
1029
1030 while not self.i3bar_running:
1031 time.sleep(0.1)
1032
1033 # check if an update is needed
1034 if self.update_queue:
1035 while len(self.update_queue):
1036 module_name = self.update_queue.popleft()
1037 module = self.output_modules[module_name]
1038 out = self.process_module_output(module)
1039
1040 for index in module["position"]:
1041 # store the output as json
1042 output[index] = out
1043
1044 # build output string
1045 out = ",".join(x for x in output if x)
1046 # dump the line to stdout
1047 write(f",[{out}]\n")
1048 flush()
1049
```
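For reference, the two discovery methods above store differently shaped values under the same bare-name keys, as their docstrings describe. A small illustrative sketch with made-up paths and names (not part of the file above):

```python
from pathlib import Path


class Py3status:  # stand-in for a class exposed through a py3status entry point
    pass


user_modules = {}

# path-based discovery stores (include_path, file_name), both pathlib.Path objects
include_path = Path("~/.i3/py3status").expanduser()
user_modules["weather_yahoo"] = (include_path, include_path / "weather_yahoo.py")

# entry-point discovery stores ("entry_point", <Py3Status class>) and shadows
# any path-based entry of the same name
user_modules["http_monitor"] = ("entry_point", Py3status)

for name, (source, item) in user_modules.items():
    print(f"{name}: {source} -> {item}")
```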
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/py3status/core.py b/py3status/core.py
--- a/py3status/core.py
+++ b/py3status/core.py
@@ -505,7 +505,7 @@
continue
try:
instance = None
- payload = user_modules.get(module)
+ payload = user_modules.get(module.split(" ")[0])
if payload:
kind, Klass = payload
if kind == ENTRY_POINT_KEY:
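The patched lookup normalises the configured name to its bare module name before consulting `user_modules`, so each configured instance of an entry-point module resolves to its class. A rough sketch of the effect, with a hypothetical stand-in class and the instance names from the issue:

```python
# Hypothetical check of the patched lookup: each configured instance of an
# entry-point module now resolves to the class and gets its own object.
class Py3status:
    pass


user_modules = {"http_monitor": ("entry_point", Py3status)}

instances = {}
for module in ["http_monitor apache", "http_monitor medusa"]:
    kind, Klass = user_modules.get(module.split(" ")[0])  # patched lookup
    if kind == "entry_point":
        instances[module] = Klass()

print(instances)  # two distinct Py3status objects, one per configured instance
```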
| {"golden_diff": "diff --git a/py3status/core.py b/py3status/core.py\n--- a/py3status/core.py\n+++ b/py3status/core.py\n@@ -505,7 +505,7 @@\n continue\n try:\n instance = None\n- payload = user_modules.get(module)\n+ payload = user_modules.get(module.split(\" \")[0])\n if payload:\n kind, Klass = payload\n if kind == ENTRY_POINT_KEY:\n", "issue": "Importing module from entry_point doesn't work the same as importing from .i3/py3status/\nI've written a small http monitoring module that I want to call multiple times. This works if I put the module in `~/.i3/py3status`.\r\n\r\nWith a config block like: \r\n```\r\n...\r\norder += \"http_monitor apache\"\r\norder += \"http_monitor medusa\"\r\n\r\nhttp_monitor 'apache' {\r\n service_location = \"http://host:81\"\r\n service_name = '\ud83e\udeb6'\r\n}\r\n\r\nhttp_monitor 'medusa' {\r\n service_location = \"http://host:8081\"\r\n service_name = '\ud83d\udc0d'\r\n}\r\n...\r\n```\r\n\r\nWorking from `~/.i3/py3status/` the py3status log.\r\n```\r\n2021-03-06 22:38:53 INFO modules include paths: [PosixPath('/home/j/.i3/py3status')]\r\n2021-03-06 22:38:53 INFO available module from /home/j/.i3/py3status: http_monitor\r\n2021-03-06 22:38:53 INFO loading module \"http_monitor apache\" from /home/j/.i3/py3status/http_monitor.py\r\n2021-03-06 22:38:53 INFO loading module \"http_monitor medusa\" from /home/j/.i3/py3status/http_monitor.py\r\n```\r\n\r\nSo this method has been working correctly for quite some time for me. \r\n\r\nHowever I wanted to package this as an Arch package AUR, and to install packages with `pacman` generally it's best practice to never put any files in the users home directory. \r\n\r\nSo I figured I'd just convert my module to use the `entry_point` since this has worked for some of the other modules I've written for py3status and built Arch packages for. But I'm getting an error trying to pass it parameters when importing it this way.\r\n\r\n```\r\n2021-03-06 22:56:33 INFO available module from entry_point: http_monitor\r\n2021-03-06 22:56:33 INFO Module `http_monitor apache` could not be loaded (unsupported operand type(s) for /: 'PosixPath' and 'type')\r\n2021-03-06 22:56:33 INFO unsupported operand type(s) for /: 'PosixPath' and 'type'\r\n2021-03-06 22:56:33 INFO Module `http_monitor medusa` could not be loaded (unsupported operand type(s) for /: 'PosixPath' and 'type')\r\n2021-03-06 22:56:33 INFO unsupported operand type(s) for /: 'PosixPath' and 'type'\r\n```\r\n\r\nThe module works correctly if I don't pass it a parameter using the `entry_point`, but then I can only have 1 instance of it running. 
\r\n\r\nAny ideas \ud83d\udcad \n", "before_files": [{"content": "import pkg_resources\nimport sys\nimport time\n\nfrom collections import deque\nfrom json import dumps\nfrom pathlib import Path\nfrom pprint import pformat\nfrom signal import signal, SIGTERM, SIGUSR1, SIGTSTP, SIGCONT\nfrom subprocess import Popen\nfrom threading import Event, Thread\nfrom syslog import syslog, LOG_ERR, LOG_INFO, LOG_WARNING\nfrom traceback import extract_tb, format_tb, format_stack\n\nfrom py3status.command import CommandServer\nfrom py3status.events import Events\nfrom py3status.formatter import expand_color\nfrom py3status.helpers import print_stderr\nfrom py3status.i3status import I3status\nfrom py3status.parse_config import process_config\nfrom py3status.module import Module\nfrom py3status.profiling import profile\nfrom py3status.udev_monitor import UdevMonitor\n\nLOG_LEVELS = {\"error\": LOG_ERR, \"warning\": LOG_WARNING, \"info\": LOG_INFO}\n\nDBUS_LEVELS = {\"error\": \"critical\", \"warning\": \"normal\", \"info\": \"low\"}\n\nCONFIG_SPECIAL_SECTIONS = [\n \".group_extras\",\n \".module_groups\",\n \"general\",\n \"i3s_modules\",\n \"on_click\",\n \"order\",\n \"py3_modules\",\n \"py3status\",\n]\n\nENTRY_POINT_NAME = \"py3status\"\nENTRY_POINT_KEY = \"entry_point\"\n\n\nclass Runner(Thread):\n \"\"\"\n A Simple helper to run a module in a Thread so it is non-locking.\n \"\"\"\n\n def __init__(self, module, py3_wrapper, module_name):\n Thread.__init__(self)\n self.daemon = True\n self.module = module\n self.module_name = module_name\n self.py3_wrapper = py3_wrapper\n self.start()\n\n def run(self):\n try:\n self.module.run()\n except: # noqa e722\n self.py3_wrapper.report_exception(\"Runner\")\n # the module is no longer running so notify the timeout logic\n if self.module_name:\n self.py3_wrapper.timeout_finished.append(self.module_name)\n\n\nclass NoneSetting:\n \"\"\"\n This class represents no setting in the config.\n \"\"\"\n\n # this attribute is used to identify that this is a none setting\n none_setting = True\n\n def __len__(self):\n return 0\n\n def __repr__(self):\n # this is for output via module_test\n return \"None\"\n\n\nclass Task:\n \"\"\"\n A simple task that can be run by the scheduler.\n \"\"\"\n\n def run(self):\n # F901 'raise NotImplemented' should be 'raise NotImplementedError'\n raise NotImplemented() # noqa f901\n\n\nclass CheckI3StatusThread(Task):\n \"\"\"\n Checks that the i3status thread is alive\n \"\"\"\n\n def __init__(self, i3status_thread, py3_wrapper):\n self.i3status_thread = i3status_thread\n self.timeout_queue_add = py3_wrapper.timeout_queue_add\n self.notify_user = py3_wrapper.notify_user\n\n def run(self):\n # check i3status thread\n if not self.i3status_thread.is_alive():\n err = self.i3status_thread.error\n if not err:\n err = \"I3status died horribly.\"\n self.notify_user(err)\n else:\n # check again in 5 seconds\n self.timeout_queue_add(self, int(time.perf_counter()) + 5)\n\n\nclass ModuleRunner(Task):\n \"\"\"\n Starts up a Module\n \"\"\"\n\n def __init__(self, module):\n self.module = module\n\n def run(self):\n self.module.start_module()\n\n\nclass Common:\n \"\"\"\n This class is used to hold core functionality so that it can be shared more\n easily. 
This allow us to run the module tests through the same code as\n when we are running for real.\n \"\"\"\n\n def __init__(self, py3_wrapper):\n self.py3_wrapper = py3_wrapper\n self.none_setting = NoneSetting()\n self.config = py3_wrapper.config\n\n def get_config_attribute(self, name, attribute):\n \"\"\"\n Look for the attribute in the config. Start with the named module and\n then walk up through any containing group and then try the general\n section of the config.\n \"\"\"\n\n # A user can set a param to None in the config to prevent a param\n # being used. This is important when modules do something like\n #\n # color = self.py3.COLOR_MUTED or self.py3.COLOR_BAD\n config = self.config[\"py3_config\"]\n param = config[name].get(attribute, self.none_setting)\n if hasattr(param, \"none_setting\") and name in config[\".module_groups\"]:\n for module in config[\".module_groups\"][name]:\n if attribute in config.get(module, {}):\n param = config[module].get(attribute)\n break\n if hasattr(param, \"none_setting\"):\n # check py3status config section\n param = config[\"py3status\"].get(attribute, self.none_setting)\n if hasattr(param, \"none_setting\"):\n # check py3status general section\n param = config[\"general\"].get(attribute, self.none_setting)\n if param and (attribute == \"color\" or attribute.startswith(\"color_\")):\n # check color value\n param = expand_color(param.lower(), self.none_setting)\n return param\n\n def report_exception(self, msg, notify_user=True, level=\"error\", error_frame=None):\n \"\"\"\n Report details of an exception to the user.\n This should only be called within an except: block Details of the\n exception are reported eg filename, line number and exception type.\n\n Because stack trace information outside of py3status or it's modules is\n not helpful in actually finding and fixing the error, we try to locate\n the first place that the exception affected our code.\n\n Alternatively if the error occurs in a module via a Py3 call that\n catches and reports the error then we receive an error_frame and use\n that as the source of the error.\n\n NOTE: msg should not end in a '.' for consistency.\n \"\"\"\n # Get list of paths that our stack trace should be found in.\n py3_paths = [Path(__file__).resolve()] + self.config[\"include_paths\"]\n traceback = None\n\n try:\n # We need to make sure to delete tb even if things go wrong.\n exc_type, exc_obj, tb = sys.exc_info()\n stack = extract_tb(tb)\n error_str = f\"{exc_type.__name__}: {exc_obj}\\n\"\n traceback = [error_str]\n\n if error_frame:\n # The error occurred in a py3status module so the traceback\n # should be made to appear correct. We caught the exception\n # but make it look as though we did not.\n traceback += format_stack(error_frame, 1) + format_tb(tb)\n filename = Path(error_frame.f_code.co_filename).name\n line_no = error_frame.f_lineno\n else:\n # This is a none module based error\n traceback += format_tb(tb)\n # Find first relevant trace in the stack.\n # it should be in py3status or one of it's modules.\n found = False\n for item in reversed(stack):\n filename = item[0]\n for path in py3_paths:\n if filename.startswith(path):\n # Found a good trace\n filename = item[0].name\n line_no = item[1]\n found = True\n break\n if found:\n break\n # all done! 
create our message.\n msg = \"{} ({}) {} line {}.\".format(\n msg, exc_type.__name__, filename, line_no\n )\n except: # noqa e722\n # something went wrong report what we can.\n msg = f\"{msg}.\"\n finally:\n # delete tb!\n del tb\n # log the exception and notify user\n self.py3_wrapper.log(msg, \"warning\")\n if traceback:\n # if debug is not in the config then we are at an early stage of\n # running py3status and logging is not yet available so output the\n # error to STDERR so it can be seen\n if \"debug\" not in self.config:\n print_stderr(\"\\n\".join(traceback))\n elif self.config.get(\"log_file\"):\n self.py3_wrapper.log(\"\".join([\"Traceback\\n\"] + traceback))\n if notify_user:\n self.py3_wrapper.notify_user(msg, level=level)\n\n\nclass Py3statusWrapper:\n \"\"\"\n This is the py3status wrapper.\n \"\"\"\n\n def __init__(self, options):\n \"\"\"\n Useful variables we'll need.\n \"\"\"\n self.config = vars(options)\n self.i3bar_running = True\n self.last_refresh_ts = time.perf_counter()\n self.lock = Event()\n self.modules = {}\n self.notified_messages = set()\n self.options = options\n self.output_modules = {}\n self.py3_modules = []\n self.running = True\n self.update_queue = deque()\n self.update_request = Event()\n\n # shared code\n self.common = Common(self)\n self.get_config_attribute = self.common.get_config_attribute\n self.report_exception = self.common.report_exception\n\n # these are used to schedule module updates\n self.timeout_add_queue = deque()\n self.timeout_due = None\n self.timeout_finished = deque()\n self.timeout_keys = []\n self.timeout_missed = {}\n self.timeout_queue = {}\n self.timeout_queue_lookup = {}\n self.timeout_running = set()\n self.timeout_update_due = deque()\n\n def timeout_queue_add(self, item, cache_time=0):\n \"\"\"\n Add a item to be run at a future time.\n This must be a Module, I3statusModule or a Task\n \"\"\"\n # add the info to the add queue. We do this so that actually adding\n # the module is done in the core thread.\n self.timeout_add_queue.append((item, cache_time))\n # if the timeout_add_queue is not due to be processed until after this\n # update request is due then trigger an update now.\n if self.timeout_due is None or cache_time < self.timeout_due:\n self.update_request.set()\n\n def timeout_process_add_queue(self, module, cache_time):\n \"\"\"\n Add a module to the timeout_queue if it is scheduled in the future or\n if it is due for an update immediately just trigger that.\n\n the timeout_queue is a dict with the scheduled time as the key and the\n value is a list of module instance names due to be updated at that\n point. An ordered list of keys is kept to allow easy checking of when\n updates are due. 
A list is also kept of which modules are in the\n update_queue to save having to search for modules in it unless needed.\n \"\"\"\n # If already set to update do nothing\n if module in self.timeout_update_due:\n return\n\n # remove if already in the queue\n key = self.timeout_queue_lookup.get(module)\n if key:\n queue_item = self.timeout_queue[key]\n queue_item.remove(module)\n if not queue_item:\n del self.timeout_queue[key]\n self.timeout_keys.remove(key)\n\n if cache_time == 0:\n # if cache_time is 0 we can just trigger the module update\n self.timeout_update_due.append(module)\n self.timeout_queue_lookup[module] = None\n else:\n # add the module to the timeout queue\n if cache_time not in self.timeout_keys:\n self.timeout_queue[cache_time] = {module}\n self.timeout_keys.append(cache_time)\n # sort keys so earliest is first\n self.timeout_keys.sort()\n\n # when is next timeout due?\n try:\n self.timeout_due = self.timeout_keys[0]\n except IndexError:\n self.timeout_due = None\n else:\n self.timeout_queue[cache_time].add(module)\n # note that the module is in the timeout_queue\n self.timeout_queue_lookup[module] = cache_time\n\n def timeout_queue_process(self):\n \"\"\"\n Check the timeout_queue and set any due modules to update.\n \"\"\"\n # process any items that need adding to the queue\n while self.timeout_add_queue:\n self.timeout_process_add_queue(*self.timeout_add_queue.popleft())\n now = time.perf_counter()\n due_timeouts = []\n # find any due timeouts\n for timeout in self.timeout_keys:\n if timeout > now:\n break\n due_timeouts.append(timeout)\n\n if due_timeouts:\n # process them\n for timeout in due_timeouts:\n modules = self.timeout_queue[timeout]\n # remove from the queue\n del self.timeout_queue[timeout]\n self.timeout_keys.remove(timeout)\n\n for module in modules:\n # module no longer in queue\n del self.timeout_queue_lookup[module]\n # tell module to update\n self.timeout_update_due.append(module)\n\n # when is next timeout due?\n try:\n self.timeout_due = self.timeout_keys[0]\n except IndexError:\n self.timeout_due = None\n\n # process any finished modules.\n # Now that the module has finished running it may have been marked to\n # be triggered again. This is most likely to happen when events are\n # being processed and the events are arriving much faster than the\n # module can handle them. It is important as a module may handle\n # events but not trigger the module update. 
If during the event the\n # module is due to update the update is not actioned but it needs to be\n # once the events have finished or else the module will no longer\n # continue to update.\n while self.timeout_finished:\n module_name = self.timeout_finished.popleft()\n self.timeout_running.discard(module_name)\n if module_name in self.timeout_missed:\n module = self.timeout_missed.pop(module_name)\n self.timeout_update_due.append(module)\n\n # run any modules that are due\n while self.timeout_update_due:\n module = self.timeout_update_due.popleft()\n module_name = getattr(module, \"module_full_name\", None)\n # if the module is running then we do not want to trigger it but\n # instead wait till it has finished running and then trigger\n if module_name and module_name in self.timeout_running:\n self.timeout_missed[module_name] = module\n else:\n self.timeout_running.add(module_name)\n Runner(module, self, module_name)\n\n # we return how long till we next need to process the timeout_queue\n if self.timeout_due is not None:\n return self.timeout_due - time.perf_counter()\n\n def gevent_monkey_patch_report(self):\n \"\"\"\n Report effective gevent monkey patching on the logs.\n \"\"\"\n try:\n import gevent.socket\n import socket\n\n if gevent.socket.socket is socket.socket:\n self.log(\"gevent monkey patching is active\")\n return True\n else:\n self.notify_user(\"gevent monkey patching failed.\")\n except ImportError:\n self.notify_user(\"gevent is not installed, monkey patching failed.\")\n return False\n\n def get_user_modules(self):\n \"\"\"Mapping from module name to relevant objects.\n\n There are two ways of discovery and storage:\n `include_paths` (no installation): include_path, f_name\n `entry_point` (from installed package): \"entry_point\", <Py3Status class>\n\n Modules of the same name from entry points shadow all other modules.\n \"\"\"\n user_modules = self._get_path_based_modules()\n user_modules.update(self._get_entry_point_based_modules())\n return user_modules\n\n def _get_path_based_modules(self):\n \"\"\"\n Search configured include directories for user provided modules.\n\n user_modules: {\n 'weather_yahoo': ('~/i3/py3status/', 'weather_yahoo.py')\n }\n \"\"\"\n user_modules = {}\n for include_path in self.config[\"include_paths\"]:\n for f_name in sorted(include_path.iterdir()):\n if f_name.suffix != \".py\":\n continue\n module_name = f_name.stem\n # do not overwrite modules if already found\n if module_name in user_modules:\n pass\n user_modules[module_name] = (include_path, f_name)\n self.log(f\"available module from {include_path}: {module_name}\")\n return user_modules\n\n def _get_entry_point_based_modules(self):\n classes_from_entry_points = {}\n for entry_point in pkg_resources.iter_entry_points(ENTRY_POINT_NAME):\n try:\n module = entry_point.load()\n except Exception as err:\n self.log(f\"entry_point '{entry_point}' error: {err}\")\n continue\n klass = getattr(module, Module.EXPECTED_CLASS, None)\n if klass:\n module_name = entry_point.module_name.split(\".\")[-1]\n classes_from_entry_points[module_name] = (ENTRY_POINT_KEY, klass)\n self.log(f\"available module from {ENTRY_POINT_KEY}: {module_name}\")\n return classes_from_entry_points\n\n def get_user_configured_modules(self):\n \"\"\"\n Get a dict of all available and configured py3status modules\n in the user's i3status.conf.\n\n As we already have a convenient way of loading the module, we'll\n populate the map with the Py3Status class right away\n \"\"\"\n user_modules = {}\n if not self.py3_modules:\n 
return user_modules\n for module_name, module_info in self.get_user_modules().items():\n for module in self.py3_modules:\n if module_name == module.split(\" \")[0]:\n source, item = module_info\n user_modules[module_name] = (source, item)\n return user_modules\n\n def load_modules(self, modules_list, user_modules):\n \"\"\"\n Load the given modules from the list (contains instance name) with\n respect to the user provided modules dict.\n\n modules_list: ['weather_yahoo paris', 'pewpew', 'net_rate']\n user_modules: {\n 'weather_yahoo': ('/etc/py3status.d/', 'weather_yahoo.py'),\n 'pewpew': ('entry_point', <Py3Status class>),\n }\n \"\"\"\n for module in modules_list:\n # ignore already provided modules (prevents double inclusion)\n if module in self.modules:\n continue\n try:\n instance = None\n payload = user_modules.get(module)\n if payload:\n kind, Klass = payload\n if kind == ENTRY_POINT_KEY:\n instance = Klass()\n my_m = Module(module, user_modules, self, instance=instance)\n # only handle modules with available methods\n if my_m.methods:\n self.modules[module] = my_m\n elif self.config[\"debug\"]:\n self.log(f'ignoring module \"{module}\" (no methods found)')\n except Exception:\n err = sys.exc_info()[1]\n msg = f'Loading module \"{module}\" failed ({err}).'\n self.report_exception(msg, level=\"warning\")\n\n def setup(self):\n \"\"\"\n Setup py3status and spawn i3status/events/modules threads.\n \"\"\"\n\n # SIGTSTP will be received from i3bar indicating that all output should\n # stop and we should consider py3status suspended. It is however\n # important that any processes using i3 ipc should continue to receive\n # those events otherwise it can lead to a stall in i3.\n signal(SIGTSTP, self.i3bar_stop)\n # SIGCONT indicates output should be resumed.\n signal(SIGCONT, self.i3bar_start)\n\n # log py3status and python versions\n self.log(\"=\" * 8)\n msg = \"Starting py3status version {version} python {python_version}\"\n self.log(msg.format(**self.config))\n\n try:\n # if running from git then log the branch and last commit\n # we do this by looking in the .git directory\n git_path = Path(__file__).resolve().parent.parent / \".git\"\n # branch\n with (git_path / \"HEAD\").open() as f:\n out = f.readline()\n branch = \"/\".join(out.strip().split(\"/\")[2:])\n self.log(f\"git branch: {branch}\")\n # last commit\n log_path = git_path / \"logs\" / \"refs\" / \"heads\" / branch\n with log_path.open() as f:\n out = f.readlines()[-1]\n sha = out.split(\" \")[1][:7]\n msg = \":\".join(out.strip().split(\"\\t\")[-1].split(\":\")[1:])\n self.log(f\"git commit: {sha}{msg}\")\n except: # noqa e722\n pass\n\n self.log(\"window manager: {}\".format(self.config[\"wm_name\"]))\n\n if self.config[\"debug\"]:\n self.log(f\"py3status started with config {self.config}\")\n\n if self.config[\"gevent\"]:\n self.is_gevent = self.gevent_monkey_patch_report()\n else:\n self.is_gevent = False\n\n # read i3status.conf\n config_path = self.config[\"i3status_config_path\"]\n self.log(\"config file: {}\".format(self.config[\"i3status_config_path\"]))\n self.config[\"py3_config\"] = process_config(config_path, self)\n\n # read resources\n if \"resources\" in str(self.config[\"py3_config\"].values()):\n from subprocess import check_output\n\n resources = check_output([\"xrdb\", \"-query\"]).decode().splitlines()\n self.config[\"resources\"] = {\n k: v.strip() for k, v in (x.split(\":\", 1) for x in resources)\n }\n\n # setup i3status thread\n self.i3status_thread = I3status(self)\n\n # If standalone or no i3status 
modules then use the mock i3status\n # else start i3status thread.\n i3s_modules = self.config[\"py3_config\"][\"i3s_modules\"]\n if self.config[\"standalone\"] or not i3s_modules:\n self.i3status_thread.mock()\n i3s_mode = \"mocked\"\n else:\n for module in i3s_modules:\n self.log(f\"adding module {module}\")\n i3s_mode = \"started\"\n self.i3status_thread.start()\n while not self.i3status_thread.ready:\n if not self.i3status_thread.is_alive():\n # i3status is having a bad day, so tell the user what went\n # wrong and do the best we can with just py3status modules.\n err = self.i3status_thread.error\n self.notify_user(err)\n self.i3status_thread.mock()\n i3s_mode = \"mocked\"\n break\n time.sleep(0.1)\n if self.config[\"debug\"]:\n self.log(\n \"i3status thread {} with config {}\".format(\n i3s_mode, self.config[\"py3_config\"]\n )\n )\n\n # add i3status thread monitoring task\n if i3s_mode == \"started\":\n task = CheckI3StatusThread(self.i3status_thread, self)\n self.timeout_queue_add(task)\n\n # setup input events thread\n self.events_thread = Events(self)\n self.events_thread.daemon = True\n self.events_thread.start()\n if self.config[\"debug\"]:\n self.log(\"events thread started\")\n\n # initialise the command server\n self.commands_thread = CommandServer(self)\n self.commands_thread.daemon = True\n self.commands_thread.start()\n if self.config[\"debug\"]:\n self.log(\"commands thread started\")\n\n # initialize the udev monitor (lazy)\n self.udev_monitor = UdevMonitor(self)\n\n # suppress modules' output wrt issue #20\n if not self.config[\"debug\"]:\n sys.stdout = Path(\"/dev/null\").open(\"w\")\n sys.stderr = Path(\"/dev/null\").open(\"w\")\n\n # get the list of py3status configured modules\n self.py3_modules = self.config[\"py3_config\"][\"py3_modules\"]\n\n # get a dict of all user provided modules\n self.log(\"modules include paths: {}\".format(self.config[\"include_paths\"]))\n user_modules = self.get_user_configured_modules()\n if self.config[\"debug\"]:\n self.log(f\"user_modules={user_modules}\")\n\n if self.py3_modules:\n # load and spawn i3status.conf configured modules threads\n self.load_modules(self.py3_modules, user_modules)\n\n def notify_user(\n self,\n msg,\n level=\"error\",\n rate_limit=None,\n module_name=\"\",\n icon=None,\n title=\"py3status\",\n ):\n \"\"\"\n Display notification to user via i3-nagbar or send-notify\n We also make sure to log anything to keep trace of it.\n\n NOTE: Message should end with a '.' for consistency.\n \"\"\"\n dbus = self.config.get(\"dbus_notify\")\n if dbus:\n # force msg, icon, title to be a string\n title = f\"{title}\"\n msg = f\"{msg}\"\n if icon:\n icon = f\"{icon}\"\n else:\n msg = f\"py3status: {msg}\"\n if level != \"info\" and module_name == \"\":\n fix_msg = \"{} Please try to fix this and reload i3wm (Mod+Shift+R)\"\n msg = fix_msg.format(msg)\n # Rate limiting. If rate limiting then we need to calculate the time\n # period for which the message should not be repeated. We just use\n # A simple chunked time model where a message cannot be repeated in a\n # given time period. Messages can be repeated more frequently but must\n # be in different time periods.\n\n limit_key = \"\"\n if rate_limit:\n try:\n limit_key = time.perf_counter() // rate_limit\n except TypeError:\n pass\n # We use a hash to see if the message is being repeated. 
This is crude\n # and imperfect but should work for our needs.\n msg_hash = hash(f\"{module_name}#{limit_key}#{msg}#{title}\")\n if msg_hash in self.notified_messages:\n return\n elif module_name:\n log_msg = 'Module `{}` sent a notification. \"{}: {}\"'.format(\n module_name, title, msg\n )\n self.log(log_msg, level)\n else:\n self.log(msg, level)\n self.notified_messages.add(msg_hash)\n\n try:\n if dbus:\n # fix any html entities\n msg = msg.replace(\"&\", \"&\")\n msg = msg.replace(\"<\", \"<\")\n msg = msg.replace(\">\", \">\")\n cmd = [\"notify-send\"]\n if icon:\n cmd += [\"-i\", icon]\n cmd += [\"-u\", DBUS_LEVELS.get(level, \"normal\"), \"-t\", \"10000\"]\n cmd += [title, msg]\n else:\n py3_config = self.config.get(\"py3_config\", {})\n nagbar_font = py3_config.get(\"py3status\", {}).get(\"nagbar_font\")\n wm_nag = self.config[\"wm\"][\"nag\"]\n cmd = [wm_nag, \"-m\", msg, \"-t\", level]\n if nagbar_font:\n cmd += [\"-f\", nagbar_font]\n Popen(\n cmd,\n stdout=Path(\"/dev/null\").open(\"w\"),\n stderr=Path(\"/dev/null\").open(\"w\"),\n )\n except Exception as err:\n self.log(f\"notify_user error: {err}\")\n\n def stop(self):\n \"\"\"\n Set the Event lock, this will break all threads' loops.\n \"\"\"\n self.running = False\n # stop the command server\n try:\n self.commands_thread.kill()\n except: # noqa e722\n pass\n\n try:\n self.lock.set()\n if self.config[\"debug\"]:\n self.log(\"lock set, exiting\")\n # run kill() method on all py3status modules\n for module in self.modules.values():\n module.kill()\n except: # noqa e722\n pass\n\n def refresh_modules(self, module_string=None, exact=True):\n \"\"\"\n Update modules.\n if module_string is None all modules are refreshed\n if module_string then modules with the exact name or those starting\n with the given string depending on exact parameter will be refreshed.\n If a module is an i3status one then we refresh i3status.\n To prevent abuse, we rate limit this function to 100ms for full\n refreshes.\n \"\"\"\n if not module_string:\n if time.perf_counter() > (self.last_refresh_ts + 0.1):\n self.last_refresh_ts = time.perf_counter()\n else:\n # rate limiting\n return\n update_i3status = False\n for name, module in self.output_modules.items():\n if (\n module_string is None\n or (exact and name == module_string)\n or (not exact and name.startswith(module_string))\n ):\n if module[\"type\"] == \"py3status\":\n if self.config[\"debug\"]:\n self.log(f\"refresh py3status module {name}\")\n module[\"module\"].force_update()\n else:\n if self.config[\"debug\"]:\n self.log(f\"refresh i3status module {name}\")\n update_i3status = True\n if update_i3status:\n self.i3status_thread.refresh_i3status()\n\n def sig_handler(self, signum, frame):\n \"\"\"\n SIGUSR1 was received, the user asks for an immediate refresh of the bar\n \"\"\"\n self.log(\"received USR1\")\n self.refresh_modules()\n\n def terminate(self, signum, frame):\n \"\"\"\n Received request to terminate (SIGTERM), exit nicely.\n \"\"\"\n self.log(\"received SIGTERM\")\n raise KeyboardInterrupt()\n\n def purge_module(self, module_name):\n \"\"\"\n A module has been removed e.g. 
a module that had an error.\n We need to find any containers and remove the module from them.\n \"\"\"\n containers = self.config[\"py3_config\"][\".module_groups\"]\n containers_to_update = set()\n if module_name in containers:\n containers_to_update.update(set(containers[module_name]))\n for container in containers_to_update:\n try:\n self.modules[container].module_class.items.remove(module_name)\n except ValueError:\n pass\n\n def notify_update(self, update, urgent=False):\n \"\"\"\n Name or list of names of modules that have updated.\n \"\"\"\n if not isinstance(update, list):\n update = [update]\n self.update_queue.extend(update)\n\n # find containers that use the modules that updated\n containers = self.config[\"py3_config\"][\".module_groups\"]\n containers_to_update = set()\n for item in update:\n if item in containers:\n containers_to_update.update(set(containers[item]))\n # force containers to update\n for container in containers_to_update:\n container_module = self.output_modules.get(container)\n if container_module:\n # If the container registered a urgent_function then call it\n # if this update is urgent.\n if urgent and container_module.get(\"urgent_function\"):\n container_module[\"urgent_function\"](update)\n # If a container has registered a content_function we use that\n # to see if the container needs to be updated.\n # We only need to update containers if their active content has\n # changed.\n if container_module.get(\"content_function\"):\n if set(update) & container_module[\"content_function\"]():\n container_module[\"module\"].force_update()\n else:\n # we don't know so just update.\n container_module[\"module\"].force_update()\n\n # we need to update the output\n if self.update_queue:\n self.update_request.set()\n\n def log(self, msg, level=\"info\"):\n \"\"\"\n log this information to syslog or user provided logfile.\n \"\"\"\n if not self.config.get(\"log_file\"):\n # If level was given as a str then convert to actual level\n level = LOG_LEVELS.get(level, level)\n syslog(level, f\"{msg}\")\n else:\n # Binary mode so fs encoding setting is not an issue\n with self.config[\"log_file\"].open(\"ab\") as f:\n log_time = time.strftime(\"%Y-%m-%d %H:%M:%S\")\n # nice formatting of data structures using pretty print\n if isinstance(msg, (dict, list, set, tuple)):\n msg = pformat(msg)\n # if multiline then start the data output on a fresh line\n # to aid readability.\n if \"\\n\" in msg:\n msg = \"\\n\" + msg\n out = f\"{log_time} {level.upper()} {msg}\\n\"\n try:\n # Encode unicode strings to bytes\n f.write(out.encode(\"utf-8\"))\n except (AttributeError, UnicodeDecodeError):\n # Write any byte strings straight to log\n f.write(out)\n\n def create_output_modules(self):\n \"\"\"\n Setup our output modules to allow easy updating of py3modules and\n i3status modules allows the same module to be used multiple times.\n \"\"\"\n py3_config = self.config[\"py3_config\"]\n i3modules = self.i3status_thread.i3modules\n output_modules = self.output_modules\n # position in the bar of the modules\n positions = {}\n for index, name in enumerate(py3_config[\"order\"]):\n if name not in positions:\n positions[name] = []\n positions[name].append(index)\n\n # py3status modules\n for name in self.modules:\n if name not in output_modules:\n output_modules[name] = {}\n output_modules[name][\"position\"] = positions.get(name, [])\n output_modules[name][\"module\"] = self.modules[name]\n output_modules[name][\"type\"] = \"py3status\"\n output_modules[name][\"color\"] = 
self.mappings_color.get(name)\n # i3status modules\n for name in i3modules:\n if name not in output_modules:\n output_modules[name] = {}\n output_modules[name][\"position\"] = positions.get(name, [])\n output_modules[name][\"module\"] = i3modules[name]\n output_modules[name][\"type\"] = \"i3status\"\n output_modules[name][\"color\"] = self.mappings_color.get(name)\n\n self.output_modules = output_modules\n\n def create_mappings(self, config):\n \"\"\"\n Create any mappings needed for global substitutions eg. colors\n \"\"\"\n mappings = {}\n for name, cfg in config.items():\n # Ignore special config sections.\n if name in CONFIG_SPECIAL_SECTIONS:\n continue\n color = self.get_config_attribute(name, \"color\")\n if hasattr(color, \"none_setting\"):\n color = None\n mappings[name] = color\n # Store mappings for later use.\n self.mappings_color = mappings\n\n def process_module_output(self, module):\n \"\"\"\n Process the output for a module and return a json string representing it.\n Color processing occurs here.\n \"\"\"\n outputs = module[\"module\"].get_latest()\n if self.config[\"py3_config\"][\"general\"].get(\"colors\") is False:\n for output in outputs:\n output.pop(\"color\", None)\n else:\n color = module[\"color\"]\n if color:\n for output in outputs:\n # Color: substitute the config defined color\n if \"color\" not in output:\n output[\"color\"] = color\n # Create the json string output.\n return \",\".join(dumps(x) for x in outputs)\n\n def i3bar_stop(self, signum, frame):\n self.log(\"received SIGTSTP\")\n self.i3bar_running = False\n # i3status should be stopped\n self.i3status_thread.suspend_i3status()\n self.sleep_modules()\n\n def i3bar_start(self, signum, frame):\n self.log(\"received SIGCONT\")\n self.i3bar_running = True\n self.wake_modules()\n\n def sleep_modules(self):\n # Put all py3modules to sleep so they stop updating\n for module in self.output_modules.values():\n if module[\"type\"] == \"py3status\":\n module[\"module\"].sleep()\n\n def wake_modules(self):\n # Wake up all py3modules.\n for module in self.output_modules.values():\n if module[\"type\"] == \"py3status\":\n module[\"module\"].wake()\n\n @profile\n def run(self):\n \"\"\"\n Main py3status loop, continuously read from i3status and modules\n and output it to i3bar for displaying.\n \"\"\"\n # SIGUSR1 forces a refresh of the bar both for py3status and i3status,\n # this mimics the USR1 signal handling of i3status (see man i3status)\n signal(SIGUSR1, self.sig_handler)\n signal(SIGTERM, self.terminate)\n\n # initialize usage variables\n py3_config = self.config[\"py3_config\"]\n\n # prepare the color mappings\n self.create_mappings(py3_config)\n\n # self.output_modules needs to have been created before modules are\n # started. 
This is so that modules can do things like register their\n # content_function.\n self.create_output_modules()\n\n # start up all our modules\n for module in self.modules.values():\n task = ModuleRunner(module)\n self.timeout_queue_add(task)\n\n # this will be our output set to the correct length for the number of\n # items in the bar\n output = [None] * len(py3_config[\"order\"])\n\n write = sys.__stdout__.write\n flush = sys.__stdout__.flush\n\n # start our output\n header = {\n \"version\": 1,\n \"click_events\": self.config[\"click_events\"],\n \"stop_signal\": SIGTSTP,\n }\n write(dumps(header))\n write(\"\\n[[]\\n\")\n\n update_due = None\n # main loop\n while True:\n # process the timeout_queue and get interval till next update due\n update_due = self.timeout_queue_process()\n\n # wait until an update is requested\n if self.update_request.wait(timeout=update_due):\n # event was set so clear it\n self.update_request.clear()\n\n while not self.i3bar_running:\n time.sleep(0.1)\n\n # check if an update is needed\n if self.update_queue:\n while len(self.update_queue):\n module_name = self.update_queue.popleft()\n module = self.output_modules[module_name]\n out = self.process_module_output(module)\n\n for index in module[\"position\"]:\n # store the output as json\n output[index] = out\n\n # build output string\n out = \",\".join(x for x in output if x)\n # dump the line to stdout\n write(f\",[{out}]\\n\")\n flush()\n", "path": "py3status/core.py"}], "after_files": [{"content": "import pkg_resources\nimport sys\nimport time\n\nfrom collections import deque\nfrom json import dumps\nfrom pathlib import Path\nfrom pprint import pformat\nfrom signal import signal, SIGTERM, SIGUSR1, SIGTSTP, SIGCONT\nfrom subprocess import Popen\nfrom threading import Event, Thread\nfrom syslog import syslog, LOG_ERR, LOG_INFO, LOG_WARNING\nfrom traceback import extract_tb, format_tb, format_stack\n\nfrom py3status.command import CommandServer\nfrom py3status.events import Events\nfrom py3status.formatter import expand_color\nfrom py3status.helpers import print_stderr\nfrom py3status.i3status import I3status\nfrom py3status.parse_config import process_config\nfrom py3status.module import Module\nfrom py3status.profiling import profile\nfrom py3status.udev_monitor import UdevMonitor\n\nLOG_LEVELS = {\"error\": LOG_ERR, \"warning\": LOG_WARNING, \"info\": LOG_INFO}\n\nDBUS_LEVELS = {\"error\": \"critical\", \"warning\": \"normal\", \"info\": \"low\"}\n\nCONFIG_SPECIAL_SECTIONS = [\n \".group_extras\",\n \".module_groups\",\n \"general\",\n \"i3s_modules\",\n \"on_click\",\n \"order\",\n \"py3_modules\",\n \"py3status\",\n]\n\nENTRY_POINT_NAME = \"py3status\"\nENTRY_POINT_KEY = \"entry_point\"\n\n\nclass Runner(Thread):\n \"\"\"\n A Simple helper to run a module in a Thread so it is non-locking.\n \"\"\"\n\n def __init__(self, module, py3_wrapper, module_name):\n Thread.__init__(self)\n self.daemon = True\n self.module = module\n self.module_name = module_name\n self.py3_wrapper = py3_wrapper\n self.start()\n\n def run(self):\n try:\n self.module.run()\n except: # noqa e722\n self.py3_wrapper.report_exception(\"Runner\")\n # the module is no longer running so notify the timeout logic\n if self.module_name:\n self.py3_wrapper.timeout_finished.append(self.module_name)\n\n\nclass NoneSetting:\n \"\"\"\n This class represents no setting in the config.\n \"\"\"\n\n # this attribute is used to identify that this is a none setting\n none_setting = True\n\n def __len__(self):\n return 0\n\n def __repr__(self):\n # this 
is for output via module_test\n return \"None\"\n\n\nclass Task:\n \"\"\"\n A simple task that can be run by the scheduler.\n \"\"\"\n\n def run(self):\n # F901 'raise NotImplemented' should be 'raise NotImplementedError'\n raise NotImplemented() # noqa f901\n\n\nclass CheckI3StatusThread(Task):\n \"\"\"\n Checks that the i3status thread is alive\n \"\"\"\n\n def __init__(self, i3status_thread, py3_wrapper):\n self.i3status_thread = i3status_thread\n self.timeout_queue_add = py3_wrapper.timeout_queue_add\n self.notify_user = py3_wrapper.notify_user\n\n def run(self):\n # check i3status thread\n if not self.i3status_thread.is_alive():\n err = self.i3status_thread.error\n if not err:\n err = \"I3status died horribly.\"\n self.notify_user(err)\n else:\n # check again in 5 seconds\n self.timeout_queue_add(self, int(time.perf_counter()) + 5)\n\n\nclass ModuleRunner(Task):\n \"\"\"\n Starts up a Module\n \"\"\"\n\n def __init__(self, module):\n self.module = module\n\n def run(self):\n self.module.start_module()\n\n\nclass Common:\n \"\"\"\n This class is used to hold core functionality so that it can be shared more\n easily. This allow us to run the module tests through the same code as\n when we are running for real.\n \"\"\"\n\n def __init__(self, py3_wrapper):\n self.py3_wrapper = py3_wrapper\n self.none_setting = NoneSetting()\n self.config = py3_wrapper.config\n\n def get_config_attribute(self, name, attribute):\n \"\"\"\n Look for the attribute in the config. Start with the named module and\n then walk up through any containing group and then try the general\n section of the config.\n \"\"\"\n\n # A user can set a param to None in the config to prevent a param\n # being used. This is important when modules do something like\n #\n # color = self.py3.COLOR_MUTED or self.py3.COLOR_BAD\n config = self.config[\"py3_config\"]\n param = config[name].get(attribute, self.none_setting)\n if hasattr(param, \"none_setting\") and name in config[\".module_groups\"]:\n for module in config[\".module_groups\"][name]:\n if attribute in config.get(module, {}):\n param = config[module].get(attribute)\n break\n if hasattr(param, \"none_setting\"):\n # check py3status config section\n param = config[\"py3status\"].get(attribute, self.none_setting)\n if hasattr(param, \"none_setting\"):\n # check py3status general section\n param = config[\"general\"].get(attribute, self.none_setting)\n if param and (attribute == \"color\" or attribute.startswith(\"color_\")):\n # check color value\n param = expand_color(param.lower(), self.none_setting)\n return param\n\n def report_exception(self, msg, notify_user=True, level=\"error\", error_frame=None):\n \"\"\"\n Report details of an exception to the user.\n This should only be called within an except: block Details of the\n exception are reported eg filename, line number and exception type.\n\n Because stack trace information outside of py3status or it's modules is\n not helpful in actually finding and fixing the error, we try to locate\n the first place that the exception affected our code.\n\n Alternatively if the error occurs in a module via a Py3 call that\n catches and reports the error then we receive an error_frame and use\n that as the source of the error.\n\n NOTE: msg should not end in a '.' 
for consistency.\n \"\"\"\n # Get list of paths that our stack trace should be found in.\n py3_paths = [Path(__file__).resolve()] + self.config[\"include_paths\"]\n traceback = None\n\n try:\n # We need to make sure to delete tb even if things go wrong.\n exc_type, exc_obj, tb = sys.exc_info()\n stack = extract_tb(tb)\n error_str = f\"{exc_type.__name__}: {exc_obj}\\n\"\n traceback = [error_str]\n\n if error_frame:\n # The error occurred in a py3status module so the traceback\n # should be made to appear correct. We caught the exception\n # but make it look as though we did not.\n traceback += format_stack(error_frame, 1) + format_tb(tb)\n filename = Path(error_frame.f_code.co_filename).name\n line_no = error_frame.f_lineno\n else:\n # This is a none module based error\n traceback += format_tb(tb)\n # Find first relevant trace in the stack.\n # it should be in py3status or one of it's modules.\n found = False\n for item in reversed(stack):\n filename = item[0]\n for path in py3_paths:\n if filename.startswith(path):\n # Found a good trace\n filename = item[0].name\n line_no = item[1]\n found = True\n break\n if found:\n break\n # all done! create our message.\n msg = \"{} ({}) {} line {}.\".format(\n msg, exc_type.__name__, filename, line_no\n )\n except: # noqa e722\n # something went wrong report what we can.\n msg = f\"{msg}.\"\n finally:\n # delete tb!\n del tb\n # log the exception and notify user\n self.py3_wrapper.log(msg, \"warning\")\n if traceback:\n # if debug is not in the config then we are at an early stage of\n # running py3status and logging is not yet available so output the\n # error to STDERR so it can be seen\n if \"debug\" not in self.config:\n print_stderr(\"\\n\".join(traceback))\n elif self.config.get(\"log_file\"):\n self.py3_wrapper.log(\"\".join([\"Traceback\\n\"] + traceback))\n if notify_user:\n self.py3_wrapper.notify_user(msg, level=level)\n\n\nclass Py3statusWrapper:\n \"\"\"\n This is the py3status wrapper.\n \"\"\"\n\n def __init__(self, options):\n \"\"\"\n Useful variables we'll need.\n \"\"\"\n self.config = vars(options)\n self.i3bar_running = True\n self.last_refresh_ts = time.perf_counter()\n self.lock = Event()\n self.modules = {}\n self.notified_messages = set()\n self.options = options\n self.output_modules = {}\n self.py3_modules = []\n self.running = True\n self.update_queue = deque()\n self.update_request = Event()\n\n # shared code\n self.common = Common(self)\n self.get_config_attribute = self.common.get_config_attribute\n self.report_exception = self.common.report_exception\n\n # these are used to schedule module updates\n self.timeout_add_queue = deque()\n self.timeout_due = None\n self.timeout_finished = deque()\n self.timeout_keys = []\n self.timeout_missed = {}\n self.timeout_queue = {}\n self.timeout_queue_lookup = {}\n self.timeout_running = set()\n self.timeout_update_due = deque()\n\n def timeout_queue_add(self, item, cache_time=0):\n \"\"\"\n Add a item to be run at a future time.\n This must be a Module, I3statusModule or a Task\n \"\"\"\n # add the info to the add queue. 
We do this so that actually adding\n # the module is done in the core thread.\n self.timeout_add_queue.append((item, cache_time))\n # if the timeout_add_queue is not due to be processed until after this\n # update request is due then trigger an update now.\n if self.timeout_due is None or cache_time < self.timeout_due:\n self.update_request.set()\n\n def timeout_process_add_queue(self, module, cache_time):\n \"\"\"\n Add a module to the timeout_queue if it is scheduled in the future or\n if it is due for an update immediately just trigger that.\n\n the timeout_queue is a dict with the scheduled time as the key and the\n value is a list of module instance names due to be updated at that\n point. An ordered list of keys is kept to allow easy checking of when\n updates are due. A list is also kept of which modules are in the\n update_queue to save having to search for modules in it unless needed.\n \"\"\"\n # If already set to update do nothing\n if module in self.timeout_update_due:\n return\n\n # remove if already in the queue\n key = self.timeout_queue_lookup.get(module)\n if key:\n queue_item = self.timeout_queue[key]\n queue_item.remove(module)\n if not queue_item:\n del self.timeout_queue[key]\n self.timeout_keys.remove(key)\n\n if cache_time == 0:\n # if cache_time is 0 we can just trigger the module update\n self.timeout_update_due.append(module)\n self.timeout_queue_lookup[module] = None\n else:\n # add the module to the timeout queue\n if cache_time not in self.timeout_keys:\n self.timeout_queue[cache_time] = {module}\n self.timeout_keys.append(cache_time)\n # sort keys so earliest is first\n self.timeout_keys.sort()\n\n # when is next timeout due?\n try:\n self.timeout_due = self.timeout_keys[0]\n except IndexError:\n self.timeout_due = None\n else:\n self.timeout_queue[cache_time].add(module)\n # note that the module is in the timeout_queue\n self.timeout_queue_lookup[module] = cache_time\n\n def timeout_queue_process(self):\n \"\"\"\n Check the timeout_queue and set any due modules to update.\n \"\"\"\n # process any items that need adding to the queue\n while self.timeout_add_queue:\n self.timeout_process_add_queue(*self.timeout_add_queue.popleft())\n now = time.perf_counter()\n due_timeouts = []\n # find any due timeouts\n for timeout in self.timeout_keys:\n if timeout > now:\n break\n due_timeouts.append(timeout)\n\n if due_timeouts:\n # process them\n for timeout in due_timeouts:\n modules = self.timeout_queue[timeout]\n # remove from the queue\n del self.timeout_queue[timeout]\n self.timeout_keys.remove(timeout)\n\n for module in modules:\n # module no longer in queue\n del self.timeout_queue_lookup[module]\n # tell module to update\n self.timeout_update_due.append(module)\n\n # when is next timeout due?\n try:\n self.timeout_due = self.timeout_keys[0]\n except IndexError:\n self.timeout_due = None\n\n # process any finished modules.\n # Now that the module has finished running it may have been marked to\n # be triggered again. This is most likely to happen when events are\n # being processed and the events are arriving much faster than the\n # module can handle them. It is important as a module may handle\n # events but not trigger the module update. 
If during the event the\n # module is due to update the update is not actioned but it needs to be\n # once the events have finished or else the module will no longer\n # continue to update.\n while self.timeout_finished:\n module_name = self.timeout_finished.popleft()\n self.timeout_running.discard(module_name)\n if module_name in self.timeout_missed:\n module = self.timeout_missed.pop(module_name)\n self.timeout_update_due.append(module)\n\n # run any modules that are due\n while self.timeout_update_due:\n module = self.timeout_update_due.popleft()\n module_name = getattr(module, \"module_full_name\", None)\n # if the module is running then we do not want to trigger it but\n # instead wait till it has finished running and then trigger\n if module_name and module_name in self.timeout_running:\n self.timeout_missed[module_name] = module\n else:\n self.timeout_running.add(module_name)\n Runner(module, self, module_name)\n\n # we return how long till we next need to process the timeout_queue\n if self.timeout_due is not None:\n return self.timeout_due - time.perf_counter()\n\n def gevent_monkey_patch_report(self):\n \"\"\"\n Report effective gevent monkey patching on the logs.\n \"\"\"\n try:\n import gevent.socket\n import socket\n\n if gevent.socket.socket is socket.socket:\n self.log(\"gevent monkey patching is active\")\n return True\n else:\n self.notify_user(\"gevent monkey patching failed.\")\n except ImportError:\n self.notify_user(\"gevent is not installed, monkey patching failed.\")\n return False\n\n def get_user_modules(self):\n \"\"\"Mapping from module name to relevant objects.\n\n There are two ways of discovery and storage:\n `include_paths` (no installation): include_path, f_name\n `entry_point` (from installed package): \"entry_point\", <Py3Status class>\n\n Modules of the same name from entry points shadow all other modules.\n \"\"\"\n user_modules = self._get_path_based_modules()\n user_modules.update(self._get_entry_point_based_modules())\n return user_modules\n\n def _get_path_based_modules(self):\n \"\"\"\n Search configured include directories for user provided modules.\n\n user_modules: {\n 'weather_yahoo': ('~/i3/py3status/', 'weather_yahoo.py')\n }\n \"\"\"\n user_modules = {}\n for include_path in self.config[\"include_paths\"]:\n for f_name in sorted(include_path.iterdir()):\n if f_name.suffix != \".py\":\n continue\n module_name = f_name.stem\n # do not overwrite modules if already found\n if module_name in user_modules:\n pass\n user_modules[module_name] = (include_path, f_name)\n self.log(f\"available module from {include_path}: {module_name}\")\n return user_modules\n\n def _get_entry_point_based_modules(self):\n classes_from_entry_points = {}\n for entry_point in pkg_resources.iter_entry_points(ENTRY_POINT_NAME):\n try:\n module = entry_point.load()\n except Exception as err:\n self.log(f\"entry_point '{entry_point}' error: {err}\")\n continue\n klass = getattr(module, Module.EXPECTED_CLASS, None)\n if klass:\n module_name = entry_point.module_name.split(\".\")[-1]\n classes_from_entry_points[module_name] = (ENTRY_POINT_KEY, klass)\n self.log(f\"available module from {ENTRY_POINT_KEY}: {module_name}\")\n return classes_from_entry_points\n\n def get_user_configured_modules(self):\n \"\"\"\n Get a dict of all available and configured py3status modules\n in the user's i3status.conf.\n\n As we already have a convenient way of loading the module, we'll\n populate the map with the Py3Status class right away\n \"\"\"\n user_modules = {}\n if not self.py3_modules:\n 
return user_modules\n for module_name, module_info in self.get_user_modules().items():\n for module in self.py3_modules:\n if module_name == module.split(\" \")[0]:\n source, item = module_info\n user_modules[module_name] = (source, item)\n return user_modules\n\n def load_modules(self, modules_list, user_modules):\n \"\"\"\n Load the given modules from the list (contains instance name) with\n respect to the user provided modules dict.\n\n modules_list: ['weather_yahoo paris', 'pewpew', 'net_rate']\n user_modules: {\n 'weather_yahoo': ('/etc/py3status.d/', 'weather_yahoo.py'),\n 'pewpew': ('entry_point', <Py3Status class>),\n }\n \"\"\"\n for module in modules_list:\n # ignore already provided modules (prevents double inclusion)\n if module in self.modules:\n continue\n try:\n instance = None\n payload = user_modules.get(module.split(\" \")[0])\n if payload:\n kind, Klass = payload\n if kind == ENTRY_POINT_KEY:\n instance = Klass()\n my_m = Module(module, user_modules, self, instance=instance)\n # only handle modules with available methods\n if my_m.methods:\n self.modules[module] = my_m\n elif self.config[\"debug\"]:\n self.log(f'ignoring module \"{module}\" (no methods found)')\n except Exception:\n err = sys.exc_info()[1]\n msg = f'Loading module \"{module}\" failed ({err}).'\n self.report_exception(msg, level=\"warning\")\n\n def setup(self):\n \"\"\"\n Setup py3status and spawn i3status/events/modules threads.\n \"\"\"\n\n # SIGTSTP will be received from i3bar indicating that all output should\n # stop and we should consider py3status suspended. It is however\n # important that any processes using i3 ipc should continue to receive\n # those events otherwise it can lead to a stall in i3.\n signal(SIGTSTP, self.i3bar_stop)\n # SIGCONT indicates output should be resumed.\n signal(SIGCONT, self.i3bar_start)\n\n # log py3status and python versions\n self.log(\"=\" * 8)\n msg = \"Starting py3status version {version} python {python_version}\"\n self.log(msg.format(**self.config))\n\n try:\n # if running from git then log the branch and last commit\n # we do this by looking in the .git directory\n git_path = Path(__file__).resolve().parent.parent / \".git\"\n # branch\n with (git_path / \"HEAD\").open() as f:\n out = f.readline()\n branch = \"/\".join(out.strip().split(\"/\")[2:])\n self.log(f\"git branch: {branch}\")\n # last commit\n log_path = git_path / \"logs\" / \"refs\" / \"heads\" / branch\n with log_path.open() as f:\n out = f.readlines()[-1]\n sha = out.split(\" \")[1][:7]\n msg = \":\".join(out.strip().split(\"\\t\")[-1].split(\":\")[1:])\n self.log(f\"git commit: {sha}{msg}\")\n except: # noqa e722\n pass\n\n self.log(\"window manager: {}\".format(self.config[\"wm_name\"]))\n\n if self.config[\"debug\"]:\n self.log(f\"py3status started with config {self.config}\")\n\n if self.config[\"gevent\"]:\n self.is_gevent = self.gevent_monkey_patch_report()\n else:\n self.is_gevent = False\n\n # read i3status.conf\n config_path = self.config[\"i3status_config_path\"]\n self.log(\"config file: {}\".format(self.config[\"i3status_config_path\"]))\n self.config[\"py3_config\"] = process_config(config_path, self)\n\n # read resources\n if \"resources\" in str(self.config[\"py3_config\"].values()):\n from subprocess import check_output\n\n resources = check_output([\"xrdb\", \"-query\"]).decode().splitlines()\n self.config[\"resources\"] = {\n k: v.strip() for k, v in (x.split(\":\", 1) for x in resources)\n }\n\n # setup i3status thread\n self.i3status_thread = I3status(self)\n\n # If standalone 
or no i3status modules then use the mock i3status\n # else start i3status thread.\n i3s_modules = self.config[\"py3_config\"][\"i3s_modules\"]\n if self.config[\"standalone\"] or not i3s_modules:\n self.i3status_thread.mock()\n i3s_mode = \"mocked\"\n else:\n for module in i3s_modules:\n self.log(f\"adding module {module}\")\n i3s_mode = \"started\"\n self.i3status_thread.start()\n while not self.i3status_thread.ready:\n if not self.i3status_thread.is_alive():\n # i3status is having a bad day, so tell the user what went\n # wrong and do the best we can with just py3status modules.\n err = self.i3status_thread.error\n self.notify_user(err)\n self.i3status_thread.mock()\n i3s_mode = \"mocked\"\n break\n time.sleep(0.1)\n if self.config[\"debug\"]:\n self.log(\n \"i3status thread {} with config {}\".format(\n i3s_mode, self.config[\"py3_config\"]\n )\n )\n\n # add i3status thread monitoring task\n if i3s_mode == \"started\":\n task = CheckI3StatusThread(self.i3status_thread, self)\n self.timeout_queue_add(task)\n\n # setup input events thread\n self.events_thread = Events(self)\n self.events_thread.daemon = True\n self.events_thread.start()\n if self.config[\"debug\"]:\n self.log(\"events thread started\")\n\n # initialise the command server\n self.commands_thread = CommandServer(self)\n self.commands_thread.daemon = True\n self.commands_thread.start()\n if self.config[\"debug\"]:\n self.log(\"commands thread started\")\n\n # initialize the udev monitor (lazy)\n self.udev_monitor = UdevMonitor(self)\n\n # suppress modules' output wrt issue #20\n if not self.config[\"debug\"]:\n sys.stdout = Path(\"/dev/null\").open(\"w\")\n sys.stderr = Path(\"/dev/null\").open(\"w\")\n\n # get the list of py3status configured modules\n self.py3_modules = self.config[\"py3_config\"][\"py3_modules\"]\n\n # get a dict of all user provided modules\n self.log(\"modules include paths: {}\".format(self.config[\"include_paths\"]))\n user_modules = self.get_user_configured_modules()\n if self.config[\"debug\"]:\n self.log(f\"user_modules={user_modules}\")\n\n if self.py3_modules:\n # load and spawn i3status.conf configured modules threads\n self.load_modules(self.py3_modules, user_modules)\n\n def notify_user(\n self,\n msg,\n level=\"error\",\n rate_limit=None,\n module_name=\"\",\n icon=None,\n title=\"py3status\",\n ):\n \"\"\"\n Display notification to user via i3-nagbar or send-notify\n We also make sure to log anything to keep trace of it.\n\n NOTE: Message should end with a '.' for consistency.\n \"\"\"\n dbus = self.config.get(\"dbus_notify\")\n if dbus:\n # force msg, icon, title to be a string\n title = f\"{title}\"\n msg = f\"{msg}\"\n if icon:\n icon = f\"{icon}\"\n else:\n msg = f\"py3status: {msg}\"\n if level != \"info\" and module_name == \"\":\n fix_msg = \"{} Please try to fix this and reload i3wm (Mod+Shift+R)\"\n msg = fix_msg.format(msg)\n # Rate limiting. If rate limiting then we need to calculate the time\n # period for which the message should not be repeated. We just use\n # A simple chunked time model where a message cannot be repeated in a\n # given time period. Messages can be repeated more frequently but must\n # be in different time periods.\n\n limit_key = \"\"\n if rate_limit:\n try:\n limit_key = time.perf_counter() // rate_limit\n except TypeError:\n pass\n # We use a hash to see if the message is being repeated. 
This is crude\n # and imperfect but should work for our needs.\n msg_hash = hash(f\"{module_name}#{limit_key}#{msg}#{title}\")\n if msg_hash in self.notified_messages:\n return\n elif module_name:\n log_msg = 'Module `{}` sent a notification. \"{}: {}\"'.format(\n module_name, title, msg\n )\n self.log(log_msg, level)\n else:\n self.log(msg, level)\n self.notified_messages.add(msg_hash)\n\n try:\n if dbus:\n # fix any html entities\n msg = msg.replace(\"&\", \"&\")\n msg = msg.replace(\"<\", \"<\")\n msg = msg.replace(\">\", \">\")\n cmd = [\"notify-send\"]\n if icon:\n cmd += [\"-i\", icon]\n cmd += [\"-u\", DBUS_LEVELS.get(level, \"normal\"), \"-t\", \"10000\"]\n cmd += [title, msg]\n else:\n py3_config = self.config.get(\"py3_config\", {})\n nagbar_font = py3_config.get(\"py3status\", {}).get(\"nagbar_font\")\n wm_nag = self.config[\"wm\"][\"nag\"]\n cmd = [wm_nag, \"-m\", msg, \"-t\", level]\n if nagbar_font:\n cmd += [\"-f\", nagbar_font]\n Popen(\n cmd,\n stdout=Path(\"/dev/null\").open(\"w\"),\n stderr=Path(\"/dev/null\").open(\"w\"),\n )\n except Exception as err:\n self.log(f\"notify_user error: {err}\")\n\n def stop(self):\n \"\"\"\n Set the Event lock, this will break all threads' loops.\n \"\"\"\n self.running = False\n # stop the command server\n try:\n self.commands_thread.kill()\n except: # noqa e722\n pass\n\n try:\n self.lock.set()\n if self.config[\"debug\"]:\n self.log(\"lock set, exiting\")\n # run kill() method on all py3status modules\n for module in self.modules.values():\n module.kill()\n except: # noqa e722\n pass\n\n def refresh_modules(self, module_string=None, exact=True):\n \"\"\"\n Update modules.\n if module_string is None all modules are refreshed\n if module_string then modules with the exact name or those starting\n with the given string depending on exact parameter will be refreshed.\n If a module is an i3status one then we refresh i3status.\n To prevent abuse, we rate limit this function to 100ms for full\n refreshes.\n \"\"\"\n if not module_string:\n if time.perf_counter() > (self.last_refresh_ts + 0.1):\n self.last_refresh_ts = time.perf_counter()\n else:\n # rate limiting\n return\n update_i3status = False\n for name, module in self.output_modules.items():\n if (\n module_string is None\n or (exact and name == module_string)\n or (not exact and name.startswith(module_string))\n ):\n if module[\"type\"] == \"py3status\":\n if self.config[\"debug\"]:\n self.log(f\"refresh py3status module {name}\")\n module[\"module\"].force_update()\n else:\n if self.config[\"debug\"]:\n self.log(f\"refresh i3status module {name}\")\n update_i3status = True\n if update_i3status:\n self.i3status_thread.refresh_i3status()\n\n def sig_handler(self, signum, frame):\n \"\"\"\n SIGUSR1 was received, the user asks for an immediate refresh of the bar\n \"\"\"\n self.log(\"received USR1\")\n self.refresh_modules()\n\n def terminate(self, signum, frame):\n \"\"\"\n Received request to terminate (SIGTERM), exit nicely.\n \"\"\"\n self.log(\"received SIGTERM\")\n raise KeyboardInterrupt()\n\n def purge_module(self, module_name):\n \"\"\"\n A module has been removed e.g. 
a module that had an error.\n We need to find any containers and remove the module from them.\n \"\"\"\n containers = self.config[\"py3_config\"][\".module_groups\"]\n containers_to_update = set()\n if module_name in containers:\n containers_to_update.update(set(containers[module_name]))\n for container in containers_to_update:\n try:\n self.modules[container].module_class.items.remove(module_name)\n except ValueError:\n pass\n\n def notify_update(self, update, urgent=False):\n \"\"\"\n Name or list of names of modules that have updated.\n \"\"\"\n if not isinstance(update, list):\n update = [update]\n self.update_queue.extend(update)\n\n # find containers that use the modules that updated\n containers = self.config[\"py3_config\"][\".module_groups\"]\n containers_to_update = set()\n for item in update:\n if item in containers:\n containers_to_update.update(set(containers[item]))\n # force containers to update\n for container in containers_to_update:\n container_module = self.output_modules.get(container)\n if container_module:\n # If the container registered a urgent_function then call it\n # if this update is urgent.\n if urgent and container_module.get(\"urgent_function\"):\n container_module[\"urgent_function\"](update)\n # If a container has registered a content_function we use that\n # to see if the container needs to be updated.\n # We only need to update containers if their active content has\n # changed.\n if container_module.get(\"content_function\"):\n if set(update) & container_module[\"content_function\"]():\n container_module[\"module\"].force_update()\n else:\n # we don't know so just update.\n container_module[\"module\"].force_update()\n\n # we need to update the output\n if self.update_queue:\n self.update_request.set()\n\n def log(self, msg, level=\"info\"):\n \"\"\"\n log this information to syslog or user provided logfile.\n \"\"\"\n if not self.config.get(\"log_file\"):\n # If level was given as a str then convert to actual level\n level = LOG_LEVELS.get(level, level)\n syslog(level, f\"{msg}\")\n else:\n # Binary mode so fs encoding setting is not an issue\n with self.config[\"log_file\"].open(\"ab\") as f:\n log_time = time.strftime(\"%Y-%m-%d %H:%M:%S\")\n # nice formatting of data structures using pretty print\n if isinstance(msg, (dict, list, set, tuple)):\n msg = pformat(msg)\n # if multiline then start the data output on a fresh line\n # to aid readability.\n if \"\\n\" in msg:\n msg = \"\\n\" + msg\n out = f\"{log_time} {level.upper()} {msg}\\n\"\n try:\n # Encode unicode strings to bytes\n f.write(out.encode(\"utf-8\"))\n except (AttributeError, UnicodeDecodeError):\n # Write any byte strings straight to log\n f.write(out)\n\n def create_output_modules(self):\n \"\"\"\n Setup our output modules to allow easy updating of py3modules and\n i3status modules allows the same module to be used multiple times.\n \"\"\"\n py3_config = self.config[\"py3_config\"]\n i3modules = self.i3status_thread.i3modules\n output_modules = self.output_modules\n # position in the bar of the modules\n positions = {}\n for index, name in enumerate(py3_config[\"order\"]):\n if name not in positions:\n positions[name] = []\n positions[name].append(index)\n\n # py3status modules\n for name in self.modules:\n if name not in output_modules:\n output_modules[name] = {}\n output_modules[name][\"position\"] = positions.get(name, [])\n output_modules[name][\"module\"] = self.modules[name]\n output_modules[name][\"type\"] = \"py3status\"\n output_modules[name][\"color\"] = 
self.mappings_color.get(name)\n # i3status modules\n for name in i3modules:\n if name not in output_modules:\n output_modules[name] = {}\n output_modules[name][\"position\"] = positions.get(name, [])\n output_modules[name][\"module\"] = i3modules[name]\n output_modules[name][\"type\"] = \"i3status\"\n output_modules[name][\"color\"] = self.mappings_color.get(name)\n\n self.output_modules = output_modules\n\n def create_mappings(self, config):\n \"\"\"\n Create any mappings needed for global substitutions eg. colors\n \"\"\"\n mappings = {}\n for name, cfg in config.items():\n # Ignore special config sections.\n if name in CONFIG_SPECIAL_SECTIONS:\n continue\n color = self.get_config_attribute(name, \"color\")\n if hasattr(color, \"none_setting\"):\n color = None\n mappings[name] = color\n # Store mappings for later use.\n self.mappings_color = mappings\n\n def process_module_output(self, module):\n \"\"\"\n Process the output for a module and return a json string representing it.\n Color processing occurs here.\n \"\"\"\n outputs = module[\"module\"].get_latest()\n if self.config[\"py3_config\"][\"general\"].get(\"colors\") is False:\n for output in outputs:\n output.pop(\"color\", None)\n else:\n color = module[\"color\"]\n if color:\n for output in outputs:\n # Color: substitute the config defined color\n if \"color\" not in output:\n output[\"color\"] = color\n # Create the json string output.\n return \",\".join(dumps(x) for x in outputs)\n\n def i3bar_stop(self, signum, frame):\n self.log(\"received SIGTSTP\")\n self.i3bar_running = False\n # i3status should be stopped\n self.i3status_thread.suspend_i3status()\n self.sleep_modules()\n\n def i3bar_start(self, signum, frame):\n self.log(\"received SIGCONT\")\n self.i3bar_running = True\n self.wake_modules()\n\n def sleep_modules(self):\n # Put all py3modules to sleep so they stop updating\n for module in self.output_modules.values():\n if module[\"type\"] == \"py3status\":\n module[\"module\"].sleep()\n\n def wake_modules(self):\n # Wake up all py3modules.\n for module in self.output_modules.values():\n if module[\"type\"] == \"py3status\":\n module[\"module\"].wake()\n\n @profile\n def run(self):\n \"\"\"\n Main py3status loop, continuously read from i3status and modules\n and output it to i3bar for displaying.\n \"\"\"\n # SIGUSR1 forces a refresh of the bar both for py3status and i3status,\n # this mimics the USR1 signal handling of i3status (see man i3status)\n signal(SIGUSR1, self.sig_handler)\n signal(SIGTERM, self.terminate)\n\n # initialize usage variables\n py3_config = self.config[\"py3_config\"]\n\n # prepare the color mappings\n self.create_mappings(py3_config)\n\n # self.output_modules needs to have been created before modules are\n # started. 
This is so that modules can do things like register their\n # content_function.\n self.create_output_modules()\n\n # start up all our modules\n for module in self.modules.values():\n task = ModuleRunner(module)\n self.timeout_queue_add(task)\n\n # this will be our output set to the correct length for the number of\n # items in the bar\n output = [None] * len(py3_config[\"order\"])\n\n write = sys.__stdout__.write\n flush = sys.__stdout__.flush\n\n # start our output\n header = {\n \"version\": 1,\n \"click_events\": self.config[\"click_events\"],\n \"stop_signal\": SIGTSTP,\n }\n write(dumps(header))\n write(\"\\n[[]\\n\")\n\n update_due = None\n # main loop\n while True:\n # process the timeout_queue and get interval till next update due\n update_due = self.timeout_queue_process()\n\n # wait until an update is requested\n if self.update_request.wait(timeout=update_due):\n # event was set so clear it\n self.update_request.clear()\n\n while not self.i3bar_running:\n time.sleep(0.1)\n\n # check if an update is needed\n if self.update_queue:\n while len(self.update_queue):\n module_name = self.update_queue.popleft()\n module = self.output_modules[module_name]\n out = self.process_module_output(module)\n\n for index in module[\"position\"]:\n # store the output as json\n output[index] = out\n\n # build output string\n out = \",\".join(x for x in output if x)\n # dump the line to stdout\n write(f\",[{out}]\\n\")\n flush()\n", "path": "py3status/core.py"}]} |
gh_patches_debug_1101 | rasdani/github-patches | git_diff | streamlink__streamlink-4763 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
plugins.huya: As of today, Huya plugin has been broken
### Checklist
- [X] This is a plugin issue and not a different kind of issue
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)
- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)
### Streamlink version
Latest stable release
### Description
When I try to open any public Huya stream, I get an error message. Huya appears to have changed how live IDs are handled and switched them to strings.
### Debug log
```text
hina@Hinas-MacBook-Pro ~ % streamlink https://www.huya.com/660108 best --loglevel debug
[cli][debug] OS: macOS 12.5
[cli][debug] Python: 3.10.6
[cli][debug] Streamlink: 4.3.0
[cli][debug] Dependencies:
[cli][debug] isodate: 0.6.1
[cli][debug] lxml: 4.9.1
[cli][debug] pycountry: 22.3.5
[cli][debug] pycryptodome: 3.15.0
[cli][debug] PySocks: 1.7.1
[cli][debug] requests: 2.28.1
[cli][debug] websocket-client: 1.3.3
[cli][debug] Arguments:
[cli][debug] url=https://www.huya.com/660108
[cli][debug] stream=['best']
[cli][debug] --loglevel=debug
[cli][info] Found matching plugin huya for URL https://www.huya.com/660108
error: Unable to validate response text: ValidationError(NoneOrAllSchema):
ValidationError(dict):
Unable to validate value of key 'data'
Context(AnySchema):
ValidationError(dict):
Unable to validate value of key 'gameLiveInfo'
Context(dict):
Unable to validate value of key 'liveId'
Context(type):
Type of '7134607205476108031' should be int, but is str
hina@Hinas-MacBook-Pro ~ %
```
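
For context on where the validation fails: the plugin's `validate.Schema` (see `huya.py` below) declares the room's `liveId` as `int`, while the response quoted in the log above now carries it as a JSON string. A minimal sketch of that mismatch, assuming only the `validate` helpers the plugin already imports:

```python
# Sketch of the failing check only — not part of the plugin itself.
from streamlink.plugin.api import validate

schema = validate.Schema({"liveId": int})
# Huya now returns the live ID as a string, so this raises the ValidationError
# seen in the log ("Type of '7134607205476108031' should be int, but is str"),
# which aborts _get_streams() before any stream URLs are built.
schema.validate({"liveId": "7134607205476108031"})
```

Accepting the string form in the schema is enough to get past this check.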
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/streamlink/plugins/huya.py`
Content:
```
1 """
2 $description Chinese live-streaming platform for live video game broadcasts and individual live streams.
3 $url huya.com
4 $type live
5 """
6
7 import base64
8 import logging
9 import re
10 from html import unescape as html_unescape
11 from typing import Dict
12
13 from streamlink.plugin import Plugin, pluginmatcher
14 from streamlink.plugin.api import validate
15 from streamlink.stream.http import HTTPStream
16
17 log = logging.getLogger(__name__)
18
19
20 @pluginmatcher(re.compile(
21 r"https?://(?:www\.)?huya\.com/(?P<channel>[^/]+)"
22 ))
23 class Huya(Plugin):
24 QUALITY_WEIGHTS: Dict[str, int] = {}
25
26 @classmethod
27 def stream_weight(cls, key):
28 weight = cls.QUALITY_WEIGHTS.get(key)
29 if weight:
30 return weight, "huya"
31
32 return super().stream_weight(key)
33
34 def _get_streams(self):
35 data = self.session.http.get(self.url, schema=validate.Schema(
36 validate.parse_html(),
37 validate.xml_xpath_string(".//script[contains(text(),'var hyPlayerConfig = {')][1]/text()"),
38 validate.none_or_all(
39 re.compile(r"""(?P<q>"?)stream(?P=q)\s*:\s*(?:"(?P<base64>.+?)"|(?P<json>\{.+?})\s*}\s*;)"""),
40 ),
41 validate.none_or_all(
42 validate.any(
43 validate.all(
44 validate.get("base64"),
45 str,
46 validate.transform(base64.b64decode),
47 ),
48 validate.all(
49 validate.get("json"),
50 str,
51 ),
52 ),
53 validate.parse_json(),
54 {
55 "data": [{
56 "gameLiveInfo": {
57 "liveId": int,
58 "nick": str,
59 "roomName": str,
60 },
61 "gameStreamInfoList": [validate.all(
62 {
63 "sCdnType": str,
64 "iPCPriorityRate": int,
65 "sStreamName": str,
66 "sFlvUrl": str,
67 "sFlvUrlSuffix": str,
68 "sFlvAntiCode": validate.all(str, validate.transform(lambda v: html_unescape(v))),
69 },
70 validate.union_get(
71 "sCdnType",
72 "iPCPriorityRate",
73 "sStreamName",
74 "sFlvUrl",
75 "sFlvUrlSuffix",
76 "sFlvAntiCode",
77 )),
78 ],
79 }],
80 },
81 validate.get(("data", 0)),
82 validate.union_get(
83 ("gameLiveInfo", "liveId"),
84 ("gameLiveInfo", "nick"),
85 ("gameLiveInfo", "roomName"),
86 "gameStreamInfoList",
87 ),
88 ),
89 ))
90 if not data:
91 return
92
93 self.id, self.author, self.title, streamdata = data
94
95 for cdntype, priority, streamname, flvurl, suffix, anticode in streamdata:
96 name = f"source_{cdntype.lower()}"
97 self.QUALITY_WEIGHTS[name] = priority
98 yield name, HTTPStream(self.session, f"{flvurl}/{streamname}.{suffix}?{anticode}")
99
100 log.debug(f"QUALITY_WEIGHTS: {self.QUALITY_WEIGHTS!r}")
101
102
103 __plugin__ = Huya
104
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/streamlink/plugins/huya.py b/src/streamlink/plugins/huya.py
--- a/src/streamlink/plugins/huya.py
+++ b/src/streamlink/plugins/huya.py
@@ -54,7 +54,7 @@
{
"data": [{
"gameLiveInfo": {
- "liveId": int,
+ "liveId": str,
"nick": str,
"roomName": str,
},
| {"golden_diff": "diff --git a/src/streamlink/plugins/huya.py b/src/streamlink/plugins/huya.py\n--- a/src/streamlink/plugins/huya.py\n+++ b/src/streamlink/plugins/huya.py\n@@ -54,7 +54,7 @@\n {\n \"data\": [{\n \"gameLiveInfo\": {\n- \"liveId\": int,\n+ \"liveId\": str,\n \"nick\": str,\n \"roomName\": str,\n },\n", "issue": "plugins.huya: As of today, Huya plugin has been broken\n### Checklist\n\n- [X] This is a plugin issue and not a different kind of issue\n- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)\n- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)\n- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)\n\n### Streamlink version\n\nLatest stable release\n\n### Description\n\nWhen I try to open any public Huya stream I get an error message. Assuming Huya has changed how live IDs are handled and switched to strings.\n\n### Debug log\n\n```text\nhina@Hinas-MacBook-Pro ~ % streamlink https://www.huya.com/660108 best --loglevel debug\r\n[cli][debug] OS: macOS 12.5\r\n[cli][debug] Python: 3.10.6\r\n[cli][debug] Streamlink: 4.3.0\r\n[cli][debug] Dependencies:\r\n[cli][debug] isodate: 0.6.1\r\n[cli][debug] lxml: 4.9.1\r\n[cli][debug] pycountry: 22.3.5\r\n[cli][debug] pycryptodome: 3.15.0\r\n[cli][debug] PySocks: 1.7.1\r\n[cli][debug] requests: 2.28.1\r\n[cli][debug] websocket-client: 1.3.3\r\n[cli][debug] Arguments:\r\n[cli][debug] url=https://www.huya.com/660108\r\n[cli][debug] stream=['best']\r\n[cli][debug] --loglevel=debug\r\n[cli][info] Found matching plugin huya for URL https://www.huya.com/660108\r\nerror: Unable to validate response text: ValidationError(NoneOrAllSchema):\r\n ValidationError(dict):\r\n Unable to validate value of key 'data'\r\n Context(AnySchema):\r\n ValidationError(dict):\r\n Unable to validate value of key 'gameLiveInfo'\r\n Context(dict):\r\n Unable to validate value of key 'liveId'\r\n Context(type):\r\n Type of '7134607205476108031' should be int, but is str\r\nhina@Hinas-MacBook-Pro ~ %\n```\n\n", "before_files": [{"content": "\"\"\"\n$description Chinese live-streaming platform for live video game broadcasts and individual live streams.\n$url huya.com\n$type live\n\"\"\"\n\nimport base64\nimport logging\nimport re\nfrom html import unescape as html_unescape\nfrom typing import Dict\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.http import HTTPStream\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(\n r\"https?://(?:www\\.)?huya\\.com/(?P<channel>[^/]+)\"\n))\nclass Huya(Plugin):\n QUALITY_WEIGHTS: Dict[str, int] = {}\n\n @classmethod\n def stream_weight(cls, key):\n weight = cls.QUALITY_WEIGHTS.get(key)\n if weight:\n return weight, \"huya\"\n\n return super().stream_weight(key)\n\n def _get_streams(self):\n data = self.session.http.get(self.url, schema=validate.Schema(\n validate.parse_html(),\n validate.xml_xpath_string(\".//script[contains(text(),'var hyPlayerConfig = {')][1]/text()\"),\n validate.none_or_all(\n re.compile(r\"\"\"(?P<q>\"?)stream(?P=q)\\s*:\\s*(?:\"(?P<base64>.+?)\"|(?P<json>\\{.+?})\\s*}\\s*;)\"\"\"),\n ),\n validate.none_or_all(\n validate.any(\n validate.all(\n validate.get(\"base64\"),\n str,\n validate.transform(base64.b64decode),\n ),\n validate.all(\n 
validate.get(\"json\"),\n str,\n ),\n ),\n validate.parse_json(),\n {\n \"data\": [{\n \"gameLiveInfo\": {\n \"liveId\": int,\n \"nick\": str,\n \"roomName\": str,\n },\n \"gameStreamInfoList\": [validate.all(\n {\n \"sCdnType\": str,\n \"iPCPriorityRate\": int,\n \"sStreamName\": str,\n \"sFlvUrl\": str,\n \"sFlvUrlSuffix\": str,\n \"sFlvAntiCode\": validate.all(str, validate.transform(lambda v: html_unescape(v))),\n },\n validate.union_get(\n \"sCdnType\",\n \"iPCPriorityRate\",\n \"sStreamName\",\n \"sFlvUrl\",\n \"sFlvUrlSuffix\",\n \"sFlvAntiCode\",\n )),\n ],\n }],\n },\n validate.get((\"data\", 0)),\n validate.union_get(\n (\"gameLiveInfo\", \"liveId\"),\n (\"gameLiveInfo\", \"nick\"),\n (\"gameLiveInfo\", \"roomName\"),\n \"gameStreamInfoList\",\n ),\n ),\n ))\n if not data:\n return\n\n self.id, self.author, self.title, streamdata = data\n\n for cdntype, priority, streamname, flvurl, suffix, anticode in streamdata:\n name = f\"source_{cdntype.lower()}\"\n self.QUALITY_WEIGHTS[name] = priority\n yield name, HTTPStream(self.session, f\"{flvurl}/{streamname}.{suffix}?{anticode}\")\n\n log.debug(f\"QUALITY_WEIGHTS: {self.QUALITY_WEIGHTS!r}\")\n\n\n__plugin__ = Huya\n", "path": "src/streamlink/plugins/huya.py"}], "after_files": [{"content": "\"\"\"\n$description Chinese live-streaming platform for live video game broadcasts and individual live streams.\n$url huya.com\n$type live\n\"\"\"\n\nimport base64\nimport logging\nimport re\nfrom html import unescape as html_unescape\nfrom typing import Dict\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.http import HTTPStream\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(\n r\"https?://(?:www\\.)?huya\\.com/(?P<channel>[^/]+)\"\n))\nclass Huya(Plugin):\n QUALITY_WEIGHTS: Dict[str, int] = {}\n\n @classmethod\n def stream_weight(cls, key):\n weight = cls.QUALITY_WEIGHTS.get(key)\n if weight:\n return weight, \"huya\"\n\n return super().stream_weight(key)\n\n def _get_streams(self):\n data = self.session.http.get(self.url, schema=validate.Schema(\n validate.parse_html(),\n validate.xml_xpath_string(\".//script[contains(text(),'var hyPlayerConfig = {')][1]/text()\"),\n validate.none_or_all(\n re.compile(r\"\"\"(?P<q>\"?)stream(?P=q)\\s*:\\s*(?:\"(?P<base64>.+?)\"|(?P<json>\\{.+?})\\s*}\\s*;)\"\"\"),\n ),\n validate.none_or_all(\n validate.any(\n validate.all(\n validate.get(\"base64\"),\n str,\n validate.transform(base64.b64decode),\n ),\n validate.all(\n validate.get(\"json\"),\n str,\n ),\n ),\n validate.parse_json(),\n {\n \"data\": [{\n \"gameLiveInfo\": {\n \"liveId\": str,\n \"nick\": str,\n \"roomName\": str,\n },\n \"gameStreamInfoList\": [validate.all(\n {\n \"sCdnType\": str,\n \"iPCPriorityRate\": int,\n \"sStreamName\": str,\n \"sFlvUrl\": str,\n \"sFlvUrlSuffix\": str,\n \"sFlvAntiCode\": validate.all(str, validate.transform(lambda v: html_unescape(v))),\n },\n validate.union_get(\n \"sCdnType\",\n \"iPCPriorityRate\",\n \"sStreamName\",\n \"sFlvUrl\",\n \"sFlvUrlSuffix\",\n \"sFlvAntiCode\",\n )),\n ],\n }],\n },\n validate.get((\"data\", 0)),\n validate.union_get(\n (\"gameLiveInfo\", \"liveId\"),\n (\"gameLiveInfo\", \"nick\"),\n (\"gameLiveInfo\", \"roomName\"),\n \"gameStreamInfoList\",\n ),\n ),\n ))\n if not data:\n return\n\n self.id, self.author, self.title, streamdata = data\n\n for cdntype, priority, streamname, flvurl, suffix, anticode in streamdata:\n name = f\"source_{cdntype.lower()}\"\n self.QUALITY_WEIGHTS[name] = 
priority\n yield name, HTTPStream(self.session, f\"{flvurl}/{streamname}.{suffix}?{anticode}\")\n\n log.debug(f\"QUALITY_WEIGHTS: {self.QUALITY_WEIGHTS!r}\")\n\n\n__plugin__ = Huya\n", "path": "src/streamlink/plugins/huya.py"}]} |
gh_patches_debug_1102 | rasdani/github-patches | git_diff | wagtail__wagtail-997 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Password reset request generates email with URL "example.com"
Received:
```
Please follow the link below to reset your password
http://example.com/admin/password_reset/confirm/NA/3x7-cfc1f37209f0c04d1ee1/
```
This time `BASE_URL` _is_ configured, but as this view is from django.contrib this is perhaps due to some other missing setting.
Related to #693 #826
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `wagtail/wagtailadmin/templatetags/wagtailadmin_tags.py`
Content:
```
1 from __future__ import unicode_literals
2
3 import re
4
5 from django.conf import settings
6 from django import template
7 from django.contrib.humanize.templatetags.humanize import intcomma
8
9 from wagtail.wagtailcore import hooks
10 from wagtail.wagtailcore.models import get_navigation_menu_items, UserPagePermissionsProxy, PageViewRestriction
11 from wagtail.wagtailcore.utils import camelcase_to_underscore
12 from wagtail.wagtailadmin.menu import admin_menu
13
14
15 register = template.Library()
16
17 register.filter('intcomma', intcomma)
18
19 @register.inclusion_tag('wagtailadmin/shared/explorer_nav.html')
20 def explorer_nav():
21 return {
22 'nodes': get_navigation_menu_items()
23 }
24
25
26 @register.inclusion_tag('wagtailadmin/shared/explorer_nav_child.html')
27 def explorer_subnav(nodes):
28 return {
29 'nodes': nodes
30 }
31
32
33 @register.inclusion_tag('wagtailadmin/shared/main_nav.html', takes_context=True)
34 def main_nav(context):
35 request = context['request']
36
37 return {
38 'menu_html': admin_menu.render_html(request),
39 'request': request,
40 }
41
42 @register.simple_tag
43 def main_nav_js():
44 return admin_menu.media['js']
45
46
47 @register.filter("ellipsistrim")
48 def ellipsistrim(value, max_length):
49 if len(value) > max_length:
50 truncd_val = value[:max_length]
51 if not len(value) == max_length+1 and value[max_length+1] != " ":
52 truncd_val = truncd_val[:truncd_val.rfind(" ")]
53 return truncd_val + "..."
54 return value
55
56
57 @register.filter
58 def fieldtype(bound_field):
59 try:
60 return camelcase_to_underscore(bound_field.field.__class__.__name__)
61 except AttributeError:
62 try:
63 return camelcase_to_underscore(bound_field.__class__.__name__)
64 except AttributeError:
65 return ""
66
67
68 @register.filter
69 def widgettype(bound_field):
70 try:
71 return camelcase_to_underscore(bound_field.field.widget.__class__.__name__)
72 except AttributeError:
73 return ""
74
75
76 @register.filter
77 def meta_description(model):
78 try:
79 return model.model_class()._meta.description
80 except:
81 return ""
82
83
84 @register.assignment_tag(takes_context=True)
85 def page_permissions(context, page):
86 """
87 Usage: {% page_permissions page as page_perms %}
88 Sets the variable 'page_perms' to a PagePermissionTester object that can be queried to find out
89 what actions the current logged-in user can perform on the given page.
90 """
91 # Create a UserPagePermissionsProxy object to represent the user's global permissions, and
92 # cache it in the context for the duration of the page request, if one does not exist already
93 if 'user_page_permissions' not in context:
94 context['user_page_permissions'] = UserPagePermissionsProxy(context['request'].user)
95
96 # Now retrieve a PagePermissionTester from it, specific to the given page
97 return context['user_page_permissions'].for_page(page)
98
99
100 @register.assignment_tag(takes_context=True)
101 def test_page_is_public(context, page):
102 """
103 Usage: {% test_page_is_public page as is_public %}
104 Sets 'is_public' to True iff there are no page view restrictions in place on
105 this page.
106 Caches the list of page view restrictions in the context, to avoid repeated
107 DB queries on repeated calls.
108 """
109 if 'all_page_view_restriction_paths' not in context:
110 context['all_page_view_restriction_paths'] = PageViewRestriction.objects.select_related('page').values_list('page__path', flat=True)
111
112 is_private = any([
113 page.path.startswith(restricted_path)
114 for restricted_path in context['all_page_view_restriction_paths']
115 ])
116
117 return not is_private
118
119
120 @register.simple_tag
121 def hook_output(hook_name):
122 """
123 Example: {% hook_output 'insert_editor_css' %}
124 Whenever we have a hook whose functions take no parameters and return a string, this tag can be used
125 to output the concatenation of all of those return values onto the page.
126 Note that the output is not escaped - it is the hook function's responsibility to escape unsafe content.
127 """
128 snippets = [fn() for fn in hooks.get_hooks(hook_name)]
129 return ''.join(snippets)
130
131
132 @register.assignment_tag
133 def usage_count_enabled():
134 return getattr(settings, 'WAGTAIL_USAGE_COUNT_ENABLED', False)
135
136
137 class EscapeScriptNode(template.Node):
138 TAG_NAME = 'escapescript'
139 SCRIPT_RE = re.compile(r'<(-*)/script>')
140
141 def __init__(self, nodelist):
142 super(EscapeScriptNode, self).__init__()
143 self.nodelist = nodelist
144
145 def render(self, context):
146 out = self.nodelist.render(context)
147 escaped_out = self.SCRIPT_RE.sub(r'<-\1/script>', out)
148 return escaped_out
149
150 @classmethod
151 def handle(cls, parser, token):
152 nodelist = parser.parse(('end' + EscapeScriptNode.TAG_NAME,))
153 parser.delete_first_token()
154 return cls(nodelist)
155
156 register.tag(EscapeScriptNode.TAG_NAME, EscapeScriptNode.handle)
157
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/wagtail/wagtailadmin/templatetags/wagtailadmin_tags.py b/wagtail/wagtailadmin/templatetags/wagtailadmin_tags.py
--- a/wagtail/wagtailadmin/templatetags/wagtailadmin_tags.py
+++ b/wagtail/wagtailadmin/templatetags/wagtailadmin_tags.py
@@ -134,6 +134,11 @@
return getattr(settings, 'WAGTAIL_USAGE_COUNT_ENABLED', False)
[email protected]_tag
+def base_url_setting():
+ return getattr(settings, 'BASE_URL', None)
+
+
class EscapeScriptNode(template.Node):
TAG_NAME = 'escapescript'
SCRIPT_RE = re.compile(r'<(-*)/script>')
| {"golden_diff": "diff --git a/wagtail/wagtailadmin/templatetags/wagtailadmin_tags.py b/wagtail/wagtailadmin/templatetags/wagtailadmin_tags.py\n--- a/wagtail/wagtailadmin/templatetags/wagtailadmin_tags.py\n+++ b/wagtail/wagtailadmin/templatetags/wagtailadmin_tags.py\n@@ -134,6 +134,11 @@\n return getattr(settings, 'WAGTAIL_USAGE_COUNT_ENABLED', False)\n \n \[email protected]_tag\n+def base_url_setting():\n+ return getattr(settings, 'BASE_URL', None)\n+\n+\n class EscapeScriptNode(template.Node):\n TAG_NAME = 'escapescript'\n SCRIPT_RE = re.compile(r'<(-*)/script>')\n", "issue": "Password reset request generates email with URL \"example.com\"\nReceived:\n\n```\nPlease follow the link below to reset your password\nhttp://example.com/admin/password_reset/confirm/NA/3x7-cfc1f37209f0c04d1ee1/\n```\n\nThis time `BASE_URL` _is_ configured, but as this view is from django.contrib this is perhaps due to some other missing setting. \n\nRelated to #693 #826\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport re\n\nfrom django.conf import settings\nfrom django import template\nfrom django.contrib.humanize.templatetags.humanize import intcomma\n\nfrom wagtail.wagtailcore import hooks\nfrom wagtail.wagtailcore.models import get_navigation_menu_items, UserPagePermissionsProxy, PageViewRestriction\nfrom wagtail.wagtailcore.utils import camelcase_to_underscore\nfrom wagtail.wagtailadmin.menu import admin_menu\n\n\nregister = template.Library()\n\nregister.filter('intcomma', intcomma)\n\[email protected]_tag('wagtailadmin/shared/explorer_nav.html')\ndef explorer_nav():\n return {\n 'nodes': get_navigation_menu_items()\n }\n\n\[email protected]_tag('wagtailadmin/shared/explorer_nav_child.html')\ndef explorer_subnav(nodes):\n return {\n 'nodes': nodes\n }\n\n\[email protected]_tag('wagtailadmin/shared/main_nav.html', takes_context=True)\ndef main_nav(context):\n request = context['request']\n\n return {\n 'menu_html': admin_menu.render_html(request),\n 'request': request,\n }\n\[email protected]_tag\ndef main_nav_js():\n return admin_menu.media['js']\n\n\[email protected](\"ellipsistrim\")\ndef ellipsistrim(value, max_length):\n if len(value) > max_length:\n truncd_val = value[:max_length]\n if not len(value) == max_length+1 and value[max_length+1] != \" \":\n truncd_val = truncd_val[:truncd_val.rfind(\" \")]\n return truncd_val + \"...\"\n return value\n\n\[email protected]\ndef fieldtype(bound_field):\n try:\n return camelcase_to_underscore(bound_field.field.__class__.__name__)\n except AttributeError:\n try:\n return camelcase_to_underscore(bound_field.__class__.__name__)\n except AttributeError:\n return \"\"\n\n\[email protected]\ndef widgettype(bound_field):\n try:\n return camelcase_to_underscore(bound_field.field.widget.__class__.__name__)\n except AttributeError:\n return \"\"\n\n\[email protected]\ndef meta_description(model):\n try:\n return model.model_class()._meta.description\n except:\n return \"\"\n\n\[email protected]_tag(takes_context=True)\ndef page_permissions(context, page):\n \"\"\"\n Usage: {% page_permissions page as page_perms %}\n Sets the variable 'page_perms' to a PagePermissionTester object that can be queried to find out\n what actions the current logged-in user can perform on the given page.\n \"\"\"\n # Create a UserPagePermissionsProxy object to represent the user's global permissions, and\n # cache it in the context for the duration of the page request, if one does not exist already\n if 'user_page_permissions' not in context:\n 
context['user_page_permissions'] = UserPagePermissionsProxy(context['request'].user)\n\n # Now retrieve a PagePermissionTester from it, specific to the given page\n return context['user_page_permissions'].for_page(page)\n\n\[email protected]_tag(takes_context=True)\ndef test_page_is_public(context, page):\n \"\"\"\n Usage: {% test_page_is_public page as is_public %}\n Sets 'is_public' to True iff there are no page view restrictions in place on\n this page.\n Caches the list of page view restrictions in the context, to avoid repeated\n DB queries on repeated calls.\n \"\"\"\n if 'all_page_view_restriction_paths' not in context:\n context['all_page_view_restriction_paths'] = PageViewRestriction.objects.select_related('page').values_list('page__path', flat=True)\n\n is_private = any([\n page.path.startswith(restricted_path)\n for restricted_path in context['all_page_view_restriction_paths']\n ])\n\n return not is_private\n\n\[email protected]_tag\ndef hook_output(hook_name):\n \"\"\"\n Example: {% hook_output 'insert_editor_css' %}\n Whenever we have a hook whose functions take no parameters and return a string, this tag can be used\n to output the concatenation of all of those return values onto the page.\n Note that the output is not escaped - it is the hook function's responsibility to escape unsafe content.\n \"\"\"\n snippets = [fn() for fn in hooks.get_hooks(hook_name)]\n return ''.join(snippets)\n\n\[email protected]_tag\ndef usage_count_enabled():\n return getattr(settings, 'WAGTAIL_USAGE_COUNT_ENABLED', False)\n\n\nclass EscapeScriptNode(template.Node):\n TAG_NAME = 'escapescript'\n SCRIPT_RE = re.compile(r'<(-*)/script>')\n\n def __init__(self, nodelist):\n super(EscapeScriptNode, self).__init__()\n self.nodelist = nodelist\n\n def render(self, context):\n out = self.nodelist.render(context)\n escaped_out = self.SCRIPT_RE.sub(r'<-\\1/script>', out)\n return escaped_out\n\n @classmethod\n def handle(cls, parser, token):\n nodelist = parser.parse(('end' + EscapeScriptNode.TAG_NAME,))\n parser.delete_first_token()\n return cls(nodelist)\n\nregister.tag(EscapeScriptNode.TAG_NAME, EscapeScriptNode.handle)\n", "path": "wagtail/wagtailadmin/templatetags/wagtailadmin_tags.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nimport re\n\nfrom django.conf import settings\nfrom django import template\nfrom django.contrib.humanize.templatetags.humanize import intcomma\n\nfrom wagtail.wagtailcore import hooks\nfrom wagtail.wagtailcore.models import get_navigation_menu_items, UserPagePermissionsProxy, PageViewRestriction\nfrom wagtail.wagtailcore.utils import camelcase_to_underscore\nfrom wagtail.wagtailadmin.menu import admin_menu\n\n\nregister = template.Library()\n\nregister.filter('intcomma', intcomma)\n\[email protected]_tag('wagtailadmin/shared/explorer_nav.html')\ndef explorer_nav():\n return {\n 'nodes': get_navigation_menu_items()\n }\n\n\[email protected]_tag('wagtailadmin/shared/explorer_nav_child.html')\ndef explorer_subnav(nodes):\n return {\n 'nodes': nodes\n }\n\n\[email protected]_tag('wagtailadmin/shared/main_nav.html', takes_context=True)\ndef main_nav(context):\n request = context['request']\n\n return {\n 'menu_html': admin_menu.render_html(request),\n 'request': request,\n }\n\[email protected]_tag\ndef main_nav_js():\n return admin_menu.media['js']\n\n\[email protected](\"ellipsistrim\")\ndef ellipsistrim(value, max_length):\n if len(value) > max_length:\n truncd_val = value[:max_length]\n if not len(value) == max_length+1 and value[max_length+1] != 
\" \":\n truncd_val = truncd_val[:truncd_val.rfind(\" \")]\n return truncd_val + \"...\"\n return value\n\n\[email protected]\ndef fieldtype(bound_field):\n try:\n return camelcase_to_underscore(bound_field.field.__class__.__name__)\n except AttributeError:\n try:\n return camelcase_to_underscore(bound_field.__class__.__name__)\n except AttributeError:\n return \"\"\n\n\[email protected]\ndef widgettype(bound_field):\n try:\n return camelcase_to_underscore(bound_field.field.widget.__class__.__name__)\n except AttributeError:\n return \"\"\n\n\[email protected]\ndef meta_description(model):\n try:\n return model.model_class()._meta.description\n except:\n return \"\"\n\n\[email protected]_tag(takes_context=True)\ndef page_permissions(context, page):\n \"\"\"\n Usage: {% page_permissions page as page_perms %}\n Sets the variable 'page_perms' to a PagePermissionTester object that can be queried to find out\n what actions the current logged-in user can perform on the given page.\n \"\"\"\n # Create a UserPagePermissionsProxy object to represent the user's global permissions, and\n # cache it in the context for the duration of the page request, if one does not exist already\n if 'user_page_permissions' not in context:\n context['user_page_permissions'] = UserPagePermissionsProxy(context['request'].user)\n\n # Now retrieve a PagePermissionTester from it, specific to the given page\n return context['user_page_permissions'].for_page(page)\n\n\[email protected]_tag(takes_context=True)\ndef test_page_is_public(context, page):\n \"\"\"\n Usage: {% test_page_is_public page as is_public %}\n Sets 'is_public' to True iff there are no page view restrictions in place on\n this page.\n Caches the list of page view restrictions in the context, to avoid repeated\n DB queries on repeated calls.\n \"\"\"\n if 'all_page_view_restriction_paths' not in context:\n context['all_page_view_restriction_paths'] = PageViewRestriction.objects.select_related('page').values_list('page__path', flat=True)\n\n is_private = any([\n page.path.startswith(restricted_path)\n for restricted_path in context['all_page_view_restriction_paths']\n ])\n\n return not is_private\n\n\[email protected]_tag\ndef hook_output(hook_name):\n \"\"\"\n Example: {% hook_output 'insert_editor_css' %}\n Whenever we have a hook whose functions take no parameters and return a string, this tag can be used\n to output the concatenation of all of those return values onto the page.\n Note that the output is not escaped - it is the hook function's responsibility to escape unsafe content.\n \"\"\"\n snippets = [fn() for fn in hooks.get_hooks(hook_name)]\n return ''.join(snippets)\n\n\[email protected]_tag\ndef usage_count_enabled():\n return getattr(settings, 'WAGTAIL_USAGE_COUNT_ENABLED', False)\n\n\[email protected]_tag\ndef base_url_setting():\n return getattr(settings, 'BASE_URL', None)\n\n\nclass EscapeScriptNode(template.Node):\n TAG_NAME = 'escapescript'\n SCRIPT_RE = re.compile(r'<(-*)/script>')\n\n def __init__(self, nodelist):\n super(EscapeScriptNode, self).__init__()\n self.nodelist = nodelist\n\n def render(self, context):\n out = self.nodelist.render(context)\n escaped_out = self.SCRIPT_RE.sub(r'<-\\1/script>', out)\n return escaped_out\n\n @classmethod\n def handle(cls, parser, token):\n nodelist = parser.parse(('end' + EscapeScriptNode.TAG_NAME,))\n parser.delete_first_token()\n return cls(nodelist)\n\nregister.tag(EscapeScriptNode.TAG_NAME, EscapeScriptNode.handle)\n", "path": "wagtail/wagtailadmin/templatetags/wagtailadmin_tags.py"}]} |
gh_patches_debug_1103 | rasdani/github-patches | git_diff | scverse__scanpy-2893 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Inconsistent array types from sc.get.aggregate
### Please make sure these conditions are met
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the latest version of scanpy.
- [X] (optional) I have confirmed this bug exists on the main branch of scanpy.
### What happened?
cc: @Intron7
The array types returned for the various aggregations in `sc.get.aggregate` are different (see example)
This can lead to somewhat confusing behavior downstream, especially while we are using the sparse matrix classes.
I would suggest we default to a dense result and consider adding an argument `array_type` that determines the type of the arrays added to `layers`.
### Minimal code sample
```python
import scanpy as sc
adata = sc.datasets.pbmc3k_processed().raw.to_adata()
aggregated = sc.get.aggregate(adata, "louvain", ["sum", "count_nonzero"])
type(aggregated.layers["sum"])
# numpy.ndarray
type(aggregated.layers["count_nonzero"])
# scipy.sparse._csr.csr_matrix
```
### Error output
_No response_
### Versions
<details>
```
-----
anndata 0.10.5.post1
scanpy 1.10.0.dev315+gf6d5ac94
-----
IPython 8.20.0
PIL 10.2.0
asciitree NA
asttokens NA
cloudpickle 3.0.0
cycler 0.12.1
cython_runtime NA
dask 2024.1.1
dateutil 2.8.2
decorator 5.1.1
executing 2.0.1
fasteners 0.19
h5py 3.10.0
igraph 0.11.3
jedi 0.19.1
jinja2 3.1.3
joblib 1.3.2
kiwisolver 1.4.5
legacy_api_wrap NA
leidenalg 0.10.2
llvmlite 0.41.1
markupsafe 2.1.4
matplotlib 3.8.2
mpl_toolkits NA
msgpack 1.0.7
natsort 8.4.0
numba 0.58.1
numcodecs 0.12.1
numpy 1.26.3
packaging 23.2
pandas 2.2.0
parso 0.8.3
pexpect 4.9.0
prompt_toolkit 3.0.43
psutil 5.9.8
ptyprocess 0.7.0
pure_eval 0.2.2
pygments 2.17.2
pyparsing 3.1.1
pytz 2023.4
scipy 1.12.0
session_info 1.0.0
six 1.16.0
sklearn 1.4.0
sparse 0.15.1
stack_data 0.6.3
tblib 3.0.0
texttable 1.7.0
threadpoolctl 3.2.0
tlz 0.12.1
toolz 0.12.1
traitlets 5.14.1
wcwidth 0.2.13
yaml 6.0.1
zarr 2.16.1
zipp NA
-----
Python 3.11.7 | packaged by conda-forge | (main, Dec 23 2023, 14:43:09) [GCC 12.3.0]
Linux-5.15.0-87-generic-x86_64-with-glibc2.35
-----
Session information updated at 2024-03-04 13:41
```
</details>
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `scanpy/get/_aggregated.py`
Content:
```
1 from __future__ import annotations
2
3 from functools import singledispatch
4 from typing import TYPE_CHECKING, Literal, Union, get_args
5
6 import numpy as np
7 import pandas as pd
8 from anndata import AnnData, utils
9 from scipy import sparse
10
11 from .._utils import _resolve_axis
12 from .get import _check_mask
13
14 if TYPE_CHECKING:
15 from collections.abc import Collection, Iterable
16
17 from numpy.typing import NDArray
18
19 Array = Union[np.ndarray, sparse.csc_matrix, sparse.csr_matrix]
20 AggType = Literal["count_nonzero", "mean", "sum", "var"]
21
22
23 class Aggregate:
24 """\
25 Functionality for generic grouping and aggregating.
26
27 There is currently support for count_nonzero, sum, mean, and variance.
28
29 **Implementation**
30
31 Moments are computed using weighted sum aggregation of data by some feature
32 via multiplication by a sparse coordinate matrix A.
33
34 Runtime is effectively computation of the product `A @ X`, i.e. the count of (non-zero)
35 entries in X with multiplicity the number of group memberships for that entry.
36 This is `O(data)` for partitions (each observation belonging to exactly one group),
37 independent of the number of groups.
38
39 Params
40 ------
41 groupby
42 :class:`~pandas.Categorical` containing values for grouping by.
43 data
44 Data matrix for aggregation.
45 mask
46 Mask to be used for aggregation.
47 """
48
49 def __init__(
50 self,
51 groupby: pd.Categorical,
52 data: Array,
53 *,
54 mask: NDArray[np.bool_] | None = None,
55 ) -> None:
56 self.groupby = groupby
57 self.indicator_matrix = sparse_indicator(groupby, mask=mask)
58 self.data = data
59
60 groupby: pd.Categorical
61 indicator_matrix: sparse.coo_matrix
62 data: Array
63
64 def count_nonzero(self) -> NDArray[np.integer]:
65 """\
66 Count the number of observations in each group.
67
68 Returns
69 -------
70 Array of counts.
71 """
72 # pattern = self.data._with_data(np.broadcast_to(1, len(self.data.data)))
73 # return self.indicator_matrix @ pattern
74 return self.indicator_matrix @ (self.data != 0)
75
76 def sum(self) -> Array:
77 """\
78 Compute the sum per feature per group of observations.
79
80 Returns
81 -------
82 Array of sum.
83 """
84 return utils.asarray(self.indicator_matrix @ self.data)
85
86 def mean(self) -> Array:
87 """\
88 Compute the mean per feature per group of observations.
89
90 Returns
91 -------
92 Array of mean.
93 """
94 return (
95 utils.asarray(self.indicator_matrix @ self.data)
96 / np.bincount(self.groupby.codes)[:, None]
97 )
98
99 def mean_var(self, dof: int = 1) -> tuple[np.ndarray, np.ndarray]:
100 """\
101 Compute the count, as well as mean and variance per feature, per group of observations.
102
103 The formula `Var(X) = E(X^2) - E(X)^2` suffers loss of precision when the variance is a
104 very small fraction of the squared mean. In particular, when X is constant, the formula may
105 nonetheless be non-zero. By default, our implementation resets the variance to exactly zero
106 when the computed variance, relative to the squared mean, nears limit of precision of the
107 floating-point significand.
108
109 Params
110 ------
111 dof
112 Degrees of freedom for variance.
113
114 Returns
115 -------
116 Object with `count`, `mean`, and `var` attributes.
117 """
118 assert dof >= 0
119
120 group_counts = np.bincount(self.groupby.codes)
121 mean_ = self.mean()
122 # sparse matrices do not support ** for elementwise power.
123 mean_sq = (
124 utils.asarray(self.indicator_matrix @ _power(self.data, 2))
125 / group_counts[:, None]
126 )
127 sq_mean = mean_**2
128 var_ = mean_sq - sq_mean
129 # TODO: Why these values exactly? Because they are high relative to the datatype?
130 # (unchanged from original code: https://github.com/scverse/anndata/pull/564)
131 precision = 2 << (42 if self.data.dtype == np.float64 else 20)
132 # detects loss of precision in mean_sq - sq_mean, which suggests variance is 0
133 var_[precision * var_ < sq_mean] = 0
134 if dof != 0:
135 var_ *= (group_counts / (group_counts - dof))[:, np.newaxis]
136 return mean_, var_
137
138
139 def _power(X: Array, power: float | int) -> Array:
140 """\
141 Generate elementwise power of a matrix.
142
143 Needed for non-square sparse matrices because they do not support `**` so the `.power` function is used.
144
145 Params
146 ------
147 X
148 Matrix whose power is to be raised.
149 power
150 Integer power value
151
152 Returns
153 -------
154 Matrix whose power has been raised.
155 """
156 return X**power if isinstance(X, np.ndarray) else X.power(power)
157
158
159 @singledispatch
160 def aggregate(
161 adata: AnnData,
162 by: str | Collection[str],
163 func: AggType | Iterable[AggType],
164 *,
165 axis: Literal["obs", 0, "var", 1] | None = None,
166 mask: NDArray[np.bool_] | str | None = None,
167 dof: int = 1,
168 layer: str | None = None,
169 obsm: str | None = None,
170 varm: str | None = None,
171 ) -> AnnData:
172 """\
173 Aggregate data matrix based on some categorical grouping.
174
175 This function is useful for pseudobulking as well as plotting.
176
177 Aggregation to perform is specified by `func`, which can be a single metric or a
178 list of metrics. Each metric is computed over the group and results in a new layer
179 in the output `AnnData` object.
180
181 If none of `layer`, `obsm`, or `varm` are passed in, `X` will be used for aggregation data.
182 If `func` only has length 1 or is just an `AggType`, then aggregation data is written to `X`.
183 Otherwise, it is written to `layers` or `xxxm` as appropriate for the dimensions of the aggregation data.
184
185 Params
186 ------
187 adata
188 :class:`~anndata.AnnData` to be aggregated.
189 by
190 Key of the column to be grouped-by.
191 func
192 How to aggregate.
193 axis
194 Axis on which to find group by column.
195 mask
196 Boolean mask (or key to column containing mask) to apply along the axis.
197 dof
198 Degrees of freedom for variance. Defaults to 1.
199 layer
200 If not None, key for aggregation data.
201 obsm
202 If not None, key for aggregation data.
203 varm
204 If not None, key for aggregation data.
205
206 Returns
207 -------
208 Aggregated :class:`~anndata.AnnData`.
209
210 Examples
211 --------
212
213 Calculating mean expression and number of nonzero entries per cluster:
214
215 >>> import scanpy as sc, pandas as pd
216 >>> pbmc = sc.datasets.pbmc3k_processed().raw.to_adata()
217 >>> pbmc.shape
218 (2638, 13714)
219 >>> aggregated = sc.get.aggregate(pbmc, by="louvain", func=["mean", "count_nonzero"])
220 >>> aggregated
221 AnnData object with n_obs × n_vars = 8 × 13714
222 obs: 'louvain'
223 var: 'n_cells'
224 layers: 'mean', 'count_nonzero'
225
226 We can group over multiple columns:
227
228 >>> pbmc.obs["percent_mito_binned"] = pd.cut(pbmc.obs["percent_mito"], bins=5)
229 >>> sc.get.aggregate(pbmc, by=["louvain", "percent_mito_binned"], func=["mean", "count_nonzero"])
230 AnnData object with n_obs × n_vars = 40 × 13714
231 obs: 'louvain', 'percent_mito_binned'
232 var: 'n_cells'
233 layers: 'mean', 'count_nonzero'
234
235 Note that this filters out any combination of groups that wasn't present in the original data.
236 """
237 if axis is None:
238 axis = 1 if varm else 0
239 axis, axis_name = _resolve_axis(axis)
240 if mask is not None:
241 mask = _check_mask(adata, mask, axis_name)
242 data = adata.X
243 if sum(p is not None for p in [varm, obsm, layer]) > 1:
244 raise TypeError("Please only provide one (or none) of varm, obsm, or layer")
245
246 if varm is not None:
247 if axis != 1:
248 raise ValueError("varm can only be used when axis is 1")
249 data = adata.varm[varm]
250 elif obsm is not None:
251 if axis != 0:
252 raise ValueError("obsm can only be used when axis is 0")
253 data = adata.obsm[obsm]
254 elif layer is not None:
255 data = adata.layers[layer]
256 if axis == 1:
257 data = data.T
258 elif axis == 1:
259 # i.e., all of `varm`, `obsm`, `layers` are None so we use `X` which must be transposed
260 data = data.T
261
262 dim_df = getattr(adata, axis_name)
263 categorical, new_label_df = _combine_categories(dim_df, by)
264 # Actual computation
265 layers = aggregate(
266 data,
267 by=categorical,
268 func=func,
269 mask=mask,
270 dof=dof,
271 )
272 result = AnnData(
273 layers=layers,
274 obs=new_label_df,
275 var=getattr(adata, "var" if axis == 0 else "obs"),
276 )
277
278 if axis == 1:
279 return result.T
280 else:
281 return result
282
283
284 @aggregate.register(np.ndarray)
285 @aggregate.register(sparse.spmatrix)
286 def aggregate_array(
287 data,
288 by: pd.Categorical,
289 func: AggType | Iterable[AggType],
290 *,
291 mask: NDArray[np.bool_] | None = None,
292 dof: int = 1,
293 ) -> dict[AggType, np.ndarray]:
294 groupby = Aggregate(groupby=by, data=data, mask=mask)
295 result = {}
296
297 funcs = set([func] if isinstance(func, str) else func)
298 if unknown := funcs - set(get_args(AggType)):
299 raise ValueError(f"func {unknown} is not one of {get_args(AggType)}")
300
301 if "sum" in funcs: # sum is calculated separately from the rest
302 agg = groupby.sum()
303 result["sum"] = agg
304 # here and below for count, if var is present, these can be calculate alongside var
305 if "mean" in funcs and "var" not in funcs:
306 agg = groupby.mean()
307 result["mean"] = agg
308 if "count_nonzero" in funcs:
309 result["count_nonzero"] = groupby.count_nonzero()
310 if "var" in funcs:
311 mean_, var_ = groupby.mean_var(dof)
312 result["var"] = var_
313 if "mean" in funcs:
314 result["mean"] = mean_
315
316 return result
317
318
319 def _combine_categories(
320 label_df: pd.DataFrame, cols: Collection[str] | str
321 ) -> tuple[pd.Categorical, pd.DataFrame]:
322 """
323 Returns both the result categories and a dataframe labelling each row
324 """
325 from itertools import product
326
327 if isinstance(cols, str):
328 cols = [cols]
329
330 df = pd.DataFrame(
331 {c: pd.Categorical(label_df[c]).remove_unused_categories() for c in cols},
332 )
333 n_categories = [len(df[c].cat.categories) for c in cols]
334
335 # It's like np.concatenate([x for x in product(*[range(n) for n in n_categories])])
336 code_combinations = np.indices(n_categories).reshape(len(n_categories), -1)
337 result_categories = pd.Index(
338 ["_".join(map(str, x)) for x in product(*[df[c].cat.categories for c in cols])]
339 )
340
341 # Dataframe with unique combination of categories for each row
342 new_label_df = pd.DataFrame(
343 {
344 c: pd.Categorical.from_codes(code_combinations[i], df[c].cat.categories)
345 for i, c in enumerate(cols)
346 },
347 index=result_categories,
348 )
349
350 # Calculating result codes
351 factors = np.ones(len(cols) + 1, dtype=np.int32) # First factor needs to be 1
352 np.cumsum(n_categories[::-1], out=factors[1:])
353 factors = factors[:-1][::-1]
354
355 code_array = np.zeros((len(cols), df.shape[0]), dtype=np.int32)
356 for i, c in enumerate(cols):
357 code_array[i] = df[c].cat.codes
358 code_array *= factors[:, None]
359
360 result_categorical = pd.Categorical.from_codes(
361 code_array.sum(axis=0), categories=result_categories
362 )
363
364 # Filter unused categories
365 result_categorical = result_categorical.remove_unused_categories()
366 new_label_df = new_label_df.loc[result_categorical.categories]
367
368 return result_categorical, new_label_df
369
370
371 def sparse_indicator(
372 categorical: pd.Categorical,
373 *,
374 mask: NDArray[np.bool_] | None = None,
375 weight: NDArray[np.floating] | None = None,
376 ) -> sparse.coo_matrix:
377 if mask is not None and weight is None:
378 weight = mask.astype(np.float32)
379 elif mask is not None and weight is not None:
380 weight = mask * weight
381 elif mask is None and weight is None:
382 weight = np.broadcast_to(1.0, len(categorical))
383 A = sparse.coo_matrix(
384 (weight, (categorical.codes, np.arange(len(categorical)))),
385 shape=(len(categorical.categories), len(categorical)),
386 )
387 return A
388
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/scanpy/get/_aggregated.py b/scanpy/get/_aggregated.py
--- a/scanpy/get/_aggregated.py
+++ b/scanpy/get/_aggregated.py
@@ -71,7 +71,7 @@
"""
# pattern = self.data._with_data(np.broadcast_to(1, len(self.data.data)))
# return self.indicator_matrix @ pattern
- return self.indicator_matrix @ (self.data != 0)
+ return utils.asarray(self.indicator_matrix @ (self.data != 0))
def sum(self) -> Array:
"""\
| {"golden_diff": "diff --git a/scanpy/get/_aggregated.py b/scanpy/get/_aggregated.py\n--- a/scanpy/get/_aggregated.py\n+++ b/scanpy/get/_aggregated.py\n@@ -71,7 +71,7 @@\n \"\"\"\n # pattern = self.data._with_data(np.broadcast_to(1, len(self.data.data)))\n # return self.indicator_matrix @ pattern\n- return self.indicator_matrix @ (self.data != 0)\n+ return utils.asarray(self.indicator_matrix @ (self.data != 0))\n \n def sum(self) -> Array:\n \"\"\"\\\n", "issue": "Inconsistent array types from sc.get.aggregate\n### Please make sure these conditions are met\n\n- [X] I have checked that this issue has not already been reported.\n- [X] I have confirmed this bug exists on the latest version of scanpy.\n- [X] (optional) I have confirmed this bug exists on the main branch of scanpy.\n\n### What happened?\n\ncc: @Intron7 \r\n\r\nThe array types returned for the various aggregations in `sc.get.aggregate` are different (see example)\r\n\r\nThis can lead to somewhat confusing behavior downstream, especially while we are using the sparse matrix classes.\r\n\r\nI would suggest we default to a dense result and consider adding an argument `array_type` that determines the type of the arrays added to `layers`.\n\n### Minimal code sample\n\n```python\nimport scanpy as sc\r\n\r\nadata = sc.datasets.pbmc3k_processed().raw.to_adata()\r\n\r\naggregated = sc.get.aggregate(adata, \"louvain\", [\"sum\", \"count_nonzero\"])\r\ntype(aggregated.layers[\"sum\"])\r\n# numpy.ndarray\r\n\r\ntype(aggregated.layers[\"count_nonzero\"])\r\n# scipy.sparse._csr.csr_matrix\n```\n\n\n### Error output\n\n_No response_\n\n### Versions\n\n<details>\r\n\r\n```\r\n-----\r\nanndata 0.10.5.post1\r\nscanpy 1.10.0.dev315+gf6d5ac94\r\n-----\r\nIPython 8.20.0\r\nPIL 10.2.0\r\nasciitree NA\r\nasttokens NA\r\ncloudpickle 3.0.0\r\ncycler 0.12.1\r\ncython_runtime NA\r\ndask 2024.1.1\r\ndateutil 2.8.2\r\ndecorator 5.1.1\r\nexecuting 2.0.1\r\nfasteners 0.19\r\nh5py 3.10.0\r\nigraph 0.11.3\r\njedi 0.19.1\r\njinja2 3.1.3\r\njoblib 1.3.2\r\nkiwisolver 1.4.5\r\nlegacy_api_wrap NA\r\nleidenalg 0.10.2\r\nllvmlite 0.41.1\r\nmarkupsafe 2.1.4\r\nmatplotlib 3.8.2\r\nmpl_toolkits NA\r\nmsgpack 1.0.7\r\nnatsort 8.4.0\r\nnumba 0.58.1\r\nnumcodecs 0.12.1\r\nnumpy 1.26.3\r\npackaging 23.2\r\npandas 2.2.0\r\nparso 0.8.3\r\npexpect 4.9.0\r\nprompt_toolkit 3.0.43\r\npsutil 5.9.8\r\nptyprocess 0.7.0\r\npure_eval 0.2.2\r\npygments 2.17.2\r\npyparsing 3.1.1\r\npytz 2023.4\r\nscipy 1.12.0\r\nsession_info 1.0.0\r\nsix 1.16.0\r\nsklearn 1.4.0\r\nsparse 0.15.1\r\nstack_data 0.6.3\r\ntblib 3.0.0\r\ntexttable 1.7.0\r\nthreadpoolctl 3.2.0\r\ntlz 0.12.1\r\ntoolz 0.12.1\r\ntraitlets 5.14.1\r\nwcwidth 0.2.13\r\nyaml 6.0.1\r\nzarr 2.16.1\r\nzipp NA\r\n-----\r\nPython 3.11.7 | packaged by conda-forge | (main, Dec 23 2023, 14:43:09) [GCC 12.3.0]\r\nLinux-5.15.0-87-generic-x86_64-with-glibc2.35\r\n-----\r\nSession information updated at 2024-03-04 13:41\r\n```\r\n\r\n</details>\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom functools import singledispatch\nfrom typing import TYPE_CHECKING, Literal, Union, get_args\n\nimport numpy as np\nimport pandas as pd\nfrom anndata import AnnData, utils\nfrom scipy import sparse\n\nfrom .._utils import _resolve_axis\nfrom .get import _check_mask\n\nif TYPE_CHECKING:\n from collections.abc import Collection, Iterable\n\n from numpy.typing import NDArray\n\nArray = Union[np.ndarray, sparse.csc_matrix, sparse.csr_matrix]\nAggType = Literal[\"count_nonzero\", \"mean\", \"sum\", \"var\"]\n\n\nclass Aggregate:\n 
\"\"\"\\\n Functionality for generic grouping and aggregating.\n\n There is currently support for count_nonzero, sum, mean, and variance.\n\n **Implementation**\n\n Moments are computed using weighted sum aggregation of data by some feature\n via multiplication by a sparse coordinate matrix A.\n\n Runtime is effectively computation of the product `A @ X`, i.e. the count of (non-zero)\n entries in X with multiplicity the number of group memberships for that entry.\n This is `O(data)` for partitions (each observation belonging to exactly one group),\n independent of the number of groups.\n\n Params\n ------\n groupby\n :class:`~pandas.Categorical` containing values for grouping by.\n data\n Data matrix for aggregation.\n mask\n Mask to be used for aggregation.\n \"\"\"\n\n def __init__(\n self,\n groupby: pd.Categorical,\n data: Array,\n *,\n mask: NDArray[np.bool_] | None = None,\n ) -> None:\n self.groupby = groupby\n self.indicator_matrix = sparse_indicator(groupby, mask=mask)\n self.data = data\n\n groupby: pd.Categorical\n indicator_matrix: sparse.coo_matrix\n data: Array\n\n def count_nonzero(self) -> NDArray[np.integer]:\n \"\"\"\\\n Count the number of observations in each group.\n\n Returns\n -------\n Array of counts.\n \"\"\"\n # pattern = self.data._with_data(np.broadcast_to(1, len(self.data.data)))\n # return self.indicator_matrix @ pattern\n return self.indicator_matrix @ (self.data != 0)\n\n def sum(self) -> Array:\n \"\"\"\\\n Compute the sum per feature per group of observations.\n\n Returns\n -------\n Array of sum.\n \"\"\"\n return utils.asarray(self.indicator_matrix @ self.data)\n\n def mean(self) -> Array:\n \"\"\"\\\n Compute the mean per feature per group of observations.\n\n Returns\n -------\n Array of mean.\n \"\"\"\n return (\n utils.asarray(self.indicator_matrix @ self.data)\n / np.bincount(self.groupby.codes)[:, None]\n )\n\n def mean_var(self, dof: int = 1) -> tuple[np.ndarray, np.ndarray]:\n \"\"\"\\\n Compute the count, as well as mean and variance per feature, per group of observations.\n\n The formula `Var(X) = E(X^2) - E(X)^2` suffers loss of precision when the variance is a\n very small fraction of the squared mean. In particular, when X is constant, the formula may\n nonetheless be non-zero. By default, our implementation resets the variance to exactly zero\n when the computed variance, relative to the squared mean, nears limit of precision of the\n floating-point significand.\n\n Params\n ------\n dof\n Degrees of freedom for variance.\n\n Returns\n -------\n Object with `count`, `mean`, and `var` attributes.\n \"\"\"\n assert dof >= 0\n\n group_counts = np.bincount(self.groupby.codes)\n mean_ = self.mean()\n # sparse matrices do not support ** for elementwise power.\n mean_sq = (\n utils.asarray(self.indicator_matrix @ _power(self.data, 2))\n / group_counts[:, None]\n )\n sq_mean = mean_**2\n var_ = mean_sq - sq_mean\n # TODO: Why these values exactly? 
Because they are high relative to the datatype?\n # (unchanged from original code: https://github.com/scverse/anndata/pull/564)\n precision = 2 << (42 if self.data.dtype == np.float64 else 20)\n # detects loss of precision in mean_sq - sq_mean, which suggests variance is 0\n var_[precision * var_ < sq_mean] = 0\n if dof != 0:\n var_ *= (group_counts / (group_counts - dof))[:, np.newaxis]\n return mean_, var_\n\n\ndef _power(X: Array, power: float | int) -> Array:\n \"\"\"\\\n Generate elementwise power of a matrix.\n\n Needed for non-square sparse matrices because they do not support `**` so the `.power` function is used.\n\n Params\n ------\n X\n Matrix whose power is to be raised.\n power\n Integer power value\n\n Returns\n -------\n Matrix whose power has been raised.\n \"\"\"\n return X**power if isinstance(X, np.ndarray) else X.power(power)\n\n\n@singledispatch\ndef aggregate(\n adata: AnnData,\n by: str | Collection[str],\n func: AggType | Iterable[AggType],\n *,\n axis: Literal[\"obs\", 0, \"var\", 1] | None = None,\n mask: NDArray[np.bool_] | str | None = None,\n dof: int = 1,\n layer: str | None = None,\n obsm: str | None = None,\n varm: str | None = None,\n) -> AnnData:\n \"\"\"\\\n Aggregate data matrix based on some categorical grouping.\n\n This function is useful for pseudobulking as well as plotting.\n\n Aggregation to perform is specified by `func`, which can be a single metric or a\n list of metrics. Each metric is computed over the group and results in a new layer\n in the output `AnnData` object.\n\n If none of `layer`, `obsm`, or `varm` are passed in, `X` will be used for aggregation data.\n If `func` only has length 1 or is just an `AggType`, then aggregation data is written to `X`.\n Otherwise, it is written to `layers` or `xxxm` as appropriate for the dimensions of the aggregation data.\n\n Params\n ------\n adata\n :class:`~anndata.AnnData` to be aggregated.\n by\n Key of the column to be grouped-by.\n func\n How to aggregate.\n axis\n Axis on which to find group by column.\n mask\n Boolean mask (or key to column containing mask) to apply along the axis.\n dof\n Degrees of freedom for variance. 
Defaults to 1.\n layer\n If not None, key for aggregation data.\n obsm\n If not None, key for aggregation data.\n varm\n If not None, key for aggregation data.\n\n Returns\n -------\n Aggregated :class:`~anndata.AnnData`.\n\n Examples\n --------\n\n Calculating mean expression and number of nonzero entries per cluster:\n\n >>> import scanpy as sc, pandas as pd\n >>> pbmc = sc.datasets.pbmc3k_processed().raw.to_adata()\n >>> pbmc.shape\n (2638, 13714)\n >>> aggregated = sc.get.aggregate(pbmc, by=\"louvain\", func=[\"mean\", \"count_nonzero\"])\n >>> aggregated\n AnnData object with n_obs \u00d7 n_vars = 8 \u00d7 13714\n obs: 'louvain'\n var: 'n_cells'\n layers: 'mean', 'count_nonzero'\n\n We can group over multiple columns:\n\n >>> pbmc.obs[\"percent_mito_binned\"] = pd.cut(pbmc.obs[\"percent_mito\"], bins=5)\n >>> sc.get.aggregate(pbmc, by=[\"louvain\", \"percent_mito_binned\"], func=[\"mean\", \"count_nonzero\"])\n AnnData object with n_obs \u00d7 n_vars = 40 \u00d7 13714\n obs: 'louvain', 'percent_mito_binned'\n var: 'n_cells'\n layers: 'mean', 'count_nonzero'\n\n Note that this filters out any combination of groups that wasn't present in the original data.\n \"\"\"\n if axis is None:\n axis = 1 if varm else 0\n axis, axis_name = _resolve_axis(axis)\n if mask is not None:\n mask = _check_mask(adata, mask, axis_name)\n data = adata.X\n if sum(p is not None for p in [varm, obsm, layer]) > 1:\n raise TypeError(\"Please only provide one (or none) of varm, obsm, or layer\")\n\n if varm is not None:\n if axis != 1:\n raise ValueError(\"varm can only be used when axis is 1\")\n data = adata.varm[varm]\n elif obsm is not None:\n if axis != 0:\n raise ValueError(\"obsm can only be used when axis is 0\")\n data = adata.obsm[obsm]\n elif layer is not None:\n data = adata.layers[layer]\n if axis == 1:\n data = data.T\n elif axis == 1:\n # i.e., all of `varm`, `obsm`, `layers` are None so we use `X` which must be transposed\n data = data.T\n\n dim_df = getattr(adata, axis_name)\n categorical, new_label_df = _combine_categories(dim_df, by)\n # Actual computation\n layers = aggregate(\n data,\n by=categorical,\n func=func,\n mask=mask,\n dof=dof,\n )\n result = AnnData(\n layers=layers,\n obs=new_label_df,\n var=getattr(adata, \"var\" if axis == 0 else \"obs\"),\n )\n\n if axis == 1:\n return result.T\n else:\n return result\n\n\[email protected](np.ndarray)\[email protected](sparse.spmatrix)\ndef aggregate_array(\n data,\n by: pd.Categorical,\n func: AggType | Iterable[AggType],\n *,\n mask: NDArray[np.bool_] | None = None,\n dof: int = 1,\n) -> dict[AggType, np.ndarray]:\n groupby = Aggregate(groupby=by, data=data, mask=mask)\n result = {}\n\n funcs = set([func] if isinstance(func, str) else func)\n if unknown := funcs - set(get_args(AggType)):\n raise ValueError(f\"func {unknown} is not one of {get_args(AggType)}\")\n\n if \"sum\" in funcs: # sum is calculated separately from the rest\n agg = groupby.sum()\n result[\"sum\"] = agg\n # here and below for count, if var is present, these can be calculate alongside var\n if \"mean\" in funcs and \"var\" not in funcs:\n agg = groupby.mean()\n result[\"mean\"] = agg\n if \"count_nonzero\" in funcs:\n result[\"count_nonzero\"] = groupby.count_nonzero()\n if \"var\" in funcs:\n mean_, var_ = groupby.mean_var(dof)\n result[\"var\"] = var_\n if \"mean\" in funcs:\n result[\"mean\"] = mean_\n\n return result\n\n\ndef _combine_categories(\n label_df: pd.DataFrame, cols: Collection[str] | str\n) -> tuple[pd.Categorical, pd.DataFrame]:\n \"\"\"\n Returns both the 
result categories and a dataframe labelling each row\n \"\"\"\n from itertools import product\n\n if isinstance(cols, str):\n cols = [cols]\n\n df = pd.DataFrame(\n {c: pd.Categorical(label_df[c]).remove_unused_categories() for c in cols},\n )\n n_categories = [len(df[c].cat.categories) for c in cols]\n\n # It's like np.concatenate([x for x in product(*[range(n) for n in n_categories])])\n code_combinations = np.indices(n_categories).reshape(len(n_categories), -1)\n result_categories = pd.Index(\n [\"_\".join(map(str, x)) for x in product(*[df[c].cat.categories for c in cols])]\n )\n\n # Dataframe with unique combination of categories for each row\n new_label_df = pd.DataFrame(\n {\n c: pd.Categorical.from_codes(code_combinations[i], df[c].cat.categories)\n for i, c in enumerate(cols)\n },\n index=result_categories,\n )\n\n # Calculating result codes\n factors = np.ones(len(cols) + 1, dtype=np.int32) # First factor needs to be 1\n np.cumsum(n_categories[::-1], out=factors[1:])\n factors = factors[:-1][::-1]\n\n code_array = np.zeros((len(cols), df.shape[0]), dtype=np.int32)\n for i, c in enumerate(cols):\n code_array[i] = df[c].cat.codes\n code_array *= factors[:, None]\n\n result_categorical = pd.Categorical.from_codes(\n code_array.sum(axis=0), categories=result_categories\n )\n\n # Filter unused categories\n result_categorical = result_categorical.remove_unused_categories()\n new_label_df = new_label_df.loc[result_categorical.categories]\n\n return result_categorical, new_label_df\n\n\ndef sparse_indicator(\n categorical: pd.Categorical,\n *,\n mask: NDArray[np.bool_] | None = None,\n weight: NDArray[np.floating] | None = None,\n) -> sparse.coo_matrix:\n if mask is not None and weight is None:\n weight = mask.astype(np.float32)\n elif mask is not None and weight is not None:\n weight = mask * weight\n elif mask is None and weight is None:\n weight = np.broadcast_to(1.0, len(categorical))\n A = sparse.coo_matrix(\n (weight, (categorical.codes, np.arange(len(categorical)))),\n shape=(len(categorical.categories), len(categorical)),\n )\n return A\n", "path": "scanpy/get/_aggregated.py"}], "after_files": [{"content": "from __future__ import annotations\n\nfrom functools import singledispatch\nfrom typing import TYPE_CHECKING, Literal, Union, get_args\n\nimport numpy as np\nimport pandas as pd\nfrom anndata import AnnData, utils\nfrom scipy import sparse\n\nfrom .._utils import _resolve_axis\nfrom .get import _check_mask\n\nif TYPE_CHECKING:\n from collections.abc import Collection, Iterable\n\n from numpy.typing import NDArray\n\nArray = Union[np.ndarray, sparse.csc_matrix, sparse.csr_matrix]\nAggType = Literal[\"count_nonzero\", \"mean\", \"sum\", \"var\"]\n\n\nclass Aggregate:\n \"\"\"\\\n Functionality for generic grouping and aggregating.\n\n There is currently support for count_nonzero, sum, mean, and variance.\n\n **Implementation**\n\n Moments are computed using weighted sum aggregation of data by some feature\n via multiplication by a sparse coordinate matrix A.\n\n Runtime is effectively computation of the product `A @ X`, i.e. 
the count of (non-zero)\n entries in X with multiplicity the number of group memberships for that entry.\n This is `O(data)` for partitions (each observation belonging to exactly one group),\n independent of the number of groups.\n\n Params\n ------\n groupby\n :class:`~pandas.Categorical` containing values for grouping by.\n data\n Data matrix for aggregation.\n mask\n Mask to be used for aggregation.\n \"\"\"\n\n def __init__(\n self,\n groupby: pd.Categorical,\n data: Array,\n *,\n mask: NDArray[np.bool_] | None = None,\n ) -> None:\n self.groupby = groupby\n self.indicator_matrix = sparse_indicator(groupby, mask=mask)\n self.data = data\n\n groupby: pd.Categorical\n indicator_matrix: sparse.coo_matrix\n data: Array\n\n def count_nonzero(self) -> NDArray[np.integer]:\n \"\"\"\\\n Count the number of observations in each group.\n\n Returns\n -------\n Array of counts.\n \"\"\"\n # pattern = self.data._with_data(np.broadcast_to(1, len(self.data.data)))\n # return self.indicator_matrix @ pattern\n return utils.asarray(self.indicator_matrix @ (self.data != 0))\n\n def sum(self) -> Array:\n \"\"\"\\\n Compute the sum per feature per group of observations.\n\n Returns\n -------\n Array of sum.\n \"\"\"\n return utils.asarray(self.indicator_matrix @ self.data)\n\n def mean(self) -> Array:\n \"\"\"\\\n Compute the mean per feature per group of observations.\n\n Returns\n -------\n Array of mean.\n \"\"\"\n return (\n utils.asarray(self.indicator_matrix @ self.data)\n / np.bincount(self.groupby.codes)[:, None]\n )\n\n def mean_var(self, dof: int = 1) -> tuple[np.ndarray, np.ndarray]:\n \"\"\"\\\n Compute the count, as well as mean and variance per feature, per group of observations.\n\n The formula `Var(X) = E(X^2) - E(X)^2` suffers loss of precision when the variance is a\n very small fraction of the squared mean. In particular, when X is constant, the formula may\n nonetheless be non-zero. By default, our implementation resets the variance to exactly zero\n when the computed variance, relative to the squared mean, nears limit of precision of the\n floating-point significand.\n\n Params\n ------\n dof\n Degrees of freedom for variance.\n\n Returns\n -------\n Object with `count`, `mean`, and `var` attributes.\n \"\"\"\n assert dof >= 0\n\n group_counts = np.bincount(self.groupby.codes)\n mean_ = self.mean()\n # sparse matrices do not support ** for elementwise power.\n mean_sq = (\n utils.asarray(self.indicator_matrix @ _power(self.data, 2))\n / group_counts[:, None]\n )\n sq_mean = mean_**2\n var_ = mean_sq - sq_mean\n # TODO: Why these values exactly? 
Because they are high relative to the datatype?\n # (unchanged from original code: https://github.com/scverse/anndata/pull/564)\n precision = 2 << (42 if self.data.dtype == np.float64 else 20)\n # detects loss of precision in mean_sq - sq_mean, which suggests variance is 0\n var_[precision * var_ < sq_mean] = 0\n if dof != 0:\n var_ *= (group_counts / (group_counts - dof))[:, np.newaxis]\n return mean_, var_\n\n\ndef _power(X: Array, power: float | int) -> Array:\n \"\"\"\\\n Generate elementwise power of a matrix.\n\n Needed for non-square sparse matrices because they do not support `**` so the `.power` function is used.\n\n Params\n ------\n X\n Matrix whose power is to be raised.\n power\n Integer power value\n\n Returns\n -------\n Matrix whose power has been raised.\n \"\"\"\n return X**power if isinstance(X, np.ndarray) else X.power(power)\n\n\n@singledispatch\ndef aggregate(\n adata: AnnData,\n by: str | Collection[str],\n func: AggType | Iterable[AggType],\n *,\n axis: Literal[\"obs\", 0, \"var\", 1] | None = None,\n mask: NDArray[np.bool_] | str | None = None,\n dof: int = 1,\n layer: str | None = None,\n obsm: str | None = None,\n varm: str | None = None,\n) -> AnnData:\n \"\"\"\\\n Aggregate data matrix based on some categorical grouping.\n\n This function is useful for pseudobulking as well as plotting.\n\n Aggregation to perform is specified by `func`, which can be a single metric or a\n list of metrics. Each metric is computed over the group and results in a new layer\n in the output `AnnData` object.\n\n If none of `layer`, `obsm`, or `varm` are passed in, `X` will be used for aggregation data.\n If `func` only has length 1 or is just an `AggType`, then aggregation data is written to `X`.\n Otherwise, it is written to `layers` or `xxxm` as appropriate for the dimensions of the aggregation data.\n\n Params\n ------\n adata\n :class:`~anndata.AnnData` to be aggregated.\n by\n Key of the column to be grouped-by.\n func\n How to aggregate.\n axis\n Axis on which to find group by column.\n mask\n Boolean mask (or key to column containing mask) to apply along the axis.\n dof\n Degrees of freedom for variance. 
Defaults to 1.\n layer\n If not None, key for aggregation data.\n obsm\n If not None, key for aggregation data.\n varm\n If not None, key for aggregation data.\n\n Returns\n -------\n Aggregated :class:`~anndata.AnnData`.\n\n Examples\n --------\n\n Calculating mean expression and number of nonzero entries per cluster:\n\n >>> import scanpy as sc, pandas as pd\n >>> pbmc = sc.datasets.pbmc3k_processed().raw.to_adata()\n >>> pbmc.shape\n (2638, 13714)\n >>> aggregated = sc.get.aggregate(pbmc, by=\"louvain\", func=[\"mean\", \"count_nonzero\"])\n >>> aggregated\n AnnData object with n_obs \u00d7 n_vars = 8 \u00d7 13714\n obs: 'louvain'\n var: 'n_cells'\n layers: 'mean', 'count_nonzero'\n\n We can group over multiple columns:\n\n >>> pbmc.obs[\"percent_mito_binned\"] = pd.cut(pbmc.obs[\"percent_mito\"], bins=5)\n >>> sc.get.aggregate(pbmc, by=[\"louvain\", \"percent_mito_binned\"], func=[\"mean\", \"count_nonzero\"])\n AnnData object with n_obs \u00d7 n_vars = 40 \u00d7 13714\n obs: 'louvain', 'percent_mito_binned'\n var: 'n_cells'\n layers: 'mean', 'count_nonzero'\n\n Note that this filters out any combination of groups that wasn't present in the original data.\n \"\"\"\n if axis is None:\n axis = 1 if varm else 0\n axis, axis_name = _resolve_axis(axis)\n if mask is not None:\n mask = _check_mask(adata, mask, axis_name)\n data = adata.X\n if sum(p is not None for p in [varm, obsm, layer]) > 1:\n raise TypeError(\"Please only provide one (or none) of varm, obsm, or layer\")\n\n if varm is not None:\n if axis != 1:\n raise ValueError(\"varm can only be used when axis is 1\")\n data = adata.varm[varm]\n elif obsm is not None:\n if axis != 0:\n raise ValueError(\"obsm can only be used when axis is 0\")\n data = adata.obsm[obsm]\n elif layer is not None:\n data = adata.layers[layer]\n if axis == 1:\n data = data.T\n elif axis == 1:\n # i.e., all of `varm`, `obsm`, `layers` are None so we use `X` which must be transposed\n data = data.T\n\n dim_df = getattr(adata, axis_name)\n categorical, new_label_df = _combine_categories(dim_df, by)\n # Actual computation\n layers = aggregate(\n data,\n by=categorical,\n func=func,\n mask=mask,\n dof=dof,\n )\n result = AnnData(\n layers=layers,\n obs=new_label_df,\n var=getattr(adata, \"var\" if axis == 0 else \"obs\"),\n )\n\n if axis == 1:\n return result.T\n else:\n return result\n\n\[email protected](np.ndarray)\[email protected](sparse.spmatrix)\ndef aggregate_array(\n data,\n by: pd.Categorical,\n func: AggType | Iterable[AggType],\n *,\n mask: NDArray[np.bool_] | None = None,\n dof: int = 1,\n) -> dict[AggType, np.ndarray]:\n groupby = Aggregate(groupby=by, data=data, mask=mask)\n result = {}\n\n funcs = set([func] if isinstance(func, str) else func)\n if unknown := funcs - set(get_args(AggType)):\n raise ValueError(f\"func {unknown} is not one of {get_args(AggType)}\")\n\n if \"sum\" in funcs: # sum is calculated separately from the rest\n agg = groupby.sum()\n result[\"sum\"] = agg\n # here and below for count, if var is present, these can be calculate alongside var\n if \"mean\" in funcs and \"var\" not in funcs:\n agg = groupby.mean()\n result[\"mean\"] = agg\n if \"count_nonzero\" in funcs:\n result[\"count_nonzero\"] = groupby.count_nonzero()\n if \"var\" in funcs:\n mean_, var_ = groupby.mean_var(dof)\n result[\"var\"] = var_\n if \"mean\" in funcs:\n result[\"mean\"] = mean_\n\n return result\n\n\ndef _combine_categories(\n label_df: pd.DataFrame, cols: Collection[str] | str\n) -> tuple[pd.Categorical, pd.DataFrame]:\n \"\"\"\n Returns both the 
result categories and a dataframe labelling each row\n \"\"\"\n from itertools import product\n\n if isinstance(cols, str):\n cols = [cols]\n\n df = pd.DataFrame(\n {c: pd.Categorical(label_df[c]).remove_unused_categories() for c in cols},\n )\n n_categories = [len(df[c].cat.categories) for c in cols]\n\n # It's like np.concatenate([x for x in product(*[range(n) for n in n_categories])])\n code_combinations = np.indices(n_categories).reshape(len(n_categories), -1)\n result_categories = pd.Index(\n [\"_\".join(map(str, x)) for x in product(*[df[c].cat.categories for c in cols])]\n )\n\n # Dataframe with unique combination of categories for each row\n new_label_df = pd.DataFrame(\n {\n c: pd.Categorical.from_codes(code_combinations[i], df[c].cat.categories)\n for i, c in enumerate(cols)\n },\n index=result_categories,\n )\n\n # Calculating result codes\n factors = np.ones(len(cols) + 1, dtype=np.int32) # First factor needs to be 1\n np.cumsum(n_categories[::-1], out=factors[1:])\n factors = factors[:-1][::-1]\n\n code_array = np.zeros((len(cols), df.shape[0]), dtype=np.int32)\n for i, c in enumerate(cols):\n code_array[i] = df[c].cat.codes\n code_array *= factors[:, None]\n\n result_categorical = pd.Categorical.from_codes(\n code_array.sum(axis=0), categories=result_categories\n )\n\n # Filter unused categories\n result_categorical = result_categorical.remove_unused_categories()\n new_label_df = new_label_df.loc[result_categorical.categories]\n\n return result_categorical, new_label_df\n\n\ndef sparse_indicator(\n categorical: pd.Categorical,\n *,\n mask: NDArray[np.bool_] | None = None,\n weight: NDArray[np.floating] | None = None,\n) -> sparse.coo_matrix:\n if mask is not None and weight is None:\n weight = mask.astype(np.float32)\n elif mask is not None and weight is not None:\n weight = mask * weight\n elif mask is None and weight is None:\n weight = np.broadcast_to(1.0, len(categorical))\n A = sparse.coo_matrix(\n (weight, (categorical.codes, np.arange(len(categorical)))),\n shape=(len(categorical.categories), len(categorical)),\n )\n return A\n", "path": "scanpy/get/_aggregated.py"}]} |
gh_patches_debug_1104 | rasdani/github-patches | git_diff | fossasia__open-event-server-4284 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Set attendees as required relationship to Orders API
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `app/api/orders.py`
Content:
```
1 from datetime import datetime
2
3 from flask import request
4 from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship
5 from marshmallow_jsonapi.flask import Schema, Relationship
6 from marshmallow_jsonapi import fields
7 from marshmallow import post_dump, validates_schema, validate
8 from flask_jwt import current_identity as current_user
9
10 from app.api.bootstrap import api
11 from app.api.data_layers.ChargesLayer import ChargesLayer
12 from app.api.helpers.db import save_to_db, safe_query
13 from app.api.helpers.exceptions import ForbiddenException, UnprocessableEntity
14 from app.api.helpers.payment import PayPalPaymentsManager
15 from app.api.helpers.ticketing import TicketingManager
16 from app.api.helpers.permission_manager import has_access
17 from app.api.helpers.permissions import jwt_required
18 from app.api.helpers.utilities import dasherize, require_relationship
19 from app.models import db
20 from app.models.discount_code import DiscountCode, TICKET
21 from app.models.order import Order, OrderTicket
22
23
24 class OrderSchema(Schema):
25 class Meta:
26 type_ = 'order'
27 self_view = 'v1.order_detail'
28 self_view_kwargs = {'id': '<id>'}
29 inflect = dasherize
30
31 @post_dump
32 def generate_payment_url(self, data):
33 if 'POST' in request.method or ('GET' in request.method and 'regenerate' in request.args) and 'completed' != \
34 data["status"]:
35 if data['payment_mode'] == 'stripe':
36 data['payment_url'] = 'stripe://payment'
37 elif data['payment_mode'] == 'paypal':
38 order = Order.query.filter_by(id=data['id']).first()
39 data['payment_url'] = PayPalPaymentsManager.get_checkout_url(order)
40 return data
41
42 @validates_schema
43 def initial_values(self, data):
44 if data.get('payment_mode') is None and 'POST' in request.method:
45 data['payment_mode'] = 'free'
46 return data
47
48 id = fields.Str(dump_only=True)
49 identifier = fields.Str(dump_only=True)
50 amount = fields.Float(validate=lambda n: n > 0)
51 address = fields.Str()
52 city = fields.Str()
53 state = fields.Str(db.String)
54 country = fields.Str(required=True)
55 zipcode = fields.Str()
56 completed_at = fields.DateTime(dump_only=True)
57 transaction_id = fields.Str(dump_only=True)
58 payment_mode = fields.Str()
59 paid_via = fields.Str(dump_only=True)
60 brand = fields.Str(dump_only=True)
61 exp_month = fields.Str(dump_only=True)
62 exp_year = fields.Str(dump_only=True)
63 last4 = fields.Str(dump_only=True)
64 status = fields.Str(validate=validate.OneOf(choices=["pending", "cancelled", "confirmed", "deleted"]))
65 discount_code_id = fields.Str()
66 payment_url = fields.Str(dump_only=True)
67
68 attendees = Relationship(attribute='ticket_holders',
69 self_view='v1.order_attendee',
70 self_view_kwargs={'identifier': '<identifier>'},
71 related_view='v1.attendee_list',
72 related_view_kwargs={'order_id': '<id>'},
73 schema='AttendeeSchema',
74 many=True,
75 type_='attendee')
76
77 tickets = Relationship(self_view='v1.order_ticket',
78 self_view_kwargs={'identifier': '<identifier>'},
79 related_view='v1.ticket_list',
80 related_view_kwargs={'order_id': '<id>'},
81 schema='TicketSchema',
82 many=True,
83 type_="ticket")
84
85 user = Relationship(self_view='v1.order_user',
86 self_view_kwargs={'identifier': '<identifier>'},
87 related_view='v1.user_detail',
88 related_view_kwargs={'id': '<user_id>'},
89 schema='UserSchema',
90 type_="user")
91
92 event = Relationship(self_view='v1.order_event',
93 self_view_kwargs={'identifier': '<identifier>'},
94 related_view='v1.event_detail',
95 related_view_kwargs={'id': '<event_id>'},
96 schema='EventSchema',
97 type_="event")
98
99 marketer = Relationship(self_view='v1.order_marketer',
100 self_view_kwargs={'identifier': '<identifier>'},
101 related_view='v1.user_detail',
102 related_view_kwargs={'id': '<marketer_id>'},
103 schema='UserSchema',
104 type_="user")
105
106 discount_code = Relationship(self_view='v1.order_discount',
107 self_view_kwargs={'identifier': '<identifier>'},
108 related_view='v1.discount_code_detail',
109 related_view_kwargs={'id': '<discount_code_id>'},
110 schema='DiscountCodeSchema',
111 type_="discount-code")
112
113
114 class OrdersListPost(ResourceList):
115 def before_post(self, args, kwargs, data=None):
116 require_relationship(['event'], data)
117 if not has_access('is_coorganizer', event_id=data['event']):
118 data['status'] = 'pending'
119
120 def before_create_object(self, data, view_kwargs):
121 # Apply discount only if the user is not event admin
122 if data.get('discount') and not has_access('is_coorganizer', event_id=data['event']):
123 discount_code = safe_query(self, DiscountCode, 'id', data['discount'], 'discount_code_id')
124 if not discount_code.is_active:
125 raise UnprocessableEntity({'source': 'discount_code_id'}, "Inactive Discount Code")
126 else:
127 now = datetime.utcnow()
128 valid_from = datetime.strptime(discount_code.valid_from, '%Y-%m-%d %H:%M:%S')
129 valid_till = datetime.strptime(discount_code.valid_till, '%Y-%m-%d %H:%M:%S')
130 if not (valid_from <= now <= valid_till):
131 raise UnprocessableEntity({'source': 'discount_code_id'}, "Inactive Discount Code")
132 if not TicketingManager.match_discount_quantity(discount_code, data['ticket_holders']):
133 raise UnprocessableEntity({'source': 'discount_code_id'}, 'Discount Usage Exceeded')
134
135 if discount_code.event.id != data['event'] and discount_code.user_for == TICKET:
136 raise UnprocessableEntity({'source': 'discount_code_id'}, "Invalid Discount Code")
137
138 def after_create_object(self, order, data, view_kwargs):
139 order_tickets = {}
140 for holder in order.ticket_holders:
141 if order_tickets.get(holder.ticket_id) is None:
142 order_tickets[holder.ticket_id] = 1
143 else:
144 order_tickets[holder.ticket_id] += 1
145 for ticket in order_tickets:
146 od = OrderTicket(order_id=order.id, ticket_id=ticket, quantity=order_tickets[ticket])
147 save_to_db(od)
148 order.quantity = order.get_tickets_count()
149 save_to_db(order)
150 if not has_access('is_coorganizer', **view_kwargs):
151 TicketingManager.calculate_update_amount(order)
152
153 data['user_id'] = current_user.id
154
155 methods = ['POST', ]
156 decorators = (jwt_required,)
157 schema = OrderSchema
158 data_layer = {'session': db.session,
159 'model': Order,
160 'methods': {'before_create_object': before_create_object,
161 'after_create_object': after_create_object
162 }}
163
164
165 class OrdersList(ResourceList):
166 def before_get(self, args, kwargs):
167 if kwargs.get('event_id') is None:
168 if 'GET' in request.method and has_access('is_admin'):
169 pass
170 else:
171 raise ForbiddenException({'source': ''}, "Admin Access Required")
172 elif not has_access('is_coorganizer', event_id=kwargs['event_id']):
173 raise ForbiddenException({'source': ''}, "Co-Organizer Access Required")
174
175 decorators = (jwt_required,)
176 schema = OrderSchema
177 data_layer = {'session': db.session,
178 'model': Order}
179
180
181 class OrderDetail(ResourceDetail):
182 def before_get_object(self, view_kwargs):
183 if view_kwargs.get('identifier'):
184 order = safe_query(self, Order, 'identifier', view_kwargs['identifier'], 'order_identifier')
185 view_kwargs['id'] = order.id
186
187 def before_update_object(self, order, data, view_kwargs):
188 if data.get('status'):
189 if has_access('is_coorganizer', event_id=order.event.id):
190 pass
191 else:
192 raise ForbiddenException({'pointer': 'data/status'},
193 "To update status minimum Co-organizer access required")
194
195 decorators = (api.has_permission('is_coorganizer', fetch="event_id", fetch_as="event_id", model=Order),)
196
197 schema = OrderSchema
198 data_layer = {'session': db.session,
199 'model': Order,
200 'methods': {'before_update_object': before_update_object}}
201
202
203 class OrderRelationship(ResourceRelationship):
204 decorators = (jwt_required,)
205 schema = OrderSchema
206 data_layer = {'session': db.session,
207 'model': Order}
208
209
210 class ChargeSchema(Schema):
211 class Meta:
212 type_ = 'charge'
213 inflect = dasherize
214 self_view = 'v1.charge_list'
215 self_view_kwargs = {'id': '<id>'}
216
217 id = fields.Str(dump_only=True)
218 stripe = fields.Str(allow_none=True)
219
220
221 class ChargeList(ResourceList):
222 methods = ['POST', ]
223 schema = ChargeSchema
224
225 data_layer = {
226 'class': ChargesLayer,
227 'session': db.session
228 }
229
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/app/api/orders.py b/app/api/orders.py
--- a/app/api/orders.py
+++ b/app/api/orders.py
@@ -113,7 +113,7 @@
class OrdersListPost(ResourceList):
def before_post(self, args, kwargs, data=None):
- require_relationship(['event'], data)
+ require_relationship(['event', 'attendees'], data)
if not has_access('is_coorganizer', event_id=data['event']):
data['status'] = 'pending'
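
For readers unfamiliar with the helper being patched here: `require_relationship` is the project's own validator imported from `app.api.helpers.utilities`, and its internals are not shown in this record. The sketch below is a hypothetical stand-in, written only to illustrate why adding `'attendees'` to the list makes that relationship mandatory; the function body, exception type, and message format are assumptions, not the project's actual code.

```python
# Hypothetical sketch of a require_relationship-style check (assumed, not taken
# from open-event-server). It rejects a deserialized JSON:API payload that is
# missing any of the listed relationship keys.
def require_relationship(relationships, data):
    missing = [rel for rel in relationships if not data.get(rel)]
    if missing:
        # The real helper presumably raises an API-specific exception; a plain
        # ValueError keeps this sketch self-contained.
        raise ValueError(
            "Missing required relationship(s): {}".format(", ".join(missing))
        )


# With the patched call, an order payload without attendees fails up front:
order_payload = {"event": "1"}  # hypothetical deserialized data, no 'attendees'
try:
    require_relationship(["event", "attendees"], order_payload)
except ValueError as exc:
    print(exc)  # Missing required relationship(s): attendees
```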
| {"golden_diff": "diff --git a/app/api/orders.py b/app/api/orders.py\n--- a/app/api/orders.py\n+++ b/app/api/orders.py\n@@ -113,7 +113,7 @@\n \n class OrdersListPost(ResourceList):\n def before_post(self, args, kwargs, data=None):\n- require_relationship(['event'], data)\n+ require_relationship(['event', 'attendees'], data)\n if not has_access('is_coorganizer', event_id=data['event']):\n data['status'] = 'pending'\n", "issue": "Set attendees as required relationship to Orders API\n\n", "before_files": [{"content": "from datetime import datetime\n\nfrom flask import request\nfrom flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship\nfrom marshmallow_jsonapi.flask import Schema, Relationship\nfrom marshmallow_jsonapi import fields\nfrom marshmallow import post_dump, validates_schema, validate\nfrom flask_jwt import current_identity as current_user\n\nfrom app.api.bootstrap import api\nfrom app.api.data_layers.ChargesLayer import ChargesLayer\nfrom app.api.helpers.db import save_to_db, safe_query\nfrom app.api.helpers.exceptions import ForbiddenException, UnprocessableEntity\nfrom app.api.helpers.payment import PayPalPaymentsManager\nfrom app.api.helpers.ticketing import TicketingManager\nfrom app.api.helpers.permission_manager import has_access\nfrom app.api.helpers.permissions import jwt_required\nfrom app.api.helpers.utilities import dasherize, require_relationship\nfrom app.models import db\nfrom app.models.discount_code import DiscountCode, TICKET\nfrom app.models.order import Order, OrderTicket\n\n\nclass OrderSchema(Schema):\n class Meta:\n type_ = 'order'\n self_view = 'v1.order_detail'\n self_view_kwargs = {'id': '<id>'}\n inflect = dasherize\n\n @post_dump\n def generate_payment_url(self, data):\n if 'POST' in request.method or ('GET' in request.method and 'regenerate' in request.args) and 'completed' != \\\n data[\"status\"]:\n if data['payment_mode'] == 'stripe':\n data['payment_url'] = 'stripe://payment'\n elif data['payment_mode'] == 'paypal':\n order = Order.query.filter_by(id=data['id']).first()\n data['payment_url'] = PayPalPaymentsManager.get_checkout_url(order)\n return data\n\n @validates_schema\n def initial_values(self, data):\n if data.get('payment_mode') is None and 'POST' in request.method:\n data['payment_mode'] = 'free'\n return data\n\n id = fields.Str(dump_only=True)\n identifier = fields.Str(dump_only=True)\n amount = fields.Float(validate=lambda n: n > 0)\n address = fields.Str()\n city = fields.Str()\n state = fields.Str(db.String)\n country = fields.Str(required=True)\n zipcode = fields.Str()\n completed_at = fields.DateTime(dump_only=True)\n transaction_id = fields.Str(dump_only=True)\n payment_mode = fields.Str()\n paid_via = fields.Str(dump_only=True)\n brand = fields.Str(dump_only=True)\n exp_month = fields.Str(dump_only=True)\n exp_year = fields.Str(dump_only=True)\n last4 = fields.Str(dump_only=True)\n status = fields.Str(validate=validate.OneOf(choices=[\"pending\", \"cancelled\", \"confirmed\", \"deleted\"]))\n discount_code_id = fields.Str()\n payment_url = fields.Str(dump_only=True)\n\n attendees = Relationship(attribute='ticket_holders',\n self_view='v1.order_attendee',\n self_view_kwargs={'identifier': '<identifier>'},\n related_view='v1.attendee_list',\n related_view_kwargs={'order_id': '<id>'},\n schema='AttendeeSchema',\n many=True,\n type_='attendee')\n\n tickets = Relationship(self_view='v1.order_ticket',\n self_view_kwargs={'identifier': '<identifier>'},\n related_view='v1.ticket_list',\n 
related_view_kwargs={'order_id': '<id>'},\n schema='TicketSchema',\n many=True,\n type_=\"ticket\")\n\n user = Relationship(self_view='v1.order_user',\n self_view_kwargs={'identifier': '<identifier>'},\n related_view='v1.user_detail',\n related_view_kwargs={'id': '<user_id>'},\n schema='UserSchema',\n type_=\"user\")\n\n event = Relationship(self_view='v1.order_event',\n self_view_kwargs={'identifier': '<identifier>'},\n related_view='v1.event_detail',\n related_view_kwargs={'id': '<event_id>'},\n schema='EventSchema',\n type_=\"event\")\n\n marketer = Relationship(self_view='v1.order_marketer',\n self_view_kwargs={'identifier': '<identifier>'},\n related_view='v1.user_detail',\n related_view_kwargs={'id': '<marketer_id>'},\n schema='UserSchema',\n type_=\"user\")\n\n discount_code = Relationship(self_view='v1.order_discount',\n self_view_kwargs={'identifier': '<identifier>'},\n related_view='v1.discount_code_detail',\n related_view_kwargs={'id': '<discount_code_id>'},\n schema='DiscountCodeSchema',\n type_=\"discount-code\")\n\n\nclass OrdersListPost(ResourceList):\n def before_post(self, args, kwargs, data=None):\n require_relationship(['event'], data)\n if not has_access('is_coorganizer', event_id=data['event']):\n data['status'] = 'pending'\n\n def before_create_object(self, data, view_kwargs):\n # Apply discount only if the user is not event admin\n if data.get('discount') and not has_access('is_coorganizer', event_id=data['event']):\n discount_code = safe_query(self, DiscountCode, 'id', data['discount'], 'discount_code_id')\n if not discount_code.is_active:\n raise UnprocessableEntity({'source': 'discount_code_id'}, \"Inactive Discount Code\")\n else:\n now = datetime.utcnow()\n valid_from = datetime.strptime(discount_code.valid_from, '%Y-%m-%d %H:%M:%S')\n valid_till = datetime.strptime(discount_code.valid_till, '%Y-%m-%d %H:%M:%S')\n if not (valid_from <= now <= valid_till):\n raise UnprocessableEntity({'source': 'discount_code_id'}, \"Inactive Discount Code\")\n if not TicketingManager.match_discount_quantity(discount_code, data['ticket_holders']):\n raise UnprocessableEntity({'source': 'discount_code_id'}, 'Discount Usage Exceeded')\n\n if discount_code.event.id != data['event'] and discount_code.user_for == TICKET:\n raise UnprocessableEntity({'source': 'discount_code_id'}, \"Invalid Discount Code\")\n\n def after_create_object(self, order, data, view_kwargs):\n order_tickets = {}\n for holder in order.ticket_holders:\n if order_tickets.get(holder.ticket_id) is None:\n order_tickets[holder.ticket_id] = 1\n else:\n order_tickets[holder.ticket_id] += 1\n for ticket in order_tickets:\n od = OrderTicket(order_id=order.id, ticket_id=ticket, quantity=order_tickets[ticket])\n save_to_db(od)\n order.quantity = order.get_tickets_count()\n save_to_db(order)\n if not has_access('is_coorganizer', **view_kwargs):\n TicketingManager.calculate_update_amount(order)\n\n data['user_id'] = current_user.id\n\n methods = ['POST', ]\n decorators = (jwt_required,)\n schema = OrderSchema\n data_layer = {'session': db.session,\n 'model': Order,\n 'methods': {'before_create_object': before_create_object,\n 'after_create_object': after_create_object\n }}\n\n\nclass OrdersList(ResourceList):\n def before_get(self, args, kwargs):\n if kwargs.get('event_id') is None:\n if 'GET' in request.method and has_access('is_admin'):\n pass\n else:\n raise ForbiddenException({'source': ''}, \"Admin Access Required\")\n elif not has_access('is_coorganizer', event_id=kwargs['event_id']):\n raise 
ForbiddenException({'source': ''}, \"Co-Organizer Access Required\")\n\n decorators = (jwt_required,)\n schema = OrderSchema\n data_layer = {'session': db.session,\n 'model': Order}\n\n\nclass OrderDetail(ResourceDetail):\n def before_get_object(self, view_kwargs):\n if view_kwargs.get('identifier'):\n order = safe_query(self, Order, 'identifier', view_kwargs['identifier'], 'order_identifier')\n view_kwargs['id'] = order.id\n\n def before_update_object(self, order, data, view_kwargs):\n if data.get('status'):\n if has_access('is_coorganizer', event_id=order.event.id):\n pass\n else:\n raise ForbiddenException({'pointer': 'data/status'},\n \"To update status minimum Co-organizer access required\")\n\n decorators = (api.has_permission('is_coorganizer', fetch=\"event_id\", fetch_as=\"event_id\", model=Order),)\n\n schema = OrderSchema\n data_layer = {'session': db.session,\n 'model': Order,\n 'methods': {'before_update_object': before_update_object}}\n\n\nclass OrderRelationship(ResourceRelationship):\n decorators = (jwt_required,)\n schema = OrderSchema\n data_layer = {'session': db.session,\n 'model': Order}\n\n\nclass ChargeSchema(Schema):\n class Meta:\n type_ = 'charge'\n inflect = dasherize\n self_view = 'v1.charge_list'\n self_view_kwargs = {'id': '<id>'}\n\n id = fields.Str(dump_only=True)\n stripe = fields.Str(allow_none=True)\n\n\nclass ChargeList(ResourceList):\n methods = ['POST', ]\n schema = ChargeSchema\n\n data_layer = {\n 'class': ChargesLayer,\n 'session': db.session\n }\n", "path": "app/api/orders.py"}], "after_files": [{"content": "from datetime import datetime\n\nfrom flask import request\nfrom flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship\nfrom marshmallow_jsonapi.flask import Schema, Relationship\nfrom marshmallow_jsonapi import fields\nfrom marshmallow import post_dump, validates_schema, validate\nfrom flask_jwt import current_identity as current_user\n\nfrom app.api.bootstrap import api\nfrom app.api.data_layers.ChargesLayer import ChargesLayer\nfrom app.api.helpers.db import save_to_db, safe_query\nfrom app.api.helpers.exceptions import ForbiddenException, UnprocessableEntity\nfrom app.api.helpers.payment import PayPalPaymentsManager\nfrom app.api.helpers.ticketing import TicketingManager\nfrom app.api.helpers.permission_manager import has_access\nfrom app.api.helpers.permissions import jwt_required\nfrom app.api.helpers.utilities import dasherize, require_relationship\nfrom app.models import db\nfrom app.models.discount_code import DiscountCode, TICKET\nfrom app.models.order import Order, OrderTicket\n\n\nclass OrderSchema(Schema):\n class Meta:\n type_ = 'order'\n self_view = 'v1.order_detail'\n self_view_kwargs = {'id': '<id>'}\n inflect = dasherize\n\n @post_dump\n def generate_payment_url(self, data):\n if 'POST' in request.method or ('GET' in request.method and 'regenerate' in request.args) and 'completed' != \\\n data[\"status\"]:\n if data['payment_mode'] == 'stripe':\n data['payment_url'] = 'stripe://payment'\n elif data['payment_mode'] == 'paypal':\n order = Order.query.filter_by(id=data['id']).first()\n data['payment_url'] = PayPalPaymentsManager.get_checkout_url(order)\n return data\n\n @validates_schema\n def initial_values(self, data):\n if data.get('payment_mode') is None and 'POST' in request.method:\n data['payment_mode'] = 'free'\n return data\n\n id = fields.Str(dump_only=True)\n identifier = fields.Str(dump_only=True)\n amount = fields.Float(validate=lambda n: n > 0)\n address = fields.Str()\n city = fields.Str()\n 
state = fields.Str(db.String)\n country = fields.Str(required=True)\n zipcode = fields.Str()\n completed_at = fields.DateTime(dump_only=True)\n transaction_id = fields.Str(dump_only=True)\n payment_mode = fields.Str()\n paid_via = fields.Str(dump_only=True)\n brand = fields.Str(dump_only=True)\n exp_month = fields.Str(dump_only=True)\n exp_year = fields.Str(dump_only=True)\n last4 = fields.Str(dump_only=True)\n status = fields.Str(validate=validate.OneOf(choices=[\"pending\", \"cancelled\", \"confirmed\", \"deleted\"]))\n discount_code_id = fields.Str()\n payment_url = fields.Str(dump_only=True)\n\n attendees = Relationship(attribute='ticket_holders',\n self_view='v1.order_attendee',\n self_view_kwargs={'identifier': '<identifier>'},\n related_view='v1.attendee_list',\n related_view_kwargs={'order_id': '<id>'},\n schema='AttendeeSchema',\n many=True,\n type_='attendee')\n\n tickets = Relationship(self_view='v1.order_ticket',\n self_view_kwargs={'identifier': '<identifier>'},\n related_view='v1.ticket_list',\n related_view_kwargs={'order_id': '<id>'},\n schema='TicketSchema',\n many=True,\n type_=\"ticket\")\n\n user = Relationship(self_view='v1.order_user',\n self_view_kwargs={'identifier': '<identifier>'},\n related_view='v1.user_detail',\n related_view_kwargs={'id': '<user_id>'},\n schema='UserSchema',\n type_=\"user\")\n\n event = Relationship(self_view='v1.order_event',\n self_view_kwargs={'identifier': '<identifier>'},\n related_view='v1.event_detail',\n related_view_kwargs={'id': '<event_id>'},\n schema='EventSchema',\n type_=\"event\")\n\n marketer = Relationship(self_view='v1.order_marketer',\n self_view_kwargs={'identifier': '<identifier>'},\n related_view='v1.user_detail',\n related_view_kwargs={'id': '<marketer_id>'},\n schema='UserSchema',\n type_=\"user\")\n\n discount_code = Relationship(self_view='v1.order_discount',\n self_view_kwargs={'identifier': '<identifier>'},\n related_view='v1.discount_code_detail',\n related_view_kwargs={'id': '<discount_code_id>'},\n schema='DiscountCodeSchema',\n type_=\"discount-code\")\n\n\nclass OrdersListPost(ResourceList):\n def before_post(self, args, kwargs, data=None):\n require_relationship(['event', 'attendees'], data)\n if not has_access('is_coorganizer', event_id=data['event']):\n data['status'] = 'pending'\n\n def before_create_object(self, data, view_kwargs):\n # Apply discount only if the user is not event admin\n if data.get('discount') and not has_access('is_coorganizer', event_id=data['event']):\n discount_code = safe_query(self, DiscountCode, 'id', data['discount'], 'discount_code_id')\n if not discount_code.is_active:\n raise UnprocessableEntity({'source': 'discount_code_id'}, \"Inactive Discount Code\")\n else:\n now = datetime.utcnow()\n valid_from = datetime.strptime(discount_code.valid_from, '%Y-%m-%d %H:%M:%S')\n valid_till = datetime.strptime(discount_code.valid_till, '%Y-%m-%d %H:%M:%S')\n if not (valid_from <= now <= valid_till):\n raise UnprocessableEntity({'source': 'discount_code_id'}, \"Inactive Discount Code\")\n if not TicketingManager.match_discount_quantity(discount_code, data['ticket_holders']):\n raise UnprocessableEntity({'source': 'discount_code_id'}, 'Discount Usage Exceeded')\n\n if discount_code.event.id != data['event'] and discount_code.user_for == TICKET:\n raise UnprocessableEntity({'source': 'discount_code_id'}, \"Invalid Discount Code\")\n\n def after_create_object(self, order, data, view_kwargs):\n order_tickets = {}\n for holder in order.ticket_holders:\n if order_tickets.get(holder.ticket_id) is 
None:\n order_tickets[holder.ticket_id] = 1\n else:\n order_tickets[holder.ticket_id] += 1\n for ticket in order_tickets:\n od = OrderTicket(order_id=order.id, ticket_id=ticket, quantity=order_tickets[ticket])\n save_to_db(od)\n order.quantity = order.get_tickets_count()\n save_to_db(order)\n if not has_access('is_coorganizer', **view_kwargs):\n TicketingManager.calculate_update_amount(order)\n\n data['user_id'] = current_user.id\n\n methods = ['POST', ]\n decorators = (jwt_required,)\n schema = OrderSchema\n data_layer = {'session': db.session,\n 'model': Order,\n 'methods': {'before_create_object': before_create_object,\n 'after_create_object': after_create_object\n }}\n\n\nclass OrdersList(ResourceList):\n def before_get(self, args, kwargs):\n if kwargs.get('event_id') is None:\n if 'GET' in request.method and has_access('is_admin'):\n pass\n else:\n raise ForbiddenException({'source': ''}, \"Admin Access Required\")\n elif not has_access('is_coorganizer', event_id=kwargs['event_id']):\n raise ForbiddenException({'source': ''}, \"Co-Organizer Access Required\")\n\n decorators = (jwt_required,)\n schema = OrderSchema\n data_layer = {'session': db.session,\n 'model': Order}\n\n\nclass OrderDetail(ResourceDetail):\n def before_get_object(self, view_kwargs):\n if view_kwargs.get('identifier'):\n order = safe_query(self, Order, 'identifier', view_kwargs['identifier'], 'order_identifier')\n view_kwargs['id'] = order.id\n\n def before_update_object(self, order, data, view_kwargs):\n if data.get('status'):\n if has_access('is_coorganizer', event_id=order.event.id):\n pass\n else:\n raise ForbiddenException({'pointer': 'data/status'},\n \"To update status minimum Co-organizer access required\")\n\n decorators = (api.has_permission('is_coorganizer', fetch=\"event_id\", fetch_as=\"event_id\", model=Order),)\n\n schema = OrderSchema\n data_layer = {'session': db.session,\n 'model': Order,\n 'methods': {'before_update_object': before_update_object}}\n\n\nclass OrderRelationship(ResourceRelationship):\n decorators = (jwt_required,)\n schema = OrderSchema\n data_layer = {'session': db.session,\n 'model': Order}\n\n\nclass ChargeSchema(Schema):\n class Meta:\n type_ = 'charge'\n inflect = dasherize\n self_view = 'v1.charge_list'\n self_view_kwargs = {'id': '<id>'}\n\n id = fields.Str(dump_only=True)\n stripe = fields.Str(allow_none=True)\n\n\nclass ChargeList(ResourceList):\n methods = ['POST', ]\n schema = ChargeSchema\n\n data_layer = {\n 'class': ChargesLayer,\n 'session': db.session\n }\n", "path": "app/api/orders.py"}]} |
gh_patches_debug_1105 | rasdani/github-patches | git_diff | wright-group__WrightTools-512 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
save method of collection throws ValueError
I have a collection. I am attempting to save it, `col.save(filepath=p, overwrite=True)`. The following error is thrown:
```
File "<ipython-input-12-664d233e4850>", line 1, in <module>
runfile('/home/darien/source/MoS2_TSF/simulations/simulations.py', wdir='/home/darien/source/MoS2_TSF/simulations')
File "/home/darien/anaconda3/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py", line 705, in runfile
execfile(filename, namespace)
File "/home/darien/anaconda3/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py", line 102, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "/home/darien/source/MoS2_TSF/simulations/simulations.py", line 84, in <module>
col.save(filepath=p, overwrite=True)
File "/home/darien/source/WrightTools/WrightTools/_group.py", line 317, in save
super().copy(v, new, name=v.natural_name)
File "/home/darien/anaconda3/lib/python3.6/site-packages/h5py/_hl/group.py", line 399, in copy
copypl, base.dlcpl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5o.pyx", line 217, in h5py.h5o.copy
ValueError: Destination object already exists (destination object already exists)
```
The error gets thrown even if `simulations.wt5` does not exist at runtime. The `save` method creates the file on disk, but doesn't finish the job.
I attempted to replicate the problem.
```
import numpy as np
import WrightTools as wt
x = np.linspace(0.1, 1, 10)
y = np.linspace(0.1, 1, 10)
z = x[:, None] * y[None, :]
root = wt.Collection(name='root')
d = root.create_data()
d.create_variable('x', values=x, units=None)
d.create_variable('y', values=y, units=None)
d.transform(['x', 'y'])
d.create_channel('z', values=z, units='p_nm')
p = 'testy.wt5'
root.save(p, overwrite=True)
```
This script works *as expected* :disappointed:.\r\n\r\nIn short, `save` is not working for me, but I can't nail down the problem. Feel free to check out my repo on GitLab to see if you can replicate the problem with the same code I am using.\r\nIt should be the case that you can just clone the repo and run `simulations/simulations.py`.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `WrightTools/_group.py`
Content:
```
1 """Group base class."""
2
3
4 # --- import --------------------------------------------------------------------------------------
5
6
7 import shutil
8 import os
9 import weakref
10 import tempfile
11 import posixpath
12 import warnings
13
14 import numpy as np
15
16 import h5py
17
18
19 # --- define --------------------------------------------------------------------------------------
20
21
22 wt5_version = '0.0.0'
23
24
25 # --- class ---------------------------------------------------------------------------------------
26
27
28 class MetaClass(type(h5py.Group)):
29
30 def __call__(cls, *args, **kwargs):
31 """Bypass normal construction."""
32 return cls.__new__(cls, *args, **kwargs)
33
34
35 class Group(h5py.Group, metaclass=MetaClass):
36 """Container of groups and datasets."""
37
38 instances = {}
39 class_name = 'Group'
40
41 def __init__(self, filepath=None, parent=None, name=None, **kwargs):
42 if filepath is None:
43 return
44 # parent
45 if parent is None:
46 parent = ''
47 if parent == '':
48 parent = posixpath.sep
49 path = posixpath.sep
50 else:
51 path = posixpath.sep.join([parent, name])
52 # file
53 self.filepath = filepath
54 file = h5py.File(self.filepath, 'a')
55 file.require_group(parent)
56 file.require_group(path)
57 h5py.Group.__init__(self, bind=file[path].id)
58 self.__n = 0
59 self.fid = self.file.fid
60 self.natural_name = name
61 # attrs
62 self.attrs['class'] = self.class_name
63 for key, value in kwargs.items():
64 try:
65 if isinstance(value, str):
66 value = value.encode()
67 elif isinstance(value, list) and len(value) > 0 and isinstance(value[0], str):
68 value = np.array(value, dtype='S')
69 self.attrs[key] = value
70 except TypeError:
71 # some values have no native HDF5 equivalent
72 message = "'{}' not included in attrs because its Type ({}) cannot be represented"
73 message = message.format(key, type(value))
74 warnings.warn(message)
75 # the following are populated if not already recorded
76 self.__version__
77 self.item_names
78
79 parent = file[parent]
80 if parent.name == self.name:
81 pass # at root, dont add to item_names
82 elif self.natural_name not in parent.attrs['item_names']:
83 parent.attrs['item_names'] = np.append(parent.attrs['item_names'],
84 self.natural_name.encode())
85
86 def __getattr__(self, key):
87 """Gets called if attribute not in self.__dict__.
88
89 See __getattribute__.
90 """
91 if key in self.keys():
92 value = self[key]
93 setattr(self, key, value)
94 return self[key]
95 else:
96 message = '{0} has no attribute {1}'.format(self.class_name, key)
97 raise AttributeError(message)
98
99 def __getitem__(self, key):
100 from .collection import Collection
101 from .data._data import Channel, Data, Variable
102 out = super().__getitem__(key)
103 if 'class' in out.attrs.keys():
104 if out.attrs['class'] == 'Channel':
105 return Channel(parent=self, id=out.id)
106 elif out.attrs['class'] == 'Collection':
107 return Collection(filepath=self.filepath, parent=self.name, name=key,
108 edit_local=True)
109 elif out.attrs['class'] == 'Data':
110 return Data(filepath=self.filepath, parent=self.name, name=key,
111 edit_local=True)
112 elif out.attrs['class'] == 'Variable':
113 return Variable(parent=self, id=out.id)
114 else:
115 return Group(filepath=self.filepath, parent=self.name, name=key,
116 edit_local=True)
117 else:
118 return out
119
120 def __new__(cls, *args, **kwargs):
121 """New object formation handler."""
122 # extract
123 filepath = args[0] if len(args) > 0 else kwargs.get('filepath', None)
124 parent = args[1] if len(args) > 1 else kwargs.get('parent', None)
125 natural_name = args[2] if len(args) > 2 else kwargs.get('name', cls.class_name.lower())
126 edit_local = args[3] if len(args) > 3 else kwargs.get('edit_local', False)
127 if isinstance(parent, h5py.Group):
128 filepath = parent.filepath
129 parent = parent.name
130 edit_local = True
131 # tempfile
132 tmpfile = None
133 if edit_local and filepath is None:
134 raise Exception # TODO: better exception
135 if not edit_local:
136 tmpfile = tempfile.mkstemp(prefix='', suffix='.wt5')
137 p = tmpfile[1]
138 if filepath:
139 shutil.copyfile(src=filepath, dst=p)
140 elif edit_local and filepath:
141 p = filepath
142 # construct fullpath
143 if parent is None:
144 parent = ''
145 name = posixpath.sep
146 else:
147 name = natural_name
148 fullpath = p + '::' + parent + name
149 # create and/or return
150 if fullpath not in cls.instances.keys():
151 kwargs['filepath'] = p
152 kwargs['parent'] = parent
153 kwargs['name'] = natural_name
154 instance = super(Group, cls).__new__(cls)
155 cls.__init__(instance, **kwargs)
156 cls.instances[fullpath] = instance
157 if tmpfile:
158 setattr(instance, '_tmpfile', tmpfile)
159 weakref.finalize(instance, instance.close)
160 return instance
161 instance = cls.instances[fullpath]
162 return instance
163
164 @property
165 def __version__(self):
166 if '__version__' not in self.file.attrs.keys():
167 self.file.attrs['__version__'] = wt5_version
168 return self.file.attrs['__version__']
169
170 @property
171 def fullpath(self):
172 """Full path: file and internal structure."""
173 return self.filepath + '::' + self.name
174
175 @property
176 def item_names(self):
177 """Item names."""
178 if 'item_names' not in self.attrs.keys():
179 self.attrs['item_names'] = np.array([], dtype='S')
180 return tuple(n.decode() for n in self.attrs['item_names'])
181
182 @property
183 def natural_name(self):
184 """Natural name."""
185 try:
186 assert self._natural_name is not None
187 except (AssertionError, AttributeError):
188 self._natural_name = self.attrs['name']
189 finally:
190 return self._natural_name
191
192 @natural_name.setter
193 def natural_name(self, value):
194 """Set natural name."""
195 if value is None:
196 value = ''
197 self._natural_name = self.attrs['name'] = value
198
199 @property
200 def parent(self):
201 """Parent."""
202 try:
203 assert self._parent is not None
204 except (AssertionError, AttributeError):
205 from .collection import Collection
206 key = posixpath.dirname(self.fullpath) + posixpath.sep
207 self._parent = Collection.instances[key]
208 finally:
209 return self._parent
210
211 def close(self):
212 """Close the group. Tempfile will be removed, if this is the final reference."""
213 if(self.fid.valid > 0):
214 self.__class__.instances.pop(self.fullpath, None)
215 # for some reason, the following file operations sometimes fail
216 # this stops execution of the method, meaning that the tempfile is never removed
217 # the following try case ensures that the tempfile code is always executed
218 # ---Blaise 2018-01-08
219 try:
220 self.file.flush()
221 self.file.close()
222 except SystemError:
223 pass
224 finally:
225 if hasattr(self, '_tmpfile'):
226 os.close(self._tmpfile[0])
227 os.remove(self._tmpfile[1])
228
229 def copy(self, parent=None, name=None, verbose=True):
230 """Create a copy under parent.
231
232 All children are copied as well.
233
234 Parameters
235 ----------
236 parent : WrightTools Collection (optional)
237 Parent to copy within. If None, copy is created in root of new
238 tempfile. Default is None.
239 name : string (optional)
240 Name of new copy at destination. If None, the current natural
241 name is used. Default is None.
242 verbose : boolean (optional)
243 Toggle talkback. Default is True.
244
245 Returns
246 -------
247 Group
248 Created copy.
249 """
250 if name is None:
251 name = self.natural_name
252 if parent is None:
253 from ._open import open as wt_open # circular import
254 new = Group() # root of new tempfile
255 # attrs
256 new.attrs.update(self.attrs)
257 new.natural_name = name
258 # children
259 for k, v in self.items():
260 super().copy(v, new, name=v.natural_name)
261 new.flush()
262 p = new.filepath
263 new = wt_open(p)
264 else:
265 # copy
266 self.file.copy(self.name, parent, name=name)
267 if 'item_names' in parent.attrs.keys():
268 new = parent.item_names + (name,)
269 parent.attrs['item_names'] = np.array(new, dtype='S')
270 new = parent[name]
271 # finish
272 if verbose:
273 print('{0} copied to {1}'.format(self.fullpath, new.fullpath))
274 return new
275
276 def flush(self):
277 """Ensure contents are written to file."""
278 self.file.flush()
279
280 def save(self, filepath=None, overwrite=False, verbose=True):
281 """Save as root of a new file.
282
283 Parameters
284 ----------
285 filepath : string (optional)
286 Filepath to write. If None, file is created using natural_name.
287 overwrite : boolean (optional)
288 Toggle overwrite behavior. Default is False.
289 verbose : boolean (optional)
290 Toggle talkback. Default is True
291
292 Returns
293 -------
294 str
295 Written filepath.
296 """
297 # parse filepath
298 if filepath is None:
299 filepath = os.path.join(os.getcwd(), self.natural_name + '.wt5')
300 elif not filepath.endswith(('.wt5', '.h5', '.hdf5')):
301 filepath += '.wt5'
302 filepath = os.path.expanduser(filepath)
303 # handle overwrite
304 if os.path.isfile(filepath):
305 if overwrite:
306 os.remove(filepath)
307 else:
308 raise FileExistsError(filepath)
309 # copy to new file
310 h5py.File(filepath)
311 new = Group(filepath=filepath, edit_local=True)
312 # attrs
313 for k, v in self.attrs.items():
314 new.attrs[k] = v
315 # children
316 for k, v in self.items():
317 super().copy(v, new, name=v.natural_name)
318 # finish
319 new.flush()
320 del new
321 if verbose:
322 print('file saved at', filepath)
323 return filepath
324
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/WrightTools/_group.py b/WrightTools/_group.py
--- a/WrightTools/_group.py
+++ b/WrightTools/_group.py
@@ -264,9 +264,6 @@
else:
# copy
self.file.copy(self.name, parent, name=name)
- if 'item_names' in parent.attrs.keys():
- new = parent.item_names + (name,)
- parent.attrs['item_names'] = np.array(new, dtype='S')
new = parent[name]
# finish
if verbose:
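
As a side note on the error class reported in the traceback above: h5py refuses to copy a group into a destination where an object with the target name already exists, and that refusal surfaces as the `ValueError` the issue describes. The snippet below is a minimal, WrightTools-independent sketch of that behavior; the file names are arbitrary and it does not claim to reproduce the exact state of the user's collection.

```python
# Minimal h5py sketch (independent of WrightTools) of the failure mode from the
# traceback: copying a group onto an existing destination name raises ValueError.
import os
import tempfile

import h5py

tmpdir = tempfile.mkdtemp()
with h5py.File(os.path.join(tmpdir, "src.h5"), "w") as src, \
        h5py.File(os.path.join(tmpdir, "dst.h5"), "w") as dst:
    src.create_group("child")
    src.copy("child", dst, name="child")      # first copy succeeds
    try:
        src.copy("child", dst, name="child")  # name already present in dst
    except ValueError as exc:
        print("h5py raised:", exc)  # "... destination object already exists"
```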
| {"golden_diff": "diff --git a/WrightTools/_group.py b/WrightTools/_group.py\n--- a/WrightTools/_group.py\n+++ b/WrightTools/_group.py\n@@ -264,9 +264,6 @@\n else:\n # copy\n self.file.copy(self.name, parent, name=name)\n- if 'item_names' in parent.attrs.keys():\n- new = parent.item_names + (name,)\n- parent.attrs['item_names'] = np.array(new, dtype='S')\n new = parent[name]\n # finish\n if verbose:\n", "issue": "save method of collection throws ValueError\nI have a collection. I am attempting to save it, `col.save(filepath=p, overwrite=True)`. The following error is thrown:\r\n```\r\n\r\n File \"<ipython-input-12-664d233e4850>\", line 1, in <module>\r\n runfile('/home/darien/source/MoS2_TSF/simulations/simulations.py', wdir='/home/darien/source/MoS2_TSF/simulations')\r\n\r\n File \"/home/darien/anaconda3/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py\", line 705, in runfile\r\n execfile(filename, namespace)\r\n\r\n File \"/home/darien/anaconda3/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py\", line 102, in execfile\r\n exec(compile(f.read(), filename, 'exec'), namespace)\r\n\r\n File \"/home/darien/source/MoS2_TSF/simulations/simulations.py\", line 84, in <module>\r\n col.save(filepath=p, overwrite=True)\r\n\r\n File \"/home/darien/source/WrightTools/WrightTools/_group.py\", line 317, in save\r\n super().copy(v, new, name=v.natural_name)\r\n\r\n File \"/home/darien/anaconda3/lib/python3.6/site-packages/h5py/_hl/group.py\", line 399, in copy\r\n copypl, base.dlcpl)\r\n\r\n File \"h5py/_objects.pyx\", line 54, in h5py._objects.with_phil.wrapper\r\n\r\n File \"h5py/_objects.pyx\", line 55, in h5py._objects.with_phil.wrapper\r\n\r\n File \"h5py/h5o.pyx\", line 217, in h5py.h5o.copy\r\n\r\nValueError: Destination object already exists (destination object already exists)\r\n```\r\nThe error gets thrown even if `simulations.wt5` does not exist at runtime. The `save` method creates the file on disk, but doesn't finish the job. \r\n\r\nI attempted to replicate the problem.\r\n```\r\nimport numpy as np\r\nimport WrightTools as wt\r\n\r\nx = np.linspace(0.1, 1, 10)\r\ny = np.linspace(0.1, 1, 10)\r\nz = x[:, None] * y[None, :]\r\n\r\nroot = wt.Collection(name='root')\r\nd = root.create_data()\r\n\r\nd.create_variable('x', values=x, units=None)\r\nd.create_variable('y', values=y, units=None)\r\nd.transform(['x', 'y'])\r\nd.create_channel('z', values=z, units='p_nm')\r\n \r\np = 'testy.wt5' \r\nroot.save(p, overwrite=True)\r\n```\r\nThis script works *as expected* :disappointed: . \r\n\r\nIn short, `save` is not working for me, but I can't nail down the problem. 
Feel free to checkout me in gitlab to see if you can replicate the problem with the same code I am using.\r\nShould be the case that you can just clone and run `simulations/simulations.py`.\n", "before_files": [{"content": "\"\"\"Group base class.\"\"\"\n\n\n# --- import --------------------------------------------------------------------------------------\n\n\nimport shutil\nimport os\nimport weakref\nimport tempfile\nimport posixpath\nimport warnings\n\nimport numpy as np\n\nimport h5py\n\n\n# --- define --------------------------------------------------------------------------------------\n\n\nwt5_version = '0.0.0'\n\n\n# --- class ---------------------------------------------------------------------------------------\n\n\nclass MetaClass(type(h5py.Group)):\n\n def __call__(cls, *args, **kwargs):\n \"\"\"Bypass normal construction.\"\"\"\n return cls.__new__(cls, *args, **kwargs)\n\n\nclass Group(h5py.Group, metaclass=MetaClass):\n \"\"\"Container of groups and datasets.\"\"\"\n\n instances = {}\n class_name = 'Group'\n\n def __init__(self, filepath=None, parent=None, name=None, **kwargs):\n if filepath is None:\n return\n # parent\n if parent is None:\n parent = ''\n if parent == '':\n parent = posixpath.sep\n path = posixpath.sep\n else:\n path = posixpath.sep.join([parent, name])\n # file\n self.filepath = filepath\n file = h5py.File(self.filepath, 'a')\n file.require_group(parent)\n file.require_group(path)\n h5py.Group.__init__(self, bind=file[path].id)\n self.__n = 0\n self.fid = self.file.fid\n self.natural_name = name\n # attrs\n self.attrs['class'] = self.class_name\n for key, value in kwargs.items():\n try:\n if isinstance(value, str):\n value = value.encode()\n elif isinstance(value, list) and len(value) > 0 and isinstance(value[0], str):\n value = np.array(value, dtype='S')\n self.attrs[key] = value\n except TypeError:\n # some values have no native HDF5 equivalent\n message = \"'{}' not included in attrs because its Type ({}) cannot be represented\"\n message = message.format(key, type(value))\n warnings.warn(message)\n # the following are populated if not already recorded\n self.__version__\n self.item_names\n\n parent = file[parent]\n if parent.name == self.name:\n pass # at root, dont add to item_names\n elif self.natural_name not in parent.attrs['item_names']:\n parent.attrs['item_names'] = np.append(parent.attrs['item_names'],\n self.natural_name.encode())\n\n def __getattr__(self, key):\n \"\"\"Gets called if attribute not in self.__dict__.\n\n See __getattribute__.\n \"\"\"\n if key in self.keys():\n value = self[key]\n setattr(self, key, value)\n return self[key]\n else:\n message = '{0} has no attribute {1}'.format(self.class_name, key)\n raise AttributeError(message)\n\n def __getitem__(self, key):\n from .collection import Collection\n from .data._data import Channel, Data, Variable\n out = super().__getitem__(key)\n if 'class' in out.attrs.keys():\n if out.attrs['class'] == 'Channel':\n return Channel(parent=self, id=out.id)\n elif out.attrs['class'] == 'Collection':\n return Collection(filepath=self.filepath, parent=self.name, name=key,\n edit_local=True)\n elif out.attrs['class'] == 'Data':\n return Data(filepath=self.filepath, parent=self.name, name=key,\n edit_local=True)\n elif out.attrs['class'] == 'Variable':\n return Variable(parent=self, id=out.id)\n else:\n return Group(filepath=self.filepath, parent=self.name, name=key,\n edit_local=True)\n else:\n return out\n\n def __new__(cls, *args, **kwargs):\n \"\"\"New object formation handler.\"\"\"\n # 
extract\n filepath = args[0] if len(args) > 0 else kwargs.get('filepath', None)\n parent = args[1] if len(args) > 1 else kwargs.get('parent', None)\n natural_name = args[2] if len(args) > 2 else kwargs.get('name', cls.class_name.lower())\n edit_local = args[3] if len(args) > 3 else kwargs.get('edit_local', False)\n if isinstance(parent, h5py.Group):\n filepath = parent.filepath\n parent = parent.name\n edit_local = True\n # tempfile\n tmpfile = None\n if edit_local and filepath is None:\n raise Exception # TODO: better exception\n if not edit_local:\n tmpfile = tempfile.mkstemp(prefix='', suffix='.wt5')\n p = tmpfile[1]\n if filepath:\n shutil.copyfile(src=filepath, dst=p)\n elif edit_local and filepath:\n p = filepath\n # construct fullpath\n if parent is None:\n parent = ''\n name = posixpath.sep\n else:\n name = natural_name\n fullpath = p + '::' + parent + name\n # create and/or return\n if fullpath not in cls.instances.keys():\n kwargs['filepath'] = p\n kwargs['parent'] = parent\n kwargs['name'] = natural_name\n instance = super(Group, cls).__new__(cls)\n cls.__init__(instance, **kwargs)\n cls.instances[fullpath] = instance\n if tmpfile:\n setattr(instance, '_tmpfile', tmpfile)\n weakref.finalize(instance, instance.close)\n return instance\n instance = cls.instances[fullpath]\n return instance\n\n @property\n def __version__(self):\n if '__version__' not in self.file.attrs.keys():\n self.file.attrs['__version__'] = wt5_version\n return self.file.attrs['__version__']\n\n @property\n def fullpath(self):\n \"\"\"Full path: file and internal structure.\"\"\"\n return self.filepath + '::' + self.name\n\n @property\n def item_names(self):\n \"\"\"Item names.\"\"\"\n if 'item_names' not in self.attrs.keys():\n self.attrs['item_names'] = np.array([], dtype='S')\n return tuple(n.decode() for n in self.attrs['item_names'])\n\n @property\n def natural_name(self):\n \"\"\"Natural name.\"\"\"\n try:\n assert self._natural_name is not None\n except (AssertionError, AttributeError):\n self._natural_name = self.attrs['name']\n finally:\n return self._natural_name\n\n @natural_name.setter\n def natural_name(self, value):\n \"\"\"Set natural name.\"\"\"\n if value is None:\n value = ''\n self._natural_name = self.attrs['name'] = value\n\n @property\n def parent(self):\n \"\"\"Parent.\"\"\"\n try:\n assert self._parent is not None\n except (AssertionError, AttributeError):\n from .collection import Collection\n key = posixpath.dirname(self.fullpath) + posixpath.sep\n self._parent = Collection.instances[key]\n finally:\n return self._parent\n\n def close(self):\n \"\"\"Close the group. Tempfile will be removed, if this is the final reference.\"\"\"\n if(self.fid.valid > 0):\n self.__class__.instances.pop(self.fullpath, None)\n # for some reason, the following file operations sometimes fail\n # this stops execution of the method, meaning that the tempfile is never removed\n # the following try case ensures that the tempfile code is always executed\n # ---Blaise 2018-01-08\n try:\n self.file.flush()\n self.file.close()\n except SystemError:\n pass\n finally:\n if hasattr(self, '_tmpfile'):\n os.close(self._tmpfile[0])\n os.remove(self._tmpfile[1])\n\n def copy(self, parent=None, name=None, verbose=True):\n \"\"\"Create a copy under parent.\n\n All children are copied as well.\n\n Parameters\n ----------\n parent : WrightTools Collection (optional)\n Parent to copy within. If None, copy is created in root of new\n tempfile. Default is None.\n name : string (optional)\n Name of new copy at destination. 
If None, the current natural\n name is used. Default is None.\n verbose : boolean (optional)\n Toggle talkback. Default is True.\n\n Returns\n -------\n Group\n Created copy.\n \"\"\"\n if name is None:\n name = self.natural_name\n if parent is None:\n from ._open import open as wt_open # circular import\n new = Group() # root of new tempfile\n # attrs\n new.attrs.update(self.attrs)\n new.natural_name = name\n # children\n for k, v in self.items():\n super().copy(v, new, name=v.natural_name)\n new.flush()\n p = new.filepath\n new = wt_open(p)\n else:\n # copy\n self.file.copy(self.name, parent, name=name)\n if 'item_names' in parent.attrs.keys():\n new = parent.item_names + (name,)\n parent.attrs['item_names'] = np.array(new, dtype='S')\n new = parent[name]\n # finish\n if verbose:\n print('{0} copied to {1}'.format(self.fullpath, new.fullpath))\n return new\n\n def flush(self):\n \"\"\"Ensure contents are written to file.\"\"\"\n self.file.flush()\n\n def save(self, filepath=None, overwrite=False, verbose=True):\n \"\"\"Save as root of a new file.\n\n Parameters\n ----------\n filepath : string (optional)\n Filepath to write. If None, file is created using natural_name.\n overwrite : boolean (optional)\n Toggle overwrite behavior. Default is False.\n verbose : boolean (optional)\n Toggle talkback. Default is True\n\n Returns\n -------\n str\n Written filepath.\n \"\"\"\n # parse filepath\n if filepath is None:\n filepath = os.path.join(os.getcwd(), self.natural_name + '.wt5')\n elif not filepath.endswith(('.wt5', '.h5', '.hdf5')):\n filepath += '.wt5'\n filepath = os.path.expanduser(filepath)\n # handle overwrite\n if os.path.isfile(filepath):\n if overwrite:\n os.remove(filepath)\n else:\n raise FileExistsError(filepath)\n # copy to new file\n h5py.File(filepath)\n new = Group(filepath=filepath, edit_local=True)\n # attrs\n for k, v in self.attrs.items():\n new.attrs[k] = v\n # children\n for k, v in self.items():\n super().copy(v, new, name=v.natural_name)\n # finish\n new.flush()\n del new\n if verbose:\n print('file saved at', filepath)\n return filepath\n", "path": "WrightTools/_group.py"}], "after_files": [{"content": "\"\"\"Group base class.\"\"\"\n\n\n# --- import --------------------------------------------------------------------------------------\n\n\nimport shutil\nimport os\nimport weakref\nimport tempfile\nimport posixpath\nimport warnings\n\nimport numpy as np\n\nimport h5py\n\n\n# --- define --------------------------------------------------------------------------------------\n\n\nwt5_version = '0.0.0'\n\n\n# --- class ---------------------------------------------------------------------------------------\n\n\nclass MetaClass(type(h5py.Group)):\n\n def __call__(cls, *args, **kwargs):\n \"\"\"Bypass normal construction.\"\"\"\n return cls.__new__(cls, *args, **kwargs)\n\n\nclass Group(h5py.Group, metaclass=MetaClass):\n \"\"\"Container of groups and datasets.\"\"\"\n\n instances = {}\n class_name = 'Group'\n\n def __init__(self, filepath=None, parent=None, name=None, **kwargs):\n if filepath is None:\n return\n # parent\n if parent is None:\n parent = ''\n if parent == '':\n parent = posixpath.sep\n path = posixpath.sep\n else:\n path = posixpath.sep.join([parent, name])\n # file\n self.filepath = filepath\n file = h5py.File(self.filepath, 'a')\n file.require_group(parent)\n file.require_group(path)\n h5py.Group.__init__(self, bind=file[path].id)\n self.__n = 0\n self.fid = self.file.fid\n self.natural_name = name\n # attrs\n self.attrs['class'] = self.class_name\n for 
key, value in kwargs.items():\n try:\n if isinstance(value, str):\n value = value.encode()\n elif isinstance(value, list) and len(value) > 0 and isinstance(value[0], str):\n value = np.array(value, dtype='S')\n self.attrs[key] = value\n except TypeError:\n # some values have no native HDF5 equivalent\n message = \"'{}' not included in attrs because its Type ({}) cannot be represented\"\n message = message.format(key, type(value))\n warnings.warn(message)\n # the following are populated if not already recorded\n self.__version__\n self.item_names\n\n parent = file[parent]\n if parent.name == self.name:\n pass # at root, dont add to item_names\n elif self.natural_name not in parent.attrs['item_names']:\n parent.attrs['item_names'] = np.append(parent.attrs['item_names'],\n self.natural_name.encode())\n\n def __getattr__(self, key):\n \"\"\"Gets called if attribute not in self.__dict__.\n\n See __getattribute__.\n \"\"\"\n if key in self.keys():\n value = self[key]\n setattr(self, key, value)\n return self[key]\n else:\n message = '{0} has no attribute {1}'.format(self.class_name, key)\n raise AttributeError(message)\n\n def __getitem__(self, key):\n from .collection import Collection\n from .data._data import Channel, Data, Variable\n out = super().__getitem__(key)\n if 'class' in out.attrs.keys():\n if out.attrs['class'] == 'Channel':\n return Channel(parent=self, id=out.id)\n elif out.attrs['class'] == 'Collection':\n return Collection(filepath=self.filepath, parent=self.name, name=key,\n edit_local=True)\n elif out.attrs['class'] == 'Data':\n return Data(filepath=self.filepath, parent=self.name, name=key,\n edit_local=True)\n elif out.attrs['class'] == 'Variable':\n return Variable(parent=self, id=out.id)\n else:\n return Group(filepath=self.filepath, parent=self.name, name=key,\n edit_local=True)\n else:\n return out\n\n def __new__(cls, *args, **kwargs):\n \"\"\"New object formation handler.\"\"\"\n # extract\n filepath = args[0] if len(args) > 0 else kwargs.get('filepath', None)\n parent = args[1] if len(args) > 1 else kwargs.get('parent', None)\n natural_name = args[2] if len(args) > 2 else kwargs.get('name', cls.class_name.lower())\n edit_local = args[3] if len(args) > 3 else kwargs.get('edit_local', False)\n if isinstance(parent, h5py.Group):\n filepath = parent.filepath\n parent = parent.name\n edit_local = True\n # tempfile\n tmpfile = None\n if edit_local and filepath is None:\n raise Exception # TODO: better exception\n if not edit_local:\n tmpfile = tempfile.mkstemp(prefix='', suffix='.wt5')\n p = tmpfile[1]\n if filepath:\n shutil.copyfile(src=filepath, dst=p)\n elif edit_local and filepath:\n p = filepath\n # construct fullpath\n if parent is None:\n parent = ''\n name = posixpath.sep\n else:\n name = natural_name\n fullpath = p + '::' + parent + name\n # create and/or return\n if fullpath not in cls.instances.keys():\n kwargs['filepath'] = p\n kwargs['parent'] = parent\n kwargs['name'] = natural_name\n instance = super(Group, cls).__new__(cls)\n cls.__init__(instance, **kwargs)\n cls.instances[fullpath] = instance\n if tmpfile:\n setattr(instance, '_tmpfile', tmpfile)\n weakref.finalize(instance, instance.close)\n return instance\n instance = cls.instances[fullpath]\n return instance\n\n @property\n def __version__(self):\n if '__version__' not in self.file.attrs.keys():\n self.file.attrs['__version__'] = wt5_version\n return self.file.attrs['__version__']\n\n @property\n def fullpath(self):\n \"\"\"Full path: file and internal structure.\"\"\"\n return self.filepath + '::' + 
self.name\n\n @property\n def item_names(self):\n \"\"\"Item names.\"\"\"\n if 'item_names' not in self.attrs.keys():\n self.attrs['item_names'] = np.array([], dtype='S')\n return tuple(n.decode() for n in self.attrs['item_names'])\n\n @property\n def natural_name(self):\n \"\"\"Natural name.\"\"\"\n try:\n assert self._natural_name is not None\n except (AssertionError, AttributeError):\n self._natural_name = self.attrs['name']\n finally:\n return self._natural_name\n\n @natural_name.setter\n def natural_name(self, value):\n \"\"\"Set natural name.\"\"\"\n if value is None:\n value = ''\n self._natural_name = self.attrs['name'] = value\n\n @property\n def parent(self):\n \"\"\"Parent.\"\"\"\n try:\n assert self._parent is not None\n except (AssertionError, AttributeError):\n from .collection import Collection\n key = posixpath.dirname(self.fullpath) + posixpath.sep\n self._parent = Collection.instances[key]\n finally:\n return self._parent\n\n def close(self):\n \"\"\"Close the group. Tempfile will be removed, if this is the final reference.\"\"\"\n if(self.fid.valid > 0):\n self.__class__.instances.pop(self.fullpath, None)\n # for some reason, the following file operations sometimes fail\n # this stops execution of the method, meaning that the tempfile is never removed\n # the following try case ensures that the tempfile code is always executed\n # ---Blaise 2018-01-08\n try:\n self.file.flush()\n self.file.close()\n except SystemError:\n pass\n finally:\n if hasattr(self, '_tmpfile'):\n os.close(self._tmpfile[0])\n os.remove(self._tmpfile[1])\n\n def copy(self, parent=None, name=None, verbose=True):\n \"\"\"Create a copy under parent.\n\n All children are copied as well.\n\n Parameters\n ----------\n parent : WrightTools Collection (optional)\n Parent to copy within. If None, copy is created in root of new\n tempfile. Default is None.\n name : string (optional)\n Name of new copy at destination. If None, the current natural\n name is used. Default is None.\n verbose : boolean (optional)\n Toggle talkback. Default is True.\n\n Returns\n -------\n Group\n Created copy.\n \"\"\"\n if name is None:\n name = self.natural_name\n if parent is None:\n from ._open import open as wt_open # circular import\n new = Group() # root of new tempfile\n # attrs\n new.attrs.update(self.attrs)\n new.natural_name = name\n # children\n for k, v in self.items():\n super().copy(v, new, name=v.natural_name)\n new.flush()\n p = new.filepath\n new = wt_open(p)\n else:\n # copy\n self.file.copy(self.name, parent, name=name)\n new = parent[name]\n # finish\n if verbose:\n print('{0} copied to {1}'.format(self.fullpath, new.fullpath))\n return new\n\n def flush(self):\n \"\"\"Ensure contents are written to file.\"\"\"\n self.file.flush()\n\n def save(self, filepath=None, overwrite=False, verbose=True):\n \"\"\"Save as root of a new file.\n\n Parameters\n ----------\n filepath : string (optional)\n Filepath to write. If None, file is created using natural_name.\n overwrite : boolean (optional)\n Toggle overwrite behavior. Default is False.\n verbose : boolean (optional)\n Toggle talkback. 
Default is True\n\n Returns\n -------\n str\n Written filepath.\n \"\"\"\n # parse filepath\n if filepath is None:\n filepath = os.path.join(os.getcwd(), self.natural_name + '.wt5')\n elif not filepath.endswith(('.wt5', '.h5', '.hdf5')):\n filepath += '.wt5'\n filepath = os.path.expanduser(filepath)\n # handle overwrite\n if os.path.isfile(filepath):\n if overwrite:\n os.remove(filepath)\n else:\n raise FileExistsError(filepath)\n # copy to new file\n h5py.File(filepath)\n new = Group(filepath=filepath, edit_local=True)\n # attrs\n for k, v in self.attrs.items():\n new.attrs[k] = v\n # children\n for k, v in self.items():\n super().copy(v, new, name=v.natural_name)\n # finish\n new.flush()\n del new\n if verbose:\n print('file saved at', filepath)\n return filepath\n", "path": "WrightTools/_group.py"}]} |
gh_patches_debug_1106 | rasdani/github-patches | git_diff | Textualize__textual-1837 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[CSS] Descendant type selectors can't have a numeric in their name
Consider the following code:
```python
from textual.app import App, ComposeResult
from textual.containers import Vertical
from textual.widgets import Header, Footer, Label
class LabelH1( Label ):
...
class CSSOddnessApp( App[ None ] ):
CSS = """
Vertical LabelH1 {
background: red;
}
"""
def compose( self ) -> ComposeResult:
yield Header()
yield Vertical(
Label( "Label" ),
LabelH1( "LabelH1" ),
)
yield Footer()
if __name__ == "__main__":
CSSOddnessApp().run()
```
When run we get the following error:
```
Error in stylesheet:
/Users/davep/develop/python/textual-sandbox/css_oddness.py:CSSOddnessApp:1:19
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ ❱ 1 │ │
│ 2 │ Vertical LabelH1 { │
│ 3 │ │ background: red; │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
• Expected one of 'combinator child', 'comment start', 'declaration set start', 'new selector', 'pseudo class', 'selector', 'selector class', 'selector id',
'selector universal', or 'whitespace'.
• Did you forget a semicolon at the end of a line?
```
The same thing happens with `Vertical > LabelH1`. On the other hand, if I remove the number from the inherited label widget:
```python
from textual.app import App, ComposeResult
from textual.containers import Vertical
from textual.widgets import Header, Footer, Label
class LabelHOne( Label ):
...
class CSSOddnessApp( App[ None ] ):
CSS = """
Vertical LabelHOne {
background: red;
}
"""
def compose( self ) -> ComposeResult:
yield Header()
yield Vertical(
Label( "Label" ),
LabelHOne( "LabelHOne" ),
)
yield Footer()
if __name__ == "__main__":
CSSOddnessApp().run()
```
this works fine. Likewise, if I retain the name but *don't* use combination:
```python
from textual.app import App, ComposeResult
from textual.containers import Vertical
from textual.widgets import Header, Footer, Label
class LabelH1( Label ):
...
class CSSOddnessApp( App[ None ] ):
CSS = """
LabelH1 {
background: red;
}
"""
def compose( self ) -> ComposeResult:
yield Header()
yield Vertical(
Label( "Label" ),
LabelH1( "LabelH1" ),
)
yield Footer()
if __name__ == "__main__":
CSSOddnessApp().run()
```
that also works fine.
I would suspect a variation on #1253.
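A quick regex check against the two selector patterns in `src/textual/css/tokenize.py` (quoted in full in the files below) seems consistent with that suspicion: the root-scope `selector_start` pattern allows digits after the first character, but the `selector` pattern used while continuing a selector (i.e. after a descendant/child combinator) does not, so the trailing `1` of `LabelH1` is left over with nothing to match it. A minimal sketch (here `IDENTIFIER` is the real constant from that file, while `CONTINUE_SELECTOR` is just a made-up local name for the inline `selector=...` pattern):

```python
import re

IDENTIFIER = r"[a-zA-Z_\-][a-zA-Z0-9_\-]*"  # used for selector_start at root scope
CONTINUE_SELECTOR = r"[a-zA-Z_\-]+"         # used for `selector` in expect_selector_continue

print(re.match(IDENTIFIER, "LabelH1").group())         # 'LabelH1' -- digits allowed
print(re.match(CONTINUE_SELECTOR, "LabelH1").group())  # 'LabelH'  -- stops before the '1'
```

The leftover `1` then matches none of the alternatives listed in the error message, which would explain why only the combined form fails.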
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/textual/css/tokenize.py`
Content:
```
1 from __future__ import annotations
2
3 import re
4 from pathlib import PurePath
5 from typing import Iterable
6
7 from textual.css.tokenizer import Expect, Token, Tokenizer
8
9 PERCENT = r"-?\d+\.?\d*%"
10 DECIMAL = r"-?\d+\.?\d*"
11 COMMA = r"\s*,\s*"
12 OPEN_BRACE = r"\(\s*"
13 CLOSE_BRACE = r"\s*\)"
14
15 HEX_COLOR = r"\#[0-9a-fA-F]{8}|\#[0-9a-fA-F]{6}|\#[0-9a-fA-F]{4}|\#[0-9a-fA-F]{3}"
16 RGB_COLOR = rf"rgb{OPEN_BRACE}{DECIMAL}{COMMA}{DECIMAL}{COMMA}{DECIMAL}{CLOSE_BRACE}|rgba{OPEN_BRACE}{DECIMAL}{COMMA}{DECIMAL}{COMMA}{DECIMAL}{COMMA}{DECIMAL}{CLOSE_BRACE}"
17 HSL_COLOR = rf"hsl{OPEN_BRACE}{DECIMAL}{COMMA}{PERCENT}{COMMA}{PERCENT}{CLOSE_BRACE}|hsla{OPEN_BRACE}{DECIMAL}{COMMA}{PERCENT}{COMMA}{PERCENT}{COMMA}{DECIMAL}{CLOSE_BRACE}"
18
19 COMMENT_START = r"\/\*"
20 SCALAR = rf"{DECIMAL}(?:fr|%|w|h|vw|vh)"
21 DURATION = r"\d+\.?\d*(?:ms|s)"
22 NUMBER = r"\-?\d+\.?\d*"
23 COLOR = rf"{HEX_COLOR}|{RGB_COLOR}|{HSL_COLOR}"
24 KEY_VALUE = r"[a-zA-Z_-][a-zA-Z0-9_-]*=[0-9a-zA-Z_\-\/]+"
25 TOKEN = "[a-zA-Z][a-zA-Z0-9_-]*"
26 STRING = r"\".*?\""
27 VARIABLE_REF = r"\$[a-zA-Z0-9_\-]+"
28
29 IDENTIFIER = r"[a-zA-Z_\-][a-zA-Z0-9_\-]*"
30
31 # Values permitted in variable and rule declarations.
32 DECLARATION_VALUES = {
33 "scalar": SCALAR,
34 "duration": DURATION,
35 "number": NUMBER,
36 "color": COLOR,
37 "key_value": KEY_VALUE,
38 "token": TOKEN,
39 "string": STRING,
40 "variable_ref": VARIABLE_REF,
41 }
42
43 # The tokenizers "expectation" while at the root/highest level of scope
44 # in the CSS file. At this level we might expect to see selectors, comments,
45 # variable definitions etc.
46 expect_root_scope = Expect(
47 whitespace=r"\s+",
48 comment_start=COMMENT_START,
49 selector_start_id=r"\#" + IDENTIFIER,
50 selector_start_class=r"\." + IDENTIFIER,
51 selector_start_universal=r"\*",
52 selector_start=IDENTIFIER,
53 variable_name=rf"{VARIABLE_REF}:",
54 ).expect_eof(True)
55
56 # After a variable declaration e.g. "$warning-text: TOKENS;"
57 # for tokenizing variable value ------^~~~~~~^
58 expect_variable_name_continue = Expect(
59 variable_value_end=r"\n|;",
60 whitespace=r"\s+",
61 comment_start=COMMENT_START,
62 **DECLARATION_VALUES,
63 ).expect_eof(True)
64
65 expect_comment_end = Expect(
66 comment_end=re.escape("*/"),
67 )
68
69 # After we come across a selector in CSS e.g. ".my-class", we may
70 # find other selectors, pseudo-classes... e.g. ".my-class :hover"
71 expect_selector_continue = Expect(
72 whitespace=r"\s+",
73 comment_start=COMMENT_START,
74 pseudo_class=r"\:[a-zA-Z_-]+",
75 selector_id=r"\#[a-zA-Z_\-][a-zA-Z0-9_\-]*",
76 selector_class=r"\.[a-zA-Z_\-][a-zA-Z0-9_\-]*",
77 selector_universal=r"\*",
78 selector=r"[a-zA-Z_\-]+",
79 combinator_child=">",
80 new_selector=r",",
81 declaration_set_start=r"\{",
82 )
83
84 # A rule declaration e.g. "text: red;"
85 # ^---^
86 expect_declaration = Expect(
87 whitespace=r"\s+",
88 comment_start=COMMENT_START,
89 declaration_name=r"[a-zA-Z_\-]+\:",
90 declaration_set_end=r"\}",
91 )
92
93 expect_declaration_solo = Expect(
94 whitespace=r"\s+",
95 comment_start=COMMENT_START,
96 declaration_name=r"[a-zA-Z_\-]+\:",
97 declaration_set_end=r"\}",
98 ).expect_eof(True)
99
100 # The value(s)/content from a rule declaration e.g. "text: red;"
101 # ^---^
102 expect_declaration_content = Expect(
103 declaration_end=r";",
104 whitespace=r"\s+",
105 comment_start=COMMENT_START,
106 **DECLARATION_VALUES,
107 important=r"\!important",
108 comma=",",
109 declaration_set_end=r"\}",
110 )
111
112 expect_declaration_content_solo = Expect(
113 declaration_end=r";",
114 whitespace=r"\s+",
115 comment_start=COMMENT_START,
116 **DECLARATION_VALUES,
117 important=r"\!important",
118 comma=",",
119 declaration_set_end=r"\}",
120 ).expect_eof(True)
121
122
123 class TokenizerState:
124 """State machine for the tokenizer.
125
126 Attributes:
127 EXPECT: The initial expectation of the tokenizer. Since we start tokenizing
128 at the root scope, we might expect to see either a variable or selector, for example.
129 STATE_MAP: Maps token names to Expects, defines the sets of valid tokens
130 that we'd expect to see next, given the current token. For example, if
131 we've just processed a variable declaration name, we next expect to see
132 the value of that variable.
133 """
134
135 EXPECT = expect_root_scope
136 STATE_MAP = {
137 "variable_name": expect_variable_name_continue,
138 "variable_value_end": expect_root_scope,
139 "selector_start": expect_selector_continue,
140 "selector_start_id": expect_selector_continue,
141 "selector_start_class": expect_selector_continue,
142 "selector_start_universal": expect_selector_continue,
143 "selector_id": expect_selector_continue,
144 "selector_class": expect_selector_continue,
145 "selector_universal": expect_selector_continue,
146 "declaration_set_start": expect_declaration,
147 "declaration_name": expect_declaration_content,
148 "declaration_end": expect_declaration,
149 "declaration_set_end": expect_root_scope,
150 }
151
152 def __call__(self, code: str, path: str | PurePath) -> Iterable[Token]:
153 tokenizer = Tokenizer(code, path=path)
154 expect = self.EXPECT
155 get_token = tokenizer.get_token
156 get_state = self.STATE_MAP.get
157 while True:
158 token = get_token(expect)
159 name = token.name
160 if name == "comment_start":
161 tokenizer.skip_to(expect_comment_end)
162 continue
163 elif name == "eof":
164 break
165 expect = get_state(name, expect)
166 yield token
167
168
169 class DeclarationTokenizerState(TokenizerState):
170 EXPECT = expect_declaration_solo
171 STATE_MAP = {
172 "declaration_name": expect_declaration_content,
173 "declaration_end": expect_declaration_solo,
174 }
175
176
177 class ValueTokenizerState(TokenizerState):
178 EXPECT = expect_declaration_content_solo
179
180
181 tokenize = TokenizerState()
182 tokenize_declarations = DeclarationTokenizerState()
183 tokenize_value = ValueTokenizerState()
184
185
186 def tokenize_values(values: dict[str, str]) -> dict[str, list[Token]]:
187 """Tokens the values in a dict of strings.
188
189 Args:
190 values: A mapping of CSS variable name on to a value, to be
191 added to the CSS context.
192
193 Returns:
194 A mapping of name on to a list of tokens,
195 """
196 value_tokens = {
197 name: list(tokenize_value(value, "__name__")) for name, value in values.items()
198 }
199 return value_tokens
200
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/textual/css/tokenize.py b/src/textual/css/tokenize.py
--- a/src/textual/css/tokenize.py
+++ b/src/textual/css/tokenize.py
@@ -75,7 +75,7 @@
selector_id=r"\#[a-zA-Z_\-][a-zA-Z0-9_\-]*",
selector_class=r"\.[a-zA-Z_\-][a-zA-Z0-9_\-]*",
selector_universal=r"\*",
- selector=r"[a-zA-Z_\-]+",
+ selector=IDENTIFIER,
combinator_child=">",
new_selector=r",",
declaration_set_start=r"\{",
| {"golden_diff": "diff --git a/src/textual/css/tokenize.py b/src/textual/css/tokenize.py\n--- a/src/textual/css/tokenize.py\n+++ b/src/textual/css/tokenize.py\n@@ -75,7 +75,7 @@\n selector_id=r\"\\#[a-zA-Z_\\-][a-zA-Z0-9_\\-]*\",\n selector_class=r\"\\.[a-zA-Z_\\-][a-zA-Z0-9_\\-]*\",\n selector_universal=r\"\\*\",\n- selector=r\"[a-zA-Z_\\-]+\",\n+ selector=IDENTIFIER,\n combinator_child=\">\",\n new_selector=r\",\",\n declaration_set_start=r\"\\{\",\n", "issue": "[CSS] Descendant type selectors can't have a numeric in their name\nConsider the following code:\r\n\r\n```python\r\nfrom textual.app import App, ComposeResult\r\nfrom textual.containers import Vertical\r\nfrom textual.widgets import Header, Footer, Label\r\n\r\nclass LabelH1( Label ):\r\n ...\r\n\r\nclass CSSOddnessApp( App[ None ] ):\r\n\r\n CSS = \"\"\"\r\n Vertical LabelH1 {\r\n background: red;\r\n }\r\n \"\"\"\r\n\r\n def compose( self ) -> ComposeResult:\r\n yield Header()\r\n yield Vertical(\r\n Label( \"Label\" ),\r\n LabelH1( \"LabelH1\" ),\r\n )\r\n yield Footer()\r\n\r\nif __name__ == \"__main__\":\r\n CSSOddnessApp().run()\r\n```\r\n\r\nWhen run we get the following error:\r\n\r\n```\r\n Error in stylesheet:\r\n /Users/davep/develop/python/textual-sandbox/css_oddness.py:CSSOddnessApp:1:19\r\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\r\n\u2502 \u2771 1 \u2502 \u2502\r\n\u2502 2 \u2502 Vertical LabelH1 { \u2502\r\n\u2502 3 \u2502 \u2502 background: red; \u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\n \u2022 Expected one of 'combinator child', 'comment start', 'declaration set start', 'new selector', 'pseudo class', 'selector', 'selector class', 'selector id',\r\n 'selector universal', or 'whitespace'.\r\n \u2022 Did you forget a 
semicolon at the end of a line?\r\n```\r\n\r\nThe same thing happens with `Vertical LabelH1`. On the other hand, if I remove the number from the inherited label widget:\r\n\r\n```python\r\nfrom textual.app import App, ComposeResult\r\nfrom textual.containers import Vertical\r\nfrom textual.widgets import Header, Footer, Label\r\n\r\nclass LabelHOne( Label ):\r\n ...\r\n\r\nclass CSSOddnessApp( App[ None ] ):\r\n\r\n CSS = \"\"\"\r\n Vertical LabelHOne {\r\n background: red;\r\n }\r\n \"\"\"\r\n\r\n def compose( self ) -> ComposeResult:\r\n yield Header()\r\n yield Vertical(\r\n Label( \"Label\" ),\r\n LabelHOne( \"LabelHOne\" ),\r\n )\r\n yield Footer()\r\n\r\nif __name__ == \"__main__\":\r\n CSSOddnessApp().run()\r\n```\r\n\r\nthis works fine. Likewise, if I retain the name but *don't* use combination:\r\n\r\n```python\r\nfrom textual.app import App, ComposeResult\r\nfrom textual.containers import Vertical\r\nfrom textual.widgets import Header, Footer, Label\r\n\r\nclass LabelH1( Label ):\r\n ...\r\n\r\nclass CSSOddnessApp( App[ None ] ):\r\n\r\n CSS = \"\"\"\r\n LabelH1 {\r\n background: red;\r\n }\r\n \"\"\"\r\n\r\n def compose( self ) -> ComposeResult:\r\n yield Header()\r\n yield Vertical(\r\n Label( \"Label\" ),\r\n LabelH1( \"LabelH1\" ),\r\n )\r\n yield Footer()\r\n\r\nif __name__ == \"__main__\":\r\n CSSOddnessApp().run()\r\n```\r\n\r\nthat also works fine.\r\n\r\nI would suspect a variation on #1253.\n[CSS] Descendant type selectors can't have a numeric in their name\nConsider the following code:\r\n\r\n```python\r\nfrom textual.app import App, ComposeResult\r\nfrom textual.containers import Vertical\r\nfrom textual.widgets import Header, Footer, Label\r\n\r\nclass LabelH1( Label ):\r\n ...\r\n\r\nclass CSSOddnessApp( App[ None ] ):\r\n\r\n CSS = \"\"\"\r\n Vertical LabelH1 {\r\n background: red;\r\n }\r\n \"\"\"\r\n\r\n def compose( self ) -> ComposeResult:\r\n yield Header()\r\n yield Vertical(\r\n Label( \"Label\" ),\r\n LabelH1( \"LabelH1\" ),\r\n )\r\n yield Footer()\r\n\r\nif __name__ == \"__main__\":\r\n CSSOddnessApp().run()\r\n```\r\n\r\nWhen run we get the following error:\r\n\r\n```\r\n Error in stylesheet:\r\n /Users/davep/develop/python/textual-sandbox/css_oddness.py:CSSOddnessApp:1:19\r\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\r\n\u2502 \u2771 1 \u2502 \u2502\r\n\u2502 2 \u2502 Vertical LabelH1 { \u2502\r\n\u2502 3 \u2502 \u2502 background: red; 
\u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\n \u2022 Expected one of 'combinator child', 'comment start', 'declaration set start', 'new selector', 'pseudo class', 'selector', 'selector class', 'selector id',\r\n 'selector universal', or 'whitespace'.\r\n \u2022 Did you forget a semicolon at the end of a line?\r\n```\r\n\r\nThe same thing happens with `Vertical LabelH1`. On the other hand, if I remove the number from the inherited label widget:\r\n\r\n```python\r\nfrom textual.app import App, ComposeResult\r\nfrom textual.containers import Vertical\r\nfrom textual.widgets import Header, Footer, Label\r\n\r\nclass LabelHOne( Label ):\r\n ...\r\n\r\nclass CSSOddnessApp( App[ None ] ):\r\n\r\n CSS = \"\"\"\r\n Vertical LabelHOne {\r\n background: red;\r\n }\r\n \"\"\"\r\n\r\n def compose( self ) -> ComposeResult:\r\n yield Header()\r\n yield Vertical(\r\n Label( \"Label\" ),\r\n LabelHOne( \"LabelHOne\" ),\r\n )\r\n yield Footer()\r\n\r\nif __name__ == \"__main__\":\r\n CSSOddnessApp().run()\r\n```\r\n\r\nthis works fine. 
Likewise, if I retain the name but *don't* use combination:\r\n\r\n```python\r\nfrom textual.app import App, ComposeResult\r\nfrom textual.containers import Vertical\r\nfrom textual.widgets import Header, Footer, Label\r\n\r\nclass LabelH1( Label ):\r\n ...\r\n\r\nclass CSSOddnessApp( App[ None ] ):\r\n\r\n CSS = \"\"\"\r\n LabelH1 {\r\n background: red;\r\n }\r\n \"\"\"\r\n\r\n def compose( self ) -> ComposeResult:\r\n yield Header()\r\n yield Vertical(\r\n Label( \"Label\" ),\r\n LabelH1( \"LabelH1\" ),\r\n )\r\n yield Footer()\r\n\r\nif __name__ == \"__main__\":\r\n CSSOddnessApp().run()\r\n```\r\n\r\nthat also works fine.\r\n\r\nI would suspect a variation on #1253.\n", "before_files": [{"content": "from __future__ import annotations\n\nimport re\nfrom pathlib import PurePath\nfrom typing import Iterable\n\nfrom textual.css.tokenizer import Expect, Token, Tokenizer\n\nPERCENT = r\"-?\\d+\\.?\\d*%\"\nDECIMAL = r\"-?\\d+\\.?\\d*\"\nCOMMA = r\"\\s*,\\s*\"\nOPEN_BRACE = r\"\\(\\s*\"\nCLOSE_BRACE = r\"\\s*\\)\"\n\nHEX_COLOR = r\"\\#[0-9a-fA-F]{8}|\\#[0-9a-fA-F]{6}|\\#[0-9a-fA-F]{4}|\\#[0-9a-fA-F]{3}\"\nRGB_COLOR = rf\"rgb{OPEN_BRACE}{DECIMAL}{COMMA}{DECIMAL}{COMMA}{DECIMAL}{CLOSE_BRACE}|rgba{OPEN_BRACE}{DECIMAL}{COMMA}{DECIMAL}{COMMA}{DECIMAL}{COMMA}{DECIMAL}{CLOSE_BRACE}\"\nHSL_COLOR = rf\"hsl{OPEN_BRACE}{DECIMAL}{COMMA}{PERCENT}{COMMA}{PERCENT}{CLOSE_BRACE}|hsla{OPEN_BRACE}{DECIMAL}{COMMA}{PERCENT}{COMMA}{PERCENT}{COMMA}{DECIMAL}{CLOSE_BRACE}\"\n\nCOMMENT_START = r\"\\/\\*\"\nSCALAR = rf\"{DECIMAL}(?:fr|%|w|h|vw|vh)\"\nDURATION = r\"\\d+\\.?\\d*(?:ms|s)\"\nNUMBER = r\"\\-?\\d+\\.?\\d*\"\nCOLOR = rf\"{HEX_COLOR}|{RGB_COLOR}|{HSL_COLOR}\"\nKEY_VALUE = r\"[a-zA-Z_-][a-zA-Z0-9_-]*=[0-9a-zA-Z_\\-\\/]+\"\nTOKEN = \"[a-zA-Z][a-zA-Z0-9_-]*\"\nSTRING = r\"\\\".*?\\\"\"\nVARIABLE_REF = r\"\\$[a-zA-Z0-9_\\-]+\"\n\nIDENTIFIER = r\"[a-zA-Z_\\-][a-zA-Z0-9_\\-]*\"\n\n# Values permitted in variable and rule declarations.\nDECLARATION_VALUES = {\n \"scalar\": SCALAR,\n \"duration\": DURATION,\n \"number\": NUMBER,\n \"color\": COLOR,\n \"key_value\": KEY_VALUE,\n \"token\": TOKEN,\n \"string\": STRING,\n \"variable_ref\": VARIABLE_REF,\n}\n\n# The tokenizers \"expectation\" while at the root/highest level of scope\n# in the CSS file. At this level we might expect to see selectors, comments,\n# variable definitions etc.\nexpect_root_scope = Expect(\n whitespace=r\"\\s+\",\n comment_start=COMMENT_START,\n selector_start_id=r\"\\#\" + IDENTIFIER,\n selector_start_class=r\"\\.\" + IDENTIFIER,\n selector_start_universal=r\"\\*\",\n selector_start=IDENTIFIER,\n variable_name=rf\"{VARIABLE_REF}:\",\n).expect_eof(True)\n\n# After a variable declaration e.g. \"$warning-text: TOKENS;\"\n# for tokenizing variable value ------^~~~~~~^\nexpect_variable_name_continue = Expect(\n variable_value_end=r\"\\n|;\",\n whitespace=r\"\\s+\",\n comment_start=COMMENT_START,\n **DECLARATION_VALUES,\n).expect_eof(True)\n\nexpect_comment_end = Expect(\n comment_end=re.escape(\"*/\"),\n)\n\n# After we come across a selector in CSS e.g. \".my-class\", we may\n# find other selectors, pseudo-classes... e.g. \".my-class :hover\"\nexpect_selector_continue = Expect(\n whitespace=r\"\\s+\",\n comment_start=COMMENT_START,\n pseudo_class=r\"\\:[a-zA-Z_-]+\",\n selector_id=r\"\\#[a-zA-Z_\\-][a-zA-Z0-9_\\-]*\",\n selector_class=r\"\\.[a-zA-Z_\\-][a-zA-Z0-9_\\-]*\",\n selector_universal=r\"\\*\",\n selector=r\"[a-zA-Z_\\-]+\",\n combinator_child=\">\",\n new_selector=r\",\",\n declaration_set_start=r\"\\{\",\n)\n\n# A rule declaration e.g. 
\"text: red;\"\n# ^---^\nexpect_declaration = Expect(\n whitespace=r\"\\s+\",\n comment_start=COMMENT_START,\n declaration_name=r\"[a-zA-Z_\\-]+\\:\",\n declaration_set_end=r\"\\}\",\n)\n\nexpect_declaration_solo = Expect(\n whitespace=r\"\\s+\",\n comment_start=COMMENT_START,\n declaration_name=r\"[a-zA-Z_\\-]+\\:\",\n declaration_set_end=r\"\\}\",\n).expect_eof(True)\n\n# The value(s)/content from a rule declaration e.g. \"text: red;\"\n# ^---^\nexpect_declaration_content = Expect(\n declaration_end=r\";\",\n whitespace=r\"\\s+\",\n comment_start=COMMENT_START,\n **DECLARATION_VALUES,\n important=r\"\\!important\",\n comma=\",\",\n declaration_set_end=r\"\\}\",\n)\n\nexpect_declaration_content_solo = Expect(\n declaration_end=r\";\",\n whitespace=r\"\\s+\",\n comment_start=COMMENT_START,\n **DECLARATION_VALUES,\n important=r\"\\!important\",\n comma=\",\",\n declaration_set_end=r\"\\}\",\n).expect_eof(True)\n\n\nclass TokenizerState:\n \"\"\"State machine for the tokenizer.\n\n Attributes:\n EXPECT: The initial expectation of the tokenizer. Since we start tokenizing\n at the root scope, we might expect to see either a variable or selector, for example.\n STATE_MAP: Maps token names to Expects, defines the sets of valid tokens\n that we'd expect to see next, given the current token. For example, if\n we've just processed a variable declaration name, we next expect to see\n the value of that variable.\n \"\"\"\n\n EXPECT = expect_root_scope\n STATE_MAP = {\n \"variable_name\": expect_variable_name_continue,\n \"variable_value_end\": expect_root_scope,\n \"selector_start\": expect_selector_continue,\n \"selector_start_id\": expect_selector_continue,\n \"selector_start_class\": expect_selector_continue,\n \"selector_start_universal\": expect_selector_continue,\n \"selector_id\": expect_selector_continue,\n \"selector_class\": expect_selector_continue,\n \"selector_universal\": expect_selector_continue,\n \"declaration_set_start\": expect_declaration,\n \"declaration_name\": expect_declaration_content,\n \"declaration_end\": expect_declaration,\n \"declaration_set_end\": expect_root_scope,\n }\n\n def __call__(self, code: str, path: str | PurePath) -> Iterable[Token]:\n tokenizer = Tokenizer(code, path=path)\n expect = self.EXPECT\n get_token = tokenizer.get_token\n get_state = self.STATE_MAP.get\n while True:\n token = get_token(expect)\n name = token.name\n if name == \"comment_start\":\n tokenizer.skip_to(expect_comment_end)\n continue\n elif name == \"eof\":\n break\n expect = get_state(name, expect)\n yield token\n\n\nclass DeclarationTokenizerState(TokenizerState):\n EXPECT = expect_declaration_solo\n STATE_MAP = {\n \"declaration_name\": expect_declaration_content,\n \"declaration_end\": expect_declaration_solo,\n }\n\n\nclass ValueTokenizerState(TokenizerState):\n EXPECT = expect_declaration_content_solo\n\n\ntokenize = TokenizerState()\ntokenize_declarations = DeclarationTokenizerState()\ntokenize_value = ValueTokenizerState()\n\n\ndef tokenize_values(values: dict[str, str]) -> dict[str, list[Token]]:\n \"\"\"Tokens the values in a dict of strings.\n\n Args:\n values: A mapping of CSS variable name on to a value, to be\n added to the CSS context.\n\n Returns:\n A mapping of name on to a list of tokens,\n \"\"\"\n value_tokens = {\n name: list(tokenize_value(value, \"__name__\")) for name, value in values.items()\n }\n return value_tokens\n", "path": "src/textual/css/tokenize.py"}], "after_files": [{"content": "from __future__ import annotations\n\nimport re\nfrom pathlib import 
PurePath\nfrom typing import Iterable\n\nfrom textual.css.tokenizer import Expect, Token, Tokenizer\n\nPERCENT = r\"-?\\d+\\.?\\d*%\"\nDECIMAL = r\"-?\\d+\\.?\\d*\"\nCOMMA = r\"\\s*,\\s*\"\nOPEN_BRACE = r\"\\(\\s*\"\nCLOSE_BRACE = r\"\\s*\\)\"\n\nHEX_COLOR = r\"\\#[0-9a-fA-F]{8}|\\#[0-9a-fA-F]{6}|\\#[0-9a-fA-F]{4}|\\#[0-9a-fA-F]{3}\"\nRGB_COLOR = rf\"rgb{OPEN_BRACE}{DECIMAL}{COMMA}{DECIMAL}{COMMA}{DECIMAL}{CLOSE_BRACE}|rgba{OPEN_BRACE}{DECIMAL}{COMMA}{DECIMAL}{COMMA}{DECIMAL}{COMMA}{DECIMAL}{CLOSE_BRACE}\"\nHSL_COLOR = rf\"hsl{OPEN_BRACE}{DECIMAL}{COMMA}{PERCENT}{COMMA}{PERCENT}{CLOSE_BRACE}|hsla{OPEN_BRACE}{DECIMAL}{COMMA}{PERCENT}{COMMA}{PERCENT}{COMMA}{DECIMAL}{CLOSE_BRACE}\"\n\nCOMMENT_START = r\"\\/\\*\"\nSCALAR = rf\"{DECIMAL}(?:fr|%|w|h|vw|vh)\"\nDURATION = r\"\\d+\\.?\\d*(?:ms|s)\"\nNUMBER = r\"\\-?\\d+\\.?\\d*\"\nCOLOR = rf\"{HEX_COLOR}|{RGB_COLOR}|{HSL_COLOR}\"\nKEY_VALUE = r\"[a-zA-Z_-][a-zA-Z0-9_-]*=[0-9a-zA-Z_\\-\\/]+\"\nTOKEN = \"[a-zA-Z][a-zA-Z0-9_-]*\"\nSTRING = r\"\\\".*?\\\"\"\nVARIABLE_REF = r\"\\$[a-zA-Z0-9_\\-]+\"\n\nIDENTIFIER = r\"[a-zA-Z_\\-][a-zA-Z0-9_\\-]*\"\n\n# Values permitted in variable and rule declarations.\nDECLARATION_VALUES = {\n \"scalar\": SCALAR,\n \"duration\": DURATION,\n \"number\": NUMBER,\n \"color\": COLOR,\n \"key_value\": KEY_VALUE,\n \"token\": TOKEN,\n \"string\": STRING,\n \"variable_ref\": VARIABLE_REF,\n}\n\n# The tokenizers \"expectation\" while at the root/highest level of scope\n# in the CSS file. At this level we might expect to see selectors, comments,\n# variable definitions etc.\nexpect_root_scope = Expect(\n whitespace=r\"\\s+\",\n comment_start=COMMENT_START,\n selector_start_id=r\"\\#\" + IDENTIFIER,\n selector_start_class=r\"\\.\" + IDENTIFIER,\n selector_start_universal=r\"\\*\",\n selector_start=IDENTIFIER,\n variable_name=rf\"{VARIABLE_REF}:\",\n).expect_eof(True)\n\n# After a variable declaration e.g. \"$warning-text: TOKENS;\"\n# for tokenizing variable value ------^~~~~~~^\nexpect_variable_name_continue = Expect(\n variable_value_end=r\"\\n|;\",\n whitespace=r\"\\s+\",\n comment_start=COMMENT_START,\n **DECLARATION_VALUES,\n).expect_eof(True)\n\nexpect_comment_end = Expect(\n comment_end=re.escape(\"*/\"),\n)\n\n# After we come across a selector in CSS e.g. \".my-class\", we may\n# find other selectors, pseudo-classes... e.g. \".my-class :hover\"\nexpect_selector_continue = Expect(\n whitespace=r\"\\s+\",\n comment_start=COMMENT_START,\n pseudo_class=r\"\\:[a-zA-Z_-]+\",\n selector_id=r\"\\#[a-zA-Z_\\-][a-zA-Z0-9_\\-]*\",\n selector_class=r\"\\.[a-zA-Z_\\-][a-zA-Z0-9_\\-]*\",\n selector_universal=r\"\\*\",\n selector=IDENTIFIER,\n combinator_child=\">\",\n new_selector=r\",\",\n declaration_set_start=r\"\\{\",\n)\n\n# A rule declaration e.g. \"text: red;\"\n# ^---^\nexpect_declaration = Expect(\n whitespace=r\"\\s+\",\n comment_start=COMMENT_START,\n declaration_name=r\"[a-zA-Z_\\-]+\\:\",\n declaration_set_end=r\"\\}\",\n)\n\nexpect_declaration_solo = Expect(\n whitespace=r\"\\s+\",\n comment_start=COMMENT_START,\n declaration_name=r\"[a-zA-Z_\\-]+\\:\",\n declaration_set_end=r\"\\}\",\n).expect_eof(True)\n\n# The value(s)/content from a rule declaration e.g. 
\"text: red;\"\n# ^---^\nexpect_declaration_content = Expect(\n declaration_end=r\";\",\n whitespace=r\"\\s+\",\n comment_start=COMMENT_START,\n **DECLARATION_VALUES,\n important=r\"\\!important\",\n comma=\",\",\n declaration_set_end=r\"\\}\",\n)\n\nexpect_declaration_content_solo = Expect(\n declaration_end=r\";\",\n whitespace=r\"\\s+\",\n comment_start=COMMENT_START,\n **DECLARATION_VALUES,\n important=r\"\\!important\",\n comma=\",\",\n declaration_set_end=r\"\\}\",\n).expect_eof(True)\n\n\nclass TokenizerState:\n \"\"\"State machine for the tokenizer.\n\n Attributes:\n EXPECT: The initial expectation of the tokenizer. Since we start tokenizing\n at the root scope, we might expect to see either a variable or selector, for example.\n STATE_MAP: Maps token names to Expects, defines the sets of valid tokens\n that we'd expect to see next, given the current token. For example, if\n we've just processed a variable declaration name, we next expect to see\n the value of that variable.\n \"\"\"\n\n EXPECT = expect_root_scope\n STATE_MAP = {\n \"variable_name\": expect_variable_name_continue,\n \"variable_value_end\": expect_root_scope,\n \"selector_start\": expect_selector_continue,\n \"selector_start_id\": expect_selector_continue,\n \"selector_start_class\": expect_selector_continue,\n \"selector_start_universal\": expect_selector_continue,\n \"selector_id\": expect_selector_continue,\n \"selector_class\": expect_selector_continue,\n \"selector_universal\": expect_selector_continue,\n \"declaration_set_start\": expect_declaration,\n \"declaration_name\": expect_declaration_content,\n \"declaration_end\": expect_declaration,\n \"declaration_set_end\": expect_root_scope,\n }\n\n def __call__(self, code: str, path: str | PurePath) -> Iterable[Token]:\n tokenizer = Tokenizer(code, path=path)\n expect = self.EXPECT\n get_token = tokenizer.get_token\n get_state = self.STATE_MAP.get\n while True:\n token = get_token(expect)\n name = token.name\n if name == \"comment_start\":\n tokenizer.skip_to(expect_comment_end)\n continue\n elif name == \"eof\":\n break\n expect = get_state(name, expect)\n yield token\n\n\nclass DeclarationTokenizerState(TokenizerState):\n EXPECT = expect_declaration_solo\n STATE_MAP = {\n \"declaration_name\": expect_declaration_content,\n \"declaration_end\": expect_declaration_solo,\n }\n\n\nclass ValueTokenizerState(TokenizerState):\n EXPECT = expect_declaration_content_solo\n\n\ntokenize = TokenizerState()\ntokenize_declarations = DeclarationTokenizerState()\ntokenize_value = ValueTokenizerState()\n\n\ndef tokenize_values(values: dict[str, str]) -> dict[str, list[Token]]:\n \"\"\"Tokens the values in a dict of strings.\n\n Args:\n values: A mapping of CSS variable name on to a value, to be\n added to the CSS context.\n\n Returns:\n A mapping of name on to a list of tokens,\n \"\"\"\n value_tokens = {\n name: list(tokenize_value(value, \"__name__\")) for name, value in values.items()\n }\n return value_tokens\n", "path": "src/textual/css/tokenize.py"}]} |
gh_patches_debug_1107 | rasdani/github-patches | git_diff | obspy__obspy-2734 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
"obspy.clients.seedlink.basic_client.Client" raises AttributeError when debug option is True
I run the following code to get waveform data from my SeedLink server:
```python
from obspy import UTCDateTime
from obspy.clients.seedlink.basic_client import Client
tend = UTCDateTime()
tstart = tend - 60
client = Client('127.0.0.1', port=18000, debug=True)
st = client.get_waveforms('VG', 'MEPAS', '00', 'HHZ', tstart, tend)
print(st)
```
When creating a `Client` instance with `debug=True`, I got the following error:
```
DEBUG: obspy.clients.seedlink [127.0.0.1:18000]:primary loop pass 0, state 0
INFO: obspy.clients.seedlink [127.0.0.1:18000]:network socket opened
DEBUG: obspy.clients.seedlink [127.0.0.1:18000]:sending: HELLO
INFO: obspy.clients.seedlink [127.0.0.1:18000]:connected to: 'SeedLink v3.2 (2014.071)'
INFO: obspy.clients.seedlink [127.0.0.1:18000]:sending: requesting INFO level STATIONS
DEBUG: obspy.clients.seedlink [127.0.0.1:18000]:primary loop pass 0, state 1
-102
-102
DEBUG: obspy.clients.seedlink [127.0.0.1:18000]:primary loop pass 0, state 1
-102
-102
DEBUG: obspy.clients.seedlink [127.0.0.1:18000]:primary loop pass 0, state 1
-102
-102
DEBUG: obspy.clients.seedlink [127.0.0.1:18000]:primary loop pass 0, state 1
-102
-102
DEBUG: obspy.clients.seedlink [127.0.0.1:18000]:primary loop pass 0, state 1
-102
-102
DEBUG: obspy.clients.seedlink [127.0.0.1:18000]:primary loop pass 0, state 1
-102
-102
DEBUG: obspy.clients.seedlink [127.0.0.1:18000]:primary loop pass 0, state 1
-102
-102
DEBUG: obspy.clients.seedlink [127.0.0.1:18000]:primary loop pass 0, state 1
-102
-102
DEBUG: obspy.clients.seedlink [127.0.0.1:18000]:primary loop pass 0, state 1
-102
-102
DEBUG: obspy.clients.seedlink [127.0.0.1:18000]:primary loop pass 0, state 1
-101
Traceback (most recent call last):
File "get_waveforms.py", line 9, in <module>
st = client.get_waveforms('VG', 'MEPAS', '00', 'HHZ', tstart, tend)
File "/home/iori/GitLab/bma/venv/lib/python3.6/site-packages/obspy/clients/seedlink/basic_client.py", line 150, in get_waveforms
level='station', cache=True)
File "/home/iori/GitLab/bma/venv/lib/python3.6/site-packages/obspy/clients/seedlink/basic_client.py", line 286, in get_info
self._slclient.run(packet_handler=self._packet_handler)
File "/home/iori/GitLab/bma/venv/lib/python3.6/site-packages/obspy/clients/seedlink/slclient.py", line 249, in run
terminate = packet_handler(count, slpack)
File "/home/iori/GitLab/bma/venv/lib/python3.6/site-packages/obspy/clients/seedlink/basic_client.py", line 343, in _packet_handler
print("Complete INFO:" + self.slconn.get_info_string())
AttributeError: 'Client' object has no attribute 'slconn'
```
Output when leaving `debug` at its default value:
```
1 Trace(s) in Stream:
VG.MEPAS.00.HHZ | 2020-10-19T14:40:48.330000Z - 2020-10-19T14:41:48.330000Z | 100.0 Hz, 6001 samples
```
ObsPy version: 1.2.2
I think this is just a minor bug in the following line:
https://github.com/obspy/obspy/blob/4217ce60a296af9f14ec704befb6c12c3a3e081e/obspy/clients/seedlink/basic_client.py#L339
It should be `self._slclient.slconn.get_info_string()`.
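Untested, but the debug branch of `Client._packet_handler()` would then look roughly like this (nothing else needs to change; the surrounding lines are as in the 1.2.2 source quoted below):

```python
# sketch: excerpt of Client._packet_handler() in obspy/clients/seedlink/basic_client.py
elif type_ == SLPacket.TYPE_SLINFT:
    if self.debug:
        # Client itself has no `slconn`; the SeedLink connection lives on the
        # SLClient instance created in _init_client()/_connect().
        print("Complete INFO:" + self._slclient.slconn.get_info_string())
    return False
```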
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `obspy/clients/seedlink/basic_client.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """
3 SeedLink request client for ObsPy.
4
5 :copyright:
6 The ObsPy Development Team ([email protected])
7 :license:
8 GNU Lesser General Public License, Version 3
9 (https://www.gnu.org/copyleft/lesser.html)
10 """
11 from __future__ import (absolute_import, division, print_function,
12 unicode_literals)
13 from future.builtins import * # NOQA @UnusedWildImport
14
15 import fnmatch
16 import warnings
17
18 from lxml import etree
19
20 from obspy import Stream
21 from .slclient import SLClient, SLPacket
22 from .client.seedlinkconnection import SeedLinkConnection
23
24
25 class Client(object):
26 """
27 SeedLink request client.
28
29 This client is intended for requests of specific, finite time windows.
30 To work with continuous realtime data streams please see
31 :class:`~obspy.clients.seedlink.slclient.SLClient` and
32 :class:`~obspy.clients.seedlink.easyseedlink.EasySeedLinkClient`.
33
34 :type server: str
35 :param server: Server name or IP address to connect to (e.g.
36 "localhost", "rtserver.ipgp.fr")
37 :type port: int
38 :param port: Port at which the seedlink server is operating (default is
39 `18000`).
40 :type timeout: float
41 :param timeout: Network timeout for low-level network connection in
42 seconds.
43 :type debug: bool
44 :param debug: Switches on debugging output.
45 """
46 def __init__(self, server, port=18000, timeout=20, debug=False):
47 """
48 Initializes the SeedLink request client.
49 """
50 self.timeout = timeout
51 self.debug = debug
52 self.loglevel = debug and "DEBUG" or "CRITICAL"
53 self._server_url = "%s:%i" % (server, port)
54 self._station_cache = None
55 self._station_cache_level = None
56
57 def _init_client(self):
58 """
59 Make fresh connection to seedlink server
60
61 Should be done before any request to server, since SLClient keeps
62 things like multiselect etc for subsequent requests
63 """
64 self._slclient = SLClient(loglevel=self.loglevel, timeout=self.timeout)
65
66 def _connect(self):
67 """
68 Open new connection to seedlink server.
69 """
70 self._slclient.slconn = SeedLinkConnection(timeout=self.timeout)
71 self._slclient.slconn.set_sl_address(self._server_url)
72 self._slclient.slconn.netto = self.timeout
73
74 def get_waveforms(self, network, station, location, channel, starttime,
75 endtime):
76 """
77 Request waveform data from the seedlink server.
78
79 >>> from obspy import UTCDateTime
80 >>> client = Client('rtserver.ipgp.fr')
81 >>> t = UTCDateTime() - 1500
82 >>> st = client.get_waveforms("G", "FDFM", "00", "BHZ", t, t + 5)
83 >>> print(st) # doctest: +ELLIPSIS
84 1 Trace(s) in Stream:
85 G.FDFM.00.BHZ | 20... | 20.0 Hz, ... samples
86
87 Most servers support '?' single-character wildcard in location and
88 channel code fields:
89
90 >>> st = client.get_waveforms("G", "FDFM", "??", "B??", t, t + 5)
91 >>> st = st.sort(reverse=True)
92 >>> print(st) # doctest: +ELLIPSIS
93 6 Trace(s) in Stream:
94 G.FDFM.10.BHZ | 20... | 20.0 Hz, ... samples
95 G.FDFM.10.BHN | 20... | 20.0 Hz, ... samples
96 G.FDFM.10.BHE | 20... | 20.0 Hz, ... samples
97 G.FDFM.00.BHZ | 20... | 20.0 Hz, ... samples
98 G.FDFM.00.BHN | 20... | 20.0 Hz, ... samples
99 G.FDFM.00.BHE | 20... | 20.0 Hz, ... samples
100
101 Depending on server capabilities, '*' multi-character wildcards might
102 work in any parameter:
103
104 >>> st = client.get_waveforms("*", "FDFM", "*", "B*", t, t + 5)
105 >>> st = st.sort(reverse=True)
106 >>> print(st) # doctest: +ELLIPSIS
107 6 Trace(s) in Stream:
108 G.FDFM.10.BHZ | 20... | 20.0 Hz, ... samples
109 G.FDFM.10.BHN | 20... | 20.0 Hz, ... samples
110 G.FDFM.10.BHE | 20... | 20.0 Hz, ... samples
111 G.FDFM.00.BHZ | 20... | 20.0 Hz, ... samples
112 G.FDFM.00.BHN | 20... | 20.0 Hz, ... samples
113 G.FDFM.00.BHE | 20... | 20.0 Hz, ... samples
114
115 .. note::
116
117 Support of wildcards strongly depends on the queried seedlink
118 server. In general, '?' as single character wildcard seems to work
119 well in location code and channel code fields for most servers.
120 Usage of '*' relies on the server supporting info requests on
121 station or even channel level, see :meth:`Client.get_info()`.
122
123 :type network: str
124 :param network: Network code. See note on wildcards above.
125 :type station: str
126 :param station: Station code. See note on wildcards above.
127 :type location: str
128 :param location: Location code. See note on wildcards above.
129 :type channel: str
130 :param channel: Channel code. See note on wildcards above.
131 :type starttime: :class:`~obspy.core.utcdatetime.UTCDateTime`
132 :param starttime: Start time of requested time window.
133 :type endtime: :class:`~obspy.core.utcdatetime.UTCDateTime`
134 :param endtime: End time of requested time window.
135 """
136 # need to do an info request?
137 if any('*' in x for x in (network, station, location, channel)) \
138 or ('?' in x for x in (network, station)):
139 # need to do an info request on channel level?
140 if any('*' in x for x in (location, channel)):
141 info = self.get_info(network=network, station=station,
142 location=location, channel=channel,
143 level='channel', cache=True)
144 multiselect = ["%s_%s:%s%s" % (net, sta, loc, cha)
145 for net, sta, loc, cha in info]
146 # otherwise keep location/channel wildcards and do request on
147 # station level only
148 else:
149 info = self.get_info(network=network, station=station,
150 level='station', cache=True)
151 multiselect = ["%s_%s:%s%s" % (net, sta, location, channel)
152 for net, sta in info]
153 multiselect = ','.join(multiselect)
154 return self._multiselect_request(multiselect, starttime, endtime)
155
156 # if no info request is needed, we just work with the given input
157 # (might have some '?' wildcards in loc/cha)
158 if len(location) > 2:
159 msg = ("Location code ('%s') only supports a maximum of 2 "
160 "characters.") % location
161 raise ValueError(msg)
162 elif len(location) == 1:
163 msg = ("Single character location codes that are not an '*' are "
164 "untested.")
165 warnings.warn(msg)
166 if location:
167 loccha = "%2s%3s" % (location, channel)
168 else:
169 loccha = channel
170 seedlink_id = "%s_%s:%s" % (network, station, loccha)
171 return self._multiselect_request(seedlink_id, starttime, endtime)
172
173 def _multiselect_request(self, multiselect, starttime, endtime):
174 """
175 Make a multiselect request to underlying seedlink client
176
177 Multiselect string is one or more comma separated
178 network/station/location/channel combinations as defined by seedlink
179 standard, e.g.
180 "NETWORK_STATION:LOCATIONCHANNEL,NETWORK_STATION:LOCATIONCHANNEL"
181 where location+channel may contain '?' characters but should be exactly
182 5 characters long.
183
184 :rtype: :class:`~obspy.core.stream.Stream`
185 """
186 self._init_client()
187 self._slclient.multiselect = multiselect
188 self._slclient.begin_time = starttime
189 self._slclient.end_time = endtime
190 self._connect()
191 self._slclient.initialize()
192 self.stream = Stream()
193 self._slclient.run(packet_handler=self._packet_handler)
194 stream = self.stream
195 stream.trim(starttime, endtime)
196 self.stream = None
197 stream.sort()
198 return stream
199
200 def get_info(self, network=None, station=None, location=None, channel=None,
201 level='station', cache=True):
202 """
203 Request available stations information from the seedlink server.
204
205 Supports ``fnmatch`` wildcards, e.g. ``*`` and ``?``, in ``network``,
206 ``station``, ``location`` and ``channel``.
207
208 >>> client = Client('rtserver.ipgp.fr')
209 >>> info = client.get_info(station="FDFM")
210 >>> print(info)
211 [('G', 'FDFM')]
212 >>> info = client.get_info(station="FD?M", channel='*Z',
213 ... level='channel')
214 >>> print(info) # doctest: +NORMALIZE_WHITESPACE
215 [('G', 'FDFM', '00', 'BHZ'), ('G', 'FDFM', '00', 'HHZ'),
216 ('G', 'FDFM', '00', 'HNZ'), ('G', 'FDFM', '00', 'LHZ'),
217 ('G', 'FDFM', '10', 'BHZ'), ('G', 'FDFM', '10', 'HHZ'),
218 ('G', 'FDFM', '10', 'LHZ')]
219
220 Available station information is cached after the first request to the
221 server, so use ``cache=False`` on subsequent requests if there is a
222 need to force fetching new information from the server (should only
223 concern programs running in background for a very long time).
224
225 :type network: str
226 :param network: Network code. Supports ``fnmatch`` wildcards, e.g.
227 ``*`` and ``?``.
228 :type station: str
229 :param station: Station code. Supports ``fnmatch`` wildcards, e.g.
230 ``*`` and ``?``.
231 :type location: str
232 :param location: Location code. Supports ``fnmatch`` wildcards, e.g.
233 ``*`` and ``?``.
234 :type channel: str
235 :param channel: Channel code. Supports ``fnmatch`` wildcards, e.g.
236 ``*`` and ``?``.
237 :type cache: bool
238 :param cache: Subsequent function calls are cached, use ``cache=False``
239 to force fetching station metadata again from the server.
240 :rtype: list
241 :returns: list of 2-tuples (or 4-tuples with ``level='channel'``) with
242 network/station (network/station/location/channel, respectively)
243 code combinations for which data is served by the server.
244 """
245 if level not in ('station', 'channel'):
246 msg = "Invalid option for 'level': '%s'" % str(level)
247 raise ValueError(msg)
248 if level == 'station' and \
249 any(x is not None for x in (location, channel)):
250 msg = ("location and channel options are ignored in get_info() if "
251 "level='station'.")
252 warnings.warn(msg)
253 # deteremine if we have a usable cache and check if it is at least the
254 # requested level of detail
255 if cache and self._station_cache is not None \
256 and level in ('station', self._station_cache_level):
257 if level == 'station':
258 if self._station_cache_level == 'station':
259 info = [(net, sta) for net, sta in self._station_cache
260 if fnmatch.fnmatch(net, network or '*') and
261 fnmatch.fnmatch(sta, station or '*')]
262 return sorted(info)
263 else:
264 info = [(net, sta) for net, sta, loc, cha
265 in self._station_cache
266 if fnmatch.fnmatch(net, network or '*') and
267 fnmatch.fnmatch(sta, station or '*')]
268 return sorted(set(info))
269 info = [(net, sta, loc, cha) for net, sta, loc, cha in
270 self._station_cache if
271 fnmatch.fnmatch(net, network or '*') and
272 fnmatch.fnmatch(sta, station or '*') and
273 fnmatch.fnmatch(loc, location or '*') and
274 fnmatch.fnmatch(cha, channel or '*')]
275 return sorted(info)
276
277 self._init_client()
278 if level == 'station':
279 self._slclient.infolevel = "STATIONS"
280 elif level == 'channel':
281 self._slclient.infolevel = "STREAMS"
282 self._slclient.verbose = 1
283 self._connect()
284 self._slclient.initialize()
285 # self._slclient.run()
286 self._slclient.run(packet_handler=self._packet_handler)
287 info = self._slclient.slconn.info_string
288 try:
289 xml = etree.fromstring(info)
290 except ValueError as e:
291 msg = 'Unicode strings with encoding declaration are not supported'
292 if msg not in str(e):
293 raise
294 parser = etree.XMLParser(encoding='utf-8')
295 xml = etree.fromstring(info.encode('utf-8'), parser=parser)
296 station_cache = set()
297 for tag in xml.xpath('./station'):
298 net = tag.attrib['network']
299 sta = tag.attrib['name']
300 item = (net, sta)
301 if level == 'channel':
302 subtags = tag.xpath('./stream')
303 for subtag in subtags:
304 loc = subtag.attrib['location']
305 cha = subtag.attrib['seedname']
306 station_cache.add(item + (loc, cha))
307 # If no data is in ring buffer (e.g. station outage?) then it
308 # seems the seedlink server replies with no subtags for the
309 # channels
310 if not subtags:
311 station_cache.add(item + (None, None))
312 else:
313 station_cache.add(item)
314 # change results to an Inventory object
315 self._station_cache = station_cache
316 self._station_cache_level = level
317 return self.get_info(
318 network=network, station=station, location=location,
319 channel=channel, cache=True, level=level)
320
321 def _packet_handler(self, count, slpack):
322 """
323 Custom packet handler that accumulates all waveform packets in a
324 stream.
325 """
326 # check if not a complete packet
327 if slpack is None or (slpack == SLPacket.SLNOPACKET) or \
328 (slpack == SLPacket.SLERROR):
329 return False
330
331 # get basic packet info
332 type_ = slpack.get_type()
333 if self.debug:
334 print(type_)
335
336 # process INFO packets here
337 if type_ == SLPacket.TYPE_SLINF:
338 if self.debug:
339 print(SLPacket.TYPE_SLINF)
340 return False
341 elif type_ == SLPacket.TYPE_SLINFT:
342 if self.debug:
343 print("Complete INFO:" + self.slconn.get_info_string())
344 return False
345
346 # process packet data
347 trace = slpack.get_trace()
348 if trace is None:
349 if self.debug:
350 print("Blockette contains no trace")
351 return False
352
353 # new samples add to the main stream which is then trimmed
354 self.stream += trace
355 self.stream.merge(-1)
356 return False
357
358
359 if __name__ == '__main__':
360 import doctest
361 doctest.testmod(exclude_empty=True)
362
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/obspy/clients/seedlink/basic_client.py b/obspy/clients/seedlink/basic_client.py
--- a/obspy/clients/seedlink/basic_client.py
+++ b/obspy/clients/seedlink/basic_client.py
@@ -340,7 +340,8 @@
return False
elif type_ == SLPacket.TYPE_SLINFT:
if self.debug:
- print("Complete INFO:" + self.slconn.get_info_string())
+ print("Complete INFO:",
+ self._slclient.slconn.get_info_string())
return False
# process packet data
| {"golden_diff": "diff --git a/obspy/clients/seedlink/basic_client.py b/obspy/clients/seedlink/basic_client.py\n--- a/obspy/clients/seedlink/basic_client.py\n+++ b/obspy/clients/seedlink/basic_client.py\n@@ -340,7 +340,8 @@\n return False\n elif type_ == SLPacket.TYPE_SLINFT:\n if self.debug:\n- print(\"Complete INFO:\" + self.slconn.get_info_string())\n+ print(\"Complete INFO:\",\n+ self._slclient.slconn.get_info_string())\n return False\n \n # process packet data\n", "issue": "\"obspy.clients.seedlink.basic_client.Client\" raises AttributeError when debug option is True\nI run the following code to get waveform data from my SeedLink server:\r\n\r\n```python\r\nfrom obspy import UTCDateTime\r\nfrom obspy.clients.seedlink.basic_client import Client\r\n\r\ntend = UTCDateTime()\r\ntstart = tend - 60\r\n\r\nclient = Client('127.0.0.1', port=18000, debug=True)\r\nst = client.get_waveforms('VG', 'MEPAS', '00', 'HHZ', tstart, tend)\r\n\r\nprint(st)\r\n```\r\n\r\nWhen creating `Client` instance with `debug=True`, I got the following error:\r\n\r\n```\r\nDEBUG: obspy.clients.seedlink [127.0.0.1:18000]:primary loop pass 0, state 0\r\nINFO: obspy.clients.seedlink [127.0.0.1:18000]:network socket opened\r\nDEBUG: obspy.clients.seedlink [127.0.0.1:18000]:sending: HELLO\r\nINFO: obspy.clients.seedlink [127.0.0.1:18000]:connected to: 'SeedLink v3.2 (2014.071)'\r\nINFO: obspy.clients.seedlink [127.0.0.1:18000]:sending: requesting INFO level STATIONS\r\nDEBUG: obspy.clients.seedlink [127.0.0.1:18000]:primary loop pass 0, state 1\r\n-102\r\n-102\r\nDEBUG: obspy.clients.seedlink [127.0.0.1:18000]:primary loop pass 0, state 1\r\n-102\r\n-102\r\nDEBUG: obspy.clients.seedlink [127.0.0.1:18000]:primary loop pass 0, state 1\r\n-102\r\n-102\r\nDEBUG: obspy.clients.seedlink [127.0.0.1:18000]:primary loop pass 0, state 1\r\n-102\r\n-102\r\nDEBUG: obspy.clients.seedlink [127.0.0.1:18000]:primary loop pass 0, state 1\r\n-102\r\n-102\r\nDEBUG: obspy.clients.seedlink [127.0.0.1:18000]:primary loop pass 0, state 1\r\n-102\r\n-102\r\nDEBUG: obspy.clients.seedlink [127.0.0.1:18000]:primary loop pass 0, state 1\r\n-102\r\n-102\r\nDEBUG: obspy.clients.seedlink [127.0.0.1:18000]:primary loop pass 0, state 1\r\n-102\r\n-102\r\nDEBUG: obspy.clients.seedlink [127.0.0.1:18000]:primary loop pass 0, state 1\r\n-102\r\n-102\r\nDEBUG: obspy.clients.seedlink [127.0.0.1:18000]:primary loop pass 0, state 1\r\n-101\r\nTraceback (most recent call last):\r\n File \"get_waveforms.py\", line 9, in <module>\r\n st = client.get_waveforms('VG', 'MEPAS', '00', 'HHZ', tstart, tend)\r\n File \"/home/iori/GitLab/bma/venv/lib/python3.6/site-packages/obspy/clients/seedlink/basic_client.py\", line 150, in get_waveforms\r\n level='station', cache=True)\r\n File \"/home/iori/GitLab/bma/venv/lib/python3.6/site-packages/obspy/clients/seedlink/basic_client.py\", line 286, in get_info\r\n self._slclient.run(packet_handler=self._packet_handler)\r\n File \"/home/iori/GitLab/bma/venv/lib/python3.6/site-packages/obspy/clients/seedlink/slclient.py\", line 249, in run\r\n terminate = packet_handler(count, slpack)\r\n File \"/home/iori/GitLab/bma/venv/lib/python3.6/site-packages/obspy/clients/seedlink/basic_client.py\", line 343, in _packet_handler\r\n print(\"Complete INFO:\" + self.slconn.get_info_string())\r\nAttributeError: 'Client' object has no attribute 'slconn'\r\n```\r\n\r\nOutput when leaving debug to default value:\r\n\r\n```\r\n1 Trace(s) in Stream:\r\nVG.MEPAS.00.HHZ | 2020-10-19T14:40:48.330000Z - 2020-10-19T14:41:48.330000Z | 100.0 Hz, 6001 
samples\r\n```\r\nObsPy version: 1.2.2\r\n\r\nI think this just a minor bug in the following line:\r\n\r\nhttps://github.com/obspy/obspy/blob/4217ce60a296af9f14ec704befb6c12c3a3e081e/obspy/clients/seedlink/basic_client.py#L339\r\n\r\nIt should be `self._slclient.slconn.get_info_string()`.\n\"obspy.clients.seedlink.basic_client.Client\" raises AttributeError when debug option is True\nI run the following code to get waveform data from my SeedLink server:\r\n\r\n```python\r\nfrom obspy import UTCDateTime\r\nfrom obspy.clients.seedlink.basic_client import Client\r\n\r\ntend = UTCDateTime()\r\ntstart = tend - 60\r\n\r\nclient = Client('127.0.0.1', port=18000, debug=True)\r\nst = client.get_waveforms('VG', 'MEPAS', '00', 'HHZ', tstart, tend)\r\n\r\nprint(st)\r\n```\r\n\r\nWhen creating `Client` instance with `debug=True`, I got the following error:\r\n\r\n```\r\nDEBUG: obspy.clients.seedlink [127.0.0.1:18000]:primary loop pass 0, state 0\r\nINFO: obspy.clients.seedlink [127.0.0.1:18000]:network socket opened\r\nDEBUG: obspy.clients.seedlink [127.0.0.1:18000]:sending: HELLO\r\nINFO: obspy.clients.seedlink [127.0.0.1:18000]:connected to: 'SeedLink v3.2 (2014.071)'\r\nINFO: obspy.clients.seedlink [127.0.0.1:18000]:sending: requesting INFO level STATIONS\r\nDEBUG: obspy.clients.seedlink [127.0.0.1:18000]:primary loop pass 0, state 1\r\n-102\r\n-102\r\nDEBUG: obspy.clients.seedlink [127.0.0.1:18000]:primary loop pass 0, state 1\r\n-102\r\n-102\r\nDEBUG: obspy.clients.seedlink [127.0.0.1:18000]:primary loop pass 0, state 1\r\n-102\r\n-102\r\nDEBUG: obspy.clients.seedlink [127.0.0.1:18000]:primary loop pass 0, state 1\r\n-102\r\n-102\r\nDEBUG: obspy.clients.seedlink [127.0.0.1:18000]:primary loop pass 0, state 1\r\n-102\r\n-102\r\nDEBUG: obspy.clients.seedlink [127.0.0.1:18000]:primary loop pass 0, state 1\r\n-102\r\n-102\r\nDEBUG: obspy.clients.seedlink [127.0.0.1:18000]:primary loop pass 0, state 1\r\n-102\r\n-102\r\nDEBUG: obspy.clients.seedlink [127.0.0.1:18000]:primary loop pass 0, state 1\r\n-102\r\n-102\r\nDEBUG: obspy.clients.seedlink [127.0.0.1:18000]:primary loop pass 0, state 1\r\n-102\r\n-102\r\nDEBUG: obspy.clients.seedlink [127.0.0.1:18000]:primary loop pass 0, state 1\r\n-101\r\nTraceback (most recent call last):\r\n File \"get_waveforms.py\", line 9, in <module>\r\n st = client.get_waveforms('VG', 'MEPAS', '00', 'HHZ', tstart, tend)\r\n File \"/home/iori/GitLab/bma/venv/lib/python3.6/site-packages/obspy/clients/seedlink/basic_client.py\", line 150, in get_waveforms\r\n level='station', cache=True)\r\n File \"/home/iori/GitLab/bma/venv/lib/python3.6/site-packages/obspy/clients/seedlink/basic_client.py\", line 286, in get_info\r\n self._slclient.run(packet_handler=self._packet_handler)\r\n File \"/home/iori/GitLab/bma/venv/lib/python3.6/site-packages/obspy/clients/seedlink/slclient.py\", line 249, in run\r\n terminate = packet_handler(count, slpack)\r\n File \"/home/iori/GitLab/bma/venv/lib/python3.6/site-packages/obspy/clients/seedlink/basic_client.py\", line 343, in _packet_handler\r\n print(\"Complete INFO:\" + self.slconn.get_info_string())\r\nAttributeError: 'Client' object has no attribute 'slconn'\r\n```\r\n\r\nOutput when leaving debug to default value:\r\n\r\n```\r\n1 Trace(s) in Stream:\r\nVG.MEPAS.00.HHZ | 2020-10-19T14:40:48.330000Z - 2020-10-19T14:41:48.330000Z | 100.0 Hz, 6001 samples\r\n```\r\nObsPy version: 1.2.2\r\n\r\nI think this just a minor bug in the following 
line:\r\n\r\nhttps://github.com/obspy/obspy/blob/4217ce60a296af9f14ec704befb6c12c3a3e081e/obspy/clients/seedlink/basic_client.py#L339\r\n\r\nIt should be `self._slclient.slconn.get_info_string()`.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nSeedLink request client for ObsPy.\n\n:copyright:\n The ObsPy Development Team ([email protected])\n:license:\n GNU Lesser General Public License, Version 3\n (https://www.gnu.org/copyleft/lesser.html)\n\"\"\"\nfrom __future__ import (absolute_import, division, print_function,\n unicode_literals)\nfrom future.builtins import * # NOQA @UnusedWildImport\n\nimport fnmatch\nimport warnings\n\nfrom lxml import etree\n\nfrom obspy import Stream\nfrom .slclient import SLClient, SLPacket\nfrom .client.seedlinkconnection import SeedLinkConnection\n\n\nclass Client(object):\n \"\"\"\n SeedLink request client.\n\n This client is intended for requests of specific, finite time windows.\n To work with continuous realtime data streams please see\n :class:`~obspy.clients.seedlink.slclient.SLClient` and\n :class:`~obspy.clients.seedlink.easyseedlink.EasySeedLinkClient`.\n\n :type server: str\n :param server: Server name or IP address to connect to (e.g.\n \"localhost\", \"rtserver.ipgp.fr\")\n :type port: int\n :param port: Port at which the seedlink server is operating (default is\n `18000`).\n :type timeout: float\n :param timeout: Network timeout for low-level network connection in\n seconds.\n :type debug: bool\n :param debug: Switches on debugging output.\n \"\"\"\n def __init__(self, server, port=18000, timeout=20, debug=False):\n \"\"\"\n Initializes the SeedLink request client.\n \"\"\"\n self.timeout = timeout\n self.debug = debug\n self.loglevel = debug and \"DEBUG\" or \"CRITICAL\"\n self._server_url = \"%s:%i\" % (server, port)\n self._station_cache = None\n self._station_cache_level = None\n\n def _init_client(self):\n \"\"\"\n Make fresh connection to seedlink server\n\n Should be done before any request to server, since SLClient keeps\n things like multiselect etc for subsequent requests\n \"\"\"\n self._slclient = SLClient(loglevel=self.loglevel, timeout=self.timeout)\n\n def _connect(self):\n \"\"\"\n Open new connection to seedlink server.\n \"\"\"\n self._slclient.slconn = SeedLinkConnection(timeout=self.timeout)\n self._slclient.slconn.set_sl_address(self._server_url)\n self._slclient.slconn.netto = self.timeout\n\n def get_waveforms(self, network, station, location, channel, starttime,\n endtime):\n \"\"\"\n Request waveform data from the seedlink server.\n\n >>> from obspy import UTCDateTime\n >>> client = Client('rtserver.ipgp.fr')\n >>> t = UTCDateTime() - 1500\n >>> st = client.get_waveforms(\"G\", \"FDFM\", \"00\", \"BHZ\", t, t + 5)\n >>> print(st) # doctest: +ELLIPSIS\n 1 Trace(s) in Stream:\n G.FDFM.00.BHZ | 20... | 20.0 Hz, ... samples\n\n Most servers support '?' single-character wildcard in location and\n channel code fields:\n\n >>> st = client.get_waveforms(\"G\", \"FDFM\", \"??\", \"B??\", t, t + 5)\n >>> st = st.sort(reverse=True)\n >>> print(st) # doctest: +ELLIPSIS\n 6 Trace(s) in Stream:\n G.FDFM.10.BHZ | 20... | 20.0 Hz, ... samples\n G.FDFM.10.BHN | 20... | 20.0 Hz, ... samples\n G.FDFM.10.BHE | 20... | 20.0 Hz, ... samples\n G.FDFM.00.BHZ | 20... | 20.0 Hz, ... samples\n G.FDFM.00.BHN | 20... | 20.0 Hz, ... samples\n G.FDFM.00.BHE | 20... | 20.0 Hz, ... 
samples\n\n Depending on server capabilities, '*' multi-character wildcards might\n work in any parameter:\n\n >>> st = client.get_waveforms(\"*\", \"FDFM\", \"*\", \"B*\", t, t + 5)\n >>> st = st.sort(reverse=True)\n >>> print(st) # doctest: +ELLIPSIS\n 6 Trace(s) in Stream:\n G.FDFM.10.BHZ | 20... | 20.0 Hz, ... samples\n G.FDFM.10.BHN | 20... | 20.0 Hz, ... samples\n G.FDFM.10.BHE | 20... | 20.0 Hz, ... samples\n G.FDFM.00.BHZ | 20... | 20.0 Hz, ... samples\n G.FDFM.00.BHN | 20... | 20.0 Hz, ... samples\n G.FDFM.00.BHE | 20... | 20.0 Hz, ... samples\n\n .. note::\n\n Support of wildcards strongly depends on the queried seedlink\n server. In general, '?' as single character wildcard seems to work\n well in location code and channel code fields for most servers.\n Usage of '*' relies on the server supporting info requests on\n station or even channel level, see :meth:`Client.get_info()`.\n\n :type network: str\n :param network: Network code. See note on wildcards above.\n :type station: str\n :param station: Station code. See note on wildcards above.\n :type location: str\n :param location: Location code. See note on wildcards above.\n :type channel: str\n :param channel: Channel code. See note on wildcards above.\n :type starttime: :class:`~obspy.core.utcdatetime.UTCDateTime`\n :param starttime: Start time of requested time window.\n :type endtime: :class:`~obspy.core.utcdatetime.UTCDateTime`\n :param endtime: End time of requested time window.\n \"\"\"\n # need to do an info request?\n if any('*' in x for x in (network, station, location, channel)) \\\n or ('?' in x for x in (network, station)):\n # need to do an info request on channel level?\n if any('*' in x for x in (location, channel)):\n info = self.get_info(network=network, station=station,\n location=location, channel=channel,\n level='channel', cache=True)\n multiselect = [\"%s_%s:%s%s\" % (net, sta, loc, cha)\n for net, sta, loc, cha in info]\n # otherwise keep location/channel wildcards and do request on\n # station level only\n else:\n info = self.get_info(network=network, station=station,\n level='station', cache=True)\n multiselect = [\"%s_%s:%s%s\" % (net, sta, location, channel)\n for net, sta in info]\n multiselect = ','.join(multiselect)\n return self._multiselect_request(multiselect, starttime, endtime)\n\n # if no info request is needed, we just work with the given input\n # (might have some '?' wildcards in loc/cha)\n if len(location) > 2:\n msg = (\"Location code ('%s') only supports a maximum of 2 \"\n \"characters.\") % location\n raise ValueError(msg)\n elif len(location) == 1:\n msg = (\"Single character location codes that are not an '*' are \"\n \"untested.\")\n warnings.warn(msg)\n if location:\n loccha = \"%2s%3s\" % (location, channel)\n else:\n loccha = channel\n seedlink_id = \"%s_%s:%s\" % (network, station, loccha)\n return self._multiselect_request(seedlink_id, starttime, endtime)\n\n def _multiselect_request(self, multiselect, starttime, endtime):\n \"\"\"\n Make a multiselect request to underlying seedlink client\n\n Multiselect string is one or more comma separated\n network/station/location/channel combinations as defined by seedlink\n standard, e.g.\n \"NETWORK_STATION:LOCATIONCHANNEL,NETWORK_STATION:LOCATIONCHANNEL\"\n where location+channel may contain '?' 
characters but should be exactly\n 5 characters long.\n\n :rtype: :class:`~obspy.core.stream.Stream`\n \"\"\"\n self._init_client()\n self._slclient.multiselect = multiselect\n self._slclient.begin_time = starttime\n self._slclient.end_time = endtime\n self._connect()\n self._slclient.initialize()\n self.stream = Stream()\n self._slclient.run(packet_handler=self._packet_handler)\n stream = self.stream\n stream.trim(starttime, endtime)\n self.stream = None\n stream.sort()\n return stream\n\n def get_info(self, network=None, station=None, location=None, channel=None,\n level='station', cache=True):\n \"\"\"\n Request available stations information from the seedlink server.\n\n Supports ``fnmatch`` wildcards, e.g. ``*`` and ``?``, in ``network``,\n ``station``, ``location`` and ``channel``.\n\n >>> client = Client('rtserver.ipgp.fr')\n >>> info = client.get_info(station=\"FDFM\")\n >>> print(info)\n [('G', 'FDFM')]\n >>> info = client.get_info(station=\"FD?M\", channel='*Z',\n ... level='channel')\n >>> print(info) # doctest: +NORMALIZE_WHITESPACE\n [('G', 'FDFM', '00', 'BHZ'), ('G', 'FDFM', '00', 'HHZ'),\n ('G', 'FDFM', '00', 'HNZ'), ('G', 'FDFM', '00', 'LHZ'),\n ('G', 'FDFM', '10', 'BHZ'), ('G', 'FDFM', '10', 'HHZ'),\n ('G', 'FDFM', '10', 'LHZ')]\n\n Available station information is cached after the first request to the\n server, so use ``cache=False`` on subsequent requests if there is a\n need to force fetching new information from the server (should only\n concern programs running in background for a very long time).\n\n :type network: str\n :param network: Network code. Supports ``fnmatch`` wildcards, e.g.\n ``*`` and ``?``.\n :type station: str\n :param station: Station code. Supports ``fnmatch`` wildcards, e.g.\n ``*`` and ``?``.\n :type location: str\n :param location: Location code. Supports ``fnmatch`` wildcards, e.g.\n ``*`` and ``?``.\n :type channel: str\n :param channel: Channel code. 
Supports ``fnmatch`` wildcards, e.g.\n ``*`` and ``?``.\n :type cache: bool\n :param cache: Subsequent function calls are cached, use ``cache=False``\n to force fetching station metadata again from the server.\n :rtype: list\n :returns: list of 2-tuples (or 4-tuples with ``level='channel'``) with\n network/station (network/station/location/channel, respectively)\n code combinations for which data is served by the server.\n \"\"\"\n if level not in ('station', 'channel'):\n msg = \"Invalid option for 'level': '%s'\" % str(level)\n raise ValueError(msg)\n if level == 'station' and \\\n any(x is not None for x in (location, channel)):\n msg = (\"location and channel options are ignored in get_info() if \"\n \"level='station'.\")\n warnings.warn(msg)\n # deteremine if we have a usable cache and check if it is at least the\n # requested level of detail\n if cache and self._station_cache is not None \\\n and level in ('station', self._station_cache_level):\n if level == 'station':\n if self._station_cache_level == 'station':\n info = [(net, sta) for net, sta in self._station_cache\n if fnmatch.fnmatch(net, network or '*') and\n fnmatch.fnmatch(sta, station or '*')]\n return sorted(info)\n else:\n info = [(net, sta) for net, sta, loc, cha\n in self._station_cache\n if fnmatch.fnmatch(net, network or '*') and\n fnmatch.fnmatch(sta, station or '*')]\n return sorted(set(info))\n info = [(net, sta, loc, cha) for net, sta, loc, cha in\n self._station_cache if\n fnmatch.fnmatch(net, network or '*') and\n fnmatch.fnmatch(sta, station or '*') and\n fnmatch.fnmatch(loc, location or '*') and\n fnmatch.fnmatch(cha, channel or '*')]\n return sorted(info)\n\n self._init_client()\n if level == 'station':\n self._slclient.infolevel = \"STATIONS\"\n elif level == 'channel':\n self._slclient.infolevel = \"STREAMS\"\n self._slclient.verbose = 1\n self._connect()\n self._slclient.initialize()\n # self._slclient.run()\n self._slclient.run(packet_handler=self._packet_handler)\n info = self._slclient.slconn.info_string\n try:\n xml = etree.fromstring(info)\n except ValueError as e:\n msg = 'Unicode strings with encoding declaration are not supported'\n if msg not in str(e):\n raise\n parser = etree.XMLParser(encoding='utf-8')\n xml = etree.fromstring(info.encode('utf-8'), parser=parser)\n station_cache = set()\n for tag in xml.xpath('./station'):\n net = tag.attrib['network']\n sta = tag.attrib['name']\n item = (net, sta)\n if level == 'channel':\n subtags = tag.xpath('./stream')\n for subtag in subtags:\n loc = subtag.attrib['location']\n cha = subtag.attrib['seedname']\n station_cache.add(item + (loc, cha))\n # If no data is in ring buffer (e.g. station outage?) 
then it\n # seems the seedlink server replies with no subtags for the\n # channels\n if not subtags:\n station_cache.add(item + (None, None))\n else:\n station_cache.add(item)\n # change results to an Inventory object\n self._station_cache = station_cache\n self._station_cache_level = level\n return self.get_info(\n network=network, station=station, location=location,\n channel=channel, cache=True, level=level)\n\n def _packet_handler(self, count, slpack):\n \"\"\"\n Custom packet handler that accumulates all waveform packets in a\n stream.\n \"\"\"\n # check if not a complete packet\n if slpack is None or (slpack == SLPacket.SLNOPACKET) or \\\n (slpack == SLPacket.SLERROR):\n return False\n\n # get basic packet info\n type_ = slpack.get_type()\n if self.debug:\n print(type_)\n\n # process INFO packets here\n if type_ == SLPacket.TYPE_SLINF:\n if self.debug:\n print(SLPacket.TYPE_SLINF)\n return False\n elif type_ == SLPacket.TYPE_SLINFT:\n if self.debug:\n print(\"Complete INFO:\" + self.slconn.get_info_string())\n return False\n\n # process packet data\n trace = slpack.get_trace()\n if trace is None:\n if self.debug:\n print(\"Blockette contains no trace\")\n return False\n\n # new samples add to the main stream which is then trimmed\n self.stream += trace\n self.stream.merge(-1)\n return False\n\n\nif __name__ == '__main__':\n import doctest\n doctest.testmod(exclude_empty=True)\n", "path": "obspy/clients/seedlink/basic_client.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nSeedLink request client for ObsPy.\n\n:copyright:\n The ObsPy Development Team ([email protected])\n:license:\n GNU Lesser General Public License, Version 3\n (https://www.gnu.org/copyleft/lesser.html)\n\"\"\"\nfrom __future__ import (absolute_import, division, print_function,\n unicode_literals)\nfrom future.builtins import * # NOQA @UnusedWildImport\n\nimport fnmatch\nimport warnings\n\nfrom lxml import etree\n\nfrom obspy import Stream\nfrom .slclient import SLClient, SLPacket\nfrom .client.seedlinkconnection import SeedLinkConnection\n\n\nclass Client(object):\n \"\"\"\n SeedLink request client.\n\n This client is intended for requests of specific, finite time windows.\n To work with continuous realtime data streams please see\n :class:`~obspy.clients.seedlink.slclient.SLClient` and\n :class:`~obspy.clients.seedlink.easyseedlink.EasySeedLinkClient`.\n\n :type server: str\n :param server: Server name or IP address to connect to (e.g.\n \"localhost\", \"rtserver.ipgp.fr\")\n :type port: int\n :param port: Port at which the seedlink server is operating (default is\n `18000`).\n :type timeout: float\n :param timeout: Network timeout for low-level network connection in\n seconds.\n :type debug: bool\n :param debug: Switches on debugging output.\n \"\"\"\n def __init__(self, server, port=18000, timeout=20, debug=False):\n \"\"\"\n Initializes the SeedLink request client.\n \"\"\"\n self.timeout = timeout\n self.debug = debug\n self.loglevel = debug and \"DEBUG\" or \"CRITICAL\"\n self._server_url = \"%s:%i\" % (server, port)\n self._station_cache = None\n self._station_cache_level = None\n\n def _init_client(self):\n \"\"\"\n Make fresh connection to seedlink server\n\n Should be done before any request to server, since SLClient keeps\n things like multiselect etc for subsequent requests\n \"\"\"\n self._slclient = SLClient(loglevel=self.loglevel, timeout=self.timeout)\n\n def _connect(self):\n \"\"\"\n Open new connection to seedlink server.\n \"\"\"\n self._slclient.slconn = 
SeedLinkConnection(timeout=self.timeout)\n self._slclient.slconn.set_sl_address(self._server_url)\n self._slclient.slconn.netto = self.timeout\n\n def get_waveforms(self, network, station, location, channel, starttime,\n endtime):\n \"\"\"\n Request waveform data from the seedlink server.\n\n >>> from obspy import UTCDateTime\n >>> client = Client('rtserver.ipgp.fr')\n >>> t = UTCDateTime() - 1500\n >>> st = client.get_waveforms(\"G\", \"FDFM\", \"00\", \"BHZ\", t, t + 5)\n >>> print(st) # doctest: +ELLIPSIS\n 1 Trace(s) in Stream:\n G.FDFM.00.BHZ | 20... | 20.0 Hz, ... samples\n\n Most servers support '?' single-character wildcard in location and\n channel code fields:\n\n >>> st = client.get_waveforms(\"G\", \"FDFM\", \"??\", \"B??\", t, t + 5)\n >>> st = st.sort(reverse=True)\n >>> print(st) # doctest: +ELLIPSIS\n 6 Trace(s) in Stream:\n G.FDFM.10.BHZ | 20... | 20.0 Hz, ... samples\n G.FDFM.10.BHN | 20... | 20.0 Hz, ... samples\n G.FDFM.10.BHE | 20... | 20.0 Hz, ... samples\n G.FDFM.00.BHZ | 20... | 20.0 Hz, ... samples\n G.FDFM.00.BHN | 20... | 20.0 Hz, ... samples\n G.FDFM.00.BHE | 20... | 20.0 Hz, ... samples\n\n Depending on server capabilities, '*' multi-character wildcards might\n work in any parameter:\n\n >>> st = client.get_waveforms(\"*\", \"FDFM\", \"*\", \"B*\", t, t + 5)\n >>> st = st.sort(reverse=True)\n >>> print(st) # doctest: +ELLIPSIS\n 6 Trace(s) in Stream:\n G.FDFM.10.BHZ | 20... | 20.0 Hz, ... samples\n G.FDFM.10.BHN | 20... | 20.0 Hz, ... samples\n G.FDFM.10.BHE | 20... | 20.0 Hz, ... samples\n G.FDFM.00.BHZ | 20... | 20.0 Hz, ... samples\n G.FDFM.00.BHN | 20... | 20.0 Hz, ... samples\n G.FDFM.00.BHE | 20... | 20.0 Hz, ... samples\n\n .. note::\n\n Support of wildcards strongly depends on the queried seedlink\n server. In general, '?' as single character wildcard seems to work\n well in location code and channel code fields for most servers.\n Usage of '*' relies on the server supporting info requests on\n station or even channel level, see :meth:`Client.get_info()`.\n\n :type network: str\n :param network: Network code. See note on wildcards above.\n :type station: str\n :param station: Station code. See note on wildcards above.\n :type location: str\n :param location: Location code. See note on wildcards above.\n :type channel: str\n :param channel: Channel code. See note on wildcards above.\n :type starttime: :class:`~obspy.core.utcdatetime.UTCDateTime`\n :param starttime: Start time of requested time window.\n :type endtime: :class:`~obspy.core.utcdatetime.UTCDateTime`\n :param endtime: End time of requested time window.\n \"\"\"\n # need to do an info request?\n if any('*' in x for x in (network, station, location, channel)) \\\n or ('?' in x for x in (network, station)):\n # need to do an info request on channel level?\n if any('*' in x for x in (location, channel)):\n info = self.get_info(network=network, station=station,\n location=location, channel=channel,\n level='channel', cache=True)\n multiselect = [\"%s_%s:%s%s\" % (net, sta, loc, cha)\n for net, sta, loc, cha in info]\n # otherwise keep location/channel wildcards and do request on\n # station level only\n else:\n info = self.get_info(network=network, station=station,\n level='station', cache=True)\n multiselect = [\"%s_%s:%s%s\" % (net, sta, location, channel)\n for net, sta in info]\n multiselect = ','.join(multiselect)\n return self._multiselect_request(multiselect, starttime, endtime)\n\n # if no info request is needed, we just work with the given input\n # (might have some '?' 
wildcards in loc/cha)\n if len(location) > 2:\n msg = (\"Location code ('%s') only supports a maximum of 2 \"\n \"characters.\") % location\n raise ValueError(msg)\n elif len(location) == 1:\n msg = (\"Single character location codes that are not an '*' are \"\n \"untested.\")\n warnings.warn(msg)\n if location:\n loccha = \"%2s%3s\" % (location, channel)\n else:\n loccha = channel\n seedlink_id = \"%s_%s:%s\" % (network, station, loccha)\n return self._multiselect_request(seedlink_id, starttime, endtime)\n\n def _multiselect_request(self, multiselect, starttime, endtime):\n \"\"\"\n Make a multiselect request to underlying seedlink client\n\n Multiselect string is one or more comma separated\n network/station/location/channel combinations as defined by seedlink\n standard, e.g.\n \"NETWORK_STATION:LOCATIONCHANNEL,NETWORK_STATION:LOCATIONCHANNEL\"\n where location+channel may contain '?' characters but should be exactly\n 5 characters long.\n\n :rtype: :class:`~obspy.core.stream.Stream`\n \"\"\"\n self._init_client()\n self._slclient.multiselect = multiselect\n self._slclient.begin_time = starttime\n self._slclient.end_time = endtime\n self._connect()\n self._slclient.initialize()\n self.stream = Stream()\n self._slclient.run(packet_handler=self._packet_handler)\n stream = self.stream\n stream.trim(starttime, endtime)\n self.stream = None\n stream.sort()\n return stream\n\n def get_info(self, network=None, station=None, location=None, channel=None,\n level='station', cache=True):\n \"\"\"\n Request available stations information from the seedlink server.\n\n Supports ``fnmatch`` wildcards, e.g. ``*`` and ``?``, in ``network``,\n ``station``, ``location`` and ``channel``.\n\n >>> client = Client('rtserver.ipgp.fr')\n >>> info = client.get_info(station=\"FDFM\")\n >>> print(info)\n [('G', 'FDFM')]\n >>> info = client.get_info(station=\"FD?M\", channel='*Z',\n ... level='channel')\n >>> print(info) # doctest: +NORMALIZE_WHITESPACE\n [('G', 'FDFM', '00', 'BHZ'), ('G', 'FDFM', '00', 'HHZ'),\n ('G', 'FDFM', '00', 'HNZ'), ('G', 'FDFM', '00', 'LHZ'),\n ('G', 'FDFM', '10', 'BHZ'), ('G', 'FDFM', '10', 'HHZ'),\n ('G', 'FDFM', '10', 'LHZ')]\n\n Available station information is cached after the first request to the\n server, so use ``cache=False`` on subsequent requests if there is a\n need to force fetching new information from the server (should only\n concern programs running in background for a very long time).\n\n :type network: str\n :param network: Network code. Supports ``fnmatch`` wildcards, e.g.\n ``*`` and ``?``.\n :type station: str\n :param station: Station code. Supports ``fnmatch`` wildcards, e.g.\n ``*`` and ``?``.\n :type location: str\n :param location: Location code. Supports ``fnmatch`` wildcards, e.g.\n ``*`` and ``?``.\n :type channel: str\n :param channel: Channel code. 
Supports ``fnmatch`` wildcards, e.g.\n ``*`` and ``?``.\n :type cache: bool\n :param cache: Subsequent function calls are cached, use ``cache=False``\n to force fetching station metadata again from the server.\n :rtype: list\n :returns: list of 2-tuples (or 4-tuples with ``level='channel'``) with\n network/station (network/station/location/channel, respectively)\n code combinations for which data is served by the server.\n \"\"\"\n if level not in ('station', 'channel'):\n msg = \"Invalid option for 'level': '%s'\" % str(level)\n raise ValueError(msg)\n if level == 'station' and \\\n any(x is not None for x in (location, channel)):\n msg = (\"location and channel options are ignored in get_info() if \"\n \"level='station'.\")\n warnings.warn(msg)\n # deteremine if we have a usable cache and check if it is at least the\n # requested level of detail\n if cache and self._station_cache is not None \\\n and level in ('station', self._station_cache_level):\n if level == 'station':\n if self._station_cache_level == 'station':\n info = [(net, sta) for net, sta in self._station_cache\n if fnmatch.fnmatch(net, network or '*') and\n fnmatch.fnmatch(sta, station or '*')]\n return sorted(info)\n else:\n info = [(net, sta) for net, sta, loc, cha\n in self._station_cache\n if fnmatch.fnmatch(net, network or '*') and\n fnmatch.fnmatch(sta, station or '*')]\n return sorted(set(info))\n info = [(net, sta, loc, cha) for net, sta, loc, cha in\n self._station_cache if\n fnmatch.fnmatch(net, network or '*') and\n fnmatch.fnmatch(sta, station or '*') and\n fnmatch.fnmatch(loc, location or '*') and\n fnmatch.fnmatch(cha, channel or '*')]\n return sorted(info)\n\n self._init_client()\n if level == 'station':\n self._slclient.infolevel = \"STATIONS\"\n elif level == 'channel':\n self._slclient.infolevel = \"STREAMS\"\n self._slclient.verbose = 1\n self._connect()\n self._slclient.initialize()\n # self._slclient.run()\n self._slclient.run(packet_handler=self._packet_handler)\n info = self._slclient.slconn.info_string\n try:\n xml = etree.fromstring(info)\n except ValueError as e:\n msg = 'Unicode strings with encoding declaration are not supported'\n if msg not in str(e):\n raise\n parser = etree.XMLParser(encoding='utf-8')\n xml = etree.fromstring(info.encode('utf-8'), parser=parser)\n station_cache = set()\n for tag in xml.xpath('./station'):\n net = tag.attrib['network']\n sta = tag.attrib['name']\n item = (net, sta)\n if level == 'channel':\n subtags = tag.xpath('./stream')\n for subtag in subtags:\n loc = subtag.attrib['location']\n cha = subtag.attrib['seedname']\n station_cache.add(item + (loc, cha))\n # If no data is in ring buffer (e.g. station outage?) 
then it\n # seems the seedlink server replies with no subtags for the\n # channels\n if not subtags:\n station_cache.add(item + (None, None))\n else:\n station_cache.add(item)\n # change results to an Inventory object\n self._station_cache = station_cache\n self._station_cache_level = level\n return self.get_info(\n network=network, station=station, location=location,\n channel=channel, cache=True, level=level)\n\n def _packet_handler(self, count, slpack):\n \"\"\"\n Custom packet handler that accumulates all waveform packets in a\n stream.\n \"\"\"\n # check if not a complete packet\n if slpack is None or (slpack == SLPacket.SLNOPACKET) or \\\n (slpack == SLPacket.SLERROR):\n return False\n\n # get basic packet info\n type_ = slpack.get_type()\n if self.debug:\n print(type_)\n\n # process INFO packets here\n if type_ == SLPacket.TYPE_SLINF:\n if self.debug:\n print(SLPacket.TYPE_SLINF)\n return False\n elif type_ == SLPacket.TYPE_SLINFT:\n if self.debug:\n print(\"Complete INFO:\",\n self._slclient.slconn.get_info_string())\n return False\n\n # process packet data\n trace = slpack.get_trace()\n if trace is None:\n if self.debug:\n print(\"Blockette contains no trace\")\n return False\n\n # new samples add to the main stream which is then trimmed\n self.stream += trace\n self.stream.merge(-1)\n return False\n\n\nif __name__ == '__main__':\n import doctest\n doctest.testmod(exclude_empty=True)\n", "path": "obspy/clients/seedlink/basic_client.py"}]} |
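A short aside on the record above: the reported `AttributeError` comes from `_packet_handler` living on the wrapper `Client`, while the SeedLink connection object is stored on the inner `SLClient`. A minimal sketch of that attribute chain, using simplified stand-in classes rather than the real obspy ones:

```python
# Simplified stand-ins, for illustration only -- not the real obspy classes.
class SeedLinkConnection:
    def get_info_string(self) -> str:
        return "<station info ...>"


class SLClient:
    def __init__(self) -> None:
        self.slconn = SeedLinkConnection()


class Client:
    def __init__(self) -> None:
        self.debug = True
        self._slclient = SLClient()  # the wrapper only holds an SLClient

    def _packet_handler(self, count, slpack):
        if self.debug:
            # `self.slconn` does not exist on Client (hence the AttributeError);
            # the connection is reachable through the wrapped SLClient instead:
            print("Complete INFO:", self._slclient.slconn.get_info_string())
        return False


Client()._packet_handler(0, object())  # prints: Complete INFO: <station info ...>
```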
gh_patches_debug_1108 | rasdani/github-patches | git_diff | python-poetry__poetry-4733 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Incorrect requirements.txt formatting in poetry export
The `requirements.txt` format needs a space in front of the semicolon that separates the package specifier from its pyversion and platform constraints (environment markers). Right now, without the space, the semicolon is interpreted as part of a URL. See this issue in `packaging`:
https://github.com/pypa/packaging/issues/456
--- END ISSUE ---
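For context, a minimal sketch of the parsing difference the issue describes, using `packaging.requirements.Requirement` on a direct-URL requirement; the exact outcome without the space (marker absorbed into the URL versus a parse error) depends on the installed `packaging` version:

```python
from packaging.requirements import Requirement

# With a space before ';', the marker part is parsed as an environment marker.
ok = Requirement(
    'foo @ https://example.com/foo-1.0-py3-none-any.whl ; python_version >= "3.6"'
)
print(ok.url)     # https://example.com/foo-1.0-py3-none-any.whl
print(ok.marker)  # python_version >= "3.6"

# Without the space, the semicolon is treated as part of the URL (or the line
# fails to parse, depending on the packaging version) -- the behaviour that
# pypa/packaging#456 describes.
bad = 'foo @ https://example.com/foo-1.0-py3-none-any.whl; python_version >= "3.6"'
try:
    print(Requirement(bad).url)  # the ';...' may end up inside the URL here
except Exception as exc:         # InvalidRequirement on some packaging versions
    print(exc)
```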
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `poetry/utils/exporter.py`
Content:
```
1 import urllib.parse
2
3 from pathlib import Path
4 from typing import Optional
5 from typing import Sequence
6 from typing import Union
7
8 from cleo.io.io import IO
9
10 from poetry.core.packages.utils.utils import path_to_url
11 from poetry.poetry import Poetry
12 from poetry.utils._compat import decode
13
14
15 class Exporter:
16 """
17 Exporter class to export a lock file to alternative formats.
18 """
19
20 FORMAT_REQUIREMENTS_TXT = "requirements.txt"
21 #: The names of the supported export formats.
22 ACCEPTED_FORMATS = (FORMAT_REQUIREMENTS_TXT,)
23 ALLOWED_HASH_ALGORITHMS = ("sha256", "sha384", "sha512")
24
25 def __init__(self, poetry: Poetry) -> None:
26 self._poetry = poetry
27
28 def export(
29 self,
30 fmt: str,
31 cwd: Path,
32 output: Union[IO, str],
33 with_hashes: bool = True,
34 dev: bool = False,
35 extras: Optional[Union[bool, Sequence[str]]] = None,
36 with_credentials: bool = False,
37 ) -> None:
38 if fmt not in self.ACCEPTED_FORMATS:
39 raise ValueError(f"Invalid export format: {fmt}")
40
41 getattr(self, "_export_{}".format(fmt.replace(".", "_")))(
42 cwd,
43 output,
44 with_hashes=with_hashes,
45 dev=dev,
46 extras=extras,
47 with_credentials=with_credentials,
48 )
49
50 def _export_requirements_txt(
51 self,
52 cwd: Path,
53 output: Union[IO, str],
54 with_hashes: bool = True,
55 dev: bool = False,
56 extras: Optional[Union[bool, Sequence[str]]] = None,
57 with_credentials: bool = False,
58 ) -> None:
59 indexes = set()
60 content = ""
61 dependency_lines = set()
62
63 for dependency_package in self._poetry.locker.get_project_dependency_packages(
64 project_requires=self._poetry.package.all_requires, dev=dev, extras=extras
65 ):
66 line = ""
67
68 dependency = dependency_package.dependency
69 package = dependency_package.package
70
71 if package.develop:
72 line += "-e "
73
74 requirement = dependency.to_pep_508(with_extras=False)
75 is_direct_local_reference = (
76 dependency.is_file() or dependency.is_directory()
77 )
78 is_direct_remote_reference = dependency.is_vcs() or dependency.is_url()
79
80 if is_direct_remote_reference:
81 line = requirement
82 elif is_direct_local_reference:
83 dependency_uri = path_to_url(dependency.source_url)
84 line = f"{dependency.name} @ {dependency_uri}"
85 else:
86 line = f"{package.name}=={package.version}"
87
88 if not is_direct_remote_reference:
89 if ";" in requirement:
90 markers = requirement.split(";", 1)[1].strip()
91 if markers:
92 line += f"; {markers}"
93
94 if (
95 not is_direct_remote_reference
96 and not is_direct_local_reference
97 and package.source_url
98 ):
99 indexes.add(package.source_url)
100
101 if package.files and with_hashes:
102 hashes = []
103 for f in package.files:
104 h = f["hash"]
105 algorithm = "sha256"
106 if ":" in h:
107 algorithm, h = h.split(":")
108
109 if algorithm not in self.ALLOWED_HASH_ALGORITHMS:
110 continue
111
112 hashes.append(f"{algorithm}:{h}")
113
114 if hashes:
115 line += " \\\n"
116 for i, h in enumerate(hashes):
117 line += " --hash={}{}".format(
118 h, " \\\n" if i < len(hashes) - 1 else ""
119 )
120 dependency_lines.add(line)
121
122 content += "\n".join(sorted(dependency_lines))
123 content += "\n"
124
125 if indexes:
126 # If we have extra indexes, we add them to the beginning of the output
127 indexes_header = ""
128 for index in sorted(indexes):
129 repositories = [
130 r
131 for r in self._poetry.pool.repositories
132 if r.url == index.rstrip("/")
133 ]
134 if not repositories:
135 continue
136 repository = repositories[0]
137 if (
138 self._poetry.pool.has_default()
139 and repository is self._poetry.pool.repositories[0]
140 ):
141 url = (
142 repository.authenticated_url
143 if with_credentials
144 else repository.url
145 )
146 indexes_header = f"--index-url {url}\n"
147 continue
148
149 url = (
150 repository.authenticated_url if with_credentials else repository.url
151 )
152 parsed_url = urllib.parse.urlsplit(url)
153 if parsed_url.scheme == "http":
154 indexes_header += f"--trusted-host {parsed_url.netloc}\n"
155 indexes_header += f"--extra-index-url {url}\n"
156
157 content = indexes_header + "\n" + content
158
159 self._output(content, cwd, output)
160
161 def _output(self, content: str, cwd: Path, output: Union[IO, str]) -> None:
162 decoded = decode(content)
163 try:
164 output.write(decoded)
165 except AttributeError:
166 filepath = cwd / output
167 with filepath.open("w", encoding="utf-8") as f:
168 f.write(decoded)
169
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/poetry/utils/exporter.py b/poetry/utils/exporter.py
--- a/poetry/utils/exporter.py
+++ b/poetry/utils/exporter.py
@@ -89,7 +89,7 @@
if ";" in requirement:
markers = requirement.split(";", 1)[1].strip()
if markers:
- line += f"; {markers}"
+ line += f" ; {markers}"
if (
not is_direct_remote_reference
| {"golden_diff": "diff --git a/poetry/utils/exporter.py b/poetry/utils/exporter.py\n--- a/poetry/utils/exporter.py\n+++ b/poetry/utils/exporter.py\n@@ -89,7 +89,7 @@\n if \";\" in requirement:\n markers = requirement.split(\";\", 1)[1].strip()\n if markers:\n- line += f\"; {markers}\"\n+ line += f\" ; {markers}\"\n \n if (\n not is_direct_remote_reference\n", "issue": "Incorrect requirements.txt formatting in poetry export \nThe `requirements.txt` format needs to put a space in front of the semicolon that specifies the package and the pyversion and platform constraints. Right now, without the space, the semicolon will be interpreted as part of a URL. See this issue in `packaging`:\r\nhttps://github.com/pypa/packaging/issues/456\n", "before_files": [{"content": "import urllib.parse\n\nfrom pathlib import Path\nfrom typing import Optional\nfrom typing import Sequence\nfrom typing import Union\n\nfrom cleo.io.io import IO\n\nfrom poetry.core.packages.utils.utils import path_to_url\nfrom poetry.poetry import Poetry\nfrom poetry.utils._compat import decode\n\n\nclass Exporter:\n \"\"\"\n Exporter class to export a lock file to alternative formats.\n \"\"\"\n\n FORMAT_REQUIREMENTS_TXT = \"requirements.txt\"\n #: The names of the supported export formats.\n ACCEPTED_FORMATS = (FORMAT_REQUIREMENTS_TXT,)\n ALLOWED_HASH_ALGORITHMS = (\"sha256\", \"sha384\", \"sha512\")\n\n def __init__(self, poetry: Poetry) -> None:\n self._poetry = poetry\n\n def export(\n self,\n fmt: str,\n cwd: Path,\n output: Union[IO, str],\n with_hashes: bool = True,\n dev: bool = False,\n extras: Optional[Union[bool, Sequence[str]]] = None,\n with_credentials: bool = False,\n ) -> None:\n if fmt not in self.ACCEPTED_FORMATS:\n raise ValueError(f\"Invalid export format: {fmt}\")\n\n getattr(self, \"_export_{}\".format(fmt.replace(\".\", \"_\")))(\n cwd,\n output,\n with_hashes=with_hashes,\n dev=dev,\n extras=extras,\n with_credentials=with_credentials,\n )\n\n def _export_requirements_txt(\n self,\n cwd: Path,\n output: Union[IO, str],\n with_hashes: bool = True,\n dev: bool = False,\n extras: Optional[Union[bool, Sequence[str]]] = None,\n with_credentials: bool = False,\n ) -> None:\n indexes = set()\n content = \"\"\n dependency_lines = set()\n\n for dependency_package in self._poetry.locker.get_project_dependency_packages(\n project_requires=self._poetry.package.all_requires, dev=dev, extras=extras\n ):\n line = \"\"\n\n dependency = dependency_package.dependency\n package = dependency_package.package\n\n if package.develop:\n line += \"-e \"\n\n requirement = dependency.to_pep_508(with_extras=False)\n is_direct_local_reference = (\n dependency.is_file() or dependency.is_directory()\n )\n is_direct_remote_reference = dependency.is_vcs() or dependency.is_url()\n\n if is_direct_remote_reference:\n line = requirement\n elif is_direct_local_reference:\n dependency_uri = path_to_url(dependency.source_url)\n line = f\"{dependency.name} @ {dependency_uri}\"\n else:\n line = f\"{package.name}=={package.version}\"\n\n if not is_direct_remote_reference:\n if \";\" in requirement:\n markers = requirement.split(\";\", 1)[1].strip()\n if markers:\n line += f\"; {markers}\"\n\n if (\n not is_direct_remote_reference\n and not is_direct_local_reference\n and package.source_url\n ):\n indexes.add(package.source_url)\n\n if package.files and with_hashes:\n hashes = []\n for f in package.files:\n h = f[\"hash\"]\n algorithm = \"sha256\"\n if \":\" in h:\n algorithm, h = h.split(\":\")\n\n if algorithm not in self.ALLOWED_HASH_ALGORITHMS:\n 
continue\n\n hashes.append(f\"{algorithm}:{h}\")\n\n if hashes:\n line += \" \\\\\\n\"\n for i, h in enumerate(hashes):\n line += \" --hash={}{}\".format(\n h, \" \\\\\\n\" if i < len(hashes) - 1 else \"\"\n )\n dependency_lines.add(line)\n\n content += \"\\n\".join(sorted(dependency_lines))\n content += \"\\n\"\n\n if indexes:\n # If we have extra indexes, we add them to the beginning of the output\n indexes_header = \"\"\n for index in sorted(indexes):\n repositories = [\n r\n for r in self._poetry.pool.repositories\n if r.url == index.rstrip(\"/\")\n ]\n if not repositories:\n continue\n repository = repositories[0]\n if (\n self._poetry.pool.has_default()\n and repository is self._poetry.pool.repositories[0]\n ):\n url = (\n repository.authenticated_url\n if with_credentials\n else repository.url\n )\n indexes_header = f\"--index-url {url}\\n\"\n continue\n\n url = (\n repository.authenticated_url if with_credentials else repository.url\n )\n parsed_url = urllib.parse.urlsplit(url)\n if parsed_url.scheme == \"http\":\n indexes_header += f\"--trusted-host {parsed_url.netloc}\\n\"\n indexes_header += f\"--extra-index-url {url}\\n\"\n\n content = indexes_header + \"\\n\" + content\n\n self._output(content, cwd, output)\n\n def _output(self, content: str, cwd: Path, output: Union[IO, str]) -> None:\n decoded = decode(content)\n try:\n output.write(decoded)\n except AttributeError:\n filepath = cwd / output\n with filepath.open(\"w\", encoding=\"utf-8\") as f:\n f.write(decoded)\n", "path": "poetry/utils/exporter.py"}], "after_files": [{"content": "import urllib.parse\n\nfrom pathlib import Path\nfrom typing import Optional\nfrom typing import Sequence\nfrom typing import Union\n\nfrom cleo.io.io import IO\n\nfrom poetry.core.packages.utils.utils import path_to_url\nfrom poetry.poetry import Poetry\nfrom poetry.utils._compat import decode\n\n\nclass Exporter:\n \"\"\"\n Exporter class to export a lock file to alternative formats.\n \"\"\"\n\n FORMAT_REQUIREMENTS_TXT = \"requirements.txt\"\n #: The names of the supported export formats.\n ACCEPTED_FORMATS = (FORMAT_REQUIREMENTS_TXT,)\n ALLOWED_HASH_ALGORITHMS = (\"sha256\", \"sha384\", \"sha512\")\n\n def __init__(self, poetry: Poetry) -> None:\n self._poetry = poetry\n\n def export(\n self,\n fmt: str,\n cwd: Path,\n output: Union[IO, str],\n with_hashes: bool = True,\n dev: bool = False,\n extras: Optional[Union[bool, Sequence[str]]] = None,\n with_credentials: bool = False,\n ) -> None:\n if fmt not in self.ACCEPTED_FORMATS:\n raise ValueError(f\"Invalid export format: {fmt}\")\n\n getattr(self, \"_export_{}\".format(fmt.replace(\".\", \"_\")))(\n cwd,\n output,\n with_hashes=with_hashes,\n dev=dev,\n extras=extras,\n with_credentials=with_credentials,\n )\n\n def _export_requirements_txt(\n self,\n cwd: Path,\n output: Union[IO, str],\n with_hashes: bool = True,\n dev: bool = False,\n extras: Optional[Union[bool, Sequence[str]]] = None,\n with_credentials: bool = False,\n ) -> None:\n indexes = set()\n content = \"\"\n dependency_lines = set()\n\n for dependency_package in self._poetry.locker.get_project_dependency_packages(\n project_requires=self._poetry.package.all_requires, dev=dev, extras=extras\n ):\n line = \"\"\n\n dependency = dependency_package.dependency\n package = dependency_package.package\n\n if package.develop:\n line += \"-e \"\n\n requirement = dependency.to_pep_508(with_extras=False)\n is_direct_local_reference = (\n dependency.is_file() or dependency.is_directory()\n )\n is_direct_remote_reference = 
dependency.is_vcs() or dependency.is_url()\n\n if is_direct_remote_reference:\n line = requirement\n elif is_direct_local_reference:\n dependency_uri = path_to_url(dependency.source_url)\n line = f\"{dependency.name} @ {dependency_uri}\"\n else:\n line = f\"{package.name}=={package.version}\"\n\n if not is_direct_remote_reference:\n if \";\" in requirement:\n markers = requirement.split(\";\", 1)[1].strip()\n if markers:\n line += f\" ; {markers}\"\n\n if (\n not is_direct_remote_reference\n and not is_direct_local_reference\n and package.source_url\n ):\n indexes.add(package.source_url)\n\n if package.files and with_hashes:\n hashes = []\n for f in package.files:\n h = f[\"hash\"]\n algorithm = \"sha256\"\n if \":\" in h:\n algorithm, h = h.split(\":\")\n\n if algorithm not in self.ALLOWED_HASH_ALGORITHMS:\n continue\n\n hashes.append(f\"{algorithm}:{h}\")\n\n if hashes:\n line += \" \\\\\\n\"\n for i, h in enumerate(hashes):\n line += \" --hash={}{}\".format(\n h, \" \\\\\\n\" if i < len(hashes) - 1 else \"\"\n )\n dependency_lines.add(line)\n\n content += \"\\n\".join(sorted(dependency_lines))\n content += \"\\n\"\n\n if indexes:\n # If we have extra indexes, we add them to the beginning of the output\n indexes_header = \"\"\n for index in sorted(indexes):\n repositories = [\n r\n for r in self._poetry.pool.repositories\n if r.url == index.rstrip(\"/\")\n ]\n if not repositories:\n continue\n repository = repositories[0]\n if (\n self._poetry.pool.has_default()\n and repository is self._poetry.pool.repositories[0]\n ):\n url = (\n repository.authenticated_url\n if with_credentials\n else repository.url\n )\n indexes_header = f\"--index-url {url}\\n\"\n continue\n\n url = (\n repository.authenticated_url if with_credentials else repository.url\n )\n parsed_url = urllib.parse.urlsplit(url)\n if parsed_url.scheme == \"http\":\n indexes_header += f\"--trusted-host {parsed_url.netloc}\\n\"\n indexes_header += f\"--extra-index-url {url}\\n\"\n\n content = indexes_header + \"\\n\" + content\n\n self._output(content, cwd, output)\n\n def _output(self, content: str, cwd: Path, output: Union[IO, str]) -> None:\n decoded = decode(content)\n try:\n output.write(decoded)\n except AttributeError:\n filepath = cwd / output\n with filepath.open(\"w\", encoding=\"utf-8\") as f:\n f.write(decoded)\n", "path": "poetry/utils/exporter.py"}]} |
gh_patches_debug_1109 | rasdani/github-patches | git_diff | lk-geimfari__mimesis-433 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix mypy issues
There are several things to consider:
1. Fixing bugs like this one: https://travis-ci.org/lk-geimfari/mimesis/jobs/361128185#L600
2. Adding new options to `mypy` to make it stricter: https://github.com/wemake-services/wemake-django-template/blob/master/%7B%7Bcookiecutter.project_name%7D%7D/setup.cfg#L67
3. Adding the `tests` folder to the `mypy` run, so that not only the `mimesis/` folder is checked
I can do it, if @lk-geimfari does not have anything to add/comment.
--- END ISSUE ---
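Regarding point 2, a sketch of the kind of stricter `mypy` configuration being referred to; the option names below are standard mypy settings, but the exact set used by the linked template may differ:

```ini
# setup.cfg (sketch)
[mypy]
check_untyped_defs = True
disallow_untyped_defs = True
disallow_any_generics = True
no_implicit_optional = True
warn_redundant_casts = True
warn_unused_ignores = True
warn_return_any = True

# Point 3: run mypy over tests as well (e.g. `mypy mimesis tests`),
# optionally relaxing some rules for test code:
[mypy-tests.*]
disallow_untyped_defs = False
```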
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mimesis/providers/payment.py`
Content:
```
1 """Provides data related to payment."""
2
3 import re
4 import string
5 from typing import Optional
6
7 from mimesis.data import CREDIT_CARD_NETWORKS
8 from mimesis.enums import CardType, Gender
9 from mimesis.exceptions import NonEnumerableError
10 from mimesis.helpers import get_random_item
11 from mimesis.providers.base import BaseDataProvider
12 from mimesis.providers.person import Person
13 from mimesis.utils import luhn_checksum
14
15 __all__ = ['Payment']
16
17
18 class Payment(BaseDataProvider):
19 """Class that provides data related to payments."""
20
21 def __init__(self, *args, **kwargs) -> None:
22 """Initialize attributes.
23
24 :param args: Arguments.
25 :param kwargs: Keyword arguments.
26 """
27 super().__init__(*args, **kwargs)
28 self.__person = Person('en', seed=self.seed)
29
30 def cid(self) -> int:
31 """Generate a random CID.
32
33 :return: CID code.
34
35 :Example:
36 7452
37 """
38 return self.random.randint(1000, 9999)
39
40 def paypal(self) -> str:
41 """Generate a random PayPal account.
42
43         :return: Email of PayPal user.
44
45 :Example:
46 [email protected]
47 """
48 return self.__person.email()
49
50 def bitcoin_address(self) -> str:
51 """Generate a random bitcoin address.
52
53 :return: Bitcoin address.
54
55 :Example:
56 3EktnHQD7RiAE6uzMj2ZifT9YgRrkSgzQX
57 """
58 type_ = self.random.choice(['1', '3'])
59 letters = string.ascii_letters + string.digits
60 return type_ + ''.join(
61 self.random.choice(letters) for _ in range(33))
62
63 def ethereum_address(self) -> str:
64 """Generate a random Ethereum address.
65
66         .. Note: The address will look like an Ethereum address,
67         but keep in mind that it is not a valid address.
68
69 :return: Ethereum address.
70
71 :Example:
72 0xe8ece9e6ff7dba52d4c07d37418036a89af9698d
73 """
74 bits = self.random.getrandbits(160)
75 address = bits.to_bytes(20, byteorder='big')
76 return '0x' + address.hex()
77
78 def credit_card_network(self) -> str:
79 """Generate a random credit card network.
80
81 :return: Credit card network
82
83 :Example:
84 MasterCard
85 """
86 return self.random.choice(CREDIT_CARD_NETWORKS)
87
88 def credit_card_number(self, card_type: Optional[CardType] = None) -> str:
89 """Generate a random credit card number.
90
91 :param card_type: Issuing Network. Default is Visa.
92 :return: Credit card number.
93         :raises NonEnumerableError: if card_type is not supported.
94
95 :Example:
96 4455 5299 1152 2450
97 """
98 length = 16
99 regex = re.compile('(\d{4})(\d{4})(\d{4})(\d{4})')
100
101 if card_type is None:
102 card_type = get_random_item(CardType, rnd=self.random)
103
104 if card_type == CardType.VISA:
105 number = self.random.randint(4000, 4999)
106 elif card_type == CardType.MASTER_CARD:
107 number = self.random.choice([
108 self.random.randint(2221, 2720),
109 self.random.randint(5100, 5500),
110 ])
111 elif card_type == CardType.AMERICAN_EXPRESS:
112 number = self.random.choice([34, 37])
113 length = 15
114 regex = re.compile('(\d{4})(\d{6})(\d{5})')
115 else:
116 raise NonEnumerableError(CardType)
117
118 str_num = str(number)
119 while len(str_num) < length - 1:
120 str_num += self.random.choice(string.digits)
121
122 groups = regex.search(str_num + luhn_checksum(str_num)).groups()
123 card = ' '.join(groups)
124 return card
125
126 def credit_card_expiration_date(self, minimum: int = 16,
127 maximum: int = 25) -> str:
128 """Generate a random expiration date for credit card.
129
130 :param minimum: Date of issue.
131 :param maximum: Maximum of expiration_date.
132 :return: Expiration date of credit card.
133
134 :Example:
135 03/19.
136 """
137 month = self.random.randint(1, 12)
138 year = self.random.randint(minimum, maximum)
139 return '{0:02d}/{1}'.format(month, year)
140
141 def cvv(self) -> int:
142 """Generate a random CVV.
143
144 :return: CVV code.
145
146 :Example:
147 324
148 """
149 return self.random.randint(100, 999)
150
151 def credit_card_owner(self, gender: Optional[Gender] = None) -> dict:
152 """Generate credit card owner.
153
154 :param gender: Gender of credit card owner.
155 :type gender: Gender's enum object.
156 :return:
157 """
158 owner = {
159 'credit_card': self.credit_card_number(),
160 'expiration_date': self.credit_card_expiration_date(),
161 'owner': self.__person.full_name(gender=gender).upper(),
162 }
163 return owner
164
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mimesis/providers/payment.py b/mimesis/providers/payment.py
--- a/mimesis/providers/payment.py
+++ b/mimesis/providers/payment.py
@@ -119,7 +119,9 @@
while len(str_num) < length - 1:
str_num += self.random.choice(string.digits)
- groups = regex.search(str_num + luhn_checksum(str_num)).groups()
+ groups = regex.search( # type: ignore
+ str_num + luhn_checksum(str_num),
+ ).groups()
card = ' '.join(groups)
return card
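For readers wondering why the accepted patch only adds `# type: ignore`: `re.search` is typed as returning `Optional[Match[str]]`, so calling `.groups()` directly on the result is what mypy flags on this line. A hedged alternative, shown purely as a sketch and not as what the project merged, is to narrow the optional value explicitly; the helper name `format_card_number` is made up here.

```python
import re

def format_card_number(regex: "re.Pattern[str]", str_num: str, check_digit: str) -> str:
    # regex.search returns Optional[Match[str]]; narrowing it keeps mypy happy
    # without a type-ignore comment.
    match = regex.search(str_num + check_digit)
    if match is None:
        # Cannot happen for a correctly padded number, but the branch makes the
        # None case explicit for the type checker.
        raise ValueError("card number does not match the grouping pattern")
    return ' '.join(match.groups())
```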
| {"golden_diff": "diff --git a/mimesis/providers/payment.py b/mimesis/providers/payment.py\n--- a/mimesis/providers/payment.py\n+++ b/mimesis/providers/payment.py\n@@ -119,7 +119,9 @@\n while len(str_num) < length - 1:\n str_num += self.random.choice(string.digits)\n \n- groups = regex.search(str_num + luhn_checksum(str_num)).groups()\n+ groups = regex.search( # type: ignore\n+ str_num + luhn_checksum(str_num),\n+ ).groups()\n card = ' '.join(groups)\n return card\n", "issue": "Fix mypy issues\nThere are several things to consider:\r\n\r\n1. Fixing bugs like this one: https://travis-ci.org/lk-geimfari/mimesis/jobs/361128185#L600\r\n2. Adding new options to `mypy` to make it stricter: https://github.com/wemake-services/wemake-django-template/blob/master/%7B%7Bcookiecutter.project_name%7D%7D/setup.cfg#L67\r\n3. Add `tests` folder to be checked by `mypy` (not only `mimesis/` folder is checked)\r\n\r\nI can do it, if @lk-geimfari does not have anything to add/comment.\n", "before_files": [{"content": "\"\"\"Provides data related to payment.\"\"\"\n\nimport re\nimport string\nfrom typing import Optional\n\nfrom mimesis.data import CREDIT_CARD_NETWORKS\nfrom mimesis.enums import CardType, Gender\nfrom mimesis.exceptions import NonEnumerableError\nfrom mimesis.helpers import get_random_item\nfrom mimesis.providers.base import BaseDataProvider\nfrom mimesis.providers.person import Person\nfrom mimesis.utils import luhn_checksum\n\n__all__ = ['Payment']\n\n\nclass Payment(BaseDataProvider):\n \"\"\"Class that provides data related to payments.\"\"\"\n\n def __init__(self, *args, **kwargs) -> None:\n \"\"\"Initialize attributes.\n\n :param args: Arguments.\n :param kwargs: Keyword arguments.\n \"\"\"\n super().__init__(*args, **kwargs)\n self.__person = Person('en', seed=self.seed)\n\n def cid(self) -> int:\n \"\"\"Generate a random CID.\n\n :return: CID code.\n\n :Example:\n 7452\n \"\"\"\n return self.random.randint(1000, 9999)\n\n def paypal(self) -> str:\n \"\"\"Generate a random PayPal account.\n\n :return: Email of PapPal user.\n\n :Example:\n [email protected]\n \"\"\"\n return self.__person.email()\n\n def bitcoin_address(self) -> str:\n \"\"\"Generate a random bitcoin address.\n\n :return: Bitcoin address.\n\n :Example:\n 3EktnHQD7RiAE6uzMj2ZifT9YgRrkSgzQX\n \"\"\"\n type_ = self.random.choice(['1', '3'])\n letters = string.ascii_letters + string.digits\n return type_ + ''.join(\n self.random.choice(letters) for _ in range(33))\n\n def ethereum_address(self) -> str:\n \"\"\"Generate a random Ethereum address.\n\n .. Note: The address will look like Ethereum address,\n but keep in mind that it is not the valid address.\n\n :return: Ethereum address.\n\n :Example:\n 0xe8ece9e6ff7dba52d4c07d37418036a89af9698d\n \"\"\"\n bits = self.random.getrandbits(160)\n address = bits.to_bytes(20, byteorder='big')\n return '0x' + address.hex()\n\n def credit_card_network(self) -> str:\n \"\"\"Generate a random credit card network.\n\n :return: Credit card network\n\n :Example:\n MasterCard\n \"\"\"\n return self.random.choice(CREDIT_CARD_NETWORKS)\n\n def credit_card_number(self, card_type: Optional[CardType] = None) -> str:\n \"\"\"Generate a random credit card number.\n\n :param card_type: Issuing Network. 
Default is Visa.\n :return: Credit card number.\n :raises NotImplementedError: if cart_type is not supported.\n\n :Example:\n 4455 5299 1152 2450\n \"\"\"\n length = 16\n regex = re.compile('(\\d{4})(\\d{4})(\\d{4})(\\d{4})')\n\n if card_type is None:\n card_type = get_random_item(CardType, rnd=self.random)\n\n if card_type == CardType.VISA:\n number = self.random.randint(4000, 4999)\n elif card_type == CardType.MASTER_CARD:\n number = self.random.choice([\n self.random.randint(2221, 2720),\n self.random.randint(5100, 5500),\n ])\n elif card_type == CardType.AMERICAN_EXPRESS:\n number = self.random.choice([34, 37])\n length = 15\n regex = re.compile('(\\d{4})(\\d{6})(\\d{5})')\n else:\n raise NonEnumerableError(CardType)\n\n str_num = str(number)\n while len(str_num) < length - 1:\n str_num += self.random.choice(string.digits)\n\n groups = regex.search(str_num + luhn_checksum(str_num)).groups()\n card = ' '.join(groups)\n return card\n\n def credit_card_expiration_date(self, minimum: int = 16,\n maximum: int = 25) -> str:\n \"\"\"Generate a random expiration date for credit card.\n\n :param minimum: Date of issue.\n :param maximum: Maximum of expiration_date.\n :return: Expiration date of credit card.\n\n :Example:\n 03/19.\n \"\"\"\n month = self.random.randint(1, 12)\n year = self.random.randint(minimum, maximum)\n return '{0:02d}/{1}'.format(month, year)\n\n def cvv(self) -> int:\n \"\"\"Generate a random CVV.\n\n :return: CVV code.\n\n :Example:\n 324\n \"\"\"\n return self.random.randint(100, 999)\n\n def credit_card_owner(self, gender: Optional[Gender] = None) -> dict:\n \"\"\"Generate credit card owner.\n\n :param gender: Gender of credit card owner.\n :type gender: Gender's enum object.\n :return:\n \"\"\"\n owner = {\n 'credit_card': self.credit_card_number(),\n 'expiration_date': self.credit_card_expiration_date(),\n 'owner': self.__person.full_name(gender=gender).upper(),\n }\n return owner\n", "path": "mimesis/providers/payment.py"}], "after_files": [{"content": "\"\"\"Provides data related to payment.\"\"\"\n\nimport re\nimport string\nfrom typing import Optional\n\nfrom mimesis.data import CREDIT_CARD_NETWORKS\nfrom mimesis.enums import CardType, Gender\nfrom mimesis.exceptions import NonEnumerableError\nfrom mimesis.helpers import get_random_item\nfrom mimesis.providers.base import BaseDataProvider\nfrom mimesis.providers.person import Person\nfrom mimesis.utils import luhn_checksum\n\n__all__ = ['Payment']\n\n\nclass Payment(BaseDataProvider):\n \"\"\"Class that provides data related to payments.\"\"\"\n\n def __init__(self, *args, **kwargs) -> None:\n \"\"\"Initialize attributes.\n\n :param args: Arguments.\n :param kwargs: Keyword arguments.\n \"\"\"\n super().__init__(*args, **kwargs)\n self.__person = Person('en', seed=self.seed)\n\n def cid(self) -> int:\n \"\"\"Generate a random CID.\n\n :return: CID code.\n\n :Example:\n 7452\n \"\"\"\n return self.random.randint(1000, 9999)\n\n def paypal(self) -> str:\n \"\"\"Generate a random PayPal account.\n\n :return: Email of PapPal user.\n\n :Example:\n [email protected]\n \"\"\"\n return self.__person.email()\n\n def bitcoin_address(self) -> str:\n \"\"\"Generate a random bitcoin address.\n\n :return: Bitcoin address.\n\n :Example:\n 3EktnHQD7RiAE6uzMj2ZifT9YgRrkSgzQX\n \"\"\"\n type_ = self.random.choice(['1', '3'])\n letters = string.ascii_letters + string.digits\n return type_ + ''.join(\n self.random.choice(letters) for _ in range(33))\n\n def ethereum_address(self) -> str:\n \"\"\"Generate a random Ethereum address.\n\n 
.. Note: The address will look like Ethereum address,\n but keep in mind that it is not the valid address.\n\n :return: Ethereum address.\n\n :Example:\n 0xe8ece9e6ff7dba52d4c07d37418036a89af9698d\n \"\"\"\n bits = self.random.getrandbits(160)\n address = bits.to_bytes(20, byteorder='big')\n return '0x' + address.hex()\n\n def credit_card_network(self) -> str:\n \"\"\"Generate a random credit card network.\n\n :return: Credit card network\n\n :Example:\n MasterCard\n \"\"\"\n return self.random.choice(CREDIT_CARD_NETWORKS)\n\n def credit_card_number(self, card_type: Optional[CardType] = None) -> str:\n \"\"\"Generate a random credit card number.\n\n :param card_type: Issuing Network. Default is Visa.\n :return: Credit card number.\n :raises NotImplementedError: if cart_type is not supported.\n\n :Example:\n 4455 5299 1152 2450\n \"\"\"\n length = 16\n regex = re.compile('(\\d{4})(\\d{4})(\\d{4})(\\d{4})')\n\n if card_type is None:\n card_type = get_random_item(CardType, rnd=self.random)\n\n if card_type == CardType.VISA:\n number = self.random.randint(4000, 4999)\n elif card_type == CardType.MASTER_CARD:\n number = self.random.choice([\n self.random.randint(2221, 2720),\n self.random.randint(5100, 5500),\n ])\n elif card_type == CardType.AMERICAN_EXPRESS:\n number = self.random.choice([34, 37])\n length = 15\n regex = re.compile('(\\d{4})(\\d{6})(\\d{5})')\n else:\n raise NonEnumerableError(CardType)\n\n str_num = str(number)\n while len(str_num) < length - 1:\n str_num += self.random.choice(string.digits)\n\n groups = regex.search( # type: ignore\n str_num + luhn_checksum(str_num),\n ).groups()\n card = ' '.join(groups)\n return card\n\n def credit_card_expiration_date(self, minimum: int = 16,\n maximum: int = 25) -> str:\n \"\"\"Generate a random expiration date for credit card.\n\n :param minimum: Date of issue.\n :param maximum: Maximum of expiration_date.\n :return: Expiration date of credit card.\n\n :Example:\n 03/19.\n \"\"\"\n month = self.random.randint(1, 12)\n year = self.random.randint(minimum, maximum)\n return '{0:02d}/{1}'.format(month, year)\n\n def cvv(self) -> int:\n \"\"\"Generate a random CVV.\n\n :return: CVV code.\n\n :Example:\n 324\n \"\"\"\n return self.random.randint(100, 999)\n\n def credit_card_owner(self, gender: Optional[Gender] = None) -> dict:\n \"\"\"Generate credit card owner.\n\n :param gender: Gender of credit card owner.\n :type gender: Gender's enum object.\n :return:\n \"\"\"\n owner = {\n 'credit_card': self.credit_card_number(),\n 'expiration_date': self.credit_card_expiration_date(),\n 'owner': self.__person.full_name(gender=gender).upper(),\n }\n return owner\n", "path": "mimesis/providers/payment.py"}]} |
gh_patches_debug_1110 | rasdani/github-patches | git_diff | scikit-image__scikit-image-6692 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
RAG does not correctly merge weights
## Description
Hi there,
I have a question about the example of merging RAG 1. I just wondering how does the parameters are passed to the identified function max_edge. For example, I wanna merge the node of 1 and 3, as far as I concerned the ‘src’ and ‘dst’ are supposed to be 1 and 3, however, the result is 1 and 5. Moreover, after merging these 2 nodes, if we choose the max weights, the result should be 40 and 20 but we just got 40 and 10. Obviously there is some problems about this part. Here is the link of the [example code](https://scikit-image.org/docs/dev/auto_examples/segmentation/plot_rag.html#sphx-glr-auto-examples-segmentation-plot-rag-py) and the link from the [foum](https://forum.image.sc/t/question-about-the-example-of-merging-rag-from-the-tutorial/51946).
--- END ISSUE ---
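To make the numbers in the report easier to follow, here is a minimal reproduction sketch that is not part of the original issue: the toy graph and its weights are invented so that they land on the same 40/20 vs 40/10 values, the `max_edge` callback mirrors the maximum-weight callback style from the linked gallery example, and the import assumes a scikit-image version that exposes `RAG` under `skimage.graph` (older releases ship it as `skimage.future.graph.RAG`).

```python
import numpy as np
from skimage.graph import RAG

def max_edge(g, src, dst, n):
    """Keep the larger of the two weights incident on neighbor ``n``."""
    default = {'weight': -np.inf}
    w1 = g[n].get(src, default)['weight']
    w2 = g[n].get(dst, default)['weight']
    print(f"callback received src={src}, dst={dst}")  # reported: dst shows up as 5, not 3
    return {'weight': max(w1, w2)}

rag = RAG()
rag.add_edge(1, 2, weight=10)
rag.add_edge(2, 3, weight=20)
rag.add_edge(1, 4, weight=40)
rag.nodes[1]['labels'] = [1]   # merge_nodes expects a 'labels' list on both nodes
rag.nodes[3]['labels'] = [3]

# Merging 1 and 3 out of place creates node 5 (max_id + 1). The buggy code hands
# that brand-new id to the callback instead of dst=3, so the weight 20 on edge
# (2, 3) is never consulted and the merged edge to node 2 keeps 10.
rag.merge_nodes(1, 3, weight_func=max_edge, in_place=False)
print(sorted(d['weight'] for _, _, d in rag.edges(data=True)))  # [10, 40] instead of [20, 40]
```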
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `skimage/graph/_rag.py`
Content:
```
1 import networkx as nx
2 import numpy as np
3 from scipy import ndimage as ndi
4 from scipy import sparse
5 import math
6
7 from .. import measure, segmentation, util, color
8 from .._shared.version_requirements import require
9
10
11 def _edge_generator_from_csr(csr_matrix):
12 """Yield weighted edge triples for use by NetworkX from a CSR matrix.
13
14 This function is a straight rewrite of
15 `networkx.convert_matrix._csr_gen_triples`. Since that is a private
16 function, it is safer to include our own here.
17
18 Parameters
19 ----------
20 csr_matrix : scipy.sparse.csr_matrix
21 The input matrix. An edge (i, j, w) will be yielded if there is a
22 data value for coordinates (i, j) in the matrix, even if that value
23 is 0.
24
25 Yields
26 ------
27 i, j, w : (int, int, float) tuples
28 Each value `w` in the matrix along with its coordinates (i, j).
29
30 Examples
31 --------
32
33 >>> dense = np.eye(2, dtype=float)
34 >>> csr = sparse.csr_matrix(dense)
35 >>> edges = _edge_generator_from_csr(csr)
36 >>> list(edges)
37 [(0, 0, 1.0), (1, 1, 1.0)]
38 """
39 nrows = csr_matrix.shape[0]
40 values = csr_matrix.data
41 indptr = csr_matrix.indptr
42 col_indices = csr_matrix.indices
43 for i in range(nrows):
44 for j in range(indptr[i], indptr[i + 1]):
45 yield i, col_indices[j], values[j]
46
47
48 def min_weight(graph, src, dst, n):
49 """Callback to handle merging nodes by choosing minimum weight.
50
51 Returns a dictionary with `"weight"` set as either the weight between
52 (`src`, `n`) or (`dst`, `n`) in `graph` or the minimum of the two when
53 both exist.
54
55 Parameters
56 ----------
57 graph : RAG
58 The graph under consideration.
59 src, dst : int
60         The vertices in `graph` to be merged.
61 n : int
62 A neighbor of `src` or `dst` or both.
63
64 Returns
65 -------
66 data : dict
67 A dict with the `"weight"` attribute set the weight between
68 (`src`, `n`) or (`dst`, `n`) in `graph` or the minimum of the two when
69 both exist.
70
71 """
72
73 # cover the cases where n only has edge to either `src` or `dst`
74 default = {'weight': np.inf}
75 w1 = graph[n].get(src, default)['weight']
76 w2 = graph[n].get(dst, default)['weight']
77 return {'weight': min(w1, w2)}
78
79
80 def _add_edge_filter(values, graph):
81 """Create edge in `graph` between central element of `values` and the rest.
82
83 Add an edge between the middle element in `values` and
84 all other elements of `values` into `graph`. ``values[len(values) // 2]``
85 is expected to be the central value of the footprint used.
86
87 Parameters
88 ----------
89 values : array
90 The array to process.
91 graph : RAG
92 The graph to add edges in.
93
94 Returns
95 -------
96 0 : float
97 Always returns 0. The return value is required so that `generic_filter`
98 can put it in the output array, but it is ignored by this filter.
99 """
100 values = values.astype(int)
101 center = values[len(values) // 2]
102 for value in values:
103 if value != center and not graph.has_edge(center, value):
104 graph.add_edge(center, value)
105 return 0.
106
107
108 class RAG(nx.Graph):
109
110 """
111 The Region Adjacency Graph (RAG) of an image, subclasses
112 `networx.Graph <http://networkx.github.io/documentation/latest/reference/classes/graph.html>`_
113
114 Parameters
115 ----------
116 label_image : array of int
117 An initial segmentation, with each region labeled as a different
118 integer. Every unique value in ``label_image`` will correspond to
119 a node in the graph.
120 connectivity : int in {1, ..., ``label_image.ndim``}, optional
121 The connectivity between pixels in ``label_image``. For a 2D image,
122 a connectivity of 1 corresponds to immediate neighbors up, down,
123 left, and right, while a connectivity of 2 also includes diagonal
124 neighbors. See `scipy.ndimage.generate_binary_structure`.
125 data : networkx Graph specification, optional
126 Initial or additional edges to pass to the NetworkX Graph
127 constructor. See `networkx.Graph`. Valid edge specifications
128 include edge list (list of tuples), NumPy arrays, and SciPy
129 sparse matrices.
130 **attr : keyword arguments, optional
131 Additional attributes to add to the graph.
132 """
133
134 def __init__(self, label_image=None, connectivity=1, data=None, **attr):
135
136 super().__init__(data, **attr)
137 if self.number_of_nodes() == 0:
138 self.max_id = 0
139 else:
140 self.max_id = max(self.nodes())
141
142 if label_image is not None:
143 fp = ndi.generate_binary_structure(label_image.ndim, connectivity)
144 # In the next ``ndi.generic_filter`` function, the kwarg
145 # ``output`` is used to provide a strided array with a single
146 # 64-bit floating point number, to which the function repeatedly
147 # writes. This is done because even if we don't care about the
148 # output, without this, a float array of the same shape as the
149 # input image will be created and that could be expensive in
150 # memory consumption.
151 output = np.broadcast_to(1., label_image.shape)
152 output.setflags(write=True)
153 ndi.generic_filter(
154 label_image,
155 function=_add_edge_filter,
156 footprint=fp,
157 mode='nearest',
158 output=output,
159 extra_arguments=(self,))
160
161 def merge_nodes(self, src, dst, weight_func=min_weight, in_place=True,
162 extra_arguments=[], extra_keywords={}):
163 """Merge node `src` and `dst`.
164
165 The new combined node is adjacent to all the neighbors of `src`
166 and `dst`. `weight_func` is called to decide the weight of edges
167 incident on the new node.
168
169 Parameters
170 ----------
171 src, dst : int
172 Nodes to be merged.
173 weight_func : callable, optional
174 Function to decide the attributes of edges incident on the new
175 node. For each neighbor `n` for `src and `dst`, `weight_func` will
176 be called as follows: `weight_func(src, dst, n, *extra_arguments,
177 **extra_keywords)`. `src`, `dst` and `n` are IDs of vertices in the
178 RAG object which is in turn a subclass of `networkx.Graph`. It is
179 expected to return a dict of attributes of the resulting edge.
180 in_place : bool, optional
181 If set to `True`, the merged node has the id `dst`, else merged
182 node has a new id which is returned.
183 extra_arguments : sequence, optional
184 The sequence of extra positional arguments passed to
185 `weight_func`.
186 extra_keywords : dictionary, optional
187 The dict of keyword arguments passed to the `weight_func`.
188
189 Returns
190 -------
191 id : int
192 The id of the new node.
193
194 Notes
195 -----
196 If `in_place` is `False` the resulting node has a new id, rather than
197 `dst`.
198 """
199 src_nbrs = set(self.neighbors(src))
200 dst_nbrs = set(self.neighbors(dst))
201 neighbors = (src_nbrs | dst_nbrs) - {src, dst}
202
203 if in_place:
204 new = dst
205 else:
206 new = self.next_id()
207 self.add_node(new)
208
209 for neighbor in neighbors:
210 data = weight_func(self, src, new, neighbor, *extra_arguments,
211 **extra_keywords)
212 self.add_edge(neighbor, new, attr_dict=data)
213
214 self.nodes[new]['labels'] = (self.nodes[src]['labels'] +
215 self.nodes[dst]['labels'])
216 self.remove_node(src)
217
218 if not in_place:
219 self.remove_node(dst)
220
221 return new
222
223 def add_node(self, n, attr_dict=None, **attr):
224 """Add node `n` while updating the maximum node id.
225
226 .. seealso:: :func:`networkx.Graph.add_node`."""
227 if attr_dict is None: # compatibility with old networkx
228 attr_dict = attr
229 else:
230 attr_dict.update(attr)
231 super().add_node(n, **attr_dict)
232 self.max_id = max(n, self.max_id)
233
234 def add_edge(self, u, v, attr_dict=None, **attr):
235 """Add an edge between `u` and `v` while updating max node id.
236
237 .. seealso:: :func:`networkx.Graph.add_edge`."""
238 if attr_dict is None: # compatibility with old networkx
239 attr_dict = attr
240 else:
241 attr_dict.update(attr)
242 super().add_edge(u, v, **attr_dict)
243 self.max_id = max(u, v, self.max_id)
244
245 def copy(self):
246 """Copy the graph with its max node id.
247
248 .. seealso:: :func:`networkx.Graph.copy`."""
249 g = super().copy()
250 g.max_id = self.max_id
251 return g
252
253 def fresh_copy(self):
254 """Return a fresh copy graph with the same data structure.
255
256 A fresh copy has no nodes, edges or graph attributes. It is
257 the same data structure as the current graph. This method is
258 typically used to create an empty version of the graph.
259
260 This is required when subclassing Graph with networkx v2 and
261 does not cause problems for v1. Here is more detail from
262 the network migrating from 1.x to 2.x document::
263
264 With the new GraphViews (SubGraph, ReversedGraph, etc)
265 you can't assume that ``G.__class__()`` will create a new
266 instance of the same graph type as ``G``. In fact, the
267 call signature for ``__class__`` differs depending on
268 whether ``G`` is a view or a base class. For v2.x you
269 should use ``G.fresh_copy()`` to create a null graph of
270 the correct type---ready to fill with nodes and edges.
271
272 """
273 return RAG()
274
275 def next_id(self):
276 """Returns the `id` for the new node to be inserted.
277
278 The current implementation returns one more than the maximum `id`.
279
280 Returns
281 -------
282 id : int
283 The `id` of the new node to be inserted.
284 """
285 return self.max_id + 1
286
287 def _add_node_silent(self, n):
288 """Add node `n` without updating the maximum node id.
289
290 This is a convenience method used internally.
291
292 .. seealso:: :func:`networkx.Graph.add_node`."""
293 super().add_node(n)
294
295
296 def rag_mean_color(image, labels, connectivity=2, mode='distance',
297 sigma=255.0):
298 """Compute the Region Adjacency Graph using mean colors.
299
300 Given an image and its initial segmentation, this method constructs the
301 corresponding Region Adjacency Graph (RAG). Each node in the RAG
302 represents a set of pixels within `image` with the same label in `labels`.
303 The weight between two adjacent regions represents how similar or
304 dissimilar two regions are depending on the `mode` parameter.
305
306 Parameters
307 ----------
308 image : ndarray, shape(M, N, [..., P,] 3)
309 Input image.
310 labels : ndarray, shape(M, N, [..., P])
311 The labelled image. This should have one dimension less than
312 `image`. If `image` has dimensions `(M, N, 3)` `labels` should have
313 dimensions `(M, N)`.
314 connectivity : int, optional
315 Pixels with a squared distance less than `connectivity` from each other
316 are considered adjacent. It can range from 1 to `labels.ndim`. Its
317 behavior is the same as `connectivity` parameter in
318 ``scipy.ndimage.generate_binary_structure``.
319 mode : {'distance', 'similarity'}, optional
320 The strategy to assign edge weights.
321
322 'distance' : The weight between two adjacent regions is the
323 :math:`|c_1 - c_2|`, where :math:`c_1` and :math:`c_2` are the mean
324 colors of the two regions. It represents the Euclidean distance in
325 their average color.
326
327 'similarity' : The weight between two adjacent is
328 :math:`e^{-d^2/sigma}` where :math:`d=|c_1 - c_2|`, where
329 :math:`c_1` and :math:`c_2` are the mean colors of the two regions.
330 It represents how similar two regions are.
331 sigma : float, optional
332 Used for computation when `mode` is "similarity". It governs how
333 close to each other two colors should be, for their corresponding edge
334 weight to be significant. A very large value of `sigma` could make
335 any two colors behave as though they were similar.
336
337 Returns
338 -------
339 out : RAG
340 The region adjacency graph.
341
342 Examples
343 --------
344 >>> from skimage import data, segmentation, graph
345 >>> img = data.astronaut()
346 >>> labels = segmentation.slic(img)
347 >>> rag = graph.rag_mean_color(img, labels)
348
349 References
350 ----------
351 .. [1] Alain Tremeau and Philippe Colantoni
352 "Regions Adjacency Graph Applied To Color Image Segmentation"
353 :DOI:`10.1109/83.841950`
354 """
355 graph = RAG(labels, connectivity=connectivity)
356
357 for n in graph:
358 graph.nodes[n].update({'labels': [n],
359 'pixel count': 0,
360 'total color': np.array([0, 0, 0],
361 dtype=np.float64)})
362
363 for index in np.ndindex(labels.shape):
364 current = labels[index]
365 graph.nodes[current]['pixel count'] += 1
366 graph.nodes[current]['total color'] += image[index]
367
368 for n in graph:
369 graph.nodes[n]['mean color'] = (graph.nodes[n]['total color'] /
370 graph.nodes[n]['pixel count'])
371
372 for x, y, d in graph.edges(data=True):
373 diff = graph.nodes[x]['mean color'] - graph.nodes[y]['mean color']
374 diff = np.linalg.norm(diff)
375 if mode == 'similarity':
376 d['weight'] = math.e ** (-(diff ** 2) / sigma)
377 elif mode == 'distance':
378 d['weight'] = diff
379 else:
380 raise ValueError(f"The mode '{mode}' is not recognised")
381
382 return graph
383
384
385 def rag_boundary(labels, edge_map, connectivity=2):
386     """ Compute RAG based on region boundaries
387
388 Given an image's initial segmentation and its edge map this method
389 constructs the corresponding Region Adjacency Graph (RAG). Each node in the
390 RAG represents a set of pixels within the image with the same label in
391 `labels`. The weight between two adjacent regions is the average value
392 in `edge_map` along their boundary.
393
394 labels : ndarray
395 The labelled image.
396 edge_map : ndarray
397 This should have the same shape as that of `labels`. For all pixels
398 along the boundary between 2 adjacent regions, the average value of the
399 corresponding pixels in `edge_map` is the edge weight between them.
400 connectivity : int, optional
401 Pixels with a squared distance less than `connectivity` from each other
402 are considered adjacent. It can range from 1 to `labels.ndim`. Its
403 behavior is the same as `connectivity` parameter in
404 `scipy.ndimage.generate_binary_structure`.
405
406 Examples
407 --------
408 >>> from skimage import data, segmentation, filters, color, graph
409 >>> img = data.chelsea()
410 >>> labels = segmentation.slic(img)
411 >>> edge_map = filters.sobel(color.rgb2gray(img))
412 >>> rag = graph.rag_boundary(labels, edge_map)
413
414 """
415
416 conn = ndi.generate_binary_structure(labels.ndim, connectivity)
417 eroded = ndi.grey_erosion(labels, footprint=conn)
418 dilated = ndi.grey_dilation(labels, footprint=conn)
419 boundaries0 = (eroded != labels)
420 boundaries1 = (dilated != labels)
421 labels_small = np.concatenate((eroded[boundaries0], labels[boundaries1]))
422 labels_large = np.concatenate((labels[boundaries0], dilated[boundaries1]))
423 n = np.max(labels_large) + 1
424
425 # use a dummy broadcast array as data for RAG
426 ones = np.broadcast_to(1., labels_small.shape)
427 count_matrix = sparse.coo_matrix((ones, (labels_small, labels_large)),
428 dtype=int, shape=(n, n)).tocsr()
429 data = np.concatenate((edge_map[boundaries0], edge_map[boundaries1]))
430
431 data_coo = sparse.coo_matrix((data, (labels_small, labels_large)))
432 graph_matrix = data_coo.tocsr()
433 graph_matrix.data /= count_matrix.data
434
435 rag = RAG()
436 rag.add_weighted_edges_from(_edge_generator_from_csr(graph_matrix),
437 weight='weight')
438 rag.add_weighted_edges_from(_edge_generator_from_csr(count_matrix),
439 weight='count')
440
441 for n in rag.nodes():
442 rag.nodes[n].update({'labels': [n]})
443
444 return rag
445
446
447 @require("matplotlib", ">=3.3")
448 def show_rag(labels, rag, image, border_color='black', edge_width=1.5,
449 edge_cmap='magma', img_cmap='bone', in_place=True, ax=None):
450 """Show a Region Adjacency Graph on an image.
451
452 Given a labelled image and its corresponding RAG, show the nodes and edges
453 of the RAG on the image with the specified colors. Edges are displayed between
454 the centroid of the 2 adjacent regions in the image.
455
456 Parameters
457 ----------
458 labels : ndarray, shape (M, N)
459 The labelled image.
460 rag : RAG
461 The Region Adjacency Graph.
462 image : ndarray, shape (M, N[, 3])
463 Input image. If `colormap` is `None`, the image should be in RGB
464 format.
465 border_color : color spec, optional
466 Color with which the borders between regions are drawn.
467 edge_width : float, optional
468 The thickness with which the RAG edges are drawn.
469 edge_cmap : :py:class:`matplotlib.colors.Colormap`, optional
470 Any matplotlib colormap with which the edges are drawn.
471 img_cmap : :py:class:`matplotlib.colors.Colormap`, optional
472 Any matplotlib colormap with which the image is draw. If set to `None`
473 the image is drawn as it is.
474 in_place : bool, optional
475 If set, the RAG is modified in place. For each node `n` the function
476 will set a new attribute ``rag.nodes[n]['centroid']``.
477 ax : :py:class:`matplotlib.axes.Axes`, optional
478 The axes to draw on. If not specified, new axes are created and drawn
479 on.
480
481 Returns
482 -------
483 lc : :py:class:`matplotlib.collections.LineCollection`
484 A collection of lines that represent the edges of the graph. It can be
485 passed to the :meth:`matplotlib.figure.Figure.colorbar` function.
486
487 Examples
488 --------
489 >>> from skimage import data, segmentation, graph
490 >>> import matplotlib.pyplot as plt
491 >>>
492 >>> img = data.coffee()
493 >>> labels = segmentation.slic(img)
494 >>> g = graph.rag_mean_color(img, labels)
495 >>> lc = graph.show_rag(labels, g, img)
496 >>> cbar = plt.colorbar(lc)
497 """
498 from matplotlib import colors
499 from matplotlib import pyplot as plt
500 from matplotlib.collections import LineCollection
501
502 if not in_place:
503 rag = rag.copy()
504
505 if ax is None:
506 fig, ax = plt.subplots()
507 out = util.img_as_float(image, force_copy=True)
508
509 if img_cmap is None:
510 if image.ndim < 3 or image.shape[2] not in [3, 4]:
511 msg = 'If colormap is `None`, an RGB or RGBA image should be given'
512 raise ValueError(msg)
513 # Ignore the alpha channel
514 out = image[:, :, :3]
515 else:
516 img_cmap = plt.get_cmap(img_cmap)
517 out = color.rgb2gray(image)
518 # Ignore the alpha channel
519 out = img_cmap(out)[:, :, :3]
520
521 edge_cmap = plt.get_cmap(edge_cmap)
522
523 # Handling the case where one node has multiple labels
524 # offset is 1 so that regionprops does not ignore 0
525 offset = 1
526 map_array = np.arange(labels.max() + 1)
527 for n, d in rag.nodes(data=True):
528 for label in d['labels']:
529 map_array[label] = offset
530 offset += 1
531
532 rag_labels = map_array[labels]
533 regions = measure.regionprops(rag_labels)
534
535 for (n, data), region in zip(rag.nodes(data=True), regions):
536 data['centroid'] = tuple(map(int, region['centroid']))
537
538 cc = colors.ColorConverter()
539 if border_color is not None:
540 border_color = cc.to_rgb(border_color)
541 out = segmentation.mark_boundaries(out, rag_labels, color=border_color)
542
543 ax.imshow(out)
544
545 # Defining the end points of the edges
546 # The tuple[::-1] syntax reverses a tuple as matplotlib uses (x,y)
547 # convention while skimage uses (row, column)
548 lines = [[rag.nodes[n1]['centroid'][::-1], rag.nodes[n2]['centroid'][::-1]]
549 for (n1, n2) in rag.edges()]
550
551 lc = LineCollection(lines, linewidths=edge_width, cmap=edge_cmap)
552 edge_weights = [d['weight'] for x, y, d in rag.edges(data=True)]
553 lc.set_array(np.array(edge_weights))
554 ax.add_collection(lc)
555
556 return lc
557
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/skimage/graph/_rag.py b/skimage/graph/_rag.py
--- a/skimage/graph/_rag.py
+++ b/skimage/graph/_rag.py
@@ -207,7 +207,7 @@
self.add_node(new)
for neighbor in neighbors:
- data = weight_func(self, src, new, neighbor, *extra_arguments,
+ data = weight_func(self, src, dst, neighbor, *extra_arguments,
**extra_keywords)
self.add_edge(neighbor, new, attr_dict=data)
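A note on scope for readers skimming the patch: when `in_place=True` (the default) `merge_nodes` sets `new = dst`, so the old call happened to pass the correct vertex anyway; the wrong weights only appear with `in_place=False`, where `new` is a freshly created id (`max_id + 1`) with no incident edges yet, which is exactly the situation the reporter describes (the callback seeing node 5 instead of node 3, and 10 surviving instead of 20). Passing `dst` restores the contract documented in the `weight_func` docstring.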
| {"golden_diff": "diff --git a/skimage/graph/_rag.py b/skimage/graph/_rag.py\n--- a/skimage/graph/_rag.py\n+++ b/skimage/graph/_rag.py\n@@ -207,7 +207,7 @@\n self.add_node(new)\n \n for neighbor in neighbors:\n- data = weight_func(self, src, new, neighbor, *extra_arguments,\n+ data = weight_func(self, src, dst, neighbor, *extra_arguments,\n **extra_keywords)\n self.add_edge(neighbor, new, attr_dict=data)\n", "issue": "RAG does not correctly merge weights\n## Description\r\n\r\nHi there,\r\nI have a question about the example of merging RAG 1. I just wondering how does the parameters are passed to the identified function max_edge. For example, I wanna merge the node of 1 and 3, as far as I concerned the \u2018src\u2019 and \u2018dst\u2019 are supposed to be 1 and 3, however, the result is 1 and 5. Moreover, after merging these 2 nodes, if we choose the max weights, the result should be 40 and 20 but we just got 40 and 10. Obviously there is some problems about this part. Here is the link of the [example code](https://scikit-image.org/docs/dev/auto_examples/segmentation/plot_rag.html#sphx-glr-auto-examples-segmentation-plot-rag-py) and the link from the [foum](https://forum.image.sc/t/question-about-the-example-of-merging-rag-from-the-tutorial/51946).\r\n\n", "before_files": [{"content": "import networkx as nx\nimport numpy as np\nfrom scipy import ndimage as ndi\nfrom scipy import sparse\nimport math\n\nfrom .. import measure, segmentation, util, color\nfrom .._shared.version_requirements import require\n\n\ndef _edge_generator_from_csr(csr_matrix):\n \"\"\"Yield weighted edge triples for use by NetworkX from a CSR matrix.\n\n This function is a straight rewrite of\n `networkx.convert_matrix._csr_gen_triples`. Since that is a private\n function, it is safer to include our own here.\n\n Parameters\n ----------\n csr_matrix : scipy.sparse.csr_matrix\n The input matrix. 
An edge (i, j, w) will be yielded if there is a\n data value for coordinates (i, j) in the matrix, even if that value\n is 0.\n\n Yields\n ------\n i, j, w : (int, int, float) tuples\n Each value `w` in the matrix along with its coordinates (i, j).\n\n Examples\n --------\n\n >>> dense = np.eye(2, dtype=float)\n >>> csr = sparse.csr_matrix(dense)\n >>> edges = _edge_generator_from_csr(csr)\n >>> list(edges)\n [(0, 0, 1.0), (1, 1, 1.0)]\n \"\"\"\n nrows = csr_matrix.shape[0]\n values = csr_matrix.data\n indptr = csr_matrix.indptr\n col_indices = csr_matrix.indices\n for i in range(nrows):\n for j in range(indptr[i], indptr[i + 1]):\n yield i, col_indices[j], values[j]\n\n\ndef min_weight(graph, src, dst, n):\n \"\"\"Callback to handle merging nodes by choosing minimum weight.\n\n Returns a dictionary with `\"weight\"` set as either the weight between\n (`src`, `n`) or (`dst`, `n`) in `graph` or the minimum of the two when\n both exist.\n\n Parameters\n ----------\n graph : RAG\n The graph under consideration.\n src, dst : int\n The verices in `graph` to be merged.\n n : int\n A neighbor of `src` or `dst` or both.\n\n Returns\n -------\n data : dict\n A dict with the `\"weight\"` attribute set the weight between\n (`src`, `n`) or (`dst`, `n`) in `graph` or the minimum of the two when\n both exist.\n\n \"\"\"\n\n # cover the cases where n only has edge to either `src` or `dst`\n default = {'weight': np.inf}\n w1 = graph[n].get(src, default)['weight']\n w2 = graph[n].get(dst, default)['weight']\n return {'weight': min(w1, w2)}\n\n\ndef _add_edge_filter(values, graph):\n \"\"\"Create edge in `graph` between central element of `values` and the rest.\n\n Add an edge between the middle element in `values` and\n all other elements of `values` into `graph`. ``values[len(values) // 2]``\n is expected to be the central value of the footprint used.\n\n Parameters\n ----------\n values : array\n The array to process.\n graph : RAG\n The graph to add edges in.\n\n Returns\n -------\n 0 : float\n Always returns 0. The return value is required so that `generic_filter`\n can put it in the output array, but it is ignored by this filter.\n \"\"\"\n values = values.astype(int)\n center = values[len(values) // 2]\n for value in values:\n if value != center and not graph.has_edge(center, value):\n graph.add_edge(center, value)\n return 0.\n\n\nclass RAG(nx.Graph):\n\n \"\"\"\n The Region Adjacency Graph (RAG) of an image, subclasses\n `networx.Graph <http://networkx.github.io/documentation/latest/reference/classes/graph.html>`_\n\n Parameters\n ----------\n label_image : array of int\n An initial segmentation, with each region labeled as a different\n integer. Every unique value in ``label_image`` will correspond to\n a node in the graph.\n connectivity : int in {1, ..., ``label_image.ndim``}, optional\n The connectivity between pixels in ``label_image``. For a 2D image,\n a connectivity of 1 corresponds to immediate neighbors up, down,\n left, and right, while a connectivity of 2 also includes diagonal\n neighbors. See `scipy.ndimage.generate_binary_structure`.\n data : networkx Graph specification, optional\n Initial or additional edges to pass to the NetworkX Graph\n constructor. See `networkx.Graph`. 
Valid edge specifications\n include edge list (list of tuples), NumPy arrays, and SciPy\n sparse matrices.\n **attr : keyword arguments, optional\n Additional attributes to add to the graph.\n \"\"\"\n\n def __init__(self, label_image=None, connectivity=1, data=None, **attr):\n\n super().__init__(data, **attr)\n if self.number_of_nodes() == 0:\n self.max_id = 0\n else:\n self.max_id = max(self.nodes())\n\n if label_image is not None:\n fp = ndi.generate_binary_structure(label_image.ndim, connectivity)\n # In the next ``ndi.generic_filter`` function, the kwarg\n # ``output`` is used to provide a strided array with a single\n # 64-bit floating point number, to which the function repeatedly\n # writes. This is done because even if we don't care about the\n # output, without this, a float array of the same shape as the\n # input image will be created and that could be expensive in\n # memory consumption.\n output = np.broadcast_to(1., label_image.shape)\n output.setflags(write=True)\n ndi.generic_filter(\n label_image,\n function=_add_edge_filter,\n footprint=fp,\n mode='nearest',\n output=output,\n extra_arguments=(self,))\n\n def merge_nodes(self, src, dst, weight_func=min_weight, in_place=True,\n extra_arguments=[], extra_keywords={}):\n \"\"\"Merge node `src` and `dst`.\n\n The new combined node is adjacent to all the neighbors of `src`\n and `dst`. `weight_func` is called to decide the weight of edges\n incident on the new node.\n\n Parameters\n ----------\n src, dst : int\n Nodes to be merged.\n weight_func : callable, optional\n Function to decide the attributes of edges incident on the new\n node. For each neighbor `n` for `src and `dst`, `weight_func` will\n be called as follows: `weight_func(src, dst, n, *extra_arguments,\n **extra_keywords)`. `src`, `dst` and `n` are IDs of vertices in the\n RAG object which is in turn a subclass of `networkx.Graph`. It is\n expected to return a dict of attributes of the resulting edge.\n in_place : bool, optional\n If set to `True`, the merged node has the id `dst`, else merged\n node has a new id which is returned.\n extra_arguments : sequence, optional\n The sequence of extra positional arguments passed to\n `weight_func`.\n extra_keywords : dictionary, optional\n The dict of keyword arguments passed to the `weight_func`.\n\n Returns\n -------\n id : int\n The id of the new node.\n\n Notes\n -----\n If `in_place` is `False` the resulting node has a new id, rather than\n `dst`.\n \"\"\"\n src_nbrs = set(self.neighbors(src))\n dst_nbrs = set(self.neighbors(dst))\n neighbors = (src_nbrs | dst_nbrs) - {src, dst}\n\n if in_place:\n new = dst\n else:\n new = self.next_id()\n self.add_node(new)\n\n for neighbor in neighbors:\n data = weight_func(self, src, new, neighbor, *extra_arguments,\n **extra_keywords)\n self.add_edge(neighbor, new, attr_dict=data)\n\n self.nodes[new]['labels'] = (self.nodes[src]['labels'] +\n self.nodes[dst]['labels'])\n self.remove_node(src)\n\n if not in_place:\n self.remove_node(dst)\n\n return new\n\n def add_node(self, n, attr_dict=None, **attr):\n \"\"\"Add node `n` while updating the maximum node id.\n\n .. seealso:: :func:`networkx.Graph.add_node`.\"\"\"\n if attr_dict is None: # compatibility with old networkx\n attr_dict = attr\n else:\n attr_dict.update(attr)\n super().add_node(n, **attr_dict)\n self.max_id = max(n, self.max_id)\n\n def add_edge(self, u, v, attr_dict=None, **attr):\n \"\"\"Add an edge between `u` and `v` while updating max node id.\n\n .. 
seealso:: :func:`networkx.Graph.add_edge`.\"\"\"\n if attr_dict is None: # compatibility with old networkx\n attr_dict = attr\n else:\n attr_dict.update(attr)\n super().add_edge(u, v, **attr_dict)\n self.max_id = max(u, v, self.max_id)\n\n def copy(self):\n \"\"\"Copy the graph with its max node id.\n\n .. seealso:: :func:`networkx.Graph.copy`.\"\"\"\n g = super().copy()\n g.max_id = self.max_id\n return g\n\n def fresh_copy(self):\n \"\"\"Return a fresh copy graph with the same data structure.\n\n A fresh copy has no nodes, edges or graph attributes. It is\n the same data structure as the current graph. This method is\n typically used to create an empty version of the graph.\n\n This is required when subclassing Graph with networkx v2 and\n does not cause problems for v1. Here is more detail from\n the network migrating from 1.x to 2.x document::\n\n With the new GraphViews (SubGraph, ReversedGraph, etc)\n you can't assume that ``G.__class__()`` will create a new\n instance of the same graph type as ``G``. In fact, the\n call signature for ``__class__`` differs depending on\n whether ``G`` is a view or a base class. For v2.x you\n should use ``G.fresh_copy()`` to create a null graph of\n the correct type---ready to fill with nodes and edges.\n\n \"\"\"\n return RAG()\n\n def next_id(self):\n \"\"\"Returns the `id` for the new node to be inserted.\n\n The current implementation returns one more than the maximum `id`.\n\n Returns\n -------\n id : int\n The `id` of the new node to be inserted.\n \"\"\"\n return self.max_id + 1\n\n def _add_node_silent(self, n):\n \"\"\"Add node `n` without updating the maximum node id.\n\n This is a convenience method used internally.\n\n .. seealso:: :func:`networkx.Graph.add_node`.\"\"\"\n super().add_node(n)\n\n\ndef rag_mean_color(image, labels, connectivity=2, mode='distance',\n sigma=255.0):\n \"\"\"Compute the Region Adjacency Graph using mean colors.\n\n Given an image and its initial segmentation, this method constructs the\n corresponding Region Adjacency Graph (RAG). Each node in the RAG\n represents a set of pixels within `image` with the same label in `labels`.\n The weight between two adjacent regions represents how similar or\n dissimilar two regions are depending on the `mode` parameter.\n\n Parameters\n ----------\n image : ndarray, shape(M, N, [..., P,] 3)\n Input image.\n labels : ndarray, shape(M, N, [..., P])\n The labelled image. This should have one dimension less than\n `image`. If `image` has dimensions `(M, N, 3)` `labels` should have\n dimensions `(M, N)`.\n connectivity : int, optional\n Pixels with a squared distance less than `connectivity` from each other\n are considered adjacent. It can range from 1 to `labels.ndim`. Its\n behavior is the same as `connectivity` parameter in\n ``scipy.ndimage.generate_binary_structure``.\n mode : {'distance', 'similarity'}, optional\n The strategy to assign edge weights.\n\n 'distance' : The weight between two adjacent regions is the\n :math:`|c_1 - c_2|`, where :math:`c_1` and :math:`c_2` are the mean\n colors of the two regions. It represents the Euclidean distance in\n their average color.\n\n 'similarity' : The weight between two adjacent is\n :math:`e^{-d^2/sigma}` where :math:`d=|c_1 - c_2|`, where\n :math:`c_1` and :math:`c_2` are the mean colors of the two regions.\n It represents how similar two regions are.\n sigma : float, optional\n Used for computation when `mode` is \"similarity\". 
It governs how\n close to each other two colors should be, for their corresponding edge\n weight to be significant. A very large value of `sigma` could make\n any two colors behave as though they were similar.\n\n Returns\n -------\n out : RAG\n The region adjacency graph.\n\n Examples\n --------\n >>> from skimage import data, segmentation, graph\n >>> img = data.astronaut()\n >>> labels = segmentation.slic(img)\n >>> rag = graph.rag_mean_color(img, labels)\n\n References\n ----------\n .. [1] Alain Tremeau and Philippe Colantoni\n \"Regions Adjacency Graph Applied To Color Image Segmentation\"\n :DOI:`10.1109/83.841950`\n \"\"\"\n graph = RAG(labels, connectivity=connectivity)\n\n for n in graph:\n graph.nodes[n].update({'labels': [n],\n 'pixel count': 0,\n 'total color': np.array([0, 0, 0],\n dtype=np.float64)})\n\n for index in np.ndindex(labels.shape):\n current = labels[index]\n graph.nodes[current]['pixel count'] += 1\n graph.nodes[current]['total color'] += image[index]\n\n for n in graph:\n graph.nodes[n]['mean color'] = (graph.nodes[n]['total color'] /\n graph.nodes[n]['pixel count'])\n\n for x, y, d in graph.edges(data=True):\n diff = graph.nodes[x]['mean color'] - graph.nodes[y]['mean color']\n diff = np.linalg.norm(diff)\n if mode == 'similarity':\n d['weight'] = math.e ** (-(diff ** 2) / sigma)\n elif mode == 'distance':\n d['weight'] = diff\n else:\n raise ValueError(f\"The mode '{mode}' is not recognised\")\n\n return graph\n\n\ndef rag_boundary(labels, edge_map, connectivity=2):\n \"\"\" Comouter RAG based on region boundaries\n\n Given an image's initial segmentation and its edge map this method\n constructs the corresponding Region Adjacency Graph (RAG). Each node in the\n RAG represents a set of pixels within the image with the same label in\n `labels`. The weight between two adjacent regions is the average value\n in `edge_map` along their boundary.\n\n labels : ndarray\n The labelled image.\n edge_map : ndarray\n This should have the same shape as that of `labels`. For all pixels\n along the boundary between 2 adjacent regions, the average value of the\n corresponding pixels in `edge_map` is the edge weight between them.\n connectivity : int, optional\n Pixels with a squared distance less than `connectivity` from each other\n are considered adjacent. It can range from 1 to `labels.ndim`. 
Its\n behavior is the same as `connectivity` parameter in\n `scipy.ndimage.generate_binary_structure`.\n\n Examples\n --------\n >>> from skimage import data, segmentation, filters, color, graph\n >>> img = data.chelsea()\n >>> labels = segmentation.slic(img)\n >>> edge_map = filters.sobel(color.rgb2gray(img))\n >>> rag = graph.rag_boundary(labels, edge_map)\n\n \"\"\"\n\n conn = ndi.generate_binary_structure(labels.ndim, connectivity)\n eroded = ndi.grey_erosion(labels, footprint=conn)\n dilated = ndi.grey_dilation(labels, footprint=conn)\n boundaries0 = (eroded != labels)\n boundaries1 = (dilated != labels)\n labels_small = np.concatenate((eroded[boundaries0], labels[boundaries1]))\n labels_large = np.concatenate((labels[boundaries0], dilated[boundaries1]))\n n = np.max(labels_large) + 1\n\n # use a dummy broadcast array as data for RAG\n ones = np.broadcast_to(1., labels_small.shape)\n count_matrix = sparse.coo_matrix((ones, (labels_small, labels_large)),\n dtype=int, shape=(n, n)).tocsr()\n data = np.concatenate((edge_map[boundaries0], edge_map[boundaries1]))\n\n data_coo = sparse.coo_matrix((data, (labels_small, labels_large)))\n graph_matrix = data_coo.tocsr()\n graph_matrix.data /= count_matrix.data\n\n rag = RAG()\n rag.add_weighted_edges_from(_edge_generator_from_csr(graph_matrix),\n weight='weight')\n rag.add_weighted_edges_from(_edge_generator_from_csr(count_matrix),\n weight='count')\n\n for n in rag.nodes():\n rag.nodes[n].update({'labels': [n]})\n\n return rag\n\n\n@require(\"matplotlib\", \">=3.3\")\ndef show_rag(labels, rag, image, border_color='black', edge_width=1.5,\n edge_cmap='magma', img_cmap='bone', in_place=True, ax=None):\n \"\"\"Show a Region Adjacency Graph on an image.\n\n Given a labelled image and its corresponding RAG, show the nodes and edges\n of the RAG on the image with the specified colors. Edges are displayed between\n the centroid of the 2 adjacent regions in the image.\n\n Parameters\n ----------\n labels : ndarray, shape (M, N)\n The labelled image.\n rag : RAG\n The Region Adjacency Graph.\n image : ndarray, shape (M, N[, 3])\n Input image. If `colormap` is `None`, the image should be in RGB\n format.\n border_color : color spec, optional\n Color with which the borders between regions are drawn.\n edge_width : float, optional\n The thickness with which the RAG edges are drawn.\n edge_cmap : :py:class:`matplotlib.colors.Colormap`, optional\n Any matplotlib colormap with which the edges are drawn.\n img_cmap : :py:class:`matplotlib.colors.Colormap`, optional\n Any matplotlib colormap with which the image is draw. If set to `None`\n the image is drawn as it is.\n in_place : bool, optional\n If set, the RAG is modified in place. For each node `n` the function\n will set a new attribute ``rag.nodes[n]['centroid']``.\n ax : :py:class:`matplotlib.axes.Axes`, optional\n The axes to draw on. If not specified, new axes are created and drawn\n on.\n\n Returns\n -------\n lc : :py:class:`matplotlib.collections.LineCollection`\n A collection of lines that represent the edges of the graph. 
It can be\n passed to the :meth:`matplotlib.figure.Figure.colorbar` function.\n\n Examples\n --------\n >>> from skimage import data, segmentation, graph\n >>> import matplotlib.pyplot as plt\n >>>\n >>> img = data.coffee()\n >>> labels = segmentation.slic(img)\n >>> g = graph.rag_mean_color(img, labels)\n >>> lc = graph.show_rag(labels, g, img)\n >>> cbar = plt.colorbar(lc)\n \"\"\"\n from matplotlib import colors\n from matplotlib import pyplot as plt\n from matplotlib.collections import LineCollection\n\n if not in_place:\n rag = rag.copy()\n\n if ax is None:\n fig, ax = plt.subplots()\n out = util.img_as_float(image, force_copy=True)\n\n if img_cmap is None:\n if image.ndim < 3 or image.shape[2] not in [3, 4]:\n msg = 'If colormap is `None`, an RGB or RGBA image should be given'\n raise ValueError(msg)\n # Ignore the alpha channel\n out = image[:, :, :3]\n else:\n img_cmap = plt.get_cmap(img_cmap)\n out = color.rgb2gray(image)\n # Ignore the alpha channel\n out = img_cmap(out)[:, :, :3]\n\n edge_cmap = plt.get_cmap(edge_cmap)\n\n # Handling the case where one node has multiple labels\n # offset is 1 so that regionprops does not ignore 0\n offset = 1\n map_array = np.arange(labels.max() + 1)\n for n, d in rag.nodes(data=True):\n for label in d['labels']:\n map_array[label] = offset\n offset += 1\n\n rag_labels = map_array[labels]\n regions = measure.regionprops(rag_labels)\n\n for (n, data), region in zip(rag.nodes(data=True), regions):\n data['centroid'] = tuple(map(int, region['centroid']))\n\n cc = colors.ColorConverter()\n if border_color is not None:\n border_color = cc.to_rgb(border_color)\n out = segmentation.mark_boundaries(out, rag_labels, color=border_color)\n\n ax.imshow(out)\n\n # Defining the end points of the edges\n # The tuple[::-1] syntax reverses a tuple as matplotlib uses (x,y)\n # convention while skimage uses (row, column)\n lines = [[rag.nodes[n1]['centroid'][::-1], rag.nodes[n2]['centroid'][::-1]]\n for (n1, n2) in rag.edges()]\n\n lc = LineCollection(lines, linewidths=edge_width, cmap=edge_cmap)\n edge_weights = [d['weight'] for x, y, d in rag.edges(data=True)]\n lc.set_array(np.array(edge_weights))\n ax.add_collection(lc)\n\n return lc\n", "path": "skimage/graph/_rag.py"}], "after_files": [{"content": "import networkx as nx\nimport numpy as np\nfrom scipy import ndimage as ndi\nfrom scipy import sparse\nimport math\n\nfrom .. import measure, segmentation, util, color\nfrom .._shared.version_requirements import require\n\n\ndef _edge_generator_from_csr(csr_matrix):\n \"\"\"Yield weighted edge triples for use by NetworkX from a CSR matrix.\n\n This function is a straight rewrite of\n `networkx.convert_matrix._csr_gen_triples`. Since that is a private\n function, it is safer to include our own here.\n\n Parameters\n ----------\n csr_matrix : scipy.sparse.csr_matrix\n The input matrix. 
An edge (i, j, w) will be yielded if there is a\n data value for coordinates (i, j) in the matrix, even if that value\n is 0.\n\n Yields\n ------\n i, j, w : (int, int, float) tuples\n Each value `w` in the matrix along with its coordinates (i, j).\n\n Examples\n --------\n\n >>> dense = np.eye(2, dtype=float)\n >>> csr = sparse.csr_matrix(dense)\n >>> edges = _edge_generator_from_csr(csr)\n >>> list(edges)\n [(0, 0, 1.0), (1, 1, 1.0)]\n \"\"\"\n nrows = csr_matrix.shape[0]\n values = csr_matrix.data\n indptr = csr_matrix.indptr\n col_indices = csr_matrix.indices\n for i in range(nrows):\n for j in range(indptr[i], indptr[i + 1]):\n yield i, col_indices[j], values[j]\n\n\ndef min_weight(graph, src, dst, n):\n \"\"\"Callback to handle merging nodes by choosing minimum weight.\n\n Returns a dictionary with `\"weight\"` set as either the weight between\n (`src`, `n`) or (`dst`, `n`) in `graph` or the minimum of the two when\n both exist.\n\n Parameters\n ----------\n graph : RAG\n The graph under consideration.\n src, dst : int\n The verices in `graph` to be merged.\n n : int\n A neighbor of `src` or `dst` or both.\n\n Returns\n -------\n data : dict\n A dict with the `\"weight\"` attribute set the weight between\n (`src`, `n`) or (`dst`, `n`) in `graph` or the minimum of the two when\n both exist.\n\n \"\"\"\n\n # cover the cases where n only has edge to either `src` or `dst`\n default = {'weight': np.inf}\n w1 = graph[n].get(src, default)['weight']\n w2 = graph[n].get(dst, default)['weight']\n return {'weight': min(w1, w2)}\n\n\ndef _add_edge_filter(values, graph):\n \"\"\"Create edge in `graph` between central element of `values` and the rest.\n\n Add an edge between the middle element in `values` and\n all other elements of `values` into `graph`. ``values[len(values) // 2]``\n is expected to be the central value of the footprint used.\n\n Parameters\n ----------\n values : array\n The array to process.\n graph : RAG\n The graph to add edges in.\n\n Returns\n -------\n 0 : float\n Always returns 0. The return value is required so that `generic_filter`\n can put it in the output array, but it is ignored by this filter.\n \"\"\"\n values = values.astype(int)\n center = values[len(values) // 2]\n for value in values:\n if value != center and not graph.has_edge(center, value):\n graph.add_edge(center, value)\n return 0.\n\n\nclass RAG(nx.Graph):\n\n \"\"\"\n The Region Adjacency Graph (RAG) of an image, subclasses\n `networx.Graph <http://networkx.github.io/documentation/latest/reference/classes/graph.html>`_\n\n Parameters\n ----------\n label_image : array of int\n An initial segmentation, with each region labeled as a different\n integer. Every unique value in ``label_image`` will correspond to\n a node in the graph.\n connectivity : int in {1, ..., ``label_image.ndim``}, optional\n The connectivity between pixels in ``label_image``. For a 2D image,\n a connectivity of 1 corresponds to immediate neighbors up, down,\n left, and right, while a connectivity of 2 also includes diagonal\n neighbors. See `scipy.ndimage.generate_binary_structure`.\n data : networkx Graph specification, optional\n Initial or additional edges to pass to the NetworkX Graph\n constructor. See `networkx.Graph`. 
Valid edge specifications\n include edge list (list of tuples), NumPy arrays, and SciPy\n sparse matrices.\n **attr : keyword arguments, optional\n Additional attributes to add to the graph.\n \"\"\"\n\n def __init__(self, label_image=None, connectivity=1, data=None, **attr):\n\n super().__init__(data, **attr)\n if self.number_of_nodes() == 0:\n self.max_id = 0\n else:\n self.max_id = max(self.nodes())\n\n if label_image is not None:\n fp = ndi.generate_binary_structure(label_image.ndim, connectivity)\n # In the next ``ndi.generic_filter`` function, the kwarg\n # ``output`` is used to provide a strided array with a single\n # 64-bit floating point number, to which the function repeatedly\n # writes. This is done because even if we don't care about the\n # output, without this, a float array of the same shape as the\n # input image will be created and that could be expensive in\n # memory consumption.\n output = np.broadcast_to(1., label_image.shape)\n output.setflags(write=True)\n ndi.generic_filter(\n label_image,\n function=_add_edge_filter,\n footprint=fp,\n mode='nearest',\n output=output,\n extra_arguments=(self,))\n\n def merge_nodes(self, src, dst, weight_func=min_weight, in_place=True,\n extra_arguments=[], extra_keywords={}):\n \"\"\"Merge node `src` and `dst`.\n\n The new combined node is adjacent to all the neighbors of `src`\n and `dst`. `weight_func` is called to decide the weight of edges\n incident on the new node.\n\n Parameters\n ----------\n src, dst : int\n Nodes to be merged.\n weight_func : callable, optional\n Function to decide the attributes of edges incident on the new\n node. For each neighbor `n` for `src and `dst`, `weight_func` will\n be called as follows: `weight_func(src, dst, n, *extra_arguments,\n **extra_keywords)`. `src`, `dst` and `n` are IDs of vertices in the\n RAG object which is in turn a subclass of `networkx.Graph`. It is\n expected to return a dict of attributes of the resulting edge.\n in_place : bool, optional\n If set to `True`, the merged node has the id `dst`, else merged\n node has a new id which is returned.\n extra_arguments : sequence, optional\n The sequence of extra positional arguments passed to\n `weight_func`.\n extra_keywords : dictionary, optional\n The dict of keyword arguments passed to the `weight_func`.\n\n Returns\n -------\n id : int\n The id of the new node.\n\n Notes\n -----\n If `in_place` is `False` the resulting node has a new id, rather than\n `dst`.\n \"\"\"\n src_nbrs = set(self.neighbors(src))\n dst_nbrs = set(self.neighbors(dst))\n neighbors = (src_nbrs | dst_nbrs) - {src, dst}\n\n if in_place:\n new = dst\n else:\n new = self.next_id()\n self.add_node(new)\n\n for neighbor in neighbors:\n data = weight_func(self, src, dst, neighbor, *extra_arguments,\n **extra_keywords)\n self.add_edge(neighbor, new, attr_dict=data)\n\n self.nodes[new]['labels'] = (self.nodes[src]['labels'] +\n self.nodes[dst]['labels'])\n self.remove_node(src)\n\n if not in_place:\n self.remove_node(dst)\n\n return new\n\n def add_node(self, n, attr_dict=None, **attr):\n \"\"\"Add node `n` while updating the maximum node id.\n\n .. seealso:: :func:`networkx.Graph.add_node`.\"\"\"\n if attr_dict is None: # compatibility with old networkx\n attr_dict = attr\n else:\n attr_dict.update(attr)\n super().add_node(n, **attr_dict)\n self.max_id = max(n, self.max_id)\n\n def add_edge(self, u, v, attr_dict=None, **attr):\n \"\"\"Add an edge between `u` and `v` while updating max node id.\n\n .. 
seealso:: :func:`networkx.Graph.add_edge`.\"\"\"\n if attr_dict is None: # compatibility with old networkx\n attr_dict = attr\n else:\n attr_dict.update(attr)\n super().add_edge(u, v, **attr_dict)\n self.max_id = max(u, v, self.max_id)\n\n def copy(self):\n \"\"\"Copy the graph with its max node id.\n\n .. seealso:: :func:`networkx.Graph.copy`.\"\"\"\n g = super().copy()\n g.max_id = self.max_id\n return g\n\n def fresh_copy(self):\n \"\"\"Return a fresh copy graph with the same data structure.\n\n A fresh copy has no nodes, edges or graph attributes. It is\n the same data structure as the current graph. This method is\n typically used to create an empty version of the graph.\n\n This is required when subclassing Graph with networkx v2 and\n does not cause problems for v1. Here is more detail from\n the network migrating from 1.x to 2.x document::\n\n With the new GraphViews (SubGraph, ReversedGraph, etc)\n you can't assume that ``G.__class__()`` will create a new\n instance of the same graph type as ``G``. In fact, the\n call signature for ``__class__`` differs depending on\n whether ``G`` is a view or a base class. For v2.x you\n should use ``G.fresh_copy()`` to create a null graph of\n the correct type---ready to fill with nodes and edges.\n\n \"\"\"\n return RAG()\n\n def next_id(self):\n \"\"\"Returns the `id` for the new node to be inserted.\n\n The current implementation returns one more than the maximum `id`.\n\n Returns\n -------\n id : int\n The `id` of the new node to be inserted.\n \"\"\"\n return self.max_id + 1\n\n def _add_node_silent(self, n):\n \"\"\"Add node `n` without updating the maximum node id.\n\n This is a convenience method used internally.\n\n .. seealso:: :func:`networkx.Graph.add_node`.\"\"\"\n super().add_node(n)\n\n\ndef rag_mean_color(image, labels, connectivity=2, mode='distance',\n sigma=255.0):\n \"\"\"Compute the Region Adjacency Graph using mean colors.\n\n Given an image and its initial segmentation, this method constructs the\n corresponding Region Adjacency Graph (RAG). Each node in the RAG\n represents a set of pixels within `image` with the same label in `labels`.\n The weight between two adjacent regions represents how similar or\n dissimilar two regions are depending on the `mode` parameter.\n\n Parameters\n ----------\n image : ndarray, shape(M, N, [..., P,] 3)\n Input image.\n labels : ndarray, shape(M, N, [..., P])\n The labelled image. This should have one dimension less than\n `image`. If `image` has dimensions `(M, N, 3)` `labels` should have\n dimensions `(M, N)`.\n connectivity : int, optional\n Pixels with a squared distance less than `connectivity` from each other\n are considered adjacent. It can range from 1 to `labels.ndim`. Its\n behavior is the same as `connectivity` parameter in\n ``scipy.ndimage.generate_binary_structure``.\n mode : {'distance', 'similarity'}, optional\n The strategy to assign edge weights.\n\n 'distance' : The weight between two adjacent regions is the\n :math:`|c_1 - c_2|`, where :math:`c_1` and :math:`c_2` are the mean\n colors of the two regions. It represents the Euclidean distance in\n their average color.\n\n 'similarity' : The weight between two adjacent is\n :math:`e^{-d^2/sigma}` where :math:`d=|c_1 - c_2|`, where\n :math:`c_1` and :math:`c_2` are the mean colors of the two regions.\n It represents how similar two regions are.\n sigma : float, optional\n Used for computation when `mode` is \"similarity\". 
It governs how\n close to each other two colors should be, for their corresponding edge\n weight to be significant. A very large value of `sigma` could make\n any two colors behave as though they were similar.\n\n Returns\n -------\n out : RAG\n The region adjacency graph.\n\n Examples\n --------\n >>> from skimage import data, segmentation, graph\n >>> img = data.astronaut()\n >>> labels = segmentation.slic(img)\n >>> rag = graph.rag_mean_color(img, labels)\n\n References\n ----------\n .. [1] Alain Tremeau and Philippe Colantoni\n \"Regions Adjacency Graph Applied To Color Image Segmentation\"\n :DOI:`10.1109/83.841950`\n \"\"\"\n graph = RAG(labels, connectivity=connectivity)\n\n for n in graph:\n graph.nodes[n].update({'labels': [n],\n 'pixel count': 0,\n 'total color': np.array([0, 0, 0],\n dtype=np.float64)})\n\n for index in np.ndindex(labels.shape):\n current = labels[index]\n graph.nodes[current]['pixel count'] += 1\n graph.nodes[current]['total color'] += image[index]\n\n for n in graph:\n graph.nodes[n]['mean color'] = (graph.nodes[n]['total color'] /\n graph.nodes[n]['pixel count'])\n\n for x, y, d in graph.edges(data=True):\n diff = graph.nodes[x]['mean color'] - graph.nodes[y]['mean color']\n diff = np.linalg.norm(diff)\n if mode == 'similarity':\n d['weight'] = math.e ** (-(diff ** 2) / sigma)\n elif mode == 'distance':\n d['weight'] = diff\n else:\n raise ValueError(f\"The mode '{mode}' is not recognised\")\n\n return graph\n\n\ndef rag_boundary(labels, edge_map, connectivity=2):\n \"\"\" Comouter RAG based on region boundaries\n\n Given an image's initial segmentation and its edge map this method\n constructs the corresponding Region Adjacency Graph (RAG). Each node in the\n RAG represents a set of pixels within the image with the same label in\n `labels`. The weight between two adjacent regions is the average value\n in `edge_map` along their boundary.\n\n labels : ndarray\n The labelled image.\n edge_map : ndarray\n This should have the same shape as that of `labels`. For all pixels\n along the boundary between 2 adjacent regions, the average value of the\n corresponding pixels in `edge_map` is the edge weight between them.\n connectivity : int, optional\n Pixels with a squared distance less than `connectivity` from each other\n are considered adjacent. It can range from 1 to `labels.ndim`. 
Its\n behavior is the same as `connectivity` parameter in\n `scipy.ndimage.generate_binary_structure`.\n\n Examples\n --------\n >>> from skimage import data, segmentation, filters, color, graph\n >>> img = data.chelsea()\n >>> labels = segmentation.slic(img)\n >>> edge_map = filters.sobel(color.rgb2gray(img))\n >>> rag = graph.rag_boundary(labels, edge_map)\n\n \"\"\"\n\n conn = ndi.generate_binary_structure(labels.ndim, connectivity)\n eroded = ndi.grey_erosion(labels, footprint=conn)\n dilated = ndi.grey_dilation(labels, footprint=conn)\n boundaries0 = (eroded != labels)\n boundaries1 = (dilated != labels)\n labels_small = np.concatenate((eroded[boundaries0], labels[boundaries1]))\n labels_large = np.concatenate((labels[boundaries0], dilated[boundaries1]))\n n = np.max(labels_large) + 1\n\n # use a dummy broadcast array as data for RAG\n ones = np.broadcast_to(1., labels_small.shape)\n count_matrix = sparse.coo_matrix((ones, (labels_small, labels_large)),\n dtype=int, shape=(n, n)).tocsr()\n data = np.concatenate((edge_map[boundaries0], edge_map[boundaries1]))\n\n data_coo = sparse.coo_matrix((data, (labels_small, labels_large)))\n graph_matrix = data_coo.tocsr()\n graph_matrix.data /= count_matrix.data\n\n rag = RAG()\n rag.add_weighted_edges_from(_edge_generator_from_csr(graph_matrix),\n weight='weight')\n rag.add_weighted_edges_from(_edge_generator_from_csr(count_matrix),\n weight='count')\n\n for n in rag.nodes():\n rag.nodes[n].update({'labels': [n]})\n\n return rag\n\n\n@require(\"matplotlib\", \">=3.3\")\ndef show_rag(labels, rag, image, border_color='black', edge_width=1.5,\n edge_cmap='magma', img_cmap='bone', in_place=True, ax=None):\n \"\"\"Show a Region Adjacency Graph on an image.\n\n Given a labelled image and its corresponding RAG, show the nodes and edges\n of the RAG on the image with the specified colors. Edges are displayed between\n the centroid of the 2 adjacent regions in the image.\n\n Parameters\n ----------\n labels : ndarray, shape (M, N)\n The labelled image.\n rag : RAG\n The Region Adjacency Graph.\n image : ndarray, shape (M, N[, 3])\n Input image. If `colormap` is `None`, the image should be in RGB\n format.\n border_color : color spec, optional\n Color with which the borders between regions are drawn.\n edge_width : float, optional\n The thickness with which the RAG edges are drawn.\n edge_cmap : :py:class:`matplotlib.colors.Colormap`, optional\n Any matplotlib colormap with which the edges are drawn.\n img_cmap : :py:class:`matplotlib.colors.Colormap`, optional\n Any matplotlib colormap with which the image is draw. If set to `None`\n the image is drawn as it is.\n in_place : bool, optional\n If set, the RAG is modified in place. For each node `n` the function\n will set a new attribute ``rag.nodes[n]['centroid']``.\n ax : :py:class:`matplotlib.axes.Axes`, optional\n The axes to draw on. If not specified, new axes are created and drawn\n on.\n\n Returns\n -------\n lc : :py:class:`matplotlib.collections.LineCollection`\n A collection of lines that represent the edges of the graph. 
It can be\n passed to the :meth:`matplotlib.figure.Figure.colorbar` function.\n\n Examples\n --------\n >>> from skimage import data, segmentation, graph\n >>> import matplotlib.pyplot as plt\n >>>\n >>> img = data.coffee()\n >>> labels = segmentation.slic(img)\n >>> g = graph.rag_mean_color(img, labels)\n >>> lc = graph.show_rag(labels, g, img)\n >>> cbar = plt.colorbar(lc)\n \"\"\"\n from matplotlib import colors\n from matplotlib import pyplot as plt\n from matplotlib.collections import LineCollection\n\n if not in_place:\n rag = rag.copy()\n\n if ax is None:\n fig, ax = plt.subplots()\n out = util.img_as_float(image, force_copy=True)\n\n if img_cmap is None:\n if image.ndim < 3 or image.shape[2] not in [3, 4]:\n msg = 'If colormap is `None`, an RGB or RGBA image should be given'\n raise ValueError(msg)\n # Ignore the alpha channel\n out = image[:, :, :3]\n else:\n img_cmap = plt.get_cmap(img_cmap)\n out = color.rgb2gray(image)\n # Ignore the alpha channel\n out = img_cmap(out)[:, :, :3]\n\n edge_cmap = plt.get_cmap(edge_cmap)\n\n # Handling the case where one node has multiple labels\n # offset is 1 so that regionprops does not ignore 0\n offset = 1\n map_array = np.arange(labels.max() + 1)\n for n, d in rag.nodes(data=True):\n for label in d['labels']:\n map_array[label] = offset\n offset += 1\n\n rag_labels = map_array[labels]\n regions = measure.regionprops(rag_labels)\n\n for (n, data), region in zip(rag.nodes(data=True), regions):\n data['centroid'] = tuple(map(int, region['centroid']))\n\n cc = colors.ColorConverter()\n if border_color is not None:\n border_color = cc.to_rgb(border_color)\n out = segmentation.mark_boundaries(out, rag_labels, color=border_color)\n\n ax.imshow(out)\n\n # Defining the end points of the edges\n # The tuple[::-1] syntax reverses a tuple as matplotlib uses (x,y)\n # convention while skimage uses (row, column)\n lines = [[rag.nodes[n1]['centroid'][::-1], rag.nodes[n2]['centroid'][::-1]]\n for (n1, n2) in rag.edges()]\n\n lc = LineCollection(lines, linewidths=edge_width, cmap=edge_cmap)\n edge_weights = [d['weight'] for x, y, d in rag.edges(data=True)]\n lc.set_array(np.array(edge_weights))\n ax.add_collection(lc)\n\n return lc\n", "path": "skimage/graph/_rag.py"}]} |
gh_patches_debug_1111 | rasdani/github-patches | git_diff | gratipay__gratipay.com-4197 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
403 clicking "fix credit card" in email when not logged in
My credit card expired and I got the email reminding me to fix payment info. I clicked the "fix credit card" button in the email and was taken to a 403 Forbidden page. I would expect to be taken to the login form when I'm not already logged in. Thanks!
--- END ISSUE ---
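The report can be checked without going through the email itself; the route below is an illustrative guess at the kind of page the email links to (any page that is restricted to its owner behaves the same way for an anonymous visitor), so treat it as a sketch rather than the exact URL:

```python
import requests

# Hypothetical restricted page for some participant, visited anonymously
# (no session cookie sent) -- roughly what the "fix credit card" link points at.
resp = requests.get(
    "https://gratipay.com/~alice/routes/credit-card",
    allow_redirects=False,
)

print(resp.status_code)  # reported: 403; expected: something that leads to the login form
```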
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gratipay/utils/__init__.py`
Content:
```
1 # encoding: utf8
2
3 from __future__ import absolute_import, division, print_function, unicode_literals
4
5 from base64 import urlsafe_b64encode, urlsafe_b64decode
6 from datetime import datetime, timedelta
7
8 from aspen import Response, json
9 from aspen.utils import to_rfc822, utcnow
10 from dependency_injection import resolve_dependencies
11 from postgres.cursors import SimpleCursorBase
12
13 import gratipay
14
15
16 BEGINNING_OF_EPOCH = to_rfc822(datetime(1970, 1, 1)).encode('ascii')
17
18 # Difference between current time and credit card expiring date when
19 # card is considered as expiring
20 EXPIRING_DELTA = timedelta(days = 30)
21
22
23 def dict_to_querystring(mapping):
24 if not mapping:
25 return u''
26
27 arguments = []
28 for key, values in mapping.iteritems():
29 for val in values:
30 arguments.append(u'='.join([key, val]))
31
32 return u'?' + u'&'.join(arguments)
33
34
35 def use_tildes_for_participants(website, request):
36 if request.path.raw.startswith('/~/'):
37 to = '/~' + request.path.raw[3:]
38 if request.qs.raw:
39 to += '?' + request.qs.raw
40 website.redirect(to)
41 elif request.path.raw.startswith('/~'):
42 request.path.__init__('/~/' + request.path.raw[2:])
43
44
45 def canonicalize(redirect, path, base, canonical, given, arguments=None):
46 if given != canonical:
47 assert canonical.lower() == given.lower() # sanity check
48 remainder = path[len(base + given):]
49
50 if arguments is not None:
51 arguments = dict_to_querystring(arguments)
52
53 newpath = base + canonical + remainder + arguments or ''
54 redirect(newpath)
55
56
57 def get_participant(state, restrict=True, resolve_unclaimed=True):
58 """Given a Request, raise Response or return Participant.
59
60 If restrict is True then we'll restrict access to owners and admins.
61
62 """
63 redirect = state['website'].redirect
64 request = state['request']
65 user = state['user']
66 slug = request.line.uri.path['username']
67 qs = request.line.uri.querystring
68 _ = state['_']
69
70 if restrict:
71 if user.ANON:
72 raise Response(403, _("You need to log in to access this page."))
73
74 from gratipay.models.participant import Participant # avoid circular import
75 participant = Participant.from_username(slug)
76
77 if participant is None:
78 raise Response(404)
79
80 canonicalize(redirect, request.line.uri.path.raw, '/~/', participant.username, slug, qs)
81
82 if participant.is_closed:
83 if user.ADMIN:
84 return participant
85 raise Response(410)
86
87 if participant.claimed_time is None and resolve_unclaimed:
88 to = participant.resolve_unclaimed()
89 if to:
90 # This is a stub account (someone on another platform who hasn't
91 # actually registered with Gratipay yet)
92 redirect(to)
93 else:
94 # This is an archived account (result of take_over)
95 if user.ADMIN:
96 return participant
97 raise Response(404)
98
99 if restrict:
100 if participant != user.participant:
101 if not user.ADMIN:
102 raise Response(403, _("You are not authorized to access this page."))
103
104 return participant
105
106
107 def get_team(state):
108 """Given a Request, raise Response or return Team.
109 """
110 redirect = state['website'].redirect
111 request = state['request']
112 user = state['user']
113 slug = request.line.uri.path['team']
114 qs = request.line.uri.querystring
115
116 from gratipay.models.team import Team # avoid circular import
117 team = Team.from_slug(slug)
118
119 if team is None:
120 # Try to redirect to a Participant.
121 from gratipay.models.participant import Participant # avoid circular import
122 participant = Participant.from_username(slug)
123 if participant is not None:
124 qs = '?' + request.qs.raw if request.qs.raw else ''
125 redirect('/~' + request.path.raw[1:] + qs)
126 raise Response(404)
127
128 canonicalize(redirect, request.line.uri.path.raw, '/', team.slug, slug, qs)
129
130 if team.is_closed and not user.ADMIN:
131 raise Response(410)
132
133 return team
134
135
136 def encode_for_querystring(s):
137 """Given a unicode, return a unicode that's safe for transport across a querystring.
138 """
139 if not isinstance(s, unicode):
140 raise TypeError('unicode required')
141 return urlsafe_b64encode(s.encode('utf8')).replace(b'=', b'~').decode('ascii')
142
143
144 def decode_from_querystring(s, **kw):
145 """Given a unicode computed by encode_for_querystring, return the inverse.
146
147 We raise Response(400) if the input value can't be decoded (i.e., it's not
148 ASCII, not padded properly, or not decodable as UTF-8 once Base64-decoded).
149
150 """
151 if not isinstance(s, unicode):
152 raise TypeError('unicode required')
153 try:
154 return urlsafe_b64decode(s.encode('ascii').replace(b'~', b'=')).decode('utf8')
155 except:
156 if 'default' in kw:
157 # Enable callers to handle errors without using try/except.
158 return kw['default']
159 raise Response(400, "invalid input")
160
161
162 def update_cta(website):
163 nusers = website.db.one("""
164 SELECT nusers FROM paydays
165 ORDER BY ts_end DESC LIMIT 1
166 """, default=0)
167 nreceiving_from = website.db.one("""
168 SELECT nreceiving_from
169 FROM teams
170 WHERE slug = 'Gratipay'
171 """, default=0)
172 website.support_current = cur = int(round(nreceiving_from / nusers * 100)) if nusers else 0
173 if cur < 10: goal = 20
174 elif cur < 15: goal = 30
175 elif cur < 25: goal = 40
176 elif cur < 35: goal = 50
177 elif cur < 45: goal = 60
178 elif cur < 55: goal = 70
179 elif cur < 65: goal = 80
180 elif cur > 70: goal = None
181 website.support_goal = goal
182
183
184 def _execute(this, sql, params=[]):
185 print(sql.strip(), params)
186 super(SimpleCursorBase, this).execute(sql, params)
187
188 def log_cursor(f):
189 "Prints sql and params to stdout. Works globaly so watch for threaded use."
190 def wrapper(*a, **kw):
191 try:
192 SimpleCursorBase.execute = _execute
193 ret = f(*a, **kw)
194 finally:
195 del SimpleCursorBase.execute
196 return ret
197 return wrapper
198
199
200 def format_money(money):
201 format = '%.2f' if money < 1000 else '%.0f'
202 return format % money
203
204
205 def excerpt_intro(text, length=175, append=u'…'):
206 if not text:
207 return ''
208 if len(text) > length:
209 return text[:length] + append
210 return text
211
212
213 def is_card_expiring(expiration_year, expiration_month):
214 now = datetime.utcnow()
215 expiring_date = datetime(expiration_year, expiration_month, 1)
216 delta = expiring_date - now
217 return delta < EXPIRING_DELTA
218
219
220 def set_cookie(cookies, key, value, expires=None, httponly=True, path=b'/'):
221 cookies[key] = value
222 cookie = cookies[key]
223 if expires:
224 if isinstance(expires, timedelta):
225 expires += utcnow()
226 if isinstance(expires, datetime):
227 expires = to_rfc822(expires).encode('ascii')
228 cookie[b'expires'] = expires
229 if httponly:
230 cookie[b'httponly'] = True
231 if path:
232 cookie[b'path'] = path
233 if gratipay.use_secure_cookies:
234 cookie[b'secure'] = True
235
236
237 def erase_cookie(cookies, key, **kw):
238 set_cookie(cookies, key, '', BEGINNING_OF_EPOCH, **kw)
239
240
241 def filter_profile_nav(user, participant, pages):
242 out = []
243 for foo, bar, show_them, show_others in pages:
244 if (user.participant == participant and show_them) \
245 or (user.participant != participant and show_others) \
246 or user.ADMIN:
247 out.append((foo, bar, show_them, show_others))
248 return out
249
250
251 def to_javascript(obj):
252 """For when you want to inject an object into a <script> tag.
253 """
254 return json.dumps(obj).replace('</', '<\\/')
255
256
257 class LazyResponse(Response):
258
259 def __init__(self, code, lazy_body, **kw):
260 Response.__init__(self, code, '', **kw)
261 self.lazy_body = lazy_body
262
263 def render_body(self, state):
264 f = self.lazy_body
265 self.body = f(*resolve_dependencies(f, state).as_args)
266
```
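For orientation, the listing answers with 403 in two different situations inside `get_participant`, and only the second is a genuine permission problem; the excerpt below simply restates lines 70-72 and 99-102 from above:

```python
# Case 1 (the email-link scenario): the visitor is not logged in at all.
if restrict:
    if user.ANON:
        raise Response(403, _("You need to log in to access this page."))

# Case 2: the visitor is logged in but is neither the owner nor an admin.
if restrict:
    if participant != user.participant:
        if not user.ADMIN:
            raise Response(403, _("You are not authorized to access this page."))
```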
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/gratipay/utils/__init__.py b/gratipay/utils/__init__.py
--- a/gratipay/utils/__init__.py
+++ b/gratipay/utils/__init__.py
@@ -69,7 +69,7 @@
 
     if restrict:
        if user.ANON:
-            raise Response(403, _("You need to log in to access this page."))
+            raise Response(401, _("You need to log in to access this page."))
 
    from gratipay.models.participant import Participant # avoid circular import
    participant = Participant.from_username(slug)
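The swap from 403 to 401 matters because of what the two codes tell the caller: 401 Unauthorized means "authenticate and retry", which is the signal an error page or front end needs in order to offer a login form, while 403 Forbidden means the request will not be allowed even after logging in. A compressed sketch of the guard after the patch (the participant lookup in between is elided):

```python
if user.ANON:
    # 401: the visitor only needs to log in -- a login form can be offered.
    raise Response(401, _("You need to log in to access this page."))
# ... a logged-in visitor who is neither owner nor admin still gets 403 further down.
```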
| {"golden_diff": "diff --git a/gratipay/utils/__init__.py b/gratipay/utils/__init__.py\n--- a/gratipay/utils/__init__.py\n+++ b/gratipay/utils/__init__.py\n@@ -69,7 +69,7 @@\n \n if restrict:\n if user.ANON:\n- raise Response(403, _(\"You need to log in to access this page.\"))\n+ raise Response(401, _(\"You need to log in to access this page.\"))\n \n from gratipay.models.participant import Participant # avoid circular import\n participant = Participant.from_username(slug)\n", "issue": "403 clicking \"fix credit card\" in email when not logged in\nMy credit card expired and I got the email reminding me to fix payment info. I clicked the \"fix credit card\" button in the email and was taken to a 403 Forbidden page. Would expect to be taken to login form when I'm not already logged in. Thanks!\n\n", "before_files": [{"content": "# encoding: utf8\n\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom base64 import urlsafe_b64encode, urlsafe_b64decode\nfrom datetime import datetime, timedelta\n\nfrom aspen import Response, json\nfrom aspen.utils import to_rfc822, utcnow\nfrom dependency_injection import resolve_dependencies\nfrom postgres.cursors import SimpleCursorBase\n\nimport gratipay\n\n\nBEGINNING_OF_EPOCH = to_rfc822(datetime(1970, 1, 1)).encode('ascii')\n\n# Difference between current time and credit card expiring date when\n# card is considered as expiring\nEXPIRING_DELTA = timedelta(days = 30)\n\n\ndef dict_to_querystring(mapping):\n if not mapping:\n return u''\n\n arguments = []\n for key, values in mapping.iteritems():\n for val in values:\n arguments.append(u'='.join([key, val]))\n\n return u'?' + u'&'.join(arguments)\n\n\ndef use_tildes_for_participants(website, request):\n if request.path.raw.startswith('/~/'):\n to = '/~' + request.path.raw[3:]\n if request.qs.raw:\n to += '?' 
+ request.qs.raw\n website.redirect(to)\n elif request.path.raw.startswith('/~'):\n request.path.__init__('/~/' + request.path.raw[2:])\n\n\ndef canonicalize(redirect, path, base, canonical, given, arguments=None):\n if given != canonical:\n assert canonical.lower() == given.lower() # sanity check\n remainder = path[len(base + given):]\n\n if arguments is not None:\n arguments = dict_to_querystring(arguments)\n\n newpath = base + canonical + remainder + arguments or ''\n redirect(newpath)\n\n\ndef get_participant(state, restrict=True, resolve_unclaimed=True):\n \"\"\"Given a Request, raise Response or return Participant.\n\n If restrict is True then we'll restrict access to owners and admins.\n\n \"\"\"\n redirect = state['website'].redirect\n request = state['request']\n user = state['user']\n slug = request.line.uri.path['username']\n qs = request.line.uri.querystring\n _ = state['_']\n\n if restrict:\n if user.ANON:\n raise Response(403, _(\"You need to log in to access this page.\"))\n\n from gratipay.models.participant import Participant # avoid circular import\n participant = Participant.from_username(slug)\n\n if participant is None:\n raise Response(404)\n\n canonicalize(redirect, request.line.uri.path.raw, '/~/', participant.username, slug, qs)\n\n if participant.is_closed:\n if user.ADMIN:\n return participant\n raise Response(410)\n\n if participant.claimed_time is None and resolve_unclaimed:\n to = participant.resolve_unclaimed()\n if to:\n # This is a stub account (someone on another platform who hasn't\n # actually registered with Gratipay yet)\n redirect(to)\n else:\n # This is an archived account (result of take_over)\n if user.ADMIN:\n return participant\n raise Response(404)\n\n if restrict:\n if participant != user.participant:\n if not user.ADMIN:\n raise Response(403, _(\"You are not authorized to access this page.\"))\n\n return participant\n\n\ndef get_team(state):\n \"\"\"Given a Request, raise Response or return Team.\n \"\"\"\n redirect = state['website'].redirect\n request = state['request']\n user = state['user']\n slug = request.line.uri.path['team']\n qs = request.line.uri.querystring\n\n from gratipay.models.team import Team # avoid circular import\n team = Team.from_slug(slug)\n\n if team is None:\n # Try to redirect to a Participant.\n from gratipay.models.participant import Participant # avoid circular import\n participant = Participant.from_username(slug)\n if participant is not None:\n qs = '?' 
+ request.qs.raw if request.qs.raw else ''\n redirect('/~' + request.path.raw[1:] + qs)\n raise Response(404)\n\n canonicalize(redirect, request.line.uri.path.raw, '/', team.slug, slug, qs)\n\n if team.is_closed and not user.ADMIN:\n raise Response(410)\n\n return team\n\n\ndef encode_for_querystring(s):\n \"\"\"Given a unicode, return a unicode that's safe for transport across a querystring.\n \"\"\"\n if not isinstance(s, unicode):\n raise TypeError('unicode required')\n return urlsafe_b64encode(s.encode('utf8')).replace(b'=', b'~').decode('ascii')\n\n\ndef decode_from_querystring(s, **kw):\n \"\"\"Given a unicode computed by encode_for_querystring, return the inverse.\n\n We raise Response(400) if the input value can't be decoded (i.e., it's not\n ASCII, not padded properly, or not decodable as UTF-8 once Base64-decoded).\n\n \"\"\"\n if not isinstance(s, unicode):\n raise TypeError('unicode required')\n try:\n return urlsafe_b64decode(s.encode('ascii').replace(b'~', b'=')).decode('utf8')\n except:\n if 'default' in kw:\n # Enable callers to handle errors without using try/except.\n return kw['default']\n raise Response(400, \"invalid input\")\n\n\ndef update_cta(website):\n nusers = website.db.one(\"\"\"\n SELECT nusers FROM paydays\n ORDER BY ts_end DESC LIMIT 1\n \"\"\", default=0)\n nreceiving_from = website.db.one(\"\"\"\n SELECT nreceiving_from\n FROM teams\n WHERE slug = 'Gratipay'\n \"\"\", default=0)\n website.support_current = cur = int(round(nreceiving_from / nusers * 100)) if nusers else 0\n if cur < 10: goal = 20\n elif cur < 15: goal = 30\n elif cur < 25: goal = 40\n elif cur < 35: goal = 50\n elif cur < 45: goal = 60\n elif cur < 55: goal = 70\n elif cur < 65: goal = 80\n elif cur > 70: goal = None\n website.support_goal = goal\n\n\ndef _execute(this, sql, params=[]):\n print(sql.strip(), params)\n super(SimpleCursorBase, this).execute(sql, params)\n\ndef log_cursor(f):\n \"Prints sql and params to stdout. 
Works globaly so watch for threaded use.\"\n def wrapper(*a, **kw):\n try:\n SimpleCursorBase.execute = _execute\n ret = f(*a, **kw)\n finally:\n del SimpleCursorBase.execute\n return ret\n return wrapper\n\n\ndef format_money(money):\n format = '%.2f' if money < 1000 else '%.0f'\n return format % money\n\n\ndef excerpt_intro(text, length=175, append=u'\u2026'):\n if not text:\n return ''\n if len(text) > length:\n return text[:length] + append\n return text\n\n\ndef is_card_expiring(expiration_year, expiration_month):\n now = datetime.utcnow()\n expiring_date = datetime(expiration_year, expiration_month, 1)\n delta = expiring_date - now\n return delta < EXPIRING_DELTA\n\n\ndef set_cookie(cookies, key, value, expires=None, httponly=True, path=b'/'):\n cookies[key] = value\n cookie = cookies[key]\n if expires:\n if isinstance(expires, timedelta):\n expires += utcnow()\n if isinstance(expires, datetime):\n expires = to_rfc822(expires).encode('ascii')\n cookie[b'expires'] = expires\n if httponly:\n cookie[b'httponly'] = True\n if path:\n cookie[b'path'] = path\n if gratipay.use_secure_cookies:\n cookie[b'secure'] = True\n\n\ndef erase_cookie(cookies, key, **kw):\n set_cookie(cookies, key, '', BEGINNING_OF_EPOCH, **kw)\n\n\ndef filter_profile_nav(user, participant, pages):\n out = []\n for foo, bar, show_them, show_others in pages:\n if (user.participant == participant and show_them) \\\n or (user.participant != participant and show_others) \\\n or user.ADMIN:\n out.append((foo, bar, show_them, show_others))\n return out\n\n\ndef to_javascript(obj):\n \"\"\"For when you want to inject an object into a <script> tag.\n \"\"\"\n return json.dumps(obj).replace('</', '<\\\\/')\n\n\nclass LazyResponse(Response):\n\n def __init__(self, code, lazy_body, **kw):\n Response.__init__(self, code, '', **kw)\n self.lazy_body = lazy_body\n\n def render_body(self, state):\n f = self.lazy_body\n self.body = f(*resolve_dependencies(f, state).as_args)\n", "path": "gratipay/utils/__init__.py"}], "after_files": [{"content": "# encoding: utf8\n\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom base64 import urlsafe_b64encode, urlsafe_b64decode\nfrom datetime import datetime, timedelta\n\nfrom aspen import Response, json\nfrom aspen.utils import to_rfc822, utcnow\nfrom dependency_injection import resolve_dependencies\nfrom postgres.cursors import SimpleCursorBase\n\nimport gratipay\n\n\nBEGINNING_OF_EPOCH = to_rfc822(datetime(1970, 1, 1)).encode('ascii')\n\n# Difference between current time and credit card expiring date when\n# card is considered as expiring\nEXPIRING_DELTA = timedelta(days = 30)\n\n\ndef dict_to_querystring(mapping):\n if not mapping:\n return u''\n\n arguments = []\n for key, values in mapping.iteritems():\n for val in values:\n arguments.append(u'='.join([key, val]))\n\n return u'?' + u'&'.join(arguments)\n\n\ndef use_tildes_for_participants(website, request):\n if request.path.raw.startswith('/~/'):\n to = '/~' + request.path.raw[3:]\n if request.qs.raw:\n to += '?' 
+ request.qs.raw\n website.redirect(to)\n elif request.path.raw.startswith('/~'):\n request.path.__init__('/~/' + request.path.raw[2:])\n\n\ndef canonicalize(redirect, path, base, canonical, given, arguments=None):\n if given != canonical:\n assert canonical.lower() == given.lower() # sanity check\n remainder = path[len(base + given):]\n\n if arguments is not None:\n arguments = dict_to_querystring(arguments)\n\n newpath = base + canonical + remainder + arguments or ''\n redirect(newpath)\n\n\ndef get_participant(state, restrict=True, resolve_unclaimed=True):\n \"\"\"Given a Request, raise Response or return Participant.\n\n If restrict is True then we'll restrict access to owners and admins.\n\n \"\"\"\n redirect = state['website'].redirect\n request = state['request']\n user = state['user']\n slug = request.line.uri.path['username']\n qs = request.line.uri.querystring\n _ = state['_']\n\n if restrict:\n if user.ANON:\n raise Response(401, _(\"You need to log in to access this page.\"))\n\n from gratipay.models.participant import Participant # avoid circular import\n participant = Participant.from_username(slug)\n\n if participant is None:\n raise Response(404)\n\n canonicalize(redirect, request.line.uri.path.raw, '/~/', participant.username, slug, qs)\n\n if participant.is_closed:\n if user.ADMIN:\n return participant\n raise Response(410)\n\n if participant.claimed_time is None and resolve_unclaimed:\n to = participant.resolve_unclaimed()\n if to:\n # This is a stub account (someone on another platform who hasn't\n # actually registered with Gratipay yet)\n redirect(to)\n else:\n # This is an archived account (result of take_over)\n if user.ADMIN:\n return participant\n raise Response(404)\n\n if restrict:\n if participant != user.participant:\n if not user.ADMIN:\n raise Response(403, _(\"You are not authorized to access this page.\"))\n\n return participant\n\n\ndef get_team(state):\n \"\"\"Given a Request, raise Response or return Team.\n \"\"\"\n redirect = state['website'].redirect\n request = state['request']\n user = state['user']\n slug = request.line.uri.path['team']\n qs = request.line.uri.querystring\n\n from gratipay.models.team import Team # avoid circular import\n team = Team.from_slug(slug)\n\n if team is None:\n # Try to redirect to a Participant.\n from gratipay.models.participant import Participant # avoid circular import\n participant = Participant.from_username(slug)\n if participant is not None:\n qs = '?' 
+ request.qs.raw if request.qs.raw else ''\n redirect('/~' + request.path.raw[1:] + qs)\n raise Response(404)\n\n canonicalize(redirect, request.line.uri.path.raw, '/', team.slug, slug, qs)\n\n if team.is_closed and not user.ADMIN:\n raise Response(410)\n\n return team\n\n\ndef encode_for_querystring(s):\n \"\"\"Given a unicode, return a unicode that's safe for transport across a querystring.\n \"\"\"\n if not isinstance(s, unicode):\n raise TypeError('unicode required')\n return urlsafe_b64encode(s.encode('utf8')).replace(b'=', b'~').decode('ascii')\n\n\ndef decode_from_querystring(s, **kw):\n \"\"\"Given a unicode computed by encode_for_querystring, return the inverse.\n\n We raise Response(400) if the input value can't be decoded (i.e., it's not\n ASCII, not padded properly, or not decodable as UTF-8 once Base64-decoded).\n\n \"\"\"\n if not isinstance(s, unicode):\n raise TypeError('unicode required')\n try:\n return urlsafe_b64decode(s.encode('ascii').replace(b'~', b'=')).decode('utf8')\n except:\n if 'default' in kw:\n # Enable callers to handle errors without using try/except.\n return kw['default']\n raise Response(400, \"invalid input\")\n\n\ndef update_cta(website):\n nusers = website.db.one(\"\"\"\n SELECT nusers FROM paydays\n ORDER BY ts_end DESC LIMIT 1\n \"\"\", default=0)\n nreceiving_from = website.db.one(\"\"\"\n SELECT nreceiving_from\n FROM teams\n WHERE slug = 'Gratipay'\n \"\"\", default=0)\n website.support_current = cur = int(round(nreceiving_from / nusers * 100)) if nusers else 0\n if cur < 10: goal = 20\n elif cur < 15: goal = 30\n elif cur < 25: goal = 40\n elif cur < 35: goal = 50\n elif cur < 45: goal = 60\n elif cur < 55: goal = 70\n elif cur < 65: goal = 80\n elif cur > 70: goal = None\n website.support_goal = goal\n\n\ndef _execute(this, sql, params=[]):\n print(sql.strip(), params)\n super(SimpleCursorBase, this).execute(sql, params)\n\ndef log_cursor(f):\n \"Prints sql and params to stdout. 
Works globaly so watch for threaded use.\"\n def wrapper(*a, **kw):\n try:\n SimpleCursorBase.execute = _execute\n ret = f(*a, **kw)\n finally:\n del SimpleCursorBase.execute\n return ret\n return wrapper\n\n\ndef format_money(money):\n format = '%.2f' if money < 1000 else '%.0f'\n return format % money\n\n\ndef excerpt_intro(text, length=175, append=u'\u2026'):\n if not text:\n return ''\n if len(text) > length:\n return text[:length] + append\n return text\n\n\ndef is_card_expiring(expiration_year, expiration_month):\n now = datetime.utcnow()\n expiring_date = datetime(expiration_year, expiration_month, 1)\n delta = expiring_date - now\n return delta < EXPIRING_DELTA\n\n\ndef set_cookie(cookies, key, value, expires=None, httponly=True, path=b'/'):\n cookies[key] = value\n cookie = cookies[key]\n if expires:\n if isinstance(expires, timedelta):\n expires += utcnow()\n if isinstance(expires, datetime):\n expires = to_rfc822(expires).encode('ascii')\n cookie[b'expires'] = expires\n if httponly:\n cookie[b'httponly'] = True\n if path:\n cookie[b'path'] = path\n if gratipay.use_secure_cookies:\n cookie[b'secure'] = True\n\n\ndef erase_cookie(cookies, key, **kw):\n set_cookie(cookies, key, '', BEGINNING_OF_EPOCH, **kw)\n\n\ndef filter_profile_nav(user, participant, pages):\n out = []\n for foo, bar, show_them, show_others in pages:\n if (user.participant == participant and show_them) \\\n or (user.participant != participant and show_others) \\\n or user.ADMIN:\n out.append((foo, bar, show_them, show_others))\n return out\n\n\ndef to_javascript(obj):\n \"\"\"For when you want to inject an object into a <script> tag.\n \"\"\"\n return json.dumps(obj).replace('</', '<\\\\/')\n\n\nclass LazyResponse(Response):\n\n def __init__(self, code, lazy_body, **kw):\n Response.__init__(self, code, '', **kw)\n self.lazy_body = lazy_body\n\n def render_body(self, state):\n f = self.lazy_body\n self.body = f(*resolve_dependencies(f, state).as_args)\n", "path": "gratipay/utils/__init__.py"}]} |
gh_patches_debug_1112 | rasdani/github-patches | git_diff | vyperlang__vyper-2526 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
vyper.exceptions.TypeCheckFailure: pack_arguments did not return a value
### Version Information
* vyper Version (output of `vyper --version`): 0.2.16
* OS: osx
* Python Version (output of `python --version`): python3
### I tried to compile my code using "vyper file_name.vy" and this is the error I get
Please include information like:
*Error compiling: bounty.vy
vyper.exceptions.TypeCheckFailure: pack_arguments did not return a value
This is an unhandled internal compiler error. Please create an issue on Github to notify the developers.
* vyper
* the code that caused the failure (see [this link](https://help.github.com/articles/basic-writing-and-formatting-syntax/) for help with formatting code)
* please try running your example with the --debug flag turned on
### How can it be fixed?
Fill this in if you know how to fix it.
--- END ISSUE ---
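Since the report does not include `bounty.vy`, the quickest way to narrow the crash down is to capture the full Python traceback instead of the CLI banner; a small driver using the programmatic `compile_code` entry point does that (it assumes the contract file sits in the current directory):

```python
import traceback

import vyper

with open("bounty.vy") as f:
    source = f.read()

try:
    vyper.compile_code(source)
except Exception:
    # Print the internal stack trace that the "unhandled internal compiler
    # error" banner hides, showing where in the old codegen it fails.
    traceback.print_exc()
```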
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `vyper/old_codegen/external_call.py`
Content:
```
1 import vyper.utils as util
2 from vyper import ast as vy_ast
3 from vyper.exceptions import StateAccessViolation, StructureException, TypeCheckFailure
4 from vyper.old_codegen.abi import abi_encode, abi_type_of
5 from vyper.old_codegen.lll_node import Encoding, LLLnode
6 from vyper.old_codegen.parser_utils import (
7 calculate_type_for_external_return,
8 get_element_ptr,
9 getpos,
10 unwrap_location,
11 )
12 from vyper.old_codegen.types import TupleType, canonicalize_type, get_type_for_exact_size
13 from vyper.old_codegen.types.check import check_assign
14
15
16 def _pack_arguments(contract_sig, args, context, pos):
17 # abi encoding just treats all args as a big tuple
18 args_tuple_t = TupleType([x.typ for x in args])
19 args_as_tuple = LLLnode.from_list(["multi"] + [x for x in args], typ=args_tuple_t)
20 args_abi_t = abi_type_of(args_tuple_t)
21
22 # sanity typecheck - make sure the arguments can be assigned
23 dst_tuple_t = TupleType([arg.typ for arg in contract_sig.args][: len(args)])
24 _tmp = LLLnode("fake node", location="memory", typ=dst_tuple_t)
25 check_assign(_tmp, args_as_tuple, pos)
26
27 if contract_sig.return_type is not None:
28 return_abi_t = abi_type_of(calculate_type_for_external_return(contract_sig.return_type))
29
30 # we use the same buffer for args and returndata,
31 # so allocate enough space here for the returndata too.
32 buflen = max(args_abi_t.size_bound(), return_abi_t.size_bound())
33 else:
34 buflen = args_abi_t.size_bound()
35
36 buflen += 32 # padding for the method id
37
38 buf_t = get_type_for_exact_size(buflen)
39 buf = context.new_internal_variable(buf_t)
40
41 args_ofst = buf + 28
42 args_len = args_abi_t.size_bound() + 4
43
44 abi_signature = contract_sig.name + canonicalize_type(dst_tuple_t)
45
46 # layout:
47 # 32 bytes | args
48 # 0x..00<method_id_4bytes> | args
49 # the reason for the left padding is just so the alignment is easier.
50 # if we were only targeting constantinople, we could align
51 # to buf (and also keep code size small) by using
52 # (mstore buf (shl signature.method_id 224))
53 mstore_method_id = [["mstore", buf, util.abi_method_id(abi_signature)]]
54
55 if len(args) == 0:
56 encode_args = ["pass"]
57 else:
58 encode_args = abi_encode(buf + 32, args_as_tuple, pos)
59
60 return buf, mstore_method_id + [encode_args], args_ofst, args_len
61
62
63 def _returndata_encoding(contract_sig):
64 if contract_sig.is_from_json:
65 return Encoding.JSON_ABI
66 return Encoding.ABI
67
68
69 def _unpack_returndata(buf, contract_sig, context, pos):
70 return_t = contract_sig.return_type
71 if return_t is None:
72 return ["pass"], 0, 0
73
74 return_t = calculate_type_for_external_return(return_t)
75 # if the abi signature has a different type than
76 # the vyper type, we need to wrap and unwrap the type
77 # so that the ABI decoding works correctly
78 should_unwrap_abi_tuple = return_t != contract_sig.return_type
79
80 abi_return_t = abi_type_of(return_t)
81
82 min_return_size = abi_return_t.min_size()
83 max_return_size = abi_return_t.size_bound()
84 assert 0 < min_return_size <= max_return_size
85
86 ret_ofst = buf
87 ret_len = max_return_size
88
89 # revert when returndatasize is not in bounds
90 ret = []
91 # runtime: min_return_size <= returndatasize
92 # TODO move the -1 optimization to LLL optimizer
93 ret += [["assert", ["gt", "returndatasize", min_return_size - 1]]]
94
95 # add as the last LLLnode a pointer to the return data structure
96
97 # the return type has been wrapped by the calling contract;
98 # unwrap it so downstream code isn't confused.
99 # basically this expands to buf+32 if the return type has been wrapped
100 # in a tuple AND its ABI type is dynamic.
101 # in most cases, this simply will evaluate to ret.
102 # in the special case where the return type has been wrapped
103 # in a tuple AND its ABI type is dynamic, it expands to buf+32.
104 buf = LLLnode(buf, typ=return_t, encoding=_returndata_encoding(contract_sig), location="memory")
105
106 if should_unwrap_abi_tuple:
107 buf = get_element_ptr(buf, 0, pos=None, array_bounds_check=False)
108
109 ret += [buf]
110
111 return ret, ret_ofst, ret_len
112
113
114 def _external_call_helper(
115 contract_address, contract_sig, args_lll, context, pos=None, value=None, gas=None
116 ):
117
118 if value is None:
119 value = 0
120 if gas is None:
121 gas = "gas"
122
123 # sanity check
124 assert len(contract_sig.args) == len(args_lll)
125
126 if context.is_constant() and contract_sig.mutability not in ("view", "pure"):
127 # TODO is this already done in type checker?
128 raise StateAccessViolation(
129 f"May not call state modifying function '{contract_sig.name}' "
130 f"within {context.pp_constancy()}.",
131 pos,
132 )
133
134 sub = ["seq"]
135
136 buf, arg_packer, args_ofst, args_len = _pack_arguments(contract_sig, args_lll, context, pos)
137
138 ret_unpacker, ret_ofst, ret_len = _unpack_returndata(buf, contract_sig, context, pos)
139
140 sub += arg_packer
141
142 if contract_sig.return_type is None:
143 # if we do not expect return data, check that a contract exists at the
144 # target address. we must perform this check BEFORE the call because
145 # the contract might selfdestruct. on the other hand we can omit this
146 # when we _do_ expect return data because we later check
147 # `returndatasize` (that check works even if the contract
148 # selfdestructs).
149 sub.append(["assert", ["extcodesize", contract_address]])
150
151 if context.is_constant() or contract_sig.mutability in ("view", "pure"):
152 call_op = ["staticcall", gas, contract_address, args_ofst, args_len, ret_ofst, ret_len]
153 else:
154 call_op = ["call", gas, contract_address, value, args_ofst, args_len, ret_ofst, ret_len]
155
156 sub.append(["assert", call_op])
157
158 if contract_sig.return_type is not None:
159 sub += ret_unpacker
160
161 ret = LLLnode.from_list(
162 # set the encoding to ABI here, downstream code will decode and add clampers.
163 sub,
164 typ=contract_sig.return_type,
165 location="memory",
166 encoding=_returndata_encoding(contract_sig),
167 pos=pos,
168 )
169
170 return ret
171
172
173 # TODO push me up to expr.py
174 def get_gas_and_value(stmt_expr, context):
175 from vyper.old_codegen.expr import Expr # TODO rethink this circular import
176
177 value, gas = None, None
178 for kw in stmt_expr.keywords:
179 if kw.arg == "gas":
180 gas = Expr.parse_value_expr(kw.value, context)
181 elif kw.arg == "value":
182 value = Expr.parse_value_expr(kw.value, context)
183 else:
184 raise TypeCheckFailure("Unexpected keyword argument")
185 return value, gas
186
187
188 def lll_for_external_call(stmt_expr, context):
189 from vyper.old_codegen.expr import Expr # TODO rethink this circular import
190
191 pos = getpos(stmt_expr)
192 value, gas = get_gas_and_value(stmt_expr, context)
193 args_lll = [Expr(x, context).lll_node for x in stmt_expr.args]
194
195 if isinstance(stmt_expr.func, vy_ast.Attribute) and isinstance(
196 stmt_expr.func.value, vy_ast.Call
197 ):
198 # e.g. `Foo(address).bar()`
199
200 # sanity check
201 assert len(stmt_expr.func.value.args) == 1
202 contract_name = stmt_expr.func.value.func.id
203 contract_address = Expr.parse_value_expr(stmt_expr.func.value.args[0], context)
204
205 elif (
206 isinstance(stmt_expr.func.value, vy_ast.Attribute)
207 and stmt_expr.func.value.attr in context.globals
208 # TODO check for self?
209 and hasattr(context.globals[stmt_expr.func.value.attr].typ, "name")
210 ):
211 # e.g. `self.foo.bar()`
212
213 # sanity check
214 assert stmt_expr.func.value.value.id == "self", stmt_expr
215
216 contract_name = context.globals[stmt_expr.func.value.attr].typ.name
217 type_ = stmt_expr.func.value._metadata["type"]
218 var = context.globals[stmt_expr.func.value.attr]
219 contract_address = unwrap_location(
220 LLLnode.from_list(
221 type_.position.position,
222 typ=var.typ,
223 location="storage",
224 pos=pos,
225 annotation="self." + stmt_expr.func.value.attr,
226 )
227 )
228 else:
229 # TODO catch this during type checking
230 raise StructureException("Unsupported operator.", stmt_expr)
231
232 method_name = stmt_expr.func.attr
233 contract_sig = context.sigs[contract_name][method_name]
234
235 ret = _external_call_helper(
236 contract_address,
237 contract_sig,
238 args_lll,
239 context,
240 pos,
241 value=value,
242 gas=gas,
243 )
244 ret.annotation = stmt_expr.get("node_source_code")
245
246 return ret
247
```
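One assumption in the listing is easy to miss: `_external_call_helper` insists that the number of supplied arguments exactly match the interface signature (lines 123-124); judging from the accepted patch further down, that exact-match requirement is what breaks for calls that rely on default arguments:

```python
    # sanity check
    assert len(contract_sig.args) == len(args_lll)
```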
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/vyper/old_codegen/external_call.py b/vyper/old_codegen/external_call.py
--- a/vyper/old_codegen/external_call.py
+++ b/vyper/old_codegen/external_call.py
@@ -121,7 +121,7 @@
         gas = "gas"
 
     # sanity check
-    assert len(contract_sig.args) == len(args_lll)
+    assert len(contract_sig.base_args) <= len(args_lll) <= len(contract_sig.args)
 
     if context.is_constant() and contract_sig.mutability not in ("view", "pure"):
         # TODO is this already done in type checker?
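The patch relaxes the arity invariant instead of touching any encoding logic. Reading the surrounding codegen, `contract_sig.args` appears to list every parameter of the signature while `contract_sig.base_args` lists only the ones without defaults, so a call is valid whenever it supplies at least the required arguments and at most all of them; the sketch below restates the new bound with that assumption spelled out:

```python
n_required = len(contract_sig.base_args)  # parameters without defaults (assumed meaning)
n_supplied = len(args_lll)                # LLL nodes for the arguments actually passed
n_total = len(contract_sig.args)          # required plus defaulted parameters

assert n_required <= n_supplied <= n_total
```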
| {"golden_diff": "diff --git a/vyper/old_codegen/external_call.py b/vyper/old_codegen/external_call.py\n--- a/vyper/old_codegen/external_call.py\n+++ b/vyper/old_codegen/external_call.py\n@@ -121,7 +121,7 @@\n gas = \"gas\"\n \n # sanity check\n- assert len(contract_sig.args) == len(args_lll)\n+ assert len(contract_sig.base_args) <= len(args_lll) <= len(contract_sig.args)\n \n if context.is_constant() and contract_sig.mutability not in (\"view\", \"pure\"):\n # TODO is this already done in type checker?\n", "issue": "vyper.exceptions.TypeCheckFailure: pack_arguments did not return a value\n### Version Information\r\n\r\n* vyper Version (output of `vyper --version`): 0.2.16\r\n* OS: osx\r\n* Python Version (output of `python --version`): python3\r\n\r\n### I tried to compile my codes using \"vyper file_name.vy\" and this is the error I get\r\n\r\nPlease include information like:\r\n\r\n*Error compiling: bounty.v\r\n\r\ny\r\nvyper.exceptions.TypeCheckFailure: pack_arguments did not return a value\r\n\r\nThis is an unhandled internal compiler error. Please create an issue on Github to notify the developers.\r\n* vyper\r\n* the code that caused the failure (see [this link](https://help.github.com/articles/basic-writing-and-formatting-syntax/) for help with formatting code)\r\n* please try running your example with the --debug flag turned on\r\n\r\n\r\n### How can it be fixed?\r\n\r\nFill this in if you know how to fix it.\r\n\r\n\r\n\n", "before_files": [{"content": "import vyper.utils as util\nfrom vyper import ast as vy_ast\nfrom vyper.exceptions import StateAccessViolation, StructureException, TypeCheckFailure\nfrom vyper.old_codegen.abi import abi_encode, abi_type_of\nfrom vyper.old_codegen.lll_node import Encoding, LLLnode\nfrom vyper.old_codegen.parser_utils import (\n calculate_type_for_external_return,\n get_element_ptr,\n getpos,\n unwrap_location,\n)\nfrom vyper.old_codegen.types import TupleType, canonicalize_type, get_type_for_exact_size\nfrom vyper.old_codegen.types.check import check_assign\n\n\ndef _pack_arguments(contract_sig, args, context, pos):\n # abi encoding just treats all args as a big tuple\n args_tuple_t = TupleType([x.typ for x in args])\n args_as_tuple = LLLnode.from_list([\"multi\"] + [x for x in args], typ=args_tuple_t)\n args_abi_t = abi_type_of(args_tuple_t)\n\n # sanity typecheck - make sure the arguments can be assigned\n dst_tuple_t = TupleType([arg.typ for arg in contract_sig.args][: len(args)])\n _tmp = LLLnode(\"fake node\", location=\"memory\", typ=dst_tuple_t)\n check_assign(_tmp, args_as_tuple, pos)\n\n if contract_sig.return_type is not None:\n return_abi_t = abi_type_of(calculate_type_for_external_return(contract_sig.return_type))\n\n # we use the same buffer for args and returndata,\n # so allocate enough space here for the returndata too.\n buflen = max(args_abi_t.size_bound(), return_abi_t.size_bound())\n else:\n buflen = args_abi_t.size_bound()\n\n buflen += 32 # padding for the method id\n\n buf_t = get_type_for_exact_size(buflen)\n buf = context.new_internal_variable(buf_t)\n\n args_ofst = buf + 28\n args_len = args_abi_t.size_bound() + 4\n\n abi_signature = contract_sig.name + canonicalize_type(dst_tuple_t)\n\n # layout:\n # 32 bytes | args\n # 0x..00<method_id_4bytes> | args\n # the reason for the left padding is just so the alignment is easier.\n # if we were only targeting constantinople, we could align\n # to buf (and also keep code size small) by using\n # (mstore buf (shl signature.method_id 224))\n mstore_method_id = [[\"mstore\", buf, 
util.abi_method_id(abi_signature)]]\n\n if len(args) == 0:\n encode_args = [\"pass\"]\n else:\n encode_args = abi_encode(buf + 32, args_as_tuple, pos)\n\n return buf, mstore_method_id + [encode_args], args_ofst, args_len\n\n\ndef _returndata_encoding(contract_sig):\n if contract_sig.is_from_json:\n return Encoding.JSON_ABI\n return Encoding.ABI\n\n\ndef _unpack_returndata(buf, contract_sig, context, pos):\n return_t = contract_sig.return_type\n if return_t is None:\n return [\"pass\"], 0, 0\n\n return_t = calculate_type_for_external_return(return_t)\n # if the abi signature has a different type than\n # the vyper type, we need to wrap and unwrap the type\n # so that the ABI decoding works correctly\n should_unwrap_abi_tuple = return_t != contract_sig.return_type\n\n abi_return_t = abi_type_of(return_t)\n\n min_return_size = abi_return_t.min_size()\n max_return_size = abi_return_t.size_bound()\n assert 0 < min_return_size <= max_return_size\n\n ret_ofst = buf\n ret_len = max_return_size\n\n # revert when returndatasize is not in bounds\n ret = []\n # runtime: min_return_size <= returndatasize\n # TODO move the -1 optimization to LLL optimizer\n ret += [[\"assert\", [\"gt\", \"returndatasize\", min_return_size - 1]]]\n\n # add as the last LLLnode a pointer to the return data structure\n\n # the return type has been wrapped by the calling contract;\n # unwrap it so downstream code isn't confused.\n # basically this expands to buf+32 if the return type has been wrapped\n # in a tuple AND its ABI type is dynamic.\n # in most cases, this simply will evaluate to ret.\n # in the special case where the return type has been wrapped\n # in a tuple AND its ABI type is dynamic, it expands to buf+32.\n buf = LLLnode(buf, typ=return_t, encoding=_returndata_encoding(contract_sig), location=\"memory\")\n\n if should_unwrap_abi_tuple:\n buf = get_element_ptr(buf, 0, pos=None, array_bounds_check=False)\n\n ret += [buf]\n\n return ret, ret_ofst, ret_len\n\n\ndef _external_call_helper(\n contract_address, contract_sig, args_lll, context, pos=None, value=None, gas=None\n):\n\n if value is None:\n value = 0\n if gas is None:\n gas = \"gas\"\n\n # sanity check\n assert len(contract_sig.args) == len(args_lll)\n\n if context.is_constant() and contract_sig.mutability not in (\"view\", \"pure\"):\n # TODO is this already done in type checker?\n raise StateAccessViolation(\n f\"May not call state modifying function '{contract_sig.name}' \"\n f\"within {context.pp_constancy()}.\",\n pos,\n )\n\n sub = [\"seq\"]\n\n buf, arg_packer, args_ofst, args_len = _pack_arguments(contract_sig, args_lll, context, pos)\n\n ret_unpacker, ret_ofst, ret_len = _unpack_returndata(buf, contract_sig, context, pos)\n\n sub += arg_packer\n\n if contract_sig.return_type is None:\n # if we do not expect return data, check that a contract exists at the\n # target address. we must perform this check BEFORE the call because\n # the contract might selfdestruct. 
on the other hand we can omit this\n # when we _do_ expect return data because we later check\n # `returndatasize` (that check works even if the contract\n # selfdestructs).\n sub.append([\"assert\", [\"extcodesize\", contract_address]])\n\n if context.is_constant() or contract_sig.mutability in (\"view\", \"pure\"):\n call_op = [\"staticcall\", gas, contract_address, args_ofst, args_len, ret_ofst, ret_len]\n else:\n call_op = [\"call\", gas, contract_address, value, args_ofst, args_len, ret_ofst, ret_len]\n\n sub.append([\"assert\", call_op])\n\n if contract_sig.return_type is not None:\n sub += ret_unpacker\n\n ret = LLLnode.from_list(\n # set the encoding to ABI here, downstream code will decode and add clampers.\n sub,\n typ=contract_sig.return_type,\n location=\"memory\",\n encoding=_returndata_encoding(contract_sig),\n pos=pos,\n )\n\n return ret\n\n\n# TODO push me up to expr.py\ndef get_gas_and_value(stmt_expr, context):\n from vyper.old_codegen.expr import Expr # TODO rethink this circular import\n\n value, gas = None, None\n for kw in stmt_expr.keywords:\n if kw.arg == \"gas\":\n gas = Expr.parse_value_expr(kw.value, context)\n elif kw.arg == \"value\":\n value = Expr.parse_value_expr(kw.value, context)\n else:\n raise TypeCheckFailure(\"Unexpected keyword argument\")\n return value, gas\n\n\ndef lll_for_external_call(stmt_expr, context):\n from vyper.old_codegen.expr import Expr # TODO rethink this circular import\n\n pos = getpos(stmt_expr)\n value, gas = get_gas_and_value(stmt_expr, context)\n args_lll = [Expr(x, context).lll_node for x in stmt_expr.args]\n\n if isinstance(stmt_expr.func, vy_ast.Attribute) and isinstance(\n stmt_expr.func.value, vy_ast.Call\n ):\n # e.g. `Foo(address).bar()`\n\n # sanity check\n assert len(stmt_expr.func.value.args) == 1\n contract_name = stmt_expr.func.value.func.id\n contract_address = Expr.parse_value_expr(stmt_expr.func.value.args[0], context)\n\n elif (\n isinstance(stmt_expr.func.value, vy_ast.Attribute)\n and stmt_expr.func.value.attr in context.globals\n # TODO check for self?\n and hasattr(context.globals[stmt_expr.func.value.attr].typ, \"name\")\n ):\n # e.g. 
`self.foo.bar()`\n\n # sanity check\n assert stmt_expr.func.value.value.id == \"self\", stmt_expr\n\n contract_name = context.globals[stmt_expr.func.value.attr].typ.name\n type_ = stmt_expr.func.value._metadata[\"type\"]\n var = context.globals[stmt_expr.func.value.attr]\n contract_address = unwrap_location(\n LLLnode.from_list(\n type_.position.position,\n typ=var.typ,\n location=\"storage\",\n pos=pos,\n annotation=\"self.\" + stmt_expr.func.value.attr,\n )\n )\n else:\n # TODO catch this during type checking\n raise StructureException(\"Unsupported operator.\", stmt_expr)\n\n method_name = stmt_expr.func.attr\n contract_sig = context.sigs[contract_name][method_name]\n\n ret = _external_call_helper(\n contract_address,\n contract_sig,\n args_lll,\n context,\n pos,\n value=value,\n gas=gas,\n )\n ret.annotation = stmt_expr.get(\"node_source_code\")\n\n return ret\n", "path": "vyper/old_codegen/external_call.py"}], "after_files": [{"content": "import vyper.utils as util\nfrom vyper import ast as vy_ast\nfrom vyper.exceptions import StateAccessViolation, StructureException, TypeCheckFailure\nfrom vyper.old_codegen.abi import abi_encode, abi_type_of\nfrom vyper.old_codegen.lll_node import Encoding, LLLnode\nfrom vyper.old_codegen.parser_utils import (\n calculate_type_for_external_return,\n get_element_ptr,\n getpos,\n unwrap_location,\n)\nfrom vyper.old_codegen.types import TupleType, canonicalize_type, get_type_for_exact_size\nfrom vyper.old_codegen.types.check import check_assign\n\n\ndef _pack_arguments(contract_sig, args, context, pos):\n # abi encoding just treats all args as a big tuple\n args_tuple_t = TupleType([x.typ for x in args])\n args_as_tuple = LLLnode.from_list([\"multi\"] + [x for x in args], typ=args_tuple_t)\n args_abi_t = abi_type_of(args_tuple_t)\n\n # sanity typecheck - make sure the arguments can be assigned\n dst_tuple_t = TupleType([arg.typ for arg in contract_sig.args][: len(args)])\n _tmp = LLLnode(\"fake node\", location=\"memory\", typ=dst_tuple_t)\n check_assign(_tmp, args_as_tuple, pos)\n\n if contract_sig.return_type is not None:\n return_abi_t = abi_type_of(calculate_type_for_external_return(contract_sig.return_type))\n\n # we use the same buffer for args and returndata,\n # so allocate enough space here for the returndata too.\n buflen = max(args_abi_t.size_bound(), return_abi_t.size_bound())\n else:\n buflen = args_abi_t.size_bound()\n\n buflen += 32 # padding for the method id\n\n buf_t = get_type_for_exact_size(buflen)\n buf = context.new_internal_variable(buf_t)\n\n args_ofst = buf + 28\n args_len = args_abi_t.size_bound() + 4\n\n abi_signature = contract_sig.name + canonicalize_type(dst_tuple_t)\n\n # layout:\n # 32 bytes | args\n # 0x..00<method_id_4bytes> | args\n # the reason for the left padding is just so the alignment is easier.\n # if we were only targeting constantinople, we could align\n # to buf (and also keep code size small) by using\n # (mstore buf (shl signature.method_id 224))\n mstore_method_id = [[\"mstore\", buf, util.abi_method_id(abi_signature)]]\n\n if len(args) == 0:\n encode_args = [\"pass\"]\n else:\n encode_args = abi_encode(buf + 32, args_as_tuple, pos)\n\n return buf, mstore_method_id + [encode_args], args_ofst, args_len\n\n\ndef _returndata_encoding(contract_sig):\n if contract_sig.is_from_json:\n return Encoding.JSON_ABI\n return Encoding.ABI\n\n\ndef _unpack_returndata(buf, contract_sig, context, pos):\n return_t = contract_sig.return_type\n if return_t is None:\n return [\"pass\"], 0, 0\n\n return_t = 
calculate_type_for_external_return(return_t)\n # if the abi signature has a different type than\n # the vyper type, we need to wrap and unwrap the type\n # so that the ABI decoding works correctly\n should_unwrap_abi_tuple = return_t != contract_sig.return_type\n\n abi_return_t = abi_type_of(return_t)\n\n min_return_size = abi_return_t.min_size()\n max_return_size = abi_return_t.size_bound()\n assert 0 < min_return_size <= max_return_size\n\n ret_ofst = buf\n ret_len = max_return_size\n\n # revert when returndatasize is not in bounds\n ret = []\n # runtime: min_return_size <= returndatasize\n # TODO move the -1 optimization to LLL optimizer\n ret += [[\"assert\", [\"gt\", \"returndatasize\", min_return_size - 1]]]\n\n # add as the last LLLnode a pointer to the return data structure\n\n # the return type has been wrapped by the calling contract;\n # unwrap it so downstream code isn't confused.\n # basically this expands to buf+32 if the return type has been wrapped\n # in a tuple AND its ABI type is dynamic.\n # in most cases, this simply will evaluate to ret.\n # in the special case where the return type has been wrapped\n # in a tuple AND its ABI type is dynamic, it expands to buf+32.\n buf = LLLnode(buf, typ=return_t, encoding=_returndata_encoding(contract_sig), location=\"memory\")\n\n if should_unwrap_abi_tuple:\n buf = get_element_ptr(buf, 0, pos=None, array_bounds_check=False)\n\n ret += [buf]\n\n return ret, ret_ofst, ret_len\n\n\ndef _external_call_helper(\n contract_address, contract_sig, args_lll, context, pos=None, value=None, gas=None\n):\n\n if value is None:\n value = 0\n if gas is None:\n gas = \"gas\"\n\n # sanity check\n assert len(contract_sig.base_args) <= len(args_lll) <= len(contract_sig.args)\n\n if context.is_constant() and contract_sig.mutability not in (\"view\", \"pure\"):\n # TODO is this already done in type checker?\n raise StateAccessViolation(\n f\"May not call state modifying function '{contract_sig.name}' \"\n f\"within {context.pp_constancy()}.\",\n pos,\n )\n\n sub = [\"seq\"]\n\n buf, arg_packer, args_ofst, args_len = _pack_arguments(contract_sig, args_lll, context, pos)\n\n ret_unpacker, ret_ofst, ret_len = _unpack_returndata(buf, contract_sig, context, pos)\n\n sub += arg_packer\n\n if contract_sig.return_type is None:\n # if we do not expect return data, check that a contract exists at the\n # target address. we must perform this check BEFORE the call because\n # the contract might selfdestruct. 
on the other hand we can omit this\n # when we _do_ expect return data because we later check\n # `returndatasize` (that check works even if the contract\n # selfdestructs).\n sub.append([\"assert\", [\"extcodesize\", contract_address]])\n\n if context.is_constant() or contract_sig.mutability in (\"view\", \"pure\"):\n call_op = [\"staticcall\", gas, contract_address, args_ofst, args_len, ret_ofst, ret_len]\n else:\n call_op = [\"call\", gas, contract_address, value, args_ofst, args_len, ret_ofst, ret_len]\n\n sub.append([\"assert\", call_op])\n\n if contract_sig.return_type is not None:\n sub += ret_unpacker\n\n ret = LLLnode.from_list(\n # set the encoding to ABI here, downstream code will decode and add clampers.\n sub,\n typ=contract_sig.return_type,\n location=\"memory\",\n encoding=_returndata_encoding(contract_sig),\n pos=pos,\n )\n\n return ret\n\n\n# TODO push me up to expr.py\ndef get_gas_and_value(stmt_expr, context):\n from vyper.old_codegen.expr import Expr # TODO rethink this circular import\n\n value, gas = None, None\n for kw in stmt_expr.keywords:\n if kw.arg == \"gas\":\n gas = Expr.parse_value_expr(kw.value, context)\n elif kw.arg == \"value\":\n value = Expr.parse_value_expr(kw.value, context)\n else:\n raise TypeCheckFailure(\"Unexpected keyword argument\")\n return value, gas\n\n\ndef lll_for_external_call(stmt_expr, context):\n from vyper.old_codegen.expr import Expr # TODO rethink this circular import\n\n pos = getpos(stmt_expr)\n value, gas = get_gas_and_value(stmt_expr, context)\n args_lll = [Expr(x, context).lll_node for x in stmt_expr.args]\n\n if isinstance(stmt_expr.func, vy_ast.Attribute) and isinstance(\n stmt_expr.func.value, vy_ast.Call\n ):\n # e.g. `Foo(address).bar()`\n\n # sanity check\n assert len(stmt_expr.func.value.args) == 1\n contract_name = stmt_expr.func.value.func.id\n contract_address = Expr.parse_value_expr(stmt_expr.func.value.args[0], context)\n\n elif (\n isinstance(stmt_expr.func.value, vy_ast.Attribute)\n and stmt_expr.func.value.attr in context.globals\n # TODO check for self?\n and hasattr(context.globals[stmt_expr.func.value.attr].typ, \"name\")\n ):\n # e.g. `self.foo.bar()`\n\n # sanity check\n assert stmt_expr.func.value.value.id == \"self\", stmt_expr\n\n contract_name = context.globals[stmt_expr.func.value.attr].typ.name\n type_ = stmt_expr.func.value._metadata[\"type\"]\n var = context.globals[stmt_expr.func.value.attr]\n contract_address = unwrap_location(\n LLLnode.from_list(\n type_.position.position,\n typ=var.typ,\n location=\"storage\",\n pos=pos,\n annotation=\"self.\" + stmt_expr.func.value.attr,\n )\n )\n else:\n # TODO catch this during type checking\n raise StructureException(\"Unsupported operator.\", stmt_expr)\n\n method_name = stmt_expr.func.attr\n contract_sig = context.sigs[contract_name][method_name]\n\n ret = _external_call_helper(\n contract_address,\n contract_sig,\n args_lll,\n context,\n pos,\n value=value,\n gas=gas,\n )\n ret.annotation = stmt_expr.get(\"node_source_code\")\n\n return ret\n", "path": "vyper/old_codegen/external_call.py"}]} |
gh_patches_debug_1113 | rasdani/github-patches | git_diff | pulp__pulpcore-2248 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
PulpImporter assumes tempfiles can always go to /tmp
This issue is a copy of https://pulp.plan.io/issues/8610, to allow us to backport the fix from core/3.17 into 3.14/3.15/3.16 correctly.
**Version**
core/3.14+
**Describe the bug**
importer.pulp_import uses tempfile.TemporaryDirectory() in places like this:
https://github.com/pulp/pulpcore/blob/master/pulpcore/app/tasks/importer.py#L118
If your /tmp is small, and your export is Large, this can cause Bad Things to happen.
We should perhaps set dir= to the worker's work-directory?
--- END ISSUE ---
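
For context, the behaviour the issue describes comes down to where `TemporaryDirectory` places its directory. A minimal sketch of the difference with plain `tempfile` (illustrative only, not pulpcore code):

```python
import tempfile

# Default behaviour from the report: the directory is created under the
# platform default temp location (usually /tmp), which may be a small tmpfs.
with tempfile.TemporaryDirectory() as tmp:
    print(tmp)  # e.g. /tmp/tmpabc123

# Passing dir= keeps the extracted export on a filesystem with more room,
# e.g. the current working directory of the task/worker.
with tempfile.TemporaryDirectory(dir=".") as tmp:
    print(tmp)  # e.g. ./tmpxyz456
```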
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pulpcore/app/tasks/importer.py`
Content:
```
1 import hashlib
2 import json
3 import os
4 import re
5 import subprocess
6 import tempfile
7 import tarfile
8 from gettext import gettext as _
9 from logging import getLogger
10
11 from django.conf import settings
12 from django.core.files.storage import default_storage
13 from django.db.models import F
14
15 from pkg_resources import DistributionNotFound, get_distribution
16 from rest_framework.serializers import ValidationError
17 from tablib import Dataset
18
19 from pulpcore.app.apps import get_plugin_config
20 from pulpcore.app.models import (
21 Artifact,
22 Content,
23 CreatedResource,
24 GroupProgressReport,
25 ProgressReport,
26 PulpImport,
27 PulpImporter,
28 Repository,
29 Task,
30 TaskGroup,
31 )
32 from pulpcore.app.modelresource import (
33 ArtifactResource,
34 ContentArtifactResource,
35 )
36 from pulpcore.constants import TASK_STATES
37 from pulpcore.tasking.tasks import dispatch
38
39 log = getLogger(__name__)
40
41 ARTIFACT_FILE = "pulpcore.app.modelresource.ArtifactResource.json"
42 REPO_FILE = "pulpcore.app.modelresource.RepositoryResource.json"
43 CONTENT_FILE = "pulpcore.app.modelresource.ContentResource.json"
44 CA_FILE = "pulpcore.app.modelresource.ContentArtifactResource.json"
45 VERSIONS_FILE = "versions.json"
46 CONTENT_MAPPING_FILE = "content_mapping.json"
47
48
49 def _destination_repo(importer, source_repo_name):
50 """Find the destination repository based on source repo's name."""
51 if importer.repo_mapping and importer.repo_mapping.get(source_repo_name):
52 dest_repo_name = importer.repo_mapping[source_repo_name]
53 else:
54 dest_repo_name = source_repo_name
55 return Repository.objects.get(name=dest_repo_name)
56
57
58 def _import_file(fpath, resource_class, do_raise=True):
59 try:
60 log.info(_("Importing file {}.").format(fpath))
61 with open(fpath, "r") as json_file:
62 data = Dataset().load(json_file.read(), format="json")
63 resource = resource_class()
64 log.info(_("...Importing resource {}.").format(resource.__class__.__name__))
65 return resource.import_data(data, raise_errors=do_raise)
66 except AttributeError:
67 log.error(_("FAILURE importing file {}!").format(fpath))
68 raise
69
70
71 def _check_versions(version_json):
72 """Compare the export version_json to the installed components."""
73 error_messages = []
74 for component in version_json:
75 try:
76 version = get_distribution(component["component"]).version
77 except DistributionNotFound:
78 error_messages.append(
79 _("Export uses {} which is not installed.").format(component["component"])
80 )
81 else:
82 if version != component["version"]:
83 error_messages.append(
84 _(
85 "Export version {export_ver} of {component} does not match "
86 "installed version {ver}."
87 ).format(
88 export_ver=component["version"],
89 component=component["component"],
90 ver=version,
91 )
92 )
93
94 if error_messages:
95 raise ValidationError((" ".join(error_messages)))
96
97
98 def import_repository_version(importer_pk, destination_repo_pk, source_repo_name, tar_path):
99 """
100 Import a repository version from a Pulp export.
101
102 Args:
103 importer_pk (str): Importer we are working with
104 destination_repo_pk (str): Primary key of Repository to import into.
105 source_repo_name (str): Name of the Repository in the export.
106 tar_path (str): A path to export tar.
107 """
108 dest_repo = Repository.objects.get(pk=destination_repo_pk)
109 importer = PulpImporter.objects.get(pk=importer_pk)
110
111 pb = ProgressReport(
112 message=f"Importing content for {dest_repo.name}",
113 code="import.repo.version.content",
114 state=TASK_STATES.RUNNING,
115 )
116 pb.save()
117
118 with tempfile.TemporaryDirectory() as temp_dir:
119 # Extract the repo file for the repo info
120 with tarfile.open(tar_path, "r:gz") as tar:
121 tar.extract(REPO_FILE, path=temp_dir)
122
123 with open(os.path.join(temp_dir, REPO_FILE), "r") as repo_data_file:
124 data = json.load(repo_data_file)
125
126 src_repo = next(repo for repo in data if repo["name"] == source_repo_name)
127
128 if dest_repo.pulp_type != src_repo["pulp_type"]:
129 raise ValidationError(
130 _(
131 "Repository type mismatch: {src_repo} ({src_type}) vs {dest_repo} "
132 "({dest_type})."
133 ).format(
134 src_repo=src_repo["name"],
135 src_type=src_repo["pulp_type"],
136 dest_repo=dest_repo.name,
137 dest_type=dest_repo.pulp_type,
138 )
139 )
140
141 rv_name = ""
142 # Extract the repo version files
143 with tarfile.open(tar_path, "r:gz") as tar:
144 for mem in tar.getmembers():
145 match = re.search(rf"(^repository-{source_repo_name}_[0-9]+)/.+", mem.name)
146 if match:
147 rv_name = match.group(1)
148 tar.extract(mem, path=temp_dir)
149
150 if not rv_name:
151 raise ValidationError(_("No RepositoryVersion found for {}").format(rv_name))
152
153 rv_path = os.path.join(temp_dir, rv_name)
154 # Content
155 plugin_name = src_repo["pulp_type"].split(".")[0]
156 cfg = get_plugin_config(plugin_name)
157
158 resulting_content_ids = []
159 for res_class in cfg.exportable_classes:
160 filename = f"{res_class.__module__}.{res_class.__name__}.json"
161 a_result = _import_file(os.path.join(rv_path, filename), res_class, do_raise=False)
162 # django import-export can have a problem with concurrent-imports that are
163 # importing the same 'thing' (e.g., a Package that exists in two different
164 # repo-versions that are being imported at the same time). We will try an import
165 # that will simply record errors as they happen (rather than failing with an exception)
166 # first. If errors happen, we'll do one retry before we give up on this repo-version's
167 # import.
168 if a_result.has_errors():
169 log.info(
170 _("...{} import-errors encountered importing {} from {}, retrying").format(
171 a_result.totals["error"], filename, rv_name
172 )
173 )
174 # Second attempt, we allow to raise an exception on any problem.
175 # This will either succeed, or log a fatal error and fail.
176 try:
177 a_result = _import_file(os.path.join(rv_path, filename), res_class)
178 except Exception as e: # noqa log on ANY exception and then re-raise
179 log.error(
180 _("FATAL import-failure importing {} from {}").format(filename, rv_name)
181 )
182 raise
183
184 resulting_content_ids.extend(
185 row.object_id for row in a_result.rows if row.import_type in ("new", "update")
186 )
187
188 # Once all content exists, create the ContentArtifact links
189 ca_path = os.path.join(rv_path, CA_FILE)
190 _import_file(ca_path, ContentArtifactResource)
191
192 # see if we have a content mapping
193 mapping_path = f"{rv_name}/{CONTENT_MAPPING_FILE}"
194 mapping = {}
195 with tarfile.open(tar_path, "r:gz") as tar:
196 if mapping_path in tar.getnames():
197 tar.extract(mapping_path, path=temp_dir)
198 with open(os.path.join(temp_dir, mapping_path), "r") as mapping_file:
199 mapping = json.load(mapping_file)
200
201 if mapping:
202 # use the content mapping to map content to repos
203 for repo_name, content_ids in mapping.items():
204 repo = _destination_repo(importer, repo_name)
205 content = Content.objects.filter(upstream_id__in=content_ids)
206 with repo.new_version() as new_version:
207 new_version.set_content(content)
208 else:
209 # just map all the content to our destination repo
210 content = Content.objects.filter(pk__in=resulting_content_ids)
211 with dest_repo.new_version() as new_version:
212 new_version.set_content(content)
213
214 content_count = content.count()
215 pb.total = content_count
216 pb.done = content_count
217 pb.state = TASK_STATES.COMPLETED
218 pb.save()
219
220 gpr = TaskGroup.current().group_progress_reports.filter(code="import.repo.versions")
221 gpr.update(done=F("done") + 1)
222
223
224 def pulp_import(importer_pk, path, toc):
225 """
226 Import a Pulp export into Pulp.
227
228 Args:
229 importer_pk (str): Primary key of PulpImporter to do the import
230 path (str): Path to the export to be imported
231 """
232
233 def _compute_hash(filename):
234 sha256_hash = hashlib.sha256()
235 with open(filename, "rb") as f:
236 # Read and update hash string value in blocks of 4K
237 for byte_block in iter(lambda: f.read(4096), b""):
238 sha256_hash.update(byte_block)
239 return sha256_hash.hexdigest()
240
241 def validate_toc(toc_filename):
242 """
243 Check validity of table-of-contents file.
244
245 table-of-contents must:
246 * exist
247 * be valid JSON
248 * point to chunked-export-files that exist 'next to' the 'toc' file
249 * point to chunks whose checksums match the checksums stored in the 'toc' file
250
251 Args:
252 toc_filename (str): The user-provided toc-file-path to be validated.
253
254 Raises:
255 ValidationError: If toc is not a valid JSON table-of-contents file,
256 or when toc points to chunked-export-files that can't be found in the same
257 directory as the toc-file, or the checksums of the chunks do not match the
258 checksums stored in toc.
259 """
260 with open(toc_filename) as json_file:
261 # Valid JSON?
262 the_toc = json.load(json_file)
263 if not the_toc.get("files", None) or not the_toc.get("meta", None):
264 raise ValidationError(_("Missing 'files' or 'meta' keys in table-of-contents!"))
265
266 base_dir = os.path.dirname(toc_filename)
267 # Points at chunks that exist?
268 missing_files = []
269 for f in sorted(the_toc["files"].keys()):
270 if not os.path.isfile(os.path.join(base_dir, f)):
271 missing_files.append(f)
272 if missing_files:
273 raise ValidationError(
274 _(
275 "Missing import-chunks named in table-of-contents: {}.".format(
276 str(missing_files)
277 )
278 )
279 )
280
281 errs = []
282 # validate the sha256 of the toc-entries
283 # gather errors for reporting at the end
284 chunks = sorted(the_toc["files"].keys())
285 data = dict(message="Validating Chunks", code="validate.chunks", total=len(chunks))
286 with ProgressReport(**data) as pb:
287 for chunk in pb.iter(chunks):
288 a_hash = _compute_hash(os.path.join(base_dir, chunk))
289 if not a_hash == the_toc["files"][chunk]:
290 err_str = "File {} expected checksum : {}, computed checksum : {}".format(
291 chunk, the_toc["files"][chunk], a_hash
292 )
293 errs.append(err_str)
294
295 # if there are any errors, report and fail
296 if errs:
297 raise ValidationError(_("Import chunk hash mismatch: {}).").format(str(errs)))
298
299 return the_toc
300
301 def validate_and_assemble(toc_filename):
302 """Validate checksums of, and reassemble, chunks in table-of-contents file."""
303 the_toc = validate_toc(toc_filename)
304 toc_dir = os.path.dirname(toc_filename)
305 result_file = os.path.join(toc_dir, the_toc["meta"]["file"])
306
307 # if we have only one entry in "files", it must be the full .tar.gz - return it
308 if len(the_toc["files"]) == 1:
309 return os.path.join(toc_dir, list(the_toc["files"].keys())[0])
310
311 # We have multiple chunks.
312 # reassemble into one file 'next to' the toc and return the resulting full-path
313 chunk_size = int(the_toc["meta"]["chunk_size"])
314 offset = 0
315 block_size = 1024
316 blocks_per_chunk = int(chunk_size / block_size)
317
318 # sorting-by-filename is REALLY IMPORTANT here
319 # keys are of the form <base-export-name>.00..<base-export-name>.NN,
320 # and must be reassembled IN ORDER
321 the_chunk_files = sorted(the_toc["files"].keys())
322
323 data = dict(
324 message="Recombining Chunks", code="recombine.chunks", total=len(the_chunk_files)
325 )
326 with ProgressReport(**data) as pb:
327 for chunk in pb.iter(the_chunk_files):
328 # For each chunk, add it to the reconstituted tar.gz, picking up where the previous
329 # chunk left off
330 subprocess.run(
331 [
332 "dd",
333 "if={}".format(os.path.join(toc_dir, chunk)),
334 "of={}".format(result_file),
335 "bs={}".format(str(block_size)),
336 "seek={}".format(str(offset)),
337 ],
338 )
339 offset += blocks_per_chunk
340 # To keep from taking up All The Disk, we delete each chunk after it has been added
341 # to the recombined file.
342 try:
343 subprocess.run(["rm", "-f", os.path.join(toc_dir, chunk)])
344 except OSError:
345 log.warning(
346 _("Failed to remove chunk {} after recombining. Continuing.").format(
347 os.path.join(toc_dir, chunk)
348 ),
349 exc_info=True,
350 )
351
352 combined_hash = _compute_hash(result_file)
353 if combined_hash != the_toc["meta"]["global_hash"]:
354 raise ValidationError(
355 _("Mismatch between combined .tar.gz checksum [{}] and originating [{}]).").format(
356 combined_hash, the_toc["meta"]["global_hash"]
357 )
358 )
359 # if we get this far, then: the chunk-files all existed, they all pass checksum validation,
360 # and there exists a combined .tar.gz, which *also* passes checksum-validation.
361 # Let the rest of the import process do its thing on the new combined-file.
362 return result_file
363
364 if toc:
365 log.info(_("Validating TOC {}.").format(toc))
366 path = validate_and_assemble(toc)
367
368 log.info(_("Importing {}.").format(path))
369 current_task = Task.current()
370 importer = PulpImporter.objects.get(pk=importer_pk)
371 the_import = PulpImport.objects.create(
372 importer=importer, task=current_task, params={"path": path}
373 )
374 CreatedResource.objects.create(content_object=the_import)
375
376 task_group = TaskGroup.objects.create(description=f"Import of {path}")
377 Task.objects.filter(pk=current_task.pk).update(task_group=task_group)
378 current_task.refresh_from_db()
379 CreatedResource.objects.create(content_object=task_group)
380
381 with tempfile.TemporaryDirectory() as temp_dir:
382 with tarfile.open(path, "r:gz") as tar:
383 tar.extractall(path=temp_dir)
384
385 # Check version info
386 with open(os.path.join(temp_dir, VERSIONS_FILE)) as version_file:
387 version_json = json.load(version_file)
388 _check_versions(version_json)
389
390 # Artifacts
391 ar_result = _import_file(os.path.join(temp_dir, ARTIFACT_FILE), ArtifactResource)
392 data = dict(
393 message="Importing Artifacts", code="import.artifacts", total=len(ar_result.rows)
394 )
395 with ProgressReport(**data) as pb:
396 for row in pb.iter(ar_result.rows):
397 artifact = Artifact.objects.get(pk=row.object_id)
398 base_path = os.path.join("artifact", artifact.sha256[0:2], artifact.sha256[2:])
399 src = os.path.join(temp_dir, base_path)
400 dest = os.path.join(settings.MEDIA_ROOT, base_path)
401
402 if not default_storage.exists(dest):
403 with open(src, "rb") as f:
404 default_storage.save(dest, f)
405
406 with open(os.path.join(temp_dir, REPO_FILE), "r") as repo_data_file:
407 data = json.load(repo_data_file)
408 gpr = GroupProgressReport(
409 message="Importing repository versions",
410 code="import.repo.versions",
411 total=len(data),
412 done=0,
413 task_group=task_group,
414 )
415 gpr.save()
416
417 for src_repo in data:
418 try:
419 dest_repo = _destination_repo(importer, src_repo["name"])
420 except Repository.DoesNotExist:
421 log.warning(
422 _("Could not find destination repo for {}. Skipping.").format(
423 src_repo["name"]
424 )
425 )
426 continue
427
428 dispatch(
429 import_repository_version,
430 exclusive_resources=[dest_repo],
431 args=[importer.pk, dest_repo.pk, src_repo["name"], path],
432 task_group=task_group,
433 )
434
435 task_group.finish()
436
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pulpcore/app/tasks/importer.py b/pulpcore/app/tasks/importer.py
--- a/pulpcore/app/tasks/importer.py
+++ b/pulpcore/app/tasks/importer.py
@@ -378,7 +378,7 @@
current_task.refresh_from_db()
CreatedResource.objects.create(content_object=task_group)
- with tempfile.TemporaryDirectory() as temp_dir:
+ with tempfile.TemporaryDirectory(dir=".") as temp_dir:
with tarfile.open(path, "r:gz") as tar:
tar.extractall(path=temp_dir)
| {"golden_diff": "diff --git a/pulpcore/app/tasks/importer.py b/pulpcore/app/tasks/importer.py\n--- a/pulpcore/app/tasks/importer.py\n+++ b/pulpcore/app/tasks/importer.py\n@@ -378,7 +378,7 @@\n current_task.refresh_from_db()\n CreatedResource.objects.create(content_object=task_group)\n \n- with tempfile.TemporaryDirectory() as temp_dir:\n+ with tempfile.TemporaryDirectory(dir=\".\") as temp_dir:\n with tarfile.open(path, \"r:gz\") as tar:\n tar.extractall(path=temp_dir)\n", "issue": "PulpImporter assumes tempfiles can always go to /tmp\nThis issue is a copy of https://pulp.plan.io/issues/8610 , to allow us to backport the fix from core/3.17 into 14/15/16 correctly.\r\n\r\n**Version**\r\ncore/3.14+\r\n\r\n**Describe the bug**\r\nimporter.pulp_import uses tempfile.TemporaryDirectory() in places like this:\r\n\r\nhttps://github.com/pulp/pulpcore/blob/master/pulpcore/app/tasks/importer.py#L118\r\n\r\nIf your /tmp is small, and your export is Large, this can cause Bad Things to happen.\r\n\r\nWe should perhas set dir= to the workers work-directory?\r\n\r\n\n", "before_files": [{"content": "import hashlib\nimport json\nimport os\nimport re\nimport subprocess\nimport tempfile\nimport tarfile\nfrom gettext import gettext as _\nfrom logging import getLogger\n\nfrom django.conf import settings\nfrom django.core.files.storage import default_storage\nfrom django.db.models import F\n\nfrom pkg_resources import DistributionNotFound, get_distribution\nfrom rest_framework.serializers import ValidationError\nfrom tablib import Dataset\n\nfrom pulpcore.app.apps import get_plugin_config\nfrom pulpcore.app.models import (\n Artifact,\n Content,\n CreatedResource,\n GroupProgressReport,\n ProgressReport,\n PulpImport,\n PulpImporter,\n Repository,\n Task,\n TaskGroup,\n)\nfrom pulpcore.app.modelresource import (\n ArtifactResource,\n ContentArtifactResource,\n)\nfrom pulpcore.constants import TASK_STATES\nfrom pulpcore.tasking.tasks import dispatch\n\nlog = getLogger(__name__)\n\nARTIFACT_FILE = \"pulpcore.app.modelresource.ArtifactResource.json\"\nREPO_FILE = \"pulpcore.app.modelresource.RepositoryResource.json\"\nCONTENT_FILE = \"pulpcore.app.modelresource.ContentResource.json\"\nCA_FILE = \"pulpcore.app.modelresource.ContentArtifactResource.json\"\nVERSIONS_FILE = \"versions.json\"\nCONTENT_MAPPING_FILE = \"content_mapping.json\"\n\n\ndef _destination_repo(importer, source_repo_name):\n \"\"\"Find the destination repository based on source repo's name.\"\"\"\n if importer.repo_mapping and importer.repo_mapping.get(source_repo_name):\n dest_repo_name = importer.repo_mapping[source_repo_name]\n else:\n dest_repo_name = source_repo_name\n return Repository.objects.get(name=dest_repo_name)\n\n\ndef _import_file(fpath, resource_class, do_raise=True):\n try:\n log.info(_(\"Importing file {}.\").format(fpath))\n with open(fpath, \"r\") as json_file:\n data = Dataset().load(json_file.read(), format=\"json\")\n resource = resource_class()\n log.info(_(\"...Importing resource {}.\").format(resource.__class__.__name__))\n return resource.import_data(data, raise_errors=do_raise)\n except AttributeError:\n log.error(_(\"FAILURE importing file {}!\").format(fpath))\n raise\n\n\ndef _check_versions(version_json):\n \"\"\"Compare the export version_json to the installed components.\"\"\"\n error_messages = []\n for component in version_json:\n try:\n version = get_distribution(component[\"component\"]).version\n except DistributionNotFound:\n error_messages.append(\n _(\"Export uses {} which is not 
installed.\").format(component[\"component\"])\n )\n else:\n if version != component[\"version\"]:\n error_messages.append(\n _(\n \"Export version {export_ver} of {component} does not match \"\n \"installed version {ver}.\"\n ).format(\n export_ver=component[\"version\"],\n component=component[\"component\"],\n ver=version,\n )\n )\n\n if error_messages:\n raise ValidationError((\" \".join(error_messages)))\n\n\ndef import_repository_version(importer_pk, destination_repo_pk, source_repo_name, tar_path):\n \"\"\"\n Import a repository version from a Pulp export.\n\n Args:\n importer_pk (str): Importer we are working with\n destination_repo_pk (str): Primary key of Repository to import into.\n source_repo_name (str): Name of the Repository in the export.\n tar_path (str): A path to export tar.\n \"\"\"\n dest_repo = Repository.objects.get(pk=destination_repo_pk)\n importer = PulpImporter.objects.get(pk=importer_pk)\n\n pb = ProgressReport(\n message=f\"Importing content for {dest_repo.name}\",\n code=\"import.repo.version.content\",\n state=TASK_STATES.RUNNING,\n )\n pb.save()\n\n with tempfile.TemporaryDirectory() as temp_dir:\n # Extract the repo file for the repo info\n with tarfile.open(tar_path, \"r:gz\") as tar:\n tar.extract(REPO_FILE, path=temp_dir)\n\n with open(os.path.join(temp_dir, REPO_FILE), \"r\") as repo_data_file:\n data = json.load(repo_data_file)\n\n src_repo = next(repo for repo in data if repo[\"name\"] == source_repo_name)\n\n if dest_repo.pulp_type != src_repo[\"pulp_type\"]:\n raise ValidationError(\n _(\n \"Repository type mismatch: {src_repo} ({src_type}) vs {dest_repo} \"\n \"({dest_type}).\"\n ).format(\n src_repo=src_repo[\"name\"],\n src_type=src_repo[\"pulp_type\"],\n dest_repo=dest_repo.name,\n dest_type=dest_repo.pulp_type,\n )\n )\n\n rv_name = \"\"\n # Extract the repo version files\n with tarfile.open(tar_path, \"r:gz\") as tar:\n for mem in tar.getmembers():\n match = re.search(rf\"(^repository-{source_repo_name}_[0-9]+)/.+\", mem.name)\n if match:\n rv_name = match.group(1)\n tar.extract(mem, path=temp_dir)\n\n if not rv_name:\n raise ValidationError(_(\"No RepositoryVersion found for {}\").format(rv_name))\n\n rv_path = os.path.join(temp_dir, rv_name)\n # Content\n plugin_name = src_repo[\"pulp_type\"].split(\".\")[0]\n cfg = get_plugin_config(plugin_name)\n\n resulting_content_ids = []\n for res_class in cfg.exportable_classes:\n filename = f\"{res_class.__module__}.{res_class.__name__}.json\"\n a_result = _import_file(os.path.join(rv_path, filename), res_class, do_raise=False)\n # django import-export can have a problem with concurrent-imports that are\n # importing the same 'thing' (e.g., a Package that exists in two different\n # repo-versions that are being imported at the same time). We will try an import\n # that will simply record errors as they happen (rather than failing with an exception)\n # first. 
If errors happen, we'll do one retry before we give up on this repo-version's\n # import.\n if a_result.has_errors():\n log.info(\n _(\"...{} import-errors encountered importing {} from {}, retrying\").format(\n a_result.totals[\"error\"], filename, rv_name\n )\n )\n # Second attempt, we allow to raise an exception on any problem.\n # This will either succeed, or log a fatal error and fail.\n try:\n a_result = _import_file(os.path.join(rv_path, filename), res_class)\n except Exception as e: # noqa log on ANY exception and then re-raise\n log.error(\n _(\"FATAL import-failure importing {} from {}\").format(filename, rv_name)\n )\n raise\n\n resulting_content_ids.extend(\n row.object_id for row in a_result.rows if row.import_type in (\"new\", \"update\")\n )\n\n # Once all content exists, create the ContentArtifact links\n ca_path = os.path.join(rv_path, CA_FILE)\n _import_file(ca_path, ContentArtifactResource)\n\n # see if we have a content mapping\n mapping_path = f\"{rv_name}/{CONTENT_MAPPING_FILE}\"\n mapping = {}\n with tarfile.open(tar_path, \"r:gz\") as tar:\n if mapping_path in tar.getnames():\n tar.extract(mapping_path, path=temp_dir)\n with open(os.path.join(temp_dir, mapping_path), \"r\") as mapping_file:\n mapping = json.load(mapping_file)\n\n if mapping:\n # use the content mapping to map content to repos\n for repo_name, content_ids in mapping.items():\n repo = _destination_repo(importer, repo_name)\n content = Content.objects.filter(upstream_id__in=content_ids)\n with repo.new_version() as new_version:\n new_version.set_content(content)\n else:\n # just map all the content to our destination repo\n content = Content.objects.filter(pk__in=resulting_content_ids)\n with dest_repo.new_version() as new_version:\n new_version.set_content(content)\n\n content_count = content.count()\n pb.total = content_count\n pb.done = content_count\n pb.state = TASK_STATES.COMPLETED\n pb.save()\n\n gpr = TaskGroup.current().group_progress_reports.filter(code=\"import.repo.versions\")\n gpr.update(done=F(\"done\") + 1)\n\n\ndef pulp_import(importer_pk, path, toc):\n \"\"\"\n Import a Pulp export into Pulp.\n\n Args:\n importer_pk (str): Primary key of PulpImporter to do the import\n path (str): Path to the export to be imported\n \"\"\"\n\n def _compute_hash(filename):\n sha256_hash = hashlib.sha256()\n with open(filename, \"rb\") as f:\n # Read and update hash string value in blocks of 4K\n for byte_block in iter(lambda: f.read(4096), b\"\"):\n sha256_hash.update(byte_block)\n return sha256_hash.hexdigest()\n\n def validate_toc(toc_filename):\n \"\"\"\n Check validity of table-of-contents file.\n\n table-of-contents must:\n * exist\n * be valid JSON\n * point to chunked-export-files that exist 'next to' the 'toc' file\n * point to chunks whose checksums match the checksums stored in the 'toc' file\n\n Args:\n toc_filename (str): The user-provided toc-file-path to be validated.\n\n Raises:\n ValidationError: If toc is not a valid JSON table-of-contents file,\n or when toc points to chunked-export-files that can't be found in the same\n directory as the toc-file, or the checksums of the chunks do not match the\n checksums stored in toc.\n \"\"\"\n with open(toc_filename) as json_file:\n # Valid JSON?\n the_toc = json.load(json_file)\n if not the_toc.get(\"files\", None) or not the_toc.get(\"meta\", None):\n raise ValidationError(_(\"Missing 'files' or 'meta' keys in table-of-contents!\"))\n\n base_dir = os.path.dirname(toc_filename)\n # Points at chunks that exist?\n missing_files = []\n for f in 
sorted(the_toc[\"files\"].keys()):\n if not os.path.isfile(os.path.join(base_dir, f)):\n missing_files.append(f)\n if missing_files:\n raise ValidationError(\n _(\n \"Missing import-chunks named in table-of-contents: {}.\".format(\n str(missing_files)\n )\n )\n )\n\n errs = []\n # validate the sha256 of the toc-entries\n # gather errors for reporting at the end\n chunks = sorted(the_toc[\"files\"].keys())\n data = dict(message=\"Validating Chunks\", code=\"validate.chunks\", total=len(chunks))\n with ProgressReport(**data) as pb:\n for chunk in pb.iter(chunks):\n a_hash = _compute_hash(os.path.join(base_dir, chunk))\n if not a_hash == the_toc[\"files\"][chunk]:\n err_str = \"File {} expected checksum : {}, computed checksum : {}\".format(\n chunk, the_toc[\"files\"][chunk], a_hash\n )\n errs.append(err_str)\n\n # if there are any errors, report and fail\n if errs:\n raise ValidationError(_(\"Import chunk hash mismatch: {}).\").format(str(errs)))\n\n return the_toc\n\n def validate_and_assemble(toc_filename):\n \"\"\"Validate checksums of, and reassemble, chunks in table-of-contents file.\"\"\"\n the_toc = validate_toc(toc_filename)\n toc_dir = os.path.dirname(toc_filename)\n result_file = os.path.join(toc_dir, the_toc[\"meta\"][\"file\"])\n\n # if we have only one entry in \"files\", it must be the full .tar.gz - return it\n if len(the_toc[\"files\"]) == 1:\n return os.path.join(toc_dir, list(the_toc[\"files\"].keys())[0])\n\n # We have multiple chunks.\n # reassemble into one file 'next to' the toc and return the resulting full-path\n chunk_size = int(the_toc[\"meta\"][\"chunk_size\"])\n offset = 0\n block_size = 1024\n blocks_per_chunk = int(chunk_size / block_size)\n\n # sorting-by-filename is REALLY IMPORTANT here\n # keys are of the form <base-export-name>.00..<base-export-name>.NN,\n # and must be reassembled IN ORDER\n the_chunk_files = sorted(the_toc[\"files\"].keys())\n\n data = dict(\n message=\"Recombining Chunks\", code=\"recombine.chunks\", total=len(the_chunk_files)\n )\n with ProgressReport(**data) as pb:\n for chunk in pb.iter(the_chunk_files):\n # For each chunk, add it to the reconstituted tar.gz, picking up where the previous\n # chunk left off\n subprocess.run(\n [\n \"dd\",\n \"if={}\".format(os.path.join(toc_dir, chunk)),\n \"of={}\".format(result_file),\n \"bs={}\".format(str(block_size)),\n \"seek={}\".format(str(offset)),\n ],\n )\n offset += blocks_per_chunk\n # To keep from taking up All The Disk, we delete each chunk after it has been added\n # to the recombined file.\n try:\n subprocess.run([\"rm\", \"-f\", os.path.join(toc_dir, chunk)])\n except OSError:\n log.warning(\n _(\"Failed to remove chunk {} after recombining. 
Continuing.\").format(\n os.path.join(toc_dir, chunk)\n ),\n exc_info=True,\n )\n\n combined_hash = _compute_hash(result_file)\n if combined_hash != the_toc[\"meta\"][\"global_hash\"]:\n raise ValidationError(\n _(\"Mismatch between combined .tar.gz checksum [{}] and originating [{}]).\").format(\n combined_hash, the_toc[\"meta\"][\"global_hash\"]\n )\n )\n # if we get this far, then: the chunk-files all existed, they all pass checksum validation,\n # and there exists a combined .tar.gz, which *also* passes checksum-validation.\n # Let the rest of the import process do its thing on the new combined-file.\n return result_file\n\n if toc:\n log.info(_(\"Validating TOC {}.\").format(toc))\n path = validate_and_assemble(toc)\n\n log.info(_(\"Importing {}.\").format(path))\n current_task = Task.current()\n importer = PulpImporter.objects.get(pk=importer_pk)\n the_import = PulpImport.objects.create(\n importer=importer, task=current_task, params={\"path\": path}\n )\n CreatedResource.objects.create(content_object=the_import)\n\n task_group = TaskGroup.objects.create(description=f\"Import of {path}\")\n Task.objects.filter(pk=current_task.pk).update(task_group=task_group)\n current_task.refresh_from_db()\n CreatedResource.objects.create(content_object=task_group)\n\n with tempfile.TemporaryDirectory() as temp_dir:\n with tarfile.open(path, \"r:gz\") as tar:\n tar.extractall(path=temp_dir)\n\n # Check version info\n with open(os.path.join(temp_dir, VERSIONS_FILE)) as version_file:\n version_json = json.load(version_file)\n _check_versions(version_json)\n\n # Artifacts\n ar_result = _import_file(os.path.join(temp_dir, ARTIFACT_FILE), ArtifactResource)\n data = dict(\n message=\"Importing Artifacts\", code=\"import.artifacts\", total=len(ar_result.rows)\n )\n with ProgressReport(**data) as pb:\n for row in pb.iter(ar_result.rows):\n artifact = Artifact.objects.get(pk=row.object_id)\n base_path = os.path.join(\"artifact\", artifact.sha256[0:2], artifact.sha256[2:])\n src = os.path.join(temp_dir, base_path)\n dest = os.path.join(settings.MEDIA_ROOT, base_path)\n\n if not default_storage.exists(dest):\n with open(src, \"rb\") as f:\n default_storage.save(dest, f)\n\n with open(os.path.join(temp_dir, REPO_FILE), \"r\") as repo_data_file:\n data = json.load(repo_data_file)\n gpr = GroupProgressReport(\n message=\"Importing repository versions\",\n code=\"import.repo.versions\",\n total=len(data),\n done=0,\n task_group=task_group,\n )\n gpr.save()\n\n for src_repo in data:\n try:\n dest_repo = _destination_repo(importer, src_repo[\"name\"])\n except Repository.DoesNotExist:\n log.warning(\n _(\"Could not find destination repo for {}. 
Skipping.\").format(\n src_repo[\"name\"]\n )\n )\n continue\n\n dispatch(\n import_repository_version,\n exclusive_resources=[dest_repo],\n args=[importer.pk, dest_repo.pk, src_repo[\"name\"], path],\n task_group=task_group,\n )\n\n task_group.finish()\n", "path": "pulpcore/app/tasks/importer.py"}], "after_files": [{"content": "import hashlib\nimport json\nimport os\nimport re\nimport subprocess\nimport tempfile\nimport tarfile\nfrom gettext import gettext as _\nfrom logging import getLogger\n\nfrom django.conf import settings\nfrom django.core.files.storage import default_storage\nfrom django.db.models import F\n\nfrom pkg_resources import DistributionNotFound, get_distribution\nfrom rest_framework.serializers import ValidationError\nfrom tablib import Dataset\n\nfrom pulpcore.app.apps import get_plugin_config\nfrom pulpcore.app.models import (\n Artifact,\n Content,\n CreatedResource,\n GroupProgressReport,\n ProgressReport,\n PulpImport,\n PulpImporter,\n Repository,\n Task,\n TaskGroup,\n)\nfrom pulpcore.app.modelresource import (\n ArtifactResource,\n ContentArtifactResource,\n)\nfrom pulpcore.constants import TASK_STATES\nfrom pulpcore.tasking.tasks import dispatch\n\nlog = getLogger(__name__)\n\nARTIFACT_FILE = \"pulpcore.app.modelresource.ArtifactResource.json\"\nREPO_FILE = \"pulpcore.app.modelresource.RepositoryResource.json\"\nCONTENT_FILE = \"pulpcore.app.modelresource.ContentResource.json\"\nCA_FILE = \"pulpcore.app.modelresource.ContentArtifactResource.json\"\nVERSIONS_FILE = \"versions.json\"\nCONTENT_MAPPING_FILE = \"content_mapping.json\"\n\n\ndef _destination_repo(importer, source_repo_name):\n \"\"\"Find the destination repository based on source repo's name.\"\"\"\n if importer.repo_mapping and importer.repo_mapping.get(source_repo_name):\n dest_repo_name = importer.repo_mapping[source_repo_name]\n else:\n dest_repo_name = source_repo_name\n return Repository.objects.get(name=dest_repo_name)\n\n\ndef _import_file(fpath, resource_class, do_raise=True):\n try:\n log.info(_(\"Importing file {}.\").format(fpath))\n with open(fpath, \"r\") as json_file:\n data = Dataset().load(json_file.read(), format=\"json\")\n resource = resource_class()\n log.info(_(\"...Importing resource {}.\").format(resource.__class__.__name__))\n return resource.import_data(data, raise_errors=do_raise)\n except AttributeError:\n log.error(_(\"FAILURE importing file {}!\").format(fpath))\n raise\n\n\ndef _check_versions(version_json):\n \"\"\"Compare the export version_json to the installed components.\"\"\"\n error_messages = []\n for component in version_json:\n try:\n version = get_distribution(component[\"component\"]).version\n except DistributionNotFound:\n error_messages.append(\n _(\"Export uses {} which is not installed.\").format(component[\"component\"])\n )\n else:\n if version != component[\"version\"]:\n error_messages.append(\n _(\n \"Export version {export_ver} of {component} does not match \"\n \"installed version {ver}.\"\n ).format(\n export_ver=component[\"version\"],\n component=component[\"component\"],\n ver=version,\n )\n )\n\n if error_messages:\n raise ValidationError((\" \".join(error_messages)))\n\n\ndef import_repository_version(importer_pk, destination_repo_pk, source_repo_name, tar_path):\n \"\"\"\n Import a repository version from a Pulp export.\n\n Args:\n importer_pk (str): Importer we are working with\n destination_repo_pk (str): Primary key of Repository to import into.\n source_repo_name (str): Name of the Repository in the export.\n tar_path (str): A path to 
export tar.\n \"\"\"\n dest_repo = Repository.objects.get(pk=destination_repo_pk)\n importer = PulpImporter.objects.get(pk=importer_pk)\n\n pb = ProgressReport(\n message=f\"Importing content for {dest_repo.name}\",\n code=\"import.repo.version.content\",\n state=TASK_STATES.RUNNING,\n )\n pb.save()\n\n with tempfile.TemporaryDirectory() as temp_dir:\n # Extract the repo file for the repo info\n with tarfile.open(tar_path, \"r:gz\") as tar:\n tar.extract(REPO_FILE, path=temp_dir)\n\n with open(os.path.join(temp_dir, REPO_FILE), \"r\") as repo_data_file:\n data = json.load(repo_data_file)\n\n src_repo = next(repo for repo in data if repo[\"name\"] == source_repo_name)\n\n if dest_repo.pulp_type != src_repo[\"pulp_type\"]:\n raise ValidationError(\n _(\n \"Repository type mismatch: {src_repo} ({src_type}) vs {dest_repo} \"\n \"({dest_type}).\"\n ).format(\n src_repo=src_repo[\"name\"],\n src_type=src_repo[\"pulp_type\"],\n dest_repo=dest_repo.name,\n dest_type=dest_repo.pulp_type,\n )\n )\n\n rv_name = \"\"\n # Extract the repo version files\n with tarfile.open(tar_path, \"r:gz\") as tar:\n for mem in tar.getmembers():\n match = re.search(rf\"(^repository-{source_repo_name}_[0-9]+)/.+\", mem.name)\n if match:\n rv_name = match.group(1)\n tar.extract(mem, path=temp_dir)\n\n if not rv_name:\n raise ValidationError(_(\"No RepositoryVersion found for {}\").format(rv_name))\n\n rv_path = os.path.join(temp_dir, rv_name)\n # Content\n plugin_name = src_repo[\"pulp_type\"].split(\".\")[0]\n cfg = get_plugin_config(plugin_name)\n\n resulting_content_ids = []\n for res_class in cfg.exportable_classes:\n filename = f\"{res_class.__module__}.{res_class.__name__}.json\"\n a_result = _import_file(os.path.join(rv_path, filename), res_class, do_raise=False)\n # django import-export can have a problem with concurrent-imports that are\n # importing the same 'thing' (e.g., a Package that exists in two different\n # repo-versions that are being imported at the same time). We will try an import\n # that will simply record errors as they happen (rather than failing with an exception)\n # first. 
If errors happen, we'll do one retry before we give up on this repo-version's\n # import.\n if a_result.has_errors():\n log.info(\n _(\"...{} import-errors encountered importing {} from {}, retrying\").format(\n a_result.totals[\"error\"], filename, rv_name\n )\n )\n # Second attempt, we allow to raise an exception on any problem.\n # This will either succeed, or log a fatal error and fail.\n try:\n a_result = _import_file(os.path.join(rv_path, filename), res_class)\n except Exception as e: # noqa log on ANY exception and then re-raise\n log.error(\n _(\"FATAL import-failure importing {} from {}\").format(filename, rv_name)\n )\n raise\n\n resulting_content_ids.extend(\n row.object_id for row in a_result.rows if row.import_type in (\"new\", \"update\")\n )\n\n # Once all content exists, create the ContentArtifact links\n ca_path = os.path.join(rv_path, CA_FILE)\n _import_file(ca_path, ContentArtifactResource)\n\n # see if we have a content mapping\n mapping_path = f\"{rv_name}/{CONTENT_MAPPING_FILE}\"\n mapping = {}\n with tarfile.open(tar_path, \"r:gz\") as tar:\n if mapping_path in tar.getnames():\n tar.extract(mapping_path, path=temp_dir)\n with open(os.path.join(temp_dir, mapping_path), \"r\") as mapping_file:\n mapping = json.load(mapping_file)\n\n if mapping:\n # use the content mapping to map content to repos\n for repo_name, content_ids in mapping.items():\n repo = _destination_repo(importer, repo_name)\n content = Content.objects.filter(upstream_id__in=content_ids)\n with repo.new_version() as new_version:\n new_version.set_content(content)\n else:\n # just map all the content to our destination repo\n content = Content.objects.filter(pk__in=resulting_content_ids)\n with dest_repo.new_version() as new_version:\n new_version.set_content(content)\n\n content_count = content.count()\n pb.total = content_count\n pb.done = content_count\n pb.state = TASK_STATES.COMPLETED\n pb.save()\n\n gpr = TaskGroup.current().group_progress_reports.filter(code=\"import.repo.versions\")\n gpr.update(done=F(\"done\") + 1)\n\n\ndef pulp_import(importer_pk, path, toc):\n \"\"\"\n Import a Pulp export into Pulp.\n\n Args:\n importer_pk (str): Primary key of PulpImporter to do the import\n path (str): Path to the export to be imported\n \"\"\"\n\n def _compute_hash(filename):\n sha256_hash = hashlib.sha256()\n with open(filename, \"rb\") as f:\n # Read and update hash string value in blocks of 4K\n for byte_block in iter(lambda: f.read(4096), b\"\"):\n sha256_hash.update(byte_block)\n return sha256_hash.hexdigest()\n\n def validate_toc(toc_filename):\n \"\"\"\n Check validity of table-of-contents file.\n\n table-of-contents must:\n * exist\n * be valid JSON\n * point to chunked-export-files that exist 'next to' the 'toc' file\n * point to chunks whose checksums match the checksums stored in the 'toc' file\n\n Args:\n toc_filename (str): The user-provided toc-file-path to be validated.\n\n Raises:\n ValidationError: If toc is not a valid JSON table-of-contents file,\n or when toc points to chunked-export-files that can't be found in the same\n directory as the toc-file, or the checksums of the chunks do not match the\n checksums stored in toc.\n \"\"\"\n with open(toc_filename) as json_file:\n # Valid JSON?\n the_toc = json.load(json_file)\n if not the_toc.get(\"files\", None) or not the_toc.get(\"meta\", None):\n raise ValidationError(_(\"Missing 'files' or 'meta' keys in table-of-contents!\"))\n\n base_dir = os.path.dirname(toc_filename)\n # Points at chunks that exist?\n missing_files = []\n for f in 
sorted(the_toc[\"files\"].keys()):\n if not os.path.isfile(os.path.join(base_dir, f)):\n missing_files.append(f)\n if missing_files:\n raise ValidationError(\n _(\n \"Missing import-chunks named in table-of-contents: {}.\".format(\n str(missing_files)\n )\n )\n )\n\n errs = []\n # validate the sha256 of the toc-entries\n # gather errors for reporting at the end\n chunks = sorted(the_toc[\"files\"].keys())\n data = dict(message=\"Validating Chunks\", code=\"validate.chunks\", total=len(chunks))\n with ProgressReport(**data) as pb:\n for chunk in pb.iter(chunks):\n a_hash = _compute_hash(os.path.join(base_dir, chunk))\n if not a_hash == the_toc[\"files\"][chunk]:\n err_str = \"File {} expected checksum : {}, computed checksum : {}\".format(\n chunk, the_toc[\"files\"][chunk], a_hash\n )\n errs.append(err_str)\n\n # if there are any errors, report and fail\n if errs:\n raise ValidationError(_(\"Import chunk hash mismatch: {}).\").format(str(errs)))\n\n return the_toc\n\n def validate_and_assemble(toc_filename):\n \"\"\"Validate checksums of, and reassemble, chunks in table-of-contents file.\"\"\"\n the_toc = validate_toc(toc_filename)\n toc_dir = os.path.dirname(toc_filename)\n result_file = os.path.join(toc_dir, the_toc[\"meta\"][\"file\"])\n\n # if we have only one entry in \"files\", it must be the full .tar.gz - return it\n if len(the_toc[\"files\"]) == 1:\n return os.path.join(toc_dir, list(the_toc[\"files\"].keys())[0])\n\n # We have multiple chunks.\n # reassemble into one file 'next to' the toc and return the resulting full-path\n chunk_size = int(the_toc[\"meta\"][\"chunk_size\"])\n offset = 0\n block_size = 1024\n blocks_per_chunk = int(chunk_size / block_size)\n\n # sorting-by-filename is REALLY IMPORTANT here\n # keys are of the form <base-export-name>.00..<base-export-name>.NN,\n # and must be reassembled IN ORDER\n the_chunk_files = sorted(the_toc[\"files\"].keys())\n\n data = dict(\n message=\"Recombining Chunks\", code=\"recombine.chunks\", total=len(the_chunk_files)\n )\n with ProgressReport(**data) as pb:\n for chunk in pb.iter(the_chunk_files):\n # For each chunk, add it to the reconstituted tar.gz, picking up where the previous\n # chunk left off\n subprocess.run(\n [\n \"dd\",\n \"if={}\".format(os.path.join(toc_dir, chunk)),\n \"of={}\".format(result_file),\n \"bs={}\".format(str(block_size)),\n \"seek={}\".format(str(offset)),\n ],\n )\n offset += blocks_per_chunk\n # To keep from taking up All The Disk, we delete each chunk after it has been added\n # to the recombined file.\n try:\n subprocess.run([\"rm\", \"-f\", os.path.join(toc_dir, chunk)])\n except OSError:\n log.warning(\n _(\"Failed to remove chunk {} after recombining. 
Continuing.\").format(\n os.path.join(toc_dir, chunk)\n ),\n exc_info=True,\n )\n\n combined_hash = _compute_hash(result_file)\n if combined_hash != the_toc[\"meta\"][\"global_hash\"]:\n raise ValidationError(\n _(\"Mismatch between combined .tar.gz checksum [{}] and originating [{}]).\").format(\n combined_hash, the_toc[\"meta\"][\"global_hash\"]\n )\n )\n # if we get this far, then: the chunk-files all existed, they all pass checksum validation,\n # and there exists a combined .tar.gz, which *also* passes checksum-validation.\n # Let the rest of the import process do its thing on the new combined-file.\n return result_file\n\n if toc:\n log.info(_(\"Validating TOC {}.\").format(toc))\n path = validate_and_assemble(toc)\n\n log.info(_(\"Importing {}.\").format(path))\n current_task = Task.current()\n importer = PulpImporter.objects.get(pk=importer_pk)\n the_import = PulpImport.objects.create(\n importer=importer, task=current_task, params={\"path\": path}\n )\n CreatedResource.objects.create(content_object=the_import)\n\n task_group = TaskGroup.objects.create(description=f\"Import of {path}\")\n Task.objects.filter(pk=current_task.pk).update(task_group=task_group)\n current_task.refresh_from_db()\n CreatedResource.objects.create(content_object=task_group)\n\n with tempfile.TemporaryDirectory(dir=\".\") as temp_dir:\n with tarfile.open(path, \"r:gz\") as tar:\n tar.extractall(path=temp_dir)\n\n # Check version info\n with open(os.path.join(temp_dir, VERSIONS_FILE)) as version_file:\n version_json = json.load(version_file)\n _check_versions(version_json)\n\n # Artifacts\n ar_result = _import_file(os.path.join(temp_dir, ARTIFACT_FILE), ArtifactResource)\n data = dict(\n message=\"Importing Artifacts\", code=\"import.artifacts\", total=len(ar_result.rows)\n )\n with ProgressReport(**data) as pb:\n for row in pb.iter(ar_result.rows):\n artifact = Artifact.objects.get(pk=row.object_id)\n base_path = os.path.join(\"artifact\", artifact.sha256[0:2], artifact.sha256[2:])\n src = os.path.join(temp_dir, base_path)\n dest = os.path.join(settings.MEDIA_ROOT, base_path)\n\n if not default_storage.exists(dest):\n with open(src, \"rb\") as f:\n default_storage.save(dest, f)\n\n with open(os.path.join(temp_dir, REPO_FILE), \"r\") as repo_data_file:\n data = json.load(repo_data_file)\n gpr = GroupProgressReport(\n message=\"Importing repository versions\",\n code=\"import.repo.versions\",\n total=len(data),\n done=0,\n task_group=task_group,\n )\n gpr.save()\n\n for src_repo in data:\n try:\n dest_repo = _destination_repo(importer, src_repo[\"name\"])\n except Repository.DoesNotExist:\n log.warning(\n _(\"Could not find destination repo for {}. Skipping.\").format(\n src_repo[\"name\"]\n )\n )\n continue\n\n dispatch(\n import_repository_version,\n exclusive_resources=[dest_repo],\n args=[importer.pk, dest_repo.pk, src_repo[\"name\"], path],\n task_group=task_group,\n )\n\n task_group.finish()\n", "path": "pulpcore/app/tasks/importer.py"}]} |
gh_patches_debug_1114 | rasdani/github-patches | git_diff | numpy__numpy-8801 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
np.hstack([ ]) throws IndexError instead of ValueError
Calling `np.hstack([ ])` throws an `IndexError`. It takes a bit more work to find out the root cause of the error, i.e. the empty list.
```
>>> import numpy as np
>>> np.hstack([ ])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3/dist-packages/numpy/core/shape_base.py", line 277, in hstack
if arrs[0].ndim == 1:
IndexError: list index out of range
```
`np.vstack`, on the other hand, throws a more appropriate error.
```
>>> np.vstack([ ])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3/dist-packages/numpy/core/shape_base.py", line 230, in vstack
return _nx.concatenate([atleast_2d(_m) for _m in tup], 0)
ValueError: need at least one array to concatenate
```
--- END ISSUE ---
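
The difference comes from `hstack` indexing `arrs[0]` before checking whether anything was passed, while `vstack` hands the empty list straight to `concatenate`. A hedged sketch of an `hstack`-like wrapper that avoids the premature indexing (an illustration only, not necessarily the patch applied to NumPy):

```python
import numpy as np

def hstack_sketch(tup):
    arrs = [np.atleast_1d(a) for a in tup]
    # Only inspect arrs[0] when there is at least one array; otherwise let
    # np.concatenate raise its usual ValueError instead of an IndexError.
    if arrs and arrs[0].ndim == 1:
        return np.concatenate(arrs, 0)
    return np.concatenate(arrs, 1)

try:
    hstack_sketch([])
except ValueError as exc:
    print(exc)  # need at least one array to concatenate
```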
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `numpy/core/shape_base.py`
Content:
```
1 from __future__ import division, absolute_import, print_function
2
3 __all__ = ['atleast_1d', 'atleast_2d', 'atleast_3d', 'vstack', 'hstack',
4 'stack']
5
6 from . import numeric as _nx
7 from .numeric import asanyarray, newaxis
8 from .multiarray import normalize_axis_index
9
10 def atleast_1d(*arys):
11 """
12 Convert inputs to arrays with at least one dimension.
13
14 Scalar inputs are converted to 1-dimensional arrays, whilst
15 higher-dimensional inputs are preserved.
16
17 Parameters
18 ----------
19 arys1, arys2, ... : array_like
20 One or more input arrays.
21
22 Returns
23 -------
24 ret : ndarray
25 An array, or list of arrays, each with ``a.ndim >= 1``.
26 Copies are made only if necessary.
27
28 See Also
29 --------
30 atleast_2d, atleast_3d
31
32 Examples
33 --------
34 >>> np.atleast_1d(1.0)
35 array([ 1.])
36
37 >>> x = np.arange(9.0).reshape(3,3)
38 >>> np.atleast_1d(x)
39 array([[ 0., 1., 2.],
40 [ 3., 4., 5.],
41 [ 6., 7., 8.]])
42 >>> np.atleast_1d(x) is x
43 True
44
45 >>> np.atleast_1d(1, [3, 4])
46 [array([1]), array([3, 4])]
47
48 """
49 res = []
50 for ary in arys:
51 ary = asanyarray(ary)
52 if ary.ndim == 0:
53 result = ary.reshape(1)
54 else:
55 result = ary
56 res.append(result)
57 if len(res) == 1:
58 return res[0]
59 else:
60 return res
61
62 def atleast_2d(*arys):
63 """
64 View inputs as arrays with at least two dimensions.
65
66 Parameters
67 ----------
68 arys1, arys2, ... : array_like
69 One or more array-like sequences. Non-array inputs are converted
70 to arrays. Arrays that already have two or more dimensions are
71 preserved.
72
73 Returns
74 -------
75 res, res2, ... : ndarray
76 An array, or list of arrays, each with ``a.ndim >= 2``.
77 Copies are avoided where possible, and views with two or more
78 dimensions are returned.
79
80 See Also
81 --------
82 atleast_1d, atleast_3d
83
84 Examples
85 --------
86 >>> np.atleast_2d(3.0)
87 array([[ 3.]])
88
89 >>> x = np.arange(3.0)
90 >>> np.atleast_2d(x)
91 array([[ 0., 1., 2.]])
92 >>> np.atleast_2d(x).base is x
93 True
94
95 >>> np.atleast_2d(1, [1, 2], [[1, 2]])
96 [array([[1]]), array([[1, 2]]), array([[1, 2]])]
97
98 """
99 res = []
100 for ary in arys:
101 ary = asanyarray(ary)
102 if ary.ndim == 0:
103 result = ary.reshape(1, 1)
104 elif ary.ndim == 1:
105 result = ary[newaxis,:]
106 else:
107 result = ary
108 res.append(result)
109 if len(res) == 1:
110 return res[0]
111 else:
112 return res
113
114 def atleast_3d(*arys):
115 """
116 View inputs as arrays with at least three dimensions.
117
118 Parameters
119 ----------
120 arys1, arys2, ... : array_like
121 One or more array-like sequences. Non-array inputs are converted to
122 arrays. Arrays that already have three or more dimensions are
123 preserved.
124
125 Returns
126 -------
127 res1, res2, ... : ndarray
128 An array, or list of arrays, each with ``a.ndim >= 3``. Copies are
129 avoided where possible, and views with three or more dimensions are
130 returned. For example, a 1-D array of shape ``(N,)`` becomes a view
131 of shape ``(1, N, 1)``, and a 2-D array of shape ``(M, N)`` becomes a
132 view of shape ``(M, N, 1)``.
133
134 See Also
135 --------
136 atleast_1d, atleast_2d
137
138 Examples
139 --------
140 >>> np.atleast_3d(3.0)
141 array([[[ 3.]]])
142
143 >>> x = np.arange(3.0)
144 >>> np.atleast_3d(x).shape
145 (1, 3, 1)
146
147 >>> x = np.arange(12.0).reshape(4,3)
148 >>> np.atleast_3d(x).shape
149 (4, 3, 1)
150 >>> np.atleast_3d(x).base is x.base # x is a reshape, so not base itself
151 True
152
153 >>> for arr in np.atleast_3d([1, 2], [[1, 2]], [[[1, 2]]]):
154 ... print(arr, arr.shape)
155 ...
156 [[[1]
157 [2]]] (1, 2, 1)
158 [[[1]
159 [2]]] (1, 2, 1)
160 [[[1 2]]] (1, 1, 2)
161
162 """
163 res = []
164 for ary in arys:
165 ary = asanyarray(ary)
166 if ary.ndim == 0:
167 result = ary.reshape(1, 1, 1)
168 elif ary.ndim == 1:
169 result = ary[newaxis,:, newaxis]
170 elif ary.ndim == 2:
171 result = ary[:,:, newaxis]
172 else:
173 result = ary
174 res.append(result)
175 if len(res) == 1:
176 return res[0]
177 else:
178 return res
179
180
181 def vstack(tup):
182 """
183 Stack arrays in sequence vertically (row wise).
184
185 Take a sequence of arrays and stack them vertically to make a single
186 array. Rebuild arrays divided by `vsplit`.
187
188 This function continues to be supported for backward compatibility, but
189 you should prefer ``np.concatenate`` or ``np.stack``. The ``np.stack``
190 function was added in NumPy 1.10.
191
192 Parameters
193 ----------
194 tup : sequence of ndarrays
195 Tuple containing arrays to be stacked. The arrays must have the same
196 shape along all but the first axis.
197
198 Returns
199 -------
200 stacked : ndarray
201 The array formed by stacking the given arrays.
202
203 See Also
204 --------
205 stack : Join a sequence of arrays along a new axis.
206 hstack : Stack arrays in sequence horizontally (column wise).
207 dstack : Stack arrays in sequence depth wise (along third dimension).
208 concatenate : Join a sequence of arrays along an existing axis.
209 vsplit : Split array into a list of multiple sub-arrays vertically.
210
211 Notes
212 -----
213 Equivalent to ``np.concatenate(tup, axis=0)`` if `tup` contains arrays that
214 are at least 2-dimensional.
215
216 Examples
217 --------
218 >>> a = np.array([1, 2, 3])
219 >>> b = np.array([2, 3, 4])
220 >>> np.vstack((a,b))
221 array([[1, 2, 3],
222 [2, 3, 4]])
223
224 >>> a = np.array([[1], [2], [3]])
225 >>> b = np.array([[2], [3], [4]])
226 >>> np.vstack((a,b))
227 array([[1],
228 [2],
229 [3],
230 [2],
231 [3],
232 [4]])
233
234 """
235 return _nx.concatenate([atleast_2d(_m) for _m in tup], 0)
236
237 def hstack(tup):
238 """
239 Stack arrays in sequence horizontally (column wise).
240
241 Take a sequence of arrays and stack them horizontally to make
242 a single array. Rebuild arrays divided by `hsplit`.
243
244 This function continues to be supported for backward compatibility, but
245 you should prefer ``np.concatenate`` or ``np.stack``. The ``np.stack``
246 function was added in NumPy 1.10.
247
248 Parameters
249 ----------
250 tup : sequence of ndarrays
251 All arrays must have the same shape along all but the second axis.
252
253 Returns
254 -------
255 stacked : ndarray
256 The array formed by stacking the given arrays.
257
258 See Also
259 --------
260 stack : Join a sequence of arrays along a new axis.
261 vstack : Stack arrays in sequence vertically (row wise).
262 dstack : Stack arrays in sequence depth wise (along third axis).
263 concatenate : Join a sequence of arrays along an existing axis.
264 hsplit : Split array along second axis.
265
266 Notes
267 -----
268 Equivalent to ``np.concatenate(tup, axis=1)``
269
270 Examples
271 --------
272 >>> a = np.array((1,2,3))
273 >>> b = np.array((2,3,4))
274 >>> np.hstack((a,b))
275 array([1, 2, 3, 2, 3, 4])
276 >>> a = np.array([[1],[2],[3]])
277 >>> b = np.array([[2],[3],[4]])
278 >>> np.hstack((a,b))
279 array([[1, 2],
280 [2, 3],
281 [3, 4]])
282
283 """
284 arrs = [atleast_1d(_m) for _m in tup]
285 # As a special case, dimension 0 of 1-dimensional arrays is "horizontal"
286 if arrs[0].ndim == 1:
287 return _nx.concatenate(arrs, 0)
288 else:
289 return _nx.concatenate(arrs, 1)
290
291 def stack(arrays, axis=0):
292 """
293 Join a sequence of arrays along a new axis.
294
295 The `axis` parameter specifies the index of the new axis in the dimensions
296 of the result. For example, if ``axis=0`` it will be the first dimension
297 and if ``axis=-1`` it will be the last dimension.
298
299 .. versionadded:: 1.10.0
300
301 Parameters
302 ----------
303 arrays : sequence of array_like
304 Each array must have the same shape.
305 axis : int, optional
306 The axis in the result array along which the input arrays are stacked.
307
308 Returns
309 -------
310 stacked : ndarray
311 The stacked array has one more dimension than the input arrays.
312
313 See Also
314 --------
315 concatenate : Join a sequence of arrays along an existing axis.
316 split : Split array into a list of multiple sub-arrays of equal size.
317
318 Examples
319 --------
320 >>> arrays = [np.random.randn(3, 4) for _ in range(10)]
321 >>> np.stack(arrays, axis=0).shape
322 (10, 3, 4)
323
324 >>> np.stack(arrays, axis=1).shape
325 (3, 10, 4)
326
327 >>> np.stack(arrays, axis=2).shape
328 (3, 4, 10)
329
330 >>> a = np.array([1, 2, 3])
331 >>> b = np.array([2, 3, 4])
332 >>> np.stack((a, b))
333 array([[1, 2, 3],
334 [2, 3, 4]])
335
336 >>> np.stack((a, b), axis=-1)
337 array([[1, 2],
338 [2, 3],
339 [3, 4]])
340
341 """
342 arrays = [asanyarray(arr) for arr in arrays]
343 if not arrays:
344 raise ValueError('need at least one array to stack')
345
346 shapes = set(arr.shape for arr in arrays)
347 if len(shapes) != 1:
348 raise ValueError('all input arrays must have the same shape')
349
350 result_ndim = arrays[0].ndim + 1
351 axis = normalize_axis_index(axis, result_ndim)
352
353 sl = (slice(None),) * axis + (_nx.newaxis,)
354 expanded_arrays = [arr[sl] for arr in arrays]
355 return _nx.concatenate(expanded_arrays, axis=axis)
356
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/numpy/core/shape_base.py b/numpy/core/shape_base.py
--- a/numpy/core/shape_base.py
+++ b/numpy/core/shape_base.py
@@ -283,7 +283,7 @@
"""
arrs = [atleast_1d(_m) for _m in tup]
# As a special case, dimension 0 of 1-dimensional arrays is "horizontal"
- if arrs[0].ndim == 1:
+ if arrs and arrs[0].ndim == 1:
return _nx.concatenate(arrs, 0)
else:
return _nx.concatenate(arrs, 1)
| {"golden_diff": "diff --git a/numpy/core/shape_base.py b/numpy/core/shape_base.py\n--- a/numpy/core/shape_base.py\n+++ b/numpy/core/shape_base.py\n@@ -283,7 +283,7 @@\n \"\"\"\n arrs = [atleast_1d(_m) for _m in tup]\n # As a special case, dimension 0 of 1-dimensional arrays is \"horizontal\"\n- if arrs[0].ndim == 1:\n+ if arrs and arrs[0].ndim == 1:\n return _nx.concatenate(arrs, 0)\n else:\n return _nx.concatenate(arrs, 1)\n", "issue": "np.hstack([ ]) throws IndexError instead of ValueError\nCalling `np.hstack([ ])` throws an `IndexError`. It takes a bit more work to find out the root cause of the error, ie: the empty list.\r\n```\r\n>>> import numpy as np\r\n>>> np.hstack([ ])\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/usr/lib/python3/dist-packages/numpy/core/shape_base.py\", line 277, in hstack\r\n if arrs[0].ndim == 1:\r\nIndexError: list index out of range\r\n```\r\n\r\n`np.vstack`, on the other hand, throws a more appropriate error.\r\n\r\n```\r\n>>> np.vstack([ ])\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/usr/lib/python3/dist-packages/numpy/core/shape_base.py\", line 230, in vstack\r\n return _nx.concatenate([atleast_2d(_m) for _m in tup], 0)\r\nValueError: need at least one array to concatenate\r\n```\r\n\n", "before_files": [{"content": "from __future__ import division, absolute_import, print_function\n\n__all__ = ['atleast_1d', 'atleast_2d', 'atleast_3d', 'vstack', 'hstack',\n 'stack']\n\nfrom . import numeric as _nx\nfrom .numeric import asanyarray, newaxis\nfrom .multiarray import normalize_axis_index\n\ndef atleast_1d(*arys):\n \"\"\"\n Convert inputs to arrays with at least one dimension.\n\n Scalar inputs are converted to 1-dimensional arrays, whilst\n higher-dimensional inputs are preserved.\n\n Parameters\n ----------\n arys1, arys2, ... : array_like\n One or more input arrays.\n\n Returns\n -------\n ret : ndarray\n An array, or list of arrays, each with ``a.ndim >= 1``.\n Copies are made only if necessary.\n\n See Also\n --------\n atleast_2d, atleast_3d\n\n Examples\n --------\n >>> np.atleast_1d(1.0)\n array([ 1.])\n\n >>> x = np.arange(9.0).reshape(3,3)\n >>> np.atleast_1d(x)\n array([[ 0., 1., 2.],\n [ 3., 4., 5.],\n [ 6., 7., 8.]])\n >>> np.atleast_1d(x) is x\n True\n\n >>> np.atleast_1d(1, [3, 4])\n [array([1]), array([3, 4])]\n\n \"\"\"\n res = []\n for ary in arys:\n ary = asanyarray(ary)\n if ary.ndim == 0:\n result = ary.reshape(1)\n else:\n result = ary\n res.append(result)\n if len(res) == 1:\n return res[0]\n else:\n return res\n\ndef atleast_2d(*arys):\n \"\"\"\n View inputs as arrays with at least two dimensions.\n\n Parameters\n ----------\n arys1, arys2, ... : array_like\n One or more array-like sequences. Non-array inputs are converted\n to arrays. Arrays that already have two or more dimensions are\n preserved.\n\n Returns\n -------\n res, res2, ... 
: ndarray\n An array, or list of arrays, each with ``a.ndim >= 2``.\n Copies are avoided where possible, and views with two or more\n dimensions are returned.\n\n See Also\n --------\n atleast_1d, atleast_3d\n\n Examples\n --------\n >>> np.atleast_2d(3.0)\n array([[ 3.]])\n\n >>> x = np.arange(3.0)\n >>> np.atleast_2d(x)\n array([[ 0., 1., 2.]])\n >>> np.atleast_2d(x).base is x\n True\n\n >>> np.atleast_2d(1, [1, 2], [[1, 2]])\n [array([[1]]), array([[1, 2]]), array([[1, 2]])]\n\n \"\"\"\n res = []\n for ary in arys:\n ary = asanyarray(ary)\n if ary.ndim == 0:\n result = ary.reshape(1, 1)\n elif ary.ndim == 1:\n result = ary[newaxis,:]\n else:\n result = ary\n res.append(result)\n if len(res) == 1:\n return res[0]\n else:\n return res\n\ndef atleast_3d(*arys):\n \"\"\"\n View inputs as arrays with at least three dimensions.\n\n Parameters\n ----------\n arys1, arys2, ... : array_like\n One or more array-like sequences. Non-array inputs are converted to\n arrays. Arrays that already have three or more dimensions are\n preserved.\n\n Returns\n -------\n res1, res2, ... : ndarray\n An array, or list of arrays, each with ``a.ndim >= 3``. Copies are\n avoided where possible, and views with three or more dimensions are\n returned. For example, a 1-D array of shape ``(N,)`` becomes a view\n of shape ``(1, N, 1)``, and a 2-D array of shape ``(M, N)`` becomes a\n view of shape ``(M, N, 1)``.\n\n See Also\n --------\n atleast_1d, atleast_2d\n\n Examples\n --------\n >>> np.atleast_3d(3.0)\n array([[[ 3.]]])\n\n >>> x = np.arange(3.0)\n >>> np.atleast_3d(x).shape\n (1, 3, 1)\n\n >>> x = np.arange(12.0).reshape(4,3)\n >>> np.atleast_3d(x).shape\n (4, 3, 1)\n >>> np.atleast_3d(x).base is x.base # x is a reshape, so not base itself\n True\n\n >>> for arr in np.atleast_3d([1, 2], [[1, 2]], [[[1, 2]]]):\n ... print(arr, arr.shape)\n ...\n [[[1]\n [2]]] (1, 2, 1)\n [[[1]\n [2]]] (1, 2, 1)\n [[[1 2]]] (1, 1, 2)\n\n \"\"\"\n res = []\n for ary in arys:\n ary = asanyarray(ary)\n if ary.ndim == 0:\n result = ary.reshape(1, 1, 1)\n elif ary.ndim == 1:\n result = ary[newaxis,:, newaxis]\n elif ary.ndim == 2:\n result = ary[:,:, newaxis]\n else:\n result = ary\n res.append(result)\n if len(res) == 1:\n return res[0]\n else:\n return res\n\n\ndef vstack(tup):\n \"\"\"\n Stack arrays in sequence vertically (row wise).\n\n Take a sequence of arrays and stack them vertically to make a single\n array. Rebuild arrays divided by `vsplit`.\n\n This function continues to be supported for backward compatibility, but\n you should prefer ``np.concatenate`` or ``np.stack``. The ``np.stack``\n function was added in NumPy 1.10.\n\n Parameters\n ----------\n tup : sequence of ndarrays\n Tuple containing arrays to be stacked. 
The arrays must have the same\n shape along all but the first axis.\n\n Returns\n -------\n stacked : ndarray\n The array formed by stacking the given arrays.\n\n See Also\n --------\n stack : Join a sequence of arrays along a new axis.\n hstack : Stack arrays in sequence horizontally (column wise).\n dstack : Stack arrays in sequence depth wise (along third dimension).\n concatenate : Join a sequence of arrays along an existing axis.\n vsplit : Split array into a list of multiple sub-arrays vertically.\n\n Notes\n -----\n Equivalent to ``np.concatenate(tup, axis=0)`` if `tup` contains arrays that\n are at least 2-dimensional.\n\n Examples\n --------\n >>> a = np.array([1, 2, 3])\n >>> b = np.array([2, 3, 4])\n >>> np.vstack((a,b))\n array([[1, 2, 3],\n [2, 3, 4]])\n\n >>> a = np.array([[1], [2], [3]])\n >>> b = np.array([[2], [3], [4]])\n >>> np.vstack((a,b))\n array([[1],\n [2],\n [3],\n [2],\n [3],\n [4]])\n\n \"\"\"\n return _nx.concatenate([atleast_2d(_m) for _m in tup], 0)\n\ndef hstack(tup):\n \"\"\"\n Stack arrays in sequence horizontally (column wise).\n\n Take a sequence of arrays and stack them horizontally to make\n a single array. Rebuild arrays divided by `hsplit`.\n\n This function continues to be supported for backward compatibility, but\n you should prefer ``np.concatenate`` or ``np.stack``. The ``np.stack``\n function was added in NumPy 1.10.\n\n Parameters\n ----------\n tup : sequence of ndarrays\n All arrays must have the same shape along all but the second axis.\n\n Returns\n -------\n stacked : ndarray\n The array formed by stacking the given arrays.\n\n See Also\n --------\n stack : Join a sequence of arrays along a new axis.\n vstack : Stack arrays in sequence vertically (row wise).\n dstack : Stack arrays in sequence depth wise (along third axis).\n concatenate : Join a sequence of arrays along an existing axis.\n hsplit : Split array along second axis.\n\n Notes\n -----\n Equivalent to ``np.concatenate(tup, axis=1)``\n\n Examples\n --------\n >>> a = np.array((1,2,3))\n >>> b = np.array((2,3,4))\n >>> np.hstack((a,b))\n array([1, 2, 3, 2, 3, 4])\n >>> a = np.array([[1],[2],[3]])\n >>> b = np.array([[2],[3],[4]])\n >>> np.hstack((a,b))\n array([[1, 2],\n [2, 3],\n [3, 4]])\n\n \"\"\"\n arrs = [atleast_1d(_m) for _m in tup]\n # As a special case, dimension 0 of 1-dimensional arrays is \"horizontal\"\n if arrs[0].ndim == 1:\n return _nx.concatenate(arrs, 0)\n else:\n return _nx.concatenate(arrs, 1)\n\ndef stack(arrays, axis=0):\n \"\"\"\n Join a sequence of arrays along a new axis.\n\n The `axis` parameter specifies the index of the new axis in the dimensions\n of the result. For example, if ``axis=0`` it will be the first dimension\n and if ``axis=-1`` it will be the last dimension.\n\n .. 
versionadded:: 1.10.0\n\n Parameters\n ----------\n arrays : sequence of array_like\n Each array must have the same shape.\n axis : int, optional\n The axis in the result array along which the input arrays are stacked.\n\n Returns\n -------\n stacked : ndarray\n The stacked array has one more dimension than the input arrays.\n\n See Also\n --------\n concatenate : Join a sequence of arrays along an existing axis.\n split : Split array into a list of multiple sub-arrays of equal size.\n\n Examples\n --------\n >>> arrays = [np.random.randn(3, 4) for _ in range(10)]\n >>> np.stack(arrays, axis=0).shape\n (10, 3, 4)\n\n >>> np.stack(arrays, axis=1).shape\n (3, 10, 4)\n\n >>> np.stack(arrays, axis=2).shape\n (3, 4, 10)\n\n >>> a = np.array([1, 2, 3])\n >>> b = np.array([2, 3, 4])\n >>> np.stack((a, b))\n array([[1, 2, 3],\n [2, 3, 4]])\n\n >>> np.stack((a, b), axis=-1)\n array([[1, 2],\n [2, 3],\n [3, 4]])\n\n \"\"\"\n arrays = [asanyarray(arr) for arr in arrays]\n if not arrays:\n raise ValueError('need at least one array to stack')\n\n shapes = set(arr.shape for arr in arrays)\n if len(shapes) != 1:\n raise ValueError('all input arrays must have the same shape')\n\n result_ndim = arrays[0].ndim + 1\n axis = normalize_axis_index(axis, result_ndim)\n\n sl = (slice(None),) * axis + (_nx.newaxis,)\n expanded_arrays = [arr[sl] for arr in arrays]\n return _nx.concatenate(expanded_arrays, axis=axis)\n", "path": "numpy/core/shape_base.py"}], "after_files": [{"content": "from __future__ import division, absolute_import, print_function\n\n__all__ = ['atleast_1d', 'atleast_2d', 'atleast_3d', 'vstack', 'hstack',\n 'stack']\n\nfrom . import numeric as _nx\nfrom .numeric import asanyarray, newaxis\nfrom .multiarray import normalize_axis_index\n\ndef atleast_1d(*arys):\n \"\"\"\n Convert inputs to arrays with at least one dimension.\n\n Scalar inputs are converted to 1-dimensional arrays, whilst\n higher-dimensional inputs are preserved.\n\n Parameters\n ----------\n arys1, arys2, ... : array_like\n One or more input arrays.\n\n Returns\n -------\n ret : ndarray\n An array, or list of arrays, each with ``a.ndim >= 1``.\n Copies are made only if necessary.\n\n See Also\n --------\n atleast_2d, atleast_3d\n\n Examples\n --------\n >>> np.atleast_1d(1.0)\n array([ 1.])\n\n >>> x = np.arange(9.0).reshape(3,3)\n >>> np.atleast_1d(x)\n array([[ 0., 1., 2.],\n [ 3., 4., 5.],\n [ 6., 7., 8.]])\n >>> np.atleast_1d(x) is x\n True\n\n >>> np.atleast_1d(1, [3, 4])\n [array([1]), array([3, 4])]\n\n \"\"\"\n res = []\n for ary in arys:\n ary = asanyarray(ary)\n if ary.ndim == 0:\n result = ary.reshape(1)\n else:\n result = ary\n res.append(result)\n if len(res) == 1:\n return res[0]\n else:\n return res\n\ndef atleast_2d(*arys):\n \"\"\"\n View inputs as arrays with at least two dimensions.\n\n Parameters\n ----------\n arys1, arys2, ... : array_like\n One or more array-like sequences. Non-array inputs are converted\n to arrays. Arrays that already have two or more dimensions are\n preserved.\n\n Returns\n -------\n res, res2, ... 
: ndarray\n An array, or list of arrays, each with ``a.ndim >= 2``.\n Copies are avoided where possible, and views with two or more\n dimensions are returned.\n\n See Also\n --------\n atleast_1d, atleast_3d\n\n Examples\n --------\n >>> np.atleast_2d(3.0)\n array([[ 3.]])\n\n >>> x = np.arange(3.0)\n >>> np.atleast_2d(x)\n array([[ 0., 1., 2.]])\n >>> np.atleast_2d(x).base is x\n True\n\n >>> np.atleast_2d(1, [1, 2], [[1, 2]])\n [array([[1]]), array([[1, 2]]), array([[1, 2]])]\n\n \"\"\"\n res = []\n for ary in arys:\n ary = asanyarray(ary)\n if ary.ndim == 0:\n result = ary.reshape(1, 1)\n elif ary.ndim == 1:\n result = ary[newaxis,:]\n else:\n result = ary\n res.append(result)\n if len(res) == 1:\n return res[0]\n else:\n return res\n\ndef atleast_3d(*arys):\n \"\"\"\n View inputs as arrays with at least three dimensions.\n\n Parameters\n ----------\n arys1, arys2, ... : array_like\n One or more array-like sequences. Non-array inputs are converted to\n arrays. Arrays that already have three or more dimensions are\n preserved.\n\n Returns\n -------\n res1, res2, ... : ndarray\n An array, or list of arrays, each with ``a.ndim >= 3``. Copies are\n avoided where possible, and views with three or more dimensions are\n returned. For example, a 1-D array of shape ``(N,)`` becomes a view\n of shape ``(1, N, 1)``, and a 2-D array of shape ``(M, N)`` becomes a\n view of shape ``(M, N, 1)``.\n\n See Also\n --------\n atleast_1d, atleast_2d\n\n Examples\n --------\n >>> np.atleast_3d(3.0)\n array([[[ 3.]]])\n\n >>> x = np.arange(3.0)\n >>> np.atleast_3d(x).shape\n (1, 3, 1)\n\n >>> x = np.arange(12.0).reshape(4,3)\n >>> np.atleast_3d(x).shape\n (4, 3, 1)\n >>> np.atleast_3d(x).base is x.base # x is a reshape, so not base itself\n True\n\n >>> for arr in np.atleast_3d([1, 2], [[1, 2]], [[[1, 2]]]):\n ... print(arr, arr.shape)\n ...\n [[[1]\n [2]]] (1, 2, 1)\n [[[1]\n [2]]] (1, 2, 1)\n [[[1 2]]] (1, 1, 2)\n\n \"\"\"\n res = []\n for ary in arys:\n ary = asanyarray(ary)\n if ary.ndim == 0:\n result = ary.reshape(1, 1, 1)\n elif ary.ndim == 1:\n result = ary[newaxis,:, newaxis]\n elif ary.ndim == 2:\n result = ary[:,:, newaxis]\n else:\n result = ary\n res.append(result)\n if len(res) == 1:\n return res[0]\n else:\n return res\n\n\ndef vstack(tup):\n \"\"\"\n Stack arrays in sequence vertically (row wise).\n\n Take a sequence of arrays and stack them vertically to make a single\n array. Rebuild arrays divided by `vsplit`.\n\n This function continues to be supported for backward compatibility, but\n you should prefer ``np.concatenate`` or ``np.stack``. The ``np.stack``\n function was added in NumPy 1.10.\n\n Parameters\n ----------\n tup : sequence of ndarrays\n Tuple containing arrays to be stacked. 
The arrays must have the same\n shape along all but the first axis.\n\n Returns\n -------\n stacked : ndarray\n The array formed by stacking the given arrays.\n\n See Also\n --------\n stack : Join a sequence of arrays along a new axis.\n hstack : Stack arrays in sequence horizontally (column wise).\n dstack : Stack arrays in sequence depth wise (along third dimension).\n concatenate : Join a sequence of arrays along an existing axis.\n vsplit : Split array into a list of multiple sub-arrays vertically.\n\n Notes\n -----\n Equivalent to ``np.concatenate(tup, axis=0)`` if `tup` contains arrays that\n are at least 2-dimensional.\n\n Examples\n --------\n >>> a = np.array([1, 2, 3])\n >>> b = np.array([2, 3, 4])\n >>> np.vstack((a,b))\n array([[1, 2, 3],\n [2, 3, 4]])\n\n >>> a = np.array([[1], [2], [3]])\n >>> b = np.array([[2], [3], [4]])\n >>> np.vstack((a,b))\n array([[1],\n [2],\n [3],\n [2],\n [3],\n [4]])\n\n \"\"\"\n return _nx.concatenate([atleast_2d(_m) for _m in tup], 0)\n\ndef hstack(tup):\n \"\"\"\n Stack arrays in sequence horizontally (column wise).\n\n Take a sequence of arrays and stack them horizontally to make\n a single array. Rebuild arrays divided by `hsplit`.\n\n This function continues to be supported for backward compatibility, but\n you should prefer ``np.concatenate`` or ``np.stack``. The ``np.stack``\n function was added in NumPy 1.10.\n\n Parameters\n ----------\n tup : sequence of ndarrays\n All arrays must have the same shape along all but the second axis.\n\n Returns\n -------\n stacked : ndarray\n The array formed by stacking the given arrays.\n\n See Also\n --------\n stack : Join a sequence of arrays along a new axis.\n vstack : Stack arrays in sequence vertically (row wise).\n dstack : Stack arrays in sequence depth wise (along third axis).\n concatenate : Join a sequence of arrays along an existing axis.\n hsplit : Split array along second axis.\n\n Notes\n -----\n Equivalent to ``np.concatenate(tup, axis=1)``\n\n Examples\n --------\n >>> a = np.array((1,2,3))\n >>> b = np.array((2,3,4))\n >>> np.hstack((a,b))\n array([1, 2, 3, 2, 3, 4])\n >>> a = np.array([[1],[2],[3]])\n >>> b = np.array([[2],[3],[4]])\n >>> np.hstack((a,b))\n array([[1, 2],\n [2, 3],\n [3, 4]])\n\n \"\"\"\n arrs = [atleast_1d(_m) for _m in tup]\n # As a special case, dimension 0 of 1-dimensional arrays is \"horizontal\"\n if arrs and arrs[0].ndim == 1:\n return _nx.concatenate(arrs, 0)\n else:\n return _nx.concatenate(arrs, 1)\n\ndef stack(arrays, axis=0):\n \"\"\"\n Join a sequence of arrays along a new axis.\n\n The `axis` parameter specifies the index of the new axis in the dimensions\n of the result. For example, if ``axis=0`` it will be the first dimension\n and if ``axis=-1`` it will be the last dimension.\n\n .. 
versionadded:: 1.10.0\n\n Parameters\n ----------\n arrays : sequence of array_like\n Each array must have the same shape.\n axis : int, optional\n The axis in the result array along which the input arrays are stacked.\n\n Returns\n -------\n stacked : ndarray\n The stacked array has one more dimension than the input arrays.\n\n See Also\n --------\n concatenate : Join a sequence of arrays along an existing axis.\n split : Split array into a list of multiple sub-arrays of equal size.\n\n Examples\n --------\n >>> arrays = [np.random.randn(3, 4) for _ in range(10)]\n >>> np.stack(arrays, axis=0).shape\n (10, 3, 4)\n\n >>> np.stack(arrays, axis=1).shape\n (3, 10, 4)\n\n >>> np.stack(arrays, axis=2).shape\n (3, 4, 10)\n\n >>> a = np.array([1, 2, 3])\n >>> b = np.array([2, 3, 4])\n >>> np.stack((a, b))\n array([[1, 2, 3],\n [2, 3, 4]])\n\n >>> np.stack((a, b), axis=-1)\n array([[1, 2],\n [2, 3],\n [3, 4]])\n\n \"\"\"\n arrays = [asanyarray(arr) for arr in arrays]\n if not arrays:\n raise ValueError('need at least one array to stack')\n\n shapes = set(arr.shape for arr in arrays)\n if len(shapes) != 1:\n raise ValueError('all input arrays must have the same shape')\n\n result_ndim = arrays[0].ndim + 1\n axis = normalize_axis_index(axis, result_ndim)\n\n sl = (slice(None),) * axis + (_nx.newaxis,)\n expanded_arrays = [arr[sl] for arr in arrays]\n return _nx.concatenate(expanded_arrays, axis=axis)\n", "path": "numpy/core/shape_base.py"}]} |
gh_patches_debug_1115 | rasdani/github-patches | git_diff | scikit-hep__awkward-1822 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add statistics tracker to the new website
### Which documentation?
Tutorials site
### What needs to be documented?
Stefan gave us a `data-domain` for Plausible; we should add this text
```html
<script defer data-domain="awkward-array.org" src="https://views.scientific-python.org/js/plausible.js"></script>
```
to every HTML page. The HTML is generated by Sphinx, so it has to go into a template somewhere. I think we don't actually have the default HTML templates; I think that all of the HTML in [docs-sphinx/_templates](https://github.com/scikit-hep/awkward/tree/main/docs-sphinx/_templates) is an override. For instance, the [breadcrumbs.html](https://github.com/scikit-hep/awkward/blob/main/docs-sphinx/_templates/breadcrumbs.html) is just to eliminate something that would be on the default page (the "fork me on GitHub" ribbon?).
@agoose77, you've recently added [funding.html](https://github.com/scikit-hep/awkward/blob/main/docs-sphinx/_templates/funding.html), which appears at the bottom of every page. (I just checked.) If the `<script>` is added there, I think it would reach every page.
Except redirects. The [redirect.html](https://github.com/scikit-hep/awkward/blob/main/docs-sphinx/_templates/redirect.html) describes an entire page. It's debatable whether we'd want to count redirects. It would tell us how many people are using old URLs versus new URLs, but I don't see how useful that information is, and it would have to be disaggregated from the totals, since redirected links shouldn't be counted twice if we're interested in how many people went to a given _topic_ (not _page_). I vote for no statistics on redirects, and I'm not even sure if the script will work if the redirect happens through the meta mechanism (because such an access has `<noscript>`).
Arguably, a `<script>` element ought to go in the HTML `<head>`, rather than the `<body>` (`footer.html`). That's where I usually see them. Actually, I stand corrected: they're legal anywhere, and there are reasons to put them in the `<body>`. [This StackOverflow post](https://stackoverflow.com/a/24070373/1623645) presents the pros and cons: page rendering will pause while a script is being downloaded and executed, and that's no good. We could follow that page's "antiquated recommendation" by putting the `<script>` at the end of the page (`footer.html`); the reason against it doesn't apply: we don't need the statistics-counter script to run to render the page; that can happen late. The "modern approach" is to use `async` or `defer`, which I just noticed is in our snippet, so there are no constraints on when this snippet can be placed. (And it could be `async`, rather than `defer`, because we don't care whether it runs before or after other scripts on the page.)
The only argument I can see for putting it in the `<head>`, then, is that if the statistics-counter starts too late, we could undercount our bounce rate: users click the back button before rendering gets to the `<script>` and the page view gets counted. There will always be some cut-off in the bounce rate, much like $p_T$ in a tracking distribution, since very small values are hard to measure. Having a lower implicit cut on bounce rate, rather than a higher implicit cut, doesn't sound very important to me.
Trying to get it into the `<head>` would mean overloading more templates, and I don't like to overload Sphinx templates because it means we no longer get version updates for that part of the page, and who knows if the template designer intends some relationship between two parts of a page, starting with a particular version number? So I'm in favor of adding the `<script>` to `footer.html`, and `defer` may be replaced by `async` just to loosen an unnecessary constraint for the browser.
Sorry for the long-winded write-up; just thinking through the issues while I type!
Oh, one last thing: let's add the statistics-counter to the new documentation _only_. In other words, _not_ the one with the `latest` tag. The v1 documentation (`latest`) is split between two sites, I don't want to add it to the Netlify site, and the statistics will be easier to interpret if we have it on only one site: we'll know what page is meant by a given URL. We should also see the turn-on curve when the new documentation goes public. If we include the old documentation, we might not be able to filter it out of the statistics, since some of the URLs are the same.
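Editor's aside: the change recorded in this entry's diff (further down) skips the template override entirely and uses the pydata-sphinx-theme analytics hook instead. A minimal sketch of that route, consistent with that diff; the surrounding theme options are elided here:

```python
# docs-sphinx/conf.py -- sketch of the theme-option route; only the
# "analytics" key is new, the other theme options stay as they are.
html_theme_options = {
    # ... existing options (logo, github_url, icon_links, ...) ...
    "analytics": {
        "plausible_analytics_domain": "awkward-array.org",
        "plausible_analytics_url": "https://views.scientific-python.org/js/plausible.js",
    },
}
```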
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs-sphinx/conf.py`
Content:
```
1 # Configuration file for the Sphinx documentation builder.
2 #
3 # This file only contains a selection of the most common options. For a full
4 # list see the documentation:
5 # https://www.sphinx-doc.org/en/master/usage/configuration.html
6
7 # -- Path setup --------------------------------------------------------------
8
9 # If extensions (or modules to document with autodoc) are in another directory,
10 # add these directories to sys.path here. If the directory is relative to the
11 # documentation root, use os.path.abspath to make it absolute, like shown here.
12 #
13 import os
14 import json
15 import datetime
16 import runpy
17 import sys
18 import subprocess
19 import pathlib
20
21 # -- Project information -----------------------------------------------------
22
23 project = "Awkward Array"
24 copyright = f"{datetime.datetime.now().year}, Awkward Array development team"
25 author = "Jim Pivarski"
26
27 # -- General configuration ---------------------------------------------------
28
29 # Add any Sphinx extension module names here, as strings. They can be
30 # extensions coming with Sphinx (named "sphinx.ext.*") or your custom
31 # ones.
32 extensions = [
33 "sphinx_copybutton",
34 "sphinx_design",
35 "sphinx_external_toc",
36 "sphinx.ext.intersphinx",
37 "myst_nb",
38 # Preserve old links
39 "sphinx_reredirects",
40 "jupyterlite_sphinx",
41 ]
42
43 # Add any paths that contain templates here, relative to this directory.
44 templates_path = ["_templates"]
45
46 # List of patterns, relative to source directory, that match files and
47 # directories to ignore when looking for source files.
48 # This pattern also affects html_static_path and html_extra_path.
49 exclude_patterns = ["_build", "_templates", "Thumbs.db", "jupyter_execute", ".*"]
50
51 # -- Options for HTML output -------------------------------------------------
52
53 # The theme to use for HTML and HTML Help pages. See the documentation for
54 # a list of builtin themes.
55 #
56 html_context = {
57 "github_user": "scikit-hep",
58 "github_repo": "awkward",
59 # TODO: set this
60 "github_version": os.environ.get("READTHEDOCS_VERSION", "main"),
61 "doc_path": "docs-sphinx",
62 }
63 html_theme = "pydata_sphinx_theme"
64 html_show_sourcelink = True
65 html_theme_options = {
66 "logo": {
67 "image_light": "image/logo-300px.png",
68 "image_dark": "image/logo-300px-white.png",
69 },
70 "github_url": "https://github.com/scikit-hep/awkward",
71 # Add light/dark mode and documentation version switcher:
72 "navbar_end": ["theme-switcher", "navbar-icon-links"],
73 "footer_items": ["copyright", "sphinx-version", "funding"],
74 "icon_links": [
75 {
76 "name": "PyPI",
77 "url": "https://pypi.org/project/awkward",
78 "icon": "fab fa-python",
79 }
80 ],
81 "use_edit_page_button": True,
82 "external_links": [
83 {
84 "name": "Contributor guide",
85 "url": "https://github.com/scikit-hep/awkward/blob/main/CONTRIBUTING.md",
86 },
87 {
88 "name": "Release history",
89 "url": "https://github.com/scikit-hep/awkward/releases",
90 },
91 ],
92 }
93
94 # Add any paths that contain custom static files (such as style sheets) here,
95 # relative to this directory. They are copied after the builtin static files,
96 # so a file named "default.css" will overwrite the builtin "default.css".
97 html_static_path = ["_static"]
98 html_css_files = ["css/awkward.css"]
99
100 # MyST settings
101 myst_enable_extensions = [
102 "colon_fence",
103 ]
104
105 nb_execution_mode = "cache"
106 nb_execution_raise_on_error = True
107 # unpkg is currently _very_ slow
108 nb_ipywidgets_js = {
109 # Load RequireJS, used by the IPywidgets for dependency management
110 "https://cdnjs.cloudflare.com/ajax/libs/require.js/2.3.4/require.min.js": {
111 "integrity": "sha256-Ae2Vz/4ePdIu6ZyI/5ZGsYnb+m0JlOmKPjt6XZ9JJkA=",
112 "crossorigin": "anonymous",
113 },
114 # Load IPywidgets bundle for embedding.
115 "https://cdn.jsdelivr.net/npm/@jupyter-widgets/[email protected]/dist/embed-amd.js": {
116 "data-jupyter-widgets-cdn": "https://cdn.jsdelivr.net/npm/",
117 "crossorigin": "anonymous",
118 },
119 }
120 # Additional stuff
121 master_doc = "index"
122
123 # Cross-reference existing Python objects
124 intersphinx_mapping = {
125 "python": ("https://docs.python.org/3/", None),
126 "pandas": ("https://pandas.pydata.org/pandas-docs/stable", None),
127 "numpy": ("https://numpy.org/doc/stable", None),
128 "scipy": ("https://docs.scipy.org/doc/scipy", None),
129 "numba": ("https://numba.pydata.org/numba-doc/latest", None),
130 "arrow": ("https://arrow.apache.org/docs/", None),
131 "jax": ("https://jax.readthedocs.io/en/latest", None),
132 }
133
134 # Preserve legacy routes
135 with open("redirects.json") as f:
136 redirects = json.load(f)
137
138 redirect_html_template_file = "_templates/redirect.html"
139
140 # JupyterLite configuration
141 jupyterlite_dir = "./lite"
142 # Don't override ipynb format
143 jupyterlite_bind_ipynb_suffix = False
144 # We've disabled localstorage, so we must provide the contents explicitly
145 jupyterlite_contents = ["getting-started/demo/*"]
146
147 HERE = pathlib.Path(__file__).parent
148
149 # Generate C++ bindings
150 subprocess.check_call(
151 ["doxygen", str(HERE.parent / "docs-doxygen" / "Doxyfile")], cwd=HERE.parent
152 )
153
154 # Generate Python docstrings
155 runpy.run_path(HERE / "prepare_docstrings.py", run_name="__main__")
156
157 # Generate kernel docs
158 runpy.run_path(HERE.parent / "dev" / "generate-kerneldocs.py", run_name="__main__")
159
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs-sphinx/conf.py b/docs-sphinx/conf.py
--- a/docs-sphinx/conf.py
+++ b/docs-sphinx/conf.py
@@ -89,6 +89,10 @@
"url": "https://github.com/scikit-hep/awkward/releases",
},
],
+ "analytics": {
+ "plausible_analytics_domain": "awkward-array.org",
+ "plausible_analytics_url": "https://views.scientific-python.org/js/plausible.js"
+ }
}
# Add any paths that contain custom static files (such as style sheets) here,
| {"golden_diff": "diff --git a/docs-sphinx/conf.py b/docs-sphinx/conf.py\n--- a/docs-sphinx/conf.py\n+++ b/docs-sphinx/conf.py\n@@ -89,6 +89,10 @@\n \"url\": \"https://github.com/scikit-hep/awkward/releases\",\n },\n ],\n+ \"analytics\": {\n+ \"plausible_analytics_domain\": \"awkward-array.org\",\n+ \"plausible_analytics_url\": \"https://views.scientific-python.org/js/plausible.js\"\n+ }\n }\n \n # Add any paths that contain custom static files (such as style sheets) here,\n", "issue": "Add statistics tracker to the new website\n### Which documentation?\n\nTutorials site\n\n### What needs to be documented?\n\nStefan gave us a `data-domain` for Plausible; we should add this text\r\n\r\n```html\r\n<script defer data-domain=\"awkward-array.org\" src=\"https://views.scientific-python.org/js/plausible.js\"></script>\r\n```\r\n\r\nto every HTML page. The HTML is generated by Sphinx, so it has to go into a template somewhere. I think we don't actually have the default HTML templates; I think that all of the HTML in [docs-sphinx/_templates](https://github.com/scikit-hep/awkward/tree/main/docs-sphinx/_templates) is an override. For instance, the [breadcrumbs.html](https://github.com/scikit-hep/awkward/blob/main/docs-sphinx/_templates/breadcrumbs.html) is just to eliminate something that would be on the default page (the \"fork me on GitHub\" ribbon?).\r\n\r\n@agoose77, you've recently added [funding.html](https://github.com/scikit-hep/awkward/blob/main/docs-sphinx/_templates/funding.html), which appears at the bottom of every page. (I just checked.) If the `<script>` is added there, I think it would reach every page.\r\n\r\nExcept redirects. The [redirect.html](https://github.com/scikit-hep/awkward/blob/main/docs-sphinx/_templates/redirect.html) describes an entire page. It's debatable whether we'd want to count redirects. It would tell us how many people are using old URLs versus new URLs, but I don't see how useful that information is, and it would have to be disaggregated from the totals, since redirected links shouldn't be counted twice if we're interested in how many people went to a given _topic_ (not _page_). I vote for no statistics on redirects, and I'm not even sure if the script will work if the redirect happens through the meta mechanism (because such an access has `<noscript>`.\r\n\r\nArguably, a `<script>` element ought to go in the HTML `<head>`, rather than the `<body>` (`footer.html`). That's where I usually see them. Actually, I stand corrected: they're legal anywhere, and there are reasons to put them in the `<body>`. [This StackOverflow post](https://stackoverflow.com/a/24070373/1623645) presents the pros and cons: page rendering will pause while a script is being downloaded an executed, and that's no good. We could follow that page's \"antiquated recommendation\" by putting the `<script>` at the end of the page (`footer.html`); the reason against it doesn't apply: we don't need the statistics-counter script to run to render the page\u2014that can happen late. The \"modern approach\" is to use `async` or `defer`, which I just noticed is in our snippet, so there are no constraints on when this snippet can be placed. 
(And it could be `async`, rather than `defer`, because we don't care whether it runs before or after other scripts on the page.)\r\n\r\nThe only argument I can see for putting it in the `<head>`, then, is that if the statistics-counter starts too late, we could undercount our bounce rate: users click the back button before rendering gets to the `<script>` and the page view gets counted. There will always be some cut-off in the bounce rate, much like $p_T$ in a tracking distribution, since very small values are hard to measure. Having a lower implicit cut on bounce rate, rather than a higher implicit cut, doesn't sound very important to me.\r\n\r\nTrying to get it into the `<head>` would mean overloading more templates, and I don't like to overload Sphinx templates because it means we no longer get version updates for that part of the page, and who knows if the template designer intends some relationship between two parts of a page, starting with a particular version number? So I'm in favor of adding the `<script>` to `footer.html`, and `defer` may be replaced by `async` just to loosen an unnecessary constraint for the browser.\r\n\r\nSorry for the long-winded write-up; just thinking through the issues while I type!\r\n\r\nOh, one last thing: let's add the statistics-counter to the new documentation _only_. In other words, _not_ the one with the `latest` tag. The v1 documentation (`latest`) is split between two sites, I don't want to add it to the Netlify site, and the statistics will be easier to interpret if we have it on only one site: we'll know what page is meant by a given URL. We should also see the turn-on curve when the new documentation goes public. If we include the old documentation, we might not be able to filter it out of the statistics, since some of the URLs are the same.\n", "before_files": [{"content": "# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nimport os\nimport json\nimport datetime\nimport runpy\nimport sys\nimport subprocess\nimport pathlib\n\n# -- Project information -----------------------------------------------------\n\nproject = \"Awkward Array\"\ncopyright = f\"{datetime.datetime.now().year}, Awkward Array development team\"\nauthor = \"Jim Pivarski\"\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named \"sphinx.ext.*\") or your custom\n# ones.\nextensions = [\n \"sphinx_copybutton\",\n \"sphinx_design\",\n \"sphinx_external_toc\",\n \"sphinx.ext.intersphinx\",\n \"myst_nb\",\n # Preserve old links\n \"sphinx_reredirects\",\n \"jupyterlite_sphinx\",\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = [\"_build\", \"_templates\", \"Thumbs.db\", \"jupyter_execute\", \".*\"]\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_context = {\n \"github_user\": \"scikit-hep\",\n \"github_repo\": \"awkward\",\n # TODO: set this\n \"github_version\": os.environ.get(\"READTHEDOCS_VERSION\", \"main\"),\n \"doc_path\": \"docs-sphinx\",\n}\nhtml_theme = \"pydata_sphinx_theme\"\nhtml_show_sourcelink = True\nhtml_theme_options = {\n \"logo\": {\n \"image_light\": \"image/logo-300px.png\",\n \"image_dark\": \"image/logo-300px-white.png\",\n },\n \"github_url\": \"https://github.com/scikit-hep/awkward\",\n # Add light/dark mode and documentation version switcher:\n \"navbar_end\": [\"theme-switcher\", \"navbar-icon-links\"],\n \"footer_items\": [\"copyright\", \"sphinx-version\", \"funding\"],\n \"icon_links\": [\n {\n \"name\": \"PyPI\",\n \"url\": \"https://pypi.org/project/awkward\",\n \"icon\": \"fab fa-python\",\n }\n ],\n \"use_edit_page_button\": True,\n \"external_links\": [\n {\n \"name\": \"Contributor guide\",\n \"url\": \"https://github.com/scikit-hep/awkward/blob/main/CONTRIBUTING.md\",\n },\n {\n \"name\": \"Release history\",\n \"url\": \"https://github.com/scikit-hep/awkward/releases\",\n },\n ],\n}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\nhtml_css_files = [\"css/awkward.css\"]\n\n# MyST settings\nmyst_enable_extensions = [\n \"colon_fence\",\n]\n\nnb_execution_mode = \"cache\"\nnb_execution_raise_on_error = True\n# unpkg is currently _very_ slow\nnb_ipywidgets_js = {\n # Load RequireJS, used by the IPywidgets for dependency management\n \"https://cdnjs.cloudflare.com/ajax/libs/require.js/2.3.4/require.min.js\": {\n \"integrity\": \"sha256-Ae2Vz/4ePdIu6ZyI/5ZGsYnb+m0JlOmKPjt6XZ9JJkA=\",\n \"crossorigin\": \"anonymous\",\n },\n # Load IPywidgets bundle for embedding.\n \"https://cdn.jsdelivr.net/npm/@jupyter-widgets/[email protected]/dist/embed-amd.js\": {\n \"data-jupyter-widgets-cdn\": \"https://cdn.jsdelivr.net/npm/\",\n \"crossorigin\": \"anonymous\",\n },\n}\n# Additional stuff\nmaster_doc = \"index\"\n\n# Cross-reference existing Python objects\nintersphinx_mapping = {\n \"python\": (\"https://docs.python.org/3/\", None),\n \"pandas\": (\"https://pandas.pydata.org/pandas-docs/stable\", None),\n \"numpy\": (\"https://numpy.org/doc/stable\", None),\n \"scipy\": (\"https://docs.scipy.org/doc/scipy\", None),\n \"numba\": (\"https://numba.pydata.org/numba-doc/latest\", None),\n \"arrow\": (\"https://arrow.apache.org/docs/\", None),\n \"jax\": (\"https://jax.readthedocs.io/en/latest\", None),\n}\n\n# Preserve legacy routes\nwith open(\"redirects.json\") as f:\n redirects = json.load(f)\n\nredirect_html_template_file = \"_templates/redirect.html\"\n\n# JupyterLite configuration\njupyterlite_dir = \"./lite\"\n# Don't override ipynb format\njupyterlite_bind_ipynb_suffix = False\n# We've disabled localstorage, so we must provide the contents explicitly\njupyterlite_contents = [\"getting-started/demo/*\"]\n\nHERE = pathlib.Path(__file__).parent\n\n# Generate C++ bindings\nsubprocess.check_call(\n [\"doxygen\", str(HERE.parent / \"docs-doxygen\" / \"Doxyfile\")], cwd=HERE.parent\n)\n\n# Generate Python docstrings\nrunpy.run_path(HERE / \"prepare_docstrings.py\", run_name=\"__main__\")\n\n# Generate kernel docs\nrunpy.run_path(HERE.parent / \"dev\" / \"generate-kerneldocs.py\", run_name=\"__main__\")\n", "path": "docs-sphinx/conf.py"}], "after_files": [{"content": "# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nimport os\nimport json\nimport datetime\nimport runpy\nimport sys\nimport subprocess\nimport pathlib\n\n# -- Project information -----------------------------------------------------\n\nproject = \"Awkward Array\"\ncopyright = f\"{datetime.datetime.now().year}, Awkward Array development team\"\nauthor = \"Jim Pivarski\"\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named \"sphinx.ext.*\") or your custom\n# ones.\nextensions = [\n \"sphinx_copybutton\",\n \"sphinx_design\",\n \"sphinx_external_toc\",\n \"sphinx.ext.intersphinx\",\n \"myst_nb\",\n # Preserve old links\n \"sphinx_reredirects\",\n \"jupyterlite_sphinx\",\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = [\"_build\", \"_templates\", \"Thumbs.db\", \"jupyter_execute\", \".*\"]\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_context = {\n \"github_user\": \"scikit-hep\",\n \"github_repo\": \"awkward\",\n # TODO: set this\n \"github_version\": os.environ.get(\"READTHEDOCS_VERSION\", \"main\"),\n \"doc_path\": \"docs-sphinx\",\n}\nhtml_theme = \"pydata_sphinx_theme\"\nhtml_show_sourcelink = True\nhtml_theme_options = {\n \"logo\": {\n \"image_light\": \"image/logo-300px.png\",\n \"image_dark\": \"image/logo-300px-white.png\",\n },\n \"github_url\": \"https://github.com/scikit-hep/awkward\",\n # Add light/dark mode and documentation version switcher:\n \"navbar_end\": [\"theme-switcher\", \"navbar-icon-links\"],\n \"footer_items\": [\"copyright\", \"sphinx-version\", \"funding\"],\n \"icon_links\": [\n {\n \"name\": \"PyPI\",\n \"url\": \"https://pypi.org/project/awkward\",\n \"icon\": \"fab fa-python\",\n }\n ],\n \"use_edit_page_button\": True,\n \"external_links\": [\n {\n \"name\": \"Contributor guide\",\n \"url\": \"https://github.com/scikit-hep/awkward/blob/main/CONTRIBUTING.md\",\n },\n {\n \"name\": \"Release history\",\n \"url\": \"https://github.com/scikit-hep/awkward/releases\",\n },\n ],\n \"analytics\": {\n \"plausible_analytics_domain\": \"awkward-array.org\",\n \"plausible_analytics_url\": \"https://views.scientific-python.org/js/plausible.js\"\n }\n}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\nhtml_css_files = [\"css/awkward.css\"]\n\n# MyST settings\nmyst_enable_extensions = [\n \"colon_fence\",\n]\n\nnb_execution_mode = \"cache\"\nnb_execution_raise_on_error = True\n# unpkg is currently _very_ slow\nnb_ipywidgets_js = {\n # Load RequireJS, used by the IPywidgets for dependency management\n \"https://cdnjs.cloudflare.com/ajax/libs/require.js/2.3.4/require.min.js\": {\n \"integrity\": \"sha256-Ae2Vz/4ePdIu6ZyI/5ZGsYnb+m0JlOmKPjt6XZ9JJkA=\",\n \"crossorigin\": \"anonymous\",\n },\n # Load IPywidgets bundle for embedding.\n \"https://cdn.jsdelivr.net/npm/@jupyter-widgets/[email protected]/dist/embed-amd.js\": {\n \"data-jupyter-widgets-cdn\": \"https://cdn.jsdelivr.net/npm/\",\n \"crossorigin\": \"anonymous\",\n },\n}\n# Additional stuff\nmaster_doc = \"index\"\n\n# Cross-reference existing Python objects\nintersphinx_mapping = {\n \"python\": (\"https://docs.python.org/3/\", None),\n \"pandas\": (\"https://pandas.pydata.org/pandas-docs/stable\", None),\n \"numpy\": (\"https://numpy.org/doc/stable\", None),\n \"scipy\": (\"https://docs.scipy.org/doc/scipy\", None),\n \"numba\": (\"https://numba.pydata.org/numba-doc/latest\", None),\n \"arrow\": (\"https://arrow.apache.org/docs/\", None),\n \"jax\": (\"https://jax.readthedocs.io/en/latest\", None),\n}\n\n# Preserve legacy routes\nwith open(\"redirects.json\") as f:\n redirects = json.load(f)\n\nredirect_html_template_file = \"_templates/redirect.html\"\n\n# JupyterLite configuration\njupyterlite_dir = \"./lite\"\n# Don't override ipynb format\njupyterlite_bind_ipynb_suffix = False\n# We've disabled localstorage, so we must provide the contents explicitly\njupyterlite_contents = [\"getting-started/demo/*\"]\n\nHERE = pathlib.Path(__file__).parent\n\n# Generate C++ bindings\nsubprocess.check_call(\n [\"doxygen\", str(HERE.parent / \"docs-doxygen\" / \"Doxyfile\")], cwd=HERE.parent\n)\n\n# Generate Python docstrings\nrunpy.run_path(HERE / \"prepare_docstrings.py\")\n\n# Generate kernel docs\nrunpy.run_path(HERE.parent / \"dev\" / \"generate-kerneldocs.py\")\n", "path": "docs-sphinx/conf.py"}]} |
gh_patches_debug_1116 | rasdani/github-patches | git_diff | horovod__horovod-2121 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Error in computing gradients when using allgather
**Environment:**
1. Framework: TensorFlow
2. Framework version: 2.0
3. Horovod version: 0.18.2
I am trying to get the median of a tensor computed across all batches and all processes. However, I got an error: `TypeError: Expected int32, got None of type 'NoneType' instead`. It seems that computing gradients does not work well with Horovod's allgather operation. A simple illustration of what I would like to achieve is as follows:

    with tf.GradientTape() as tape:
        my_tensor = compute_my_tensor()
        gathered_my_tensor = hvd.allgather(my_tensor)
        median = get_median(gathered_my_tensor)
        loss = get_loss(my_tensor, median, training=True)
    tape = hvd.DistributedGradientTape(tape)
    grads = tape.gradient(loss, trainable_variables)
    optimizer.apply_gradients(zip(grads, trainable_variables))

BTW, when I use TensorFlow's eager mode, there is no error.
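Editor's note: the error comes from `_allgather_grad` in `horovod/tensorflow/mpi_ops.py` (shown below), which builds the split sizes from the static shape via `x.get_shape().as_list()[0]`; in graph mode that value can be `None`, which is what produces the `Expected int32, got None` error. The sketch below shows one way to make the split sizes dynamic. It is an illustration only, written against the names in that module, and not necessarily the exact change that was merged upstream:

```python
# Hypothetical variant of the allgather gradient; op, grad, _allreduce,
# allgather, size and rank are the names from horovod/tensorflow/mpi_ops.py.
@ops.RegisterGradient('HorovodAllgather')
def _allgather_grad(op, grad):
    grad = _allreduce(grad)

    with tf.device('/cpu:0'):
        x = op.inputs[0]
        # tf.shape yields concrete int32 values at run time, unlike
        # x.get_shape(), which can contain None under graph mode.
        d = tf.shape(x, out_type=tf.int32)[0:1]

        s = size()
        d = tf.reshape(allgather(d), [s])

    splits = tf.split(grad, num_or_size_splits=d, axis=0)
    return splits[rank()]
```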
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `horovod/tensorflow/mpi_ops.py`
Content:
```
1 # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
2 # Modifications copyright (C) 2019 Uber Technologies, Inc.
3 # Modifications copyright Microsoft
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16 # =============================================================================
17 """Inter-process communication using MPI."""
18
19 import re
20 import tensorflow as tf
21 from tensorflow.python.framework import load_library
22 from tensorflow.python.framework import ops
23 from tensorflow.python.platform import resource_loader
24
25 from horovod.common.util import get_ext_suffix, get_average_backwards_compatibility_fun, gpu_available, \
26 num_rank_is_power_2
27 from horovod.common.basics import HorovodBasics as _HorovodBasics
28 from horovod.tensorflow.util import _executing_eagerly
29
30
31 def _load_library(name):
32 """Loads a .so file containing the specified operators.
33
34 Args:
35 name: The name of the .so file to load.
36
37 Raises:
38 NotFoundError if were not able to load .so file.
39 """
40 filename = resource_loader.get_path_to_datafile(name)
41 library = load_library.load_op_library(filename)
42 return library
43
44
45 MPI_LIB = _load_library('mpi_lib' + get_ext_suffix())
46
47 _basics = _HorovodBasics(__file__, 'mpi_lib')
48
49 # import basic methods
50 init = _basics.init
51 shutdown = _basics.shutdown
52 size = _basics.size
53 local_size = _basics.local_size
54 rank = _basics.rank
55 local_rank = _basics.local_rank
56 mpi_threads_supported = _basics.mpi_threads_supported
57 mpi_enabled = _basics.mpi_enabled
58 mpi_built = _basics.mpi_built
59 gloo_enabled = _basics.gloo_enabled
60 gloo_built = _basics.gloo_built
61 nccl_built = _basics.nccl_built
62 ddl_built = _basics.ddl_built
63 ccl_built = _basics.ccl_built
64
65 # import reduction op values
66 Average = _basics.Average
67 Sum = _basics.Sum
68 Adasum = _basics.Adasum
69
70 is_homogeneous = _basics.is_homogeneous
71
72 handle_average_backwards_compatibility = get_average_backwards_compatibility_fun(_basics)
73
74 check_num_rank_power_of_2 = num_rank_is_power_2
75
76
77 # This function will create a default device map which includes all visible devices.
78 # Please run this function in a subprocess
79 def _check_has_gpu():
80 import tensorflow as tf
81 return tf.test.is_gpu_available()
82
83
84 def _normalize_name(name):
85 """Normalizes operation name to TensorFlow rules."""
86 return re.sub('[^a-zA-Z0-9_]', '_', name)
87
88
89 def _allreduce(tensor, name=None, op=Sum):
90 """An op which reduces an input tensor over all the Horovod processes. The
91 default reduction is a sum.
92
93 The reduction operation is keyed by the name of the op. The tensor type and
94 shape must be the same on all Horovod processes for a given name. The reduction
95 will not start until all processes are ready to send and receive the tensor.
96
97 Returns:
98 A tensor of the same shape and type as `tensor`, summed across all
99 processes.
100 """
101 if name is None and not _executing_eagerly():
102 name = 'HorovodAllreduce_%s' % _normalize_name(tensor.name)
103 return MPI_LIB.horovod_allreduce(tensor, name=name, reduce_op=op)
104
105
106 @ops.RegisterGradient('HorovodAllreduce')
107 def _allreduce_grad(op, grad):
108 """Gradient for allreduce op.
109
110 Args:
111 op: An operation.
112 grad: `Tensor` gradient with respect to the output of the op.
113
114 Returns:
115 The gradient with respect to the input of the op.
116 """
117 reduce_op = op.get_attr('reduce_op')
118 return _allreduce(grad, op=reduce_op)
119
120
121 def allgather(tensor, name=None):
122 """An op which concatenates the input tensor with the same input tensor on
123 all other Horovod processes.
124
125 The concatenation is done on the first dimension, so the input tensors on the
126 different processes must have the same rank and shape, except for the first
127 dimension, which is allowed to be different.
128
129 Returns:
130 A tensor of the same type as `tensor`, concatenated on dimension zero
131 across all processes. The shape is identical to the input shape, except for
132 the first dimension, which may be greater and is the sum of all first
133 dimensions of the tensors in different Horovod processes.
134 """
135 if name is None and not _executing_eagerly():
136 name = 'HorovodAllgather_%s' % _normalize_name(tensor.name)
137 return MPI_LIB.horovod_allgather(tensor, name=name)
138
139
140 @ops.RegisterGradient('HorovodAllgather')
141 def _allgather_grad(op, grad):
142 """Gradient for allgather op.
143
144 Args:
145 op: An operation.
146 grad: `Tensor` gradient with respect to the output of the op.
147
148 Returns:
149 The gradient with respect to the input of the op.
150 """
151 grad = _allreduce(grad)
152
153 with tf.device('/cpu:0'):
154 # Keep the tensor of split sizes on CPU.
155 x = op.inputs[0]
156 d0 = x.get_shape().as_list()[0]
157 d = tf.convert_to_tensor([d0], dtype=tf.int32)
158
159 s = size()
160 d = tf.reshape(allgather(d), [s])
161
162 splits = tf.split(grad, num_or_size_splits=d, axis=0)
163 return splits[rank()]
164
165
166 def broadcast(tensor, root_rank, name=None):
167 """An op which broadcasts the input tensor on root rank to the same input tensor
168 on all other Horovod processes.
169
170 The broadcast operation is keyed by the name of the op. The tensor type and
171 shape must be the same on all Horovod processes for a given name. The broadcast
172 will not start until all processes are ready to send and receive the tensor.
173
174 Returns:
175 A tensor of the same shape and type as `tensor`, with the value broadcasted
176 from root rank.
177 """
178 if name is None and not _executing_eagerly():
179 name = 'HorovodBroadcast_%s' % _normalize_name(tensor.name)
180 return MPI_LIB.horovod_broadcast(tensor, name=name, root_rank=root_rank)
181
182
183 @ops.RegisterGradient('HorovodBroadcast')
184 def _broadcast_grad(op, grad):
185 """Gradient for broadcast op.
186
187 Args:
188 op: An operation.
189 grad: `Tensor` gradient with respect to the output of the op.
190
191 Returns:
192 The gradient with respect to the input of the op.
193 """
194 root_rank = op.get_attr('root_rank')
195 grad_reduced = _allreduce(grad)
196 if rank() != root_rank:
197 return grad_reduced * 0
198 return grad_reduced
199
200
201 def join():
202 return MPI_LIB.horovod_join()
203
204
205 def size_op(name=None):
206 """An op that returns the number of Horovod processes.
207
208 This operation determines the return value at the graph execution time,
209 rather than at the graph construction time, and so allows for a graph to be
210 constructed in a different environment than where it will be executed.
211
212 Returns:
213 An integer scalar containing the number of Horovod processes.
214 """
215 return MPI_LIB.horovod_size(name=name)
216
217
218 ops.NotDifferentiable('HorovodSize')
219
220
221 def local_size_op(name=None):
222 """An op that returns the number of Horovod processes within the
223 node the current process is running on.
224
225 This operation determines the return value at the graph execution time,
226 rather than at the graph construction time, and so allows for a graph to be
227 constructed in a different environment than where it will be executed.
228
229 Returns:
230 An integer scalar containing the number of local Horovod processes.
231 """
232 return MPI_LIB.horovod_local_size(name=name)
233
234
235 ops.NotDifferentiable('HorovodLocalSize')
236
237
238 def rank_op(name=None):
239 """An op that returns the Horovod rank of the calling process.
240
241 This operation determines the return value at the graph execution time,
242 rather than at the graph construction time, and so allows for a graph to be
243 constructed in a different environment than where it will be executed.
244
245 Returns:
246 An integer scalar with the Horovod rank of the calling process.
247 """
248 return MPI_LIB.horovod_rank(name=name)
249
250
251 ops.NotDifferentiable('HorovodRank')
252
253
254 def local_rank_op(name=None):
255 """An op that returns the local Horovod rank of the calling process, within the
256 node that it is running on. For example, if there are seven processes running
257 on a node, their local ranks will be zero through six, inclusive.
258
259 This operation determines the return value at the graph execution time,
260 rather than at the graph construction time, and so allows for a graph to be
261 constructed in a different environment than where it will be executed.
262
263 Returns:
264 An integer scalar with the local Horovod rank of the calling process.
265 """
266 return MPI_LIB.horovod_rank(name=name)
267
268
269 ops.NotDifferentiable('HorovodLocalRank')
270
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/horovod/tensorflow/mpi_ops.py b/horovod/tensorflow/mpi_ops.py
--- a/horovod/tensorflow/mpi_ops.py
+++ b/horovod/tensorflow/mpi_ops.py
@@ -152,8 +152,8 @@
with tf.device('/cpu:0'):
# Keep the tensor of split sizes on CPU.
x = op.inputs[0]
- d0 = x.get_shape().as_list()[0]
- d = tf.convert_to_tensor([d0], dtype=tf.int32)
+ d = tf.shape(x)
+ d = tf.reshape(d[0], [1])
s = size()
d = tf.reshape(allgather(d), [s])
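
The patch replaces a static shape lookup with a dynamic one. Why that only matters under graph tracing can be seen in a small standalone sketch (plain TensorFlow, no Horovod; the `[None, 128]` signature is just an assumed example of a tensor whose first dimension is unknown at trace time):

```python
import tensorflow as tf

@tf.function(input_signature=[tf.TensorSpec(shape=[None, 128], dtype=tf.float32)])
def first_dim(x):
    # Static shape at trace time is [None, 128]; the leading None is what made
    # tf.convert_to_tensor([d0], dtype=tf.int32) fail in the old gradient code.
    print("static shape:", x.get_shape().as_list())

    # Dynamic shape is an int32 tensor evaluated when the graph runs, so it
    # always carries the real batch size, the same pattern the patch uses.
    d = tf.shape(x)
    return tf.reshape(d[0], [1])

print(first_dim(tf.zeros([7, 128])))  # tf.Tensor([7], shape=(1,), dtype=int32)
```

In eager execution every shape is concrete, which matches the report that the error only shows up outside eager mode.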
| {"golden_diff": "diff --git a/horovod/tensorflow/mpi_ops.py b/horovod/tensorflow/mpi_ops.py\n--- a/horovod/tensorflow/mpi_ops.py\n+++ b/horovod/tensorflow/mpi_ops.py\n@@ -152,8 +152,8 @@\n with tf.device('/cpu:0'):\n # Keep the tensor of split sizes on CPU.\n x = op.inputs[0]\n- d0 = x.get_shape().as_list()[0]\n- d = tf.convert_to_tensor([d0], dtype=tf.int32)\n+ d = tf.shape(x)\n+ d = tf.reshape(d[0], [1])\n \n s = size()\n d = tf.reshape(allgather(d), [s])\n", "issue": "Error in computing gradients when using allgather\n**Environment:**\r\n1. Framework: TensorFlow\r\n2. Framework version: 2.0\r\n3. Horovod version: 0.18.2\r\n\r\nI am trying to get the median of a tensor computed across all batches and all processes. However, I got an error TypeError: Expected int32, got None of type 'NoneType' instead.It seems that computing gradients does not work well with horovod's allgather operation. A simple illustration of what I would like to achieve is as follows:\r\n\r\n>with tf.GradientTape() as tape: \r\n    my_tensor = compute_my_tensor() \r\n    gathered_my_tensor = hvd.allgather(my_tensor) \r\n    median = get_median(gathered_my_tensor)\r\n    loss = get_loss(my_tensor, median, training=True)\r\ntape = hvd.DistributedGradientTape(tape)\r\ngrads = tape.gradient(loss, trainable_variables)\r\noptimizer.apply_gradients(zip(grads, trainable_variables))\r\n\r\nBTW, when I use eager mode of tensorflow, there will be no error\r\n\r\n\n", "before_files": [{"content": "# Copyright 2016 The TensorFlow Authors. All Rights Reserved.\n# Modifications copyright (C) 2019 Uber Technologies, Inc.\n# Modifications copyright Microsoft\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# =============================================================================\n\"\"\"Inter-process communication using MPI.\"\"\"\n\nimport re\nimport tensorflow as tf\nfrom tensorflow.python.framework import load_library\nfrom tensorflow.python.framework import ops\nfrom tensorflow.python.platform import resource_loader\n\nfrom horovod.common.util import get_ext_suffix, get_average_backwards_compatibility_fun, gpu_available, \\\n num_rank_is_power_2\nfrom horovod.common.basics import HorovodBasics as _HorovodBasics\nfrom horovod.tensorflow.util import _executing_eagerly\n\n\ndef _load_library(name):\n \"\"\"Loads a .so file containing the specified operators.\n\n Args:\n name: The name of the .so file to load.\n\n Raises:\n NotFoundError if were not able to load .so file.\n \"\"\"\n filename = resource_loader.get_path_to_datafile(name)\n library = load_library.load_op_library(filename)\n return library\n\n\nMPI_LIB = _load_library('mpi_lib' + get_ext_suffix())\n\n_basics = _HorovodBasics(__file__, 'mpi_lib')\n\n# import basic methods\ninit = _basics.init\nshutdown = _basics.shutdown\nsize = _basics.size\nlocal_size = _basics.local_size\nrank = _basics.rank\nlocal_rank = _basics.local_rank\nmpi_threads_supported = _basics.mpi_threads_supported\nmpi_enabled = _basics.mpi_enabled\nmpi_built = _basics.mpi_built\ngloo_enabled = 
_basics.gloo_enabled\ngloo_built = _basics.gloo_built\nnccl_built = _basics.nccl_built\nddl_built = _basics.ddl_built\nccl_built = _basics.ccl_built\n\n# import reduction op values\nAverage = _basics.Average\nSum = _basics.Sum\nAdasum = _basics.Adasum\n\nis_homogeneous = _basics.is_homogeneous\n\nhandle_average_backwards_compatibility = get_average_backwards_compatibility_fun(_basics)\n\ncheck_num_rank_power_of_2 = num_rank_is_power_2\n\n\n# This function will create a default device map which includes all visible devices.\n# Please run this function in a subprocess\ndef _check_has_gpu():\n import tensorflow as tf\n return tf.test.is_gpu_available()\n\n\ndef _normalize_name(name):\n \"\"\"Normalizes operation name to TensorFlow rules.\"\"\"\n return re.sub('[^a-zA-Z0-9_]', '_', name)\n\n\ndef _allreduce(tensor, name=None, op=Sum):\n \"\"\"An op which reduces an input tensor over all the Horovod processes. The\n default reduction is a sum.\n\n The reduction operation is keyed by the name of the op. The tensor type and\n shape must be the same on all Horovod processes for a given name. The reduction\n will not start until all processes are ready to send and receive the tensor.\n\n Returns:\n A tensor of the same shape and type as `tensor`, summed across all\n processes.\n \"\"\"\n if name is None and not _executing_eagerly():\n name = 'HorovodAllreduce_%s' % _normalize_name(tensor.name)\n return MPI_LIB.horovod_allreduce(tensor, name=name, reduce_op=op)\n\n\[email protected]('HorovodAllreduce')\ndef _allreduce_grad(op, grad):\n \"\"\"Gradient for allreduce op.\n\n Args:\n op: An operation.\n grad: `Tensor` gradient with respect to the output of the op.\n\n Returns:\n The gradient with respect to the input of the op.\n \"\"\"\n reduce_op = op.get_attr('reduce_op')\n return _allreduce(grad, op=reduce_op)\n\n\ndef allgather(tensor, name=None):\n \"\"\"An op which concatenates the input tensor with the same input tensor on\n all other Horovod processes.\n\n The concatenation is done on the first dimension, so the input tensors on the\n different processes must have the same rank and shape, except for the first\n dimension, which is allowed to be different.\n\n Returns:\n A tensor of the same type as `tensor`, concatenated on dimension zero\n across all processes. The shape is identical to the input shape, except for\n the first dimension, which may be greater and is the sum of all first\n dimensions of the tensors in different Horovod processes.\n \"\"\"\n if name is None and not _executing_eagerly():\n name = 'HorovodAllgather_%s' % _normalize_name(tensor.name)\n return MPI_LIB.horovod_allgather(tensor, name=name)\n\n\[email protected]('HorovodAllgather')\ndef _allgather_grad(op, grad):\n \"\"\"Gradient for allgather op.\n\n Args:\n op: An operation.\n grad: `Tensor` gradient with respect to the output of the op.\n\n Returns:\n The gradient with respect to the input of the op.\n \"\"\"\n grad = _allreduce(grad)\n\n with tf.device('/cpu:0'):\n # Keep the tensor of split sizes on CPU.\n x = op.inputs[0]\n d0 = x.get_shape().as_list()[0]\n d = tf.convert_to_tensor([d0], dtype=tf.int32)\n\n s = size()\n d = tf.reshape(allgather(d), [s])\n\n splits = tf.split(grad, num_or_size_splits=d, axis=0)\n return splits[rank()]\n\n\ndef broadcast(tensor, root_rank, name=None):\n \"\"\"An op which broadcasts the input tensor on root rank to the same input tensor\n on all other Horovod processes.\n\n The broadcast operation is keyed by the name of the op. 
The tensor type and\n shape must be the same on all Horovod processes for a given name. The broadcast\n will not start until all processes are ready to send and receive the tensor.\n\n Returns:\n A tensor of the same shape and type as `tensor`, with the value broadcasted\n from root rank.\n \"\"\"\n if name is None and not _executing_eagerly():\n name = 'HorovodBroadcast_%s' % _normalize_name(tensor.name)\n return MPI_LIB.horovod_broadcast(tensor, name=name, root_rank=root_rank)\n\n\[email protected]('HorovodBroadcast')\ndef _broadcast_grad(op, grad):\n \"\"\"Gradient for broadcast op.\n\n Args:\n op: An operation.\n grad: `Tensor` gradient with respect to the output of the op.\n\n Returns:\n The gradient with respect to the input of the op.\n \"\"\"\n root_rank = op.get_attr('root_rank')\n grad_reduced = _allreduce(grad)\n if rank() != root_rank:\n return grad_reduced * 0\n return grad_reduced\n\n\ndef join():\n return MPI_LIB.horovod_join()\n\n\ndef size_op(name=None):\n \"\"\"An op that returns the number of Horovod processes.\n\n This operation determines the return value at the graph execution time,\n rather than at the graph construction time, and so allows for a graph to be\n constructed in a different environment than where it will be executed.\n\n Returns:\n An integer scalar containing the number of Horovod processes.\n \"\"\"\n return MPI_LIB.horovod_size(name=name)\n\n\nops.NotDifferentiable('HorovodSize')\n\n\ndef local_size_op(name=None):\n \"\"\"An op that returns the number of Horovod processes within the\n node the current process is running on.\n\n This operation determines the return value at the graph execution time,\n rather than at the graph construction time, and so allows for a graph to be\n constructed in a different environment than where it will be executed.\n\n Returns:\n An integer scalar containing the number of local Horovod processes.\n \"\"\"\n return MPI_LIB.horovod_local_size(name=name)\n\n\nops.NotDifferentiable('HorovodLocalSize')\n\n\ndef rank_op(name=None):\n \"\"\"An op that returns the Horovod rank of the calling process.\n\n This operation determines the return value at the graph execution time,\n rather than at the graph construction time, and so allows for a graph to be\n constructed in a different environment than where it will be executed.\n\n Returns:\n An integer scalar with the Horovod rank of the calling process.\n \"\"\"\n return MPI_LIB.horovod_rank(name=name)\n\n\nops.NotDifferentiable('HorovodRank')\n\n\ndef local_rank_op(name=None):\n \"\"\"An op that returns the local Horovod rank of the calling process, within the\n node that it is running on. For example, if there are seven processes running\n on a node, their local ranks will be zero through six, inclusive.\n\n This operation determines the return value at the graph execution time,\n rather than at the graph construction time, and so allows for a graph to be\n constructed in a different environment than where it will be executed.\n\n Returns:\n An integer scalar with the local Horovod rank of the calling process.\n \"\"\"\n return MPI_LIB.horovod_rank(name=name)\n\n\nops.NotDifferentiable('HorovodLocalRank')\n", "path": "horovod/tensorflow/mpi_ops.py"}], "after_files": [{"content": "# Copyright 2016 The TensorFlow Authors. 
All Rights Reserved.\n# Modifications copyright (C) 2019 Uber Technologies, Inc.\n# Modifications copyright Microsoft\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# =============================================================================\n\"\"\"Inter-process communication using MPI.\"\"\"\n\nimport re\nimport tensorflow as tf\nfrom tensorflow.python.framework import load_library\nfrom tensorflow.python.framework import ops\nfrom tensorflow.python.platform import resource_loader\n\nfrom horovod.common.util import get_ext_suffix, get_average_backwards_compatibility_fun, gpu_available, \\\n num_rank_is_power_2\nfrom horovod.common.basics import HorovodBasics as _HorovodBasics\nfrom horovod.tensorflow.util import _executing_eagerly\n\n\ndef _load_library(name):\n \"\"\"Loads a .so file containing the specified operators.\n\n Args:\n name: The name of the .so file to load.\n\n Raises:\n NotFoundError if were not able to load .so file.\n \"\"\"\n filename = resource_loader.get_path_to_datafile(name)\n library = load_library.load_op_library(filename)\n return library\n\n\nMPI_LIB = _load_library('mpi_lib' + get_ext_suffix())\n\n_basics = _HorovodBasics(__file__, 'mpi_lib')\n\n# import basic methods\ninit = _basics.init\nshutdown = _basics.shutdown\nsize = _basics.size\nlocal_size = _basics.local_size\nrank = _basics.rank\nlocal_rank = _basics.local_rank\nmpi_threads_supported = _basics.mpi_threads_supported\nmpi_enabled = _basics.mpi_enabled\nmpi_built = _basics.mpi_built\ngloo_enabled = _basics.gloo_enabled\ngloo_built = _basics.gloo_built\nnccl_built = _basics.nccl_built\nddl_built = _basics.ddl_built\nccl_built = _basics.ccl_built\n\n# import reduction op values\nAverage = _basics.Average\nSum = _basics.Sum\nAdasum = _basics.Adasum\n\nis_homogeneous = _basics.is_homogeneous\n\nhandle_average_backwards_compatibility = get_average_backwards_compatibility_fun(_basics)\n\ncheck_num_rank_power_of_2 = num_rank_is_power_2\n\n\n# This function will create a default device map which includes all visible devices.\n# Please run this function in a subprocess\ndef _check_has_gpu():\n import tensorflow as tf\n return tf.test.is_gpu_available()\n\n\ndef _normalize_name(name):\n \"\"\"Normalizes operation name to TensorFlow rules.\"\"\"\n return re.sub('[^a-zA-Z0-9_]', '_', name)\n\n\ndef _allreduce(tensor, name=None, op=Sum):\n \"\"\"An op which reduces an input tensor over all the Horovod processes. The\n default reduction is a sum.\n\n The reduction operation is keyed by the name of the op. The tensor type and\n shape must be the same on all Horovod processes for a given name. 
The reduction\n will not start until all processes are ready to send and receive the tensor.\n\n Returns:\n A tensor of the same shape and type as `tensor`, summed across all\n processes.\n \"\"\"\n if name is None and not _executing_eagerly():\n name = 'HorovodAllreduce_%s' % _normalize_name(tensor.name)\n return MPI_LIB.horovod_allreduce(tensor, name=name, reduce_op=op)\n\n\[email protected]('HorovodAllreduce')\ndef _allreduce_grad(op, grad):\n \"\"\"Gradient for allreduce op.\n\n Args:\n op: An operation.\n grad: `Tensor` gradient with respect to the output of the op.\n\n Returns:\n The gradient with respect to the input of the op.\n \"\"\"\n return _allreduce(grad)\n\n\ndef allgather(tensor, name=None):\n \"\"\"An op which concatenates the input tensor with the same input tensor on\n all other Horovod processes.\n\n The concatenation is done on the first dimension, so the input tensors on the\n different processes must have the same rank and shape, except for the first\n dimension, which is allowed to be different.\n\n Returns:\n A tensor of the same type as `tensor`, concatenated on dimension zero\n across all processes. The shape is identical to the input shape, except for\n the first dimension, which may be greater and is the sum of all first\n dimensions of the tensors in different Horovod processes.\n \"\"\"\n if name is None and not _executing_eagerly():\n name = 'HorovodAllgather_%s' % _normalize_name(tensor.name)\n return MPI_LIB.horovod_allgather(tensor, name=name)\n\n\[email protected]('HorovodAllgather')\ndef _allgather_grad(op, grad):\n \"\"\"Gradient for allgather op.\n\n Args:\n op: An operation.\n grad: `Tensor` gradient with respect to the output of the op.\n\n Returns:\n The gradient with respect to the input of the op.\n \"\"\"\n grad = _allreduce(grad)\n\n with tf.device('/cpu:0'):\n # Keep the tensor of split sizes on CPU.\n x = op.inputs[0]\n d = tf.shape(x)\n d = tf.reshape(d[0], [1])\n\n s = size()\n d = tf.reshape(allgather(d), [s])\n\n splits = tf.split(grad, num_or_size_splits=d, axis=0)\n return splits[rank()]\n\n\ndef broadcast(tensor, root_rank, name=None):\n \"\"\"An op which broadcasts the input tensor on root rank to the same input tensor\n on all other Horovod processes.\n\n The broadcast operation is keyed by the name of the op. The tensor type and\n shape must be the same on all Horovod processes for a given name. 
The broadcast\n will not start until all processes are ready to send and receive the tensor.\n\n Returns:\n A tensor of the same shape and type as `tensor`, with the value broadcasted\n from root rank.\n \"\"\"\n if name is None and not _executing_eagerly():\n name = 'HorovodBroadcast_%s' % _normalize_name(tensor.name)\n return MPI_LIB.horovod_broadcast(tensor, name=name, root_rank=root_rank)\n\n\[email protected]('HorovodBroadcast')\ndef _broadcast_grad(op, grad):\n \"\"\"Gradient for broadcast op.\n\n Args:\n op: An operation.\n grad: `Tensor` gradient with respect to the output of the op.\n\n Returns:\n The gradient with respect to the input of the op.\n \"\"\"\n root_rank = op.get_attr('root_rank')\n grad_reduced = _allreduce(grad)\n if rank() != root_rank:\n return grad_reduced * 0\n return grad_reduced\n\n\ndef join():\n return MPI_LIB.horovod_join()\n\n\ndef size_op(name=None):\n \"\"\"An op that returns the number of Horovod processes.\n\n This operation determines the return value at the graph execution time,\n rather than at the graph construction time, and so allows for a graph to be\n constructed in a different environment than where it will be executed.\n\n Returns:\n An integer scalar containing the number of Horovod processes.\n \"\"\"\n return MPI_LIB.horovod_size(name=name)\n\n\nops.NotDifferentiable('HorovodSize')\n\n\ndef local_size_op(name=None):\n \"\"\"An op that returns the number of Horovod processes within the\n node the current process is running on.\n\n This operation determines the return value at the graph execution time,\n rather than at the graph construction time, and so allows for a graph to be\n constructed in a different environment than where it will be executed.\n\n Returns:\n An integer scalar containing the number of local Horovod processes.\n \"\"\"\n return MPI_LIB.horovod_local_size(name=name)\n\n\nops.NotDifferentiable('HorovodLocalSize')\n\n\ndef rank_op(name=None):\n \"\"\"An op that returns the Horovod rank of the calling process.\n\n This operation determines the return value at the graph execution time,\n rather than at the graph construction time, and so allows for a graph to be\n constructed in a different environment than where it will be executed.\n\n Returns:\n An integer scalar with the Horovod rank of the calling process.\n \"\"\"\n return MPI_LIB.horovod_rank(name=name)\n\n\nops.NotDifferentiable('HorovodRank')\n\n\ndef local_rank_op(name=None):\n \"\"\"An op that returns the local Horovod rank of the calling process, within the\n node that it is running on. For example, if there are seven processes running\n on a node, their local ranks will be zero through six, inclusive.\n\n This operation determines the return value at the graph execution time,\n rather than at the graph construction time, and so allows for a graph to be\n constructed in a different environment than where it will be executed.\n\n Returns:\n An integer scalar with the local Horovod rank of the calling process.\n \"\"\"\n return MPI_LIB.horovod_rank(name=name)\n\n\nops.NotDifferentiable('HorovodLocalRank')\n", "path": "horovod/tensorflow/mpi_ops.py"}]} |
gh_patches_debug_1117 | rasdani/github-patches | git_diff | Rapptz__discord.py-9380 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Piped Audio Input Ends Prematurely
### Summary
Piped video/audio input from sources like `youtube-dl` does not terminate the pipe stream correctly, often cutting off the last bit of the stream.
### Reproduction Steps
- Stream audio from another process using `subprocess.Popen`; in my case I was using `youtube-dl`
- Wait for the audio to play until it nears the end of the stream
### Minimal Reproducible Code
```python
client = await ctx.author.voice.channel.connect()
url = "https://www.youtube.com/watch?v=KAwyWkksXuo"
ytdl = subprocess.Popen(["youtube-dl", "-f", "bestaudio/worst", "-i", url, "-o", "-"], stdout=subprocess.PIPE)
audsrc = discord.FFmpegPCMAudio(ytdl.stdout, pipe=True)
client.play(audsrc)
```
### Expected Results
Discord.py plays the stream until the very end, then closes FFmpeg and stops playback.
### Actual Results
The stream is cut off slightly before the track actually finishes. For the sample video (cbat), playback stops roughly 6 seconds early. In addition, FFmpeg exits with code 255, indicating a forced termination of the program.
### Intents
members, message_content, messages
### System Information
- Python v3.10.7-final
- discord.py v2.0.1-final
- aiohttp v3.8.3
- system info: Darwin 21.6.0 Darwin Kernel Version 21.6.0: Mon Aug 22 20:17:10 PDT 2022; root:xnu-8020.140.49~2/RELEASE_X86_64
### Checklist
- [X] I have searched the open issues for duplicates.
- [X] I have shown the entire traceback, if possible.
- [X] I have removed my token from display, if visible.
### Additional Context
My current solution for this involves a modification of the `FFmpegAudio` class in `player.py`.
```python
class FFmpegAudio(AudioSource):
# ...
def _pipe_writer(self, source: io.BufferedIOBase) -> None:
while self._process:
# arbitrarily large read size
data = source.read(8192)
if not data:
# self._process.terminate() <--- Removed this line, replaced with following
self._stdin.close()
return
try:
if self._stdin is not None:
self._stdin.write(data)
except Exception:
_log.debug('Write error for %s, this is probably not a problem', self, exc_info=True)
# at this point the source data is either exhausted or the process is fubar
self._process.terminate()
return
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `discord/player.py`
Content:
```
1 """
2 The MIT License (MIT)
3
4 Copyright (c) 2015-present Rapptz
5
6 Permission is hereby granted, free of charge, to any person obtaining a
7 copy of this software and associated documentation files (the "Software"),
8 to deal in the Software without restriction, including without limitation
9 the rights to use, copy, modify, merge, publish, distribute, sublicense,
10 and/or sell copies of the Software, and to permit persons to whom the
11 Software is furnished to do so, subject to the following conditions:
12
13 The above copyright notice and this permission notice shall be included in
14 all copies or substantial portions of the Software.
15
16 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
17 OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
18 FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
19 AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
20 LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
21 FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
22 DEALINGS IN THE SOFTWARE.
23 """
24 from __future__ import annotations
25
26 import threading
27 import subprocess
28 import audioop
29 import asyncio
30 import logging
31 import shlex
32 import time
33 import json
34 import sys
35 import re
36 import io
37
38 from typing import Any, Callable, Generic, IO, Optional, TYPE_CHECKING, Tuple, TypeVar, Union
39
40 from .enums import SpeakingState
41 from .errors import ClientException
42 from .opus import Encoder as OpusEncoder
43 from .oggparse import OggStream
44 from .utils import MISSING
45
46 if TYPE_CHECKING:
47 from typing_extensions import Self
48
49 from .voice_client import VoiceClient
50
51
52 AT = TypeVar('AT', bound='AudioSource')
53
54 _log = logging.getLogger(__name__)
55
56 __all__ = (
57 'AudioSource',
58 'PCMAudio',
59 'FFmpegAudio',
60 'FFmpegPCMAudio',
61 'FFmpegOpusAudio',
62 'PCMVolumeTransformer',
63 )
64
65 CREATE_NO_WINDOW: int
66
67 if sys.platform != 'win32':
68 CREATE_NO_WINDOW = 0
69 else:
70 CREATE_NO_WINDOW = 0x08000000
71
72
73 class AudioSource:
74 """Represents an audio stream.
75
76 The audio stream can be Opus encoded or not, however if the audio stream
77 is not Opus encoded then the audio format must be 16-bit 48KHz stereo PCM.
78
79 .. warning::
80
81 The audio source reads are done in a separate thread.
82 """
83
84 def read(self) -> bytes:
85 """Reads 20ms worth of audio.
86
87 Subclasses must implement this.
88
89 If the audio is complete, then returning an empty
90 :term:`py:bytes-like object` to signal this is the way to do so.
91
92 If :meth:`~AudioSource.is_opus` method returns ``True``, then it must return
93 20ms worth of Opus encoded audio. Otherwise, it must be 20ms
94 worth of 16-bit 48KHz stereo PCM, which is about 3,840 bytes
95 per frame (20ms worth of audio).
96
97 Returns
98 --------
99 :class:`bytes`
100 A bytes like object that represents the PCM or Opus data.
101 """
102 raise NotImplementedError
103
104 def is_opus(self) -> bool:
105 """Checks if the audio source is already encoded in Opus."""
106 return False
107
108 def cleanup(self) -> None:
109 """Called when clean-up is needed to be done.
110
111 Useful for clearing buffer data or processes after
112 it is done playing audio.
113 """
114 pass
115
116 def __del__(self) -> None:
117 self.cleanup()
118
119
120 class PCMAudio(AudioSource):
121 """Represents raw 16-bit 48KHz stereo PCM audio source.
122
123 Attributes
124 -----------
125 stream: :term:`py:file object`
126 A file-like object that reads byte data representing raw PCM.
127 """
128
129 def __init__(self, stream: io.BufferedIOBase) -> None:
130 self.stream: io.BufferedIOBase = stream
131
132 def read(self) -> bytes:
133 ret = self.stream.read(OpusEncoder.FRAME_SIZE)
134 if len(ret) != OpusEncoder.FRAME_SIZE:
135 return b''
136 return ret
137
138
139 class FFmpegAudio(AudioSource):
140 """Represents an FFmpeg (or AVConv) based AudioSource.
141
142 User created AudioSources using FFmpeg differently from how :class:`FFmpegPCMAudio` and
143 :class:`FFmpegOpusAudio` work should subclass this.
144
145 .. versionadded:: 1.3
146 """
147
148 def __init__(
149 self,
150 source: Union[str, io.BufferedIOBase],
151 *,
152 executable: str = 'ffmpeg',
153 args: Any,
154 **subprocess_kwargs: Any,
155 ):
156 piping = subprocess_kwargs.get('stdin') == subprocess.PIPE
157 if piping and isinstance(source, str):
158 raise TypeError("parameter conflict: 'source' parameter cannot be a string when piping to stdin")
159
160 args = [executable, *args]
161 kwargs = {'stdout': subprocess.PIPE}
162 kwargs.update(subprocess_kwargs)
163
164 # Ensure attribute is assigned even in the case of errors
165 self._process: subprocess.Popen = MISSING
166 self._process = self._spawn_process(args, **kwargs)
167 self._stdout: IO[bytes] = self._process.stdout # type: ignore # process stdout is explicitly set
168 self._stdin: Optional[IO[bytes]] = None
169 self._pipe_thread: Optional[threading.Thread] = None
170
171 if piping:
172 n = f'popen-stdin-writer:{id(self):#x}'
173 self._stdin = self._process.stdin
174 self._pipe_thread = threading.Thread(target=self._pipe_writer, args=(source,), daemon=True, name=n)
175 self._pipe_thread.start()
176
177 def _spawn_process(self, args: Any, **subprocess_kwargs: Any) -> subprocess.Popen:
178 process = None
179 try:
180 process = subprocess.Popen(args, creationflags=CREATE_NO_WINDOW, **subprocess_kwargs)
181 except FileNotFoundError:
182 executable = args.partition(' ')[0] if isinstance(args, str) else args[0]
183 raise ClientException(executable + ' was not found.') from None
184 except subprocess.SubprocessError as exc:
185 raise ClientException(f'Popen failed: {exc.__class__.__name__}: {exc}') from exc
186 else:
187 return process
188
189 def _kill_process(self) -> None:
190 proc = self._process
191 if proc is MISSING:
192 return
193
194 _log.debug('Preparing to terminate ffmpeg process %s.', proc.pid)
195
196 try:
197 proc.kill()
198 except Exception:
199 _log.exception('Ignoring error attempting to kill ffmpeg process %s', proc.pid)
200
201 if proc.poll() is None:
202 _log.info('ffmpeg process %s has not terminated. Waiting to terminate...', proc.pid)
203 proc.communicate()
204 _log.info('ffmpeg process %s should have terminated with a return code of %s.', proc.pid, proc.returncode)
205 else:
206 _log.info('ffmpeg process %s successfully terminated with return code of %s.', proc.pid, proc.returncode)
207
208 def _pipe_writer(self, source: io.BufferedIOBase) -> None:
209 while self._process:
210 # arbitrarily large read size
211 data = source.read(8192)
212 if not data:
213 self._process.terminate()
214 return
215 try:
216 if self._stdin is not None:
217 self._stdin.write(data)
218 except Exception:
219 _log.debug('Write error for %s, this is probably not a problem', self, exc_info=True)
220 # at this point the source data is either exhausted or the process is fubar
221 self._process.terminate()
222 return
223
224 def cleanup(self) -> None:
225 self._kill_process()
226 self._process = self._stdout = self._stdin = MISSING
227
228
229 class FFmpegPCMAudio(FFmpegAudio):
230 """An audio source from FFmpeg (or AVConv).
231
232 This launches a sub-process to a specific input file given.
233
234 .. warning::
235
236 You must have the ffmpeg or avconv executable in your path environment
237 variable in order for this to work.
238
239 Parameters
240 ------------
241 source: Union[:class:`str`, :class:`io.BufferedIOBase`]
242 The input that ffmpeg will take and convert to PCM bytes.
243 If ``pipe`` is ``True`` then this is a file-like object that is
244 passed to the stdin of ffmpeg.
245 executable: :class:`str`
246 The executable name (and path) to use. Defaults to ``ffmpeg``.
247 pipe: :class:`bool`
248 If ``True``, denotes that ``source`` parameter will be passed
249 to the stdin of ffmpeg. Defaults to ``False``.
250 stderr: Optional[:term:`py:file object`]
251 A file-like object to pass to the Popen constructor.
252 Could also be an instance of ``subprocess.PIPE``.
253 before_options: Optional[:class:`str`]
254 Extra command line arguments to pass to ffmpeg before the ``-i`` flag.
255 options: Optional[:class:`str`]
256 Extra command line arguments to pass to ffmpeg after the ``-i`` flag.
257
258 Raises
259 --------
260 ClientException
261 The subprocess failed to be created.
262 """
263
264 def __init__(
265 self,
266 source: Union[str, io.BufferedIOBase],
267 *,
268 executable: str = 'ffmpeg',
269 pipe: bool = False,
270 stderr: Optional[IO[str]] = None,
271 before_options: Optional[str] = None,
272 options: Optional[str] = None,
273 ) -> None:
274 args = []
275 subprocess_kwargs = {'stdin': subprocess.PIPE if pipe else subprocess.DEVNULL, 'stderr': stderr}
276
277 if isinstance(before_options, str):
278 args.extend(shlex.split(before_options))
279
280 args.append('-i')
281 args.append('-' if pipe else source)
282 args.extend(('-f', 's16le', '-ar', '48000', '-ac', '2', '-loglevel', 'warning'))
283
284 if isinstance(options, str):
285 args.extend(shlex.split(options))
286
287 args.append('pipe:1')
288
289 super().__init__(source, executable=executable, args=args, **subprocess_kwargs)
290
291 def read(self) -> bytes:
292 ret = self._stdout.read(OpusEncoder.FRAME_SIZE)
293 if len(ret) != OpusEncoder.FRAME_SIZE:
294 return b''
295 return ret
296
297 def is_opus(self) -> bool:
298 return False
299
300
301 class FFmpegOpusAudio(FFmpegAudio):
302 """An audio source from FFmpeg (or AVConv).
303
304 This launches a sub-process to a specific input file given. However, rather than
305 producing PCM packets like :class:`FFmpegPCMAudio` does that need to be encoded to
306 Opus, this class produces Opus packets, skipping the encoding step done by the library.
307
308 Alternatively, instead of instantiating this class directly, you can use
309 :meth:`FFmpegOpusAudio.from_probe` to probe for bitrate and codec information. This
310 can be used to opportunistically skip pointless re-encoding of existing Opus audio data
311 for a boost in performance at the cost of a short initial delay to gather the information.
312 The same can be achieved by passing ``copy`` to the ``codec`` parameter, but only if you
313 know that the input source is Opus encoded beforehand.
314
315 .. versionadded:: 1.3
316
317 .. warning::
318
319 You must have the ffmpeg or avconv executable in your path environment
320 variable in order for this to work.
321
322 Parameters
323 ------------
324 source: Union[:class:`str`, :class:`io.BufferedIOBase`]
325 The input that ffmpeg will take and convert to Opus bytes.
326 If ``pipe`` is ``True`` then this is a file-like object that is
327 passed to the stdin of ffmpeg.
328 bitrate: :class:`int`
329 The bitrate in kbps to encode the output to. Defaults to ``128``.
330 codec: Optional[:class:`str`]
331 The codec to use to encode the audio data. Normally this would be
332 just ``libopus``, but is used by :meth:`FFmpegOpusAudio.from_probe` to
333 opportunistically skip pointlessly re-encoding Opus audio data by passing
334 ``copy`` as the codec value. Any values other than ``copy``, ``opus``, or
335 ``libopus`` will be considered ``libopus``. Defaults to ``libopus``.
336
337 .. warning::
338
339 Do not provide this parameter unless you are certain that the audio input is
340 already Opus encoded. For typical use :meth:`FFmpegOpusAudio.from_probe`
341 should be used to determine the proper value for this parameter.
342
343 executable: :class:`str`
344 The executable name (and path) to use. Defaults to ``ffmpeg``.
345 pipe: :class:`bool`
346 If ``True``, denotes that ``source`` parameter will be passed
347 to the stdin of ffmpeg. Defaults to ``False``.
348 stderr: Optional[:term:`py:file object`]
349 A file-like object to pass to the Popen constructor.
350 Could also be an instance of ``subprocess.PIPE``.
351 before_options: Optional[:class:`str`]
352 Extra command line arguments to pass to ffmpeg before the ``-i`` flag.
353 options: Optional[:class:`str`]
354 Extra command line arguments to pass to ffmpeg after the ``-i`` flag.
355
356 Raises
357 --------
358 ClientException
359 The subprocess failed to be created.
360 """
361
362 def __init__(
363 self,
364 source: Union[str, io.BufferedIOBase],
365 *,
366 bitrate: Optional[int] = None,
367 codec: Optional[str] = None,
368 executable: str = 'ffmpeg',
369 pipe: bool = False,
370 stderr: Optional[IO[bytes]] = None,
371 before_options: Optional[str] = None,
372 options: Optional[str] = None,
373 ) -> None:
374 args = []
375 subprocess_kwargs = {'stdin': subprocess.PIPE if pipe else subprocess.DEVNULL, 'stderr': stderr}
376
377 if isinstance(before_options, str):
378 args.extend(shlex.split(before_options))
379
380 args.append('-i')
381 args.append('-' if pipe else source)
382
383 codec = 'copy' if codec in ('opus', 'libopus') else 'libopus'
384 bitrate = bitrate if bitrate is not None else 128
385
386 # fmt: off
387 args.extend(('-map_metadata', '-1',
388 '-f', 'opus',
389 '-c:a', codec,
390 '-ar', '48000',
391 '-ac', '2',
392 '-b:a', f'{bitrate}k',
393 '-loglevel', 'warning'))
394 # fmt: on
395
396 if isinstance(options, str):
397 args.extend(shlex.split(options))
398
399 args.append('pipe:1')
400
401 super().__init__(source, executable=executable, args=args, **subprocess_kwargs)
402 self._packet_iter = OggStream(self._stdout).iter_packets()
403
404 @classmethod
405 async def from_probe(
406 cls,
407 source: str,
408 *,
409 method: Optional[Union[str, Callable[[str, str], Tuple[Optional[str], Optional[int]]]]] = None,
410 **kwargs: Any,
411 ) -> Self:
412 """|coro|
413
414 A factory method that creates a :class:`FFmpegOpusAudio` after probing
415 the input source for audio codec and bitrate information.
416
417 Examples
418 ----------
419
420 Use this function to create an :class:`FFmpegOpusAudio` instance instead of the constructor: ::
421
422 source = await discord.FFmpegOpusAudio.from_probe("song.webm")
423 voice_client.play(source)
424
425 If you are on Windows and don't have ffprobe installed, use the ``fallback`` method
426 to probe using ffmpeg instead: ::
427
428 source = await discord.FFmpegOpusAudio.from_probe("song.webm", method='fallback')
429 voice_client.play(source)
430
431 Using a custom method of determining codec and bitrate: ::
432
433 def custom_probe(source, executable):
434 # some analysis code here
435 return codec, bitrate
436
437 source = await discord.FFmpegOpusAudio.from_probe("song.webm", method=custom_probe)
438 voice_client.play(source)
439
440 Parameters
441 ------------
442 source
443 Identical to the ``source`` parameter for the constructor.
444 method: Optional[Union[:class:`str`, Callable[:class:`str`, :class:`str`]]]
445 The probing method used to determine bitrate and codec information. As a string, valid
446 values are ``native`` to use ffprobe (or avprobe) and ``fallback`` to use ffmpeg
447 (or avconv). As a callable, it must take two string arguments, ``source`` and
448 ``executable``. Both parameters are the same values passed to this factory function.
449 ``executable`` will default to ``ffmpeg`` if not provided as a keyword argument.
450 kwargs
451 The remaining parameters to be passed to the :class:`FFmpegOpusAudio` constructor,
452 excluding ``bitrate`` and ``codec``.
453
454 Raises
455 --------
456 AttributeError
457 Invalid probe method, must be ``'native'`` or ``'fallback'``.
458 TypeError
459 Invalid value for ``probe`` parameter, must be :class:`str` or a callable.
460
461 Returns
462 --------
463 :class:`FFmpegOpusAudio`
464 An instance of this class.
465 """
466
467 executable = kwargs.get('executable')
468 codec, bitrate = await cls.probe(source, method=method, executable=executable)
469 return cls(source, bitrate=bitrate, codec=codec, **kwargs)
470
471 @classmethod
472 async def probe(
473 cls,
474 source: str,
475 *,
476 method: Optional[Union[str, Callable[[str, str], Tuple[Optional[str], Optional[int]]]]] = None,
477 executable: Optional[str] = None,
478 ) -> Tuple[Optional[str], Optional[int]]:
479 """|coro|
480
481 Probes the input source for bitrate and codec information.
482
483 Parameters
484 ------------
485 source
486 Identical to the ``source`` parameter for :class:`FFmpegOpusAudio`.
487 method
488 Identical to the ``method`` parameter for :meth:`FFmpegOpusAudio.from_probe`.
489 executable: :class:`str`
490 Identical to the ``executable`` parameter for :class:`FFmpegOpusAudio`.
491
492 Raises
493 --------
494 AttributeError
495 Invalid probe method, must be ``'native'`` or ``'fallback'``.
496 TypeError
497 Invalid value for ``probe`` parameter, must be :class:`str` or a callable.
498
499 Returns
500 ---------
501 Optional[Tuple[Optional[:class:`str`], :class:`int`]]
502 A 2-tuple with the codec and bitrate of the input source.
503 """
504
505 method = method or 'native'
506 executable = executable or 'ffmpeg'
507 probefunc = fallback = None
508
509 if isinstance(method, str):
510 probefunc = getattr(cls, '_probe_codec_' + method, None)
511 if probefunc is None:
512 raise AttributeError(f"Invalid probe method {method!r}")
513
514 if probefunc is cls._probe_codec_native:
515 fallback = cls._probe_codec_fallback
516
517 elif callable(method):
518 probefunc = method
519 fallback = cls._probe_codec_fallback
520 else:
521 raise TypeError(f"Expected str or callable for parameter 'probe', not '{method.__class__.__name__}'")
522
523 codec = bitrate = None
524 loop = asyncio.get_running_loop()
525 try:
526 codec, bitrate = await loop.run_in_executor(None, lambda: probefunc(source, executable))
527 except Exception:
528 if not fallback:
529 _log.exception("Probe '%s' using '%s' failed", method, executable)
530 return # type: ignore
531
532 _log.exception("Probe '%s' using '%s' failed, trying fallback", method, executable)
533 try:
534 codec, bitrate = await loop.run_in_executor(None, lambda: fallback(source, executable))
535 except Exception:
536 _log.exception("Fallback probe using '%s' failed", executable)
537 else:
538 _log.debug("Fallback probe found codec=%s, bitrate=%s", codec, bitrate)
539 else:
540 _log.debug("Probe found codec=%s, bitrate=%s", codec, bitrate)
541 finally:
542 return codec, bitrate
543
544 @staticmethod
545 def _probe_codec_native(source, executable: str = 'ffmpeg') -> Tuple[Optional[str], Optional[int]]:
546 exe = executable[:2] + 'probe' if executable in ('ffmpeg', 'avconv') else executable
547 args = [exe, '-v', 'quiet', '-print_format', 'json', '-show_streams', '-select_streams', 'a:0', source]
548 output = subprocess.check_output(args, timeout=20)
549 codec = bitrate = None
550
551 if output:
552 data = json.loads(output)
553 streamdata = data['streams'][0]
554
555 codec = streamdata.get('codec_name')
556 bitrate = int(streamdata.get('bit_rate', 0))
557 bitrate = max(round(bitrate / 1000), 512)
558
559 return codec, bitrate
560
561 @staticmethod
562 def _probe_codec_fallback(source, executable: str = 'ffmpeg') -> Tuple[Optional[str], Optional[int]]:
563 args = [executable, '-hide_banner', '-i', source]
564 proc = subprocess.Popen(args, creationflags=CREATE_NO_WINDOW, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
565 out, _ = proc.communicate(timeout=20)
566 output = out.decode('utf8')
567 codec = bitrate = None
568
569 codec_match = re.search(r"Stream #0.*?Audio: (\w+)", output)
570 if codec_match:
571 codec = codec_match.group(1)
572
573 br_match = re.search(r"(\d+) [kK]b/s", output)
574 if br_match:
575 bitrate = max(int(br_match.group(1)), 512)
576
577 return codec, bitrate
578
579 def read(self) -> bytes:
580 return next(self._packet_iter, b'')
581
582 def is_opus(self) -> bool:
583 return True
584
585
586 class PCMVolumeTransformer(AudioSource, Generic[AT]):
587 """Transforms a previous :class:`AudioSource` to have volume controls.
588
589 This does not work on audio sources that have :meth:`AudioSource.is_opus`
590 set to ``True``.
591
592 Parameters
593 ------------
594 original: :class:`AudioSource`
595 The original AudioSource to transform.
596 volume: :class:`float`
597 The initial volume to set it to.
598 See :attr:`volume` for more info.
599
600 Raises
601 -------
602 TypeError
603 Not an audio source.
604 ClientException
605 The audio source is opus encoded.
606 """
607
608 def __init__(self, original: AT, volume: float = 1.0):
609 if not isinstance(original, AudioSource):
610 raise TypeError(f'expected AudioSource not {original.__class__.__name__}.')
611
612 if original.is_opus():
613 raise ClientException('AudioSource must not be Opus encoded.')
614
615 self.original: AT = original
616 self.volume = volume
617
618 @property
619 def volume(self) -> float:
620 """Retrieves or sets the volume as a floating point percentage (e.g. ``1.0`` for 100%)."""
621 return self._volume
622
623 @volume.setter
624 def volume(self, value: float) -> None:
625 self._volume = max(value, 0.0)
626
627 def cleanup(self) -> None:
628 self.original.cleanup()
629
630 def read(self) -> bytes:
631 ret = self.original.read()
632 return audioop.mul(ret, 2, min(self._volume, 2.0))
633
634
635 class AudioPlayer(threading.Thread):
636 DELAY: float = OpusEncoder.FRAME_LENGTH / 1000.0
637
638 def __init__(
639 self,
640 source: AudioSource,
641 client: VoiceClient,
642 *,
643 after: Optional[Callable[[Optional[Exception]], Any]] = None,
644 ) -> None:
645 threading.Thread.__init__(self)
646 self.daemon: bool = True
647 self.source: AudioSource = source
648 self.client: VoiceClient = client
649 self.after: Optional[Callable[[Optional[Exception]], Any]] = after
650
651 self._end: threading.Event = threading.Event()
652 self._resumed: threading.Event = threading.Event()
653 self._resumed.set() # we are not paused
654 self._current_error: Optional[Exception] = None
655 self._connected: threading.Event = client._connected
656 self._lock: threading.Lock = threading.Lock()
657
658 if after is not None and not callable(after):
659 raise TypeError('Expected a callable for the "after" parameter.')
660
661 def _do_run(self) -> None:
662 self.loops = 0
663 self._start = time.perf_counter()
664
665 # getattr lookup speed ups
666 play_audio = self.client.send_audio_packet
667 self._speak(SpeakingState.voice)
668
669 while not self._end.is_set():
670 # are we paused?
671 if not self._resumed.is_set():
672 # wait until we aren't
673 self._resumed.wait()
674 continue
675
676 # are we disconnected from voice?
677 if not self._connected.is_set():
678 # wait until we are connected
679 self._connected.wait()
680 # reset our internal data
681 self.loops = 0
682 self._start = time.perf_counter()
683
684 self.loops += 1
685 data = self.source.read()
686
687 if not data:
688 self.stop()
689 break
690
691 play_audio(data, encode=not self.source.is_opus())
692 next_time = self._start + self.DELAY * self.loops
693 delay = max(0, self.DELAY + (next_time - time.perf_counter()))
694 time.sleep(delay)
695
696 def run(self) -> None:
697 try:
698 self._do_run()
699 except Exception as exc:
700 self._current_error = exc
701 self.stop()
702 finally:
703 self._call_after()
704 self.source.cleanup()
705
706 def _call_after(self) -> None:
707 error = self._current_error
708
709 if self.after is not None:
710 try:
711 self.after(error)
712 except Exception as exc:
713 exc.__context__ = error
714 _log.exception('Calling the after function failed.', exc_info=exc)
715 elif error:
716 _log.exception('Exception in voice thread %s', self.name, exc_info=error)
717
718 def stop(self) -> None:
719 self._end.set()
720 self._resumed.set()
721 self._speak(SpeakingState.none)
722
723 def pause(self, *, update_speaking: bool = True) -> None:
724 self._resumed.clear()
725 if update_speaking:
726 self._speak(SpeakingState.none)
727
728 def resume(self, *, update_speaking: bool = True) -> None:
729 self.loops: int = 0
730 self._start: float = time.perf_counter()
731 self._resumed.set()
732 if update_speaking:
733 self._speak(SpeakingState.voice)
734
735 def is_playing(self) -> bool:
736 return self._resumed.is_set() and not self._end.is_set()
737
738 def is_paused(self) -> bool:
739 return not self._end.is_set() and not self._resumed.is_set()
740
741 def _set_source(self, source: AudioSource) -> None:
742 with self._lock:
743 self.pause(update_speaking=False)
744 self.source = source
745 self.resume(update_speaking=False)
746
747 def _speak(self, speaking: SpeakingState) -> None:
748 try:
749 asyncio.run_coroutine_threadsafe(self.client.ws.speak(speaking), self.client.client.loop)
750 except Exception:
751 _log.exception("Speaking call in player failed")
752
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/discord/player.py b/discord/player.py
--- a/discord/player.py
+++ b/discord/player.py
@@ -210,7 +210,8 @@
# arbitrarily large read size
data = source.read(8192)
if not data:
- self._process.terminate()
+ if self._stdin is not None:
+ self._stdin.close()
return
try:
if self._stdin is not None:
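
The change swaps a hard `terminate()` for closing ffmpeg's stdin once the piped source is exhausted: ffmpeg then sees EOF, drains whatever it still has buffered, and exits on its own. A minimal sketch of that pattern with a generic child process (`cat` is only a stand-in for ffmpeg and assumes a POSIX system; it is not discord.py code):

```python
import subprocess

# Stand-in for ffmpeg reading from a pipe; "cat" simply echoes stdin to stdout.
proc = subprocess.Popen(["cat"], stdin=subprocess.PIPE, stdout=subprocess.PIPE)

for _ in range(2):
    proc.stdin.write(b"\x00" * 8192)   # chunks of piped source data

proc.stdin.close()                      # signal EOF instead of calling terminate()
leftover = proc.stdout.read()           # the child flushes everything it was fed
proc.wait()
print(len(leftover), proc.returncode)   # 16384 0, a clean exit rather than a forced one
```

Killing the child at EOF, as the old code did, discards whatever output it had not yet written, which is consistent with the reported loss of the last few seconds of audio.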
| {"golden_diff": "diff --git a/discord/player.py b/discord/player.py\n--- a/discord/player.py\n+++ b/discord/player.py\n@@ -210,7 +210,8 @@\n # arbitrarily large read size\n data = source.read(8192)\n if not data:\n- self._process.terminate()\n+ if self._stdin is not None:\n+ self._stdin.close()\n return\n try:\n if self._stdin is not None:\n", "issue": "Piped Audio Input Ends Prematurely\n### Summary\r\n\r\nPiped video/audio input from sources like `youtube-dl` does not terminate the pipe stream correctly, often cutting off the last bit of the stream.\r\n\r\n### Reproduction Steps\r\n\r\n- Stream audio from another process using `subprocess.Popen`; in my case I was using `youtube-dl`\r\n- Wait for the audio to play until it nears the end of the stream\r\n\r\n### Minimal Reproducible Code\r\n\r\n```python\r\nclient = await ctx.author.voice.channel.connect()\r\nurl = \"https://www.youtube.com/watch?v=KAwyWkksXuo\"\r\nytdl = subprocess.Popen([\"youtube-dl\", \"-f\", \"bestaudio/worst\", \"-i\", url, \"-o\", \"-\"], stdout=subprocess.PIPE)\r\naudsrc = discord.FFmpegPCMAudio(ytdl.stdout, pipe=True)\r\nclient.play(audsrc)\r\n```\r\n\r\n\r\n### Expected Results\r\n\r\nDiscord.py plays the stream until the very end, then closes FFMPEG and stops playback.\r\n\r\n### Actual Results\r\n\r\nThe stream is cut off slightly before the track actually finishes. For the sample video (cbat), it terminates roughly 6 seconds before the stream actually finishes. In addition, FFMPEG terminates with code 255, indicating a forced termination of the program.\r\n\r\n### Intents\r\n\r\nmembers, message_content, messages\r\n\r\n### System Information\r\n\r\n- Python v3.10.7-final\r\n- discord.py v2.0.1-final\r\n- aiohttp v3.8.3\r\n- system info: Darwin 21.6.0 Darwin Kernel Version 21.6.0: Mon Aug 22 20:17:10 PDT 2022; root:xnu-8020.140.49~2/RELEASE_X86_64\r\n\r\n### Checklist\r\n\r\n- [X] I have searched the open issues for duplicates.\r\n- [X] I have shown the entire traceback, if possible.\r\n- [X] I have removed my token from display, if visible.\r\n\r\n### Additional Context\r\n\r\nMy current solution for this involves a modification of the `FFmpegAudio` class in `player.py`.\r\n```python\r\nclass FFmpegAudio(AudioSource):\r\n # ...\r\n def _pipe_writer(self, source: io.BufferedIOBase) -> None:\r\n while self._process:\r\n # arbitrarily large read size\r\n data = source.read(8192)\r\n if not data:\r\n # self._process.terminate() <--- Removed this line, replaced with following\r\n self._stdin.close()\r\n return\r\n try:\r\n if self._stdin is not None:\r\n self._stdin.write(data)\r\n except Exception:\r\n _log.debug('Write error for %s, this is probably not a problem', self, exc_info=True)\r\n # at this point the source data is either exhausted or the process is fubar\r\n self._process.terminate()\r\n return\r\n```\n", "before_files": [{"content": "\"\"\"\nThe MIT License (MIT)\n\nCopyright (c) 2015-present Rapptz\n\nPermission is hereby granted, free of charge, to any person obtaining a\ncopy of this software and associated documentation files (the \"Software\"),\nto deal in the Software without restriction, including without limitation\nthe rights to use, copy, modify, merge, publish, distribute, sublicense,\nand/or sell copies of the Software, and to permit persons to whom the\nSoftware is furnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in\nall copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS 
IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\nOR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\nFROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\nDEALINGS IN THE SOFTWARE.\n\"\"\"\nfrom __future__ import annotations\n\nimport threading\nimport subprocess\nimport audioop\nimport asyncio\nimport logging\nimport shlex\nimport time\nimport json\nimport sys\nimport re\nimport io\n\nfrom typing import Any, Callable, Generic, IO, Optional, TYPE_CHECKING, Tuple, TypeVar, Union\n\nfrom .enums import SpeakingState\nfrom .errors import ClientException\nfrom .opus import Encoder as OpusEncoder\nfrom .oggparse import OggStream\nfrom .utils import MISSING\n\nif TYPE_CHECKING:\n from typing_extensions import Self\n\n from .voice_client import VoiceClient\n\n\nAT = TypeVar('AT', bound='AudioSource')\n\n_log = logging.getLogger(__name__)\n\n__all__ = (\n 'AudioSource',\n 'PCMAudio',\n 'FFmpegAudio',\n 'FFmpegPCMAudio',\n 'FFmpegOpusAudio',\n 'PCMVolumeTransformer',\n)\n\nCREATE_NO_WINDOW: int\n\nif sys.platform != 'win32':\n CREATE_NO_WINDOW = 0\nelse:\n CREATE_NO_WINDOW = 0x08000000\n\n\nclass AudioSource:\n \"\"\"Represents an audio stream.\n\n The audio stream can be Opus encoded or not, however if the audio stream\n is not Opus encoded then the audio format must be 16-bit 48KHz stereo PCM.\n\n .. warning::\n\n The audio source reads are done in a separate thread.\n \"\"\"\n\n def read(self) -> bytes:\n \"\"\"Reads 20ms worth of audio.\n\n Subclasses must implement this.\n\n If the audio is complete, then returning an empty\n :term:`py:bytes-like object` to signal this is the way to do so.\n\n If :meth:`~AudioSource.is_opus` method returns ``True``, then it must return\n 20ms worth of Opus encoded audio. Otherwise, it must be 20ms\n worth of 16-bit 48KHz stereo PCM, which is about 3,840 bytes\n per frame (20ms worth of audio).\n\n Returns\n --------\n :class:`bytes`\n A bytes like object that represents the PCM or Opus data.\n \"\"\"\n raise NotImplementedError\n\n def is_opus(self) -> bool:\n \"\"\"Checks if the audio source is already encoded in Opus.\"\"\"\n return False\n\n def cleanup(self) -> None:\n \"\"\"Called when clean-up is needed to be done.\n\n Useful for clearing buffer data or processes after\n it is done playing audio.\n \"\"\"\n pass\n\n def __del__(self) -> None:\n self.cleanup()\n\n\nclass PCMAudio(AudioSource):\n \"\"\"Represents raw 16-bit 48KHz stereo PCM audio source.\n\n Attributes\n -----------\n stream: :term:`py:file object`\n A file-like object that reads byte data representing raw PCM.\n \"\"\"\n\n def __init__(self, stream: io.BufferedIOBase) -> None:\n self.stream: io.BufferedIOBase = stream\n\n def read(self) -> bytes:\n ret = self.stream.read(OpusEncoder.FRAME_SIZE)\n if len(ret) != OpusEncoder.FRAME_SIZE:\n return b''\n return ret\n\n\nclass FFmpegAudio(AudioSource):\n \"\"\"Represents an FFmpeg (or AVConv) based AudioSource.\n\n User created AudioSources using FFmpeg differently from how :class:`FFmpegPCMAudio` and\n :class:`FFmpegOpusAudio` work should subclass this.\n\n .. 
versionadded:: 1.3\n \"\"\"\n\n def __init__(\n self,\n source: Union[str, io.BufferedIOBase],\n *,\n executable: str = 'ffmpeg',\n args: Any,\n **subprocess_kwargs: Any,\n ):\n piping = subprocess_kwargs.get('stdin') == subprocess.PIPE\n if piping and isinstance(source, str):\n raise TypeError(\"parameter conflict: 'source' parameter cannot be a string when piping to stdin\")\n\n args = [executable, *args]\n kwargs = {'stdout': subprocess.PIPE}\n kwargs.update(subprocess_kwargs)\n\n # Ensure attribute is assigned even in the case of errors\n self._process: subprocess.Popen = MISSING\n self._process = self._spawn_process(args, **kwargs)\n self._stdout: IO[bytes] = self._process.stdout # type: ignore # process stdout is explicitly set\n self._stdin: Optional[IO[bytes]] = None\n self._pipe_thread: Optional[threading.Thread] = None\n\n if piping:\n n = f'popen-stdin-writer:{id(self):#x}'\n self._stdin = self._process.stdin\n self._pipe_thread = threading.Thread(target=self._pipe_writer, args=(source,), daemon=True, name=n)\n self._pipe_thread.start()\n\n def _spawn_process(self, args: Any, **subprocess_kwargs: Any) -> subprocess.Popen:\n process = None\n try:\n process = subprocess.Popen(args, creationflags=CREATE_NO_WINDOW, **subprocess_kwargs)\n except FileNotFoundError:\n executable = args.partition(' ')[0] if isinstance(args, str) else args[0]\n raise ClientException(executable + ' was not found.') from None\n except subprocess.SubprocessError as exc:\n raise ClientException(f'Popen failed: {exc.__class__.__name__}: {exc}') from exc\n else:\n return process\n\n def _kill_process(self) -> None:\n proc = self._process\n if proc is MISSING:\n return\n\n _log.debug('Preparing to terminate ffmpeg process %s.', proc.pid)\n\n try:\n proc.kill()\n except Exception:\n _log.exception('Ignoring error attempting to kill ffmpeg process %s', proc.pid)\n\n if proc.poll() is None:\n _log.info('ffmpeg process %s has not terminated. Waiting to terminate...', proc.pid)\n proc.communicate()\n _log.info('ffmpeg process %s should have terminated with a return code of %s.', proc.pid, proc.returncode)\n else:\n _log.info('ffmpeg process %s successfully terminated with return code of %s.', proc.pid, proc.returncode)\n\n def _pipe_writer(self, source: io.BufferedIOBase) -> None:\n while self._process:\n # arbitrarily large read size\n data = source.read(8192)\n if not data:\n self._process.terminate()\n return\n try:\n if self._stdin is not None:\n self._stdin.write(data)\n except Exception:\n _log.debug('Write error for %s, this is probably not a problem', self, exc_info=True)\n # at this point the source data is either exhausted or the process is fubar\n self._process.terminate()\n return\n\n def cleanup(self) -> None:\n self._kill_process()\n self._process = self._stdout = self._stdin = MISSING\n\n\nclass FFmpegPCMAudio(FFmpegAudio):\n \"\"\"An audio source from FFmpeg (or AVConv).\n\n This launches a sub-process to a specific input file given.\n\n .. warning::\n\n You must have the ffmpeg or avconv executable in your path environment\n variable in order for this to work.\n\n Parameters\n ------------\n source: Union[:class:`str`, :class:`io.BufferedIOBase`]\n The input that ffmpeg will take and convert to PCM bytes.\n If ``pipe`` is ``True`` then this is a file-like object that is\n passed to the stdin of ffmpeg.\n executable: :class:`str`\n The executable name (and path) to use. 
Defaults to ``ffmpeg``.\n pipe: :class:`bool`\n If ``True``, denotes that ``source`` parameter will be passed\n to the stdin of ffmpeg. Defaults to ``False``.\n stderr: Optional[:term:`py:file object`]\n A file-like object to pass to the Popen constructor.\n Could also be an instance of ``subprocess.PIPE``.\n before_options: Optional[:class:`str`]\n Extra command line arguments to pass to ffmpeg before the ``-i`` flag.\n options: Optional[:class:`str`]\n Extra command line arguments to pass to ffmpeg after the ``-i`` flag.\n\n Raises\n --------\n ClientException\n The subprocess failed to be created.\n \"\"\"\n\n def __init__(\n self,\n source: Union[str, io.BufferedIOBase],\n *,\n executable: str = 'ffmpeg',\n pipe: bool = False,\n stderr: Optional[IO[str]] = None,\n before_options: Optional[str] = None,\n options: Optional[str] = None,\n ) -> None:\n args = []\n subprocess_kwargs = {'stdin': subprocess.PIPE if pipe else subprocess.DEVNULL, 'stderr': stderr}\n\n if isinstance(before_options, str):\n args.extend(shlex.split(before_options))\n\n args.append('-i')\n args.append('-' if pipe else source)\n args.extend(('-f', 's16le', '-ar', '48000', '-ac', '2', '-loglevel', 'warning'))\n\n if isinstance(options, str):\n args.extend(shlex.split(options))\n\n args.append('pipe:1')\n\n super().__init__(source, executable=executable, args=args, **subprocess_kwargs)\n\n def read(self) -> bytes:\n ret = self._stdout.read(OpusEncoder.FRAME_SIZE)\n if len(ret) != OpusEncoder.FRAME_SIZE:\n return b''\n return ret\n\n def is_opus(self) -> bool:\n return False\n\n\nclass FFmpegOpusAudio(FFmpegAudio):\n \"\"\"An audio source from FFmpeg (or AVConv).\n\n This launches a sub-process to a specific input file given. However, rather than\n producing PCM packets like :class:`FFmpegPCMAudio` does that need to be encoded to\n Opus, this class produces Opus packets, skipping the encoding step done by the library.\n\n Alternatively, instead of instantiating this class directly, you can use\n :meth:`FFmpegOpusAudio.from_probe` to probe for bitrate and codec information. This\n can be used to opportunistically skip pointless re-encoding of existing Opus audio data\n for a boost in performance at the cost of a short initial delay to gather the information.\n The same can be achieved by passing ``copy`` to the ``codec`` parameter, but only if you\n know that the input source is Opus encoded beforehand.\n\n .. versionadded:: 1.3\n\n .. warning::\n\n You must have the ffmpeg or avconv executable in your path environment\n variable in order for this to work.\n\n Parameters\n ------------\n source: Union[:class:`str`, :class:`io.BufferedIOBase`]\n The input that ffmpeg will take and convert to Opus bytes.\n If ``pipe`` is ``True`` then this is a file-like object that is\n passed to the stdin of ffmpeg.\n bitrate: :class:`int`\n The bitrate in kbps to encode the output to. Defaults to ``128``.\n codec: Optional[:class:`str`]\n The codec to use to encode the audio data. Normally this would be\n just ``libopus``, but is used by :meth:`FFmpegOpusAudio.from_probe` to\n opportunistically skip pointlessly re-encoding Opus audio data by passing\n ``copy`` as the codec value. Any values other than ``copy``, ``opus``, or\n ``libopus`` will be considered ``libopus``. Defaults to ``libopus``.\n\n .. warning::\n\n Do not provide this parameter unless you are certain that the audio input is\n already Opus encoded. 
For typical use :meth:`FFmpegOpusAudio.from_probe`\n should be used to determine the proper value for this parameter.\n\n executable: :class:`str`\n The executable name (and path) to use. Defaults to ``ffmpeg``.\n pipe: :class:`bool`\n If ``True``, denotes that ``source`` parameter will be passed\n to the stdin of ffmpeg. Defaults to ``False``.\n stderr: Optional[:term:`py:file object`]\n A file-like object to pass to the Popen constructor.\n Could also be an instance of ``subprocess.PIPE``.\n before_options: Optional[:class:`str`]\n Extra command line arguments to pass to ffmpeg before the ``-i`` flag.\n options: Optional[:class:`str`]\n Extra command line arguments to pass to ffmpeg after the ``-i`` flag.\n\n Raises\n --------\n ClientException\n The subprocess failed to be created.\n \"\"\"\n\n def __init__(\n self,\n source: Union[str, io.BufferedIOBase],\n *,\n bitrate: Optional[int] = None,\n codec: Optional[str] = None,\n executable: str = 'ffmpeg',\n pipe: bool = False,\n stderr: Optional[IO[bytes]] = None,\n before_options: Optional[str] = None,\n options: Optional[str] = None,\n ) -> None:\n args = []\n subprocess_kwargs = {'stdin': subprocess.PIPE if pipe else subprocess.DEVNULL, 'stderr': stderr}\n\n if isinstance(before_options, str):\n args.extend(shlex.split(before_options))\n\n args.append('-i')\n args.append('-' if pipe else source)\n\n codec = 'copy' if codec in ('opus', 'libopus') else 'libopus'\n bitrate = bitrate if bitrate is not None else 128\n\n # fmt: off\n args.extend(('-map_metadata', '-1',\n '-f', 'opus',\n '-c:a', codec,\n '-ar', '48000',\n '-ac', '2',\n '-b:a', f'{bitrate}k',\n '-loglevel', 'warning'))\n # fmt: on\n\n if isinstance(options, str):\n args.extend(shlex.split(options))\n\n args.append('pipe:1')\n\n super().__init__(source, executable=executable, args=args, **subprocess_kwargs)\n self._packet_iter = OggStream(self._stdout).iter_packets()\n\n @classmethod\n async def from_probe(\n cls,\n source: str,\n *,\n method: Optional[Union[str, Callable[[str, str], Tuple[Optional[str], Optional[int]]]]] = None,\n **kwargs: Any,\n ) -> Self:\n \"\"\"|coro|\n\n A factory method that creates a :class:`FFmpegOpusAudio` after probing\n the input source for audio codec and bitrate information.\n\n Examples\n ----------\n\n Use this function to create an :class:`FFmpegOpusAudio` instance instead of the constructor: ::\n\n source = await discord.FFmpegOpusAudio.from_probe(\"song.webm\")\n voice_client.play(source)\n\n If you are on Windows and don't have ffprobe installed, use the ``fallback`` method\n to probe using ffmpeg instead: ::\n\n source = await discord.FFmpegOpusAudio.from_probe(\"song.webm\", method='fallback')\n voice_client.play(source)\n\n Using a custom method of determining codec and bitrate: ::\n\n def custom_probe(source, executable):\n # some analysis code here\n return codec, bitrate\n\n source = await discord.FFmpegOpusAudio.from_probe(\"song.webm\", method=custom_probe)\n voice_client.play(source)\n\n Parameters\n ------------\n source\n Identical to the ``source`` parameter for the constructor.\n method: Optional[Union[:class:`str`, Callable[:class:`str`, :class:`str`]]]\n The probing method used to determine bitrate and codec information. As a string, valid\n values are ``native`` to use ffprobe (or avprobe) and ``fallback`` to use ffmpeg\n (or avconv). As a callable, it must take two string arguments, ``source`` and\n ``executable``. 
Both parameters are the same values passed to this factory function.\n ``executable`` will default to ``ffmpeg`` if not provided as a keyword argument.\n kwargs\n The remaining parameters to be passed to the :class:`FFmpegOpusAudio` constructor,\n excluding ``bitrate`` and ``codec``.\n\n Raises\n --------\n AttributeError\n Invalid probe method, must be ``'native'`` or ``'fallback'``.\n TypeError\n Invalid value for ``probe`` parameter, must be :class:`str` or a callable.\n\n Returns\n --------\n :class:`FFmpegOpusAudio`\n An instance of this class.\n \"\"\"\n\n executable = kwargs.get('executable')\n codec, bitrate = await cls.probe(source, method=method, executable=executable)\n return cls(source, bitrate=bitrate, codec=codec, **kwargs)\n\n @classmethod\n async def probe(\n cls,\n source: str,\n *,\n method: Optional[Union[str, Callable[[str, str], Tuple[Optional[str], Optional[int]]]]] = None,\n executable: Optional[str] = None,\n ) -> Tuple[Optional[str], Optional[int]]:\n \"\"\"|coro|\n\n Probes the input source for bitrate and codec information.\n\n Parameters\n ------------\n source\n Identical to the ``source`` parameter for :class:`FFmpegOpusAudio`.\n method\n Identical to the ``method`` parameter for :meth:`FFmpegOpusAudio.from_probe`.\n executable: :class:`str`\n Identical to the ``executable`` parameter for :class:`FFmpegOpusAudio`.\n\n Raises\n --------\n AttributeError\n Invalid probe method, must be ``'native'`` or ``'fallback'``.\n TypeError\n Invalid value for ``probe`` parameter, must be :class:`str` or a callable.\n\n Returns\n ---------\n Optional[Tuple[Optional[:class:`str`], :class:`int`]]\n A 2-tuple with the codec and bitrate of the input source.\n \"\"\"\n\n method = method or 'native'\n executable = executable or 'ffmpeg'\n probefunc = fallback = None\n\n if isinstance(method, str):\n probefunc = getattr(cls, '_probe_codec_' + method, None)\n if probefunc is None:\n raise AttributeError(f\"Invalid probe method {method!r}\")\n\n if probefunc is cls._probe_codec_native:\n fallback = cls._probe_codec_fallback\n\n elif callable(method):\n probefunc = method\n fallback = cls._probe_codec_fallback\n else:\n raise TypeError(f\"Expected str or callable for parameter 'probe', not '{method.__class__.__name__}'\")\n\n codec = bitrate = None\n loop = asyncio.get_running_loop()\n try:\n codec, bitrate = await loop.run_in_executor(None, lambda: probefunc(source, executable))\n except Exception:\n if not fallback:\n _log.exception(\"Probe '%s' using '%s' failed\", method, executable)\n return # type: ignore\n\n _log.exception(\"Probe '%s' using '%s' failed, trying fallback\", method, executable)\n try:\n codec, bitrate = await loop.run_in_executor(None, lambda: fallback(source, executable))\n except Exception:\n _log.exception(\"Fallback probe using '%s' failed\", executable)\n else:\n _log.debug(\"Fallback probe found codec=%s, bitrate=%s\", codec, bitrate)\n else:\n _log.debug(\"Probe found codec=%s, bitrate=%s\", codec, bitrate)\n finally:\n return codec, bitrate\n\n @staticmethod\n def _probe_codec_native(source, executable: str = 'ffmpeg') -> Tuple[Optional[str], Optional[int]]:\n exe = executable[:2] + 'probe' if executable in ('ffmpeg', 'avconv') else executable\n args = [exe, '-v', 'quiet', '-print_format', 'json', '-show_streams', '-select_streams', 'a:0', source]\n output = subprocess.check_output(args, timeout=20)\n codec = bitrate = None\n\n if output:\n data = json.loads(output)\n streamdata = data['streams'][0]\n\n codec = streamdata.get('codec_name')\n bitrate = 
int(streamdata.get('bit_rate', 0))\n bitrate = max(round(bitrate / 1000), 512)\n\n return codec, bitrate\n\n @staticmethod\n def _probe_codec_fallback(source, executable: str = 'ffmpeg') -> Tuple[Optional[str], Optional[int]]:\n args = [executable, '-hide_banner', '-i', source]\n proc = subprocess.Popen(args, creationflags=CREATE_NO_WINDOW, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)\n out, _ = proc.communicate(timeout=20)\n output = out.decode('utf8')\n codec = bitrate = None\n\n codec_match = re.search(r\"Stream #0.*?Audio: (\\w+)\", output)\n if codec_match:\n codec = codec_match.group(1)\n\n br_match = re.search(r\"(\\d+) [kK]b/s\", output)\n if br_match:\n bitrate = max(int(br_match.group(1)), 512)\n\n return codec, bitrate\n\n def read(self) -> bytes:\n return next(self._packet_iter, b'')\n\n def is_opus(self) -> bool:\n return True\n\n\nclass PCMVolumeTransformer(AudioSource, Generic[AT]):\n \"\"\"Transforms a previous :class:`AudioSource` to have volume controls.\n\n This does not work on audio sources that have :meth:`AudioSource.is_opus`\n set to ``True``.\n\n Parameters\n ------------\n original: :class:`AudioSource`\n The original AudioSource to transform.\n volume: :class:`float`\n The initial volume to set it to.\n See :attr:`volume` for more info.\n\n Raises\n -------\n TypeError\n Not an audio source.\n ClientException\n The audio source is opus encoded.\n \"\"\"\n\n def __init__(self, original: AT, volume: float = 1.0):\n if not isinstance(original, AudioSource):\n raise TypeError(f'expected AudioSource not {original.__class__.__name__}.')\n\n if original.is_opus():\n raise ClientException('AudioSource must not be Opus encoded.')\n\n self.original: AT = original\n self.volume = volume\n\n @property\n def volume(self) -> float:\n \"\"\"Retrieves or sets the volume as a floating point percentage (e.g. 
``1.0`` for 100%).\"\"\"\n return self._volume\n\n @volume.setter\n def volume(self, value: float) -> None:\n self._volume = max(value, 0.0)\n\n def cleanup(self) -> None:\n self.original.cleanup()\n\n def read(self) -> bytes:\n ret = self.original.read()\n return audioop.mul(ret, 2, min(self._volume, 2.0))\n\n\nclass AudioPlayer(threading.Thread):\n DELAY: float = OpusEncoder.FRAME_LENGTH / 1000.0\n\n def __init__(\n self,\n source: AudioSource,\n client: VoiceClient,\n *,\n after: Optional[Callable[[Optional[Exception]], Any]] = None,\n ) -> None:\n threading.Thread.__init__(self)\n self.daemon: bool = True\n self.source: AudioSource = source\n self.client: VoiceClient = client\n self.after: Optional[Callable[[Optional[Exception]], Any]] = after\n\n self._end: threading.Event = threading.Event()\n self._resumed: threading.Event = threading.Event()\n self._resumed.set() # we are not paused\n self._current_error: Optional[Exception] = None\n self._connected: threading.Event = client._connected\n self._lock: threading.Lock = threading.Lock()\n\n if after is not None and not callable(after):\n raise TypeError('Expected a callable for the \"after\" parameter.')\n\n def _do_run(self) -> None:\n self.loops = 0\n self._start = time.perf_counter()\n\n # getattr lookup speed ups\n play_audio = self.client.send_audio_packet\n self._speak(SpeakingState.voice)\n\n while not self._end.is_set():\n # are we paused?\n if not self._resumed.is_set():\n # wait until we aren't\n self._resumed.wait()\n continue\n\n # are we disconnected from voice?\n if not self._connected.is_set():\n # wait until we are connected\n self._connected.wait()\n # reset our internal data\n self.loops = 0\n self._start = time.perf_counter()\n\n self.loops += 1\n data = self.source.read()\n\n if not data:\n self.stop()\n break\n\n play_audio(data, encode=not self.source.is_opus())\n next_time = self._start + self.DELAY * self.loops\n delay = max(0, self.DELAY + (next_time - time.perf_counter()))\n time.sleep(delay)\n\n def run(self) -> None:\n try:\n self._do_run()\n except Exception as exc:\n self._current_error = exc\n self.stop()\n finally:\n self._call_after()\n self.source.cleanup()\n\n def _call_after(self) -> None:\n error = self._current_error\n\n if self.after is not None:\n try:\n self.after(error)\n except Exception as exc:\n exc.__context__ = error\n _log.exception('Calling the after function failed.', exc_info=exc)\n elif error:\n _log.exception('Exception in voice thread %s', self.name, exc_info=error)\n\n def stop(self) -> None:\n self._end.set()\n self._resumed.set()\n self._speak(SpeakingState.none)\n\n def pause(self, *, update_speaking: bool = True) -> None:\n self._resumed.clear()\n if update_speaking:\n self._speak(SpeakingState.none)\n\n def resume(self, *, update_speaking: bool = True) -> None:\n self.loops: int = 0\n self._start: float = time.perf_counter()\n self._resumed.set()\n if update_speaking:\n self._speak(SpeakingState.voice)\n\n def is_playing(self) -> bool:\n return self._resumed.is_set() and not self._end.is_set()\n\n def is_paused(self) -> bool:\n return not self._end.is_set() and not self._resumed.is_set()\n\n def _set_source(self, source: AudioSource) -> None:\n with self._lock:\n self.pause(update_speaking=False)\n self.source = source\n self.resume(update_speaking=False)\n\n def _speak(self, speaking: SpeakingState) -> None:\n try:\n asyncio.run_coroutine_threadsafe(self.client.ws.speak(speaking), self.client.client.loop)\n except Exception:\n _log.exception(\"Speaking call in player 
failed\")\n", "path": "discord/player.py"}], "after_files": [{"content": "\"\"\"\nThe MIT License (MIT)\n\nCopyright (c) 2015-present Rapptz\n\nPermission is hereby granted, free of charge, to any person obtaining a\ncopy of this software and associated documentation files (the \"Software\"),\nto deal in the Software without restriction, including without limitation\nthe rights to use, copy, modify, merge, publish, distribute, sublicense,\nand/or sell copies of the Software, and to permit persons to whom the\nSoftware is furnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in\nall copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\nOR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\nFROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\nDEALINGS IN THE SOFTWARE.\n\"\"\"\nfrom __future__ import annotations\n\nimport threading\nimport subprocess\nimport audioop\nimport asyncio\nimport logging\nimport shlex\nimport time\nimport json\nimport sys\nimport re\nimport io\n\nfrom typing import Any, Callable, Generic, IO, Optional, TYPE_CHECKING, Tuple, TypeVar, Union\n\nfrom .enums import SpeakingState\nfrom .errors import ClientException\nfrom .opus import Encoder as OpusEncoder\nfrom .oggparse import OggStream\nfrom .utils import MISSING\n\nif TYPE_CHECKING:\n from typing_extensions import Self\n\n from .voice_client import VoiceClient\n\n\nAT = TypeVar('AT', bound='AudioSource')\n\n_log = logging.getLogger(__name__)\n\n__all__ = (\n 'AudioSource',\n 'PCMAudio',\n 'FFmpegAudio',\n 'FFmpegPCMAudio',\n 'FFmpegOpusAudio',\n 'PCMVolumeTransformer',\n)\n\nCREATE_NO_WINDOW: int\n\nif sys.platform != 'win32':\n CREATE_NO_WINDOW = 0\nelse:\n CREATE_NO_WINDOW = 0x08000000\n\n\nclass AudioSource:\n \"\"\"Represents an audio stream.\n\n The audio stream can be Opus encoded or not, however if the audio stream\n is not Opus encoded then the audio format must be 16-bit 48KHz stereo PCM.\n\n .. warning::\n\n The audio source reads are done in a separate thread.\n \"\"\"\n\n def read(self) -> bytes:\n \"\"\"Reads 20ms worth of audio.\n\n Subclasses must implement this.\n\n If the audio is complete, then returning an empty\n :term:`py:bytes-like object` to signal this is the way to do so.\n\n If :meth:`~AudioSource.is_opus` method returns ``True``, then it must return\n 20ms worth of Opus encoded audio. 
Otherwise, it must be 20ms\n worth of 16-bit 48KHz stereo PCM, which is about 3,840 bytes\n per frame (20ms worth of audio).\n\n Returns\n --------\n :class:`bytes`\n A bytes like object that represents the PCM or Opus data.\n \"\"\"\n raise NotImplementedError\n\n def is_opus(self) -> bool:\n \"\"\"Checks if the audio source is already encoded in Opus.\"\"\"\n return False\n\n def cleanup(self) -> None:\n \"\"\"Called when clean-up is needed to be done.\n\n Useful for clearing buffer data or processes after\n it is done playing audio.\n \"\"\"\n pass\n\n def __del__(self) -> None:\n self.cleanup()\n\n\nclass PCMAudio(AudioSource):\n \"\"\"Represents raw 16-bit 48KHz stereo PCM audio source.\n\n Attributes\n -----------\n stream: :term:`py:file object`\n A file-like object that reads byte data representing raw PCM.\n \"\"\"\n\n def __init__(self, stream: io.BufferedIOBase) -> None:\n self.stream: io.BufferedIOBase = stream\n\n def read(self) -> bytes:\n ret = self.stream.read(OpusEncoder.FRAME_SIZE)\n if len(ret) != OpusEncoder.FRAME_SIZE:\n return b''\n return ret\n\n\nclass FFmpegAudio(AudioSource):\n \"\"\"Represents an FFmpeg (or AVConv) based AudioSource.\n\n User created AudioSources using FFmpeg differently from how :class:`FFmpegPCMAudio` and\n :class:`FFmpegOpusAudio` work should subclass this.\n\n .. versionadded:: 1.3\n \"\"\"\n\n def __init__(\n self,\n source: Union[str, io.BufferedIOBase],\n *,\n executable: str = 'ffmpeg',\n args: Any,\n **subprocess_kwargs: Any,\n ):\n piping = subprocess_kwargs.get('stdin') == subprocess.PIPE\n if piping and isinstance(source, str):\n raise TypeError(\"parameter conflict: 'source' parameter cannot be a string when piping to stdin\")\n\n args = [executable, *args]\n kwargs = {'stdout': subprocess.PIPE}\n kwargs.update(subprocess_kwargs)\n\n # Ensure attribute is assigned even in the case of errors\n self._process: subprocess.Popen = MISSING\n self._process = self._spawn_process(args, **kwargs)\n self._stdout: IO[bytes] = self._process.stdout # type: ignore # process stdout is explicitly set\n self._stdin: Optional[IO[bytes]] = None\n self._pipe_thread: Optional[threading.Thread] = None\n\n if piping:\n n = f'popen-stdin-writer:{id(self):#x}'\n self._stdin = self._process.stdin\n self._pipe_thread = threading.Thread(target=self._pipe_writer, args=(source,), daemon=True, name=n)\n self._pipe_thread.start()\n\n def _spawn_process(self, args: Any, **subprocess_kwargs: Any) -> subprocess.Popen:\n process = None\n try:\n process = subprocess.Popen(args, creationflags=CREATE_NO_WINDOW, **subprocess_kwargs)\n except FileNotFoundError:\n executable = args.partition(' ')[0] if isinstance(args, str) else args[0]\n raise ClientException(executable + ' was not found.') from None\n except subprocess.SubprocessError as exc:\n raise ClientException(f'Popen failed: {exc.__class__.__name__}: {exc}') from exc\n else:\n return process\n\n def _kill_process(self) -> None:\n proc = self._process\n if proc is MISSING:\n return\n\n _log.debug('Preparing to terminate ffmpeg process %s.', proc.pid)\n\n try:\n proc.kill()\n except Exception:\n _log.exception('Ignoring error attempting to kill ffmpeg process %s', proc.pid)\n\n if proc.poll() is None:\n _log.info('ffmpeg process %s has not terminated. 
Waiting to terminate...', proc.pid)\n proc.communicate()\n _log.info('ffmpeg process %s should have terminated with a return code of %s.', proc.pid, proc.returncode)\n else:\n _log.info('ffmpeg process %s successfully terminated with return code of %s.', proc.pid, proc.returncode)\n\n def _pipe_writer(self, source: io.BufferedIOBase) -> None:\n while self._process:\n # arbitrarily large read size\n data = source.read(8192)\n if not data:\n if self._stdin is not None:\n self._stdin.close()\n return\n try:\n if self._stdin is not None:\n self._stdin.write(data)\n except Exception:\n _log.debug('Write error for %s, this is probably not a problem', self, exc_info=True)\n # at this point the source data is either exhausted or the process is fubar\n self._process.terminate()\n return\n\n def cleanup(self) -> None:\n self._kill_process()\n self._process = self._stdout = self._stdin = MISSING\n\n\nclass FFmpegPCMAudio(FFmpegAudio):\n \"\"\"An audio source from FFmpeg (or AVConv).\n\n This launches a sub-process to a specific input file given.\n\n .. warning::\n\n You must have the ffmpeg or avconv executable in your path environment\n variable in order for this to work.\n\n Parameters\n ------------\n source: Union[:class:`str`, :class:`io.BufferedIOBase`]\n The input that ffmpeg will take and convert to PCM bytes.\n If ``pipe`` is ``True`` then this is a file-like object that is\n passed to the stdin of ffmpeg.\n executable: :class:`str`\n The executable name (and path) to use. Defaults to ``ffmpeg``.\n pipe: :class:`bool`\n If ``True``, denotes that ``source`` parameter will be passed\n to the stdin of ffmpeg. Defaults to ``False``.\n stderr: Optional[:term:`py:file object`]\n A file-like object to pass to the Popen constructor.\n Could also be an instance of ``subprocess.PIPE``.\n before_options: Optional[:class:`str`]\n Extra command line arguments to pass to ffmpeg before the ``-i`` flag.\n options: Optional[:class:`str`]\n Extra command line arguments to pass to ffmpeg after the ``-i`` flag.\n\n Raises\n --------\n ClientException\n The subprocess failed to be created.\n \"\"\"\n\n def __init__(\n self,\n source: Union[str, io.BufferedIOBase],\n *,\n executable: str = 'ffmpeg',\n pipe: bool = False,\n stderr: Optional[IO[str]] = None,\n before_options: Optional[str] = None,\n options: Optional[str] = None,\n ) -> None:\n args = []\n subprocess_kwargs = {'stdin': subprocess.PIPE if pipe else subprocess.DEVNULL, 'stderr': stderr}\n\n if isinstance(before_options, str):\n args.extend(shlex.split(before_options))\n\n args.append('-i')\n args.append('-' if pipe else source)\n args.extend(('-f', 's16le', '-ar', '48000', '-ac', '2', '-loglevel', 'warning'))\n\n if isinstance(options, str):\n args.extend(shlex.split(options))\n\n args.append('pipe:1')\n\n super().__init__(source, executable=executable, args=args, **subprocess_kwargs)\n\n def read(self) -> bytes:\n ret = self._stdout.read(OpusEncoder.FRAME_SIZE)\n if len(ret) != OpusEncoder.FRAME_SIZE:\n return b''\n return ret\n\n def is_opus(self) -> bool:\n return False\n\n\nclass FFmpegOpusAudio(FFmpegAudio):\n \"\"\"An audio source from FFmpeg (or AVConv).\n\n This launches a sub-process to a specific input file given. 
However, rather than\n producing PCM packets like :class:`FFmpegPCMAudio` does that need to be encoded to\n Opus, this class produces Opus packets, skipping the encoding step done by the library.\n\n Alternatively, instead of instantiating this class directly, you can use\n :meth:`FFmpegOpusAudio.from_probe` to probe for bitrate and codec information. This\n can be used to opportunistically skip pointless re-encoding of existing Opus audio data\n for a boost in performance at the cost of a short initial delay to gather the information.\n The same can be achieved by passing ``copy`` to the ``codec`` parameter, but only if you\n know that the input source is Opus encoded beforehand.\n\n .. versionadded:: 1.3\n\n .. warning::\n\n You must have the ffmpeg or avconv executable in your path environment\n variable in order for this to work.\n\n Parameters\n ------------\n source: Union[:class:`str`, :class:`io.BufferedIOBase`]\n The input that ffmpeg will take and convert to Opus bytes.\n If ``pipe`` is ``True`` then this is a file-like object that is\n passed to the stdin of ffmpeg.\n bitrate: :class:`int`\n The bitrate in kbps to encode the output to. Defaults to ``128``.\n codec: Optional[:class:`str`]\n The codec to use to encode the audio data. Normally this would be\n just ``libopus``, but is used by :meth:`FFmpegOpusAudio.from_probe` to\n opportunistically skip pointlessly re-encoding Opus audio data by passing\n ``copy`` as the codec value. Any values other than ``copy``, ``opus``, or\n ``libopus`` will be considered ``libopus``. Defaults to ``libopus``.\n\n .. warning::\n\n Do not provide this parameter unless you are certain that the audio input is\n already Opus encoded. For typical use :meth:`FFmpegOpusAudio.from_probe`\n should be used to determine the proper value for this parameter.\n\n executable: :class:`str`\n The executable name (and path) to use. Defaults to ``ffmpeg``.\n pipe: :class:`bool`\n If ``True``, denotes that ``source`` parameter will be passed\n to the stdin of ffmpeg. 
Defaults to ``False``.\n stderr: Optional[:term:`py:file object`]\n A file-like object to pass to the Popen constructor.\n Could also be an instance of ``subprocess.PIPE``.\n before_options: Optional[:class:`str`]\n Extra command line arguments to pass to ffmpeg before the ``-i`` flag.\n options: Optional[:class:`str`]\n Extra command line arguments to pass to ffmpeg after the ``-i`` flag.\n\n Raises\n --------\n ClientException\n The subprocess failed to be created.\n \"\"\"\n\n def __init__(\n self,\n source: Union[str, io.BufferedIOBase],\n *,\n bitrate: Optional[int] = None,\n codec: Optional[str] = None,\n executable: str = 'ffmpeg',\n pipe: bool = False,\n stderr: Optional[IO[bytes]] = None,\n before_options: Optional[str] = None,\n options: Optional[str] = None,\n ) -> None:\n args = []\n subprocess_kwargs = {'stdin': subprocess.PIPE if pipe else subprocess.DEVNULL, 'stderr': stderr}\n\n if isinstance(before_options, str):\n args.extend(shlex.split(before_options))\n\n args.append('-i')\n args.append('-' if pipe else source)\n\n codec = 'copy' if codec in ('opus', 'libopus') else 'libopus'\n bitrate = bitrate if bitrate is not None else 128\n\n # fmt: off\n args.extend(('-map_metadata', '-1',\n '-f', 'opus',\n '-c:a', codec,\n '-ar', '48000',\n '-ac', '2',\n '-b:a', f'{bitrate}k',\n '-loglevel', 'warning'))\n # fmt: on\n\n if isinstance(options, str):\n args.extend(shlex.split(options))\n\n args.append('pipe:1')\n\n super().__init__(source, executable=executable, args=args, **subprocess_kwargs)\n self._packet_iter = OggStream(self._stdout).iter_packets()\n\n @classmethod\n async def from_probe(\n cls,\n source: str,\n *,\n method: Optional[Union[str, Callable[[str, str], Tuple[Optional[str], Optional[int]]]]] = None,\n **kwargs: Any,\n ) -> Self:\n \"\"\"|coro|\n\n A factory method that creates a :class:`FFmpegOpusAudio` after probing\n the input source for audio codec and bitrate information.\n\n Examples\n ----------\n\n Use this function to create an :class:`FFmpegOpusAudio` instance instead of the constructor: ::\n\n source = await discord.FFmpegOpusAudio.from_probe(\"song.webm\")\n voice_client.play(source)\n\n If you are on Windows and don't have ffprobe installed, use the ``fallback`` method\n to probe using ffmpeg instead: ::\n\n source = await discord.FFmpegOpusAudio.from_probe(\"song.webm\", method='fallback')\n voice_client.play(source)\n\n Using a custom method of determining codec and bitrate: ::\n\n def custom_probe(source, executable):\n # some analysis code here\n return codec, bitrate\n\n source = await discord.FFmpegOpusAudio.from_probe(\"song.webm\", method=custom_probe)\n voice_client.play(source)\n\n Parameters\n ------------\n source\n Identical to the ``source`` parameter for the constructor.\n method: Optional[Union[:class:`str`, Callable[:class:`str`, :class:`str`]]]\n The probing method used to determine bitrate and codec information. As a string, valid\n values are ``native`` to use ffprobe (or avprobe) and ``fallback`` to use ffmpeg\n (or avconv). As a callable, it must take two string arguments, ``source`` and\n ``executable``. 
Both parameters are the same values passed to this factory function.\n ``executable`` will default to ``ffmpeg`` if not provided as a keyword argument.\n kwargs\n The remaining parameters to be passed to the :class:`FFmpegOpusAudio` constructor,\n excluding ``bitrate`` and ``codec``.\n\n Raises\n --------\n AttributeError\n Invalid probe method, must be ``'native'`` or ``'fallback'``.\n TypeError\n Invalid value for ``probe`` parameter, must be :class:`str` or a callable.\n\n Returns\n --------\n :class:`FFmpegOpusAudio`\n An instance of this class.\n \"\"\"\n\n executable = kwargs.get('executable')\n codec, bitrate = await cls.probe(source, method=method, executable=executable)\n return cls(source, bitrate=bitrate, codec=codec, **kwargs)\n\n @classmethod\n async def probe(\n cls,\n source: str,\n *,\n method: Optional[Union[str, Callable[[str, str], Tuple[Optional[str], Optional[int]]]]] = None,\n executable: Optional[str] = None,\n ) -> Tuple[Optional[str], Optional[int]]:\n \"\"\"|coro|\n\n Probes the input source for bitrate and codec information.\n\n Parameters\n ------------\n source\n Identical to the ``source`` parameter for :class:`FFmpegOpusAudio`.\n method\n Identical to the ``method`` parameter for :meth:`FFmpegOpusAudio.from_probe`.\n executable: :class:`str`\n Identical to the ``executable`` parameter for :class:`FFmpegOpusAudio`.\n\n Raises\n --------\n AttributeError\n Invalid probe method, must be ``'native'`` or ``'fallback'``.\n TypeError\n Invalid value for ``probe`` parameter, must be :class:`str` or a callable.\n\n Returns\n ---------\n Optional[Tuple[Optional[:class:`str`], :class:`int`]]\n A 2-tuple with the codec and bitrate of the input source.\n \"\"\"\n\n method = method or 'native'\n executable = executable or 'ffmpeg'\n probefunc = fallback = None\n\n if isinstance(method, str):\n probefunc = getattr(cls, '_probe_codec_' + method, None)\n if probefunc is None:\n raise AttributeError(f\"Invalid probe method {method!r}\")\n\n if probefunc is cls._probe_codec_native:\n fallback = cls._probe_codec_fallback\n\n elif callable(method):\n probefunc = method\n fallback = cls._probe_codec_fallback\n else:\n raise TypeError(f\"Expected str or callable for parameter 'probe', not '{method.__class__.__name__}'\")\n\n codec = bitrate = None\n loop = asyncio.get_running_loop()\n try:\n codec, bitrate = await loop.run_in_executor(None, lambda: probefunc(source, executable))\n except Exception:\n if not fallback:\n _log.exception(\"Probe '%s' using '%s' failed\", method, executable)\n return # type: ignore\n\n _log.exception(\"Probe '%s' using '%s' failed, trying fallback\", method, executable)\n try:\n codec, bitrate = await loop.run_in_executor(None, lambda: fallback(source, executable))\n except Exception:\n _log.exception(\"Fallback probe using '%s' failed\", executable)\n else:\n _log.debug(\"Fallback probe found codec=%s, bitrate=%s\", codec, bitrate)\n else:\n _log.debug(\"Probe found codec=%s, bitrate=%s\", codec, bitrate)\n finally:\n return codec, bitrate\n\n @staticmethod\n def _probe_codec_native(source, executable: str = 'ffmpeg') -> Tuple[Optional[str], Optional[int]]:\n exe = executable[:2] + 'probe' if executable in ('ffmpeg', 'avconv') else executable\n args = [exe, '-v', 'quiet', '-print_format', 'json', '-show_streams', '-select_streams', 'a:0', source]\n output = subprocess.check_output(args, timeout=20)\n codec = bitrate = None\n\n if output:\n data = json.loads(output)\n streamdata = data['streams'][0]\n\n codec = streamdata.get('codec_name')\n bitrate = 
int(streamdata.get('bit_rate', 0))\n bitrate = max(round(bitrate / 1000), 512)\n\n return codec, bitrate\n\n @staticmethod\n def _probe_codec_fallback(source, executable: str = 'ffmpeg') -> Tuple[Optional[str], Optional[int]]:\n args = [executable, '-hide_banner', '-i', source]\n proc = subprocess.Popen(args, creationflags=CREATE_NO_WINDOW, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)\n out, _ = proc.communicate(timeout=20)\n output = out.decode('utf8')\n codec = bitrate = None\n\n codec_match = re.search(r\"Stream #0.*?Audio: (\\w+)\", output)\n if codec_match:\n codec = codec_match.group(1)\n\n br_match = re.search(r\"(\\d+) [kK]b/s\", output)\n if br_match:\n bitrate = max(int(br_match.group(1)), 512)\n\n return codec, bitrate\n\n def read(self) -> bytes:\n return next(self._packet_iter, b'')\n\n def is_opus(self) -> bool:\n return True\n\n\nclass PCMVolumeTransformer(AudioSource, Generic[AT]):\n \"\"\"Transforms a previous :class:`AudioSource` to have volume controls.\n\n This does not work on audio sources that have :meth:`AudioSource.is_opus`\n set to ``True``.\n\n Parameters\n ------------\n original: :class:`AudioSource`\n The original AudioSource to transform.\n volume: :class:`float`\n The initial volume to set it to.\n See :attr:`volume` for more info.\n\n Raises\n -------\n TypeError\n Not an audio source.\n ClientException\n The audio source is opus encoded.\n \"\"\"\n\n def __init__(self, original: AT, volume: float = 1.0):\n if not isinstance(original, AudioSource):\n raise TypeError(f'expected AudioSource not {original.__class__.__name__}.')\n\n if original.is_opus():\n raise ClientException('AudioSource must not be Opus encoded.')\n\n self.original: AT = original\n self.volume = volume\n\n @property\n def volume(self) -> float:\n \"\"\"Retrieves or sets the volume as a floating point percentage (e.g. 
``1.0`` for 100%).\"\"\"\n return self._volume\n\n @volume.setter\n def volume(self, value: float) -> None:\n self._volume = max(value, 0.0)\n\n def cleanup(self) -> None:\n self.original.cleanup()\n\n def read(self) -> bytes:\n ret = self.original.read()\n return audioop.mul(ret, 2, min(self._volume, 2.0))\n\n\nclass AudioPlayer(threading.Thread):\n DELAY: float = OpusEncoder.FRAME_LENGTH / 1000.0\n\n def __init__(\n self,\n source: AudioSource,\n client: VoiceClient,\n *,\n after: Optional[Callable[[Optional[Exception]], Any]] = None,\n ) -> None:\n threading.Thread.__init__(self)\n self.daemon: bool = True\n self.source: AudioSource = source\n self.client: VoiceClient = client\n self.after: Optional[Callable[[Optional[Exception]], Any]] = after\n\n self._end: threading.Event = threading.Event()\n self._resumed: threading.Event = threading.Event()\n self._resumed.set() # we are not paused\n self._current_error: Optional[Exception] = None\n self._connected: threading.Event = client._connected\n self._lock: threading.Lock = threading.Lock()\n\n if after is not None and not callable(after):\n raise TypeError('Expected a callable for the \"after\" parameter.')\n\n def _do_run(self) -> None:\n self.loops = 0\n self._start = time.perf_counter()\n\n # getattr lookup speed ups\n play_audio = self.client.send_audio_packet\n self._speak(SpeakingState.voice)\n\n while not self._end.is_set():\n # are we paused?\n if not self._resumed.is_set():\n # wait until we aren't\n self._resumed.wait()\n continue\n\n # are we disconnected from voice?\n if not self._connected.is_set():\n # wait until we are connected\n self._connected.wait()\n # reset our internal data\n self.loops = 0\n self._start = time.perf_counter()\n\n self.loops += 1\n data = self.source.read()\n\n if not data:\n self.stop()\n break\n\n play_audio(data, encode=not self.source.is_opus())\n next_time = self._start + self.DELAY * self.loops\n delay = max(0, self.DELAY + (next_time - time.perf_counter()))\n time.sleep(delay)\n\n def run(self) -> None:\n try:\n self._do_run()\n except Exception as exc:\n self._current_error = exc\n self.stop()\n finally:\n self._call_after()\n self.source.cleanup()\n\n def _call_after(self) -> None:\n error = self._current_error\n\n if self.after is not None:\n try:\n self.after(error)\n except Exception as exc:\n exc.__context__ = error\n _log.exception('Calling the after function failed.', exc_info=exc)\n elif error:\n _log.exception('Exception in voice thread %s', self.name, exc_info=error)\n\n def stop(self) -> None:\n self._end.set()\n self._resumed.set()\n self._speak(SpeakingState.none)\n\n def pause(self, *, update_speaking: bool = True) -> None:\n self._resumed.clear()\n if update_speaking:\n self._speak(SpeakingState.none)\n\n def resume(self, *, update_speaking: bool = True) -> None:\n self.loops: int = 0\n self._start: float = time.perf_counter()\n self._resumed.set()\n if update_speaking:\n self._speak(SpeakingState.voice)\n\n def is_playing(self) -> bool:\n return self._resumed.is_set() and not self._end.is_set()\n\n def is_paused(self) -> bool:\n return not self._end.is_set() and not self._resumed.is_set()\n\n def _set_source(self, source: AudioSource) -> None:\n with self._lock:\n self.pause(update_speaking=False)\n self.source = source\n self.resume(update_speaking=False)\n\n def _speak(self, speaking: SpeakingState) -> None:\n try:\n asyncio.run_coroutine_threadsafe(self.client.ws.speak(speaking), self.client.client.loop)\n except Exception:\n _log.exception(\"Speaking call in player 
failed\")\n", "path": "discord/player.py"}]} |
gh_patches_debug_1118 | rasdani/github-patches | git_diff | librosa__librosa-1493 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Numpy array truthiness error during effects.split
When loading a file and trying to run librosa.effects.split() on it, I get this error:
```
File "/usr/local/opt/[email protected]/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/librosa/effects.py", line 574, in split
if non_silent[0]:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```
**To Reproduce**
```
import librosa
import numpy as np
y, sr = librosa.load("path/to/file.mp3", sr=44100, mono=False)
intervals = librosa.effects.split(y, top_db=22, ref=np.max, frame_length=44100, hop_length=44100)
```
**Expected behavior**
The split effect returning an array of non-silent intervals.
**Software versions***
```
INSTALLED VERSIONS
------------------
python: 3.8.12 (default, Oct 13 2021, 06:42:42)
[Clang 13.0.0 (clang-1300.0.29.3)]
librosa: 0.9.1
audioread: 2.1.9
numpy: 1.22.4
scipy: 1.8.1
sklearn: 1.1.1
joblib: 1.1.0
decorator: 5.1.1
soundfile: 0.10.3
resampy: 0.2.2
numba: 0.55.2
numpydoc: None
sphinx: None
sphinx_rtd_theme: None
sphinxcontrib.versioning: None
sphinx-gallery: None
pytest: None
pytest-mpl: None
pytest-cov: None
matplotlib: None
presets: None
```
**Additional context**
This is a file I haven't touched in a while, so I apologize if it is something that is covered in a changelog somewhere. However, I was unable to find any similar issues.
--- END ISSUE ---
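For context on the traceback above: with `mono=False` the loaded signal has shape `(channels, samples)`, and the frame-wise non-silent indicator that `split` builds internally appears to keep a leading channel axis, so `non_silent[0]` is itself an array and its truth value is ambiguous. The snippet below is an illustrative sketch only — it uses a synthetic two-channel signal and arbitrary `top_db`/frame parameters rather than the reporter's file — and shows a mono-downmix workaround that sidesteps the error while still slicing the original multi-channel audio.

```
import numpy as np
import librosa

# Synthetic stereo signal: silent left channel, 440 Hz tone on the right.
sr = 22050
t = np.linspace(0, 1.0, sr, endpoint=False)
y = np.stack([np.zeros_like(t), 0.5 * np.sin(2 * np.pi * 440 * t)])  # shape (2, sr)

# Calling librosa.effects.split(y, ...) directly on this 2-D array reproduces the
# ValueError reported above under librosa 0.9.1.
# Workaround: run the silence analysis on a mono down-mix, then apply the
# returned sample intervals to the original multi-channel array.
intervals = librosa.effects.split(librosa.to_mono(y), top_db=22,
                                  frame_length=2048, hop_length=512)
segments = [y[..., start:end] for start, end in intervals]
```

The `aggregate` parameter documented on the internal helper in the listing below suggests the intended behaviour is to collapse per-channel dB measurements before thresholding, which is consistent with this down-mix workaround.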
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `librosa/effects.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 """
4 Effects
5 =======
6
7 Harmonic-percussive source separation
8 -------------------------------------
9 .. autosummary::
10 :toctree: generated/
11
12 hpss
13 harmonic
14 percussive
15
16 Time and frequency
17 ------------------
18 .. autosummary::
19 :toctree: generated/
20
21 time_stretch
22 pitch_shift
23
24 Miscellaneous
25 -------------
26 .. autosummary::
27 :toctree: generated/
28
29 remix
30 trim
31 split
32 preemphasis
33 deemphasis
34 """
35
36 import numpy as np
37 import scipy.signal
38
39 from . import core
40 from . import decompose
41 from . import feature
42 from . import util
43 from .util.exceptions import ParameterError
44 from .util.decorators import deprecate_positional_args
45
46 __all__ = [
47 "hpss",
48 "harmonic",
49 "percussive",
50 "time_stretch",
51 "pitch_shift",
52 "remix",
53 "trim",
54 "split",
55 ]
56
57
58 def hpss(y, **kwargs):
59 """Decompose an audio time series into harmonic and percussive components.
60
61 This function automates the STFT->HPSS->ISTFT pipeline, and ensures that
62 the output waveforms have equal length to the input waveform ``y``.
63
64 Parameters
65 ----------
66 y : np.ndarray [shape=(..., n)]
67 audio time series. Multi-channel is supported.
68 **kwargs : additional keyword arguments.
69 See `librosa.decompose.hpss` for details.
70
71 Returns
72 -------
73 y_harmonic : np.ndarray [shape=(..., n)]
74 audio time series of the harmonic elements
75 y_percussive : np.ndarray [shape=(..., n)]
76 audio time series of the percussive elements
77
78 See Also
79 --------
80 harmonic : Extract only the harmonic component
81 percussive : Extract only the percussive component
82 librosa.decompose.hpss : HPSS on spectrograms
83
84 Examples
85 --------
86 >>> # Extract harmonic and percussive components
87 >>> y, sr = librosa.load(librosa.ex('choice'))
88 >>> y_harmonic, y_percussive = librosa.effects.hpss(y)
89
90 >>> # Get a more isolated percussive component by widening its margin
91 >>> y_harmonic, y_percussive = librosa.effects.hpss(y, margin=(1.0,5.0))
92
93 """
94
95 # Compute the STFT matrix
96 stft = core.stft(y)
97
98 # Decompose into harmonic and percussives
99 stft_harm, stft_perc = decompose.hpss(stft, **kwargs)
100
101 # Invert the STFTs. Adjust length to match the input.
102 y_harm = core.istft(stft_harm, dtype=y.dtype, length=y.shape[-1])
103 y_perc = core.istft(stft_perc, dtype=y.dtype, length=y.shape[-1])
104
105 return y_harm, y_perc
106
107
108 def harmonic(y, **kwargs):
109 """Extract harmonic elements from an audio time-series.
110
111 Parameters
112 ----------
113 y : np.ndarray [shape=(..., n)]
114 audio time series. Multi-channel is supported.
115 **kwargs : additional keyword arguments.
116 See `librosa.decompose.hpss` for details.
117
118 Returns
119 -------
120 y_harmonic : np.ndarray [shape=(..., n)]
121 audio time series of just the harmonic portion
122
123 See Also
124 --------
125 hpss : Separate harmonic and percussive components
126 percussive : Extract only the percussive component
127 librosa.decompose.hpss : HPSS for spectrograms
128
129 Examples
130 --------
131 >>> # Extract harmonic component
132 >>> y, sr = librosa.load(librosa.ex('choice'))
133 >>> y_harmonic = librosa.effects.harmonic(y)
134
135 >>> # Use a margin > 1.0 for greater harmonic separation
136 >>> y_harmonic = librosa.effects.harmonic(y, margin=3.0)
137
138 """
139
140 # Compute the STFT matrix
141 stft = core.stft(y)
142
143 # Remove percussives
144 stft_harm = decompose.hpss(stft, **kwargs)[0]
145
146 # Invert the STFTs
147 y_harm = core.istft(stft_harm, dtype=y.dtype, length=y.shape[-1])
148
149 return y_harm
150
151
152 def percussive(y, **kwargs):
153 """Extract percussive elements from an audio time-series.
154
155 Parameters
156 ----------
157 y : np.ndarray [shape=(..., n)]
158 audio time series. Multi-channel is supported.
159 **kwargs : additional keyword arguments.
160 See `librosa.decompose.hpss` for details.
161
162 Returns
163 -------
164 y_percussive : np.ndarray [shape=(..., n)]
165 audio time series of just the percussive portion
166
167 See Also
168 --------
169 hpss : Separate harmonic and percussive components
170 harmonic : Extract only the harmonic component
171 librosa.decompose.hpss : HPSS for spectrograms
172
173 Examples
174 --------
175 >>> # Extract percussive component
176 >>> y, sr = librosa.load(librosa.ex('choice'))
177 >>> y_percussive = librosa.effects.percussive(y)
178
179 >>> # Use a margin > 1.0 for greater percussive separation
180 >>> y_percussive = librosa.effects.percussive(y, margin=3.0)
181
182 """
183
184 # Compute the STFT matrix
185 stft = core.stft(y)
186
187 # Remove harmonics
188 stft_perc = decompose.hpss(stft, **kwargs)[1]
189
190 # Invert the STFT
191 y_perc = core.istft(stft_perc, dtype=y.dtype, length=y.shape[-1])
192
193 return y_perc
194
195
196 @deprecate_positional_args
197 def time_stretch(y, *, rate, **kwargs):
198 """Time-stretch an audio series by a fixed rate.
199
200 Parameters
201 ----------
202 y : np.ndarray [shape=(..., n)]
203 audio time series. Multi-channel is supported.
204 rate : float > 0 [scalar]
205 Stretch factor. If ``rate > 1``, then the signal is sped up.
206 If ``rate < 1``, then the signal is slowed down.
207 **kwargs : additional keyword arguments.
208 See `librosa.decompose.stft` for details.
209
210 Returns
211 -------
212 y_stretch : np.ndarray [shape=(..., round(n/rate))]
213 audio time series stretched by the specified rate
214
215 See Also
216 --------
217 pitch_shift :
218 pitch shifting
219 librosa.phase_vocoder :
220 spectrogram phase vocoder
221 pyrubberband.pyrb.time_stretch :
222 high-quality time stretching using RubberBand
223
224 Examples
225 --------
226 Compress to be twice as fast
227
228 >>> y, sr = librosa.load(librosa.ex('choice'))
229 >>> y_fast = librosa.effects.time_stretch(y, rate=2.0)
230
231 Or half the original speed
232
233 >>> y_slow = librosa.effects.time_stretch(y, rate=0.5)
234
235 """
236
237 if rate <= 0:
238 raise ParameterError("rate must be a positive number")
239
240 # Construct the short-term Fourier transform (STFT)
241 stft = core.stft(y, **kwargs)
242
243 # Stretch by phase vocoding
244 stft_stretch = core.phase_vocoder(
245 stft,
246 rate=rate,
247 hop_length=kwargs.get("hop_length", None),
248 n_fft=kwargs.get("n_fft", None),
249 )
250
251 # Predict the length of y_stretch
252 len_stretch = int(round(y.shape[-1] / rate))
253
254 # Invert the STFT
255 y_stretch = core.istft(stft_stretch, dtype=y.dtype, length=len_stretch, **kwargs)
256
257 return y_stretch
258
259
260 @deprecate_positional_args
261 def pitch_shift(
262 y, *, sr, n_steps, bins_per_octave=12, res_type="kaiser_best", **kwargs
263 ):
264 """Shift the pitch of a waveform by ``n_steps`` steps.
265
266 A step is equal to a semitone if ``bins_per_octave`` is set to 12.
267
268 Parameters
269 ----------
270 y : np.ndarray [shape=(..., n)]
271 audio time series. Multi-channel is supported.
272
273 sr : number > 0 [scalar]
274 audio sampling rate of ``y``
275
276 n_steps : float [scalar]
277 how many (fractional) steps to shift ``y``
278
279 bins_per_octave : float > 0 [scalar]
280 how many steps per octave
281
282 res_type : string
283 Resample type. By default, 'kaiser_best' is used.
284
285 See `librosa.resample` for more information.
286
287 **kwargs : additional keyword arguments.
288 See `librosa.decompose.stft` for details.
289
290 Returns
291 -------
292 y_shift : np.ndarray [shape=(..., n)]
293 The pitch-shifted audio time-series
294
295 See Also
296 --------
297 time_stretch :
298 time stretching
299 librosa.phase_vocoder :
300 spectrogram phase vocoder
301 pyrubberband.pyrb.pitch_shift :
302 high-quality pitch shifting using RubberBand
303
304 Examples
305 --------
306 Shift up by a major third (four steps if ``bins_per_octave`` is 12)
307
308 >>> y, sr = librosa.load(librosa.ex('choice'))
309 >>> y_third = librosa.effects.pitch_shift(y, sr=sr, n_steps=4)
310
311 Shift down by a tritone (six steps if ``bins_per_octave`` is 12)
312
313 >>> y_tritone = librosa.effects.pitch_shift(y, sr=sr, n_steps=-6)
314
315 Shift up by 3 quarter-tones
316
317 >>> y_three_qt = librosa.effects.pitch_shift(y, sr=sr, n_steps=3,
318 ... bins_per_octave=24)
319 """
320
321 if bins_per_octave < 1 or not np.issubdtype(type(bins_per_octave), np.integer):
322 raise ParameterError("bins_per_octave must be a positive integer.")
323
324 rate = 2.0 ** (-float(n_steps) / bins_per_octave)
325
326 # Stretch in time, then resample
327 y_shift = core.resample(
328 time_stretch(y, rate=rate, **kwargs),
329 orig_sr=float(sr) / rate,
330 target_sr=sr,
331 res_type=res_type,
332 )
333
334 # Crop to the same dimension as the input
335 return util.fix_length(y_shift, size=y.shape[-1])
336
337
338 @deprecate_positional_args
339 def remix(y, intervals, *, align_zeros=True):
340 """Remix an audio signal by re-ordering time intervals.
341
342 Parameters
343 ----------
344 y : np.ndarray [shape=(..., t)]
345 Audio time series. Multi-channel is supported.
346 intervals : iterable of tuples (start, end)
347 An iterable (list-like or generator) where the ``i``th item
348 ``intervals[i]`` indicates the start and end (in samples)
349 of a slice of ``y``.
350 align_zeros : boolean
351 If ``True``, interval boundaries are mapped to the closest
352 zero-crossing in ``y``. If ``y`` is stereo, zero-crossings
353 are computed after converting to mono.
354
355 Returns
356 -------
357 y_remix : np.ndarray [shape=(..., d)]
358 ``y`` remixed in the order specified by ``intervals``
359
360 Examples
361 --------
362 Load in the example track and reverse the beats
363
364 >>> y, sr = librosa.load(librosa.ex('choice'))
365
366 Compute beats
367
368 >>> _, beat_frames = librosa.beat.beat_track(y=y, sr=sr,
369 ... hop_length=512)
370
371 Convert from frames to sample indices
372
373 >>> beat_samples = librosa.frames_to_samples(beat_frames)
374
375 Generate intervals from consecutive events
376
377 >>> intervals = librosa.util.frame(beat_samples, frame_length=2,
378 ... hop_length=1).T
379
380 Reverse the beat intervals
381
382 >>> y_out = librosa.effects.remix(y, intervals[::-1])
383 """
384
385 y_out = []
386
387 if align_zeros:
388 y_mono = core.to_mono(y)
389 zeros = np.nonzero(core.zero_crossings(y_mono))[-1]
390 # Force end-of-signal onto zeros
391 zeros = np.append(zeros, [len(y_mono)])
392
393 for interval in intervals:
394
395 if align_zeros:
396 interval = zeros[util.match_events(interval, zeros)]
397
398 y_out.append(y[..., interval[0] : interval[1]])
399
400 return np.concatenate(y_out, axis=-1)
401
402
403 def _signal_to_frame_nonsilent(
404 y, frame_length=2048, hop_length=512, top_db=60, ref=np.max, aggregate=np.max
405 ):
406 """Frame-wise non-silent indicator for audio input.
407
408 This is a helper function for `trim` and `split`.
409
410 Parameters
411 ----------
412 y : np.ndarray
413 Audio signal, mono or stereo
414
415 frame_length : int > 0
416 The number of samples per frame
417
418 hop_length : int > 0
419 The number of samples between frames
420
421 top_db : number > 0
422 The threshold (in decibels) below reference to consider as
423 silence
424
425 ref : callable or float
426 The reference amplitude
427
428 aggregate : callable [default: np.max]
429 Function to aggregate dB measurements across channels (if y.ndim > 1)
430
431 Note: for multiple leading axes, this is performed using ``np.apply_over_axes``.
432
433 Returns
434 -------
435 non_silent : np.ndarray, shape=(m,), dtype=bool
436 Indicator of non-silent frames
437 """
438
439 # Compute the MSE for the signal
440 mse = feature.rms(y=y, frame_length=frame_length, hop_length=hop_length)
441
442 # Convert to decibels and slice out the mse channel
443 db = core.amplitude_to_db(mse[..., 0, :], ref=ref, top_db=None)
444
445 # Aggregate everything but the time dimension
446 if db.ndim > 1:
447 db = np.apply_over_axes(aggregate, db, range(db.ndim - 1))
448
449 return db > -top_db
450
451
452 @deprecate_positional_args
453 def trim(
454 y, *, top_db=60, ref=np.max, frame_length=2048, hop_length=512, aggregate=np.max
455 ):
456 """Trim leading and trailing silence from an audio signal.
457
458 Parameters
459 ----------
460 y : np.ndarray, shape=(..., n)
461 Audio signal. Multi-channel is supported.
462 top_db : number > 0
463 The threshold (in decibels) below reference to consider as
464 silence
465 ref : number or callable
466 The reference amplitude. By default, it uses `np.max` and compares
467 to the peak amplitude in the signal.
468 frame_length : int > 0
469 The number of samples per analysis frame
470 hop_length : int > 0
471 The number of samples between analysis frames
472 aggregate : callable [default: np.max]
473 Function to aggregate across channels (if y.ndim > 1)
474
475 Returns
476 -------
477 y_trimmed : np.ndarray, shape=(..., m)
478 The trimmed signal
479 index : np.ndarray, shape=(2,)
480 the interval of ``y`` corresponding to the non-silent region:
481 ``y_trimmed = y[index[0]:index[1]]`` (for mono) or
482 ``y_trimmed = y[:, index[0]:index[1]]`` (for stereo).
483
484 Examples
485 --------
486 >>> # Load some audio
487 >>> y, sr = librosa.load(librosa.ex('choice'))
488 >>> # Trim the beginning and ending silence
489 >>> yt, index = librosa.effects.trim(y)
490 >>> # Print the durations
491 >>> print(librosa.get_duration(y), librosa.get_duration(yt))
492 25.025986394557822 25.007891156462584
493 """
494
495 non_silent = _signal_to_frame_nonsilent(
496 y,
497 frame_length=frame_length,
498 hop_length=hop_length,
499 ref=ref,
500 top_db=top_db,
501 aggregate=aggregate,
502 )
503
504 nonzero = np.flatnonzero(non_silent)
505
506 if nonzero.size > 0:
507 # Compute the start and end positions
508 # End position goes one frame past the last non-zero
509 start = int(core.frames_to_samples(nonzero[0], hop_length=hop_length))
510 end = min(
511 y.shape[-1],
512 int(core.frames_to_samples(nonzero[-1] + 1, hop_length=hop_length)),
513 )
514 else:
515 # The signal only contains zeros
516 start, end = 0, 0
517
518 # Build the mono/stereo index
519 full_index = [slice(None)] * y.ndim
520 full_index[-1] = slice(start, end)
521
522 return y[tuple(full_index)], np.asarray([start, end])
523
524
525 @deprecate_positional_args
526 def split(
527 y, *, top_db=60, ref=np.max, frame_length=2048, hop_length=512, aggregate=np.max
528 ):
529 """Split an audio signal into non-silent intervals.
530
531 Parameters
532 ----------
533 y : np.ndarray, shape=(..., n)
534 An audio signal. Multi-channel is supported.
535 top_db : number > 0
536 The threshold (in decibels) below reference to consider as
537 silence
538 ref : number or callable
539 The reference amplitude. By default, it uses `np.max` and compares
540 to the peak amplitude in the signal.
541 frame_length : int > 0
542 The number of samples per analysis frame
543 hop_length : int > 0
544 The number of samples between analysis frames
545 aggregate : callable [default: np.max]
546 Function to aggregate across channels (if y.ndim > 1)
547
548 Returns
549 -------
550 intervals : np.ndarray, shape=(m, 2)
551 ``intervals[i] == (start_i, end_i)`` are the start and end time
552 (in samples) of non-silent interval ``i``.
553
554 """
555
556 non_silent = _signal_to_frame_nonsilent(
557 y,
558 frame_length=frame_length,
559 hop_length=hop_length,
560 ref=ref,
561 top_db=top_db,
562 aggregate=aggregate,
563 )
564
565 # Interval slicing, adapted from
566 # https://stackoverflow.com/questions/2619413/efficiently-finding-the-interval-with-non-zeros-in-scipy-numpy-in-python
567 # Find points where the sign flips
568 edges = np.flatnonzero(np.diff(non_silent.astype(int)))
569
570 # Pad back the sample lost in the diff
571 edges = [edges + 1]
572
573 # If the first frame had high energy, count it
574 if non_silent[0]:
575 edges.insert(0, [0])
576
577 # Likewise for the last frame
578 if non_silent[-1]:
579 edges.append([len(non_silent)])
580
581 # Convert from frames to samples
582 edges = core.frames_to_samples(np.concatenate(edges), hop_length=hop_length)
583
584 # Clip to the signal duration
585 edges = np.minimum(edges, y.shape[-1])
586
587 # Stack the results back as an ndarray
588 return edges.reshape((-1, 2))
589
590
591 @deprecate_positional_args
592 def preemphasis(y, *, coef=0.97, zi=None, return_zf=False):
593 """Pre-emphasize an audio signal with a first-order auto-regressive filter:
594
595 y[n] -> y[n] - coef * y[n-1]
596
597 Parameters
598 ----------
599 y : np.ndarray [shape=(..., n)]
600 Audio signal. Multi-channel is supported.
601
602 coef : positive number
603 Pre-emphasis coefficient. Typical values of ``coef`` are between 0 and 1.
604
605 At the limit ``coef=0``, the signal is unchanged.
606
607 At ``coef=1``, the result is the first-order difference of the signal.
608
609 The default (0.97) matches the pre-emphasis filter used in the HTK
610 implementation of MFCCs [#]_.
611
612 .. [#] http://htk.eng.cam.ac.uk/
613
614 zi : number
615 Initial filter state. When making successive calls to non-overlapping
616 frames, this can be set to the ``zf`` returned from the previous call.
617 (See example below.)
618
619 By default ``zi`` is initialized as ``2*y[0] - y[1]``.
620
621 return_zf : boolean
622 If ``True``, return the final filter state.
623 If ``False``, only return the pre-emphasized signal.
624
625 Returns
626 -------
627 y_out : np.ndarray
628 pre-emphasized signal
629 zf : number
630 if ``return_zf=True``, the final filter state is also returned
631
632 Examples
633 --------
634 Apply a standard pre-emphasis filter
635
636 >>> import matplotlib.pyplot as plt
637 >>> y, sr = librosa.load(librosa.ex('trumpet'))
638 >>> y_filt = librosa.effects.preemphasis(y)
639 >>> # and plot the results for comparison
640 >>> S_orig = librosa.amplitude_to_db(np.abs(librosa.stft(y)), ref=np.max, top_db=None)
641 >>> S_preemph = librosa.amplitude_to_db(np.abs(librosa.stft(y_filt)), ref=np.max, top_db=None)
642 >>> fig, ax = plt.subplots(nrows=2, sharex=True, sharey=True)
643 >>> librosa.display.specshow(S_orig, y_axis='log', x_axis='time', ax=ax[0])
644 >>> ax[0].set(title='Original signal')
645 >>> ax[0].label_outer()
646 >>> img = librosa.display.specshow(S_preemph, y_axis='log', x_axis='time', ax=ax[1])
647 >>> ax[1].set(title='Pre-emphasized signal')
648 >>> fig.colorbar(img, ax=ax, format="%+2.f dB")
649
650 Apply pre-emphasis in pieces for block streaming. Note that the second block
651 initializes ``zi`` with the final state ``zf`` returned by the first call.
652
653 >>> y_filt_1, zf = librosa.effects.preemphasis(y[:1000], return_zf=True)
654 >>> y_filt_2, zf = librosa.effects.preemphasis(y[1000:], zi=zf, return_zf=True)
655 >>> np.allclose(y_filt, np.concatenate([y_filt_1, y_filt_2]))
656 True
657
658 See Also
659 --------
660 deemphasis
661 """
662 b = np.asarray([1.0, -coef], dtype=y.dtype)
663 a = np.asarray([1.0], dtype=y.dtype)
664
665 if zi is None:
666 # Initialize the filter to implement linear extrapolation
667 zi = 2 * y[..., 0:1] - y[..., 1:2]
668
669 zi = np.atleast_1d(zi)
670
671 y_out, z_f = scipy.signal.lfilter(b, a, y, zi=np.asarray(zi, dtype=y.dtype))
672
673 if return_zf:
674 return y_out, z_f
675
676 return y_out
677
678
679 @deprecate_positional_args
680 def deemphasis(y, *, coef=0.97, zi=None, return_zf=False):
681 """De-emphasize an audio signal with the inverse operation of preemphasis():
682
683 If y = preemphasis(x, coef=coef, zi=zi), the deemphasis is:
684
685 >>> x[i] = y[i] + coef * x[i-1]
686 >>> x = deemphasis(y, coef=coef, zi=zi)
687
688 Parameters
689 ----------
690 y : np.ndarray [shape=(..., n)]
691 Audio signal. Multi-channel is supported.
692
693 coef : positive number
694 Pre-emphasis coefficient. Typical values of ``coef`` are between 0 and 1.
695
696 At the limit ``coef=0``, the signal is unchanged.
697
698 At ``coef=1``, the result is the first-order difference of the signal.
699
700 The default (0.97) matches the pre-emphasis filter used in the HTK
701 implementation of MFCCs [#]_.
702
703 .. [#] http://htk.eng.cam.ac.uk/
704
705 zi : number
706 Initial filter state. If inverting a previous preemphasis(), the same value should be used.
707
708 By default ``zi`` is initialized as
709 ``((2 - coef) * y[0] - y[1]) / (3 - coef)``. This
710 value corresponds to the transformation of the default initialization of ``zi`` in ``preemphasis()``,
711 ``2*x[0] - x[1]``.
712
713 return_zf : boolean
714 If ``True``, return the final filter state.
715 If ``False``, only return the pre-emphasized signal.
716
717 Returns
718 -------
719 y_out : np.ndarray
720 de-emphasized signal
721 zf : number
722 if ``return_zf=True``, the final filter state is also returned
723
724 Examples
725 --------
726 Apply a standard pre-emphasis filter and invert it with de-emphasis
727
728 >>> y, sr = librosa.load(librosa.ex('trumpet'))
729 >>> y_filt = librosa.effects.preemphasis(y)
730 >>> y_deemph = librosa.effects.deemphasis(y_filt)
731 >>> np.allclose(y, y_deemph)
732 True
733
734 See Also
735 --------
736 preemphasis
737 """
738
739 b = np.array([1.0, -coef], dtype=y.dtype)
740 a = np.array([1.0], dtype=y.dtype)
741
742 if zi is None:
743 # initialize with all zeros
744 zi = np.zeros(list(y.shape[:-1]) + [1], dtype=y.dtype)
745 y_out, zf = scipy.signal.lfilter(a, b, y, zi=zi)
746
747 # factor in the linear extrapolation
748 y_out -= (
749 ((2 - coef) * y[..., 0:1] - y[..., 1:2])
750 / (3 - coef)
751 * (coef ** np.arange(y.shape[-1]))
752 )
753
754 else:
755 zi = np.atleast_1d(zi)
756 y_out, zf = scipy.signal.lfilter(a, b, y, zi=zi.astype(y.dtype))
757
758 if return_zf:
759 return y_out, zf
760 else:
761 return y_out
762
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch in the `git diff` format, fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/librosa/effects.py b/librosa/effects.py
--- a/librosa/effects.py
+++ b/librosa/effects.py
@@ -445,6 +445,9 @@
# Aggregate everything but the time dimension
if db.ndim > 1:
db = np.apply_over_axes(aggregate, db, range(db.ndim - 1))
+ # Squeeze out leading singleton dimensions here
+ # We always want to keep the trailing dimension though
+ db = np.squeeze(db, axis=tuple(range(db.ndim - 1)))
return db > -top_db
| {"golden_diff": "diff --git a/librosa/effects.py b/librosa/effects.py\n--- a/librosa/effects.py\n+++ b/librosa/effects.py\n@@ -445,6 +445,9 @@\n # Aggregate everything but the time dimension\n if db.ndim > 1:\n db = np.apply_over_axes(aggregate, db, range(db.ndim - 1))\n+ # Squeeze out leading singleton dimensions here\n+ # We always want to keep the trailing dimension though\n+ db = np.squeeze(db, axis=tuple(range(db.ndim - 1)))\n \n return db > -top_db\n", "issue": "Numpy array truthiness error during effects.split\nWhen loading a file and trying to run librosa.effects.split() on it, I get this error:\r\n\r\n```\r\n File \"/usr/local/opt/[email protected]/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/librosa/effects.py\", line 574, in split\r\n if non_silent[0]:\r\nValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()\r\n```\r\n\r\n\r\n**To Reproduce**\r\n```\r\nimport librosa\r\nimport numpy as np\r\n\r\ny, sr = librosa.load(\"path/to/file.mp3\", sr=44100, mono=False)\r\nintervals = librosa.effects.split(y, top_db=22, ref=np.max, frame_length=44100, hop_length=44100)\r\n```\r\n\r\n**Expected behavior**\r\nThe split effect returning an array of non-silent intervals.\r\n\r\n**Software versions***\r\n```\r\nINSTALLED VERSIONS\r\n------------------\r\npython: 3.8.12 (default, Oct 13 2021, 06:42:42) \r\n[Clang 13.0.0 (clang-1300.0.29.3)]\r\n\r\nlibrosa: 0.9.1\r\n\r\naudioread: 2.1.9\r\nnumpy: 1.22.4\r\nscipy: 1.8.1\r\nsklearn: 1.1.1\r\njoblib: 1.1.0\r\ndecorator: 5.1.1\r\nsoundfile: 0.10.3\r\nresampy: 0.2.2\r\nnumba: 0.55.2\r\n\r\nnumpydoc: None\r\nsphinx: None\r\nsphinx_rtd_theme: None\r\nsphinxcontrib.versioning: None\r\nsphinx-gallery: None\r\npytest: None\r\npytest-mpl: None\r\npytest-cov: None\r\nmatplotlib: None\r\npresets: None\r\n```\r\n\r\n**Additional context**\r\nThis is a file I haven't touched in a while, so I apologize if it is something that is covered in a changelog somewhere. However, I was unable find any similar issues.\r\n\nNumpy array truthiness error during effects.split\nWhen loading a file and trying to run librosa.effects.split() on it, I get this error:\r\n\r\n```\r\n File \"/usr/local/opt/[email protected]/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/librosa/effects.py\", line 574, in split\r\n if non_silent[0]:\r\nValueError: The truth value of an array with more than one element is ambiguous. 
Use a.any() or a.all()\r\n```\r\n\r\n\r\n**To Reproduce**\r\n```\r\nimport librosa\r\nimport numpy as np\r\n\r\ny, sr = librosa.load(\"path/to/file.mp3\", sr=44100, mono=False)\r\nintervals = librosa.effects.split(y, top_db=22, ref=np.max, frame_length=44100, hop_length=44100)\r\n```\r\n\r\n**Expected behavior**\r\nThe split effect returning an array of non-silent intervals.\r\n\r\n**Software versions***\r\n```\r\nINSTALLED VERSIONS\r\n------------------\r\npython: 3.8.12 (default, Oct 13 2021, 06:42:42) \r\n[Clang 13.0.0 (clang-1300.0.29.3)]\r\n\r\nlibrosa: 0.9.1\r\n\r\naudioread: 2.1.9\r\nnumpy: 1.22.4\r\nscipy: 1.8.1\r\nsklearn: 1.1.1\r\njoblib: 1.1.0\r\ndecorator: 5.1.1\r\nsoundfile: 0.10.3\r\nresampy: 0.2.2\r\nnumba: 0.55.2\r\n\r\nnumpydoc: None\r\nsphinx: None\r\nsphinx_rtd_theme: None\r\nsphinxcontrib.versioning: None\r\nsphinx-gallery: None\r\npytest: None\r\npytest-mpl: None\r\npytest-cov: None\r\nmatplotlib: None\r\npresets: None\r\n```\r\n\r\n**Additional context**\r\nThis is a file I haven't touched in a while, so I apologize if it is something that is covered in a changelog somewhere. However, I was unable find any similar issues.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\"\"\"\nEffects\n=======\n\nHarmonic-percussive source separation\n-------------------------------------\n.. autosummary::\n :toctree: generated/\n\n hpss\n harmonic\n percussive\n\nTime and frequency\n------------------\n.. autosummary::\n :toctree: generated/\n\n time_stretch\n pitch_shift\n\nMiscellaneous\n-------------\n.. autosummary::\n :toctree: generated/\n\n remix\n trim\n split\n preemphasis\n deemphasis\n\"\"\"\n\nimport numpy as np\nimport scipy.signal\n\nfrom . import core\nfrom . import decompose\nfrom . import feature\nfrom . import util\nfrom .util.exceptions import ParameterError\nfrom .util.decorators import deprecate_positional_args\n\n__all__ = [\n \"hpss\",\n \"harmonic\",\n \"percussive\",\n \"time_stretch\",\n \"pitch_shift\",\n \"remix\",\n \"trim\",\n \"split\",\n]\n\n\ndef hpss(y, **kwargs):\n \"\"\"Decompose an audio time series into harmonic and percussive components.\n\n This function automates the STFT->HPSS->ISTFT pipeline, and ensures that\n the output waveforms have equal length to the input waveform ``y``.\n\n Parameters\n ----------\n y : np.ndarray [shape=(..., n)]\n audio time series. Multi-channel is supported.\n **kwargs : additional keyword arguments.\n See `librosa.decompose.hpss` for details.\n\n Returns\n -------\n y_harmonic : np.ndarray [shape=(..., n)]\n audio time series of the harmonic elements\n y_percussive : np.ndarray [shape=(..., n)]\n audio time series of the percussive elements\n\n See Also\n --------\n harmonic : Extract only the harmonic component\n percussive : Extract only the percussive component\n librosa.decompose.hpss : HPSS on spectrograms\n\n Examples\n --------\n >>> # Extract harmonic and percussive components\n >>> y, sr = librosa.load(librosa.ex('choice'))\n >>> y_harmonic, y_percussive = librosa.effects.hpss(y)\n\n >>> # Get a more isolated percussive component by widening its margin\n >>> y_harmonic, y_percussive = librosa.effects.hpss(y, margin=(1.0,5.0))\n\n \"\"\"\n\n # Compute the STFT matrix\n stft = core.stft(y)\n\n # Decompose into harmonic and percussives\n stft_harm, stft_perc = decompose.hpss(stft, **kwargs)\n\n # Invert the STFTs. 
Adjust length to match the input.\n y_harm = core.istft(stft_harm, dtype=y.dtype, length=y.shape[-1])\n y_perc = core.istft(stft_perc, dtype=y.dtype, length=y.shape[-1])\n\n return y_harm, y_perc\n\n\ndef harmonic(y, **kwargs):\n \"\"\"Extract harmonic elements from an audio time-series.\n\n Parameters\n ----------\n y : np.ndarray [shape=(..., n)]\n audio time series. Multi-channel is supported.\n **kwargs : additional keyword arguments.\n See `librosa.decompose.hpss` for details.\n\n Returns\n -------\n y_harmonic : np.ndarray [shape=(..., n)]\n audio time series of just the harmonic portion\n\n See Also\n --------\n hpss : Separate harmonic and percussive components\n percussive : Extract only the percussive component\n librosa.decompose.hpss : HPSS for spectrograms\n\n Examples\n --------\n >>> # Extract harmonic component\n >>> y, sr = librosa.load(librosa.ex('choice'))\n >>> y_harmonic = librosa.effects.harmonic(y)\n\n >>> # Use a margin > 1.0 for greater harmonic separation\n >>> y_harmonic = librosa.effects.harmonic(y, margin=3.0)\n\n \"\"\"\n\n # Compute the STFT matrix\n stft = core.stft(y)\n\n # Remove percussives\n stft_harm = decompose.hpss(stft, **kwargs)[0]\n\n # Invert the STFTs\n y_harm = core.istft(stft_harm, dtype=y.dtype, length=y.shape[-1])\n\n return y_harm\n\n\ndef percussive(y, **kwargs):\n \"\"\"Extract percussive elements from an audio time-series.\n\n Parameters\n ----------\n y : np.ndarray [shape=(..., n)]\n audio time series. Multi-channel is supported.\n **kwargs : additional keyword arguments.\n See `librosa.decompose.hpss` for details.\n\n Returns\n -------\n y_percussive : np.ndarray [shape=(..., n)]\n audio time series of just the percussive portion\n\n See Also\n --------\n hpss : Separate harmonic and percussive components\n harmonic : Extract only the harmonic component\n librosa.decompose.hpss : HPSS for spectrograms\n\n Examples\n --------\n >>> # Extract percussive component\n >>> y, sr = librosa.load(librosa.ex('choice'))\n >>> y_percussive = librosa.effects.percussive(y)\n\n >>> # Use a margin > 1.0 for greater percussive separation\n >>> y_percussive = librosa.effects.percussive(y, margin=3.0)\n\n \"\"\"\n\n # Compute the STFT matrix\n stft = core.stft(y)\n\n # Remove harmonics\n stft_perc = decompose.hpss(stft, **kwargs)[1]\n\n # Invert the STFT\n y_perc = core.istft(stft_perc, dtype=y.dtype, length=y.shape[-1])\n\n return y_perc\n\n\n@deprecate_positional_args\ndef time_stretch(y, *, rate, **kwargs):\n \"\"\"Time-stretch an audio series by a fixed rate.\n\n Parameters\n ----------\n y : np.ndarray [shape=(..., n)]\n audio time series. Multi-channel is supported.\n rate : float > 0 [scalar]\n Stretch factor. 
If ``rate > 1``, then the signal is sped up.\n If ``rate < 1``, then the signal is slowed down.\n **kwargs : additional keyword arguments.\n See `librosa.decompose.stft` for details.\n\n Returns\n -------\n y_stretch : np.ndarray [shape=(..., round(n/rate))]\n audio time series stretched by the specified rate\n\n See Also\n --------\n pitch_shift :\n pitch shifting\n librosa.phase_vocoder :\n spectrogram phase vocoder\n pyrubberband.pyrb.time_stretch :\n high-quality time stretching using RubberBand\n\n Examples\n --------\n Compress to be twice as fast\n\n >>> y, sr = librosa.load(librosa.ex('choice'))\n >>> y_fast = librosa.effects.time_stretch(y, rate=2.0)\n\n Or half the original speed\n\n >>> y_slow = librosa.effects.time_stretch(y, rate=0.5)\n\n \"\"\"\n\n if rate <= 0:\n raise ParameterError(\"rate must be a positive number\")\n\n # Construct the short-term Fourier transform (STFT)\n stft = core.stft(y, **kwargs)\n\n # Stretch by phase vocoding\n stft_stretch = core.phase_vocoder(\n stft,\n rate=rate,\n hop_length=kwargs.get(\"hop_length\", None),\n n_fft=kwargs.get(\"n_fft\", None),\n )\n\n # Predict the length of y_stretch\n len_stretch = int(round(y.shape[-1] / rate))\n\n # Invert the STFT\n y_stretch = core.istft(stft_stretch, dtype=y.dtype, length=len_stretch, **kwargs)\n\n return y_stretch\n\n\n@deprecate_positional_args\ndef pitch_shift(\n y, *, sr, n_steps, bins_per_octave=12, res_type=\"kaiser_best\", **kwargs\n):\n \"\"\"Shift the pitch of a waveform by ``n_steps`` steps.\n\n A step is equal to a semitone if ``bins_per_octave`` is set to 12.\n\n Parameters\n ----------\n y : np.ndarray [shape=(..., n)]\n audio time series. Multi-channel is supported.\n\n sr : number > 0 [scalar]\n audio sampling rate of ``y``\n\n n_steps : float [scalar]\n how many (fractional) steps to shift ``y``\n\n bins_per_octave : float > 0 [scalar]\n how many steps per octave\n\n res_type : string\n Resample type. By default, 'kaiser_best' is used.\n\n See `librosa.resample` for more information.\n\n **kwargs : additional keyword arguments.\n See `librosa.decompose.stft` for details.\n\n Returns\n -------\n y_shift : np.ndarray [shape=(..., n)]\n The pitch-shifted audio time-series\n\n See Also\n --------\n time_stretch :\n time stretching\n librosa.phase_vocoder :\n spectrogram phase vocoder\n pyrubberband.pyrb.pitch_shift :\n high-quality pitch shifting using RubberBand\n\n Examples\n --------\n Shift up by a major third (four steps if ``bins_per_octave`` is 12)\n\n >>> y, sr = librosa.load(librosa.ex('choice'))\n >>> y_third = librosa.effects.pitch_shift(y, sr=sr, n_steps=4)\n\n Shift down by a tritone (six steps if ``bins_per_octave`` is 12)\n\n >>> y_tritone = librosa.effects.pitch_shift(y, sr=sr, n_steps=-6)\n\n Shift up by 3 quarter-tones\n\n >>> y_three_qt = librosa.effects.pitch_shift(y, sr=sr, n_steps=3,\n ... 
bins_per_octave=24)\n \"\"\"\n\n if bins_per_octave < 1 or not np.issubdtype(type(bins_per_octave), np.integer):\n raise ParameterError(\"bins_per_octave must be a positive integer.\")\n\n rate = 2.0 ** (-float(n_steps) / bins_per_octave)\n\n # Stretch in time, then resample\n y_shift = core.resample(\n time_stretch(y, rate=rate, **kwargs),\n orig_sr=float(sr) / rate,\n target_sr=sr,\n res_type=res_type,\n )\n\n # Crop to the same dimension as the input\n return util.fix_length(y_shift, size=y.shape[-1])\n\n\n@deprecate_positional_args\ndef remix(y, intervals, *, align_zeros=True):\n \"\"\"Remix an audio signal by re-ordering time intervals.\n\n Parameters\n ----------\n y : np.ndarray [shape=(..., t)]\n Audio time series. Multi-channel is supported.\n intervals : iterable of tuples (start, end)\n An iterable (list-like or generator) where the ``i``th item\n ``intervals[i]`` indicates the start and end (in samples)\n of a slice of ``y``.\n align_zeros : boolean\n If ``True``, interval boundaries are mapped to the closest\n zero-crossing in ``y``. If ``y`` is stereo, zero-crossings\n are computed after converting to mono.\n\n Returns\n -------\n y_remix : np.ndarray [shape=(..., d)]\n ``y`` remixed in the order specified by ``intervals``\n\n Examples\n --------\n Load in the example track and reverse the beats\n\n >>> y, sr = librosa.load(librosa.ex('choice'))\n\n Compute beats\n\n >>> _, beat_frames = librosa.beat.beat_track(y=y, sr=sr,\n ... hop_length=512)\n\n Convert from frames to sample indices\n\n >>> beat_samples = librosa.frames_to_samples(beat_frames)\n\n Generate intervals from consecutive events\n\n >>> intervals = librosa.util.frame(beat_samples, frame_length=2,\n ... hop_length=1).T\n\n Reverse the beat intervals\n\n >>> y_out = librosa.effects.remix(y, intervals[::-1])\n \"\"\"\n\n y_out = []\n\n if align_zeros:\n y_mono = core.to_mono(y)\n zeros = np.nonzero(core.zero_crossings(y_mono))[-1]\n # Force end-of-signal onto zeros\n zeros = np.append(zeros, [len(y_mono)])\n\n for interval in intervals:\n\n if align_zeros:\n interval = zeros[util.match_events(interval, zeros)]\n\n y_out.append(y[..., interval[0] : interval[1]])\n\n return np.concatenate(y_out, axis=-1)\n\n\ndef _signal_to_frame_nonsilent(\n y, frame_length=2048, hop_length=512, top_db=60, ref=np.max, aggregate=np.max\n):\n \"\"\"Frame-wise non-silent indicator for audio input.\n\n This is a helper function for `trim` and `split`.\n\n Parameters\n ----------\n y : np.ndarray\n Audio signal, mono or stereo\n\n frame_length : int > 0\n The number of samples per frame\n\n hop_length : int > 0\n The number of samples between frames\n\n top_db : number > 0\n The threshold (in decibels) below reference to consider as\n silence\n\n ref : callable or float\n The reference amplitude\n\n aggregate : callable [default: np.max]\n Function to aggregate dB measurements across channels (if y.ndim > 1)\n\n Note: for multiple leading axes, this is performed using ``np.apply_over_axes``.\n\n Returns\n -------\n non_silent : np.ndarray, shape=(m,), dtype=bool\n Indicator of non-silent frames\n \"\"\"\n\n # Compute the MSE for the signal\n mse = feature.rms(y=y, frame_length=frame_length, hop_length=hop_length)\n\n # Convert to decibels and slice out the mse channel\n db = core.amplitude_to_db(mse[..., 0, :], ref=ref, top_db=None)\n\n # Aggregate everything but the time dimension\n if db.ndim > 1:\n db = np.apply_over_axes(aggregate, db, range(db.ndim - 1))\n\n return db > -top_db\n\n\n@deprecate_positional_args\ndef trim(\n y, *, 
top_db=60, ref=np.max, frame_length=2048, hop_length=512, aggregate=np.max\n):\n \"\"\"Trim leading and trailing silence from an audio signal.\n\n Parameters\n ----------\n y : np.ndarray, shape=(..., n)\n Audio signal. Multi-channel is supported.\n top_db : number > 0\n The threshold (in decibels) below reference to consider as\n silence\n ref : number or callable\n The reference amplitude. By default, it uses `np.max` and compares\n to the peak amplitude in the signal.\n frame_length : int > 0\n The number of samples per analysis frame\n hop_length : int > 0\n The number of samples between analysis frames\n aggregate : callable [default: np.max]\n Function to aggregate across channels (if y.ndim > 1)\n\n Returns\n -------\n y_trimmed : np.ndarray, shape=(..., m)\n The trimmed signal\n index : np.ndarray, shape=(2,)\n the interval of ``y`` corresponding to the non-silent region:\n ``y_trimmed = y[index[0]:index[1]]`` (for mono) or\n ``y_trimmed = y[:, index[0]:index[1]]`` (for stereo).\n\n Examples\n --------\n >>> # Load some audio\n >>> y, sr = librosa.load(librosa.ex('choice'))\n >>> # Trim the beginning and ending silence\n >>> yt, index = librosa.effects.trim(y)\n >>> # Print the durations\n >>> print(librosa.get_duration(y), librosa.get_duration(yt))\n 25.025986394557822 25.007891156462584\n \"\"\"\n\n non_silent = _signal_to_frame_nonsilent(\n y,\n frame_length=frame_length,\n hop_length=hop_length,\n ref=ref,\n top_db=top_db,\n aggregate=aggregate,\n )\n\n nonzero = np.flatnonzero(non_silent)\n\n if nonzero.size > 0:\n # Compute the start and end positions\n # End position goes one frame past the last non-zero\n start = int(core.frames_to_samples(nonzero[0], hop_length=hop_length))\n end = min(\n y.shape[-1],\n int(core.frames_to_samples(nonzero[-1] + 1, hop_length=hop_length)),\n )\n else:\n # The signal only contains zeros\n start, end = 0, 0\n\n # Build the mono/stereo index\n full_index = [slice(None)] * y.ndim\n full_index[-1] = slice(start, end)\n\n return y[tuple(full_index)], np.asarray([start, end])\n\n\n@deprecate_positional_args\ndef split(\n y, *, top_db=60, ref=np.max, frame_length=2048, hop_length=512, aggregate=np.max\n):\n \"\"\"Split an audio signal into non-silent intervals.\n\n Parameters\n ----------\n y : np.ndarray, shape=(..., n)\n An audio signal. Multi-channel is supported.\n top_db : number > 0\n The threshold (in decibels) below reference to consider as\n silence\n ref : number or callable\n The reference amplitude. 
By default, it uses `np.max` and compares\n to the peak amplitude in the signal.\n frame_length : int > 0\n The number of samples per analysis frame\n hop_length : int > 0\n The number of samples between analysis frames\n aggregate : callable [default: np.max]\n Function to aggregate across channels (if y.ndim > 1)\n\n Returns\n -------\n intervals : np.ndarray, shape=(m, 2)\n ``intervals[i] == (start_i, end_i)`` are the start and end time\n (in samples) of non-silent interval ``i``.\n\n \"\"\"\n\n non_silent = _signal_to_frame_nonsilent(\n y,\n frame_length=frame_length,\n hop_length=hop_length,\n ref=ref,\n top_db=top_db,\n aggregate=aggregate,\n )\n\n # Interval slicing, adapted from\n # https://stackoverflow.com/questions/2619413/efficiently-finding-the-interval-with-non-zeros-in-scipy-numpy-in-python\n # Find points where the sign flips\n edges = np.flatnonzero(np.diff(non_silent.astype(int)))\n\n # Pad back the sample lost in the diff\n edges = [edges + 1]\n\n # If the first frame had high energy, count it\n if non_silent[0]:\n edges.insert(0, [0])\n\n # Likewise for the last frame\n if non_silent[-1]:\n edges.append([len(non_silent)])\n\n # Convert from frames to samples\n edges = core.frames_to_samples(np.concatenate(edges), hop_length=hop_length)\n\n # Clip to the signal duration\n edges = np.minimum(edges, y.shape[-1])\n\n # Stack the results back as an ndarray\n return edges.reshape((-1, 2))\n\n\n@deprecate_positional_args\ndef preemphasis(y, *, coef=0.97, zi=None, return_zf=False):\n \"\"\"Pre-emphasize an audio signal with a first-order auto-regressive filter:\n\n y[n] -> y[n] - coef * y[n-1]\n\n Parameters\n ----------\n y : np.ndarray [shape=(..., n)]\n Audio signal. Multi-channel is supported.\n\n coef : positive number\n Pre-emphasis coefficient. Typical values of ``coef`` are between 0 and 1.\n\n At the limit ``coef=0``, the signal is unchanged.\n\n At ``coef=1``, the result is the first-order difference of the signal.\n\n The default (0.97) matches the pre-emphasis filter used in the HTK\n implementation of MFCCs [#]_.\n\n .. [#] http://htk.eng.cam.ac.uk/\n\n zi : number\n Initial filter state. When making successive calls to non-overlapping\n frames, this can be set to the ``zf`` returned from the previous call.\n (See example below.)\n\n By default ``zi`` is initialized as ``2*y[0] - y[1]``.\n\n return_zf : boolean\n If ``True``, return the final filter state.\n If ``False``, only return the pre-emphasized signal.\n\n Returns\n -------\n y_out : np.ndarray\n pre-emphasized signal\n zf : number\n if ``return_zf=True``, the final filter state is also returned\n\n Examples\n --------\n Apply a standard pre-emphasis filter\n\n >>> import matplotlib.pyplot as plt\n >>> y, sr = librosa.load(librosa.ex('trumpet'))\n >>> y_filt = librosa.effects.preemphasis(y)\n >>> # and plot the results for comparison\n >>> S_orig = librosa.amplitude_to_db(np.abs(librosa.stft(y)), ref=np.max, top_db=None)\n >>> S_preemph = librosa.amplitude_to_db(np.abs(librosa.stft(y_filt)), ref=np.max, top_db=None)\n >>> fig, ax = plt.subplots(nrows=2, sharex=True, sharey=True)\n >>> librosa.display.specshow(S_orig, y_axis='log', x_axis='time', ax=ax[0])\n >>> ax[0].set(title='Original signal')\n >>> ax[0].label_outer()\n >>> img = librosa.display.specshow(S_preemph, y_axis='log', x_axis='time', ax=ax[1])\n >>> ax[1].set(title='Pre-emphasized signal')\n >>> fig.colorbar(img, ax=ax, format=\"%+2.f dB\")\n\n Apply pre-emphasis in pieces for block streaming. 
Note that the second block\n initializes ``zi`` with the final state ``zf`` returned by the first call.\n\n >>> y_filt_1, zf = librosa.effects.preemphasis(y[:1000], return_zf=True)\n >>> y_filt_2, zf = librosa.effects.preemphasis(y[1000:], zi=zf, return_zf=True)\n >>> np.allclose(y_filt, np.concatenate([y_filt_1, y_filt_2]))\n True\n\n See Also\n --------\n deemphasis\n \"\"\"\n b = np.asarray([1.0, -coef], dtype=y.dtype)\n a = np.asarray([1.0], dtype=y.dtype)\n\n if zi is None:\n # Initialize the filter to implement linear extrapolation\n zi = 2 * y[..., 0:1] - y[..., 1:2]\n\n zi = np.atleast_1d(zi)\n\n y_out, z_f = scipy.signal.lfilter(b, a, y, zi=np.asarray(zi, dtype=y.dtype))\n\n if return_zf:\n return y_out, z_f\n\n return y_out\n\n\n@deprecate_positional_args\ndef deemphasis(y, *, coef=0.97, zi=None, return_zf=False):\n \"\"\"De-emphasize an audio signal with the inverse operation of preemphasis():\n\n If y = preemphasis(x, coef=coef, zi=zi), the deemphasis is:\n\n >>> x[i] = y[i] + coef * x[i-1]\n >>> x = deemphasis(y, coef=coef, zi=zi)\n\n Parameters\n ----------\n y : np.ndarray [shape=(..., n)]\n Audio signal. Multi-channel is supported.\n\n coef : positive number\n Pre-emphasis coefficient. Typical values of ``coef`` are between 0 and 1.\n\n At the limit ``coef=0``, the signal is unchanged.\n\n At ``coef=1``, the result is the first-order difference of the signal.\n\n The default (0.97) matches the pre-emphasis filter used in the HTK\n implementation of MFCCs [#]_.\n\n .. [#] http://htk.eng.cam.ac.uk/\n\n zi : number\n Initial filter state. If inverting a previous preemphasis(), the same value should be used.\n\n By default ``zi`` is initialized as\n ``((2 - coef) * y[0] - y[1]) / (3 - coef)``. This\n value corresponds to the transformation of the default initialization of ``zi`` in ``preemphasis()``,\n ``2*x[0] - x[1]``.\n\n return_zf : boolean\n If ``True``, return the final filter state.\n If ``False``, only return the pre-emphasized signal.\n\n Returns\n -------\n y_out : np.ndarray\n de-emphasized signal\n zf : number\n if ``return_zf=True``, the final filter state is also returned\n\n Examples\n --------\n Apply a standard pre-emphasis filter and invert it with de-emphasis\n\n >>> y, sr = librosa.load(librosa.ex('trumpet'))\n >>> y_filt = librosa.effects.preemphasis(y)\n >>> y_deemph = librosa.effects.deemphasis(y_filt)\n >>> np.allclose(y, y_deemph)\n True\n\n See Also\n --------\n preemphasis\n \"\"\"\n\n b = np.array([1.0, -coef], dtype=y.dtype)\n a = np.array([1.0], dtype=y.dtype)\n\n if zi is None:\n # initialize with all zeros\n zi = np.zeros(list(y.shape[:-1]) + [1], dtype=y.dtype)\n y_out, zf = scipy.signal.lfilter(a, b, y, zi=zi)\n\n # factor in the linear extrapolation\n y_out -= (\n ((2 - coef) * y[..., 0:1] - y[..., 1:2])\n / (3 - coef)\n * (coef ** np.arange(y.shape[-1]))\n )\n\n else:\n zi = np.atleast_1d(zi)\n y_out, zf = scipy.signal.lfilter(a, b, y, zi=zi.astype(y.dtype))\n\n if return_zf:\n return y_out, zf\n else:\n return y_out\n", "path": "librosa/effects.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\"\"\"\nEffects\n=======\n\nHarmonic-percussive source separation\n-------------------------------------\n.. autosummary::\n :toctree: generated/\n\n hpss\n harmonic\n percussive\n\nTime and frequency\n------------------\n.. autosummary::\n :toctree: generated/\n\n time_stretch\n pitch_shift\n\nMiscellaneous\n-------------\n.. 
autosummary::\n :toctree: generated/\n\n remix\n trim\n split\n preemphasis\n deemphasis\n\"\"\"\n\nimport numpy as np\nimport scipy.signal\n\nfrom . import core\nfrom . import decompose\nfrom . import feature\nfrom . import util\nfrom .util.exceptions import ParameterError\nfrom .util.decorators import deprecate_positional_args\n\n__all__ = [\n \"hpss\",\n \"harmonic\",\n \"percussive\",\n \"time_stretch\",\n \"pitch_shift\",\n \"remix\",\n \"trim\",\n \"split\",\n]\n\n\ndef hpss(y, **kwargs):\n \"\"\"Decompose an audio time series into harmonic and percussive components.\n\n This function automates the STFT->HPSS->ISTFT pipeline, and ensures that\n the output waveforms have equal length to the input waveform ``y``.\n\n Parameters\n ----------\n y : np.ndarray [shape=(..., n)]\n audio time series. Multi-channel is supported.\n **kwargs : additional keyword arguments.\n See `librosa.decompose.hpss` for details.\n\n Returns\n -------\n y_harmonic : np.ndarray [shape=(..., n)]\n audio time series of the harmonic elements\n y_percussive : np.ndarray [shape=(..., n)]\n audio time series of the percussive elements\n\n See Also\n --------\n harmonic : Extract only the harmonic component\n percussive : Extract only the percussive component\n librosa.decompose.hpss : HPSS on spectrograms\n\n Examples\n --------\n >>> # Extract harmonic and percussive components\n >>> y, sr = librosa.load(librosa.ex('choice'))\n >>> y_harmonic, y_percussive = librosa.effects.hpss(y)\n\n >>> # Get a more isolated percussive component by widening its margin\n >>> y_harmonic, y_percussive = librosa.effects.hpss(y, margin=(1.0,5.0))\n\n \"\"\"\n\n # Compute the STFT matrix\n stft = core.stft(y)\n\n # Decompose into harmonic and percussives\n stft_harm, stft_perc = decompose.hpss(stft, **kwargs)\n\n # Invert the STFTs. Adjust length to match the input.\n y_harm = core.istft(stft_harm, dtype=y.dtype, length=y.shape[-1])\n y_perc = core.istft(stft_perc, dtype=y.dtype, length=y.shape[-1])\n\n return y_harm, y_perc\n\n\ndef harmonic(y, **kwargs):\n \"\"\"Extract harmonic elements from an audio time-series.\n\n Parameters\n ----------\n y : np.ndarray [shape=(..., n)]\n audio time series. Multi-channel is supported.\n **kwargs : additional keyword arguments.\n See `librosa.decompose.hpss` for details.\n\n Returns\n -------\n y_harmonic : np.ndarray [shape=(..., n)]\n audio time series of just the harmonic portion\n\n See Also\n --------\n hpss : Separate harmonic and percussive components\n percussive : Extract only the percussive component\n librosa.decompose.hpss : HPSS for spectrograms\n\n Examples\n --------\n >>> # Extract harmonic component\n >>> y, sr = librosa.load(librosa.ex('choice'))\n >>> y_harmonic = librosa.effects.harmonic(y)\n\n >>> # Use a margin > 1.0 for greater harmonic separation\n >>> y_harmonic = librosa.effects.harmonic(y, margin=3.0)\n\n \"\"\"\n\n # Compute the STFT matrix\n stft = core.stft(y)\n\n # Remove percussives\n stft_harm = decompose.hpss(stft, **kwargs)[0]\n\n # Invert the STFTs\n y_harm = core.istft(stft_harm, dtype=y.dtype, length=y.shape[-1])\n\n return y_harm\n\n\ndef percussive(y, **kwargs):\n \"\"\"Extract percussive elements from an audio time-series.\n\n Parameters\n ----------\n y : np.ndarray [shape=(..., n)]\n audio time series. 
Multi-channel is supported.\n **kwargs : additional keyword arguments.\n See `librosa.decompose.hpss` for details.\n\n Returns\n -------\n y_percussive : np.ndarray [shape=(..., n)]\n audio time series of just the percussive portion\n\n See Also\n --------\n hpss : Separate harmonic and percussive components\n harmonic : Extract only the harmonic component\n librosa.decompose.hpss : HPSS for spectrograms\n\n Examples\n --------\n >>> # Extract percussive component\n >>> y, sr = librosa.load(librosa.ex('choice'))\n >>> y_percussive = librosa.effects.percussive(y)\n\n >>> # Use a margin > 1.0 for greater percussive separation\n >>> y_percussive = librosa.effects.percussive(y, margin=3.0)\n\n \"\"\"\n\n # Compute the STFT matrix\n stft = core.stft(y)\n\n # Remove harmonics\n stft_perc = decompose.hpss(stft, **kwargs)[1]\n\n # Invert the STFT\n y_perc = core.istft(stft_perc, dtype=y.dtype, length=y.shape[-1])\n\n return y_perc\n\n\n@deprecate_positional_args\ndef time_stretch(y, *, rate, **kwargs):\n \"\"\"Time-stretch an audio series by a fixed rate.\n\n Parameters\n ----------\n y : np.ndarray [shape=(..., n)]\n audio time series. Multi-channel is supported.\n rate : float > 0 [scalar]\n Stretch factor. If ``rate > 1``, then the signal is sped up.\n If ``rate < 1``, then the signal is slowed down.\n **kwargs : additional keyword arguments.\n See `librosa.decompose.stft` for details.\n\n Returns\n -------\n y_stretch : np.ndarray [shape=(..., round(n/rate))]\n audio time series stretched by the specified rate\n\n See Also\n --------\n pitch_shift :\n pitch shifting\n librosa.phase_vocoder :\n spectrogram phase vocoder\n pyrubberband.pyrb.time_stretch :\n high-quality time stretching using RubberBand\n\n Examples\n --------\n Compress to be twice as fast\n\n >>> y, sr = librosa.load(librosa.ex('choice'))\n >>> y_fast = librosa.effects.time_stretch(y, rate=2.0)\n\n Or half the original speed\n\n >>> y_slow = librosa.effects.time_stretch(y, rate=0.5)\n\n \"\"\"\n\n if rate <= 0:\n raise ParameterError(\"rate must be a positive number\")\n\n # Construct the short-term Fourier transform (STFT)\n stft = core.stft(y, **kwargs)\n\n # Stretch by phase vocoding\n stft_stretch = core.phase_vocoder(\n stft,\n rate=rate,\n hop_length=kwargs.get(\"hop_length\", None),\n n_fft=kwargs.get(\"n_fft\", None),\n )\n\n # Predict the length of y_stretch\n len_stretch = int(round(y.shape[-1] / rate))\n\n # Invert the STFT\n y_stretch = core.istft(stft_stretch, dtype=y.dtype, length=len_stretch, **kwargs)\n\n return y_stretch\n\n\n@deprecate_positional_args\ndef pitch_shift(\n y, *, sr, n_steps, bins_per_octave=12, res_type=\"kaiser_best\", **kwargs\n):\n \"\"\"Shift the pitch of a waveform by ``n_steps`` steps.\n\n A step is equal to a semitone if ``bins_per_octave`` is set to 12.\n\n Parameters\n ----------\n y : np.ndarray [shape=(..., n)]\n audio time series. Multi-channel is supported.\n\n sr : number > 0 [scalar]\n audio sampling rate of ``y``\n\n n_steps : float [scalar]\n how many (fractional) steps to shift ``y``\n\n bins_per_octave : float > 0 [scalar]\n how many steps per octave\n\n res_type : string\n Resample type. 
By default, 'kaiser_best' is used.\n\n See `librosa.resample` for more information.\n\n **kwargs : additional keyword arguments.\n See `librosa.decompose.stft` for details.\n\n Returns\n -------\n y_shift : np.ndarray [shape=(..., n)]\n The pitch-shifted audio time-series\n\n See Also\n --------\n time_stretch :\n time stretching\n librosa.phase_vocoder :\n spectrogram phase vocoder\n pyrubberband.pyrb.pitch_shift :\n high-quality pitch shifting using RubberBand\n\n Examples\n --------\n Shift up by a major third (four steps if ``bins_per_octave`` is 12)\n\n >>> y, sr = librosa.load(librosa.ex('choice'))\n >>> y_third = librosa.effects.pitch_shift(y, sr=sr, n_steps=4)\n\n Shift down by a tritone (six steps if ``bins_per_octave`` is 12)\n\n >>> y_tritone = librosa.effects.pitch_shift(y, sr=sr, n_steps=-6)\n\n Shift up by 3 quarter-tones\n\n >>> y_three_qt = librosa.effects.pitch_shift(y, sr=sr, n_steps=3,\n ... bins_per_octave=24)\n \"\"\"\n\n if bins_per_octave < 1 or not np.issubdtype(type(bins_per_octave), np.integer):\n raise ParameterError(\"bins_per_octave must be a positive integer.\")\n\n rate = 2.0 ** (-float(n_steps) / bins_per_octave)\n\n # Stretch in time, then resample\n y_shift = core.resample(\n time_stretch(y, rate=rate, **kwargs),\n orig_sr=float(sr) / rate,\n target_sr=sr,\n res_type=res_type,\n )\n\n # Crop to the same dimension as the input\n return util.fix_length(y_shift, size=y.shape[-1])\n\n\n@deprecate_positional_args\ndef remix(y, intervals, *, align_zeros=True):\n \"\"\"Remix an audio signal by re-ordering time intervals.\n\n Parameters\n ----------\n y : np.ndarray [shape=(..., t)]\n Audio time series. Multi-channel is supported.\n intervals : iterable of tuples (start, end)\n An iterable (list-like or generator) where the ``i``th item\n ``intervals[i]`` indicates the start and end (in samples)\n of a slice of ``y``.\n align_zeros : boolean\n If ``True``, interval boundaries are mapped to the closest\n zero-crossing in ``y``. If ``y`` is stereo, zero-crossings\n are computed after converting to mono.\n\n Returns\n -------\n y_remix : np.ndarray [shape=(..., d)]\n ``y`` remixed in the order specified by ``intervals``\n\n Examples\n --------\n Load in the example track and reverse the beats\n\n >>> y, sr = librosa.load(librosa.ex('choice'))\n\n Compute beats\n\n >>> _, beat_frames = librosa.beat.beat_track(y=y, sr=sr,\n ... hop_length=512)\n\n Convert from frames to sample indices\n\n >>> beat_samples = librosa.frames_to_samples(beat_frames)\n\n Generate intervals from consecutive events\n\n >>> intervals = librosa.util.frame(beat_samples, frame_length=2,\n ... 
hop_length=1).T\n\n Reverse the beat intervals\n\n >>> y_out = librosa.effects.remix(y, intervals[::-1])\n \"\"\"\n\n y_out = []\n\n if align_zeros:\n y_mono = core.to_mono(y)\n zeros = np.nonzero(core.zero_crossings(y_mono))[-1]\n # Force end-of-signal onto zeros\n zeros = np.append(zeros, [len(y_mono)])\n\n for interval in intervals:\n\n if align_zeros:\n interval = zeros[util.match_events(interval, zeros)]\n\n y_out.append(y[..., interval[0] : interval[1]])\n\n return np.concatenate(y_out, axis=-1)\n\n\ndef _signal_to_frame_nonsilent(\n y, frame_length=2048, hop_length=512, top_db=60, ref=np.max, aggregate=np.max\n):\n \"\"\"Frame-wise non-silent indicator for audio input.\n\n This is a helper function for `trim` and `split`.\n\n Parameters\n ----------\n y : np.ndarray\n Audio signal, mono or stereo\n\n frame_length : int > 0\n The number of samples per frame\n\n hop_length : int > 0\n The number of samples between frames\n\n top_db : number > 0\n The threshold (in decibels) below reference to consider as\n silence\n\n ref : callable or float\n The reference amplitude\n\n aggregate : callable [default: np.max]\n Function to aggregate dB measurements across channels (if y.ndim > 1)\n\n Note: for multiple leading axes, this is performed using ``np.apply_over_axes``.\n\n Returns\n -------\n non_silent : np.ndarray, shape=(m,), dtype=bool\n Indicator of non-silent frames\n \"\"\"\n\n # Compute the MSE for the signal\n mse = feature.rms(y=y, frame_length=frame_length, hop_length=hop_length)\n\n # Convert to decibels and slice out the mse channel\n db = core.amplitude_to_db(mse[..., 0, :], ref=ref, top_db=None)\n\n # Aggregate everything but the time dimension\n if db.ndim > 1:\n db = np.apply_over_axes(aggregate, db, range(db.ndim - 1))\n # Squeeze out leading singleton dimensions here\n # We always want to keep the trailing dimension though\n db = np.squeeze(db, axis=tuple(range(db.ndim - 1)))\n\n return db > -top_db\n\n\n@deprecate_positional_args\ndef trim(\n y, *, top_db=60, ref=np.max, frame_length=2048, hop_length=512, aggregate=np.max\n):\n \"\"\"Trim leading and trailing silence from an audio signal.\n\n Parameters\n ----------\n y : np.ndarray, shape=(..., n)\n Audio signal. Multi-channel is supported.\n top_db : number > 0\n The threshold (in decibels) below reference to consider as\n silence\n ref : number or callable\n The reference amplitude. 
By default, it uses `np.max` and compares\n to the peak amplitude in the signal.\n frame_length : int > 0\n The number of samples per analysis frame\n hop_length : int > 0\n The number of samples between analysis frames\n aggregate : callable [default: np.max]\n Function to aggregate across channels (if y.ndim > 1)\n\n Returns\n -------\n y_trimmed : np.ndarray, shape=(..., m)\n The trimmed signal\n index : np.ndarray, shape=(2,)\n the interval of ``y`` corresponding to the non-silent region:\n ``y_trimmed = y[index[0]:index[1]]`` (for mono) or\n ``y_trimmed = y[:, index[0]:index[1]]`` (for stereo).\n\n Examples\n --------\n >>> # Load some audio\n >>> y, sr = librosa.load(librosa.ex('choice'))\n >>> # Trim the beginning and ending silence\n >>> yt, index = librosa.effects.trim(y)\n >>> # Print the durations\n >>> print(librosa.get_duration(y), librosa.get_duration(yt))\n 25.025986394557822 25.007891156462584\n \"\"\"\n\n non_silent = _signal_to_frame_nonsilent(\n y,\n frame_length=frame_length,\n hop_length=hop_length,\n ref=ref,\n top_db=top_db,\n aggregate=aggregate,\n )\n\n nonzero = np.flatnonzero(non_silent)\n\n if nonzero.size > 0:\n # Compute the start and end positions\n # End position goes one frame past the last non-zero\n start = int(core.frames_to_samples(nonzero[0], hop_length=hop_length))\n end = min(\n y.shape[-1],\n int(core.frames_to_samples(nonzero[-1] + 1, hop_length=hop_length)),\n )\n else:\n # The signal only contains zeros\n start, end = 0, 0\n\n # Build the mono/stereo index\n full_index = [slice(None)] * y.ndim\n full_index[-1] = slice(start, end)\n\n return y[tuple(full_index)], np.asarray([start, end])\n\n\n@deprecate_positional_args\ndef split(\n y, *, top_db=60, ref=np.max, frame_length=2048, hop_length=512, aggregate=np.max\n):\n \"\"\"Split an audio signal into non-silent intervals.\n\n Parameters\n ----------\n y : np.ndarray, shape=(..., n)\n An audio signal. Multi-channel is supported.\n top_db : number > 0\n The threshold (in decibels) below reference to consider as\n silence\n ref : number or callable\n The reference amplitude. 
By default, it uses `np.max` and compares\n to the peak amplitude in the signal.\n frame_length : int > 0\n The number of samples per analysis frame\n hop_length : int > 0\n The number of samples between analysis frames\n aggregate : callable [default: np.max]\n Function to aggregate across channels (if y.ndim > 1)\n\n Returns\n -------\n intervals : np.ndarray, shape=(m, 2)\n ``intervals[i] == (start_i, end_i)`` are the start and end time\n (in samples) of non-silent interval ``i``.\n\n \"\"\"\n\n non_silent = _signal_to_frame_nonsilent(\n y,\n frame_length=frame_length,\n hop_length=hop_length,\n ref=ref,\n top_db=top_db,\n aggregate=aggregate,\n )\n\n # Interval slicing, adapted from\n # https://stackoverflow.com/questions/2619413/efficiently-finding-the-interval-with-non-zeros-in-scipy-numpy-in-python\n # Find points where the sign flips\n edges = np.flatnonzero(np.diff(non_silent.astype(int)))\n\n # Pad back the sample lost in the diff\n edges = [edges + 1]\n\n # If the first frame had high energy, count it\n if non_silent[0]:\n edges.insert(0, [0])\n\n # Likewise for the last frame\n if non_silent[-1]:\n edges.append([len(non_silent)])\n\n # Convert from frames to samples\n edges = core.frames_to_samples(np.concatenate(edges), hop_length=hop_length)\n\n # Clip to the signal duration\n edges = np.minimum(edges, y.shape[-1])\n\n # Stack the results back as an ndarray\n return edges.reshape((-1, 2))\n\n\n@deprecate_positional_args\ndef preemphasis(y, *, coef=0.97, zi=None, return_zf=False):\n \"\"\"Pre-emphasize an audio signal with a first-order auto-regressive filter:\n\n y[n] -> y[n] - coef * y[n-1]\n\n Parameters\n ----------\n y : np.ndarray [shape=(..., n)]\n Audio signal. Multi-channel is supported.\n\n coef : positive number\n Pre-emphasis coefficient. Typical values of ``coef`` are between 0 and 1.\n\n At the limit ``coef=0``, the signal is unchanged.\n\n At ``coef=1``, the result is the first-order difference of the signal.\n\n The default (0.97) matches the pre-emphasis filter used in the HTK\n implementation of MFCCs [#]_.\n\n .. [#] http://htk.eng.cam.ac.uk/\n\n zi : number\n Initial filter state. When making successive calls to non-overlapping\n frames, this can be set to the ``zf`` returned from the previous call.\n (See example below.)\n\n By default ``zi`` is initialized as ``2*y[0] - y[1]``.\n\n return_zf : boolean\n If ``True``, return the final filter state.\n If ``False``, only return the pre-emphasized signal.\n\n Returns\n -------\n y_out : np.ndarray\n pre-emphasized signal\n zf : number\n if ``return_zf=True``, the final filter state is also returned\n\n Examples\n --------\n Apply a standard pre-emphasis filter\n\n >>> import matplotlib.pyplot as plt\n >>> y, sr = librosa.load(librosa.ex('trumpet'))\n >>> y_filt = librosa.effects.preemphasis(y)\n >>> # and plot the results for comparison\n >>> S_orig = librosa.amplitude_to_db(np.abs(librosa.stft(y)), ref=np.max, top_db=None)\n >>> S_preemph = librosa.amplitude_to_db(np.abs(librosa.stft(y_filt)), ref=np.max, top_db=None)\n >>> fig, ax = plt.subplots(nrows=2, sharex=True, sharey=True)\n >>> librosa.display.specshow(S_orig, y_axis='log', x_axis='time', ax=ax[0])\n >>> ax[0].set(title='Original signal')\n >>> ax[0].label_outer()\n >>> img = librosa.display.specshow(S_preemph, y_axis='log', x_axis='time', ax=ax[1])\n >>> ax[1].set(title='Pre-emphasized signal')\n >>> fig.colorbar(img, ax=ax, format=\"%+2.f dB\")\n\n Apply pre-emphasis in pieces for block streaming. 
Note that the second block\n initializes ``zi`` with the final state ``zf`` returned by the first call.\n\n >>> y_filt_1, zf = librosa.effects.preemphasis(y[:1000], return_zf=True)\n >>> y_filt_2, zf = librosa.effects.preemphasis(y[1000:], zi=zf, return_zf=True)\n >>> np.allclose(y_filt, np.concatenate([y_filt_1, y_filt_2]))\n True\n\n See Also\n --------\n deemphasis\n \"\"\"\n b = np.asarray([1.0, -coef], dtype=y.dtype)\n a = np.asarray([1.0], dtype=y.dtype)\n\n if zi is None:\n # Initialize the filter to implement linear extrapolation\n zi = 2 * y[..., 0:1] - y[..., 1:2]\n\n zi = np.atleast_1d(zi)\n\n y_out, z_f = scipy.signal.lfilter(b, a, y, zi=np.asarray(zi, dtype=y.dtype))\n\n if return_zf:\n return y_out, z_f\n\n return y_out\n\n\n@deprecate_positional_args\ndef deemphasis(y, *, coef=0.97, zi=None, return_zf=False):\n \"\"\"De-emphasize an audio signal with the inverse operation of preemphasis():\n\n If y = preemphasis(x, coef=coef, zi=zi), the deemphasis is:\n\n >>> x[i] = y[i] + coef * x[i-1]\n >>> x = deemphasis(y, coef=coef, zi=zi)\n\n Parameters\n ----------\n y : np.ndarray [shape=(..., n)]\n Audio signal. Multi-channel is supported.\n\n coef : positive number\n Pre-emphasis coefficient. Typical values of ``coef`` are between 0 and 1.\n\n At the limit ``coef=0``, the signal is unchanged.\n\n At ``coef=1``, the result is the first-order difference of the signal.\n\n The default (0.97) matches the pre-emphasis filter used in the HTK\n implementation of MFCCs [#]_.\n\n .. [#] http://htk.eng.cam.ac.uk/\n\n zi : number\n Initial filter state. If inverting a previous preemphasis(), the same value should be used.\n\n By default ``zi`` is initialized as\n ``((2 - coef) * y[0] - y[1]) / (3 - coef)``. This\n value corresponds to the transformation of the default initialization of ``zi`` in ``preemphasis()``,\n ``2*x[0] - x[1]``.\n\n return_zf : boolean\n If ``True``, return the final filter state.\n If ``False``, only return the pre-emphasized signal.\n\n Returns\n -------\n y_out : np.ndarray\n de-emphasized signal\n zf : number\n if ``return_zf=True``, the final filter state is also returned\n\n Examples\n --------\n Apply a standard pre-emphasis filter and invert it with de-emphasis\n\n >>> y, sr = librosa.load(librosa.ex('trumpet'))\n >>> y_filt = librosa.effects.preemphasis(y)\n >>> y_deemph = librosa.effects.deemphasis(y_filt)\n >>> np.allclose(y, y_deemph)\n True\n\n See Also\n --------\n preemphasis\n \"\"\"\n\n b = np.array([1.0, -coef], dtype=y.dtype)\n a = np.array([1.0], dtype=y.dtype)\n\n if zi is None:\n # initialize with all zeros\n zi = np.zeros(list(y.shape[:-1]) + [1], dtype=y.dtype)\n y_out, zf = scipy.signal.lfilter(a, b, y, zi=zi)\n\n # factor in the linear extrapolation\n y_out -= (\n ((2 - coef) * y[..., 0:1] - y[..., 1:2])\n / (3 - coef)\n * (coef ** np.arange(y.shape[-1]))\n )\n\n else:\n zi = np.atleast_1d(zi)\n y_out, zf = scipy.signal.lfilter(a, b, y, zi=zi.astype(y.dtype))\n\n if return_zf:\n return y_out, zf\n else:\n return y_out\n", "path": "librosa/effects.py"}]} |
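
Note on the golden diff above: it works because aggregating the per-channel dB matrix with ``np.apply_over_axes`` keeps the reduced axes as length-1 dimensions, so ``non_silent[0]`` is still an array and cannot be used in an ``if`` test. The snippet below is a minimal, numpy-only sketch of that shape problem and of the squeeze applied in the fix; the ``db`` values are invented for illustration and no librosa call is involved.

```python
import numpy as np

# Stand-in for the per-channel dB matrix that the helper builds for a
# stereo signal: shape (channels, frames). Values are made up.
db = np.array([[-80.0, -3.0, -2.0, -75.0],
               [-79.0, -4.0, -1.0, -70.0]])

# Aggregating over every axis except time keeps those axes with length 1,
# so the frame indicator is still 2-D ...
agg = np.apply_over_axes(np.max, db, range(db.ndim - 1))
print(agg.shape)      # (1, 4)

non_silent = agg > -60.0
# ... and non_silent[0] is a length-4 array, so `if non_silent[0]:`
# raises "The truth value of an array ... is ambiguous".

# The patched helper squeezes the leading singleton axes away, leaving a
# 1-D boolean mask whose elements are plain scalars.
mask = np.squeeze(agg, axis=tuple(range(agg.ndim - 1))) > -60.0
print(mask.shape)     # (4,)
print(bool(mask[0]))  # False -> safe to use in an `if` statement
```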
gh_patches_debug_1119 | rasdani/github-patches | git_diff | pyca__cryptography-3995 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Remove docutils.conf - enable smart quotes for builders
Blocked on:
- [x] https://github.com/sphinx-doc/sphinx/pull/4110
- [x] Sphinx release
- [x] https://github.com/sphinx-contrib/spelling/pull/2
- [x] sphinxcontrib-spelling release
--- END ISSUE ---
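(Editor's note, not part of the original issue: the unblocking releases listed above translate into minimum-version pins on the doc tooling, as the fix below shows. The sketch here, which assumes the third-party `packaging` library is available, only illustrates how such pins behave for nearby versions; it is not part of the cryptography codebase.)

```python
# Hedged sketch: check candidate versions against the docstest pins.
from packaging.specifiers import SpecifierSet

pins = {
    "sphinx": SpecifierSet(">=1.6.5"),
    "sphinxcontrib-spelling": SpecifierSet(">=4.0.1"),
}

for name, version in [("sphinx", "1.6.4"), ("sphinx", "1.6.5"),
                      ("sphinxcontrib-spelling", "4.0.1")]:
    ok = version in pins[name]  # SpecifierSet supports `in` with a version string
    print(f"{name} {version}: {'satisfies' if ok else 'does not satisfy'} {pins[name]}")
```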
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2
3 # This file is dual licensed under the terms of the Apache License, Version
4 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
5 # for complete details.
6
7 from __future__ import absolute_import, division, print_function
8
9 import os
10 import platform
11 import subprocess
12 import sys
13 from distutils.command.build import build
14
15 import pkg_resources
16
17 from setuptools import find_packages, setup
18 from setuptools.command.install import install
19 from setuptools.command.test import test
20
21
22 base_dir = os.path.dirname(__file__)
23 src_dir = os.path.join(base_dir, "src")
24
25 # When executing the setup.py, we need to be able to import ourselves, this
26 # means that we need to add the src/ directory to the sys.path.
27 sys.path.insert(0, src_dir)
28
29 about = {}
30 with open(os.path.join(src_dir, "cryptography", "__about__.py")) as f:
31 exec(f.read(), about)
32
33
34 VECTORS_DEPENDENCY = "cryptography_vectors=={0}".format(about['__version__'])
35
36 setup_requirements = []
37
38 if platform.python_implementation() == "PyPy":
39 if sys.pypy_version_info < (5, 3):
40 raise RuntimeError(
41 "cryptography 1.9 is not compatible with PyPy < 5.3. Please "
42 "upgrade PyPy to use this library."
43 )
44 else:
45 setup_requirements.append("cffi>=1.7")
46
47 test_requirements = [
48 "pytest>=3.2.1",
49 "pretend",
50 "iso8601",
51 "pytz",
52 "hypothesis>=1.11.4",
53 ]
54
55
56 # If there's no vectors locally that probably means we are in a tarball and
57 # need to go and get the matching vectors package from PyPi
58 if not os.path.exists(os.path.join(base_dir, "vectors/setup.py")):
59 test_requirements.append(VECTORS_DEPENDENCY)
60
61
62 class PyTest(test):
63 def finalize_options(self):
64 test.finalize_options(self)
65 self.test_args = []
66 self.test_suite = True
67
68 # This means there's a vectors/ folder with the package in here.
69 # cd into it, install the vectors package and then refresh sys.path
70 if VECTORS_DEPENDENCY not in test_requirements:
71 subprocess.check_call(
72 [sys.executable, "setup.py", "install"], cwd="vectors"
73 )
74 pkg_resources.get_distribution("cryptography_vectors").activate()
75
76 def run_tests(self):
77 # Import here because in module scope the eggs are not loaded.
78 import pytest
79 test_args = [os.path.join(base_dir, "tests")]
80 errno = pytest.main(test_args)
81 sys.exit(errno)
82
83
84 def keywords_with_side_effects(argv):
85 """
86 Get a dictionary with setup keywords that (can) have side effects.
87
88 :param argv: A list of strings with command line arguments.
89 :returns: A dictionary with keyword arguments for the ``setup()`` function.
90
91 This setup.py script uses the setuptools 'setup_requires' feature because
92 this is required by the cffi package to compile extension modules. The
93 purpose of ``keywords_with_side_effects()`` is to avoid triggering the cffi
94 build process as a result of setup.py invocations that don't need the cffi
95 module to be built (setup.py serves the dual purpose of exposing package
96 metadata).
97
98 All of the options listed by ``python setup.py --help`` that print
99 information should be recognized here. The commands ``clean``,
100 ``egg_info``, ``register``, ``sdist`` and ``upload`` are also recognized.
101 Any combination of these options and commands is also supported.
102
103 This function was originally based on the `setup.py script`_ of SciPy (see
104 also the discussion in `pip issue #25`_).
105
106 .. _pip issue #25: https://github.com/pypa/pip/issues/25
107 .. _setup.py script: https://github.com/scipy/scipy/blob/master/setup.py
108 """
109 no_setup_requires_arguments = (
110 '-h', '--help',
111 '-n', '--dry-run',
112 '-q', '--quiet',
113 '-v', '--verbose',
114 '-V', '--version',
115 '--author',
116 '--author-email',
117 '--classifiers',
118 '--contact',
119 '--contact-email',
120 '--description',
121 '--egg-base',
122 '--fullname',
123 '--help-commands',
124 '--keywords',
125 '--licence',
126 '--license',
127 '--long-description',
128 '--maintainer',
129 '--maintainer-email',
130 '--name',
131 '--no-user-cfg',
132 '--obsoletes',
133 '--platforms',
134 '--provides',
135 '--requires',
136 '--url',
137 'clean',
138 'egg_info',
139 'register',
140 'sdist',
141 'upload',
142 )
143
144 def is_short_option(argument):
145 """Check whether a command line argument is a short option."""
146 return len(argument) >= 2 and argument[0] == '-' and argument[1] != '-'
147
148 def expand_short_options(argument):
149 """Expand combined short options into canonical short options."""
150 return ('-' + char for char in argument[1:])
151
152 def argument_without_setup_requirements(argv, i):
153 """Check whether a command line argument needs setup requirements."""
154 if argv[i] in no_setup_requires_arguments:
155 # Simple case: An argument which is either an option or a command
156 # which doesn't need setup requirements.
157 return True
158 elif (is_short_option(argv[i]) and
159 all(option in no_setup_requires_arguments
160 for option in expand_short_options(argv[i]))):
161 # Not so simple case: Combined short options none of which need
162 # setup requirements.
163 return True
164 elif argv[i - 1:i] == ['--egg-base']:
165 # Tricky case: --egg-info takes an argument which should not make
166 # us use setup_requires (defeating the purpose of this code).
167 return True
168 else:
169 return False
170
171 if all(argument_without_setup_requirements(argv, i)
172 for i in range(1, len(argv))):
173 return {
174 "cmdclass": {
175 "build": DummyBuild,
176 "install": DummyInstall,
177 "test": DummyPyTest,
178 }
179 }
180 else:
181 cffi_modules = [
182 "src/_cffi_src/build_openssl.py:ffi",
183 "src/_cffi_src/build_constant_time.py:ffi",
184 "src/_cffi_src/build_padding.py:ffi",
185 ]
186
187 return {
188 "setup_requires": setup_requirements,
189 "cmdclass": {
190 "test": PyTest,
191 },
192 "cffi_modules": cffi_modules
193 }
194
195
196 setup_requires_error = ("Requested setup command that needs 'setup_requires' "
197 "while command line arguments implied a side effect "
198 "free command or option.")
199
200
201 class DummyBuild(build):
202 """
203 This class makes it very obvious when ``keywords_with_side_effects()`` has
204 incorrectly interpreted the command line arguments to ``setup.py build`` as
205 one of the 'side effect free' commands or options.
206 """
207
208 def run(self):
209 raise RuntimeError(setup_requires_error)
210
211
212 class DummyInstall(install):
213 """
214 This class makes it very obvious when ``keywords_with_side_effects()`` has
215 incorrectly interpreted the command line arguments to ``setup.py install``
216 as one of the 'side effect free' commands or options.
217 """
218
219 def run(self):
220 raise RuntimeError(setup_requires_error)
221
222
223 class DummyPyTest(test):
224 """
225 This class makes it very obvious when ``keywords_with_side_effects()`` has
226 incorrectly interpreted the command line arguments to ``setup.py test`` as
227 one of the 'side effect free' commands or options.
228 """
229
230 def run_tests(self):
231 raise RuntimeError(setup_requires_error)
232
233
234 with open(os.path.join(base_dir, "README.rst")) as f:
235 long_description = f.read()
236
237
238 setup(
239 name=about["__title__"],
240 version=about["__version__"],
241
242 description=about["__summary__"],
243 long_description=long_description,
244 license=about["__license__"],
245 url=about["__uri__"],
246
247 author=about["__author__"],
248 author_email=about["__email__"],
249
250 classifiers=[
251 "Intended Audience :: Developers",
252 "License :: OSI Approved :: Apache Software License",
253 "License :: OSI Approved :: BSD License",
254 "Natural Language :: English",
255 "Operating System :: MacOS :: MacOS X",
256 "Operating System :: POSIX",
257 "Operating System :: POSIX :: BSD",
258 "Operating System :: POSIX :: Linux",
259 "Operating System :: Microsoft :: Windows",
260 "Programming Language :: Python",
261 "Programming Language :: Python :: 2",
262 "Programming Language :: Python :: 2.7",
263 "Programming Language :: Python :: 3",
264 "Programming Language :: Python :: 3.4",
265 "Programming Language :: Python :: 3.5",
266 "Programming Language :: Python :: 3.6",
267 "Programming Language :: Python :: Implementation :: CPython",
268 "Programming Language :: Python :: Implementation :: PyPy",
269 "Topic :: Security :: Cryptography",
270 ],
271
272 package_dir={"": "src"},
273 packages=find_packages(where="src", exclude=["_cffi_src", "_cffi_src.*"]),
274 include_package_data=True,
275
276 install_requires=[
277 "idna >= 2.1",
278 "asn1crypto >= 0.21.0",
279 "six >= 1.4.1",
280 ],
281 tests_require=test_requirements,
282 extras_require={
283 ":python_version < '3'": ["enum34", "ipaddress"],
284 ":platform_python_implementation != 'PyPy'": ["cffi >= 1.7"],
285
286 "test": test_requirements,
287 "docstest": [
288 "doc8",
289 "pyenchant >= 1.6.11",
290 "readme_renderer >= 16.0",
291 "sphinx",
292 "sphinx_rtd_theme",
293 "sphinxcontrib-spelling",
294 ],
295 "pep8test": [
296 "flake8",
297 "flake8-import-order",
298 "pep8-naming",
299 ],
300 },
301
302 # for cffi
303 zip_safe=False,
304 ext_package="cryptography.hazmat.bindings",
305 **keywords_with_side_effects(sys.argv)
306 )
307
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -288,9 +288,9 @@
"doc8",
"pyenchant >= 1.6.11",
"readme_renderer >= 16.0",
- "sphinx",
+ "sphinx >= 1.6.5",
"sphinx_rtd_theme",
- "sphinxcontrib-spelling",
+ "sphinxcontrib-spelling >= 4.0.1",
],
"pep8test": [
"flake8",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -288,9 +288,9 @@\n \"doc8\",\n \"pyenchant >= 1.6.11\",\n \"readme_renderer >= 16.0\",\n- \"sphinx\",\n+ \"sphinx >= 1.6.5\",\n \"sphinx_rtd_theme\",\n- \"sphinxcontrib-spelling\",\n+ \"sphinxcontrib-spelling >= 4.0.1\",\n ],\n \"pep8test\": [\n \"flake8\",\n", "issue": "Remove docutils.conf - enable smart quotes for builders\nBlocked on:\r\n\r\n- [x] https://github.com/sphinx-doc/sphinx/pull/4110\r\n - [x] Sphinx release\r\n- [x] https://github.com/sphinx-contrib/spelling/pull/2\r\n - [x] sphinxcontrib-spelling release\n", "before_files": [{"content": "#!/usr/bin/env python\n\n# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport os\nimport platform\nimport subprocess\nimport sys\nfrom distutils.command.build import build\n\nimport pkg_resources\n\nfrom setuptools import find_packages, setup\nfrom setuptools.command.install import install\nfrom setuptools.command.test import test\n\n\nbase_dir = os.path.dirname(__file__)\nsrc_dir = os.path.join(base_dir, \"src\")\n\n# When executing the setup.py, we need to be able to import ourselves, this\n# means that we need to add the src/ directory to the sys.path.\nsys.path.insert(0, src_dir)\n\nabout = {}\nwith open(os.path.join(src_dir, \"cryptography\", \"__about__.py\")) as f:\n exec(f.read(), about)\n\n\nVECTORS_DEPENDENCY = \"cryptography_vectors=={0}\".format(about['__version__'])\n\nsetup_requirements = []\n\nif platform.python_implementation() == \"PyPy\":\n if sys.pypy_version_info < (5, 3):\n raise RuntimeError(\n \"cryptography 1.9 is not compatible with PyPy < 5.3. Please \"\n \"upgrade PyPy to use this library.\"\n )\nelse:\n setup_requirements.append(\"cffi>=1.7\")\n\ntest_requirements = [\n \"pytest>=3.2.1\",\n \"pretend\",\n \"iso8601\",\n \"pytz\",\n \"hypothesis>=1.11.4\",\n]\n\n\n# If there's no vectors locally that probably means we are in a tarball and\n# need to go and get the matching vectors package from PyPi\nif not os.path.exists(os.path.join(base_dir, \"vectors/setup.py\")):\n test_requirements.append(VECTORS_DEPENDENCY)\n\n\nclass PyTest(test):\n def finalize_options(self):\n test.finalize_options(self)\n self.test_args = []\n self.test_suite = True\n\n # This means there's a vectors/ folder with the package in here.\n # cd into it, install the vectors package and then refresh sys.path\n if VECTORS_DEPENDENCY not in test_requirements:\n subprocess.check_call(\n [sys.executable, \"setup.py\", \"install\"], cwd=\"vectors\"\n )\n pkg_resources.get_distribution(\"cryptography_vectors\").activate()\n\n def run_tests(self):\n # Import here because in module scope the eggs are not loaded.\n import pytest\n test_args = [os.path.join(base_dir, \"tests\")]\n errno = pytest.main(test_args)\n sys.exit(errno)\n\n\ndef keywords_with_side_effects(argv):\n \"\"\"\n Get a dictionary with setup keywords that (can) have side effects.\n\n :param argv: A list of strings with command line arguments.\n :returns: A dictionary with keyword arguments for the ``setup()`` function.\n\n This setup.py script uses the setuptools 'setup_requires' feature because\n this is required by the cffi package to compile extension modules. 
The\n purpose of ``keywords_with_side_effects()`` is to avoid triggering the cffi\n build process as a result of setup.py invocations that don't need the cffi\n module to be built (setup.py serves the dual purpose of exposing package\n metadata).\n\n All of the options listed by ``python setup.py --help`` that print\n information should be recognized here. The commands ``clean``,\n ``egg_info``, ``register``, ``sdist`` and ``upload`` are also recognized.\n Any combination of these options and commands is also supported.\n\n This function was originally based on the `setup.py script`_ of SciPy (see\n also the discussion in `pip issue #25`_).\n\n .. _pip issue #25: https://github.com/pypa/pip/issues/25\n .. _setup.py script: https://github.com/scipy/scipy/blob/master/setup.py\n \"\"\"\n no_setup_requires_arguments = (\n '-h', '--help',\n '-n', '--dry-run',\n '-q', '--quiet',\n '-v', '--verbose',\n '-V', '--version',\n '--author',\n '--author-email',\n '--classifiers',\n '--contact',\n '--contact-email',\n '--description',\n '--egg-base',\n '--fullname',\n '--help-commands',\n '--keywords',\n '--licence',\n '--license',\n '--long-description',\n '--maintainer',\n '--maintainer-email',\n '--name',\n '--no-user-cfg',\n '--obsoletes',\n '--platforms',\n '--provides',\n '--requires',\n '--url',\n 'clean',\n 'egg_info',\n 'register',\n 'sdist',\n 'upload',\n )\n\n def is_short_option(argument):\n \"\"\"Check whether a command line argument is a short option.\"\"\"\n return len(argument) >= 2 and argument[0] == '-' and argument[1] != '-'\n\n def expand_short_options(argument):\n \"\"\"Expand combined short options into canonical short options.\"\"\"\n return ('-' + char for char in argument[1:])\n\n def argument_without_setup_requirements(argv, i):\n \"\"\"Check whether a command line argument needs setup requirements.\"\"\"\n if argv[i] in no_setup_requires_arguments:\n # Simple case: An argument which is either an option or a command\n # which doesn't need setup requirements.\n return True\n elif (is_short_option(argv[i]) and\n all(option in no_setup_requires_arguments\n for option in expand_short_options(argv[i]))):\n # Not so simple case: Combined short options none of which need\n # setup requirements.\n return True\n elif argv[i - 1:i] == ['--egg-base']:\n # Tricky case: --egg-info takes an argument which should not make\n # us use setup_requires (defeating the purpose of this code).\n return True\n else:\n return False\n\n if all(argument_without_setup_requirements(argv, i)\n for i in range(1, len(argv))):\n return {\n \"cmdclass\": {\n \"build\": DummyBuild,\n \"install\": DummyInstall,\n \"test\": DummyPyTest,\n }\n }\n else:\n cffi_modules = [\n \"src/_cffi_src/build_openssl.py:ffi\",\n \"src/_cffi_src/build_constant_time.py:ffi\",\n \"src/_cffi_src/build_padding.py:ffi\",\n ]\n\n return {\n \"setup_requires\": setup_requirements,\n \"cmdclass\": {\n \"test\": PyTest,\n },\n \"cffi_modules\": cffi_modules\n }\n\n\nsetup_requires_error = (\"Requested setup command that needs 'setup_requires' \"\n \"while command line arguments implied a side effect \"\n \"free command or option.\")\n\n\nclass DummyBuild(build):\n \"\"\"\n This class makes it very obvious when ``keywords_with_side_effects()`` has\n incorrectly interpreted the command line arguments to ``setup.py build`` as\n one of the 'side effect free' commands or options.\n \"\"\"\n\n def run(self):\n raise RuntimeError(setup_requires_error)\n\n\nclass DummyInstall(install):\n \"\"\"\n This class makes it very obvious when 
``keywords_with_side_effects()`` has\n incorrectly interpreted the command line arguments to ``setup.py install``\n as one of the 'side effect free' commands or options.\n \"\"\"\n\n def run(self):\n raise RuntimeError(setup_requires_error)\n\n\nclass DummyPyTest(test):\n \"\"\"\n This class makes it very obvious when ``keywords_with_side_effects()`` has\n incorrectly interpreted the command line arguments to ``setup.py test`` as\n one of the 'side effect free' commands or options.\n \"\"\"\n\n def run_tests(self):\n raise RuntimeError(setup_requires_error)\n\n\nwith open(os.path.join(base_dir, \"README.rst\")) as f:\n long_description = f.read()\n\n\nsetup(\n name=about[\"__title__\"],\n version=about[\"__version__\"],\n\n description=about[\"__summary__\"],\n long_description=long_description,\n license=about[\"__license__\"],\n url=about[\"__uri__\"],\n\n author=about[\"__author__\"],\n author_email=about[\"__email__\"],\n\n classifiers=[\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"License :: OSI Approved :: BSD License\",\n \"Natural Language :: English\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX\",\n \"Operating System :: POSIX :: BSD\",\n \"Operating System :: POSIX :: Linux\",\n \"Operating System :: Microsoft :: Windows\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Topic :: Security :: Cryptography\",\n ],\n\n package_dir={\"\": \"src\"},\n packages=find_packages(where=\"src\", exclude=[\"_cffi_src\", \"_cffi_src.*\"]),\n include_package_data=True,\n\n install_requires=[\n \"idna >= 2.1\",\n \"asn1crypto >= 0.21.0\",\n \"six >= 1.4.1\",\n ],\n tests_require=test_requirements,\n extras_require={\n \":python_version < '3'\": [\"enum34\", \"ipaddress\"],\n \":platform_python_implementation != 'PyPy'\": [\"cffi >= 1.7\"],\n\n \"test\": test_requirements,\n \"docstest\": [\n \"doc8\",\n \"pyenchant >= 1.6.11\",\n \"readme_renderer >= 16.0\",\n \"sphinx\",\n \"sphinx_rtd_theme\",\n \"sphinxcontrib-spelling\",\n ],\n \"pep8test\": [\n \"flake8\",\n \"flake8-import-order\",\n \"pep8-naming\",\n ],\n },\n\n # for cffi\n zip_safe=False,\n ext_package=\"cryptography.hazmat.bindings\",\n **keywords_with_side_effects(sys.argv)\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\n# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. 
See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport os\nimport platform\nimport subprocess\nimport sys\nfrom distutils.command.build import build\n\nimport pkg_resources\n\nfrom setuptools import find_packages, setup\nfrom setuptools.command.install import install\nfrom setuptools.command.test import test\n\n\nbase_dir = os.path.dirname(__file__)\nsrc_dir = os.path.join(base_dir, \"src\")\n\n# When executing the setup.py, we need to be able to import ourselves, this\n# means that we need to add the src/ directory to the sys.path.\nsys.path.insert(0, src_dir)\n\nabout = {}\nwith open(os.path.join(src_dir, \"cryptography\", \"__about__.py\")) as f:\n exec(f.read(), about)\n\n\nVECTORS_DEPENDENCY = \"cryptography_vectors=={0}\".format(about['__version__'])\n\nsetup_requirements = []\n\nif platform.python_implementation() == \"PyPy\":\n if sys.pypy_version_info < (5, 3):\n raise RuntimeError(\n \"cryptography 1.9 is not compatible with PyPy < 5.3. Please \"\n \"upgrade PyPy to use this library.\"\n )\nelse:\n setup_requirements.append(\"cffi>=1.7\")\n\ntest_requirements = [\n \"pytest>=3.2.1\",\n \"pretend\",\n \"iso8601\",\n \"pytz\",\n \"hypothesis>=1.11.4\",\n]\n\n\n# If there's no vectors locally that probably means we are in a tarball and\n# need to go and get the matching vectors package from PyPi\nif not os.path.exists(os.path.join(base_dir, \"vectors/setup.py\")):\n test_requirements.append(VECTORS_DEPENDENCY)\n\n\nclass PyTest(test):\n def finalize_options(self):\n test.finalize_options(self)\n self.test_args = []\n self.test_suite = True\n\n # This means there's a vectors/ folder with the package in here.\n # cd into it, install the vectors package and then refresh sys.path\n if VECTORS_DEPENDENCY not in test_requirements:\n subprocess.check_call(\n [sys.executable, \"setup.py\", \"install\"], cwd=\"vectors\"\n )\n pkg_resources.get_distribution(\"cryptography_vectors\").activate()\n\n def run_tests(self):\n # Import here because in module scope the eggs are not loaded.\n import pytest\n test_args = [os.path.join(base_dir, \"tests\")]\n errno = pytest.main(test_args)\n sys.exit(errno)\n\n\ndef keywords_with_side_effects(argv):\n \"\"\"\n Get a dictionary with setup keywords that (can) have side effects.\n\n :param argv: A list of strings with command line arguments.\n :returns: A dictionary with keyword arguments for the ``setup()`` function.\n\n This setup.py script uses the setuptools 'setup_requires' feature because\n this is required by the cffi package to compile extension modules. The\n purpose of ``keywords_with_side_effects()`` is to avoid triggering the cffi\n build process as a result of setup.py invocations that don't need the cffi\n module to be built (setup.py serves the dual purpose of exposing package\n metadata).\n\n All of the options listed by ``python setup.py --help`` that print\n information should be recognized here. The commands ``clean``,\n ``egg_info``, ``register``, ``sdist`` and ``upload`` are also recognized.\n Any combination of these options and commands is also supported.\n\n This function was originally based on the `setup.py script`_ of SciPy (see\n also the discussion in `pip issue #25`_).\n\n .. _pip issue #25: https://github.com/pypa/pip/issues/25\n .. 
_setup.py script: https://github.com/scipy/scipy/blob/master/setup.py\n \"\"\"\n no_setup_requires_arguments = (\n '-h', '--help',\n '-n', '--dry-run',\n '-q', '--quiet',\n '-v', '--verbose',\n '-V', '--version',\n '--author',\n '--author-email',\n '--classifiers',\n '--contact',\n '--contact-email',\n '--description',\n '--egg-base',\n '--fullname',\n '--help-commands',\n '--keywords',\n '--licence',\n '--license',\n '--long-description',\n '--maintainer',\n '--maintainer-email',\n '--name',\n '--no-user-cfg',\n '--obsoletes',\n '--platforms',\n '--provides',\n '--requires',\n '--url',\n 'clean',\n 'egg_info',\n 'register',\n 'sdist',\n 'upload',\n )\n\n def is_short_option(argument):\n \"\"\"Check whether a command line argument is a short option.\"\"\"\n return len(argument) >= 2 and argument[0] == '-' and argument[1] != '-'\n\n def expand_short_options(argument):\n \"\"\"Expand combined short options into canonical short options.\"\"\"\n return ('-' + char for char in argument[1:])\n\n def argument_without_setup_requirements(argv, i):\n \"\"\"Check whether a command line argument needs setup requirements.\"\"\"\n if argv[i] in no_setup_requires_arguments:\n # Simple case: An argument which is either an option or a command\n # which doesn't need setup requirements.\n return True\n elif (is_short_option(argv[i]) and\n all(option in no_setup_requires_arguments\n for option in expand_short_options(argv[i]))):\n # Not so simple case: Combined short options none of which need\n # setup requirements.\n return True\n elif argv[i - 1:i] == ['--egg-base']:\n # Tricky case: --egg-info takes an argument which should not make\n # us use setup_requires (defeating the purpose of this code).\n return True\n else:\n return False\n\n if all(argument_without_setup_requirements(argv, i)\n for i in range(1, len(argv))):\n return {\n \"cmdclass\": {\n \"build\": DummyBuild,\n \"install\": DummyInstall,\n \"test\": DummyPyTest,\n }\n }\n else:\n cffi_modules = [\n \"src/_cffi_src/build_openssl.py:ffi\",\n \"src/_cffi_src/build_constant_time.py:ffi\",\n \"src/_cffi_src/build_padding.py:ffi\",\n ]\n\n return {\n \"setup_requires\": setup_requirements,\n \"cmdclass\": {\n \"test\": PyTest,\n },\n \"cffi_modules\": cffi_modules\n }\n\n\nsetup_requires_error = (\"Requested setup command that needs 'setup_requires' \"\n \"while command line arguments implied a side effect \"\n \"free command or option.\")\n\n\nclass DummyBuild(build):\n \"\"\"\n This class makes it very obvious when ``keywords_with_side_effects()`` has\n incorrectly interpreted the command line arguments to ``setup.py build`` as\n one of the 'side effect free' commands or options.\n \"\"\"\n\n def run(self):\n raise RuntimeError(setup_requires_error)\n\n\nclass DummyInstall(install):\n \"\"\"\n This class makes it very obvious when ``keywords_with_side_effects()`` has\n incorrectly interpreted the command line arguments to ``setup.py install``\n as one of the 'side effect free' commands or options.\n \"\"\"\n\n def run(self):\n raise RuntimeError(setup_requires_error)\n\n\nclass DummyPyTest(test):\n \"\"\"\n This class makes it very obvious when ``keywords_with_side_effects()`` has\n incorrectly interpreted the command line arguments to ``setup.py test`` as\n one of the 'side effect free' commands or options.\n \"\"\"\n\n def run_tests(self):\n raise RuntimeError(setup_requires_error)\n\n\nwith open(os.path.join(base_dir, \"README.rst\")) as f:\n long_description = f.read()\n\n\nsetup(\n name=about[\"__title__\"],\n 
version=about[\"__version__\"],\n\n description=about[\"__summary__\"],\n long_description=long_description,\n license=about[\"__license__\"],\n url=about[\"__uri__\"],\n\n author=about[\"__author__\"],\n author_email=about[\"__email__\"],\n\n classifiers=[\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"License :: OSI Approved :: BSD License\",\n \"Natural Language :: English\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX\",\n \"Operating System :: POSIX :: BSD\",\n \"Operating System :: POSIX :: Linux\",\n \"Operating System :: Microsoft :: Windows\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Topic :: Security :: Cryptography\",\n ],\n\n package_dir={\"\": \"src\"},\n packages=find_packages(where=\"src\", exclude=[\"_cffi_src\", \"_cffi_src.*\"]),\n include_package_data=True,\n\n install_requires=[\n \"idna >= 2.1\",\n \"asn1crypto >= 0.21.0\",\n \"six >= 1.4.1\",\n ],\n tests_require=test_requirements,\n extras_require={\n \":python_version < '3'\": [\"enum34\", \"ipaddress\"],\n \":platform_python_implementation != 'PyPy'\": [\"cffi >= 1.7\"],\n\n \"test\": test_requirements,\n \"docstest\": [\n \"doc8\",\n \"pyenchant >= 1.6.11\",\n \"readme_renderer >= 16.0\",\n \"sphinx >= 1.6.5\",\n \"sphinx_rtd_theme\",\n \"sphinxcontrib-spelling >= 4.0.1\",\n ],\n \"pep8test\": [\n \"flake8\",\n \"flake8-import-order\",\n \"pep8-naming\",\n ],\n },\n\n # for cffi\n zip_safe=False,\n ext_package=\"cryptography.hazmat.bindings\",\n **keywords_with_side_effects(sys.argv)\n)\n", "path": "setup.py"}]} |
gh_patches_debug_1120 | rasdani/github-patches | git_diff | vaexio__vaex-2039 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG-REPORT] Cannot pass dictionaries into registered functions
Thank you for reaching out and helping us improve Vaex!
Before you submit a new Issue, please read through the [documentation](https://docs.vaex.io/en/latest/). Also, make sure you search through the Open and Closed Issues - your problem may already be discussed or addressed.
**Description**
If you pass a dictionary into a registered function, you get a syntax error, while the same works for a list.
```
import vaex
import numpy as np
df = vaex.example()
df = df[df["id"] < 10][:100]
labels = {0:"now", 1: "happy", 2: "sad", 3:"arg", 4:"foo", 5:"bar", 6:"something", 7:"is", 8:"happening", 9:"here"}
@vaex.register_function()
def index_to_label(arr, mapping):
return np.array([mapping[i] for i in arr])
df.id.index_to_label(labels)
```
throws
```
Expression = index_to_label(id, {0: 'now', 1: 'happy', 2: 'sad', 3: 'a...
Length: 100 dtype: string (expression)
--------------------------------------
Error evaluating: SyntaxError('invalid syntax', ('<unknown>', 1, 30, "index_to_label(id, {0: 'now' 1: 'happy' 2: 'sad' 3: 'arg' 4: 'foo' 5: 'bar' 6: 'something' 7: 'is' 8: 'happening' 9: 'here'})\n"))
```
while the same approach works when the mapping is passed as a list
```
import vaex
import numpy as np
df = vaex.example()
df = df[df["id"] < 10][:100]
labels = {0:"now", 1: "happy", 2: "sad", 3:"arg", 4:"foo", 5:"bar", 6:"something", 7:"is", 8:"happening", 9:"here"}
labels_list = [labels[i] for i in labels]
@vaex.register_function()
def index_to_label(arr, mapping):
return np.array([labels[i] for i in arr])
df.id.index_to_label(labels_list)
```
I also tried to be explicit, as shown in the docs
```
import vaex
import numpy as np
import json
df = vaex.example()
df = df[df["id"] < 10][:100]
labels = {0:"now", 1: "happy", 2: "sad", 3:"arg", 4:"foo", 5:"bar", 6:"something", 7:"is", 8:"happening", 9:"here"}
@vaex.register_function(on_expression=False)
def index_to_label(mapping, arr):
return np.array([mapping.get(i) for i in arr])
df.func.index_to_label(labels, df.id)
```
but that also failed
**Software information**
 - Vaex version (`import vaex; vaex.__version__`): 4.9.1
 - Vaex was installed via: pip / conda-forge / from source pip
 - OS: Mac/Linux
--- END ISSUE ---
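(Editor's note, not part of the original report: the `SyntaxError` above is consistent with the expression printer rendering dict literals without commas between the `key: value` pairs. The standalone sketch below only mimics that rendering with plain strings and `ast.parse`; the names `broken` and `fixed` are illustrative and do not come from the vaex codebase.)

```python
import ast

labels = {0: "now", 1: "happy", 2: "sad"}
parts = [f"{k!r}: {v!r}" for k, v in labels.items()]

# Joining the parts with a bare space reproduces the reported failure mode...
broken = "index_to_label(id, {" + " ".join(parts) + "})"
# ...while a comma-separated join yields a parseable expression string.
fixed = "index_to_label(id, {" + ", ".join(parts) + "})"

for label, expr in [("space-joined", broken), ("comma-joined", fixed)]:
    try:
        ast.parse(expr)
        print(f"{label}: parses -> {expr}")
    except SyntaxError:
        print(f"{label}: SyntaxError -> {expr}")
```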
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `packages/vaex-core/vaex/expresso.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 from __future__ import division
3 import logging
4 import collections
5 import ast
6 import _ast
7 import string
8 import numpy as np
9 import math
10 import sys
11 import six
12 import copy
13 import difflib
14
15
16 if hasattr(_ast, 'Num'):
17 ast_Num = _ast.Num
18 ast_Str = _ast.Str
19 else: # Python3.8
20 ast_Num = _ast.Constant
21 ast_Str = _ast.Constant
22
23 if hasattr(_ast, 'NameConstant'):
24 ast_Constant = _ast.NameConstant
25 else:
26 ast_Constant = _ast.Constant
27
28
29 logger = logging.getLogger("expr")
30 logger.setLevel(logging.ERROR)
31
32
33 valid_binary_operators = [_ast.Add, _ast.Sub, _ast.Mult, ast.MatMult, _ast.Pow,
34 _ast.Div, _ast.FloorDiv, _ast.BitAnd, _ast.BitOr, _ast.BitXor, _ast.Mod,
35 _ast.RShift, _ast.LShift
36 ]
37 valid_compare_operators = [_ast.Lt, _ast.LtE,
38 _ast.Gt, _ast.GtE, _ast.Eq, _ast.NotEq, _ast.IsNot, _ast.Is, _ast.In]
39 valid_unary_operators = [_ast.USub, _ast.UAdd, _ast.Invert]
40 valid_id_characters = string.ascii_letters + string.digits + "_"
41 valid_functions = "sin cos".split()
42
43 opmap = {
44 _ast.Add: '+',
45 _ast.Sub: '-',
46 _ast.Mult: '*',
47 _ast.Pow: '**',
48 _ast.Div: '/',
49 _ast.FloorDiv: '//',
50 _ast.BitAnd: '&',
51 _ast.BitOr: '|',
52 _ast.BitXor: '^',
53 _ast.Mod: '%',
54 }
55
56
57 def math_parse(expression, macros=[]):
58 # TODO: validate macros?
59 node = ast.parse(expression)
60 if len(node.body) != 1:
61 raise ValueError("expected one expression, got %r" % len(node.body))
62 expr = node.body[0]
63 if not isinstance(expr, _ast.Expr):
64 raise ValueError("expected an expression got a %r" % type(node.body))
65
66 validate_expression(expr.value)
67 return MathExpression(expression, macros)
68
69
70 last_func = None
71
72
73 def validate_expression(expr, variable_set, function_set=[], names=None):
74 global last_func
75 names = names if names is not None else []
76 if isinstance(expr, six.string_types):
77 node = ast.parse(expr)
78 if len(node.body) != 1:
79 raise ValueError("expected one expression, got %r" %
80 len(node.body))
81 first_expr = node.body[0]
82 if not isinstance(first_expr, _ast.Expr):
83 raise ValueError("expected an expression got a %r" %
84 type(node.body))
85 validate_expression(first_expr.value, variable_set,
86 function_set, names)
87 elif isinstance(expr, _ast.BinOp):
88 if expr.op.__class__ in valid_binary_operators:
89 validate_expression(expr.right, variable_set, function_set, names)
90 validate_expression(expr.left, variable_set, function_set, names)
91 else:
92 raise ValueError("Binary operator not allowed: %r" % expr.op)
93 elif isinstance(expr, _ast.UnaryOp):
94 if expr.op.__class__ in valid_unary_operators:
95 validate_expression(expr.operand, variable_set,
96 function_set, names)
97 else:
98 raise ValueError("Unary operator not allowed: %r" % expr.op)
99 elif isinstance(expr, _ast.Name):
100 if expr.id not in variable_set:
101 matches = difflib.get_close_matches(expr.id, list(variable_set))
102 msg = "Column or variable %r does not exist." % expr.id
103 if matches:
104 msg += ' Did you mean: ' + " or ".join(map(repr, matches))
105
106 raise NameError(msg)
107 names.append(expr.id)
108 elif isinstance(expr, ast_Num):
109 pass # numbers are fine
110 elif isinstance(expr, ast_Str):
111 pass # as well as strings
112 elif isinstance(expr, _ast.Call):
113 validate_func(expr.func, function_set)
114 last_func = expr
115 for arg in expr.args:
116 validate_expression(arg, variable_set, function_set, names)
117 for arg in expr.keywords:
118 validate_expression(arg, variable_set, function_set, names)
119 elif isinstance(expr, _ast.Compare):
120 validate_expression(expr.left, variable_set, function_set, names)
121 for op in expr.ops:
122 if op.__class__ not in valid_compare_operators:
123 raise ValueError("Compare operator not allowed: %r" % op)
124 for comparator in expr.comparators:
125 validate_expression(comparator, variable_set, function_set, names)
126 elif isinstance(expr, _ast.keyword):
127 validate_expression(expr.value, variable_set, function_set, names)
128 elif isinstance(expr, ast_Constant):
129 pass # like True and False
130 elif isinstance(expr, _ast.List):
131 for el in expr.elts:
132 validate_expression(el, variable_set, function_set, names)
133 elif isinstance(expr, _ast.Dict):
134 for key in expr.keys:
135 validate_expression(key, variable_set, function_set, names)
136 for value in expr.values:
137 validate_expression(value, variable_set, function_set, names)
138 elif isinstance(expr, _ast.Subscript):
139 validate_expression(expr.value, variable_set, function_set, names)
140 if isinstance(expr.slice.value, ast_Num):
141 pass # numbers are fine
142 elif isinstance(expr.slice.value, str) or isinstance(expr.slice.value, _ast.Str):
143 pass # and strings (from py3.9, value is str)
144 else:
145 raise ValueError(
146 "Only subscript/slices with numbers allowed, not: %r" % expr.slice.value)
147 else:
148 last_func = expr
149 raise ValueError("Unknown expression type: %r" % type(expr))
150
151
152 class Validator(ast.NodeVisitor):
153
154 def generic_visit(self, node):
155 raise ValueError('unexpected node: {}', ast.dump(node))
156
157 def visit_BinOp(self, expr):
158 if expr.op.__class__ in valid_binary_operators:
159 validate_expression(expr.right, variable_set, function_set, names)
160 validate_expression(expr.left, variable_set, function_set, names)
161 else:
162 raise ValueError("Binary operator not allowed: %r" % expr.op)
163
164
165 def mul(left, right):
166 return ast.BinOp(left=left, right=right, op=ast.Mult())
167
168
169 def div(left, right):
170 return ast.BinOp(left=left, right=right, op=ast.Div())
171
172
173 def add(left, right):
174 return ast.BinOp(left=left, right=right, op=ast.Add())
175
176
177 def sub(left, right):
178 return ast.BinOp(left=left, right=right, op=ast.Sub())
179
180
181 def pow(left, right):
182 return ast.BinOp(left=left, right=right, op=ast.Pow())
183
184
185 def sqr(node):
186 return ast.BinOp(left=node, right=num(2), op=ast.Pow())
187
188 def sqrt(node):
189 return call('sqrt', [node])
190
191
192 def neg(node):
193 return ast.UnaryOp(op=ast.USub(), operand=node)
194
195
196 def num(n):
197 return ast.Num(n=n)
198
199
200 def call(fname, args):
201 return ast.Call(func=ast.Name(id=fname, ctx=ast.Load()), args=args)
202
203
204 def _dlog10(n, args):
205 assert len(args) == 1
206 assert n == 0
207 a = call('log', args=[num(10)])
208 return div(num(1), mul(args[0], a))
209
210
211 def _dsqrt(n, args):
212 assert n == 0
213 assert len(args) == 1
214 a = call('log', args=[num(10)])
215 return mul(num(1/2), pow(args[0], num(-0.5)))
216
217
218 def _dcos(n, args):
219 assert n == 0
220 assert len(args) == 1
221 return neg(call('sin', args=args))
222
223 def _darccos(n, args):
224 assert n == 0
225 assert len(args) == 1
226 a = sqrt(sub(num(1), sqr(args[0])))
227 return neg(div(num(1), a))
228
229 def _darctan2(n, args):
230 # derivative of arctan2(y, x)
231 assert (n >= 0) and (n <= 1)
232 assert len(args) == 2
233 y, x = args
234 if n == 1: # derivative wrt 2nd argument (x)
235 return div(neg(y), add(sqr(x), sqr(y)))
236 if n == 0: # derivative wrt 1st argument (y)
237 return div(x, add(sqr(x), sqr(y)))
238
239 def _dtan(n, args):
240 assert n == 0
241 assert len(args) == 1
242 # a = div(sub(num(1), sqr(args[0])))
243 return div(num(1), sqr(call('cos', args=args)))
244
245 standard_function_derivatives = {}
246 standard_function_derivatives['sin'] = 'cos'
247 standard_function_derivatives['cos'] = _dcos
248 standard_function_derivatives['tan'] = _dtan
249 standard_function_derivatives['log10'] = _dlog10
250 standard_function_derivatives['sqrt'] = _dsqrt
251 standard_function_derivatives['arctan2'] = _darctan2
252 standard_function_derivatives['arccos'] = _darccos
253
254
255 class Derivative(ast.NodeTransformer):
256 def __init__(self, id, function_derivatives={}):
257 self.id = id
258 self.function_derivatives = dict(standard_function_derivatives)
259 self.function_derivatives.update(function_derivatives)
260
261 def format(self, node):
262 # try:
263 return ExpressionString().visit(node)
264 # return ast.dump(node)
265
266 def visit_Num(self, node):
267 return ast.Num(n=0)
268
269 def visit_Name(self, node):
270 if node.id == self.id:
271 return ast.Num(n=1)
272 else:
273 return ast.Num(n=0)
274
275 def visit_Call(self, node):
276 fname = node.func.id
277 df = self.function_derivatives.get(fname)
278 if df is None:
279 raise ValueError('Derivative of {} is unknown'.format(fname))
280 if not callable(df): # simply a string
281 assert len(node.args) == 1
282 result = mul(call(df, node.args), self.visit(node.args[0]))
283 else:
284 terms = [mul(df(i, node.args), self.visit(arg))
285 for i, arg in enumerate(node.args)]
286 result = terms[0]
287 for term in terms[1:]:
288 result = add(result, term)
289 return result
290
291 def generic_visit(self, node):
292 # it's annoying that the default one modifies in place
293 return super(Derivative, self).generic_visit(copy.deepcopy(node))
294
295 def visit_BinOp(self, node):
296 solution = None
297 if isinstance(node.op, ast.Mult):
298 solution = add(mul(self.visit(node.left), node.right),
299 mul(node.left, self.visit(node.right)))
300 if isinstance(node.op, ast.Div):
301 # (n*at - t*an) / n2
302 n = node.right
303 t = node.left
304 at = self.visit(t)
305 an = self.visit(n)
306 solution = div(sub(mul(n, at), mul(t, an)), pow(n, num(2)))
307 if isinstance(node.op, ast.Add):
308 solution = add(self.visit(node.left), self.visit(node.right))
309 if isinstance(node.op, ast.Sub):
310 solution = sub(self.visit(node.left), self.visit(node.right))
311 if isinstance(node.op, ast.Pow):
312 # following https://en.wikipedia.org/wiki/Differentiation_rules
313 f = node.left
314 df = self.visit(f)
315 g = node.right
316 dg = self.visit(g)
317 # if g is a number, we take a equivalent solution, which gives a nicer result
318 if isinstance(g, ast.Num):
319 solution = mul(g, mul(df, pow(node.left, num(node.right.n-1))))
320 else:
321 a = add(mul(df, div(g, f)), mul(dg, call('log', [f])))
322 solution = mul(pow(f, g), a)
323 if solution is None:
324 raise ValueError('Unknown rule for: {}'.format(self.format(node)))
325 return solution
326
327
328 class ExpressionString(ast.NodeVisitor):
329 def __init__(self, pretty=False):
330 self.pretty = pretty
331 self.indent = 0
332 def visit_UnaryOp(self, node):
333 if isinstance(node.op, ast.USub):
334 if isinstance(node.operand, (ast.Name, ast.Num, _ast.Name)):
335 return "-{}".format(self.visit(node.operand)) # prettier
336 else:
337 return "-({})".format(self.visit(node.operand))
338 elif isinstance(node.op, ast.UAdd):
339 if isinstance(node.operand, (ast.Name, ast.Num, _ast.Name)):
340 return "+{}".format(self.visit(node.operand)) # prettier
341 else:
342 return "+({})".format(self.visit(node.operand))
343 elif isinstance(node.op, ast.Invert):
344 if isinstance(node.operand, (ast.Name, ast.Num, _ast.Name)):
345 return "~{}".format(self.visit(node.operand)) # prettier
346 else:
347 return "~({})".format(self.visit(node.operand))
348 else:
349 raise ValueError('Unary op not supported: {}'.format(node.op))
350
351 def visit_Name(self, node):
352 return node.id
353
354 def visit_Num(self, node):
355 return repr(node.n)
356
357 def visit_keyword(self, node):
358 return "%s=%s" % (node.arg, self.visit(node.value))
359
360 def visit_NameConstant(self, node):
361 return repr(node.value)
362
363 def visit_Dict(self, node):
364 parts = []
365 for key, value in zip(node.keys, node.values):
366 key = self.visit(key)
367 value = self.visit(value)
368 parts.append(f'{key}: {value}')
369 return '{' + ' '.join(parts) + '}'
370
371 def visit_Call(self, node):
372 args = [self.visit(k) for k in node.args]
373 keywords = []
374 if hasattr(node, 'keywords'):
375 keywords = [self.visit(k) for k in node.keywords]
376 return "{}({})".format(node.func.id, ", ".join(args + keywords))
377
378 def visit_Str(self, node):
379 return repr(node.s)
380
381 def visit_List(self, node):
382 return "[{}]".format(", ".join([self.visit(k) for k in node.elts]))
383
384 def pow(self, left, right):
385 return "({left} ** {right})".format(left=left, right=right)
386
387 def visit_BinOp(self, node):
388 newline = indent = ""
389 if self.pretty:
390 indent = " " * self.indent
391 newline = "\n"
392 self.indent += 1
393 left = "{}{}{}".format(newline, indent, self.visit(node.left))
394 right = "{}{}{}".format(newline, indent, self.visit(node.right))
395 try:
396 if isinstance(node.op, ast.Mult):
397 return "({left} * {right})".format(left=left, right=right)
398 elif isinstance(node.op, ast.MatMult):
399 return "({left} @ {right})".format(left=left, right=right)
400 elif isinstance(node.op, ast.Div):
401 return "({left} / {right})".format(left=left, right=right)
402 elif isinstance(node.op, ast.Mod):
403 return "({left} % {right})".format(left=left, right=right)
404 elif isinstance(node.op, ast.FloorDiv):
405 return "({left} // {right})".format(left=left, right=right)
406 elif isinstance(node.op, ast.Add):
407 return "({left} + {right})".format(left=left, right=right)
408 elif isinstance(node.op, ast.Sub):
409 return "({left} - {right})".format(left=left, right=right)
410 elif isinstance(node.op, ast.Pow):
411 return self.pow(left, right)
412 elif isinstance(node.op, ast.BitAnd):
413 return "({left} & {right})".format(left=left, right=right)
414 elif isinstance(node.op, ast.BitOr):
415 return "({left} | {right})".format(left=left, right=right)
416 elif isinstance(node.op, ast.BitXor):
417 return "({left} ^ {right})".format(left=left, right=right)
418 elif isinstance(node.op, ast.RShift):
419 return "({left} >> {right})".format(left=left, right=right)
420 elif isinstance(node.op, ast.LShift):
421 return "({left} << {right})".format(left=left, right=right)
422 else:
423 raise ValueError(f'Do not know binary op {node.op}')
424 # return "do_not_understand_expression"
425 finally:
426 self.indent -= 1
427
428 op_translate = {ast.Lt: "<", ast.LtE: "<=", ast.Gt: ">", ast.GtE: ">=", ast.Eq: "==", ast.NotEq: "!=",
429 ast.IsNot: "is not", ast.Is: "is", ast.In: "in"}
430 def visit_Compare(self, node):
431 s = ""
432 left = self.visit(node.left)
433 for op, comp in zip(node.ops, node.comparators):
434 right = self.visit(comp)
435 op = ExpressionString.op_translate[op.__class__]
436 s = "({left} {op} {right})".format(left=left, op=op, right=right)
437 left = right
438 return s
439
440 def visit_Subscript(self, node):
441 p = self.visit(node.value)
442 v = self.visit(node.slice.value)
443 return f'{p}[{v}]'
444
445 # required from py3.9, since in visit_Subscript node can be a string
446 def visit_str(self, node):
447 return repr(node)
448
449 class SimplifyExpression(ast.NodeTransformer):
450
451 def visit_UnaryOp(self, node):
452 node.operand = self.visit(node.operand)
453 if isinstance(node.op, ast.USub):
454 if isinstance(node.operand, ast.Num) and node.operand.n == 0:
455 node = node.operand
456 return node
457
458 def visit_BinOp(self, node):
459 node.left = left = self.visit(node.left)
460 node.right = right = self.visit(node.right)
461 if isinstance(node.op, ast.Mult):
462 if isinstance(right, ast.Num) and right.n == 0:
463 return num(0)
464 elif isinstance(right, ast.Num) and right.n == 1:
465 return left
466 elif isinstance(left, ast.Num) and left.n == 0:
467 return num(0)
468 elif isinstance(left, ast.Num) and left.n == 1:
469 return right
470 if isinstance(node.op, ast.Div):
471 if isinstance(left, ast.Num) and left.n == 0:
472 return num(0)
473 if isinstance(node.op, ast.Add):
474 if isinstance(right, ast.Num) and right.n == 0:
475 return left
476 if isinstance(left, ast.Num) and left.n == 0:
477 return right
478 if isinstance(node.op, ast.Sub):
479 if isinstance(right, ast.Num) and right.n == 0:
480 return left
481 if isinstance(left, ast.Num) and left.n == 0:
482 return neg(right)
483 if isinstance(node.op, ast.Pow):
484 if isinstance(left, ast.Num) and left.n == 0:
485 return num(0) # not ok with negative powers..
486 if isinstance(right, ast.Num) and right.n == 0:
487 # TODO: this means a numpy arrays can become a scalar
488 return num(1)
489 if isinstance(right, ast.Num) and right.n == 1:
490 return left
491 return node
492
493
494 class Translator(ast.NodeTransformer):
495 def __init__(self, translator):
496 self.translator = translator
497
498 def visit_Call(self, node):
499 # we skip visiting node.id
500 node.args = [self.visit(k) for k in node.args]
501 if hasattr(node, 'keywords'):
502 node.keywords = [self.visit(k) for k in node.keywords]
503 return node
504
505 def visit_Name(self, node):
506 expr = self.translator(node.id)
507 if expr:
508 node = parse_expression(expr)
509 node = self.visit(node)
510 return node
511
512
513 class NameCollector(ast.NodeTransformer):
514 def __init__(self):
515 self.names = {}
516
517 def visit_Call(self, node):
518 # we skip visiting node.id
519 self.visit(node.func)
520 node.args = [self.visit(k) for k in node.args]
521 if hasattr(node, 'keywords'):
522 node.keywords = [self.visit(k) for k in node.keywords]
523 return node
524
525 def visit_Name(self, node):
526 if node.id not in self.names:
527 self.names[node.id] = []
528 self.names[node.id].append(node)
529 return node
530
531
532 class SliceCollector(ast.NodeTransformer):
533 def __init__(self):
534 self.slices = collections.defaultdict(list)
535
536 def visit_Subscript(self, node):
537 # py39
538 if node.value.id == 'df' and isinstance(node.slice.value, str):
539 self.slices[node.slice.value].append(node)
540 if node.value.id == 'df' and isinstance(node.slice.value, ast.Str):
541 self.slices[node.slice.value.s].append(node)
542 return node
543
544 class GraphBuiler(ast.NodeVisitor):
545 def __init__(self):
546 self.dependencies = []
547
548 def visit_Call(self, node):
549 fname = node.func.id
550 dependencies = list(self.dependencies)
551 self.dependencies = []
552 for arg in node.args:
553 self.visit(arg)
554 graph = [fname, node_to_string(node), self.dependencies]
555 dependencies.append(graph)
556 self.dependencies = dependencies
557
558 def visit_BinOp(self, node):
559 dependencies = list(self.dependencies)
560 self.dependencies = []
561 self.visit(node.left)
562 dep_left = self.dependencies
563
564 self.dependencies = []
565 self.visit(node.right)
566 dep_right = self.dependencies
567 graph = [opmap[type(node.op)], node_to_string(node), dep_left + dep_right]
568 dependencies.append(graph)
569 self.dependencies = dependencies
570
571 def visit_Name(self, node):
572 self.dependencies.append(node.id)
573
574
575 def _graph(expression_string):
576 node = parse_expression(expression_string)
577 g = GraphBuiler()
578 node = g.visit(node)
579 return g.dependencies[0]
580
581
582 def simplify(expression_string):
583 node = parse_expression(expression_string)
584 node = SimplifyExpression().visit(node)
585 return node_to_string(node)
586
587
588 def derivative(expression, variable_name, simplify=True):
589 if isinstance(expression, str):
590 node = parse_expression(expression)
591 else:
592 node = expression
593 node = Derivative(variable_name).visit(node)
594 if simplify:
595 node = SimplifyExpression().visit(node)
596 return node_to_string(node)
597
598
599 def translate(expression, translator):
600 if isinstance(expression, str):
601 node = parse_expression(expression)
602 else:
603 node = expression
604 node = Translator(translator).visit(node)
605 return node_to_string(node)
606
607
608 def names(expression):
609 if isinstance(expression, str):
610 node = parse_expression(expression)
611 else:
612 node = expression
613 nc = NameCollector()
614 nc.visit(node)
615 return nc.names
616
617
618 def slices(expression):
619 if isinstance(expression, str):
620 node = parse_expression(expression)
621 else:
622 node = expression
623 nc = SliceCollector()
624 nc.visit(node)
625 return nc.slices
626
627
628 def parse_expression(expression_string):
629 expr = ast.parse(expression_string).body[0]
630 assert isinstance(expr, ast.Expr), f"not an expression {str(expr)}"
631 return expr.value
632
633
634 def node_to_string(node, pretty=False):
635 return ExpressionString(pretty=pretty).visit(node)
636
637
638 def validate_func(name, function_set):
639 if name.id not in function_set:
640 raise NameError("function %r is not defined" % name.id)
641
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/packages/vaex-core/vaex/expresso.py b/packages/vaex-core/vaex/expresso.py
--- a/packages/vaex-core/vaex/expresso.py
+++ b/packages/vaex-core/vaex/expresso.py
@@ -366,7 +366,7 @@
key = self.visit(key)
value = self.visit(value)
parts.append(f'{key}: {value}')
- return '{' + ' '.join(parts) + '}'
+ return '{' + ', '.join(parts) + '}'
def visit_Call(self, node):
args = [self.visit(k) for k in node.args]
| {"golden_diff": "diff --git a/packages/vaex-core/vaex/expresso.py b/packages/vaex-core/vaex/expresso.py\n--- a/packages/vaex-core/vaex/expresso.py\n+++ b/packages/vaex-core/vaex/expresso.py\n@@ -366,7 +366,7 @@\n key = self.visit(key)\n value = self.visit(value)\n parts.append(f'{key}: {value}')\n- return '{' + ' '.join(parts) + '}'\n+ return '{' + ', '.join(parts) + '}'\n \n def visit_Call(self, node):\n args = [self.visit(k) for k in node.args]\n", "issue": "[BUG-REPORT] Cannot pass dictionaries into registered functions\nThank you for reaching out and helping us improve Vaex!\r\n\r\nBefore you submit a new Issue, please read through the [documentation](https://docs.vaex.io/en/latest/). Also, make sure you search through the Open and Closed Issues - your problem may already be discussed or addressed.\r\n\r\n**Description**\r\nIf you pass a dictionary into a registered function, you get a syntax error, while the same works for a list.\r\n\r\n```\r\nimport vaex\r\ndf = vaex.example()\r\ndf = df[df[\"id\"] < 10][:100]\r\n\r\nlabels = {0:\"now\", 1: \"happy\", 2: \"sad\", 3:\"arg\", 4:\"foo\", 5:\"bar\", 6:\"something\", 7:\"is\", 8:\"happening\", 9:\"here\"}\r\n\r\[email protected]_function()\r\ndef index_to_label(arr, mapping):\r\n return np.array([mapping[i] for i in arr])\r\n\r\n\r\ndf.id.index_to_label(labels)\r\n```\r\nthrows\r\n```\r\nExpression = index_to_label(id, {0: 'now', 1: 'happy', 2: 'sad', 3: 'a...\r\nLength: 100 dtype: string (expression)\r\n--------------------------------------\r\nError evaluating: SyntaxError('invalid syntax', ('<unknown>', 1, 30, \"index_to_label(id, {0: 'now' 1: 'happy' 2: 'sad' 3: 'arg' 4: 'foo' 5: 'bar' 6: 'something' 7: 'is' 8: 'happening' 9: 'here'})\\n\"))\r\n```\r\n\r\nwhile the same works as a list\r\n```\r\nimport vaex\r\nimport numpy as np\r\ndf = vaex.example()\r\ndf = df[df[\"id\"] < 10][:100]\r\n\r\nlabels = {0:\"now\", 1: \"happy\", 2: \"sad\", 3:\"arg\", 4:\"foo\", 5:\"bar\", 6:\"something\", 7:\"is\", 8:\"happening\", 9:\"here\"}\r\nlabels_list = [labels[i] for i in labels]\r\[email protected]_function()\r\ndef index_to_label(arr, mapping):\r\n return np.array([labels[i] for i in arr])\r\n\r\n\r\ndf.id.index_to_label(labels_list)\r\n```\r\n\r\nI also tried to be explicit like the docs\r\n```\r\nimport vaex\r\nimport numpy as np\r\nimport json\r\ndf = vaex.example()\r\ndf = df[df[\"id\"] < 10][:100]\r\n\r\nlabels = {0:\"now\", 1: \"happy\", 2: \"sad\", 3:\"arg\", 4:\"foo\", 5:\"bar\", 6:\"something\", 7:\"is\", 8:\"happening\", 9:\"here\"}\r\n\r\[email protected]_function(on_expression=False)\r\ndef index_to_label(mapping, arr):\r\n return np.array([mapping.get(i) for i in arr])\r\n\r\n\r\ndf.func.index_to_label(labels, df.id)\r\n```\r\n\r\nbut that also failed\r\n\r\n\r\n**Software information**\r\n - Vaex version (`import vaex; vaex.__version__)`: 4.9.1\r\n - Vaex was installed via: pip / conda-forge / from source pip\r\n - OS:Mac/Linux\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import division\nimport logging\nimport collections\nimport ast\nimport _ast\nimport string\nimport numpy as np\nimport math\nimport sys\nimport six\nimport copy\nimport difflib\n\n\nif hasattr(_ast, 'Num'):\n ast_Num = _ast.Num\n ast_Str = _ast.Str\nelse: # Python3.8\n ast_Num = _ast.Constant\n ast_Str = _ast.Constant\n\nif hasattr(_ast, 'NameConstant'):\n ast_Constant = _ast.NameConstant\nelse:\n ast_Constant = _ast.Constant\n\n\nlogger = 
logging.getLogger(\"expr\")\nlogger.setLevel(logging.ERROR)\n\n\nvalid_binary_operators = [_ast.Add, _ast.Sub, _ast.Mult, ast.MatMult, _ast.Pow,\n _ast.Div, _ast.FloorDiv, _ast.BitAnd, _ast.BitOr, _ast.BitXor, _ast.Mod,\n _ast.RShift, _ast.LShift\n ]\nvalid_compare_operators = [_ast.Lt, _ast.LtE,\n _ast.Gt, _ast.GtE, _ast.Eq, _ast.NotEq, _ast.IsNot, _ast.Is, _ast.In]\nvalid_unary_operators = [_ast.USub, _ast.UAdd, _ast.Invert]\nvalid_id_characters = string.ascii_letters + string.digits + \"_\"\nvalid_functions = \"sin cos\".split()\n\nopmap = {\n _ast.Add: '+',\n _ast.Sub: '-',\n _ast.Mult: '*',\n _ast.Pow: '**',\n _ast.Div: '/',\n _ast.FloorDiv: '//',\n _ast.BitAnd: '&',\n _ast.BitOr: '|',\n _ast.BitXor: '^',\n _ast.Mod: '%',\n}\n\n\ndef math_parse(expression, macros=[]):\n # TODO: validate macros?\n node = ast.parse(expression)\n if len(node.body) != 1:\n raise ValueError(\"expected one expression, got %r\" % len(node.body))\n expr = node.body[0]\n if not isinstance(expr, _ast.Expr):\n raise ValueError(\"expected an expression got a %r\" % type(node.body))\n\n validate_expression(expr.value)\n return MathExpression(expression, macros)\n\n\nlast_func = None\n\n\ndef validate_expression(expr, variable_set, function_set=[], names=None):\n global last_func\n names = names if names is not None else []\n if isinstance(expr, six.string_types):\n node = ast.parse(expr)\n if len(node.body) != 1:\n raise ValueError(\"expected one expression, got %r\" %\n len(node.body))\n first_expr = node.body[0]\n if not isinstance(first_expr, _ast.Expr):\n raise ValueError(\"expected an expression got a %r\" %\n type(node.body))\n validate_expression(first_expr.value, variable_set,\n function_set, names)\n elif isinstance(expr, _ast.BinOp):\n if expr.op.__class__ in valid_binary_operators:\n validate_expression(expr.right, variable_set, function_set, names)\n validate_expression(expr.left, variable_set, function_set, names)\n else:\n raise ValueError(\"Binary operator not allowed: %r\" % expr.op)\n elif isinstance(expr, _ast.UnaryOp):\n if expr.op.__class__ in valid_unary_operators:\n validate_expression(expr.operand, variable_set,\n function_set, names)\n else:\n raise ValueError(\"Unary operator not allowed: %r\" % expr.op)\n elif isinstance(expr, _ast.Name):\n if expr.id not in variable_set:\n matches = difflib.get_close_matches(expr.id, list(variable_set))\n msg = \"Column or variable %r does not exist.\" % expr.id\n if matches:\n msg += ' Did you mean: ' + \" or \".join(map(repr, matches))\n\n raise NameError(msg)\n names.append(expr.id)\n elif isinstance(expr, ast_Num):\n pass # numbers are fine\n elif isinstance(expr, ast_Str):\n pass # as well as strings\n elif isinstance(expr, _ast.Call):\n validate_func(expr.func, function_set)\n last_func = expr\n for arg in expr.args:\n validate_expression(arg, variable_set, function_set, names)\n for arg in expr.keywords:\n validate_expression(arg, variable_set, function_set, names)\n elif isinstance(expr, _ast.Compare):\n validate_expression(expr.left, variable_set, function_set, names)\n for op in expr.ops:\n if op.__class__ not in valid_compare_operators:\n raise ValueError(\"Compare operator not allowed: %r\" % op)\n for comparator in expr.comparators:\n validate_expression(comparator, variable_set, function_set, names)\n elif isinstance(expr, _ast.keyword):\n validate_expression(expr.value, variable_set, function_set, names)\n elif isinstance(expr, ast_Constant):\n pass # like True and False\n elif isinstance(expr, _ast.List):\n for el in expr.elts:\n 
validate_expression(el, variable_set, function_set, names)\n elif isinstance(expr, _ast.Dict):\n for key in expr.keys:\n validate_expression(key, variable_set, function_set, names)\n for value in expr.values:\n validate_expression(value, variable_set, function_set, names)\n elif isinstance(expr, _ast.Subscript):\n validate_expression(expr.value, variable_set, function_set, names)\n if isinstance(expr.slice.value, ast_Num):\n pass # numbers are fine\n elif isinstance(expr.slice.value, str) or isinstance(expr.slice.value, _ast.Str):\n pass # and strings (from py3.9, value is str)\n else:\n raise ValueError(\n \"Only subscript/slices with numbers allowed, not: %r\" % expr.slice.value)\n else:\n last_func = expr\n raise ValueError(\"Unknown expression type: %r\" % type(expr))\n\n\nclass Validator(ast.NodeVisitor):\n\n def generic_visit(self, node):\n raise ValueError('unexpected node: {}', ast.dump(node))\n\n def visit_BinOp(self, expr):\n if expr.op.__class__ in valid_binary_operators:\n validate_expression(expr.right, variable_set, function_set, names)\n validate_expression(expr.left, variable_set, function_set, names)\n else:\n raise ValueError(\"Binary operator not allowed: %r\" % expr.op)\n\n\ndef mul(left, right):\n return ast.BinOp(left=left, right=right, op=ast.Mult())\n\n\ndef div(left, right):\n return ast.BinOp(left=left, right=right, op=ast.Div())\n\n\ndef add(left, right):\n return ast.BinOp(left=left, right=right, op=ast.Add())\n\n\ndef sub(left, right):\n return ast.BinOp(left=left, right=right, op=ast.Sub())\n\n\ndef pow(left, right):\n return ast.BinOp(left=left, right=right, op=ast.Pow())\n\n\ndef sqr(node):\n return ast.BinOp(left=node, right=num(2), op=ast.Pow())\n\ndef sqrt(node):\n return call('sqrt', [node])\n\n\ndef neg(node):\n return ast.UnaryOp(op=ast.USub(), operand=node)\n\n\ndef num(n):\n return ast.Num(n=n)\n\n\ndef call(fname, args):\n return ast.Call(func=ast.Name(id=fname, ctx=ast.Load()), args=args)\n\n\ndef _dlog10(n, args):\n assert len(args) == 1\n assert n == 0\n a = call('log', args=[num(10)])\n return div(num(1), mul(args[0], a))\n\n\ndef _dsqrt(n, args):\n assert n == 0\n assert len(args) == 1\n a = call('log', args=[num(10)])\n return mul(num(1/2), pow(args[0], num(-0.5)))\n\n\ndef _dcos(n, args):\n assert n == 0\n assert len(args) == 1\n return neg(call('sin', args=args))\n\ndef _darccos(n, args):\n assert n == 0\n assert len(args) == 1\n a = sqrt(sub(num(1), sqr(args[0])))\n return neg(div(num(1), a))\n\ndef _darctan2(n, args):\n # derivative of arctan2(y, x)\n assert (n >= 0) and (n <= 1)\n assert len(args) == 2\n y, x = args\n if n == 1: # derivative wrt 2nd argument (x)\n return div(neg(y), add(sqr(x), sqr(y)))\n if n == 0: # derivative wrt 1st argument (y)\n return div(x, add(sqr(x), sqr(y)))\n\ndef _dtan(n, args):\n assert n == 0\n assert len(args) == 1\n# a = div(sub(num(1), sqr(args[0])))\n return div(num(1), sqr(call('cos', args=args)))\n\nstandard_function_derivatives = {}\nstandard_function_derivatives['sin'] = 'cos'\nstandard_function_derivatives['cos'] = _dcos\nstandard_function_derivatives['tan'] = _dtan\nstandard_function_derivatives['log10'] = _dlog10\nstandard_function_derivatives['sqrt'] = _dsqrt\nstandard_function_derivatives['arctan2'] = _darctan2\nstandard_function_derivatives['arccos'] = _darccos\n\n\nclass Derivative(ast.NodeTransformer):\n def __init__(self, id, function_derivatives={}):\n self.id = id\n self.function_derivatives = dict(standard_function_derivatives)\n self.function_derivatives.update(function_derivatives)\n\n 
def format(self, node):\n # try:\n return ExpressionString().visit(node)\n # return ast.dump(node)\n\n def visit_Num(self, node):\n return ast.Num(n=0)\n\n def visit_Name(self, node):\n if node.id == self.id:\n return ast.Num(n=1)\n else:\n return ast.Num(n=0)\n\n def visit_Call(self, node):\n fname = node.func.id\n df = self.function_derivatives.get(fname)\n if df is None:\n raise ValueError('Derivative of {} is unknown'.format(fname))\n if not callable(df): # simply a string\n assert len(node.args) == 1\n result = mul(call(df, node.args), self.visit(node.args[0]))\n else:\n terms = [mul(df(i, node.args), self.visit(arg))\n for i, arg in enumerate(node.args)]\n result = terms[0]\n for term in terms[1:]:\n result = add(result, term)\n return result\n\n def generic_visit(self, node):\n # it's annoying that the default one modifies in place\n return super(Derivative, self).generic_visit(copy.deepcopy(node))\n\n def visit_BinOp(self, node):\n solution = None\n if isinstance(node.op, ast.Mult):\n solution = add(mul(self.visit(node.left), node.right),\n mul(node.left, self.visit(node.right)))\n if isinstance(node.op, ast.Div):\n # (n*at - t*an) / n2\n n = node.right\n t = node.left\n at = self.visit(t)\n an = self.visit(n)\n solution = div(sub(mul(n, at), mul(t, an)), pow(n, num(2)))\n if isinstance(node.op, ast.Add):\n solution = add(self.visit(node.left), self.visit(node.right))\n if isinstance(node.op, ast.Sub):\n solution = sub(self.visit(node.left), self.visit(node.right))\n if isinstance(node.op, ast.Pow):\n # following https://en.wikipedia.org/wiki/Differentiation_rules\n f = node.left\n df = self.visit(f)\n g = node.right\n dg = self.visit(g)\n # if g is a number, we take a equivalent solution, which gives a nicer result\n if isinstance(g, ast.Num):\n solution = mul(g, mul(df, pow(node.left, num(node.right.n-1))))\n else:\n a = add(mul(df, div(g, f)), mul(dg, call('log', [f])))\n solution = mul(pow(f, g), a)\n if solution is None:\n raise ValueError('Unknown rule for: {}'.format(self.format(node)))\n return solution\n\n\nclass ExpressionString(ast.NodeVisitor):\n def __init__(self, pretty=False):\n self.pretty = pretty\n self.indent = 0\n def visit_UnaryOp(self, node):\n if isinstance(node.op, ast.USub):\n if isinstance(node.operand, (ast.Name, ast.Num, _ast.Name)):\n return \"-{}\".format(self.visit(node.operand)) # prettier\n else:\n return \"-({})\".format(self.visit(node.operand))\n elif isinstance(node.op, ast.UAdd):\n if isinstance(node.operand, (ast.Name, ast.Num, _ast.Name)):\n return \"+{}\".format(self.visit(node.operand)) # prettier\n else:\n return \"+({})\".format(self.visit(node.operand))\n elif isinstance(node.op, ast.Invert):\n if isinstance(node.operand, (ast.Name, ast.Num, _ast.Name)):\n return \"~{}\".format(self.visit(node.operand)) # prettier\n else:\n return \"~({})\".format(self.visit(node.operand))\n else:\n raise ValueError('Unary op not supported: {}'.format(node.op))\n\n def visit_Name(self, node):\n return node.id\n\n def visit_Num(self, node):\n return repr(node.n)\n\n def visit_keyword(self, node):\n return \"%s=%s\" % (node.arg, self.visit(node.value))\n\n def visit_NameConstant(self, node):\n return repr(node.value)\n\n def visit_Dict(self, node):\n parts = []\n for key, value in zip(node.keys, node.values):\n key = self.visit(key)\n value = self.visit(value)\n parts.append(f'{key}: {value}')\n return '{' + ' '.join(parts) + '}'\n\n def visit_Call(self, node):\n args = [self.visit(k) for k in node.args]\n keywords = []\n if hasattr(node, 'keywords'):\n 
keywords = [self.visit(k) for k in node.keywords]\n return \"{}({})\".format(node.func.id, \", \".join(args + keywords))\n\n def visit_Str(self, node):\n return repr(node.s)\n\n def visit_List(self, node):\n return \"[{}]\".format(\", \".join([self.visit(k) for k in node.elts]))\n\n def pow(self, left, right):\n return \"({left} ** {right})\".format(left=left, right=right)\n\n def visit_BinOp(self, node):\n newline = indent = \"\"\n if self.pretty:\n indent = \" \" * self.indent\n newline = \"\\n\"\n self.indent += 1\n left = \"{}{}{}\".format(newline, indent, self.visit(node.left))\n right = \"{}{}{}\".format(newline, indent, self.visit(node.right))\n try:\n if isinstance(node.op, ast.Mult):\n return \"({left} * {right})\".format(left=left, right=right)\n elif isinstance(node.op, ast.MatMult):\n return \"({left} @ {right})\".format(left=left, right=right)\n elif isinstance(node.op, ast.Div):\n return \"({left} / {right})\".format(left=left, right=right)\n elif isinstance(node.op, ast.Mod):\n return \"({left} % {right})\".format(left=left, right=right)\n elif isinstance(node.op, ast.FloorDiv):\n return \"({left} // {right})\".format(left=left, right=right)\n elif isinstance(node.op, ast.Add):\n return \"({left} + {right})\".format(left=left, right=right)\n elif isinstance(node.op, ast.Sub):\n return \"({left} - {right})\".format(left=left, right=right)\n elif isinstance(node.op, ast.Pow):\n return self.pow(left, right)\n elif isinstance(node.op, ast.BitAnd):\n return \"({left} & {right})\".format(left=left, right=right)\n elif isinstance(node.op, ast.BitOr):\n return \"({left} | {right})\".format(left=left, right=right)\n elif isinstance(node.op, ast.BitXor):\n return \"({left} ^ {right})\".format(left=left, right=right)\n elif isinstance(node.op, ast.RShift):\n return \"({left} >> {right})\".format(left=left, right=right)\n elif isinstance(node.op, ast.LShift):\n return \"({left} << {right})\".format(left=left, right=right)\n else:\n raise ValueError(f'Do not know binary op {node.op}')\n # return \"do_not_understand_expression\"\n finally:\n self.indent -= 1\n\n op_translate = {ast.Lt: \"<\", ast.LtE: \"<=\", ast.Gt: \">\", ast.GtE: \">=\", ast.Eq: \"==\", ast.NotEq: \"!=\",\n ast.IsNot: \"is not\", ast.Is: \"is\", ast.In: \"in\"}\n def visit_Compare(self, node):\n s = \"\"\n left = self.visit(node.left)\n for op, comp in zip(node.ops, node.comparators):\n right = self.visit(comp)\n op = ExpressionString.op_translate[op.__class__]\n s = \"({left} {op} {right})\".format(left=left, op=op, right=right)\n left = right\n return s\n\n def visit_Subscript(self, node):\n p = self.visit(node.value)\n v = self.visit(node.slice.value)\n return f'{p}[{v}]'\n\n # required from py3.9, since in visit_Subscript node can be a string\n def visit_str(self, node):\n return repr(node)\n\nclass SimplifyExpression(ast.NodeTransformer):\n\n def visit_UnaryOp(self, node):\n node.operand = self.visit(node.operand)\n if isinstance(node.op, ast.USub):\n if isinstance(node.operand, ast.Num) and node.operand.n == 0:\n node = node.operand\n return node\n\n def visit_BinOp(self, node):\n node.left = left = self.visit(node.left)\n node.right = right = self.visit(node.right)\n if isinstance(node.op, ast.Mult):\n if isinstance(right, ast.Num) and right.n == 0:\n return num(0)\n elif isinstance(right, ast.Num) and right.n == 1:\n return left\n elif isinstance(left, ast.Num) and left.n == 0:\n return num(0)\n elif isinstance(left, ast.Num) and left.n == 1:\n return right\n if isinstance(node.op, ast.Div):\n if 
isinstance(left, ast.Num) and left.n == 0:\n return num(0)\n if isinstance(node.op, ast.Add):\n if isinstance(right, ast.Num) and right.n == 0:\n return left\n if isinstance(left, ast.Num) and left.n == 0:\n return right\n if isinstance(node.op, ast.Sub):\n if isinstance(right, ast.Num) and right.n == 0:\n return left\n if isinstance(left, ast.Num) and left.n == 0:\n return neg(right)\n if isinstance(node.op, ast.Pow):\n if isinstance(left, ast.Num) and left.n == 0:\n return num(0) # not ok with negative powers..\n if isinstance(right, ast.Num) and right.n == 0:\n # TODO: this means a numpy arrays can become a scalar\n return num(1)\n if isinstance(right, ast.Num) and right.n == 1:\n return left\n return node\n\n\nclass Translator(ast.NodeTransformer):\n def __init__(self, translator):\n self.translator = translator\n\n def visit_Call(self, node):\n # we skip visiting node.id\n node.args = [self.visit(k) for k in node.args]\n if hasattr(node, 'keywords'):\n node.keywords = [self.visit(k) for k in node.keywords]\n return node\n\n def visit_Name(self, node):\n expr = self.translator(node.id)\n if expr:\n node = parse_expression(expr)\n node = self.visit(node)\n return node\n\n\nclass NameCollector(ast.NodeTransformer):\n def __init__(self):\n self.names = {}\n\n def visit_Call(self, node):\n # we skip visiting node.id\n self.visit(node.func)\n node.args = [self.visit(k) for k in node.args]\n if hasattr(node, 'keywords'):\n node.keywords = [self.visit(k) for k in node.keywords]\n return node\n\n def visit_Name(self, node):\n if node.id not in self.names:\n self.names[node.id] = []\n self.names[node.id].append(node)\n return node\n\n\nclass SliceCollector(ast.NodeTransformer):\n def __init__(self):\n self.slices = collections.defaultdict(list)\n\n def visit_Subscript(self, node):\n # py39\n if node.value.id == 'df' and isinstance(node.slice.value, str):\n self.slices[node.slice.value].append(node)\n if node.value.id == 'df' and isinstance(node.slice.value, ast.Str):\n self.slices[node.slice.value.s].append(node)\n return node\n\nclass GraphBuiler(ast.NodeVisitor):\n def __init__(self):\n self.dependencies = []\n\n def visit_Call(self, node):\n fname = node.func.id\n dependencies = list(self.dependencies)\n self.dependencies = []\n for arg in node.args:\n self.visit(arg)\n graph = [fname, node_to_string(node), self.dependencies]\n dependencies.append(graph)\n self.dependencies = dependencies\n\n def visit_BinOp(self, node):\n dependencies = list(self.dependencies)\n self.dependencies = []\n self.visit(node.left)\n dep_left = self.dependencies\n\n self.dependencies = []\n self.visit(node.right)\n dep_right = self.dependencies\n graph = [opmap[type(node.op)], node_to_string(node), dep_left + dep_right]\n dependencies.append(graph)\n self.dependencies = dependencies\n\n def visit_Name(self, node):\n self.dependencies.append(node.id)\n\n\ndef _graph(expression_string):\n node = parse_expression(expression_string)\n g = GraphBuiler()\n node = g.visit(node)\n return g.dependencies[0]\n\n\ndef simplify(expression_string):\n node = parse_expression(expression_string)\n node = SimplifyExpression().visit(node)\n return node_to_string(node)\n\n\ndef derivative(expression, variable_name, simplify=True):\n if isinstance(expression, str):\n node = parse_expression(expression)\n else:\n node = expression\n node = Derivative(variable_name).visit(node)\n if simplify:\n node = SimplifyExpression().visit(node)\n return node_to_string(node)\n\n\ndef translate(expression, translator):\n if isinstance(expression, 
str):\n node = parse_expression(expression)\n else:\n node = expression\n node = Translator(translator).visit(node)\n return node_to_string(node)\n\n\ndef names(expression):\n if isinstance(expression, str):\n node = parse_expression(expression)\n else:\n node = expression\n nc = NameCollector()\n nc.visit(node)\n return nc.names\n\n\ndef slices(expression):\n if isinstance(expression, str):\n node = parse_expression(expression)\n else:\n node = expression\n nc = SliceCollector()\n nc.visit(node)\n return nc.slices\n\n\ndef parse_expression(expression_string):\n expr = ast.parse(expression_string).body[0]\n assert isinstance(expr, ast.Expr), f\"not an expression {str(expr)}\"\n return expr.value\n\n\ndef node_to_string(node, pretty=False):\n return ExpressionString(pretty=pretty).visit(node)\n\n\ndef validate_func(name, function_set):\n if name.id not in function_set:\n raise NameError(\"function %r is not defined\" % name.id)\n", "path": "packages/vaex-core/vaex/expresso.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import division\nimport logging\nimport collections\nimport ast\nimport _ast\nimport string\nimport numpy as np\nimport math\nimport sys\nimport six\nimport copy\nimport difflib\n\n\nif hasattr(_ast, 'Num'):\n ast_Num = _ast.Num\n ast_Str = _ast.Str\nelse: # Python3.8\n ast_Num = _ast.Constant\n ast_Str = _ast.Constant\n\nif hasattr(_ast, 'NameConstant'):\n ast_Constant = _ast.NameConstant\nelse:\n ast_Constant = _ast.Constant\n\n\nlogger = logging.getLogger(\"expr\")\nlogger.setLevel(logging.ERROR)\n\n\nvalid_binary_operators = [_ast.Add, _ast.Sub, _ast.Mult, ast.MatMult, _ast.Pow,\n _ast.Div, _ast.FloorDiv, _ast.BitAnd, _ast.BitOr, _ast.BitXor, _ast.Mod,\n _ast.RShift, _ast.LShift\n ]\nvalid_compare_operators = [_ast.Lt, _ast.LtE,\n _ast.Gt, _ast.GtE, _ast.Eq, _ast.NotEq, _ast.IsNot, _ast.Is, _ast.In]\nvalid_unary_operators = [_ast.USub, _ast.UAdd, _ast.Invert]\nvalid_id_characters = string.ascii_letters + string.digits + \"_\"\nvalid_functions = \"sin cos\".split()\n\nopmap = {\n _ast.Add: '+',\n _ast.Sub: '-',\n _ast.Mult: '*',\n _ast.Pow: '**',\n _ast.Div: '/',\n _ast.FloorDiv: '//',\n _ast.BitAnd: '&',\n _ast.BitOr: '|',\n _ast.BitXor: '^',\n _ast.Mod: '%',\n}\n\n\ndef math_parse(expression, macros=[]):\n # TODO: validate macros?\n node = ast.parse(expression)\n if len(node.body) != 1:\n raise ValueError(\"expected one expression, got %r\" % len(node.body))\n expr = node.body[0]\n if not isinstance(expr, _ast.Expr):\n raise ValueError(\"expected an expression got a %r\" % type(node.body))\n\n validate_expression(expr.value)\n return MathExpression(expression, macros)\n\n\nlast_func = None\n\n\ndef validate_expression(expr, variable_set, function_set=[], names=None):\n global last_func\n names = names if names is not None else []\n if isinstance(expr, six.string_types):\n node = ast.parse(expr)\n if len(node.body) != 1:\n raise ValueError(\"expected one expression, got %r\" %\n len(node.body))\n first_expr = node.body[0]\n if not isinstance(first_expr, _ast.Expr):\n raise ValueError(\"expected an expression got a %r\" %\n type(node.body))\n validate_expression(first_expr.value, variable_set,\n function_set, names)\n elif isinstance(expr, _ast.BinOp):\n if expr.op.__class__ in valid_binary_operators:\n validate_expression(expr.right, variable_set, function_set, names)\n validate_expression(expr.left, variable_set, function_set, names)\n else:\n raise ValueError(\"Binary operator not allowed: %r\" % expr.op)\n elif isinstance(expr, 
_ast.UnaryOp):\n if expr.op.__class__ in valid_unary_operators:\n validate_expression(expr.operand, variable_set,\n function_set, names)\n else:\n raise ValueError(\"Unary operator not allowed: %r\" % expr.op)\n elif isinstance(expr, _ast.Name):\n if expr.id not in variable_set:\n matches = difflib.get_close_matches(expr.id, list(variable_set))\n msg = \"Column or variable %r does not exist.\" % expr.id\n if matches:\n msg += ' Did you mean: ' + \" or \".join(map(repr, matches))\n\n raise NameError(msg)\n names.append(expr.id)\n elif isinstance(expr, ast_Num):\n pass # numbers are fine\n elif isinstance(expr, ast_Str):\n pass # as well as strings\n elif isinstance(expr, _ast.Call):\n validate_func(expr.func, function_set)\n last_func = expr\n for arg in expr.args:\n validate_expression(arg, variable_set, function_set, names)\n for arg in expr.keywords:\n validate_expression(arg, variable_set, function_set, names)\n elif isinstance(expr, _ast.Compare):\n validate_expression(expr.left, variable_set, function_set, names)\n for op in expr.ops:\n if op.__class__ not in valid_compare_operators:\n raise ValueError(\"Compare operator not allowed: %r\" % op)\n for comparator in expr.comparators:\n validate_expression(comparator, variable_set, function_set, names)\n elif isinstance(expr, _ast.keyword):\n validate_expression(expr.value, variable_set, function_set, names)\n elif isinstance(expr, ast_Constant):\n pass # like True and False\n elif isinstance(expr, _ast.List):\n for el in expr.elts:\n validate_expression(el, variable_set, function_set, names)\n elif isinstance(expr, _ast.Dict):\n for key in expr.keys:\n validate_expression(key, variable_set, function_set, names)\n for value in expr.values:\n validate_expression(value, variable_set, function_set, names)\n elif isinstance(expr, _ast.Subscript):\n validate_expression(expr.value, variable_set, function_set, names)\n if isinstance(expr.slice.value, ast_Num):\n pass # numbers are fine\n elif isinstance(expr.slice.value, str) or isinstance(expr.slice.value, _ast.Str):\n pass # and strings (from py3.9, value is str)\n else:\n raise ValueError(\n \"Only subscript/slices with numbers allowed, not: %r\" % expr.slice.value)\n else:\n last_func = expr\n raise ValueError(\"Unknown expression type: %r\" % type(expr))\n\n\nclass Validator(ast.NodeVisitor):\n\n def generic_visit(self, node):\n raise ValueError('unexpected node: {}', ast.dump(node))\n\n def visit_BinOp(self, expr):\n if expr.op.__class__ in valid_binary_operators:\n validate_expression(expr.right, variable_set, function_set, names)\n validate_expression(expr.left, variable_set, function_set, names)\n else:\n raise ValueError(\"Binary operator not allowed: %r\" % expr.op)\n\n\ndef mul(left, right):\n return ast.BinOp(left=left, right=right, op=ast.Mult())\n\n\ndef div(left, right):\n return ast.BinOp(left=left, right=right, op=ast.Div())\n\n\ndef add(left, right):\n return ast.BinOp(left=left, right=right, op=ast.Add())\n\n\ndef sub(left, right):\n return ast.BinOp(left=left, right=right, op=ast.Sub())\n\n\ndef pow(left, right):\n return ast.BinOp(left=left, right=right, op=ast.Pow())\n\n\ndef sqr(node):\n return ast.BinOp(left=node, right=num(2), op=ast.Pow())\n\ndef sqrt(node):\n return call('sqrt', [node])\n\n\ndef neg(node):\n return ast.UnaryOp(op=ast.USub(), operand=node)\n\n\ndef num(n):\n return ast.Num(n=n)\n\n\ndef call(fname, args):\n return ast.Call(func=ast.Name(id=fname, ctx=ast.Load()), args=args)\n\n\ndef _dlog10(n, args):\n assert len(args) == 1\n assert n == 0\n a = 
call('log', args=[num(10)])\n return div(num(1), mul(args[0], a))\n\n\ndef _dsqrt(n, args):\n assert n == 0\n assert len(args) == 1\n a = call('log', args=[num(10)])\n return mul(num(1/2), pow(args[0], num(-0.5)))\n\n\ndef _dcos(n, args):\n assert n == 0\n assert len(args) == 1\n return neg(call('sin', args=args))\n\ndef _darccos(n, args):\n assert n == 0\n assert len(args) == 1\n a = sqrt(sub(num(1), sqr(args[0])))\n return neg(div(num(1), a))\n\ndef _darctan2(n, args):\n # derivative of arctan2(y, x)\n assert (n >= 0) and (n <= 1)\n assert len(args) == 2\n y, x = args\n if n == 1: # derivative wrt 2nd argument (x)\n return div(neg(y), add(sqr(x), sqr(y)))\n if n == 0: # derivative wrt 1st argument (y)\n return div(x, add(sqr(x), sqr(y)))\n\ndef _dtan(n, args):\n assert n == 0\n assert len(args) == 1\n# a = div(sub(num(1), sqr(args[0])))\n return div(num(1), sqr(call('cos', args=args)))\n\nstandard_function_derivatives = {}\nstandard_function_derivatives['sin'] = 'cos'\nstandard_function_derivatives['cos'] = _dcos\nstandard_function_derivatives['tan'] = _dtan\nstandard_function_derivatives['log10'] = _dlog10\nstandard_function_derivatives['sqrt'] = _dsqrt\nstandard_function_derivatives['arctan2'] = _darctan2\nstandard_function_derivatives['arccos'] = _darccos\n\n\nclass Derivative(ast.NodeTransformer):\n def __init__(self, id, function_derivatives={}):\n self.id = id\n self.function_derivatives = dict(standard_function_derivatives)\n self.function_derivatives.update(function_derivatives)\n\n def format(self, node):\n # try:\n return ExpressionString().visit(node)\n # return ast.dump(node)\n\n def visit_Num(self, node):\n return ast.Num(n=0)\n\n def visit_Name(self, node):\n if node.id == self.id:\n return ast.Num(n=1)\n else:\n return ast.Num(n=0)\n\n def visit_Call(self, node):\n fname = node.func.id\n df = self.function_derivatives.get(fname)\n if df is None:\n raise ValueError('Derivative of {} is unknown'.format(fname))\n if not callable(df): # simply a string\n assert len(node.args) == 1\n result = mul(call(df, node.args), self.visit(node.args[0]))\n else:\n terms = [mul(df(i, node.args), self.visit(arg))\n for i, arg in enumerate(node.args)]\n result = terms[0]\n for term in terms[1:]:\n result = add(result, term)\n return result\n\n def generic_visit(self, node):\n # it's annoying that the default one modifies in place\n return super(Derivative, self).generic_visit(copy.deepcopy(node))\n\n def visit_BinOp(self, node):\n solution = None\n if isinstance(node.op, ast.Mult):\n solution = add(mul(self.visit(node.left), node.right),\n mul(node.left, self.visit(node.right)))\n if isinstance(node.op, ast.Div):\n # (n*at - t*an) / n2\n n = node.right\n t = node.left\n at = self.visit(t)\n an = self.visit(n)\n solution = div(sub(mul(n, at), mul(t, an)), pow(n, num(2)))\n if isinstance(node.op, ast.Add):\n solution = add(self.visit(node.left), self.visit(node.right))\n if isinstance(node.op, ast.Sub):\n solution = sub(self.visit(node.left), self.visit(node.right))\n if isinstance(node.op, ast.Pow):\n # following https://en.wikipedia.org/wiki/Differentiation_rules\n f = node.left\n df = self.visit(f)\n g = node.right\n dg = self.visit(g)\n # if g is a number, we take a equivalent solution, which gives a nicer result\n if isinstance(g, ast.Num):\n solution = mul(g, mul(df, pow(node.left, num(node.right.n-1))))\n else:\n a = add(mul(df, div(g, f)), mul(dg, call('log', [f])))\n solution = mul(pow(f, g), a)\n if solution is None:\n raise ValueError('Unknown rule for: 
{}'.format(self.format(node)))\n return solution\n\n\nclass ExpressionString(ast.NodeVisitor):\n def __init__(self, pretty=False):\n self.pretty = pretty\n self.indent = 0\n def visit_UnaryOp(self, node):\n if isinstance(node.op, ast.USub):\n if isinstance(node.operand, (ast.Name, ast.Num, _ast.Name)):\n return \"-{}\".format(self.visit(node.operand)) # prettier\n else:\n return \"-({})\".format(self.visit(node.operand))\n elif isinstance(node.op, ast.UAdd):\n if isinstance(node.operand, (ast.Name, ast.Num, _ast.Name)):\n return \"+{}\".format(self.visit(node.operand)) # prettier\n else:\n return \"+({})\".format(self.visit(node.operand))\n elif isinstance(node.op, ast.Invert):\n if isinstance(node.operand, (ast.Name, ast.Num, _ast.Name)):\n return \"~{}\".format(self.visit(node.operand)) # prettier\n else:\n return \"~({})\".format(self.visit(node.operand))\n else:\n raise ValueError('Unary op not supported: {}'.format(node.op))\n\n def visit_Name(self, node):\n return node.id\n\n def visit_Num(self, node):\n return repr(node.n)\n\n def visit_keyword(self, node):\n return \"%s=%s\" % (node.arg, self.visit(node.value))\n\n def visit_NameConstant(self, node):\n return repr(node.value)\n\n def visit_Dict(self, node):\n parts = []\n for key, value in zip(node.keys, node.values):\n key = self.visit(key)\n value = self.visit(value)\n parts.append(f'{key}: {value}')\n return '{' + ', '.join(parts) + '}'\n\n def visit_Call(self, node):\n args = [self.visit(k) for k in node.args]\n keywords = []\n if hasattr(node, 'keywords'):\n keywords = [self.visit(k) for k in node.keywords]\n return \"{}({})\".format(node.func.id, \", \".join(args + keywords))\n\n def visit_Str(self, node):\n return repr(node.s)\n\n def visit_List(self, node):\n return \"[{}]\".format(\", \".join([self.visit(k) for k in node.elts]))\n\n def pow(self, left, right):\n return \"({left} ** {right})\".format(left=left, right=right)\n\n def visit_BinOp(self, node):\n newline = indent = \"\"\n if self.pretty:\n indent = \" \" * self.indent\n newline = \"\\n\"\n self.indent += 1\n left = \"{}{}{}\".format(newline, indent, self.visit(node.left))\n right = \"{}{}{}\".format(newline, indent, self.visit(node.right))\n try:\n if isinstance(node.op, ast.Mult):\n return \"({left} * {right})\".format(left=left, right=right)\n elif isinstance(node.op, ast.MatMult):\n return \"({left} @ {right})\".format(left=left, right=right)\n elif isinstance(node.op, ast.Div):\n return \"({left} / {right})\".format(left=left, right=right)\n elif isinstance(node.op, ast.Mod):\n return \"({left} % {right})\".format(left=left, right=right)\n elif isinstance(node.op, ast.FloorDiv):\n return \"({left} // {right})\".format(left=left, right=right)\n elif isinstance(node.op, ast.Add):\n return \"({left} + {right})\".format(left=left, right=right)\n elif isinstance(node.op, ast.Sub):\n return \"({left} - {right})\".format(left=left, right=right)\n elif isinstance(node.op, ast.Pow):\n return self.pow(left, right)\n elif isinstance(node.op, ast.BitAnd):\n return \"({left} & {right})\".format(left=left, right=right)\n elif isinstance(node.op, ast.BitOr):\n return \"({left} | {right})\".format(left=left, right=right)\n elif isinstance(node.op, ast.BitXor):\n return \"({left} ^ {right})\".format(left=left, right=right)\n elif isinstance(node.op, ast.RShift):\n return \"({left} >> {right})\".format(left=left, right=right)\n elif isinstance(node.op, ast.LShift):\n return \"({left} << {right})\".format(left=left, right=right)\n else:\n raise ValueError(f'Do not know binary 
op {node.op}')\n # return \"do_not_understand_expression\"\n finally:\n self.indent -= 1\n\n op_translate = {ast.Lt: \"<\", ast.LtE: \"<=\", ast.Gt: \">\", ast.GtE: \">=\", ast.Eq: \"==\", ast.NotEq: \"!=\",\n ast.IsNot: \"is not\", ast.Is: \"is\", ast.In: \"in\"}\n def visit_Compare(self, node):\n s = \"\"\n left = self.visit(node.left)\n for op, comp in zip(node.ops, node.comparators):\n right = self.visit(comp)\n op = ExpressionString.op_translate[op.__class__]\n s = \"({left} {op} {right})\".format(left=left, op=op, right=right)\n left = right\n return s\n\n def visit_Subscript(self, node):\n p = self.visit(node.value)\n v = self.visit(node.slice.value)\n return f'{p}[{v}]'\n\n # required from py3.9, since in visit_Subscript node can be a string\n def visit_str(self, node):\n return repr(node)\n\nclass SimplifyExpression(ast.NodeTransformer):\n\n def visit_UnaryOp(self, node):\n node.operand = self.visit(node.operand)\n if isinstance(node.op, ast.USub):\n if isinstance(node.operand, ast.Num) and node.operand.n == 0:\n node = node.operand\n return node\n\n def visit_BinOp(self, node):\n node.left = left = self.visit(node.left)\n node.right = right = self.visit(node.right)\n if isinstance(node.op, ast.Mult):\n if isinstance(right, ast.Num) and right.n == 0:\n return num(0)\n elif isinstance(right, ast.Num) and right.n == 1:\n return left\n elif isinstance(left, ast.Num) and left.n == 0:\n return num(0)\n elif isinstance(left, ast.Num) and left.n == 1:\n return right\n if isinstance(node.op, ast.Div):\n if isinstance(left, ast.Num) and left.n == 0:\n return num(0)\n if isinstance(node.op, ast.Add):\n if isinstance(right, ast.Num) and right.n == 0:\n return left\n if isinstance(left, ast.Num) and left.n == 0:\n return right\n if isinstance(node.op, ast.Sub):\n if isinstance(right, ast.Num) and right.n == 0:\n return left\n if isinstance(left, ast.Num) and left.n == 0:\n return neg(right)\n if isinstance(node.op, ast.Pow):\n if isinstance(left, ast.Num) and left.n == 0:\n return num(0) # not ok with negative powers..\n if isinstance(right, ast.Num) and right.n == 0:\n # TODO: this means a numpy arrays can become a scalar\n return num(1)\n if isinstance(right, ast.Num) and right.n == 1:\n return left\n return node\n\n\nclass Translator(ast.NodeTransformer):\n def __init__(self, translator):\n self.translator = translator\n\n def visit_Call(self, node):\n # we skip visiting node.id\n node.args = [self.visit(k) for k in node.args]\n if hasattr(node, 'keywords'):\n node.keywords = [self.visit(k) for k in node.keywords]\n return node\n\n def visit_Name(self, node):\n expr = self.translator(node.id)\n if expr:\n node = parse_expression(expr)\n node = self.visit(node)\n return node\n\n\nclass NameCollector(ast.NodeTransformer):\n def __init__(self):\n self.names = {}\n\n def visit_Call(self, node):\n # we skip visiting node.id\n self.visit(node.func)\n node.args = [self.visit(k) for k in node.args]\n if hasattr(node, 'keywords'):\n node.keywords = [self.visit(k) for k in node.keywords]\n return node\n\n def visit_Name(self, node):\n if node.id not in self.names:\n self.names[node.id] = []\n self.names[node.id].append(node)\n return node\n\n\nclass SliceCollector(ast.NodeTransformer):\n def __init__(self):\n self.slices = collections.defaultdict(list)\n\n def visit_Subscript(self, node):\n # py39\n if node.value.id == 'df' and isinstance(node.slice.value, str):\n self.slices[node.slice.value].append(node)\n if node.value.id == 'df' and isinstance(node.slice.value, ast.Str):\n 
self.slices[node.slice.value.s].append(node)\n return node\n\nclass GraphBuiler(ast.NodeVisitor):\n def __init__(self):\n self.dependencies = []\n\n def visit_Call(self, node):\n fname = node.func.id\n dependencies = list(self.dependencies)\n self.dependencies = []\n for arg in node.args:\n self.visit(arg)\n graph = [fname, node_to_string(node), self.dependencies]\n dependencies.append(graph)\n self.dependencies = dependencies\n\n def visit_BinOp(self, node):\n dependencies = list(self.dependencies)\n self.dependencies = []\n self.visit(node.left)\n dep_left = self.dependencies\n\n self.dependencies = []\n self.visit(node.right)\n dep_right = self.dependencies\n graph = [opmap[type(node.op)], node_to_string(node), dep_left + dep_right]\n dependencies.append(graph)\n self.dependencies = dependencies\n\n def visit_Name(self, node):\n self.dependencies.append(node.id)\n\n\ndef _graph(expression_string):\n node = parse_expression(expression_string)\n g = GraphBuiler()\n node = g.visit(node)\n return g.dependencies[0]\n\n\ndef simplify(expression_string):\n node = parse_expression(expression_string)\n node = SimplifyExpression().visit(node)\n return node_to_string(node)\n\n\ndef derivative(expression, variable_name, simplify=True):\n if isinstance(expression, str):\n node = parse_expression(expression)\n else:\n node = expression\n node = Derivative(variable_name).visit(node)\n if simplify:\n node = SimplifyExpression().visit(node)\n return node_to_string(node)\n\n\ndef translate(expression, translator):\n if isinstance(expression, str):\n node = parse_expression(expression)\n else:\n node = expression\n node = Translator(translator).visit(node)\n return node_to_string(node)\n\n\ndef names(expression):\n if isinstance(expression, str):\n node = parse_expression(expression)\n else:\n node = expression\n nc = NameCollector()\n nc.visit(node)\n return nc.names\n\n\ndef slices(expression):\n if isinstance(expression, str):\n node = parse_expression(expression)\n else:\n node = expression\n nc = SliceCollector()\n nc.visit(node)\n return nc.slices\n\n\ndef parse_expression(expression_string):\n expr = ast.parse(expression_string).body[0]\n assert isinstance(expr, ast.Expr), f\"not an expression {str(expr)}\"\n return expr.value\n\n\ndef node_to_string(node, pretty=False):\n return ExpressionString(pretty=pretty).visit(node)\n\n\ndef validate_func(name, function_set):\n if name.id not in function_set:\n raise NameError(\"function %r is not defined\" % name.id)\n", "path": "packages/vaex-core/vaex/expresso.py"}]} |
gh_patches_debug_1121 | rasdani/github-patches | git_diff | chainer__chainer-4497 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Feature request: Variable.xp to get the array module of its data array
I often need to call `chainer.cuda.get_array_module` for `Variable` objects. Like `Link.xp`, `Variable.xp` would be a useful property.
--- END ISSUE ---
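For context, `Link.xp` returns the array module (`numpy` or `cupy`) matching the link's parameters, and the issue asks for the same convenience on `Variable`. Below is a minimal, hedged sketch of such a property; `VariableSketch` is a hypothetical stand-in used only for illustration, not the repository's actual implementation, and it assumes `chainer.backends.cuda.get_array_module` as the underlying helper, i.e. the same call the issue author currently writes by hand.

```
# Illustrative sketch only: a toy stand-in showing how a Variable.xp property
# could defer to chainer.backends.cuda.get_array_module. The class name and
# attribute layout here are assumptions for the example, not chainer's code.
import numpy as np

from chainer.backends import cuda


class VariableSketch(object):
    """Toy holder of a data array, mimicking the part of Variable we need."""

    def __init__(self, data):
        self._data = [data]  # Variable stores its array in a one-element list

    @property
    def data(self):
        return self._data[0]

    @property
    def xp(self):
        """Array module (numpy or cupy) of the underlying data array."""
        return cuda.get_array_module(self._data[0])


v = VariableSketch(np.zeros((2, 3), dtype=np.float32))
assert v.xp is np  # CPU-backed data resolves to numpy
```

With such a property, call sites like `xp = chainer.cuda.get_array_module(v)` shorten to `xp = v.xp`, mirroring how `Link.xp` is already used.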
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `chainer/variable.py`
Content:
```
1 import collections
2 import copy
3 import heapq
4 import traceback
5 import warnings
6 import weakref
7
8 import numpy
9
10 import chainer
11 from chainer import _backprop_utils
12 from chainer.backends import cuda
13 from chainer.backends import intel64
14 from chainer import initializers
15 from chainer.initializers import constant
16 from chainer.utils import argument
17
18
19 def _check_grad_type(func, x, gx):
20 if x.data is None or gx is None:
21 # ``x.data is None`` implies that the data array is not retained
22 return
23 if not chainer.is_arrays_compatible((gx, x.data)):
24 msg = ('Type of data and grad mismatch\ngrad: %s != data: %s' %
25 (type(x.data), type(gx)))
26 typ = TypeError
27 elif gx.dtype != x.data.dtype:
28 msg = ('Dtype of data and grad mismatch\ngrad: %s != data: %s' %
29 (x.data.dtype, gx.dtype))
30 typ = TypeError
31 elif gx.shape != x.data.shape:
32 msg = ('Shape of data and grad mismatch\ngrad: %s != data: %s' %
33 (x.data.shape, gx.shape))
34 typ = ValueError
35 else:
36 return
37
38 detail = ''
39 if func:
40 detail = 'Function `{0}` ({1}) has a bug.\n'.format(
41 type(func)._impl_name, func.label)
42 stack = func.stack
43 if stack:
44 detail += 'Stacktrace of the function is below:\n'
45 for line in traceback.format_list(func.stack):
46 detail += line
47 detail += '''
48 Please report this error to the issue tracker with the stack trace,
49 the information of your environment, and your script:
50 https://github.com/chainer/chainer/issues/new.
51 '''.format(type(func).__name__, func.label)
52
53 raise typ(detail + msg)
54
55
56 def variable_repr(var):
57 """Return the string representation of a variable.
58
59 Args:
60 var (~chainer.Variable): Input Variable.
61 .. seealso:: numpy.array_repr
62 """
63 xp = cuda.get_array_module(var)
64 if xp is numpy:
65 arr = var.data
66 else:
67 arr = var.data.get()
68
69 if var.name:
70 prefix = 'variable ' + var.name
71 else:
72 prefix = 'variable'
73
74 if arr is None:
75 lst = 'None'
76 elif arr.size > 0 or arr.shape == (0,):
77 lst = numpy.array2string(arr, None, None, None, ', ', prefix + '(')
78 else: # show zero-length shape unless it is (0,)
79 lst = '[], shape=%s' % (repr(arr.shape),)
80
81 return '%s(%s)' % (prefix, lst)
82
83
84 def variable_str(var):
85 """Return the string representation of a variable.
86
87 Args:
88 var (~chainer.Variable): Input Variable.
89 .. seealso:: numpy.array_str
90 """
91 xp = cuda.get_array_module(var)
92 if xp is numpy:
93 arr = var.data
94 else:
95 arr = var.data.get()
96
97 if var.name:
98 prefix = 'variable ' + var.name
99 else:
100 prefix = 'variable'
101
102 if arr is None:
103 lst = 'None'
104 else:
105 lst = numpy.array2string(arr, None, None, None, ' ', prefix + '(')
106
107 return '%s(%s)' % (prefix, lst)
108
109
110 class VariableNode(object):
111
112 """Node in the backward computational graph representing a variable.
113
114 This object represents a variable node in a computational graph. The node
115 is used in error backpropagation (a.k.a. backprop) to determine which
116 gradient to be passed to each function.
117
118 A variable node is held by the corresponding :class:`~chainer.Variable`
119 object, which is managed by users. :class:`~chainer.FunctionNode` objects
120 that take the variable as an input also hold references to the variable
121 node.
122
123 Note that the node does not hold a reference to the corresponding data
124 array in general. The data array is actually accessible by the node in the
125 following cases.
126
127 1. If there exists a :class:`~chainer.Variable` object that holds a
128 reference to the variable node, the variable node holds a weak reference
129 to the variable object, and thus the data array is accessible via the
130 weak reference.
131 2. If :meth:`retain_data` is called, the node holds a reference to the data
132 array. It is mainly called by a function that needs the input or output
133 data array in its backprop procedure.
134 See :meth:`FunctionNode.retain_inputs()
135 <chainer.FunctionNode.retain_inputs>`
136 and :meth:`FunctionNode.retain_outputs()
137 <chainer.FunctionNode.retain_outputs>` for more details.
138
139 Users usually do not need to touch this variable node object. The
140 computational graph is automatically managed by Chainer, and any interface
141 that is beneficial for users is also provided by
142 :class:`~chainer.Variable`.
143
144 Args:
145 variable (Variable): The corresponding variable object.
146 name (str): Name of the variable node.
147
148 Attributes:
149 ~VariableNode.dtype: Data type of the data array.
150 ~VariableNode.shape: Shape of the data array.
151 ~VariableNode.name (str): Name of the variable node.
152
153 """
154
155 _creator_node = None
156 _data = None
157 _rank = 0
158 # Name of the Function is assigned if this variable is a gradient generated
159 # by an old-style Function
160 _old_style_grad_generator = None
161
162 def __init__(self, variable, name, **kwargs):
163 argument.check_unexpected_kwargs(
164 kwargs,
165 grad='unexpected keyword argument "grad": '
166 'pass the gradient to Variable instead'
167 )
168 self._variable = weakref.ref(variable)
169 self.name = name
170 self._requires_grad = variable.requires_grad
171
172 vdata = variable.data
173 self._update_data_info(vdata)
174
175 @property
176 def creator(self):
177 """Function object that created this variable node.
178
179 When the function is implemented with the old-style API (i.e., it uses
180 :class:`~chainer.Function` class),
181 this property returns the :class:`~chainer.Function` object.
182 The object is extracted from the :class:`~chainer.FunctionAdapter`
183 object, so the returned object is not the function node, but instead
184 the actual implementation of forward and backward procedures.
185
186 When the function is implemented with the new-style API (i.e., it uses
187 :class:`~chainer.FunctionNode` class),
188 this property returns the function node
189 object. In this case, the returned object is the same as
190 :attr:`creator_node`.
191
192 .. warning::
193
194 As of v3.0.0, when the creator is an old-style function, the
195 following code is invalid:
196
197 .. code-block:: python
198
199 creator = v.creator
200 v.creator = None
201 ...
202 v.creator = creator
203
204 The point is that :class:`~chainer.FunctionNode` objects are used
205 as nodes in the computational graph instead of
206 :class:`~chainer.Function`, and each :class:`~chainer.Function`
207 object only holds a *weak reference* to the corresponding
208 :class:`~chainer.FunctionNode`.
209 Since ``creator`` returns the :class:`~chainer.Function` object,
210 the :class:`~chainer.FunctionNode` object is not kept by preserving
211 ``creator``.
212
213 The above code should be fixed as follows.
214
215 .. code-block:: python
216
217 creator_node = v.creator_node
218 v.creator_node = None
219 ...
220 v.creator_node = creator_node
221
222 """
223 node = self._creator_node
224 if node is None:
225 return None
226
227 if isinstance(node, chainer.function.FunctionAdapter):
228 return node.function
229 return node
230
231 @creator.setter
232 def creator(self, func):
233 self.creator_node = func
234
235 @property
236 def creator_node(self):
237 """Function node that has this variable as an output.
238
239 See :class:`~chainer.FunctionNode` for the definition of a function
240 node.
241
242 """
243 return self._creator_node
244
245 @creator_node.setter
246 def creator_node(self, func):
247 if isinstance(func, chainer.Function):
248 func = func.node
249 self._creator_node = func
250 if func is not None:
251 self._rank = func.rank + 1
252
253 @property
254 def data(self):
255 """Data array of the corresponding variable.
256
257 If the data is not available, it returns ``None``.
258
259 """
260 return self._data
261
262 @data.setter
263 def data(self, d):
264 self._data = d
265 self._update_data_info(d)
266
267 @property
268 def grad(self):
269 """Gradient array of the corresponding variable.
270
271 If the variable is not available, it returns ``None``.
272
273 """
274 var = self._variable()
275 return None if var is None else var.grad
276
277 @property
278 def grad_var(self):
279 """Gradient variable of the corresponding variable.
280
281 If the corresponding variable is not available, it returns ``None``.
282
283 """
284 var = self._variable()
285 return None if var is None else var._grad_var
286
287 @property
288 def label(self):
289 """Short text that represents the variable node."""
290 if self.shape == ():
291 return str(self.dtype)
292 return '(%s), %s' % (', '.join(map(str, self.shape)),
293 str(self.dtype))
294
295 @property
296 def rank(self):
297 return self._rank
298
299 @property
300 def requires_grad(self):
301 """It indicates that ``grad`` will be set in backward calculation."""
302 return self._requires_grad
303
304 def get_variable(self):
305 """Returns the corresponding :class:`~chainer.Variable` object.
306
307 VariableNode object holds a weak reference of the variable object. If
308 the reference is alive, it is returned by this property. Otherwise,
309 this property creates a new :class:`~chainer.Variable` object from
310 this node object and returns it.
311
312 Returns:
313 Variable: The variable object that refers this node.
314
315 """
316 var = self._variable()
317 if var is not None:
318 return var
319
320 var = Variable(self.data, name=self.name,
321 requires_grad=self._requires_grad)
322 var._node = self
323 return var
324
325 def get_variable_or_none(self):
326 """Returns the holding :class:`~chainer.Variable` object or ``None``.
327
328 VariableNode object holds a weak reference of the variable object. If
329 the reference is alive, it is returned by this property. Otherwise,
330 returns ``None``.
331
332 Returns:
333 Variable: The variable object that refers this node.
334
335 """
336 return self._variable()
337
338 def set_creator(self, creator):
339 """Sets a :class:`~chainer.Function` object that created this node.
340
341 This method is equivalent to ``self.creator = creator``. A
342 :class:`~chainer.FunctionNode` object can also be passed.
343
344 Args:
345 creator (Function or FunctionNode): Function that has created this
346 variable.
347
348 """
349 self.creator = creator
350
351 def set_creator_node(self, creator_node):
352 """Sets a :class:`~chainer.FunctionNode` object that created this node.
353
354 This method is equivalent to ``self.creator_node = creator_node``. A
355 :class:`~chainer.Function` object can also be passed, in which case the
356 :attr:`Function.node <chainer.Function.node>` attribute is used.
357
358 Args:
359 creator_node (FunctionNode or Function): Function node that has
360 this variable as an output.
361
362 """
363 self.creator_node = creator_node
364
365 def unchain(self):
366 """Deletes the reference to the creator of this variable node.
367
368 This method is equivalent to ``self.creator_node = None``.
369
370 """
371 self.creator_node = None
372
373 def retain_data(self):
374 """Lets the node hold a reference to the underlying data array.
375
376 This method gets the data array of the corresponding variable and keeps
377 it. If the weak reference to the corresponding variable is dead, it
378 raises an error.
379
380 """
381 variable = self._variable()
382 if variable is not None:
383 self.data = variable.data
384 else:
385 raise RuntimeError('cannot retain variable data: the variable has '
386 'been already released')
387
388 def _update_data_info(self, d):
389 if d is None:
390 self.dtype = None
391 self.shape = None
392 else:
393 self.dtype = d.dtype
394 self.shape = d.shape
395
396 # If the node has a reference to data, update it as well.
397 if self._data is not None:
398 self._data = d
399
400 def _check_old_style_gradient(self):
401 if self._old_style_grad_generator is not None:
402 raise RuntimeError(
403 'cannot twice-differentiate an old style Function "%s"' %
404 self._old_style_grad_generator)
405
406
407 def _create_variable(data, name, grad, requires_grad):
408 return Variable(
409 data, name=name, grad=grad, requires_grad=requires_grad)
410
411
412 class Variable(object):
413
414 """__init__(data=None, *, name=None, grad=None, requires_grad=True)
415
416 Array with a structure to keep track of computation.
417
418 Every variable holds a data array of type either :class:`numpy.ndarray` or
419 :class:`cupy.ndarray`.
420
421 A variable object holds a data array and a
422 :class:`~chainer.variable.VariableNode` object of
423 a computational graph. If the variable is constructed by the user, the node
424 is *root* and does not hold any parent. If the variable is constructed by a
425 :class:`~chainer.FunctionNode` object, the node holds a reference to its
426 parent called :attr:`creator_node`.
427 This reference is used in backpropagation to backtrack the graph.
428
429 Users can disable (resp. enable) this chaining behavior by calling
430 :func:`~chainer.no_backprop_mode` (resp.
431 :func:`~chainer.force_backprop_mode`).
432 In the former context, a variable never creates a computational graph,
433 whereas in the latter context, it is forced to create.
434
435 .. warning::
436
437 ``volatile`` argument is not supported anymore since v2.
438 Instead, use :func:`chainer.no_backprop_mode`.
439
440 Args:
441 data (numpy.ndarray or cupy.ndarray): Initial data array.
442 name (str): Name of the variable.
443 grad (numpy.ndarray or cupy.ndarray): Initial gradient array.
444 requires_grad (bool): Boolean indicating whether ``grad`` will be set
445 in backward calculation.
446
447 """ # NOQA
448
449 def __init__(self, data=None, **kwargs):
450 argument.check_unexpected_kwargs(
451 kwargs, volatile='volatile argument is not supported anymore. '
452 'Use chainer.using_config')
453 name, grad, requires_grad \
454 = argument.parse_kwargs(
455 kwargs, ('name', None), ('grad', None),
456 ('requires_grad', True))
457
458 if (data is not None and
459 not isinstance(data, chainer.get_array_types())):
460 msg = '''numpy.ndarray or cuda.ndarray are expected.
461 Actual: {0}'''.format(type(data))
462 raise TypeError(msg)
463
464 # Use a list as a data structure to hold the data array indirectly to
465 # abstract its initialized/uninitialized state.
466 self._data = [data]
467 self._requires_grad = requires_grad
468 self._node = VariableNode(self, name)
469 self._grad_var = None if grad is None else Variable(grad)
470 self._loss_scale = None
471
472 def __copy__(self):
473 return self._copy_to(Variable())
474
475 def _copy_to(self, target):
476 target.__dict__ = copy.copy(self.__dict__)
477 target._node = VariableNode(target, self.name)
478 return target
479
480 def __reduce__(self):
481 return _create_variable, (self.data, self.name, self.grad,
482 self._requires_grad)
483
484 def __repr__(self):
485 return variable_repr(self)
486
487 def __str__(self):
488 return variable_str(self)
489
490 @property
491 def name(self):
492 return self._node.name
493
494 @name.setter
495 def name(self, n):
496 self._node.name = n
497
498 def summary(self):
499 if self.name:
500 return '<variable %s>' % self.name
501 else:
502 return '<variable at 0x%x>' % id(self)
503
504 def debug_print(self):
505 """Display a summary of the stored data and location of the Variable"""
506
507 msg = """{summary}
508 - device: {device}
509 - backend: {backend}
510 - shape: {shape}
511 - dtype: {dtype}
512 - statistics: {stats}
513 - grad: {grad}"""
514
515 stats_msg = 'mean={0:.8f}, std={1:.8f}'
516
517 data = self.data
518 with cuda.get_device_from_array(data) as dev:
519 xp = numpy if int(dev) == -1 else cuda.cupy
520
521 if data is None:
522 # `data` can be `None` if constructed without any arguments
523 device = None
524 backend = None
525 stats = None
526 else:
527 device = getattr(data, 'device', 'CPU')
528 backend = type(data)
529 stats = stats_msg.format(float(xp.mean(data)),
530 float(xp.std(data)))
531 shape = getattr(data, 'shape', None)
532 dtype = getattr(data, 'dtype', None)
533
534 if self.grad is None:
535 grad = None
536 elif xp.all(self.grad == 0):
537 grad = 0
538 else:
539 grad = stats_msg.format(float(xp.mean(self.grad)),
540 float(xp.std(self.grad)))
541
542 return msg.format(summary=self.summary(), device=device,
543 backend=backend, shape=shape, dtype=dtype,
544 stats=stats, grad=grad)
545
546 def __pos__(self):
547 return self
548
549 def __len__(self):
550 """Returns the first dimension of the data array.
551
552 Returns:
553 int: Number of the first dimension of the data array.
554
555 """
556 return len(self.data)
557
558 @property
559 def label(self):
560 """Short text that represents the variable."""
561 return self._node.label
562
563 @property
564 def creator(self):
565 """Function implementation that created this variable.
566
567 When this variable has been created by an old-style function (i.e., it
568 is implemented as a subclass of :class:`Function`), this property
569 returns that :class:`Function` object.
570
571 When this variable has been created by a new-style function (i.e., it
572 is implemented as a subclass of :class:`FunctionNode` class), this
573 property returns that node object.
574
575 """
576 return self._node.creator
577
578 @creator.setter
579 def creator(self, func):
580 self._node.creator = func
581
582 @property
583 def creator_node(self):
584 """:class:`FunctionNode` object that created this variable.
585
586 This property has a setter to which ``None`` can be set. Setting
587 ``None`` to this property is equivalent to call :meth:`unchain`;
588 it purges the variable from the function that created this variable.
589
590 The setter also accepts the original :class:`FunctionNode` object that
591 created this variable. For example, you can once set ``None`` to this
592 property and then set the original value again.
593
594 .. note::
595 Setting an irrelevant :meth:`FunctionNode` object does not emit any
596 error immediately, whereas the behavior is undefined. Do not set
597 a :meth:`FunctionNode` object that did not create this variable
598 object.
599
600 """
601 return self._node._creator_node
602
603 @creator_node.setter
604 def creator_node(self, func):
605 self._node.creator_node = func
606
607 @property
608 def array(self):
609 """The underlying data array.
610
611 It is either :class:`numpy.ndarray` or :class:`cupy.ndarray` object,
612 or ``None`` if the variable is in an uninitialized state.
613
614 """
615 return self._data[0]
616
617 @array.setter
618 def array(self, d):
619 self._data[0] = d
620 self._node._update_data_info(d)
621
622 @property
623 def data(self):
624 """The underlying data array (equivalent to :attr:`array`).
625
626 Note that using this attribute directly is discouraged; use
627 :attr:`array` instead. Using :attr:`array`, you can find an error
628 earlier when your code mixes up Variable and ndarray because
629 ndarray does not have an attribute ``.array`` while it has
630 ``.data``.
631
632 """
633 return self._data[0]
634
635 @data.setter
636 def data(self, d):
637 self._data[0] = d
638 self._node._update_data_info(d)
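
    # A small illustration of why ``.array`` is preferred over ``.data``
    # (sketch, assuming a CPU/NumPy setup):
    #
    #     >>> import numpy as np, chainer
    #     >>> v = chainer.Variable(np.zeros(3, dtype=np.float32))
    #     >>> v.array is v.data        # both expose the same underlying ndarray
    #     True
    #     >>> np.zeros(3).data         # a raw ndarray also has ``.data``...
    #     <memory at 0x...>
    #     >>> np.zeros(3).array        # ...but no ``.array``, so mix-ups fail fast
    #     Traceback (most recent call last):
    #       ...
    #     AttributeError: 'numpy.ndarray' object has no attribute 'array'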
639
640 @property
641 def grad(self):
642 """Gradient array of this variable.
643
644 Note that this property returns the underlying array of the gradient
645 variable instead of the gradient variable itself; to get/set
646 gradient variable, use :attr:`grad_var` instead.
647
648 """
649 gv = self._grad_var
650 return None if gv is None else gv.data
651
652 @grad.setter
653 def grad(self, g):
654 self.grad_var = None if g is None else Variable(g)
655
656 @property
657 def grad_var(self):
658 """Gradient variable."""
659 return self._grad_var
660
661 @grad_var.setter
662 def grad_var(self, g):
663 if g is not None:
664 _check_grad_type(None, self, g.data)
665 self._grad_var = g
666
667 @property
668 def shape(self):
669 return self.data.shape
670
671 @property
672 def ndim(self):
673 return self.data.ndim
674
675 @property
676 def size(self):
677 return self.data.size
678
679 @property
680 def dtype(self):
681 return self.data.dtype
682
683 @property
684 def rank(self):
685 return self._node.rank
686
687 @property
688 def node(self):
689 return self._node
690
691 @property
692 def requires_grad(self):
693 """It indicates that ``grad`` will be set in backward calculation."""
694 return self._requires_grad
695
696 @property
697 def T(self):
698 """Transposition of this variable."""
699 return chainer.functions.transpose(self)
700
701 def to_cpu(self):
702 """Copies the data and gradient arrays to CPU."""
703
704 data = self.data
705 if data is None:
706 return
707
708 if isinstance(data, cuda.ndarray):
709 # cupy.ndarray to numpy.ndarray
710 self._data = [cuda.to_cpu(data)]
711 elif isinstance(data, intel64.mdarray):
712 # ideep.mdarray to numpy.ndarray
713 self._data = [numpy.array(data)]
714
715 if self._grad_var is not None:
716 self._grad_var.to_cpu()
717 # ensure that the node tracks the device migration
718 node = self._node
719 if node._data is not None:
720 node.retain_data()
721
722 def to_gpu(self, device=None):
723 """Copies the data and gradient arrays to specified GPU.
724
725 Args:
726 device: Target device specifier. If omitted, the current device is
727 used.
728
729 """
730 if self.data is None:
731 self._data = [None] # Renew placeholder to break sharing
732 else:
733 self._data = [cuda.to_gpu(self.data, device)]
734 if self._grad_var is not None:
735 self._grad_var.to_gpu(device)
736 # ensure that the node tracks the device migration
737 node = self._node
738 if node._data is not None:
739 node.retain_data()
740
741 def to_intel64(self):
742 """Copies the data and gradient arrays to intel64 specific mdarray.
743
744 If the array is not suited for intel64, it will be converted to
745 :class:`numpy.ndarray`.
746 """
747 intel64.check_ideep_available()
748 data = self.data
749 if data is not None:
750 if isinstance(data, numpy.ndarray):
751 # numpy.ndarray to ideep
752 self._data = [
753 intel64.ideep.array(
754 data, itype=intel64.ideep.wgt_array)]
755 elif isinstance(data, cuda.ndarray):
756 # cupy.ndarray to ideep
757 self._data = [
758 intel64.ideep.array(
759 data.get(), itype=intel64.ideep.wgt_array)]
760 if self._grad_var is not None:
761 self._grad_var.to_intel64()
762 # ensure that the node tracks the device migration
763 node = self._node
764 if node._data is not None:
765 node.retain_data()
766
767 def cleargrad(self):
768 """Clears the gradient array."""
769 self._grad_var = None
770
771 def zerograd(self):
772 """Initializes the gradient array by zeros.
773
774 Note that the gradient variable is unchained from the computational
775 graph by this method because this operation breaks the backprop
776 validity.
777
778 .. deprecated:: v1.15
779 Use :meth:`cleargrad` instead.
780
781 """
782 warnings.warn(
783 'Variable.zerograd is deprecated. Use Variable.cleargrad instead.',
784 DeprecationWarning)
785
786 if self.data is None:
787 return
788
789 with cuda.get_device_from_array(self.data) as dev:
790 gv = self._grad_var
791 if gv is None:
792 xp = numpy if dev.id == -1 else cuda.cupy
793 self.grad = xp.zeros_like(self.data)
794 else:
795 gv.unchain()
796 gv.data.fill(0)
797
798 def copydata(self, var):
799         """Copies the data array from a given source variable.
800 
801         This method copies the data array from the given variable to this variable.
802 The copy is done even if the arrays reside on different devices,
803 including across the host and a GPU device. If this variable has an
804 uninitialized data array, this method initializes it by the data array
805 of the given variable. Similarly, if the given variable has an
806 uninitialized data array, this method initializes it by the data array
807 of this variable (``self``). If both are uninitialized, this method
808 does nothing.
809
810 Args:
811 var (Variable): Source variable.
812
813 """
814 src = var.data
815 dst = self.data
816 if src is None:
817 if dst is None:
818 return
819 var.initialize(self.shape)
820 src = var.data
821 elif dst is None:
822 self.initialize(src.shape)
823 dst = self.data
824 src_xp = cuda.get_array_module(src)
825 dst_xp = cuda.get_array_module(dst)
826 if dst_xp is src_xp:
827 dst_xp.copyto(dst, src)
828 elif dst_xp is numpy:
829 dst_xp.copyto(dst, src.get())
830 else:
831 dst.set(src)
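
    # Usage sketch for ``copydata`` (CPU-only for brevity; the same call also
    # works across CPU/GPU boundaries):
    #
    #     >>> import numpy as np, chainer
    #     >>> a = chainer.Variable(np.zeros(3, dtype=np.float32))
    #     >>> b = chainer.Variable(np.arange(3, dtype=np.float32))
    #     >>> a.copydata(b)
    #     >>> a.array
    #     array([0., 1., 2.], dtype=float32)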
832
833 def addgrad(self, var):
834         """Accumulates the gradient array from a given source variable.
835
836 This method adds the gradient of a given variable to the gradient of
837         this variable. The accumulation is done even across the host and
838 different devices. If this variable has uninitialized data/grad arrays,
839 this method initializes it with the shape of the given variable and
840 then accumulates the gradient.
841
842 Args:
843 var (Variable): Source variable.
844
845 """
846 src = var._grad_var
847 if src is None:
848 return
849
850 if self.data is None:
851 self.initialize(var.shape)
852 dst = self._grad_var
853
854 src_dev = cuda.get_device_from_array(src.data)
855 dst_dev = cuda.get_device_from_array(self.data)
856
857 if src_dev.id != dst_dev.id:
858 src = chainer.functions.copy(src, dst_dev.id)
859 self._grad_var = src if dst is None else src + dst
860
861 def set_creator(self, gen_func):
862 """Notifies the variable that the given function is its creator.
863
864 Args:
865 gen_func (Function): Function object that creates this variable as
866 one of its outputs.
867
868 """
869 self._node.set_creator(gen_func)
870
871 def set_creator_node(self, fnode):
872 """Notifies the variable that the given node is its creator.
873
874 Args:
875 fnode (FunctionNode): Function node that has this variable as an
876 output.
877
878 """
879 self._node.set_creator_node(fnode)
880
881 def backward(self, retain_grad=False, enable_double_backprop=False,
882 loss_scale=None):
883 """Runs error backpropagation (a.k.a.\\ backprop) from this variable.
884
885 On backprop,
886 :meth:`FunctionNode.backward() <chainer.FunctionNode.backward>`
887 is called on each :class:`~chainer.FunctionNode` object appearing in
888 the backward graph starting from this variable.
889 The backward graph is represented by backward
890 references from variable nodes to their creators, and from function
891 nodes to their input variable nodes. The backprop stops at all root
892 nodes. Some function nodes set ``None`` as gradients of some inputs,
893 where further backprop does not take place at such inputs.
894
895 This method uses :data:`grad` as the initial error array. User can
896 manually set a gradient array before calling this method.
897 If the shape of :data:`data` is ``()`` (i.e., it is scalar) and
898 :data:`grad` is ``None``, then this method automatically complements
899 1.0 as the initial error. This is useful on starting backprop from
900 some scalar loss value.
901
902 From v3, this method supports *differentiable backprop* (a.k.a. double
903 backprop, grad of grads). To enable it, pass
904 ``enable_double_backprop=True``.
905
906 Args:
907 retain_grad (bool): If ``True``, the gradient arrays of all
908 intermediate variables are kept.
909 Otherwise, :data:`~chainer.Variable.grad` of the
910 intermediate variables are set to ``None`` on appropriate
911 timing, which may reduce the maximum memory consumption.
912
913 In most cases of training some models, the purpose of backprop
914 is to compute gradients of parameters, not of all variables,
915 and therefore it is recommended to set this flag ``False``.
916 enable_double_backprop (bool): *(Added in v3.0)* If ``True``,
917 computational trace of the whole backpropagation procedure is
918 recorded to the computational graph so that one can further do
919 backpropagation from the resulting gradients. Note that
920 enabling it results in larger memory consumption needed to
921 store the gradients w.r.t intermediate variables that are
922 required for the second gradient computation.
923             loss_scale (float): Loss scaling factor. Loss scaling is a useful
924                 technique to mitigate the vanishing gradient issue that tends to
925                 happen when a low-precision data type like float16 is used during
926                 training. If you set a loss scaling factor, gradients of loss
927 values are to be multiplied by the factor before backprop
928 starts. The factor is propagated to whole gradients in a
929 computational graph along the backprop. The gradients of
930 parameters are divided by the factor just before the parameters
931 are to be updated.
932 """
933 with chainer.using_config('enable_backprop', enable_double_backprop):
934 self._backward_main(retain_grad, loss_scale)
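
    # Typical ``backward`` call on a scalar loss (sketch, CPU/NumPy setup):
    #
    #     >>> import numpy as np, chainer
    #     >>> import chainer.functions as F
    #     >>> x = chainer.Variable(np.array([1., 2., 3.], dtype=np.float32))
    #     >>> loss = F.sum(x * x)      # 0-dimensional loss variable
    #     >>> loss.backward()          # initial error is complemented as 1.0
    #     >>> x.grad
    #     array([2., 4., 6.], dtype=float32)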
935
936 def _backward_main(self, retain_grad, loss_scale):
937 self._node._check_old_style_gradient()
938 if self.creator_node is None:
939 return
940 initial_device = None
941 if cuda.available and isinstance(self.data, cuda.ndarray):
942 try:
943 initial_device = cuda.Device()
944 except cuda.cupy.cuda.runtime.CUDARuntimeError as e:
945 if e.status != 38: # cudaErrorNoDevice
946 raise
947
948 is_debug = chainer.is_debug()
949
950 cand_funcs = []
951 seen_set = set()
952 grads = {}
953
954 # Initialize error by 1, if this is a loss variable
955 if self.data.size == 1 and self._grad_var is None:
956 if self.data.ndim != 0:
957 warnings.warn(
958 'Treating a scalar as a variable with only one element'
959 ' in Variable.backward is deprecated. A scalar variable'
960 ' must be a 0-dimensional array. Apply'
961 ' chainer.functions.squeeze to obtain a scalar variable.'
962 ' If the size of this variable accidentally becomes one,'
963 ' set zero to grad.',
964 DeprecationWarning)
965 with cuda.get_device_from_array(self.data) as device:
966 if device is cuda.DummyDevice:
967 self.grad = numpy.ones_like(self.data)
968 else:
969 self.grad = cuda.cupy.ones_like(self.data)
970 if loss_scale is not None:
971 self.grad *= loss_scale
972 grads[self._node] = self._grad_var
973
974 def add_cand(cand):
975 if cand not in seen_set:
976 # Negate since heapq is min-heap
977 heapq.heappush(cand_funcs, (-cand.rank, len(seen_set), cand))
978 seen_set.add(cand)
979
980 add_cand(self.creator_node)
981
982 def get_grad(node):
983 if node is None:
984 return None
985 if node in grads:
986 return grads[node]
987 return node.grad_var
988
989 def set_grad(node, value):
990 if node is None:
991 return
992 if node in grads:
993 grads[node] = value
994 var = node.get_variable()
995 if var is not None:
996 var._grad_var = value
997
998 while cand_funcs:
999 _, _, func = heapq.heappop(cand_funcs)
1000 inputs = func.inputs
1001 target_input_indexes = tuple([
1002 i for i, x in enumerate(inputs) if x.requires_grad
1003 ])
1004 if not target_input_indexes:
1005 continue
1006 outputs = [y() for y in func.outputs] # access via weak ref
1007
1008 in_data = tuple([x.data for x in inputs])
1009             # We need to calculate the accumulated value of out_grad here
1010             # because out_grad is used in the backward calculation.
1011 for y in outputs:
1012 grad = get_grad(y)
1013 if isinstance(grad, tuple):
1014 grad = chainer.functions.add(*grad)
1015 set_grad(y, grad)
1016 out_grad = tuple([get_grad(y) for y in outputs])
1017 out_grad_data = tuple(
1018 [None if g is None else g.data for g in out_grad])
1019 hooks = chainer.get_function_hooks()
1020 if func._n_local_function_hooks != 0:
1021 hooks = collections.OrderedDict(hooks)
1022 hooks.update(func.local_function_hooks)
1023 hooks = hooks.values() # avoid six for performance
1024
1025 cuda.get_device_from_array(*in_data).use()
1026 for hook in hooks:
1027 hook.backward_preprocess(func, in_data, out_grad_data)
1028
1029 # Collect the current input gradients.
1030 #
1031 # Note (Tokui): When the same variable is passed to multiple input
1032 # slots (e.g. an expression like ``f(x, x)``), it makes the
1033 # gradient accumulation complicated since the back-propagated
1034 # gradients w.r.t. the first and second argument should be
1035 # accumulated to the current gradient w.r.t. the same variable.
1036 # In this case, the current implementation passes the current
1037 # gradient only to the first occurrence of the variable in the
1038 # input tuple and passes ``None`` to the rest of the occurrences.
1039 # For example, when the input variables are ``(x, x)``, the
1040 # input gradient passed to the ``backward_accumulate`` method is
1041 # ``(gx, None)`` where ``gx`` is the current gradient of ``x``.
1042 # See also the docstring of ``FunctionNode.backward_accumulate``.
1043 target_inputs = [inputs[i] for i in target_input_indexes]
1044 in_grad = []
1045 for i, index_i in enumerate(target_input_indexes):
1046 x = inputs[index_i]
1047 if x in target_inputs[:i]:
1048 # Pass ``None`` for duplicated input variables except for
1049 # the first occurrence (see the comment above).
1050 gx = None
1051 elif x in grads:
1052 gx = grads[x]
1053 elif x.creator_node is None:
1054 x._check_old_style_gradient()
1055 # accumulate the gradient only if the node is a leaf
1056 gx = x.grad_var
1057 else:
1058 gx = None
1059 in_grad.append(gx)
1060 in_grad = tuple(in_grad)
1061
1062 gxs = func.backward_accumulate(
1063 target_input_indexes, out_grad, in_grad)
1064
1065 assert len(gxs) == len(in_grad)
1066 for hook in hooks:
1067 hook.backward_postprocess(func, in_data, out_grad_data)
1068
1069 if is_debug:
1070 for gx in gxs:
1071 if gx is None:
1072 continue
1073 gx_data = gx.data
1074 if gx_data.dtype.kind == 'f':
1075 cuda.get_device_from_array(gx_data).use()
1076 if cuda.get_array_module(gx_data).isnan(gx_data).any():
1077 raise RuntimeError(
1078 'NaN is detected on backward computation of '
1079 '{}'.format(func.label))
1080
1081 if not retain_grad:
1082 for y in outputs:
1083 if y is not None and y is not self.node:
1084 grads[y] = None
1085 y_var = y.get_variable_or_none()
1086 if y_var is not None:
1087 y_var._grad_var = None
1088
1089 for i, gx in enumerate(gxs):
1090 if gx is None:
1091 continue
1092
1093 x = target_inputs[i]
1094 if not x.requires_grad:
1095 continue
1096
1097 if isinstance(gx, tuple):
1098 # No need to check each data in the tuple,
1099                     # just check the new gx concatenated in
1100 # backward_accumulate().
1101 _check_grad_type(func, x, gx[0].data)
1102 else:
1103 _check_grad_type(func, x, gx.data)
1104
1105 if x in target_inputs[:i]:
1106 # Accumulate the duplicated gradients here. See the comment
1107 # above the code that builds ``in_grad``.
1108 cur_gx = grads[x]
1109 if func.lazy_grad_sum:
1110 if x.creator is None:
1111 gx = _backprop_utils.add(gx, cur_gx)
1112 grads[x] = gx
1113 else:
1114 grads[x] = _backprop_utils.concat_variable(
1115 gx, cur_gx)
1116 else:
1117 grads[x] = gx if cur_gx is None else gx + cur_gx
1118
1119 else:
1120 grads[x] = gx
1121
1122 x_var = x.get_variable_or_none()
1123 if x_var is not None:
1124 x_var._grad_var = grads[x]
1125 x_var._loss_scale = loss_scale
1126
1127 if x.creator_node is not None:
1128 add_cand(x.creator_node)
1129
1130 del gxs # to reduce memory usage
1131 if initial_device is not None:
1132 initial_device.use()
1133
1134 def reshape(self, *shape):
1135 """Returns a variable of a different shape and the same content.
1136
1137 .. seealso::
1138 :func:`chainer.functions.reshape` for full documentation,
1139            :func:`chainer.functions.reshape` for full documentation.
1140 """
1141 if len(shape) == 1 and isinstance(shape[0], (tuple, list)):
1142 shape = shape[0]
1143 return chainer.functions.reshape(self, shape)
1144
1145 def transpose(self, *axes):
1146 """Permute the dimensions of an input variable without copy.
1147
1148 .. seealso::
1149 :func:`chainer.functions.transpose` for full documentation.
1150
1151 """
1152 if len(axes) == 0:
1153 axes = None
1154 elif len(axes) == 1 and (isinstance(axes[0], (tuple, list)) or
1155 axes[0] is None):
1156 axes = axes[0]
1157 return chainer.functions.transpose(self, axes)
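
    # Quick sketch of the ``reshape``/``transpose`` helpers; both defer to
    # chainer.functions, so the results stay connected to the graph:
    #
    #     >>> import numpy as np, chainer
    #     >>> v = chainer.Variable(np.arange(6, dtype=np.float32))
    #     >>> v.reshape(2, 3).shape
    #     (2, 3)
    #     >>> v.reshape((2, 3)).transpose().shape
    #     (3, 2)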
1158
1159 def unchain(self):
1160 """Deletes the reference to the creator of this variable.
1161
1162 This method deletes the reference to the creator from the corresponding
1163 variable node. Unlike :meth:`unchain_backward`, it does not backtrack
1164 the graph.
1165
1166 This method is equivalent to ``self.creator_node = None``.
1167
1168 """
1169 self.creator_node = None
1170
1171 def unchain_backward(self):
1172 """Deletes references between variable nodes and functions backward.
1173
1174 After this method completes, intermediate variable nodes and functions
1175 that are not referenced from anywhere are deallocated by reference
1176 count GC. Also this variable itself deletes the reference to its
1177 creator function from the node, i.e. the node becomes root in the
1178 computation graph. It indicates that backprop after unchaining stops at
1179 this variable. This behavior is useful to implement truncated BPTT.
1180
1181 """
1182 cand_funcs = []
1183 seen_set = set()
1184
1185 def add_cand(cand):
1186 if cand is not None and cand not in seen_set:
1187 cand_funcs.append(cand)
1188 seen_set.add(cand)
1189
1190 add_cand(self.creator_node)
1191
1192 while cand_funcs:
1193 func = cand_funcs.pop()
1194 for var in func.inputs:
1195 add_cand(var.creator_node)
1196 func.unchain()
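
    # Sketch of truncated BPTT with ``unchain_backward`` (``model`` and
    # ``optimizer`` are hypothetical; only the unchaining pattern matters):
    #
    #     >>> # loss = model(batch)       # loss accumulated over a few steps
    #     >>> # model.cleargrads()
    #     >>> # loss.backward()
    #     >>> # loss.unchain_backward()   # cut the history; older steps get freed
    #     >>> # optimizer.update()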
1197
1198 def retain_data(self):
1199 """Lets the corresponding variable node keep the underlying array."""
1200 self._node.data = self._data[0]
1201
1202 def __lt__(self, other):
1203 raise NotImplementedError()
1204
1205 def __le__(self, other):
1206 raise NotImplementedError()
1207
1208 def __eq__(self, other):
1209 raise NotImplementedError()
1210
1211 def __ne__(self, other):
1212 raise NotImplementedError()
1213
1214 def __gt__(self, other):
1215 raise NotImplementedError()
1216
1217 def __ge__(self, other):
1218 raise NotImplementedError()
1219
1220 def __nonzero__(self):
1221 raise NotImplementedError()
1222
1223 def __bool__(self):
1224 raise NotImplementedError()
1225
1226 __array_priority__ = 200
1227 __hash__ = None
1228
1229
1230 class Parameter(Variable):
1231
1232 """Parameter variable that can be registered to a link.
1233
1234     Parameter is a subclass of :class:`Variable`. It behaves almost the same
1235     as a usual variable except that a parameter can be registered to a
1236 :class:`~chainer.Link` object just by assigning it to an attribute of
1237 the link within an :meth:`~chainer.Link.init_scope` context.
1238
1239 Parameter also supports an initialization by an initializer. It can have
1240 two initializers: one for the data array, and the other for the gradient
1241 array. The initializer only specifies the way of filling the elements of
1242 these arrays, and the shape information is specified at the initialization
1243 point.
1244
1245 When a link that the parameter has been registered to is passed to an
1246 :class:`~chainer.GradientMethod`, an update rule is set to the parameter.
1247 This update rule specifies how to update the data array of the parameter
1248 using its gradient array.
1249
1250 Args:
1251 initializer (~chainer.Initializer or numpy.ndarray or cupy.ndarray):
1252 Initializer of the data array. If ``shape`` is given, this
1253 initializer is immediately used to initialize the data array.
1254 Otherwise, if it is an array, it is immediately used as the data
1255 array, and otherwise the data array is left uninitialized and will
1256 be initialized by this initializer in :meth:`initialize`. It can
1257 also be a scalar, in which case the data array will be filled by
1258 this scalar. Note that float32 is used in this case.
1259 shape (int or tuple of int or None): Shape of the parameter. If it is
1260 ``None``, the initialization is deferred to the call of
1261 :meth:`initialize`.
1262 name (str): Name of the parameter.
1263
1264 Attributes:
1265 initializer: Initializer of the data array. It is used for
1266 initializing the data array of an uninitialized variable.
1267 update_rule: :class:`~chainer.optimizer.UpdateRule` instance that
1268 updates this variable as a parameter. This argument is set to
1269 :attr:`update_rule`.
1270
1271 """
1272
1273 initializer = None
1274 _grad_initializer = None
1275 _initial_backend = None
1276 _initial_device = None
1277
1278 def __init__(self, initializer=None, shape=None, name=None):
1279 if initializer is None:
1280 initializer = constant.NaN()
1281 elif numpy.isscalar(initializer):
1282 initializer = constant.Constant(initializer)
1283 if shape is None:
1284 if isinstance(initializer, (numpy.ndarray, cuda.ndarray)):
1285 # parameter initialized by the initial array
1286 super(Parameter, self).__init__(initializer, name=name)
1287 else:
1288 # uninitialized parameter
1289 super(Parameter, self).__init__(name=name)
1290 self.initializer = initializer
1291 dtype = getattr(initializer, 'dtype', numpy.float32)
1292 self._grad_initializer = constant.NaN(dtype)
1293 else:
1294 # parameter initialized with a given shape
1295 if isinstance(initializer, (numpy.ndarray, cuda.ndarray)):
1296 xp = cuda.get_array_module(initializer)
1297 initializer = constant.Constant(initializer)
1298 else:
1299 xp = numpy
1300 data = initializers.generate_array(initializer, shape, xp)
1301 grad = xp.full_like(data, numpy.nan)
1302 super(Parameter, self).__init__(data, name=name, grad=grad)
1303
1304 self.update_rule = None
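
    # Construction sketch (immediate vs. deferred initialization):
    #
    #     >>> import chainer
    #     >>> from chainer import initializers
    #     >>> p = chainer.Parameter(initializers.Constant(0.5), (2, 3))
    #     >>> p.shape
    #     (2, 3)
    #     >>> q = chainer.Parameter(initializers.Constant(0.5))  # shape deferred
    #     >>> q.array is None
    #     True
    #     >>> q.initialize((4,)); q.shape
    #     (4,)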
1305
1306 def __copy__(self):
1307 return self._copy_to(Parameter())
1308
1309 def __reduce__(self):
1310 return _recover_parameter, (self.data, self.name, self.grad,
1311 self.initializer, self.update_rule)
1312
1313 def to_cpu(self):
1314 super(Parameter, self).to_cpu()
1315 if self.data is None:
1316 self._initial_backend = None
1317 self._initial_device = None
1318
1319 def to_gpu(self, device=None):
1320 super(Parameter, self).to_gpu(device)
1321 if self.data is None:
1322 if device is None:
1323 device = cuda.Device().id
1324 self._initial_backend = 'cuda'
1325 self._initial_device = device
1326
1327 def to_intel64(self):
1328 super(Parameter, self).to_intel64()
1329 if self.data is None:
1330 self._initial_backend = 'intel64'
1331 self._initial_device = None
1332
1333 def cleargrad(self):
1334 super(Parameter, self).cleargrad()
1335 if self.data is None:
1336 self._grad_initializer = None
1337
1338 def zerograd(self):
1339 super(Parameter, self).zerograd()
1340 if self.data is None:
1341 dtype = getattr(self.initializer, 'dtype', None)
1342 self._grad_initializer = initializers.Zero(dtype)
1343
1344 def initialize(self, shape):
1345 """Initializes the uninitialized variable.
1346
1347         An uninitialized variable is a variable created with the data array set to
1348 None. This method creates and initializes the data array. The shape of
1349 the variable can be left unknown until this method is called.
1350
1351 Args:
1352 shape (tuple of int): Shape of the data array.
1353
1354 """
1355 xp = numpy if self._initial_backend != 'cuda' else cuda.cupy
1356 with cuda.get_device_from_id(self._initial_device):
1357 data = initializers.generate_array(self.initializer, shape, xp)
1358
1359 ginit = self._grad_initializer
1360 grad = None if ginit is None else initializers.generate_array(
1361 ginit, shape, xp)
1362
1363 self.data = data
1364 self.grad = grad
1365
1366 # Convert the array for iDeep.
1367 if self._initial_backend == 'intel64':
1368 self.to_intel64()
1369
1370 def update(self):
1371 """Updates the data array using the gradient and the update rule.
1372
1373 This method updates the parameter using the attached update rule.
1374
1375 """
1376 if self.update_rule is not None:
1377 self.update_rule.update(self)
1378
1379
1380 def as_variable(obj):
1381 """Converts an array or a variable into :class:`~chainer.Variable`.
1382
1383 This is a convenient function to get a :class:`~chainer.Variable` object
1384 transparently from a raw array or a variable.
1385
1386 Note that this function should only be used for type consistency (i.e., to
1387     enforce the return value of an API having type :class:`~chainer.Variable`).
1388 The :class:`~chainer.Variable.requires_grad` flag is kept as is; if ``obj``
1389 is a raw array, the newly created variable has ``requires_grad = False``.
1390 In order to make a variable w.r.t. which you want to compute the gradient,
1391 you should use :class:`~chainer.Variable` directly.
1392
1393 Args:
1394 obj (numpy.ndarray or cupy.ndarray or ~chainer.Variable): An array or
1395 a variable that you want to convert to :class:`~chainer.Variable`.
1396
1397 Returns:
1398 ~chainer.Variable:
1399 A variable converted from ``obj``. If ``obj`` is a raw array, this is a
1400 new :class:`~chainer.Variable` object that wraps the array. If ``obj``
1401 is already a :class:`~chainer.Variable` object, this function returns
1402 ``obj`` as is.
1403
1404 """
1405 if isinstance(obj, Variable):
1406 return obj
1407 return Variable(obj, requires_grad=False)
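
# Sketch of ``as_variable`` semantics:
#
#     >>> import numpy as np, chainer
#     >>> v = chainer.as_variable(np.ones(2, dtype=np.float32))
#     >>> isinstance(v, chainer.Variable), v.requires_grad
#     (True, False)
#     >>> chainer.as_variable(v) is v     # already a Variable: returned as is
#     True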
1408
1409
1410 def _recover_parameter(data, name, grad, initializer, update_rule):
1411 p = Parameter(initializer=initializer, name=name)
1412 p.data = data
1413 p.grad = grad
1414 p.update_rule = update_rule
1415 return p
1416
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/chainer/variable.py b/chainer/variable.py
--- a/chainer/variable.py
+++ b/chainer/variable.py
@@ -487,6 +487,16 @@
     def __str__(self):
         return variable_str(self)
 
+    @property
+    def xp(self):
+        """Array module for this variable.
+
+        Depending on which of CPU/GPU this variable is on, this property
+        returns :mod:`numpy` or :mod:`cupy`.
+
+        """
+        return cuda.get_array_module(self)
+
     @property
     def name(self):
         return self._node.name
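
For reference, a minimal sketch of how the added `Variable.xp` property can be used once the patch above is applied (assuming a CPU/NumPy setup; a GPU-resident variable would return `cupy` instead):

```python
import numpy as np
import chainer

x = chainer.Variable(np.array([1.0, 2.0, 3.0], dtype=np.float32))
xp = x.xp                 # numpy here, cupy for data living on a GPU
y = xp.tanh(x.array)      # device-agnostic array code without calling
                          # chainer.cuda.get_array_module(x) by hand
print(xp is np)           # -> True
```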
Note that float32 is used in this case.\n shape (int or tuple of int or None): Shape of the parameter. If it is\n ``None``, the initialization is deferred to the call of\n :meth:`initialize`.\n name (str): Name of the parameter.\n\n Attributes:\n initializer: Initializer of the data array. It is used for\n initializing the data array of an uninitialized variable.\n update_rule: :class:`~chainer.optimizer.UpdateRule` instance that\n updates this variable as a parameter. This argument is set to\n :attr:`update_rule`.\n\n \"\"\"\n\n initializer = None\n _grad_initializer = None\n _initial_backend = None\n _initial_device = None\n\n def __init__(self, initializer=None, shape=None, name=None):\n if initializer is None:\n initializer = constant.NaN()\n elif numpy.isscalar(initializer):\n initializer = constant.Constant(initializer)\n if shape is None:\n if isinstance(initializer, (numpy.ndarray, cuda.ndarray)):\n # parameter initialized by the initial array\n super(Parameter, self).__init__(initializer, name=name)\n else:\n # uninitialized parameter\n super(Parameter, self).__init__(name=name)\n self.initializer = initializer\n dtype = getattr(initializer, 'dtype', numpy.float32)\n self._grad_initializer = constant.NaN(dtype)\n else:\n # parameter initialized with a given shape\n if isinstance(initializer, (numpy.ndarray, cuda.ndarray)):\n xp = cuda.get_array_module(initializer)\n initializer = constant.Constant(initializer)\n else:\n xp = numpy\n data = initializers.generate_array(initializer, shape, xp)\n grad = xp.full_like(data, numpy.nan)\n super(Parameter, self).__init__(data, name=name, grad=grad)\n\n self.update_rule = None\n\n def __copy__(self):\n return self._copy_to(Parameter())\n\n def __reduce__(self):\n return _recover_parameter, (self.data, self.name, self.grad,\n self.initializer, self.update_rule)\n\n def to_cpu(self):\n super(Parameter, self).to_cpu()\n if self.data is None:\n self._initial_backend = None\n self._initial_device = None\n\n def to_gpu(self, device=None):\n super(Parameter, self).to_gpu(device)\n if self.data is None:\n if device is None:\n device = cuda.Device().id\n self._initial_backend = 'cuda'\n self._initial_device = device\n\n def to_intel64(self):\n super(Parameter, self).to_intel64()\n if self.data is None:\n self._initial_backend = 'intel64'\n self._initial_device = None\n\n def cleargrad(self):\n super(Parameter, self).cleargrad()\n if self.data is None:\n self._grad_initializer = None\n\n def zerograd(self):\n super(Parameter, self).zerograd()\n if self.data is None:\n dtype = getattr(self.initializer, 'dtype', None)\n self._grad_initializer = initializers.Zero(dtype)\n\n def initialize(self, shape):\n \"\"\"Initializes the uninitialized variable.\n\n Uninitialized variable is a variable created with the data array set to\n None. This method creates and initializes the data array. 
The shape of\n the variable can be left unknown until this method is called.\n\n Args:\n shape (tuple of int): Shape of the data array.\n\n \"\"\"\n xp = numpy if self._initial_backend != 'cuda' else cuda.cupy\n with cuda.get_device_from_id(self._initial_device):\n data = initializers.generate_array(self.initializer, shape, xp)\n\n ginit = self._grad_initializer\n grad = None if ginit is None else initializers.generate_array(\n ginit, shape, xp)\n\n self.data = data\n self.grad = grad\n\n # Convert the array for iDeep.\n if self._initial_backend == 'intel64':\n self.to_intel64()\n\n def update(self):\n \"\"\"Updates the data array using the gradient and the update rule.\n\n This method updates the parameter using the attached update rule.\n\n \"\"\"\n if self.update_rule is not None:\n self.update_rule.update(self)\n\n\ndef as_variable(obj):\n \"\"\"Converts an array or a variable into :class:`~chainer.Variable`.\n\n This is a convenient function to get a :class:`~chainer.Variable` object\n transparently from a raw array or a variable.\n\n Note that this function should only be used for type consistency (i.e., to\n enforce the return value of an API having type :class:`~chainer.Varialbe`).\n The :class:`~chainer.Variable.requires_grad` flag is kept as is; if ``obj``\n is a raw array, the newly created variable has ``requires_grad = False``.\n In order to make a variable w.r.t. which you want to compute the gradient,\n you should use :class:`~chainer.Variable` directly.\n\n Args:\n obj (numpy.ndarray or cupy.ndarray or ~chainer.Variable): An array or\n a variable that you want to convert to :class:`~chainer.Variable`.\n\n Returns:\n ~chainer.Variable:\n A variable converted from ``obj``. If ``obj`` is a raw array, this is a\n new :class:`~chainer.Variable` object that wraps the array. 
If ``obj``\n is already a :class:`~chainer.Variable` object, this function returns\n ``obj`` as is.\n\n \"\"\"\n if isinstance(obj, Variable):\n return obj\n return Variable(obj, requires_grad=False)\n\n\ndef _recover_parameter(data, name, grad, initializer, update_rule):\n p = Parameter(initializer=initializer, name=name)\n p.data = data\n p.grad = grad\n p.update_rule = update_rule\n return p\n", "path": "chainer/variable.py"}], "after_files": [{"content": "import collections\nimport copy\nimport heapq\nimport traceback\nimport warnings\nimport weakref\n\nimport numpy\n\nimport chainer\nfrom chainer import _backprop_utils\nfrom chainer.backends import cuda\nfrom chainer.backends import intel64\nfrom chainer import initializers\nfrom chainer.initializers import constant\nfrom chainer.utils import argument\n\n\ndef _check_grad_type(func, x, gx):\n if x.data is None or gx is None:\n # ``x.data is None`` implies that the data array is not retained\n return\n if not chainer.is_arrays_compatible((gx, x.data)):\n msg = ('Type of data and grad mismatch\\ngrad: %s != data: %s' %\n (type(x.data), type(gx)))\n typ = TypeError\n elif gx.dtype != x.data.dtype:\n msg = ('Dtype of data and grad mismatch\\ngrad: %s != data: %s' %\n (x.data.dtype, gx.dtype))\n typ = TypeError\n elif gx.shape != x.data.shape:\n msg = ('Shape of data and grad mismatch\\ngrad: %s != data: %s' %\n (x.data.shape, gx.shape))\n typ = ValueError\n else:\n return\n\n detail = ''\n if func:\n detail = 'Function `{0}` ({1}) has a bug.\\n'.format(\n type(func)._impl_name, func.label)\n stack = func.stack\n if stack:\n detail += 'Stacktrace of the function is below:\\n'\n for line in traceback.format_list(func.stack):\n detail += line\n detail += '''\nPlease report this error to the issue tracker with the stack trace,\nthe information of your environment, and your script:\nhttps://github.com/chainer/chainer/issues/new.\n'''.format(type(func).__name__, func.label)\n\n raise typ(detail + msg)\n\n\ndef variable_repr(var):\n \"\"\"Return the string representation of a variable.\n\n Args:\n var (~chainer.Variable): Input Variable.\n .. seealso:: numpy.array_repr\n \"\"\"\n xp = cuda.get_array_module(var)\n if xp is numpy:\n arr = var.data\n else:\n arr = var.data.get()\n\n if var.name:\n prefix = 'variable ' + var.name\n else:\n prefix = 'variable'\n\n if arr is None:\n lst = 'None'\n elif arr.size > 0 or arr.shape == (0,):\n lst = numpy.array2string(arr, None, None, None, ', ', prefix + '(')\n else: # show zero-length shape unless it is (0,)\n lst = '[], shape=%s' % (repr(arr.shape),)\n\n return '%s(%s)' % (prefix, lst)\n\n\ndef variable_str(var):\n \"\"\"Return the string representation of a variable.\n\n Args:\n var (~chainer.Variable): Input Variable.\n .. seealso:: numpy.array_str\n \"\"\"\n xp = cuda.get_array_module(var)\n if xp is numpy:\n arr = var.data\n else:\n arr = var.data.get()\n\n if var.name:\n prefix = 'variable ' + var.name\n else:\n prefix = 'variable'\n\n if arr is None:\n lst = 'None'\n else:\n lst = numpy.array2string(arr, None, None, None, ' ', prefix + '(')\n\n return '%s(%s)' % (prefix, lst)\n\n\nclass VariableNode(object):\n\n \"\"\"Node in the backward computational graph representing a variable.\n\n This object represents a variable node in a computational graph. The node\n is used in error backpropagation (a.k.a. backprop) to determine which\n gradient to be passed to each function.\n\n A variable node is held by the corresponding :class:`~chainer.Variable`\n object, which is managed by users. 
:class:`~chainer.FunctionNode` objects\n that take the variable as an input also hold references to the variable\n node.\n\n Note that the node does not hold a reference to the corresponding data\n array in general. The data array is actually accessible by the node in the\n following cases.\n\n 1. If there exists a :class:`~chainer.Variable` object that holds a\n reference to the variable node, the variable node holds a weak reference\n to the variable object, and thus the data array is accessible via the\n weak reference.\n 2. If :meth:`retain_data` is called, the node holds a reference to the data\n array. It is mainly called by a function that needs the input or output\n data array in its backprop procedure.\n See :meth:`FunctionNode.retain_inputs()\n <chainer.FunctionNode.retain_inputs>`\n and :meth:`FunctionNode.retain_outputs()\n <chainer.FunctionNode.retain_outputs>` for more details.\n\n Users usually do not need to touch this variable node object. The\n computational graph is automatically managed by Chainer, and any interface\n that is beneficial for users is also provided by\n :class:`~chainer.Variable`.\n\n Args:\n variable (Variable): The corresponding variable object.\n name (str): Name of the variable node.\n\n Attributes:\n ~VariableNode.dtype: Data type of the data array.\n ~VariableNode.shape: Shape of the data array.\n ~VariableNode.name (str): Name of the variable node.\n\n \"\"\"\n\n _creator_node = None\n _data = None\n _rank = 0\n # Name of the Function is assigned if this variable is a gradient generated\n # by an old-style Function\n _old_style_grad_generator = None\n\n def __init__(self, variable, name, **kwargs):\n argument.check_unexpected_kwargs(\n kwargs,\n grad='unexpected keyword argument \"grad\": '\n 'pass the gradient to Variable instead'\n )\n self._variable = weakref.ref(variable)\n self.name = name\n self._requires_grad = variable.requires_grad\n\n vdata = variable.data\n self._update_data_info(vdata)\n\n @property\n def creator(self):\n \"\"\"Function object that created this variable node.\n\n When the function is implemented with the old-style API (i.e., it uses\n :class:`~chainer.Function` class),\n this property returns the :class:`~chainer.Function` object.\n The object is extracted from the :class:`~chainer.FunctionAdapter`\n object, so the returned object is not the function node, but instead\n the actual implementation of forward and backward procedures.\n\n When the function is implemented with the new-style API (i.e., it uses\n :class:`~chainer.FunctionNode` class),\n this property returns the function node\n object. In this case, the returned object is same as\n :attr:`creator_node`.\n\n .. warning::\n\n As of v3.0.0, when the creator is an old-style function, the\n following code is invalid:\n\n .. code-block:: python\n\n creator = v.creator\n v.creator = None\n ...\n v.creator = creator\n\n The point is that :class:`~chainer.FunctionNode` objects are used\n as nodes in the computational graph instead of\n :class:`~chainer.Function`, and each :class:`~chainer.Function`\n object only holds a *weak reference* to the corresponding\n :class:`~chainer.FunctionNode`.\n Since ``creator`` returns the :class:`~chainer.Function` object,\n the :class:`~chainer.FunctionNode` object is not kept by preserving\n ``creator``.\n\n The above code should be fixed as follows.\n\n .. 
code-block:: python\n\n creator_node = v.creator_node\n v.creator_node = None\n ...\n v.creator_node = creator_node\n\n \"\"\"\n node = self._creator_node\n if node is None:\n return None\n\n if isinstance(node, chainer.function.FunctionAdapter):\n return node.function\n return node\n\n @creator.setter\n def creator(self, func):\n self.creator_node = func\n\n @property\n def creator_node(self):\n \"\"\"Function node that has this variable as an output.\n\n See :class:`~chainer.FunctionNode` for the definition of a function\n node.\n\n \"\"\"\n return self._creator_node\n\n @creator_node.setter\n def creator_node(self, func):\n if isinstance(func, chainer.Function):\n func = func.node\n self._creator_node = func\n if func is not None:\n self._rank = func.rank + 1\n\n @property\n def data(self):\n \"\"\"Data array of the corresponding variable.\n\n If the data is not available, it returns ``None``.\n\n \"\"\"\n return self._data\n\n @data.setter\n def data(self, d):\n self._data = d\n self._update_data_info(d)\n\n @property\n def grad(self):\n \"\"\"Gradient array of the corresponding variable.\n\n If the variable is not available, it returns ``None``.\n\n \"\"\"\n var = self._variable()\n return None if var is None else var.grad\n\n @property\n def grad_var(self):\n \"\"\"Gradient variable of the corresponding variable.\n\n If the corresponding variable is not available, it return ``None``.\n\n \"\"\"\n var = self._variable()\n return None if var is None else var._grad_var\n\n @property\n def label(self):\n \"\"\"Short text that represents the variable node.\"\"\"\n if self.shape == ():\n return str(self.dtype)\n return '(%s), %s' % (', '.join(map(str, self.shape)),\n str(self.dtype))\n\n @property\n def rank(self):\n return self._rank\n\n @property\n def requires_grad(self):\n \"\"\"It indicates that ``grad`` will be set in backward calculation.\"\"\"\n return self._requires_grad\n\n def get_variable(self):\n \"\"\"Returns the corresponding :class:`~chainer.Variable` object.\n\n VariableNode object holds a weak reference of the variable object. If\n the reference is alive, it is returned by this property. Otherwise,\n this property creates a new :class:`~chainer.Variable` object from\n this node object and returns it.\n\n Returns:\n Variable: The variable object that refers this node.\n\n \"\"\"\n var = self._variable()\n if var is not None:\n return var\n\n var = Variable(self.data, name=self.name,\n requires_grad=self._requires_grad)\n var._node = self\n return var\n\n def get_variable_or_none(self):\n \"\"\"Returns the holding :class:`~chainer.Variable` object or ``None``.\n\n VariableNode object holds a weak reference of the variable object.If\n the reference is alive, it is returned by this property. Otherwise,\n returns ``None``.\n\n Returns:\n Variable: The variable object that refers this node.\n\n \"\"\"\n return self._variable()\n\n def set_creator(self, creator):\n \"\"\"Sets a :class:`~chainer.Function` object that created this node.\n\n This method is equivalent to ``self.creator = creator``. A\n :class:`~chainer.FunctionNode` object can also be passed.\n\n Args:\n creator (Function or FunctionNode): Function that has created this\n variable.\n\n \"\"\"\n self.creator = creator\n\n def set_creator_node(self, creator_node):\n \"\"\"Sets a :class:`~chainer.FunctionNode` object that created this node.\n\n This method is equivalent to ``self.creator_node = creator_node``. 
A\n :class:`~chainer.Function` object can also be passed, in which case the\n :attr:`Function.node <chainer.Function.node>` attribute is used.\n\n Args:\n creator_node (FunctionNode or Function): Function node that has\n this variable as an output.\n\n \"\"\"\n self.creator_node = creator_node\n\n def unchain(self):\n \"\"\"Deletes the reference to the creator of this variable node.\n\n This method is equivalent to ``self.creator_node = None``.\n\n \"\"\"\n self.creator_node = None\n\n def retain_data(self):\n \"\"\"Lets the node hold a reference to the underlying data array.\n\n This method gets the data array of the corresponding variable and keeps\n it. If the weak reference to the corresponding variable is dead, it\n raises an error.\n\n \"\"\"\n variable = self._variable()\n if variable is not None:\n self.data = variable.data\n else:\n raise RuntimeError('cannot retain variable data: the variable has '\n 'been already released')\n\n def _update_data_info(self, d):\n if d is None:\n self.dtype = None\n self.shape = None\n else:\n self.dtype = d.dtype\n self.shape = d.shape\n\n # If the node has a reference to data, update it as well.\n if self._data is not None:\n self._data = d\n\n def _check_old_style_gradient(self):\n if self._old_style_grad_generator is not None:\n raise RuntimeError(\n 'cannot twice-differentiate an old style Function \"%s\"' %\n self._old_style_grad_generator)\n\n\ndef _create_variable(data, name, grad, requires_grad):\n return Variable(\n data, name=name, grad=grad, requires_grad=requires_grad)\n\n\nclass Variable(object):\n\n \"\"\"__init__(data=None, *, name=None, grad=None, requires_grad=True)\n\n Array with a structure to keep track of computation.\n\n Every variable holds a data array of type either :class:`numpy.ndarray` or\n :class:`cupy.ndarray`.\n\n A variable object holds a data array and a\n :class:`~chainer.variable.VariableNode` object of\n a computational graph. If the variable is constructed by the user, the node\n is *root* and does not hold any parent. If the variable is constructed by a\n :class:`~chainer.FunctionNode` object, the node holds a reference to its\n parent called :attr:`creator_node`.\n This reference is used in backpropagation to backtrack the graph.\n\n Users can disable (resp. enable) this chaining behavior by calling\n :func:`~chainer.no_backprop_mode` (resp.\n :func:`~chainer.force_backprop_mode`).\n In the former context, a variable never creates a computational graph,\n whereas in the latter context, it is forced to create.\n\n .. warning::\n\n ``volatile`` argument is not supported anymore since v2.\n Instead, use :func:`chainer.no_backprop_mode`.\n\n Args:\n data (numpy.ndarray or cupy.ndarray): Initial data array.\n name (str): Name of the variable.\n grad (numpy.ndarray or cupy.ndarray): Initial gradient array.\n requires_grad (bool): Boolean indicating whether ``grad`` will be set\n in backward calculation.\n\n \"\"\" # NOQA\n\n def __init__(self, data=None, **kwargs):\n argument.check_unexpected_kwargs(\n kwargs, volatile='volatile argument is not supported anymore. 
'\n 'Use chainer.using_config')\n name, grad, requires_grad \\\n = argument.parse_kwargs(\n kwargs, ('name', None), ('grad', None),\n ('requires_grad', True))\n\n if (data is not None and\n not isinstance(data, chainer.get_array_types())):\n msg = '''numpy.ndarray or cuda.ndarray are expected.\nActual: {0}'''.format(type(data))\n raise TypeError(msg)\n\n # Use a list as a data structure to hold the data array indirectly to\n # abstract its initialized/uninitialized state.\n self._data = [data]\n self._requires_grad = requires_grad\n self._node = VariableNode(self, name)\n self._grad_var = None if grad is None else Variable(grad)\n self._loss_scale = None\n\n def __copy__(self):\n return self._copy_to(Variable())\n\n def _copy_to(self, target):\n target.__dict__ = copy.copy(self.__dict__)\n target._node = VariableNode(target, self.name)\n return target\n\n def __reduce__(self):\n return _create_variable, (self.data, self.name, self.grad,\n self._requires_grad)\n\n def __repr__(self):\n return variable_repr(self)\n\n def __str__(self):\n return variable_str(self)\n\n @property\n def xp(self):\n \"\"\"Array module for this variable.\n\n Depending on which of CPU/GPU this variable is on, this property\n returns :mod:`numpy` or :mod:`cupy`.\n\n \"\"\"\n return cuda.get_array_module(self)\n\n @property\n def name(self):\n return self._node.name\n\n @name.setter\n def name(self, n):\n self._node.name = n\n\n def summary(self):\n if self.name:\n return '<variable %s>' % self.name\n else:\n return '<variable at 0x%x>' % id(self)\n\n def debug_print(self):\n \"\"\"Display a summary of the stored data and location of the Variable\"\"\"\n\n msg = \"\"\"{summary}\n- device: {device}\n- backend: {backend}\n- shape: {shape}\n- dtype: {dtype}\n- statistics: {stats}\n- grad: {grad}\"\"\"\n\n stats_msg = 'mean={0:.8f}, std={1:.8f}'\n\n data = self.data\n with cuda.get_device_from_array(data) as dev:\n xp = numpy if int(dev) == -1 else cuda.cupy\n\n if data is None:\n # `data` can be `None` if constructed without any arguments\n device = None\n backend = None\n stats = None\n else:\n device = getattr(data, 'device', 'CPU')\n backend = type(data)\n stats = stats_msg.format(float(xp.mean(data)),\n float(xp.std(data)))\n shape = getattr(data, 'shape', None)\n dtype = getattr(data, 'dtype', None)\n\n if self.grad is None:\n grad = None\n elif xp.all(self.grad == 0):\n grad = 0\n else:\n grad = stats_msg.format(float(xp.mean(self.grad)),\n float(xp.std(self.grad)))\n\n return msg.format(summary=self.summary(), device=device,\n backend=backend, shape=shape, dtype=dtype,\n stats=stats, grad=grad)\n\n def __pos__(self):\n return self\n\n def __len__(self):\n \"\"\"Returns the first dimension of the data array.\n\n Returns:\n int: Number of the first dimension of the data array.\n\n \"\"\"\n return len(self.data)\n\n @property\n def label(self):\n \"\"\"Short text that represents the variable.\"\"\"\n return self._node.label\n\n @property\n def creator(self):\n \"\"\"Function implementation that created this variable.\n\n When this variable has been created by an old-style function (i.e., it\n is implemented as a subclass of :class:`Function`), this property\n returns that :class:`Function` object.\n\n When this variable has been created by a new-style function (i.e., it\n is implemented as a subclass of :class:`FunctionNode` class), this\n property returns that node object.\n\n \"\"\"\n return self._node.creator\n\n @creator.setter\n def creator(self, func):\n self._node.creator = func\n\n @property\n def 
creator_node(self):\n \"\"\":class:`FunctionNode` object that created this variable.\n\n This property has a setter to which ``None`` can be set. Setting\n ``None`` to this property is equivalent to call :meth:`unchain`;\n it purges the variable from the function that created this variable.\n\n The setter also accepts the original :class:`FunctionNode` object that\n created this variable. For example, you can once set ``None`` to this\n property and then set the original value again.\n\n .. note::\n Setting an irrelevant :meth:`FunctionNode` object does not emit any\n error immediately, whereas the behavior is undefined. Do not set\n a :meth:`FunctionNode` object that did not create this variable\n object.\n\n \"\"\"\n return self._node._creator_node\n\n @creator_node.setter\n def creator_node(self, func):\n self._node.creator_node = func\n\n @property\n def array(self):\n \"\"\"The underlying data array.\n\n It is either :class:`numpy.ndarray` or :class:`cupy.ndarray` object,\n or ``None`` if the variable in in an uninitialized state.\n\n \"\"\"\n return self._data[0]\n\n @array.setter\n def array(self, d):\n self._data[0] = d\n self._node._update_data_info(d)\n\n @property\n def data(self):\n \"\"\"The underlying data array (equivalent to :attr:`array`).\n\n Note that using this attribute directly is discouraged; use\n :attr:`array` instead. Using :attr:`array`, you can find an error\n earlier when your code mixes up Variable and ndarray because\n ndarray does not have an attribute ``.array`` while it has\n ``.data``.\n\n \"\"\"\n return self._data[0]\n\n @data.setter\n def data(self, d):\n self._data[0] = d\n self._node._update_data_info(d)\n\n @property\n def grad(self):\n \"\"\"Gradient array of this variable.\n\n Note that this property returns the underlying array of the gradient\n variable instead of the gradient variable itself; to get/set\n gradient variable, use :attr:`grad_var` instead.\n\n \"\"\"\n gv = self._grad_var\n return None if gv is None else gv.data\n\n @grad.setter\n def grad(self, g):\n self.grad_var = None if g is None else Variable(g)\n\n @property\n def grad_var(self):\n \"\"\"Gradient variable.\"\"\"\n return self._grad_var\n\n @grad_var.setter\n def grad_var(self, g):\n if g is not None:\n _check_grad_type(None, self, g.data)\n self._grad_var = g\n\n @property\n def shape(self):\n return self.data.shape\n\n @property\n def ndim(self):\n return self.data.ndim\n\n @property\n def size(self):\n return self.data.size\n\n @property\n def dtype(self):\n return self.data.dtype\n\n @property\n def rank(self):\n return self._node.rank\n\n @property\n def node(self):\n return self._node\n\n @property\n def requires_grad(self):\n \"\"\"It indicates that ``grad`` will be set in backward calculation.\"\"\"\n return self._requires_grad\n\n @property\n def T(self):\n \"\"\"Transposition of this variable.\"\"\"\n return chainer.functions.transpose(self)\n\n def to_cpu(self):\n \"\"\"Copies the data and gradient arrays to CPU.\"\"\"\n\n data = self.data\n if data is None:\n return\n\n if isinstance(data, cuda.ndarray):\n # cupy.ndarray to numpy.ndarray\n self._data = [cuda.to_cpu(data)]\n elif isinstance(data, intel64.mdarray):\n # ideep.mdarray to numpy.ndarray\n self._data = [numpy.array(data)]\n\n if self._grad_var is not None:\n self._grad_var.to_cpu()\n # ensure that the node tracks the device migration\n node = self._node\n if node._data is not None:\n node.retain_data()\n\n def to_gpu(self, device=None):\n \"\"\"Copies the data and gradient arrays to specified GPU.\n\n 
Args:\n device: Target device specifier. If omitted, the current device is\n used.\n\n \"\"\"\n if self.data is None:\n self._data = [None] # Renew placeholder to break sharing\n else:\n self._data = [cuda.to_gpu(self.data, device)]\n if self._grad_var is not None:\n self._grad_var.to_gpu(device)\n # ensure that the node tracks the device migration\n node = self._node\n if node._data is not None:\n node.retain_data()\n\n def to_intel64(self):\n \"\"\"Copies the data and gradient arrays to intel64 specific mdarray.\n\n If the array is not suited for intel64, it will be converted to\n :class:`numpy.ndarray`.\n \"\"\"\n intel64.check_ideep_available()\n data = self.data\n if data is not None:\n if isinstance(data, numpy.ndarray):\n # numpy.ndarray to ideep\n self._data = [\n intel64.ideep.array(\n data, itype=intel64.ideep.wgt_array)]\n elif isinstance(data, cuda.ndarray):\n # cupy.ndarray to ideep\n self._data = [\n intel64.ideep.array(\n data.get(), itype=intel64.ideep.wgt_array)]\n if self._grad_var is not None:\n self._grad_var.to_intel64()\n # ensure that the node tracks the device migration\n node = self._node\n if node._data is not None:\n node.retain_data()\n\n def cleargrad(self):\n \"\"\"Clears the gradient array.\"\"\"\n self._grad_var = None\n\n def zerograd(self):\n \"\"\"Initializes the gradient array by zeros.\n\n Note that the gradient variable is unchained from the computational\n graph by this method because this operation breaks the backprop\n validity.\n\n .. deprecated:: v1.15\n Use :meth:`cleargrad` instead.\n\n \"\"\"\n warnings.warn(\n 'Variable.zerograd is deprecated. Use Variable.cleargrad instead.',\n DeprecationWarning)\n\n if self.data is None:\n return\n\n with cuda.get_device_from_array(self.data) as dev:\n gv = self._grad_var\n if gv is None:\n xp = numpy if dev.id == -1 else cuda.cupy\n self.grad = xp.zeros_like(self.data)\n else:\n gv.unchain()\n gv.data.fill(0)\n\n def copydata(self, var):\n \"\"\"Copies the data array from given source variable.\n\n This method copies the data array from given variable to this variable.\n The copy is done even if the arrays reside on different devices,\n including across the host and a GPU device. If this variable has an\n uninitialized data array, this method initializes it by the data array\n of the given variable. Similarly, if the given variable has an\n uninitialized data array, this method initializes it by the data array\n of this variable (``self``). If both are uninitialized, this method\n does nothing.\n\n Args:\n var (Variable): Source variable.\n\n \"\"\"\n src = var.data\n dst = self.data\n if src is None:\n if dst is None:\n return\n var.initialize(self.shape)\n src = var.data\n elif dst is None:\n self.initialize(src.shape)\n dst = self.data\n src_xp = cuda.get_array_module(src)\n dst_xp = cuda.get_array_module(dst)\n if dst_xp is src_xp:\n dst_xp.copyto(dst, src)\n elif dst_xp is numpy:\n dst_xp.copyto(dst, src.get())\n else:\n dst.set(src)\n\n def addgrad(self, var):\n \"\"\"Accumulates the gradient array from given source variable.\n\n This method adds the gradient of a given variable to the gradient of\n this variable. The accumulation is even done across the host and\n different devices. 
If this variable has uninitialized data/grad arrays,\n this method initializes it with the shape of the given variable and\n then accumulates the gradient.\n\n Args:\n var (Variable): Source variable.\n\n \"\"\"\n src = var._grad_var\n if src is None:\n return\n\n if self.data is None:\n self.initialize(var.shape)\n dst = self._grad_var\n\n src_dev = cuda.get_device_from_array(src.data)\n dst_dev = cuda.get_device_from_array(self.data)\n\n if src_dev.id != dst_dev.id:\n src = chainer.functions.copy(src, dst_dev.id)\n self._grad_var = src if dst is None else src + dst\n\n def set_creator(self, gen_func):\n \"\"\"Notifies the variable that the given function is its creator.\n\n Args:\n gen_func (Function): Function object that creates this variable as\n one of its outputs.\n\n \"\"\"\n self._node.set_creator(gen_func)\n\n def set_creator_node(self, fnode):\n \"\"\"Notifies the variable that the given node is its creator.\n\n Args:\n fnode (FunctionNode): Function node that has this variable as an\n output.\n\n \"\"\"\n self._node.set_creator_node(fnode)\n\n def backward(self, retain_grad=False, enable_double_backprop=False,\n loss_scale=None):\n \"\"\"Runs error backpropagation (a.k.a.\\\\ backprop) from this variable.\n\n On backprop,\n :meth:`FunctionNode.backward() <chainer.FunctionNode.backward>`\n is called on each :class:`~chainer.FunctionNode` object appearing in\n the backward graph starting from this variable.\n The backward graph is represented by backward\n references from variable nodes to their creators, and from function\n nodes to their input variable nodes. The backprop stops at all root\n nodes. Some function nodes set ``None`` as gradients of some inputs,\n where further backprop does not take place at such inputs.\n\n This method uses :data:`grad` as the initial error array. User can\n manually set a gradient array before calling this method.\n If the shape of :data:`data` is ``()`` (i.e., it is scalar) and\n :data:`grad` is ``None``, then this method automatically complements\n 1.0 as the initial error. This is useful on starting backprop from\n some scalar loss value.\n\n From v3, this method supports *differentiable backprop* (a.k.a. double\n backprop, grad of grads). To enable it, pass\n ``enable_double_backprop=True``.\n\n Args:\n retain_grad (bool): If ``True``, the gradient arrays of all\n intermediate variables are kept.\n Otherwise, :data:`~chainer.Variable.grad` of the\n intermediate variables are set to ``None`` on appropriate\n timing, which may reduce the maximum memory consumption.\n\n In most cases of training some models, the purpose of backprop\n is to compute gradients of parameters, not of all variables,\n and therefore it is recommended to set this flag ``False``.\n enable_double_backprop (bool): *(Added in v3.0)* If ``True``,\n computational trace of the whole backpropagation procedure is\n recorded to the computational graph so that one can further do\n backpropagation from the resulting gradients. Note that\n enabling it results in larger memory consumption needed to\n store the gradients w.r.t intermediate variables that are\n required for the second gradient computation.\n loss_scale (float): Loss scaling factor. Loss scaling is a usefull\n technique to mitigate vanishing gradient issue that tends to\n happen when low precision data type like float16 is used during\n training. If you set loss scaling factor, gradients of loss\n values are to be multiplied by the factor before backprop\n starts. 
The factor is propagated to whole gradients in a\n computational graph along the backprop. The gradients of\n parameters are divided by the factor just before the parameters\n are to be updated.\n \"\"\"\n with chainer.using_config('enable_backprop', enable_double_backprop):\n self._backward_main(retain_grad, loss_scale)\n\n def _backward_main(self, retain_grad, loss_scale):\n self._node._check_old_style_gradient()\n if self.creator_node is None:\n return\n initial_device = None\n if cuda.available and isinstance(self.data, cuda.ndarray):\n try:\n initial_device = cuda.Device()\n except cuda.cupy.cuda.runtime.CUDARuntimeError as e:\n if e.status != 38: # cudaErrorNoDevice\n raise\n\n is_debug = chainer.is_debug()\n\n cand_funcs = []\n seen_set = set()\n grads = {}\n\n # Initialize error by 1, if this is a loss variable\n if self.data.size == 1 and self._grad_var is None:\n if self.data.ndim != 0:\n warnings.warn(\n 'Treating a scalar as a variable with only one element'\n ' in Variable.backward is deprecated. A scalar variable'\n ' must be a 0-dimensional array. Apply'\n ' chainer.functions.squeeze to obtain a scalar variable.'\n ' If the size of this variable accidentally becomes one,'\n ' set zero to grad.',\n DeprecationWarning)\n with cuda.get_device_from_array(self.data) as device:\n if device is cuda.DummyDevice:\n self.grad = numpy.ones_like(self.data)\n else:\n self.grad = cuda.cupy.ones_like(self.data)\n if loss_scale is not None:\n self.grad *= loss_scale\n grads[self._node] = self._grad_var\n\n def add_cand(cand):\n if cand not in seen_set:\n # Negate since heapq is min-heap\n heapq.heappush(cand_funcs, (-cand.rank, len(seen_set), cand))\n seen_set.add(cand)\n\n add_cand(self.creator_node)\n\n def get_grad(node):\n if node is None:\n return None\n if node in grads:\n return grads[node]\n return node.grad_var\n\n def set_grad(node, value):\n if node is None:\n return\n if node in grads:\n grads[node] = value\n var = node.get_variable()\n if var is not None:\n var._grad_var = value\n\n while cand_funcs:\n _, _, func = heapq.heappop(cand_funcs)\n inputs = func.inputs\n target_input_indexes = tuple([\n i for i, x in enumerate(inputs) if x.requires_grad\n ])\n if not target_input_indexes:\n continue\n outputs = [y() for y in func.outputs] # access via weak ref\n\n in_data = tuple([x.data for x in inputs])\n # We need calculate the value of for the out_grad which accumulated\n # because now out_grad is used in backward calculation.\n for y in outputs:\n grad = get_grad(y)\n if isinstance(grad, tuple):\n grad = chainer.functions.add(*grad)\n set_grad(y, grad)\n out_grad = tuple([get_grad(y) for y in outputs])\n out_grad_data = tuple(\n [None if g is None else g.data for g in out_grad])\n hooks = chainer.get_function_hooks()\n if func._n_local_function_hooks != 0:\n hooks = collections.OrderedDict(hooks)\n hooks.update(func.local_function_hooks)\n hooks = hooks.values() # avoid six for performance\n\n cuda.get_device_from_array(*in_data).use()\n for hook in hooks:\n hook.backward_preprocess(func, in_data, out_grad_data)\n\n # Collect the current input gradients.\n #\n # Note (Tokui): When the same variable is passed to multiple input\n # slots (e.g. an expression like ``f(x, x)``), it makes the\n # gradient accumulation complicated since the back-propagated\n # gradients w.r.t. the first and second argument should be\n # accumulated to the current gradient w.r.t. 
the same variable.\n # In this case, the current implementation passes the current\n # gradient only to the first occurrence of the variable in the\n # input tuple and passes ``None`` to the rest of the occurrences.\n # For example, when the input variables are ``(x, x)``, the\n # input gradient passed to the ``backward_accumulate`` method is\n # ``(gx, None)`` where ``gx`` is the current gradient of ``x``.\n # See also the docstring of ``FunctionNode.backward_accumulate``.\n target_inputs = [inputs[i] for i in target_input_indexes]\n in_grad = []\n for i, index_i in enumerate(target_input_indexes):\n x = inputs[index_i]\n if x in target_inputs[:i]:\n # Pass ``None`` for duplicated input variables except for\n # the first occurrence (see the comment above).\n gx = None\n elif x in grads:\n gx = grads[x]\n elif x.creator_node is None:\n x._check_old_style_gradient()\n # accumulate the gradient only if the node is a leaf\n gx = x.grad_var\n else:\n gx = None\n in_grad.append(gx)\n in_grad = tuple(in_grad)\n\n gxs = func.backward_accumulate(\n target_input_indexes, out_grad, in_grad)\n\n assert len(gxs) == len(in_grad)\n for hook in hooks:\n hook.backward_postprocess(func, in_data, out_grad_data)\n\n if is_debug:\n for gx in gxs:\n if gx is None:\n continue\n gx_data = gx.data\n if gx_data.dtype.kind == 'f':\n cuda.get_device_from_array(gx_data).use()\n if cuda.get_array_module(gx_data).isnan(gx_data).any():\n raise RuntimeError(\n 'NaN is detected on backward computation of '\n '{}'.format(func.label))\n\n if not retain_grad:\n for y in outputs:\n if y is not None and y is not self.node:\n grads[y] = None\n y_var = y.get_variable_or_none()\n if y_var is not None:\n y_var._grad_var = None\n\n for i, gx in enumerate(gxs):\n if gx is None:\n continue\n\n x = target_inputs[i]\n if not x.requires_grad:\n continue\n\n if isinstance(gx, tuple):\n # No need to check each data in the tuple,\n # just check the new gx concated in\n # backward_accumulate().\n _check_grad_type(func, x, gx[0].data)\n else:\n _check_grad_type(func, x, gx.data)\n\n if x in target_inputs[:i]:\n # Accumulate the duplicated gradients here. See the comment\n # above the code that builds ``in_grad``.\n cur_gx = grads[x]\n if func.lazy_grad_sum:\n if x.creator is None:\n gx = _backprop_utils.add(gx, cur_gx)\n grads[x] = gx\n else:\n grads[x] = _backprop_utils.concat_variable(\n gx, cur_gx)\n else:\n grads[x] = gx if cur_gx is None else gx + cur_gx\n\n else:\n grads[x] = gx\n\n x_var = x.get_variable_or_none()\n if x_var is not None:\n x_var._grad_var = grads[x]\n x_var._loss_scale = loss_scale\n\n if x.creator_node is not None:\n add_cand(x.creator_node)\n\n del gxs # to reduce memory usage\n if initial_device is not None:\n initial_device.use()\n\n def reshape(self, *shape):\n \"\"\"Returns a variable of a different shape and the same content.\n\n .. seealso::\n :func:`chainer.functions.reshape` for full documentation,\n\n \"\"\"\n if len(shape) == 1 and isinstance(shape[0], (tuple, list)):\n shape = shape[0]\n return chainer.functions.reshape(self, shape)\n\n def transpose(self, *axes):\n \"\"\"Permute the dimensions of an input variable without copy.\n\n .. 
seealso::\n :func:`chainer.functions.transpose` for full documentation.\n\n \"\"\"\n if len(axes) == 0:\n axes = None\n elif len(axes) == 1 and (isinstance(axes[0], (tuple, list)) or\n axes[0] is None):\n axes = axes[0]\n return chainer.functions.transpose(self, axes)\n\n def unchain(self):\n \"\"\"Deletes the reference to the creator of this variable.\n\n This method deletes the reference to the creator from the corresponding\n variable node. Unlike :meth:`unchain_backward`, it does not backtrack\n the graph.\n\n This method is equivalent to ``self.creator_node = None``.\n\n \"\"\"\n self.creator_node = None\n\n def unchain_backward(self):\n \"\"\"Deletes references between variable nodes and functions backward.\n\n After this method completes, intermediate variable nodes and functions\n that are not referenced from anywhere are deallocated by reference\n count GC. Also this variable itself deletes the reference to its\n creator function from the node, i.e. the node becomes root in the\n computation graph. It indicates that backprop after unchaining stops at\n this variable. This behavior is useful to implement truncated BPTT.\n\n \"\"\"\n cand_funcs = []\n seen_set = set()\n\n def add_cand(cand):\n if cand is not None and cand not in seen_set:\n cand_funcs.append(cand)\n seen_set.add(cand)\n\n add_cand(self.creator_node)\n\n while cand_funcs:\n func = cand_funcs.pop()\n for var in func.inputs:\n add_cand(var.creator_node)\n func.unchain()\n\n def retain_data(self):\n \"\"\"Lets the corresponding variable node keep the underlying array.\"\"\"\n self._node.data = self._data[0]\n\n def __lt__(self, other):\n raise NotImplementedError()\n\n def __le__(self, other):\n raise NotImplementedError()\n\n def __eq__(self, other):\n raise NotImplementedError()\n\n def __ne__(self, other):\n raise NotImplementedError()\n\n def __gt__(self, other):\n raise NotImplementedError()\n\n def __ge__(self, other):\n raise NotImplementedError()\n\n def __nonzero__(self):\n raise NotImplementedError()\n\n def __bool__(self):\n raise NotImplementedError()\n\n __array_priority__ = 200\n __hash__ = None\n\n\nclass Parameter(Variable):\n\n \"\"\"Parameter variable that can be registered to a link.\n\n Parameter is a subclass of :class:`Variable`. It almost behaves as same\n as a usual variable except that a parameter can be registered to a\n :class:`~chainer.Link` object just by assigning it to an attribute of\n the link within an :meth:`~chainer.Link.init_scope` context.\n\n Parameter also supports an initialization by an initializer. It can have\n two initializers: one for the data array, and the other for the gradient\n array. The initializer only specifies the way of filling the elements of\n these arrays, and the shape information is specified at the initialization\n point.\n\n When a link that the parameter has been registered to is passed to an\n :class:`~chainer.GradientMethod`, an update rule is set to the parameter.\n This update rule specifies how to update the data array of the parameter\n using its gradient array.\n\n Args:\n initializer (~chainer.Initializer or numpy.ndarray or cupy.ndarray):\n Initializer of the data array. If ``shape`` is given, this\n initializer is immediately used to initialize the data array.\n Otherwise, if it is an array, it is immediately used as the data\n array, and otherwise the data array is left uninitialized and will\n be initialized by this initializer in :meth:`initialize`. It can\n also be a scalar, in which case the data array will be filled by\n this scalar. 
Note that float32 is used in this case.\n shape (int or tuple of int or None): Shape of the parameter. If it is\n ``None``, the initialization is deferred to the call of\n :meth:`initialize`.\n name (str): Name of the parameter.\n\n Attributes:\n initializer: Initializer of the data array. It is used for\n initializing the data array of an uninitialized variable.\n update_rule: :class:`~chainer.optimizer.UpdateRule` instance that\n updates this variable as a parameter. This argument is set to\n :attr:`update_rule`.\n\n \"\"\"\n\n initializer = None\n _grad_initializer = None\n _initial_backend = None\n _initial_device = None\n\n def __init__(self, initializer=None, shape=None, name=None):\n if initializer is None:\n initializer = constant.NaN()\n elif numpy.isscalar(initializer):\n initializer = constant.Constant(initializer)\n if shape is None:\n if isinstance(initializer, (numpy.ndarray, cuda.ndarray)):\n # parameter initialized by the initial array\n super(Parameter, self).__init__(initializer, name=name)\n else:\n # uninitialized parameter\n super(Parameter, self).__init__(name=name)\n self.initializer = initializer\n dtype = getattr(initializer, 'dtype', numpy.float32)\n self._grad_initializer = constant.NaN(dtype)\n else:\n # parameter initialized with a given shape\n if isinstance(initializer, (numpy.ndarray, cuda.ndarray)):\n xp = cuda.get_array_module(initializer)\n initializer = constant.Constant(initializer)\n else:\n xp = numpy\n data = initializers.generate_array(initializer, shape, xp)\n grad = xp.full_like(data, numpy.nan)\n super(Parameter, self).__init__(data, name=name, grad=grad)\n\n self.update_rule = None\n\n def __copy__(self):\n return self._copy_to(Parameter())\n\n def __reduce__(self):\n return _recover_parameter, (self.data, self.name, self.grad,\n self.initializer, self.update_rule)\n\n def to_cpu(self):\n super(Parameter, self).to_cpu()\n if self.data is None:\n self._initial_backend = None\n self._initial_device = None\n\n def to_gpu(self, device=None):\n super(Parameter, self).to_gpu(device)\n if self.data is None:\n if device is None:\n device = cuda.Device().id\n self._initial_backend = 'cuda'\n self._initial_device = device\n\n def to_intel64(self):\n super(Parameter, self).to_intel64()\n if self.data is None:\n self._initial_backend = 'intel64'\n self._initial_device = None\n\n def cleargrad(self):\n super(Parameter, self).cleargrad()\n if self.data is None:\n self._grad_initializer = None\n\n def zerograd(self):\n super(Parameter, self).zerograd()\n if self.data is None:\n dtype = getattr(self.initializer, 'dtype', None)\n self._grad_initializer = initializers.Zero(dtype)\n\n def initialize(self, shape):\n \"\"\"Initializes the uninitialized variable.\n\n Uninitialized variable is a variable created with the data array set to\n None. This method creates and initializes the data array. 
The shape of\n the variable can be left unknown until this method is called.\n\n Args:\n shape (tuple of int): Shape of the data array.\n\n \"\"\"\n xp = numpy if self._initial_backend != 'cuda' else cuda.cupy\n with cuda.get_device_from_id(self._initial_device):\n data = initializers.generate_array(self.initializer, shape, xp)\n\n ginit = self._grad_initializer\n grad = None if ginit is None else initializers.generate_array(\n ginit, shape, xp)\n\n self.data = data\n self.grad = grad\n\n # Convert the array for iDeep.\n if self._initial_backend == 'intel64':\n self.to_intel64()\n\n def update(self):\n \"\"\"Updates the data array using the gradient and the update rule.\n\n This method updates the parameter using the attached update rule.\n\n \"\"\"\n if self.update_rule is not None:\n self.update_rule.update(self)\n\n\ndef as_variable(obj):\n \"\"\"Converts an array or a variable into :class:`~chainer.Variable`.\n\n This is a convenient function to get a :class:`~chainer.Variable` object\n transparently from a raw array or a variable.\n\n Note that this function should only be used for type consistency (i.e., to\n enforce the return value of an API having type :class:`~chainer.Varialbe`).\n The :class:`~chainer.Variable.requires_grad` flag is kept as is; if ``obj``\n is a raw array, the newly created variable has ``requires_grad = False``.\n In order to make a variable w.r.t. which you want to compute the gradient,\n you should use :class:`~chainer.Variable` directly.\n\n Args:\n obj (numpy.ndarray or cupy.ndarray or ~chainer.Variable): An array or\n a variable that you want to convert to :class:`~chainer.Variable`.\n\n Returns:\n ~chainer.Variable:\n A variable converted from ``obj``. If ``obj`` is a raw array, this is a\n new :class:`~chainer.Variable` object that wraps the array. If ``obj``\n is already a :class:`~chainer.Variable` object, this function returns\n ``obj`` as is.\n\n \"\"\"\n if isinstance(obj, Variable):\n return obj\n return Variable(obj, requires_grad=False)\n\n\ndef _recover_parameter(data, name, grad, initializer, update_rule):\n p = Parameter(initializer=initializer, name=name)\n p.data = data\n p.grad = grad\n p.update_rule = update_rule\n return p\n", "path": "chainer/variable.py"}]} |
gh_patches_debug_1122 | rasdani/github-patches | git_diff | SeldonIO__MLServer-945 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
MLServer is incompatible with latest release of FastAPI
MLServer is incompatible with [latest release of FastAPI](https://github.com/tiangolo/fastapi/releases/tag/0.89.0), and installing any version of MLServer will result in the following error, temp workaround added in this [pull request](https://github.com/SeldonIO/MLServer/pull/934) however, I think this needs a more in-depth root-cause analysis.
```
2023-01-09 02:11:59,296 [mlserver] INFO - Using asyncio event-loop policy: uvloop
2023-01-09 02:11:59,301 [mlserver] WARNING - Model name 'node-1' is different than model's folder name '25-mlserver-example-single'.
Traceback (most recent call last):
File "/home/cc/miniconda3/envs/central-1/bin/mlserver", line 8, in <module>
sys.exit(main())
File "/home/cc/miniconda3/envs/central-1/lib/python3.9/site-packages/mlserver/cli/main.py", line 79, in main
root()
File "/home/cc/miniconda3/envs/central-1/lib/python3.9/site-packages/click/core.py", line 1130, in __call__
return self.main(*args, **kwargs)
File "/home/cc/miniconda3/envs/central-1/lib/python3.9/site-packages/click/core.py", line 1055, in main
rv = self.invoke(ctx)
File "/home/cc/miniconda3/envs/central-1/lib/python3.9/site-packages/click/core.py", line 1657, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/cc/miniconda3/envs/central-1/lib/python3.9/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/cc/miniconda3/envs/central-1/lib/python3.9/site-packages/click/core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "/home/cc/miniconda3/envs/central-1/lib/python3.9/site-packages/mlserver/cli/main.py", line 20, in wrapper
return asyncio.run(f(*args, **kwargs))
File "/home/cc/miniconda3/envs/central-1/lib/python3.9/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "uvloop/loop.pyx", line 1517, in uvloop.loop.Loop.run_until_complete
File "/home/cc/miniconda3/envs/central-1/lib/python3.9/site-packages/mlserver/cli/main.py", line 43, in start
server = MLServer(settings)
File "/home/cc/miniconda3/envs/central-1/lib/python3.9/site-packages/mlserver/server.py", line 71, in __init__
self._rest_server = RESTServer(
File "/home/cc/miniconda3/envs/central-1/lib/python3.9/site-packages/mlserver/rest/server.py", line 26, in __init__
self._app = create_app(
File "/home/cc/miniconda3/envs/central-1/lib/python3.9/site-packages/mlserver/rest/app.py", line 43, in create_app
APIRoute(
File "/home/cc/miniconda3/envs/central-1/lib/python3.9/site-packages/fastapi/routing.py", line 400, in __init__
self.response_field = create_response_field(
File "/home/cc/miniconda3/envs/central-1/lib/python3.9/site-packages/fastapi/utils.py", line 90, in create_response_field
raise fastapi.exceptions.FastAPIError(
fastapi.exceptions.FastAPIError: Invalid args for response field! Hint: check that <class 'starlette.responses.Response'> is a valid pydantic field type
```
--- END ISSUE ---
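For readers of this record: a minimal, self-contained sketch (not taken from MLServer; the route and handler names are invented) of the FastAPI 0.89.0 behaviour the traceback points at: registering a route whose return annotation is a bare Starlette `Response` makes `create_response_field` raise at route-registration time, which is why the server dies while building its `APIRoute`s.
```python
# Hypothetical reproduction, assuming fastapi==0.89.0 (it runs cleanly on 0.88.0 and 0.89.1).
from fastapi import FastAPI
from starlette.responses import Response

app = FastAPI()

@app.get("/live")        # route registration runs APIRoute.__init__ ...
def live() -> Response:  # ... which tries to build a response field from this annotation
    return Response(content="OK")
```
On 0.89.0 the decorator itself raises the same `FastAPIError: Invalid args for response field!` shown above; 0.89.1 went back to ignoring `Response` return annotations, which appears to be what the version pin in the patch below encodes.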
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 import os
2
3 from typing import Dict
4 from setuptools import setup, find_packages
5
6 ROOT_PATH = os.path.dirname(__file__)
7 PKG_NAME = "mlserver"
8 PKG_PATH = os.path.join(ROOT_PATH, PKG_NAME)
9
10
11 def _load_version() -> str:
12 version = ""
13 version_path = os.path.join(PKG_PATH, "version.py")
14 with open(version_path) as fp:
15 version_module: Dict[str, str] = {}
16 exec(fp.read(), version_module)
17 version = version_module["__version__"]
18
19 return version
20
21
22 def _load_description() -> str:
23 readme_path = os.path.join(ROOT_PATH, "README.md")
24 with open(readme_path) as fp:
25 return fp.read()
26
27
28 env_marker_cpython = (
29 "sys_platform != 'win32'"
30 " and (sys_platform != 'cygwin'"
31 " and platform_python_implementation != 'PyPy')"
32 )
33
34 setup(
35 name=PKG_NAME,
36 version=_load_version(),
37 url="https://github.com/SeldonIO/MLServer.git",
38 author="Seldon Technologies Ltd.",
39 author_email="[email protected]",
40 description="ML server",
41 packages=find_packages(exclude=["tests", "tests.*"]),
42 install_requires=[
43 "click",
44 "fastapi<=0.88.0",
45 "python-dotenv",
46 "grpcio",
47 "importlib-metadata;python_version<'3.8'",
48 "numpy",
49 "pandas",
50 "protobuf",
51 "uvicorn",
52 "starlette_exporter",
53 "py-grpc-prometheus",
54 "uvloop;" + env_marker_cpython,
55 "aiokafka",
56 "tritonclient[http]>=2.24",
57 "aiofiles",
58 "orjson",
59 ],
60 entry_points={"console_scripts": ["mlserver=mlserver.cli:main"]},
61 long_description=_load_description(),
62 long_description_content_type="text/markdown",
63 license="Apache 2.0",
64 )
65
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -41,7 +41,8 @@
packages=find_packages(exclude=["tests", "tests.*"]),
install_requires=[
"click",
- "fastapi<=0.88.0",
+ # 0.89.0: https://github.com/tiangolo/fastapi/issues/5861
+ "fastapi<=0.89.1, !=0.89.0",
"python-dotenv",
"grpcio",
"importlib-metadata;python_version<'3.8'",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -41,7 +41,8 @@\n packages=find_packages(exclude=[\"tests\", \"tests.*\"]),\n install_requires=[\n \"click\",\n- \"fastapi<=0.88.0\",\n+ # 0.89.0: https://github.com/tiangolo/fastapi/issues/5861\n+ \"fastapi<=0.89.1, !=0.89.0\",\n \"python-dotenv\",\n \"grpcio\",\n \"importlib-metadata;python_version<'3.8'\",\n", "issue": "MLServer is incompatible with latest release of FastAPI\nMLServer is incompatible with [latest release of FastAPI](https://github.com/tiangolo/fastapi/releases/tag/0.89.0), and installing any version of MLServer will result in the following error, temp workaround added in this [pull request](https://github.com/SeldonIO/MLServer/pull/934) however, I think this needs a more in-depth root-cause analysis.\r\n```\r\n2023-01-09 02:11:59,296 [mlserver] INFO - Using asyncio event-loop policy: uvloop\r\n2023-01-09 02:11:59,301 [mlserver] WARNING - Model name 'node-1' is different than model's folder name '25-mlserver-example-single'.\r\nTraceback (most recent call last):\r\n File \"/home/cc/miniconda3/envs/central-1/bin/mlserver\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/home/cc/miniconda3/envs/central-1/lib/python3.9/site-packages/mlserver/cli/main.py\", line 79, in main\r\n root()\r\n File \"/home/cc/miniconda3/envs/central-1/lib/python3.9/site-packages/click/core.py\", line 1130, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/home/cc/miniconda3/envs/central-1/lib/python3.9/site-packages/click/core.py\", line 1055, in main\r\n rv = self.invoke(ctx)\r\n File \"/home/cc/miniconda3/envs/central-1/lib/python3.9/site-packages/click/core.py\", line 1657, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/home/cc/miniconda3/envs/central-1/lib/python3.9/site-packages/click/core.py\", line 1404, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/home/cc/miniconda3/envs/central-1/lib/python3.9/site-packages/click/core.py\", line 760, in invoke\r\n return __callback(*args, **kwargs)\r\n File \"/home/cc/miniconda3/envs/central-1/lib/python3.9/site-packages/mlserver/cli/main.py\", line 20, in wrapper\r\n return asyncio.run(f(*args, **kwargs))\r\n File \"/home/cc/miniconda3/envs/central-1/lib/python3.9/asyncio/runners.py\", line 44, in run\r\n return loop.run_until_complete(main)\r\n File \"uvloop/loop.pyx\", line 1517, in uvloop.loop.Loop.run_until_complete\r\n File \"/home/cc/miniconda3/envs/central-1/lib/python3.9/site-packages/mlserver/cli/main.py\", line 43, in start\r\n server = MLServer(settings)\r\n File \"/home/cc/miniconda3/envs/central-1/lib/python3.9/site-packages/mlserver/server.py\", line 71, in __init__\r\n self._rest_server = RESTServer(\r\n File \"/home/cc/miniconda3/envs/central-1/lib/python3.9/site-packages/mlserver/rest/server.py\", line 26, in __init__\r\n self._app = create_app(\r\n File \"/home/cc/miniconda3/envs/central-1/lib/python3.9/site-packages/mlserver/rest/app.py\", line 43, in create_app\r\n APIRoute(\r\n File \"/home/cc/miniconda3/envs/central-1/lib/python3.9/site-packages/fastapi/routing.py\", line 400, in __init__\r\n self.response_field = create_response_field(\r\n File \"/home/cc/miniconda3/envs/central-1/lib/python3.9/site-packages/fastapi/utils.py\", line 90, in create_response_field\r\n raise fastapi.exceptions.FastAPIError(\r\nfastapi.exceptions.FastAPIError: Invalid args for response field! 
Hint: check that <class 'starlette.responses.Response'> is a valid pydantic field type\r\n```\n", "before_files": [{"content": "import os\n\nfrom typing import Dict\nfrom setuptools import setup, find_packages\n\nROOT_PATH = os.path.dirname(__file__)\nPKG_NAME = \"mlserver\"\nPKG_PATH = os.path.join(ROOT_PATH, PKG_NAME)\n\n\ndef _load_version() -> str:\n version = \"\"\n version_path = os.path.join(PKG_PATH, \"version.py\")\n with open(version_path) as fp:\n version_module: Dict[str, str] = {}\n exec(fp.read(), version_module)\n version = version_module[\"__version__\"]\n\n return version\n\n\ndef _load_description() -> str:\n readme_path = os.path.join(ROOT_PATH, \"README.md\")\n with open(readme_path) as fp:\n return fp.read()\n\n\nenv_marker_cpython = (\n \"sys_platform != 'win32'\"\n \" and (sys_platform != 'cygwin'\"\n \" and platform_python_implementation != 'PyPy')\"\n)\n\nsetup(\n name=PKG_NAME,\n version=_load_version(),\n url=\"https://github.com/SeldonIO/MLServer.git\",\n author=\"Seldon Technologies Ltd.\",\n author_email=\"[email protected]\",\n description=\"ML server\",\n packages=find_packages(exclude=[\"tests\", \"tests.*\"]),\n install_requires=[\n \"click\",\n \"fastapi<=0.88.0\",\n \"python-dotenv\",\n \"grpcio\",\n \"importlib-metadata;python_version<'3.8'\",\n \"numpy\",\n \"pandas\",\n \"protobuf\",\n \"uvicorn\",\n \"starlette_exporter\",\n \"py-grpc-prometheus\",\n \"uvloop;\" + env_marker_cpython,\n \"aiokafka\",\n \"tritonclient[http]>=2.24\",\n \"aiofiles\",\n \"orjson\",\n ],\n entry_points={\"console_scripts\": [\"mlserver=mlserver.cli:main\"]},\n long_description=_load_description(),\n long_description_content_type=\"text/markdown\",\n license=\"Apache 2.0\",\n)\n", "path": "setup.py"}], "after_files": [{"content": "import os\n\nfrom typing import Dict\nfrom setuptools import setup, find_packages\n\nROOT_PATH = os.path.dirname(__file__)\nPKG_NAME = \"mlserver\"\nPKG_PATH = os.path.join(ROOT_PATH, PKG_NAME)\n\n\ndef _load_version() -> str:\n version = \"\"\n version_path = os.path.join(PKG_PATH, \"version.py\")\n with open(version_path) as fp:\n version_module: Dict[str, str] = {}\n exec(fp.read(), version_module)\n version = version_module[\"__version__\"]\n\n return version\n\n\ndef _load_description() -> str:\n readme_path = os.path.join(ROOT_PATH, \"README.md\")\n with open(readme_path) as fp:\n return fp.read()\n\n\nenv_marker_cpython = (\n \"sys_platform != 'win32'\"\n \" and (sys_platform != 'cygwin'\"\n \" and platform_python_implementation != 'PyPy')\"\n)\n\nsetup(\n name=PKG_NAME,\n version=_load_version(),\n url=\"https://github.com/SeldonIO/MLServer.git\",\n author=\"Seldon Technologies Ltd.\",\n author_email=\"[email protected]\",\n description=\"ML server\",\n packages=find_packages(exclude=[\"tests\", \"tests.*\"]),\n install_requires=[\n \"click\",\n # 0.89.0: https://github.com/tiangolo/fastapi/issues/5861\n \"fastapi<=0.89.1, !=0.89.0\",\n \"python-dotenv\",\n \"grpcio\",\n \"importlib-metadata;python_version<'3.8'\",\n \"numpy\",\n \"pandas\",\n \"protobuf\",\n \"uvicorn\",\n \"starlette_exporter\",\n \"py-grpc-prometheus\",\n \"uvloop;\" + env_marker_cpython,\n \"aiokafka\",\n \"tritonclient[http]>=2.24\",\n \"aiofiles\",\n \"orjson\",\n ],\n entry_points={\"console_scripts\": [\"mlserver=mlserver.cli:main\"]},\n long_description=_load_description(),\n long_description_content_type=\"text/markdown\",\n license=\"Apache 2.0\",\n)\n", "path": "setup.py"}]} |
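A quick, hedged sanity check (the `packaging` dependency is an assumption, not part of this record) that an installed FastAPI satisfies the constraint introduced by the patch above:
```python
# Passes for any FastAPI version allowed by "<=0.89.1, !=0.89.0".
import fastapi
from packaging.specifiers import SpecifierSet

assert fastapi.__version__ in SpecifierSet("<=0.89.1,!=0.89.0"), fastapi.__version__
```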
gh_patches_debug_1123 | rasdani/github-patches | git_diff | pytorch__text-193 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Side effect in Vocab __init__
The constructor of Vocab accumulates input data in its `specials` argument/variable and pollutes the input argument `counter`. The constructed object is also wrong because of this side effect. Please find a reproducible example below:
```
>>> c1 = Counter([1])
>>> v1 = Vocab(c1)
>>> print(c1)
Counter({1: 1, u'<pad>': 0})
>>> print(v1.stoi)
defaultdict(<function _default_unk_index at 0x10b4aa758>, {1: 1, u'<pad>': 0})
>>> c2 = Counter([2])
>>> print(c2)
Counter({2: 1})
>>> v2 = Vocab(c2)
>>> print(c2)
Counter({2: 1, 1: 0, u'<pad>': 0}) # c2 is changed after passing as argument
>>> print(v2.stoi)
defaultdict(<function _default_unk_index at 0x10b4aa758>, {1: 1, u'<pad>': 0, 2: 2}) # resulting vocabulary is wrong
```
--- END ISSUE ---
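The fix the maintainers accepted (see the diff later in this record) boils down to treating the caller's `Counter` as read-only and doing all bookkeeping on a private copy. A stand-alone sketch of that pattern, with an invented helper name for illustration:
```python
from collections import Counter

def with_specials(counter: Counter, specials=("<pad>",)) -> Counter:
    """Return a new Counter that also knows about the special tokens,
    leaving the caller's Counter untouched."""
    working = counter.copy()        # mutate only the copy
    for tok in specials:
        working.setdefault(tok, 0)  # register specials without changing real counts
    return working

c = Counter([1])
with_specials(c)
assert c == Counter([1])            # no side effect on the input
```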
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `torchtext/vocab.py`
Content:
```
1 from __future__ import unicode_literals
2 import array
3 from collections import defaultdict
4 import io
5 import logging
6 import os
7 import zipfile
8
9 import six
10 from six.moves.urllib.request import urlretrieve
11 import torch
12 from tqdm import tqdm
13 import tarfile
14
15 from .utils import reporthook
16
17 logger = logging.getLogger(__name__)
18
19
20 class Vocab(object):
21 """Defines a vocabulary object that will be used to numericalize a field.
22
23 Attributes:
24 freqs: A collections.Counter object holding the frequencies of tokens
25 in the data used to build the Vocab.
26 stoi: A collections.defaultdict instance mapping token strings to
27 numerical identifiers.
28 itos: A list of token strings indexed by their numerical identifiers.
29 """
30 def __init__(self, counter, max_size=None, min_freq=1, specials=['<pad>'],
31 vectors=None):
32 """Create a Vocab object from a collections.Counter.
33
34 Arguments:
35 counter: collections.Counter object holding the frequencies of
36 each value found in the data.
37 max_size: The maximum size of the vocabulary, or None for no
38 maximum. Default: None.
39 min_freq: The minimum frequency needed to include a token in the
40 vocabulary. Values less than 1 will be set to 1. Default: 1.
41 specials: The list of special tokens (e.g., padding or eos) that
42 will be prepended to the vocabulary in addition to an <unk>
43 token. Default: ['<pad>']
44 vectors: One of either the available pretrained vectors
45 or custom pretrained vectors (see Vocab.load_vectors);
46 or a list of aforementioned vectors
47 """
48 self.freqs = counter.copy()
49 min_freq = max(min_freq, 1)
50 counter.update(specials)
51
52 self.stoi = defaultdict(_default_unk_index)
53 self.stoi.update({tok: i for i, tok in enumerate(specials)})
54 self.itos = list(specials)
55
56 counter.subtract({tok: counter[tok] for tok in specials})
57 max_size = None if max_size is None else max_size + len(self.itos)
58
59 # sort by frequency, then alphabetically
60 words_and_frequencies = sorted(counter.items(), key=lambda tup: tup[0])
61 words_and_frequencies.sort(key=lambda tup: tup[1], reverse=True)
62
63 for word, freq in words_and_frequencies:
64 if freq < min_freq or len(self.itos) == max_size:
65 break
66 self.itos.append(word)
67 self.stoi[word] = len(self.itos) - 1
68
69 self.vectors = None
70 if vectors is not None:
71 self.load_vectors(vectors)
72
73 def __eq__(self, other):
74 if self.freqs != other.freqs:
75 return False
76 if self.stoi != other.stoi:
77 return False
78 if self.itos != other.itos:
79 return False
80 if self.vectors != other.vectors:
81 return False
82 return True
83
84 def __len__(self):
85 return len(self.itos)
86
87 def extend(self, v, sort=False):
88 words = sorted(v.itos) if sort else v.itos
89 for w in words:
90 if w not in self.stoi:
91 self.itos.append(w)
92 self.stoi[w] = len(self.itos) - 1
93
94 def load_vectors(self, vectors):
95 """
96 Arguments:
97 vectors: one of or a list containing instantiations of the
98 GloVe, CharNGram, or Vectors classes. Alternatively, one
99 of or a list of available pretrained vectors:
100 charngram.100d
101 fasttext.en.300d
102 fasttext.simple.300d
103 glove.42B.300d
104 glove.840B.300d
105 glove.twitter.27B.25d
106 glove.twitter.27B.50d
107 glove.twitter.27B.100d
108 glove.twitter.27B.200d
109 glove.6B.50d
110 glove.6B.100d
111 glove.6B.200d
112 glove.6B.300d
113 """
114 if not isinstance(vectors, list):
115 vectors = [vectors]
116 for idx, vector in enumerate(vectors):
117 if six.PY2 and isinstance(vector, str):
118 vector = six.text_type(vector)
119 if isinstance(vector, six.string_types):
120 # Convert the string pretrained vector identifier
121 # to a Vectors object
122 if vector not in pretrained_aliases:
123 raise ValueError(
124 "Got string input vector {}, but allowed pretrained "
125 "vectors are {}".format(
126 vector, list(pretrained_aliases.keys())))
127 vectors[idx] = pretrained_aliases[vector]()
128 elif not isinstance(vector, Vectors):
129 raise ValueError(
130 "Got input vectors of type {}, expected str or "
131 "Vectors object".format(type(vector)))
132
133 tot_dim = sum(v.dim for v in vectors)
134 self.vectors = torch.Tensor(len(self), tot_dim)
135 for i, token in enumerate(self.itos):
136 start_dim = 0
137 for v in vectors:
138 end_dim = start_dim + v.dim
139 self.vectors[i][start_dim:end_dim] = v[token.strip()]
140 start_dim = end_dim
141 assert(start_dim == tot_dim)
142
143 def set_vectors(self, stoi, vectors, dim, unk_init=torch.Tensor.zero_):
144 """
145 Set the vectors for the Vocab instance from a collection of Tensors.
146
147 Arguments:
148 stoi: A dictionary of string to the index of the associated vector
149 in the `vectors` input argument.
150 vectors: An indexed iterable (or other structure supporting __getitem__) that
151 given an input index, returns a FloatTensor representing the vector
152 for the token associated with the index. For example,
153 vector[stoi["string"]] should return the vector for "string".
154 dim: The dimensionality of the vectors.
155 unk_init (callback): by default, initialize out-of-vocabulary word vectors
156 to zero vectors; can be any function that takes in a Tensor and
157 returns a Tensor of the same size. Default: torch.Tensor.zero_
158 """
159 self.vectors = torch.Tensor(len(self), dim)
160 for i, token in enumerate(self.itos):
161 wv_index = stoi.get(token, None)
162 if wv_index is not None:
163 self.vectors[i] = vectors[wv_index]
164 else:
165 self.vectors[i] = unk_init(self.vectors[i])
166
167
168 class SubwordVocab(Vocab):
169
170 def __init__(self, counter, max_size=None, specials=['<pad>'],
171 vectors=None, unk_init=torch.Tensor.zero_, expand_vocab=False):
172 """Create a revtok subword vocabulary from a collections.Counter.
173
174 Arguments:
175 counter: collections.Counter object holding the frequencies of
176 each word found in the data.
177 max_size: The maximum size of the subword vocabulary, or None for no
178 maximum. Default: None.
179 specials: The list of special tokens (e.g., padding or eos) that
180 will be prepended to the vocabulary in addition to an <unk>
181 token.
182 """
183 try:
184 import revtok
185 except ImportError:
186 print("Please install revtok.")
187 raise
188
189 self.stoi = defaultdict(_default_unk_index)
190 self.stoi.update({tok: i for i, tok in enumerate(specials)})
191 self.itos = specials
192
193 self.segment = revtok.SubwordSegmenter(counter, max_size)
194
195 max_size = None if max_size is None else max_size + len(self.itos)
196
197 # sort by frequency/entropy, then alphabetically
198 toks = sorted(self.segment.vocab.items(),
199 key=lambda tup: (len(tup[0]) != 1, -tup[1], tup[0]))
200
201 for tok, _ in toks:
202 self.itos.append(tok)
203 self.stoi[tok] = len(self.itos) - 1
204
205 if vectors is not None:
206 self.load_vectors(vectors, unk_init=unk_init, expand_vocab=expand_vocab)
207
208
209 class Vectors(object):
210
211 def __init__(self, name, cache='.vector_cache',
212 url=None, unk_init=torch.Tensor.zero_):
213 """Arguments:
214 name: name of the file that contains the vectors
215 cache: directory for cached vectors
216 url: url for download if vectors not found in cache
217 unk_init (callback): by default, initalize out-of-vocabulary word vectors
218 to zero vectors; can be any function that takes in a Tensor and
219 returns a Tensor of the same size
220 """
221 self.unk_init = unk_init
222 self.cache(name, cache, url=url)
223
224 def __getitem__(self, token):
225 if token in self.stoi:
226 return self.vectors[self.stoi[token]]
227 else:
228 return self.unk_init(torch.Tensor(1, self.dim))
229
230 def cache(self, name, cache, url=None):
231 if os.path.isfile(name):
232 path = name
233 path_pt = os.path.join(cache, os.path.basename(name)) + '.pt'
234 else:
235 path = os.path.join(cache, name)
236 path_pt = path + '.pt'
237
238 if not os.path.isfile(path_pt):
239 if not os.path.isfile(path) and url:
240 logger.info('Downloading vectors from {}'.format(url))
241 if not os.path.exists(cache):
242 os.makedirs(cache)
243 dest = os.path.join(cache, os.path.basename(url))
244 if not os.path.isfile(dest):
245 with tqdm(unit='B', unit_scale=True, miniters=1, desc=dest) as t:
246 urlretrieve(url, dest, reporthook=reporthook(t))
247 logger.info('Extracting vectors into {}'.format(cache))
248 ext = os.path.splitext(dest)[1][1:]
249 if ext == 'zip':
250 with zipfile.ZipFile(dest, "r") as zf:
251 zf.extractall(cache)
252 elif ext == 'gz':
253 with tarfile.open(dest, 'r:gz') as tar:
254 tar.extractall(path=cache)
255 if not os.path.isfile(path):
256 raise RuntimeError('no vectors found at {}'.format(path))
257
258 # str call is necessary for Python 2/3 compatibility, since
259 # argument must be Python 2 str (Python 3 bytes) or
260 # Python 3 str (Python 2 unicode)
261 itos, vectors, dim = [], array.array(str('d')), None
262
263 # Try to read the whole file with utf-8 encoding.
264 binary_lines = False
265 try:
266 with io.open(path, encoding="utf8") as f:
267 lines = [line for line in f]
268 # If there are malformed lines, read in binary mode
269 # and manually decode each word from utf-8
270 except:
271 logger.warning("Could not read {} as UTF8 file, "
272 "reading file as bytes and skipping "
273 "words with malformed UTF8.".format(path))
274 with open(path, 'rb') as f:
275 lines = [line for line in f]
276 binary_lines = True
277
278 logger.info("Loading vectors from {}".format(path))
279 for line in tqdm(lines, total=len(lines)):
280 # Explicitly splitting on " " is important, so we don't
281 # get rid of Unicode non-breaking spaces in the vectors.
282 entries = line.rstrip().split(b" " if binary_lines else " ")
283
284 word, entries = entries[0], entries[1:]
285 if dim is None and len(entries) > 1:
286 dim = len(entries)
287 elif len(entries) == 1:
288 logger.warning("Skipping token {} with 1-dimensional "
289 "vector {}; likely a header".format(word, entries))
290 continue
291 elif dim != len(entries):
292 raise RuntimeError(
293 "Vector for token {} has {} dimensions, but previously "
294 "read vectors have {} dimensions. All vectors must have "
295 "the same number of dimensions.".format(word, len(entries), dim))
296
297 if binary_lines:
298 try:
299 if isinstance(word, six.binary_type):
300 word = word.decode('utf-8')
301 except:
302 logger.info("Skipping non-UTF8 token {}".format(repr(word)))
303 continue
304 vectors.extend(float(x) for x in entries)
305 itos.append(word)
306
307 self.itos = itos
308 self.stoi = {word: i for i, word in enumerate(itos)}
309 self.vectors = torch.Tensor(vectors).view(-1, dim)
310 self.dim = dim
311 logger.info('Saving vectors to {}'.format(path_pt))
312 torch.save((self.itos, self.stoi, self.vectors, self.dim), path_pt)
313 else:
314 logger.info('Loading vectors from {}'.format(path_pt))
315 self.itos, self.stoi, self.vectors, self.dim = torch.load(path_pt)
316
317
318 class GloVe(Vectors):
319 url = {
320 '42B': 'http://nlp.stanford.edu/data/glove.42B.300d.zip',
321 '840B': 'http://nlp.stanford.edu/data/glove.840B.300d.zip',
322 'twitter.27B': 'http://nlp.stanford.edu/data/glove.twitter.27B.zip',
323 '6B': 'http://nlp.stanford.edu/data/glove.6B.zip',
324 }
325
326 def __init__(self, name='840B', dim=300, **kwargs):
327 url = self.url[name]
328 name = 'glove.{}.{}d.txt'.format(name, str(dim))
329 super(GloVe, self).__init__(name, url=url, **kwargs)
330
331
332 class FastText(Vectors):
333
334 url_base = 'https://s3-us-west-1.amazonaws.com/fasttext-vectors/wiki.{}.vec'
335
336 def __init__(self, language="en", **kwargs):
337 url = self.url_base.format(language)
338 name = os.path.basename(url)
339 super(FastText, self).__init__(name, url=url, **kwargs)
340
341
342 class CharNGram(Vectors):
343
344 name = 'charNgram.txt'
345 url = ('http://www.logos.t.u-tokyo.ac.jp/~hassy/publications/arxiv2016jmt/'
346 'jmt_pre-trained_embeddings.tar.gz')
347
348 def __init__(self, **kwargs):
349 super(CharNGram, self).__init__(self.name, url=self.url, **kwargs)
350
351 def __getitem__(self, token):
352 vector = torch.Tensor(1, self.dim).zero_()
353 if token == "<unk>":
354 return self.unk_init(vector)
355 # These literals need to be coerced to unicode for Python 2 compatibility
356 # when we try to join them with read ngrams from the files.
357 chars = ['#BEGIN#'] + list(token) + ['#END#']
358 num_vectors = 0
359 for n in [2, 3, 4]:
360 end = len(chars) - n + 1
361 grams = [chars[i:(i + n)] for i in range(end)]
362 for gram in grams:
363 gram_key = '{}gram-{}'.format(n, ''.join(gram))
364 if gram_key in self.stoi:
365 vector += self.vectors[self.stoi[gram_key]]
366 num_vectors += 1
367 if num_vectors > 0:
368 vector /= num_vectors
369 else:
370 vector = self.unk_init(vector)
371 return vector
372
373
374 def _default_unk_index():
375 return 0
376
377
378 pretrained_aliases = {
379 "charngram.100d": lambda: CharNGram(),
380 "fasttext.en.300d": lambda: FastText(language="en"),
381 "fasttext.simple.300d": lambda: FastText(language="simple"),
382 "glove.42B.300d": lambda: GloVe(name="42B", dim="300"),
383 "glove.840B.300d": lambda: GloVe(name="840B", dim="300"),
384 "glove.twitter.27B.25d": lambda: GloVe(name="twitter.27B", dim="25"),
385 "glove.twitter.27B.50d": lambda: GloVe(name="twitter.27B", dim="50"),
386 "glove.twitter.27B.100d": lambda: GloVe(name="twitter.27B", dim="100"),
387 "glove.twitter.27B.200d": lambda: GloVe(name="twitter.27B", dim="200"),
388 "glove.6B.50d": lambda: GloVe(name="6B", dim="50"),
389 "glove.6B.100d": lambda: GloVe(name="6B", dim="100"),
390 "glove.6B.200d": lambda: GloVe(name="6B", dim="200"),
391 "glove.6B.300d": lambda: GloVe(name="6B", dim="300")
392 }
393
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/torchtext/vocab.py b/torchtext/vocab.py
--- a/torchtext/vocab.py
+++ b/torchtext/vocab.py
@@ -45,7 +45,8 @@
or custom pretrained vectors (see Vocab.load_vectors);
or a list of aforementioned vectors
"""
- self.freqs = counter.copy()
+ self.freqs = counter
+ counter = counter.copy()
min_freq = max(min_freq, 1)
counter.update(specials)
| {"golden_diff": "diff --git a/torchtext/vocab.py b/torchtext/vocab.py\n--- a/torchtext/vocab.py\n+++ b/torchtext/vocab.py\n@@ -45,7 +45,8 @@\n or custom pretrained vectors (see Vocab.load_vectors);\n or a list of aforementioned vectors\n \"\"\"\n- self.freqs = counter.copy()\n+ self.freqs = counter\n+ counter = counter.copy()\n min_freq = max(min_freq, 1)\n counter.update(specials)\n", "issue": "Side effect in Vocab __init__\nThe constructor of Vocab accumulates input data in its `specials` argument/variable and pollute the input argument `counter`. The constructed object is also wrong because of this side effect. Please find reproducible example below:\r\n```\r\n>>> c1 = Counter([1])\r\n>>> v1 = Vocab(c1)\r\n>>> print(c1)\r\nCounter({1: 1, u'<pad>': 0})\r\n>>> print(v1.stoi)\r\ndefaultdict(<function _default_unk_index at 0x10b4aa758>, {1: 1, u'<pad>': 0})\r\n\r\n>>> c2 = Counter([2])\r\n>>> print(c2)\r\nCounter({2: 1})\r\n>>> v2 = Vocab(c2)\r\n>>> print(c2)\r\nCounter({2: 1, 1: 0, u'<pad>': 0}) # c2 is changed after passing as argument\r\n>>> print(v2.stoi)\r\ndefaultdict(<function _default_unk_index at 0x10b4aa758>, {1: 1, u'<pad>': 0, 2: 2}) # resulting vocabulary is wrong\r\n```\n", "before_files": [{"content": "from __future__ import unicode_literals\nimport array\nfrom collections import defaultdict\nimport io\nimport logging\nimport os\nimport zipfile\n\nimport six\nfrom six.moves.urllib.request import urlretrieve\nimport torch\nfrom tqdm import tqdm\nimport tarfile\n\nfrom .utils import reporthook\n\nlogger = logging.getLogger(__name__)\n\n\nclass Vocab(object):\n \"\"\"Defines a vocabulary object that will be used to numericalize a field.\n\n Attributes:\n freqs: A collections.Counter object holding the frequencies of tokens\n in the data used to build the Vocab.\n stoi: A collections.defaultdict instance mapping token strings to\n numerical identifiers.\n itos: A list of token strings indexed by their numerical identifiers.\n \"\"\"\n def __init__(self, counter, max_size=None, min_freq=1, specials=['<pad>'],\n vectors=None):\n \"\"\"Create a Vocab object from a collections.Counter.\n\n Arguments:\n counter: collections.Counter object holding the frequencies of\n each value found in the data.\n max_size: The maximum size of the vocabulary, or None for no\n maximum. Default: None.\n min_freq: The minimum frequency needed to include a token in the\n vocabulary. Values less than 1 will be set to 1. Default: 1.\n specials: The list of special tokens (e.g., padding or eos) that\n will be prepended to the vocabulary in addition to an <unk>\n token. 
Default: ['<pad>']\n vectors: One of either the available pretrained vectors\n or custom pretrained vectors (see Vocab.load_vectors);\n or a list of aforementioned vectors\n \"\"\"\n self.freqs = counter.copy()\n min_freq = max(min_freq, 1)\n counter.update(specials)\n\n self.stoi = defaultdict(_default_unk_index)\n self.stoi.update({tok: i for i, tok in enumerate(specials)})\n self.itos = list(specials)\n\n counter.subtract({tok: counter[tok] for tok in specials})\n max_size = None if max_size is None else max_size + len(self.itos)\n\n # sort by frequency, then alphabetically\n words_and_frequencies = sorted(counter.items(), key=lambda tup: tup[0])\n words_and_frequencies.sort(key=lambda tup: tup[1], reverse=True)\n\n for word, freq in words_and_frequencies:\n if freq < min_freq or len(self.itos) == max_size:\n break\n self.itos.append(word)\n self.stoi[word] = len(self.itos) - 1\n\n self.vectors = None\n if vectors is not None:\n self.load_vectors(vectors)\n\n def __eq__(self, other):\n if self.freqs != other.freqs:\n return False\n if self.stoi != other.stoi:\n return False\n if self.itos != other.itos:\n return False\n if self.vectors != other.vectors:\n return False\n return True\n\n def __len__(self):\n return len(self.itos)\n\n def extend(self, v, sort=False):\n words = sorted(v.itos) if sort else v.itos\n for w in words:\n if w not in self.stoi:\n self.itos.append(w)\n self.stoi[w] = len(self.itos) - 1\n\n def load_vectors(self, vectors):\n \"\"\"\n Arguments:\n vectors: one of or a list containing instantiations of the\n GloVe, CharNGram, or Vectors classes. Alternatively, one\n of or a list of available pretrained vectors:\n charngram.100d\n fasttext.en.300d\n fasttext.simple.300d\n glove.42B.300d\n glove.840B.300d\n glove.twitter.27B.25d\n glove.twitter.27B.50d\n glove.twitter.27B.100d\n glove.twitter.27B.200d\n glove.6B.50d\n glove.6B.100d\n glove.6B.200d\n glove.6B.300d\n \"\"\"\n if not isinstance(vectors, list):\n vectors = [vectors]\n for idx, vector in enumerate(vectors):\n if six.PY2 and isinstance(vector, str):\n vector = six.text_type(vector)\n if isinstance(vector, six.string_types):\n # Convert the string pretrained vector identifier\n # to a Vectors object\n if vector not in pretrained_aliases:\n raise ValueError(\n \"Got string input vector {}, but allowed pretrained \"\n \"vectors are {}\".format(\n vector, list(pretrained_aliases.keys())))\n vectors[idx] = pretrained_aliases[vector]()\n elif not isinstance(vector, Vectors):\n raise ValueError(\n \"Got input vectors of type {}, expected str or \"\n \"Vectors object\".format(type(vector)))\n\n tot_dim = sum(v.dim for v in vectors)\n self.vectors = torch.Tensor(len(self), tot_dim)\n for i, token in enumerate(self.itos):\n start_dim = 0\n for v in vectors:\n end_dim = start_dim + v.dim\n self.vectors[i][start_dim:end_dim] = v[token.strip()]\n start_dim = end_dim\n assert(start_dim == tot_dim)\n\n def set_vectors(self, stoi, vectors, dim, unk_init=torch.Tensor.zero_):\n \"\"\"\n Set the vectors for the Vocab instance from a collection of Tensors.\n\n Arguments:\n stoi: A dictionary of string to the index of the associated vector\n in the `vectors` input argument.\n vectors: An indexed iterable (or other structure supporting __getitem__) that\n given an input index, returns a FloatTensor representing the vector\n for the token associated with the index. 
For example,\n vector[stoi[\"string\"]] should return the vector for \"string\".\n dim: The dimensionality of the vectors.\n unk_init (callback): by default, initialize out-of-vocabulary word vectors\n to zero vectors; can be any function that takes in a Tensor and\n returns a Tensor of the same size. Default: torch.Tensor.zero_\n \"\"\"\n self.vectors = torch.Tensor(len(self), dim)\n for i, token in enumerate(self.itos):\n wv_index = stoi.get(token, None)\n if wv_index is not None:\n self.vectors[i] = vectors[wv_index]\n else:\n self.vectors[i] = unk_init(self.vectors[i])\n\n\nclass SubwordVocab(Vocab):\n\n def __init__(self, counter, max_size=None, specials=['<pad>'],\n vectors=None, unk_init=torch.Tensor.zero_, expand_vocab=False):\n \"\"\"Create a revtok subword vocabulary from a collections.Counter.\n\n Arguments:\n counter: collections.Counter object holding the frequencies of\n each word found in the data.\n max_size: The maximum size of the subword vocabulary, or None for no\n maximum. Default: None.\n specials: The list of special tokens (e.g., padding or eos) that\n will be prepended to the vocabulary in addition to an <unk>\n token.\n \"\"\"\n try:\n import revtok\n except ImportError:\n print(\"Please install revtok.\")\n raise\n\n self.stoi = defaultdict(_default_unk_index)\n self.stoi.update({tok: i for i, tok in enumerate(specials)})\n self.itos = specials\n\n self.segment = revtok.SubwordSegmenter(counter, max_size)\n\n max_size = None if max_size is None else max_size + len(self.itos)\n\n # sort by frequency/entropy, then alphabetically\n toks = sorted(self.segment.vocab.items(),\n key=lambda tup: (len(tup[0]) != 1, -tup[1], tup[0]))\n\n for tok, _ in toks:\n self.itos.append(tok)\n self.stoi[tok] = len(self.itos) - 1\n\n if vectors is not None:\n self.load_vectors(vectors, unk_init=unk_init, expand_vocab=expand_vocab)\n\n\nclass Vectors(object):\n\n def __init__(self, name, cache='.vector_cache',\n url=None, unk_init=torch.Tensor.zero_):\n \"\"\"Arguments:\n name: name of the file that contains the vectors\n cache: directory for cached vectors\n url: url for download if vectors not found in cache\n unk_init (callback): by default, initalize out-of-vocabulary word vectors\n to zero vectors; can be any function that takes in a Tensor and\n returns a Tensor of the same size\n \"\"\"\n self.unk_init = unk_init\n self.cache(name, cache, url=url)\n\n def __getitem__(self, token):\n if token in self.stoi:\n return self.vectors[self.stoi[token]]\n else:\n return self.unk_init(torch.Tensor(1, self.dim))\n\n def cache(self, name, cache, url=None):\n if os.path.isfile(name):\n path = name\n path_pt = os.path.join(cache, os.path.basename(name)) + '.pt'\n else:\n path = os.path.join(cache, name)\n path_pt = path + '.pt'\n\n if not os.path.isfile(path_pt):\n if not os.path.isfile(path) and url:\n logger.info('Downloading vectors from {}'.format(url))\n if not os.path.exists(cache):\n os.makedirs(cache)\n dest = os.path.join(cache, os.path.basename(url))\n if not os.path.isfile(dest):\n with tqdm(unit='B', unit_scale=True, miniters=1, desc=dest) as t:\n urlretrieve(url, dest, reporthook=reporthook(t))\n logger.info('Extracting vectors into {}'.format(cache))\n ext = os.path.splitext(dest)[1][1:]\n if ext == 'zip':\n with zipfile.ZipFile(dest, \"r\") as zf:\n zf.extractall(cache)\n elif ext == 'gz':\n with tarfile.open(dest, 'r:gz') as tar:\n tar.extractall(path=cache)\n if not os.path.isfile(path):\n raise RuntimeError('no vectors found at {}'.format(path))\n\n # str call is necessary 
for Python 2/3 compatibility, since\n # argument must be Python 2 str (Python 3 bytes) or\n # Python 3 str (Python 2 unicode)\n itos, vectors, dim = [], array.array(str('d')), None\n\n # Try to read the whole file with utf-8 encoding.\n binary_lines = False\n try:\n with io.open(path, encoding=\"utf8\") as f:\n lines = [line for line in f]\n # If there are malformed lines, read in binary mode\n # and manually decode each word from utf-8\n except:\n logger.warning(\"Could not read {} as UTF8 file, \"\n \"reading file as bytes and skipping \"\n \"words with malformed UTF8.\".format(path))\n with open(path, 'rb') as f:\n lines = [line for line in f]\n binary_lines = True\n\n logger.info(\"Loading vectors from {}\".format(path))\n for line in tqdm(lines, total=len(lines)):\n # Explicitly splitting on \" \" is important, so we don't\n # get rid of Unicode non-breaking spaces in the vectors.\n entries = line.rstrip().split(b\" \" if binary_lines else \" \")\n\n word, entries = entries[0], entries[1:]\n if dim is None and len(entries) > 1:\n dim = len(entries)\n elif len(entries) == 1:\n logger.warning(\"Skipping token {} with 1-dimensional \"\n \"vector {}; likely a header\".format(word, entries))\n continue\n elif dim != len(entries):\n raise RuntimeError(\n \"Vector for token {} has {} dimensions, but previously \"\n \"read vectors have {} dimensions. All vectors must have \"\n \"the same number of dimensions.\".format(word, len(entries), dim))\n\n if binary_lines:\n try:\n if isinstance(word, six.binary_type):\n word = word.decode('utf-8')\n except:\n logger.info(\"Skipping non-UTF8 token {}\".format(repr(word)))\n continue\n vectors.extend(float(x) for x in entries)\n itos.append(word)\n\n self.itos = itos\n self.stoi = {word: i for i, word in enumerate(itos)}\n self.vectors = torch.Tensor(vectors).view(-1, dim)\n self.dim = dim\n logger.info('Saving vectors to {}'.format(path_pt))\n torch.save((self.itos, self.stoi, self.vectors, self.dim), path_pt)\n else:\n logger.info('Loading vectors from {}'.format(path_pt))\n self.itos, self.stoi, self.vectors, self.dim = torch.load(path_pt)\n\n\nclass GloVe(Vectors):\n url = {\n '42B': 'http://nlp.stanford.edu/data/glove.42B.300d.zip',\n '840B': 'http://nlp.stanford.edu/data/glove.840B.300d.zip',\n 'twitter.27B': 'http://nlp.stanford.edu/data/glove.twitter.27B.zip',\n '6B': 'http://nlp.stanford.edu/data/glove.6B.zip',\n }\n\n def __init__(self, name='840B', dim=300, **kwargs):\n url = self.url[name]\n name = 'glove.{}.{}d.txt'.format(name, str(dim))\n super(GloVe, self).__init__(name, url=url, **kwargs)\n\n\nclass FastText(Vectors):\n\n url_base = 'https://s3-us-west-1.amazonaws.com/fasttext-vectors/wiki.{}.vec'\n\n def __init__(self, language=\"en\", **kwargs):\n url = self.url_base.format(language)\n name = os.path.basename(url)\n super(FastText, self).__init__(name, url=url, **kwargs)\n\n\nclass CharNGram(Vectors):\n\n name = 'charNgram.txt'\n url = ('http://www.logos.t.u-tokyo.ac.jp/~hassy/publications/arxiv2016jmt/'\n 'jmt_pre-trained_embeddings.tar.gz')\n\n def __init__(self, **kwargs):\n super(CharNGram, self).__init__(self.name, url=self.url, **kwargs)\n\n def __getitem__(self, token):\n vector = torch.Tensor(1, self.dim).zero_()\n if token == \"<unk>\":\n return self.unk_init(vector)\n # These literals need to be coerced to unicode for Python 2 compatibility\n # when we try to join them with read ngrams from the files.\n chars = ['#BEGIN#'] + list(token) + ['#END#']\n num_vectors = 0\n for n in [2, 3, 4]:\n end = len(chars) - n + 1\n grams = 
[chars[i:(i + n)] for i in range(end)]\n for gram in grams:\n gram_key = '{}gram-{}'.format(n, ''.join(gram))\n if gram_key in self.stoi:\n vector += self.vectors[self.stoi[gram_key]]\n num_vectors += 1\n if num_vectors > 0:\n vector /= num_vectors\n else:\n vector = self.unk_init(vector)\n return vector\n\n\ndef _default_unk_index():\n return 0\n\n\npretrained_aliases = {\n \"charngram.100d\": lambda: CharNGram(),\n \"fasttext.en.300d\": lambda: FastText(language=\"en\"),\n \"fasttext.simple.300d\": lambda: FastText(language=\"simple\"),\n \"glove.42B.300d\": lambda: GloVe(name=\"42B\", dim=\"300\"),\n \"glove.840B.300d\": lambda: GloVe(name=\"840B\", dim=\"300\"),\n \"glove.twitter.27B.25d\": lambda: GloVe(name=\"twitter.27B\", dim=\"25\"),\n \"glove.twitter.27B.50d\": lambda: GloVe(name=\"twitter.27B\", dim=\"50\"),\n \"glove.twitter.27B.100d\": lambda: GloVe(name=\"twitter.27B\", dim=\"100\"),\n \"glove.twitter.27B.200d\": lambda: GloVe(name=\"twitter.27B\", dim=\"200\"),\n \"glove.6B.50d\": lambda: GloVe(name=\"6B\", dim=\"50\"),\n \"glove.6B.100d\": lambda: GloVe(name=\"6B\", dim=\"100\"),\n \"glove.6B.200d\": lambda: GloVe(name=\"6B\", dim=\"200\"),\n \"glove.6B.300d\": lambda: GloVe(name=\"6B\", dim=\"300\")\n}\n", "path": "torchtext/vocab.py"}], "after_files": [{"content": "from __future__ import unicode_literals\nimport array\nfrom collections import defaultdict\nimport io\nimport logging\nimport os\nimport zipfile\n\nimport six\nfrom six.moves.urllib.request import urlretrieve\nimport torch\nfrom tqdm import tqdm\nimport tarfile\n\nfrom .utils import reporthook\n\nlogger = logging.getLogger(__name__)\n\n\nclass Vocab(object):\n \"\"\"Defines a vocabulary object that will be used to numericalize a field.\n\n Attributes:\n freqs: A collections.Counter object holding the frequencies of tokens\n in the data used to build the Vocab.\n stoi: A collections.defaultdict instance mapping token strings to\n numerical identifiers.\n itos: A list of token strings indexed by their numerical identifiers.\n \"\"\"\n def __init__(self, counter, max_size=None, min_freq=1, specials=['<pad>'],\n vectors=None):\n \"\"\"Create a Vocab object from a collections.Counter.\n\n Arguments:\n counter: collections.Counter object holding the frequencies of\n each value found in the data.\n max_size: The maximum size of the vocabulary, or None for no\n maximum. Default: None.\n min_freq: The minimum frequency needed to include a token in the\n vocabulary. Values less than 1 will be set to 1. Default: 1.\n specials: The list of special tokens (e.g., padding or eos) that\n will be prepended to the vocabulary in addition to an <unk>\n token. 
Default: ['<pad>']\n vectors: One of either the available pretrained vectors\n or custom pretrained vectors (see Vocab.load_vectors);\n or a list of aforementioned vectors\n \"\"\"\n self.freqs = counter\n counter = counter.copy()\n min_freq = max(min_freq, 1)\n counter.update(specials)\n\n self.stoi = defaultdict(_default_unk_index)\n self.stoi.update({tok: i for i, tok in enumerate(specials)})\n self.itos = list(specials)\n\n counter.subtract({tok: counter[tok] for tok in specials})\n max_size = None if max_size is None else max_size + len(self.itos)\n\n # sort by frequency, then alphabetically\n words_and_frequencies = sorted(counter.items(), key=lambda tup: tup[0])\n words_and_frequencies.sort(key=lambda tup: tup[1], reverse=True)\n\n for word, freq in words_and_frequencies:\n if freq < min_freq or len(self.itos) == max_size:\n break\n self.itos.append(word)\n self.stoi[word] = len(self.itos) - 1\n\n self.vectors = None\n if vectors is not None:\n self.load_vectors(vectors)\n\n def __eq__(self, other):\n if self.freqs != other.freqs:\n return False\n if self.stoi != other.stoi:\n return False\n if self.itos != other.itos:\n return False\n if self.vectors != other.vectors:\n return False\n return True\n\n def __len__(self):\n return len(self.itos)\n\n def extend(self, v, sort=False):\n words = sorted(v.itos) if sort else v.itos\n for w in words:\n if w not in self.stoi:\n self.itos.append(w)\n self.stoi[w] = len(self.itos) - 1\n\n def load_vectors(self, vectors):\n \"\"\"\n Arguments:\n vectors: one of or a list containing instantiations of the\n GloVe, CharNGram, or Vectors classes. Alternatively, one\n of or a list of available pretrained vectors:\n charngram.100d\n fasttext.en.300d\n fasttext.simple.300d\n glove.42B.300d\n glove.840B.300d\n glove.twitter.27B.25d\n glove.twitter.27B.50d\n glove.twitter.27B.100d\n glove.twitter.27B.200d\n glove.6B.50d\n glove.6B.100d\n glove.6B.200d\n glove.6B.300d\n \"\"\"\n if not isinstance(vectors, list):\n vectors = [vectors]\n for idx, vector in enumerate(vectors):\n if six.PY2 and isinstance(vector, str):\n vector = six.text_type(vector)\n if isinstance(vector, six.string_types):\n # Convert the string pretrained vector identifier\n # to a Vectors object\n if vector not in pretrained_aliases:\n raise ValueError(\n \"Got string input vector {}, but allowed pretrained \"\n \"vectors are {}\".format(\n vector, list(pretrained_aliases.keys())))\n vectors[idx] = pretrained_aliases[vector]()\n elif not isinstance(vector, Vectors):\n raise ValueError(\n \"Got input vectors of type {}, expected str or \"\n \"Vectors object\".format(type(vector)))\n\n tot_dim = sum(v.dim for v in vectors)\n self.vectors = torch.Tensor(len(self), tot_dim)\n for i, token in enumerate(self.itos):\n start_dim = 0\n for v in vectors:\n end_dim = start_dim + v.dim\n self.vectors[i][start_dim:end_dim] = v[token.strip()]\n start_dim = end_dim\n assert(start_dim == tot_dim)\n\n def set_vectors(self, stoi, vectors, dim, unk_init=torch.Tensor.zero_):\n \"\"\"\n Set the vectors for the Vocab instance from a collection of Tensors.\n\n Arguments:\n stoi: A dictionary of string to the index of the associated vector\n in the `vectors` input argument.\n vectors: An indexed iterable (or other structure supporting __getitem__) that\n given an input index, returns a FloatTensor representing the vector\n for the token associated with the index. 
For example,\n vector[stoi[\"string\"]] should return the vector for \"string\".\n dim: The dimensionality of the vectors.\n unk_init (callback): by default, initialize out-of-vocabulary word vectors\n to zero vectors; can be any function that takes in a Tensor and\n returns a Tensor of the same size. Default: torch.Tensor.zero_\n \"\"\"\n self.vectors = torch.Tensor(len(self), dim)\n for i, token in enumerate(self.itos):\n wv_index = stoi.get(token, None)\n if wv_index is not None:\n self.vectors[i] = vectors[wv_index]\n else:\n self.vectors[i] = unk_init(self.vectors[i])\n\n\nclass SubwordVocab(Vocab):\n\n def __init__(self, counter, max_size=None, specials=['<pad>'],\n vectors=None, unk_init=torch.Tensor.zero_, expand_vocab=False):\n \"\"\"Create a revtok subword vocabulary from a collections.Counter.\n\n Arguments:\n counter: collections.Counter object holding the frequencies of\n each word found in the data.\n max_size: The maximum size of the subword vocabulary, or None for no\n maximum. Default: None.\n specials: The list of special tokens (e.g., padding or eos) that\n will be prepended to the vocabulary in addition to an <unk>\n token.\n \"\"\"\n try:\n import revtok\n except ImportError:\n print(\"Please install revtok.\")\n raise\n\n self.stoi = defaultdict(_default_unk_index)\n self.stoi.update({tok: i for i, tok in enumerate(specials)})\n self.itos = specials\n\n self.segment = revtok.SubwordSegmenter(counter, max_size)\n\n max_size = None if max_size is None else max_size + len(self.itos)\n\n # sort by frequency/entropy, then alphabetically\n toks = sorted(self.segment.vocab.items(),\n key=lambda tup: (len(tup[0]) != 1, -tup[1], tup[0]))\n\n for tok, _ in toks:\n self.itos.append(tok)\n self.stoi[tok] = len(self.itos) - 1\n\n if vectors is not None:\n self.load_vectors(vectors, unk_init=unk_init, expand_vocab=expand_vocab)\n\n\nclass Vectors(object):\n\n def __init__(self, name, cache='.vector_cache',\n url=None, unk_init=torch.Tensor.zero_):\n \"\"\"Arguments:\n name: name of the file that contains the vectors\n cache: directory for cached vectors\n url: url for download if vectors not found in cache\n unk_init (callback): by default, initalize out-of-vocabulary word vectors\n to zero vectors; can be any function that takes in a Tensor and\n returns a Tensor of the same size\n \"\"\"\n self.unk_init = unk_init\n self.cache(name, cache, url=url)\n\n def __getitem__(self, token):\n if token in self.stoi:\n return self.vectors[self.stoi[token]]\n else:\n return self.unk_init(torch.Tensor(1, self.dim))\n\n def cache(self, name, cache, url=None):\n if os.path.isfile(name):\n path = name\n path_pt = os.path.join(cache, os.path.basename(name)) + '.pt'\n else:\n path = os.path.join(cache, name)\n path_pt = path + '.pt'\n\n if not os.path.isfile(path_pt):\n if not os.path.isfile(path) and url:\n logger.info('Downloading vectors from {}'.format(url))\n if not os.path.exists(cache):\n os.makedirs(cache)\n dest = os.path.join(cache, os.path.basename(url))\n if not os.path.isfile(dest):\n with tqdm(unit='B', unit_scale=True, miniters=1, desc=dest) as t:\n urlretrieve(url, dest, reporthook=reporthook(t))\n logger.info('Extracting vectors into {}'.format(cache))\n ext = os.path.splitext(dest)[1][1:]\n if ext == 'zip':\n with zipfile.ZipFile(dest, \"r\") as zf:\n zf.extractall(cache)\n elif ext == 'gz':\n with tarfile.open(dest, 'r:gz') as tar:\n tar.extractall(path=cache)\n if not os.path.isfile(path):\n raise RuntimeError('no vectors found at {}'.format(path))\n\n # str call is necessary 
for Python 2/3 compatibility, since\n # argument must be Python 2 str (Python 3 bytes) or\n # Python 3 str (Python 2 unicode)\n itos, vectors, dim = [], array.array(str('d')), None\n\n # Try to read the whole file with utf-8 encoding.\n binary_lines = False\n try:\n with io.open(path, encoding=\"utf8\") as f:\n lines = [line for line in f]\n # If there are malformed lines, read in binary mode\n # and manually decode each word from utf-8\n except:\n logger.warning(\"Could not read {} as UTF8 file, \"\n \"reading file as bytes and skipping \"\n \"words with malformed UTF8.\".format(path))\n with open(path, 'rb') as f:\n lines = [line for line in f]\n binary_lines = True\n\n logger.info(\"Loading vectors from {}\".format(path))\n for line in tqdm(lines, total=len(lines)):\n # Explicitly splitting on \" \" is important, so we don't\n # get rid of Unicode non-breaking spaces in the vectors.\n entries = line.rstrip().split(b\" \" if binary_lines else \" \")\n\n word, entries = entries[0], entries[1:]\n if dim is None and len(entries) > 1:\n dim = len(entries)\n elif len(entries) == 1:\n logger.warning(\"Skipping token {} with 1-dimensional \"\n \"vector {}; likely a header\".format(word, entries))\n continue\n elif dim != len(entries):\n raise RuntimeError(\n \"Vector for token {} has {} dimensions, but previously \"\n \"read vectors have {} dimensions. All vectors must have \"\n \"the same number of dimensions.\".format(word, len(entries), dim))\n\n if binary_lines:\n try:\n if isinstance(word, six.binary_type):\n word = word.decode('utf-8')\n except:\n logger.info(\"Skipping non-UTF8 token {}\".format(repr(word)))\n continue\n vectors.extend(float(x) for x in entries)\n itos.append(word)\n\n self.itos = itos\n self.stoi = {word: i for i, word in enumerate(itos)}\n self.vectors = torch.Tensor(vectors).view(-1, dim)\n self.dim = dim\n logger.info('Saving vectors to {}'.format(path_pt))\n torch.save((self.itos, self.stoi, self.vectors, self.dim), path_pt)\n else:\n logger.info('Loading vectors from {}'.format(path_pt))\n self.itos, self.stoi, self.vectors, self.dim = torch.load(path_pt)\n\n\nclass GloVe(Vectors):\n url = {\n '42B': 'http://nlp.stanford.edu/data/glove.42B.300d.zip',\n '840B': 'http://nlp.stanford.edu/data/glove.840B.300d.zip',\n 'twitter.27B': 'http://nlp.stanford.edu/data/glove.twitter.27B.zip',\n '6B': 'http://nlp.stanford.edu/data/glove.6B.zip',\n }\n\n def __init__(self, name='840B', dim=300, **kwargs):\n url = self.url[name]\n name = 'glove.{}.{}d.txt'.format(name, str(dim))\n super(GloVe, self).__init__(name, url=url, **kwargs)\n\n\nclass FastText(Vectors):\n\n url_base = 'https://s3-us-west-1.amazonaws.com/fasttext-vectors/wiki.{}.vec'\n\n def __init__(self, language=\"en\", **kwargs):\n url = self.url_base.format(language)\n name = os.path.basename(url)\n super(FastText, self).__init__(name, url=url, **kwargs)\n\n\nclass CharNGram(Vectors):\n\n name = 'charNgram.txt'\n url = ('http://www.logos.t.u-tokyo.ac.jp/~hassy/publications/arxiv2016jmt/'\n 'jmt_pre-trained_embeddings.tar.gz')\n\n def __init__(self, **kwargs):\n super(CharNGram, self).__init__(self.name, url=self.url, **kwargs)\n\n def __getitem__(self, token):\n vector = torch.Tensor(1, self.dim).zero_()\n if token == \"<unk>\":\n return self.unk_init(vector)\n # These literals need to be coerced to unicode for Python 2 compatibility\n # when we try to join them with read ngrams from the files.\n chars = ['#BEGIN#'] + list(token) + ['#END#']\n num_vectors = 0\n for n in [2, 3, 4]:\n end = len(chars) - n + 1\n grams = 
[chars[i:(i + n)] for i in range(end)]\n for gram in grams:\n gram_key = '{}gram-{}'.format(n, ''.join(gram))\n if gram_key in self.stoi:\n vector += self.vectors[self.stoi[gram_key]]\n num_vectors += 1\n if num_vectors > 0:\n vector /= num_vectors\n else:\n vector = self.unk_init(vector)\n return vector\n\n\ndef _default_unk_index():\n return 0\n\n\npretrained_aliases = {\n \"charngram.100d\": lambda: CharNGram(),\n \"fasttext.en.300d\": lambda: FastText(language=\"en\"),\n \"fasttext.simple.300d\": lambda: FastText(language=\"simple\"),\n \"glove.42B.300d\": lambda: GloVe(name=\"42B\", dim=\"300\"),\n \"glove.840B.300d\": lambda: GloVe(name=\"840B\", dim=\"300\"),\n \"glove.twitter.27B.25d\": lambda: GloVe(name=\"twitter.27B\", dim=\"25\"),\n \"glove.twitter.27B.50d\": lambda: GloVe(name=\"twitter.27B\", dim=\"50\"),\n \"glove.twitter.27B.100d\": lambda: GloVe(name=\"twitter.27B\", dim=\"100\"),\n \"glove.twitter.27B.200d\": lambda: GloVe(name=\"twitter.27B\", dim=\"200\"),\n \"glove.6B.50d\": lambda: GloVe(name=\"6B\", dim=\"50\"),\n \"glove.6B.100d\": lambda: GloVe(name=\"6B\", dim=\"100\"),\n \"glove.6B.200d\": lambda: GloVe(name=\"6B\", dim=\"200\"),\n \"glove.6B.300d\": lambda: GloVe(name=\"6B\", dim=\"300\")\n}\n", "path": "torchtext/vocab.py"}]} |
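With the patch above applied, the session from the issue should no longer pollute its inputs. A hedged regression check (assumes a torchtext build of this vintage, where `Vocab` still accepts a bare `Counter`):
```python
from collections import Counter
from torchtext.vocab import Vocab

c1, c2 = Counter([1]), Counter([2])
v1, v2 = Vocab(c1), Vocab(c2)
assert c1 == Counter([1]) and c2 == Counter([2])   # callers' Counters stay clean
assert 2 in v2.stoi and 1 not in v2.stoi           # second vocab no longer inherits c1's token
```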
gh_patches_debug_1124 | rasdani/github-patches | git_diff | ManimCommunity__manim-3599 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Sector.get_arc_center() returns a reference rather than a copy, which causes unexpected rotate behavior.
## Description of bug / unexpected behavior
manim version: `0.18.0`
```python
class SectorArcCenterRotate(Scene):
def construct(self):
self.add(NumberPlane())
sector = Sector(outer_radius=2, start_angle=0, angle=PI / 6)
sector.shift(LEFT * 3)
self.add(sector)
self.wait()
self.add(sector.copy().set_color(RED).set_opacity(0.5))
        sector.rotate(PI / 6, about_point=sector.get_arc_center()) # unexpected
# sector.rotate(PI / 6, about_point=deepcopy(sector.get_arc_center()))
self.wait()
```
## Expected behavior
expected behavior:
<img width="572" alt="image" src="https://github.com/ManimCommunity/manim/assets/1728633/b134ee09-0450-48f8-9800-35cb882285e8">
the actual behavior:
<img width="591" alt="image" src="https://github.com/ManimCommunity/manim/assets/1728633/01519761-976a-450f-a9fd-530217915f78">
## System specifications
<details><summary>System Details</summary>
- OS: macOS 14.2.1 (23C71)
- RAM:
- Python version: 3.11.5
- Installed modules (provide output from `pip list`):
</details>
I think the "problem" is that `get_arc_center` returns a reference to the Sector's point:
https://github.com/ManimCommunity/manim/blob/3b496ea2e6f1a6ab7829398590b41e17bfbd34c1/manim/mobject/geometry/arc.py#L381-L392
But other methods, such as `get_corner`, return a copy.
Not sure whether this is a feature or a bug. Thanks.
--- END ISSUE ---
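A minimal sketch of the workaround the reproduction hints at (the commented-out `deepcopy` line): snapshot the pivot before rotating, so the in-place rotation cannot drag its own `about_point` along with the sector's points. This assumes manim 0.18.x is installed and is not part of the report.
```python
import numpy as np
from manim import PI, Sector

sector = Sector(outer_radius=2, start_angle=0, angle=PI / 6)
pivot = np.array(sector.get_arc_center())  # detached copy, not a view into sector.points
sector.rotate(PI / 6, about_point=pivot)   # pivot stays fixed while the points move
```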
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `manim/mobject/geometry/arc.py`
Content:
```
1 r"""Mobjects that are curved.
2
3 Examples
4 --------
5 .. manim:: UsefulAnnotations
6 :save_last_frame:
7
8 class UsefulAnnotations(Scene):
9 def construct(self):
10 m0 = Dot()
11 m1 = AnnotationDot()
12 m2 = LabeledDot("ii")
13 m3 = LabeledDot(MathTex(r"\alpha").set_color(ORANGE))
14 m4 = CurvedArrow(2*LEFT, 2*RIGHT, radius= -5)
15 m5 = CurvedArrow(2*LEFT, 2*RIGHT, radius= 8)
16 m6 = CurvedDoubleArrow(ORIGIN, 2*RIGHT)
17
18 self.add(m0, m1, m2, m3, m4, m5, m6)
19 for i, mobj in enumerate(self.mobjects):
20 mobj.shift(DOWN * (i-3))
21
22 """
23
24 from __future__ import annotations
25
26 __all__ = [
27 "TipableVMobject",
28 "Arc",
29 "ArcBetweenPoints",
30 "CurvedArrow",
31 "CurvedDoubleArrow",
32 "Circle",
33 "Dot",
34 "AnnotationDot",
35 "LabeledDot",
36 "Ellipse",
37 "AnnularSector",
38 "Sector",
39 "Annulus",
40 "CubicBezier",
41 "ArcPolygon",
42 "ArcPolygonFromArcs",
43 ]
44
45 import itertools
46 import warnings
47 from typing import TYPE_CHECKING
48
49 import numpy as np
50 from typing_extensions import Self
51
52 from manim.constants import *
53 from manim.mobject.opengl.opengl_compatibility import ConvertToOpenGL
54 from manim.mobject.types.vectorized_mobject import VGroup, VMobject
55 from manim.utils.color import BLACK, BLUE, RED, WHITE, ParsableManimColor
56 from manim.utils.iterables import adjacent_pairs
57 from manim.utils.space_ops import (
58 angle_of_vector,
59 cartesian_to_spherical,
60 line_intersection,
61 perpendicular_bisector,
62 rotate_vector,
63 )
64
65 if TYPE_CHECKING:
66 import manim.mobject.geometry.tips as tips
67 from manim.mobject.mobject import Mobject
68 from manim.mobject.text.tex_mobject import SingleStringMathTex, Tex
69 from manim.mobject.text.text_mobject import Text
70 from manim.typing import CubicBezierPoints, Point3D, QuadraticBezierPoints, Vector3D
71
72
73 class TipableVMobject(VMobject, metaclass=ConvertToOpenGL):
74 """Meant for shared functionality between Arc and Line.
75 Functionality can be classified broadly into these groups:
76
77 * Adding, Creating, Modifying tips
78 - add_tip calls create_tip, before pushing the new tip
79 into the TipableVMobject's list of submobjects
80 - stylistic and positional configuration
81
82 * Checking for tips
83 - Boolean checks for whether the TipableVMobject has a tip
84 and a starting tip
85
86 * Getters
87 - Straightforward accessors, returning information pertaining
88 to the TipableVMobject instance's tip(s), its length etc
89 """
90
91 def __init__(
92 self,
93 tip_length: float = DEFAULT_ARROW_TIP_LENGTH,
94 normal_vector: Vector3D = OUT,
95 tip_style: dict = {},
96 **kwargs,
97 ) -> None:
98 self.tip_length: float = tip_length
99 self.normal_vector: Vector3D = normal_vector
100 self.tip_style: dict = tip_style
101 super().__init__(**kwargs)
102
103 # Adding, Creating, Modifying tips
104
105 def add_tip(
106 self,
107 tip: tips.ArrowTip | None = None,
108 tip_shape: type[tips.ArrowTip] | None = None,
109 tip_length: float | None = None,
110 tip_width: float | None = None,
111 at_start: bool = False,
112 ) -> Self:
113 """Adds a tip to the TipableVMobject instance, recognising
114 that the endpoints might need to be switched if it's
115 a 'starting tip' or not.
116 """
117 if tip is None:
118 tip = self.create_tip(tip_shape, tip_length, tip_width, at_start)
119 else:
120 self.position_tip(tip, at_start)
121 self.reset_endpoints_based_on_tip(tip, at_start)
122 self.asign_tip_attr(tip, at_start)
123 self.add(tip)
124 return self
125
126 def create_tip(
127 self,
128 tip_shape: type[tips.ArrowTip] | None = None,
129 tip_length: float = None,
130 tip_width: float = None,
131 at_start: bool = False,
132 ):
133 """Stylises the tip, positions it spatially, and returns
134 the newly instantiated tip to the caller.
135 """
136 tip = self.get_unpositioned_tip(tip_shape, tip_length, tip_width)
137 self.position_tip(tip, at_start)
138 return tip
139
140 def get_unpositioned_tip(
141 self,
142 tip_shape: type[tips.ArrowTip] | None = None,
143 tip_length: float | None = None,
144 tip_width: float | None = None,
145 ):
146 """Returns a tip that has been stylistically configured,
147 but has not yet been given a position in space.
148 """
149 from manim.mobject.geometry.tips import ArrowTriangleFilledTip
150
151 style = {}
152
153 if tip_shape is None:
154 tip_shape = ArrowTriangleFilledTip
155
156 if tip_shape is ArrowTriangleFilledTip:
157 if tip_width is None:
158 tip_width = self.get_default_tip_length()
159 style.update({"width": tip_width})
160 if tip_length is None:
161 tip_length = self.get_default_tip_length()
162
163 color = self.get_color()
164 style.update({"fill_color": color, "stroke_color": color})
165 style.update(self.tip_style)
166 tip = tip_shape(length=tip_length, **style)
167 return tip
168
169 def position_tip(self, tip: tips.ArrowTip, at_start: bool = False):
170 # Last two control points, defining both
171 # the end, and the tangency direction
172 if at_start:
173 anchor = self.get_start()
174 handle = self.get_first_handle()
175 else:
176 handle = self.get_last_handle()
177 anchor = self.get_end()
178 angles = cartesian_to_spherical(handle - anchor)
179 tip.rotate(
180 angles[1] - PI - tip.tip_angle,
181 ) # Rotates the tip along the azimuthal
182 if not hasattr(self, "_init_positioning_axis"):
183 axis = [
184 np.sin(angles[1]),
185 -np.cos(angles[1]),
186 0,
187 ] # Obtains the perpendicular of the tip
188 tip.rotate(
189 -angles[2] + PI / 2,
190 axis=axis,
191 ) # Rotates the tip along the vertical wrt the axis
192 self._init_positioning_axis = axis
193 tip.shift(anchor - tip.tip_point)
194 return tip
195
196 def reset_endpoints_based_on_tip(self, tip: tips.ArrowTip, at_start: bool) -> Self:
197 if self.get_length() == 0:
198 # Zero length, put_start_and_end_on wouldn't work
199 return self
200
201 if at_start:
202 self.put_start_and_end_on(tip.base, self.get_end())
203 else:
204 self.put_start_and_end_on(self.get_start(), tip.base)
205 return self
206
207 def asign_tip_attr(self, tip: tips.ArrowTip, at_start: bool) -> Self:
208 if at_start:
209 self.start_tip = tip
210 else:
211 self.tip = tip
212 return self
213
214 # Checking for tips
215
216 def has_tip(self) -> bool:
217 return hasattr(self, "tip") and self.tip in self
218
219 def has_start_tip(self) -> bool:
220 return hasattr(self, "start_tip") and self.start_tip in self
221
222 # Getters
223
224 def pop_tips(self) -> VGroup:
225 start, end = self.get_start_and_end()
226 result = self.get_group_class()()
227 if self.has_tip():
228 result.add(self.tip)
229 self.remove(self.tip)
230 if self.has_start_tip():
231 result.add(self.start_tip)
232 self.remove(self.start_tip)
233 self.put_start_and_end_on(start, end)
234 return result
235
236 def get_tips(self) -> VGroup:
237 """Returns a VGroup (collection of VMobjects) containing
238 the TipableVMObject instance's tips.
239 """
240 result = self.get_group_class()()
241 if hasattr(self, "tip"):
242 result.add(self.tip)
243 if hasattr(self, "start_tip"):
244 result.add(self.start_tip)
245 return result
246
247 def get_tip(self):
248 """Returns the TipableVMobject instance's (first) tip,
249 otherwise throws an exception."""
250 tips = self.get_tips()
251 if len(tips) == 0:
252 raise Exception("tip not found")
253 else:
254 return tips[0]
255
256 def get_default_tip_length(self) -> float:
257 return self.tip_length
258
259 def get_first_handle(self) -> Point3D:
260 return self.points[1]
261
262 def get_last_handle(self) -> Point3D:
263 return self.points[-2]
264
265 def get_end(self) -> Point3D:
266 if self.has_tip():
267 return self.tip.get_start()
268 else:
269 return super().get_end()
270
271 def get_start(self) -> Point3D:
272 if self.has_start_tip():
273 return self.start_tip.get_start()
274 else:
275 return super().get_start()
276
277 def get_length(self) -> np.floating:
278 start, end = self.get_start_and_end()
279 return np.linalg.norm(start - end)
280
281
282 class Arc(TipableVMobject):
283 """A circular arc.
284
285 Examples
286 --------
287 A simple arc of angle Pi.
288
289 .. manim:: ArcExample
290 :save_last_frame:
291
292 class ArcExample(Scene):
293 def construct(self):
294 self.add(Arc(angle=PI))
295 """
296
297 def __init__(
298 self,
299 radius: float = 1.0,
300 start_angle: float = 0,
301 angle: float = TAU / 4,
302 num_components: int = 9,
303 arc_center: Point3D = ORIGIN,
304 **kwargs,
305 ):
306 if radius is None: # apparently None is passed by ArcBetweenPoints
307 radius = 1.0
308 self.radius = radius
309 self.num_components: int = num_components
310 self.arc_center: Point3D = arc_center
311 self.start_angle: float = start_angle
312 self.angle: float = angle
313 self._failed_to_get_center: bool = False
314 super().__init__(**kwargs)
315
316 def generate_points(self) -> None:
317 self._set_pre_positioned_points()
318 self.scale(self.radius, about_point=ORIGIN)
319 self.shift(self.arc_center)
320
321 # Points are set a bit differently when rendering via OpenGL.
322 # TODO: refactor Arc so that only one strategy for setting points
323 # has to be used.
324 def init_points(self) -> None:
325 self.set_points(
326 Arc._create_quadratic_bezier_points(
327 angle=self.angle,
328 start_angle=self.start_angle,
329 n_components=self.num_components,
330 ),
331 )
332 self.scale(self.radius, about_point=ORIGIN)
333 self.shift(self.arc_center)
334
335 @staticmethod
336 def _create_quadratic_bezier_points(
337 angle: float, start_angle: float = 0, n_components: int = 8
338 ) -> QuadraticBezierPoints:
339 samples = np.array(
340 [
341 [np.cos(a), np.sin(a), 0]
342 for a in np.linspace(
343 start_angle,
344 start_angle + angle,
345 2 * n_components + 1,
346 )
347 ],
348 )
349 theta = angle / n_components
350 samples[1::2] /= np.cos(theta / 2)
351
352 points = np.zeros((3 * n_components, 3))
353 points[0::3] = samples[0:-1:2]
354 points[1::3] = samples[1::2]
355 points[2::3] = samples[2::2]
356 return points
357
358 def _set_pre_positioned_points(self) -> None:
359 anchors = np.array(
360 [
361 np.cos(a) * RIGHT + np.sin(a) * UP
362 for a in np.linspace(
363 self.start_angle,
364 self.start_angle + self.angle,
365 self.num_components,
366 )
367 ],
368 )
369 # Figure out which control points will give the
370 # Appropriate tangent lines to the circle
371 d_theta = self.angle / (self.num_components - 1.0)
372 tangent_vectors = np.zeros(anchors.shape)
373 # Rotate all 90 degrees, via (x, y) -> (-y, x)
374 tangent_vectors[:, 1] = anchors[:, 0]
375 tangent_vectors[:, 0] = -anchors[:, 1]
376 # Use tangent vectors to deduce anchors
377 handles1 = anchors[:-1] + (d_theta / 3) * tangent_vectors[:-1]
378 handles2 = anchors[1:] - (d_theta / 3) * tangent_vectors[1:]
379 self.set_anchors_and_handles(anchors[:-1], handles1, handles2, anchors[1:])
380
381 def get_arc_center(self, warning: bool = True) -> Point3D:
382 """Looks at the normals to the first two
383 anchors, and finds their intersection points
384 """
385 # First two anchors and handles
386 a1, h1, h2, a2 = self.points[:4]
387
388 if np.all(a1 == a2):
389 # For a1 and a2 to lie at the same point arc radius
390 # must be zero. Thus arc_center will also lie at
391 # that point.
392 return a1
393 # Tangent vectors
394 t1 = h1 - a1
395 t2 = h2 - a2
396 # Normals
397 n1 = rotate_vector(t1, TAU / 4)
398 n2 = rotate_vector(t2, TAU / 4)
399 try:
400 return line_intersection(line1=(a1, a1 + n1), line2=(a2, a2 + n2))
401 except Exception:
402 if warning:
403 warnings.warn("Can't find Arc center, using ORIGIN instead")
404 self._failed_to_get_center = True
405 return np.array(ORIGIN)
406
407 def move_arc_center_to(self, point: Point3D) -> Self:
408 self.shift(point - self.get_arc_center())
409 return self
410
411 def stop_angle(self) -> float:
412 return angle_of_vector(self.points[-1] - self.get_arc_center()) % TAU
413
414
415 class ArcBetweenPoints(Arc):
416 """Inherits from Arc and additionally takes 2 points between which the arc is spanned.
417
418 Example
419 -------
420 .. manim:: ArcBetweenPointsExample
421
422 class ArcBetweenPointsExample(Scene):
423 def construct(self):
424 circle = Circle(radius=2, stroke_color=GREY)
425 dot_1 = Dot(color=GREEN).move_to([2, 0, 0]).scale(0.5)
426 dot_1_text = Tex("(2,0)").scale(0.5).next_to(dot_1, RIGHT).set_color(BLUE)
427 dot_2 = Dot(color=GREEN).move_to([0, 2, 0]).scale(0.5)
428 dot_2_text = Tex("(0,2)").scale(0.5).next_to(dot_2, UP).set_color(BLUE)
429 arc= ArcBetweenPoints(start=2 * RIGHT, end=2 * UP, stroke_color=YELLOW)
430 self.add(circle, dot_1, dot_2, dot_1_text, dot_2_text)
431 self.play(Create(arc))
432 """
433
434 def __init__(
435 self,
436 start: Point3D,
437 end: Point3D,
438 angle: float = TAU / 4,
439 radius: float = None,
440 **kwargs,
441 ) -> None:
442 if radius is not None:
443 self.radius = radius
444 if radius < 0:
445 sign = -2
446 radius *= -1
447 else:
448 sign = 2
449 halfdist = np.linalg.norm(np.array(start) - np.array(end)) / 2
450 if radius < halfdist:
451 raise ValueError(
452 """ArcBetweenPoints called with a radius that is
453 smaller than half the distance between the points.""",
454 )
455 arc_height = radius - np.sqrt(radius**2 - halfdist**2)
456 angle = np.arccos((radius - arc_height) / radius) * sign
457
458 super().__init__(radius=radius, angle=angle, **kwargs)
459 if angle == 0:
460 self.set_points_as_corners([LEFT, RIGHT])
461 self.put_start_and_end_on(start, end)
462
463 if radius is None:
464 center = self.get_arc_center(warning=False)
465 if not self._failed_to_get_center:
466 self.radius = np.linalg.norm(np.array(start) - np.array(center))
467 else:
468 self.radius = np.inf
469
470
471 class CurvedArrow(ArcBetweenPoints):
472 def __init__(self, start_point: Point3D, end_point: Point3D, **kwargs) -> None:
473 from manim.mobject.geometry.tips import ArrowTriangleFilledTip
474
475 tip_shape = kwargs.pop("tip_shape", ArrowTriangleFilledTip)
476 super().__init__(start_point, end_point, **kwargs)
477 self.add_tip(tip_shape=tip_shape)
478
479
480 class CurvedDoubleArrow(CurvedArrow):
481 def __init__(self, start_point: Point3D, end_point: Point3D, **kwargs) -> None:
482 if "tip_shape_end" in kwargs:
483 kwargs["tip_shape"] = kwargs.pop("tip_shape_end")
484 from manim.mobject.geometry.tips import ArrowTriangleFilledTip
485
486 tip_shape_start = kwargs.pop("tip_shape_start", ArrowTriangleFilledTip)
487 super().__init__(start_point, end_point, **kwargs)
488 self.add_tip(at_start=True, tip_shape=tip_shape_start)
489
490
491 class Circle(Arc):
492 """A circle.
493
494 Parameters
495 ----------
496 color
497 The color of the shape.
498 kwargs
499 Additional arguments to be passed to :class:`Arc`
500
501 Examples
502 --------
503 .. manim:: CircleExample
504 :save_last_frame:
505
506 class CircleExample(Scene):
507 def construct(self):
508 circle_1 = Circle(radius=1.0)
509 circle_2 = Circle(radius=1.5, color=GREEN)
510 circle_3 = Circle(radius=1.0, color=BLUE_B, fill_opacity=1)
511
512 circle_group = Group(circle_1, circle_2, circle_3).arrange(buff=1)
513 self.add(circle_group)
514 """
515
516 def __init__(
517 self,
518 radius: float | None = None,
519 color: ParsableManimColor = RED,
520 **kwargs,
521 ) -> None:
522 super().__init__(
523 radius=radius,
524 start_angle=0,
525 angle=TAU,
526 color=color,
527 **kwargs,
528 )
529
530 def surround(
531 self,
532 mobject: Mobject,
533 dim_to_match: int = 0,
534 stretch: bool = False,
535 buffer_factor: float = 1.2,
536 ) -> Self:
537 """Modifies a circle so that it surrounds a given mobject.
538
539 Parameters
540 ----------
541 mobject
542 The mobject that the circle will be surrounding.
543 dim_to_match
544 buffer_factor
545 Scales the circle with respect to the mobject. A `buffer_factor` < 1 makes the circle smaller than the mobject.
546 stretch
547 Stretches the circle to fit more tightly around the mobject. Note: Does not work with :class:`Line`
548
549 Examples
550 --------
551 .. manim:: CircleSurround
552 :save_last_frame:
553
554 class CircleSurround(Scene):
555 def construct(self):
556 triangle1 = Triangle()
557 circle1 = Circle().surround(triangle1)
558 group1 = Group(triangle1,circle1) # treat the two mobjects as one
559
560 line2 = Line()
561 circle2 = Circle().surround(line2, buffer_factor=2.0)
562 group2 = Group(line2,circle2)
563
564 # buffer_factor < 1, so the circle is smaller than the square
565 square3 = Square()
566 circle3 = Circle().surround(square3, buffer_factor=0.5)
567 group3 = Group(square3, circle3)
568
569 group = Group(group1, group2, group3).arrange(buff=1)
570 self.add(group)
571 """
572
573 # Ignores dim_to_match and stretch; result will always be a circle
574 # TODO: Perhaps create an ellipse class to handle single-dimension stretching
575
576 # Something goes wrong here when surrounding lines?
577 # TODO: Figure out and fix
578 self.replace(mobject, dim_to_match, stretch)
579
580 self.width = np.sqrt(mobject.width**2 + mobject.height**2)
581 return self.scale(buffer_factor)
582
583 def point_at_angle(self, angle: float) -> Point3D:
584 """Returns the position of a point on the circle.
585
586 Parameters
587 ----------
588 angle
589 The angle of the point along the circle in radians.
590
591 Returns
592 -------
593 :class:`numpy.ndarray`
594 The location of the point along the circle's circumference.
595
596 Examples
597 --------
598 .. manim:: PointAtAngleExample
599 :save_last_frame:
600
601 class PointAtAngleExample(Scene):
602 def construct(self):
603 circle = Circle(radius=2.0)
604 p1 = circle.point_at_angle(PI/2)
605 p2 = circle.point_at_angle(270*DEGREES)
606
607 s1 = Square(side_length=0.25).move_to(p1)
608 s2 = Square(side_length=0.25).move_to(p2)
609 self.add(circle, s1, s2)
610
611 """
612
613 start_angle = angle_of_vector(self.points[0] - self.get_center())
614 proportion = (angle - start_angle) / TAU
615 proportion -= np.floor(proportion)
616 return self.point_from_proportion(proportion)
617
618 @staticmethod
619 def from_three_points(p1: Point3D, p2: Point3D, p3: Point3D, **kwargs) -> Self:
620 """Returns a circle passing through the specified
621 three points.
622
623 Example
624 -------
625 .. manim:: CircleFromPointsExample
626 :save_last_frame:
627
628 class CircleFromPointsExample(Scene):
629 def construct(self):
630 circle = Circle.from_three_points(LEFT, LEFT + UP, UP * 2, color=RED)
631 dots = VGroup(
632 Dot(LEFT),
633 Dot(LEFT + UP),
634 Dot(UP * 2),
635 )
636 self.add(NumberPlane(), circle, dots)
637 """
638 center = line_intersection(
639 perpendicular_bisector([p1, p2]),
640 perpendicular_bisector([p2, p3]),
641 )
642 radius = np.linalg.norm(p1 - center)
643 return Circle(radius=radius, **kwargs).shift(center)
644
645
646 class Dot(Circle):
647 """A circle with a very small radius.
648
649 Parameters
650 ----------
651 point
652 The location of the dot.
653 radius
654 The radius of the dot.
655 stroke_width
656 The thickness of the outline of the dot.
657 fill_opacity
658 The opacity of the dot's fill_colour
659 color
660 The color of the dot.
661 kwargs
662 Additional arguments to be passed to :class:`Circle`
663
664 Examples
665 --------
666 .. manim:: DotExample
667 :save_last_frame:
668
669 class DotExample(Scene):
670 def construct(self):
671 dot1 = Dot(point=LEFT, radius=0.08)
672 dot2 = Dot(point=ORIGIN)
673 dot3 = Dot(point=RIGHT)
674 self.add(dot1,dot2,dot3)
675 """
676
677 def __init__(
678 self,
679 point: Point3D = ORIGIN,
680 radius: float = DEFAULT_DOT_RADIUS,
681 stroke_width: float = 0,
682 fill_opacity: float = 1.0,
683 color: ParsableManimColor = WHITE,
684 **kwargs,
685 ) -> None:
686 super().__init__(
687 arc_center=point,
688 radius=radius,
689 stroke_width=stroke_width,
690 fill_opacity=fill_opacity,
691 color=color,
692 **kwargs,
693 )
694
695
696 class AnnotationDot(Dot):
697 """A dot with bigger radius and bold stroke to annotate scenes."""
698
699 def __init__(
700 self,
701 radius: float = DEFAULT_DOT_RADIUS * 1.3,
702 stroke_width: float = 5,
703 stroke_color: ParsableManimColor = WHITE,
704 fill_color: ParsableManimColor = BLUE,
705 **kwargs,
706 ) -> None:
707 super().__init__(
708 radius=radius,
709 stroke_width=stroke_width,
710 stroke_color=stroke_color,
711 fill_color=fill_color,
712 **kwargs,
713 )
714
715
716 class LabeledDot(Dot):
717 """A :class:`Dot` containing a label in its center.
718
719 Parameters
720 ----------
721 label
722 The label of the :class:`Dot`. This is rendered as :class:`~.MathTex`
723 by default (i.e., when passing a :class:`str`), but other classes
724 representing rendered strings like :class:`~.Text` or :class:`~.Tex`
725 can be passed as well.
726 radius
727 The radius of the :class:`Dot`. If ``None`` (the default), the radius
728 is calculated based on the size of the ``label``.
729
730 Examples
731 --------
732 .. manim:: SeveralLabeledDots
733 :save_last_frame:
734
735 class SeveralLabeledDots(Scene):
736 def construct(self):
737 sq = Square(fill_color=RED, fill_opacity=1)
738 self.add(sq)
739 dot1 = LabeledDot(Tex("42", color=RED))
740 dot2 = LabeledDot(MathTex("a", color=GREEN))
741 dot3 = LabeledDot(Text("ii", color=BLUE))
742 dot4 = LabeledDot("3")
743 dot1.next_to(sq, UL)
744 dot2.next_to(sq, UR)
745 dot3.next_to(sq, DL)
746 dot4.next_to(sq, DR)
747 self.add(dot1, dot2, dot3, dot4)
748 """
749
750 def __init__(
751 self,
752 label: str | SingleStringMathTex | Text | Tex,
753 radius: float | None = None,
754 **kwargs,
755 ) -> None:
756 if isinstance(label, str):
757 from manim import MathTex
758
759 rendered_label = MathTex(label, color=BLACK)
760 else:
761 rendered_label = label
762
763 if radius is None:
764 radius = 0.1 + max(rendered_label.width, rendered_label.height) / 2
765 super().__init__(radius=radius, **kwargs)
766 rendered_label.move_to(self.get_center())
767 self.add(rendered_label)
768
769
770 class Ellipse(Circle):
771 """A circular shape; oval, circle.
772
773 Parameters
774 ----------
775 width
776 The horizontal width of the ellipse.
777 height
778 The vertical height of the ellipse.
779 kwargs
780 Additional arguments to be passed to :class:`Circle`.
781
782 Examples
783 --------
784 .. manim:: EllipseExample
785 :save_last_frame:
786
787 class EllipseExample(Scene):
788 def construct(self):
789 ellipse_1 = Ellipse(width=2.0, height=4.0, color=BLUE_B)
790 ellipse_2 = Ellipse(width=4.0, height=1.0, color=BLUE_D)
791 ellipse_group = Group(ellipse_1,ellipse_2).arrange(buff=1)
792 self.add(ellipse_group)
793 """
794
795 def __init__(self, width: float = 2, height: float = 1, **kwargs) -> None:
796 super().__init__(**kwargs)
797 self.stretch_to_fit_width(width)
798 self.stretch_to_fit_height(height)
799
800
801 class AnnularSector(Arc):
802 """A sector of an annulus.
803
804
805 Parameters
806 ----------
807 inner_radius
808 The inside radius of the Annular Sector.
809 outer_radius
810 The outside radius of the Annular Sector.
811 angle
812 The clockwise angle of the Annular Sector.
813 start_angle
814 The starting clockwise angle of the Annular Sector.
815 fill_opacity
816 The opacity of the color filled in the Annular Sector.
817 stroke_width
818 The stroke width of the Annular Sector.
819 color
820 The color filled into the Annular Sector.
821
822 Examples
823 --------
824 .. manim:: AnnularSectorExample
825 :save_last_frame:
826
827 class AnnularSectorExample(Scene):
828 def construct(self):
829 # Changes background color to clearly visualize changes in fill_opacity.
830 self.camera.background_color = WHITE
831
832 # The default parameter start_angle is 0, so the AnnularSector starts from the +x-axis.
833 s1 = AnnularSector(color=YELLOW).move_to(2 * UL)
834
835 # Different inner_radius and outer_radius than the default.
836 s2 = AnnularSector(inner_radius=1.5, outer_radius=2, angle=45 * DEGREES, color=RED).move_to(2 * UR)
837
838 # fill_opacity is typically a number > 0 and <= 1. If fill_opacity=0, the AnnularSector is transparent.
839 s3 = AnnularSector(inner_radius=1, outer_radius=1.5, angle=PI, fill_opacity=0.25, color=BLUE).move_to(2 * DL)
840
841 # With a negative value for the angle, the AnnularSector is drawn clockwise from the start value.
842 s4 = AnnularSector(inner_radius=1, outer_radius=1.5, angle=-3 * PI / 2, color=GREEN).move_to(2 * DR)
843
844 self.add(s1, s2, s3, s4)
845 """
846
847 def __init__(
848 self,
849 inner_radius: float = 1,
850 outer_radius: float = 2,
851 angle: float = TAU / 4,
852 start_angle: float = 0,
853 fill_opacity: float = 1,
854 stroke_width: float = 0,
855 color: ParsableManimColor = WHITE,
856 **kwargs,
857 ) -> None:
858 self.inner_radius = inner_radius
859 self.outer_radius = outer_radius
860 super().__init__(
861 start_angle=start_angle,
862 angle=angle,
863 fill_opacity=fill_opacity,
864 stroke_width=stroke_width,
865 color=color,
866 **kwargs,
867 )
868
869 def generate_points(self) -> None:
870 inner_arc, outer_arc = (
871 Arc(
872 start_angle=self.start_angle,
873 angle=self.angle,
874 radius=radius,
875 arc_center=self.arc_center,
876 )
877 for radius in (self.inner_radius, self.outer_radius)
878 )
879 outer_arc.reverse_points()
880 self.append_points(inner_arc.points)
881 self.add_line_to(outer_arc.points[0])
882 self.append_points(outer_arc.points)
883 self.add_line_to(inner_arc.points[0])
884
885 init_points = generate_points
886
887
888 class Sector(AnnularSector):
889 """A sector of a circle.
890
891 Examples
892 --------
893 .. manim:: ExampleSector
894 :save_last_frame:
895
896 class ExampleSector(Scene):
897 def construct(self):
898 sector = Sector(outer_radius=2, inner_radius=1)
899 sector2 = Sector(outer_radius=2.5, inner_radius=0.8).move_to([-3, 0, 0])
900 sector.set_color(RED)
901 sector2.set_color(PINK)
902 self.add(sector, sector2)
903 """
904
905 def __init__(
906 self, outer_radius: float = 1, inner_radius: float = 0, **kwargs
907 ) -> None:
908 super().__init__(inner_radius=inner_radius, outer_radius=outer_radius, **kwargs)
909
910
911 class Annulus(Circle):
912 """Region between two concentric :class:`Circles <.Circle>`.
913
914 Parameters
915 ----------
916 inner_radius
917 The radius of the inner :class:`Circle`.
918 outer_radius
919 The radius of the outer :class:`Circle`.
920 kwargs
921 Additional arguments to be passed to :class:`Annulus`
922
923 Examples
924 --------
925 .. manim:: AnnulusExample
926 :save_last_frame:
927
928 class AnnulusExample(Scene):
929 def construct(self):
930 annulus_1 = Annulus(inner_radius=0.5, outer_radius=1).shift(UP)
931 annulus_2 = Annulus(inner_radius=0.3, outer_radius=0.6, color=RED).next_to(annulus_1, DOWN)
932 self.add(annulus_1, annulus_2)
933 """
934
935 def __init__(
936 self,
937 inner_radius: float | None = 1,
938 outer_radius: float | None = 2,
939 fill_opacity: float = 1,
940 stroke_width: float = 0,
941 color: ParsableManimColor = WHITE,
942 mark_paths_closed: bool = False,
943 **kwargs,
944 ) -> None:
945 self.mark_paths_closed = mark_paths_closed # is this even used?
946 self.inner_radius = inner_radius
947 self.outer_radius = outer_radius
948 super().__init__(
949 fill_opacity=fill_opacity, stroke_width=stroke_width, color=color, **kwargs
950 )
951
952 def generate_points(self) -> None:
953 self.radius = self.outer_radius
954 outer_circle = Circle(radius=self.outer_radius)
955 inner_circle = Circle(radius=self.inner_radius)
956 inner_circle.reverse_points()
957 self.append_points(outer_circle.points)
958 self.append_points(inner_circle.points)
959 self.shift(self.arc_center)
960
961 init_points = generate_points
962
963
964 class CubicBezier(VMobject, metaclass=ConvertToOpenGL):
965 """A cubic Bézier curve.
966
967 Example
968 -------
969 .. manim:: BezierSplineExample
970 :save_last_frame:
971
972 class BezierSplineExample(Scene):
973 def construct(self):
974 p1 = np.array([-3, 1, 0])
975 p1b = p1 + [1, 0, 0]
976 d1 = Dot(point=p1).set_color(BLUE)
977 l1 = Line(p1, p1b)
978 p2 = np.array([3, -1, 0])
979 p2b = p2 - [1, 0, 0]
980 d2 = Dot(point=p2).set_color(RED)
981 l2 = Line(p2, p2b)
982 bezier = CubicBezier(p1b, p1b + 3 * RIGHT, p2b - 3 * RIGHT, p2b)
983 self.add(l1, d1, l2, d2, bezier)
984
985 """
986
987 def __init__(
988 self,
989 start_anchor: CubicBezierPoints,
990 start_handle: CubicBezierPoints,
991 end_handle: CubicBezierPoints,
992 end_anchor: CubicBezierPoints,
993 **kwargs,
994 ) -> None:
995 super().__init__(**kwargs)
996 self.add_cubic_bezier_curve(start_anchor, start_handle, end_handle, end_anchor)
997
998
999 class ArcPolygon(VMobject, metaclass=ConvertToOpenGL):
1000 """A generalized polygon allowing for points to be connected with arcs.
1001
1002 This version tries to stick close to the way :class:`Polygon` is used. Points
1003 can be passed to it directly which are used to generate the according arcs
1004 (using :class:`ArcBetweenPoints`). An angle or radius can be passed to it to
1005 use across all arcs, but to configure arcs individually an ``arc_config`` list
1006 has to be passed with the syntax explained below.
1007
1008 Parameters
1009 ----------
1010 vertices
1011 A list of vertices, start and end points for the arc segments.
1012 angle
1013 The angle used for constructing the arcs. If no other parameters
1014 are set, this angle is used to construct all arcs.
1015 radius
1016 The circle radius used to construct the arcs. If specified,
1017 overrides the specified ``angle``.
1018 arc_config
1019 When passing a ``dict``, its content will be passed as keyword
1020 arguments to :class:`~.ArcBetweenPoints`. Otherwise, a list
1021 of dictionaries containing values that are passed as keyword
1022 arguments for every individual arc can be passed.
1023 kwargs
1024 Further keyword arguments that are passed to the constructor of
1025 :class:`~.VMobject`.
1026
1027 Attributes
1028 ----------
1029 arcs : :class:`list`
1030 The arcs created from the input parameters::
1031
1032 >>> from manim import ArcPolygon
1033 >>> ap = ArcPolygon([0, 0, 0], [2, 0, 0], [0, 2, 0])
1034 >>> ap.arcs
1035 [ArcBetweenPoints, ArcBetweenPoints, ArcBetweenPoints]
1036
1037
1038 .. tip::
1039
1040 Two instances of :class:`ArcPolygon` can be transformed properly into one
1041 another as well. Be advised that any arc initialized with ``angle=0``
1042 will actually be a straight line, so if a straight section should seamlessly
1043 transform into an arced section or vice versa, initialize the straight section
1044 with a negligible angle instead (such as ``angle=0.0001``).
1045
1046 .. note::
1047 There is an alternative version (:class:`ArcPolygonFromArcs`) that is instantiated
1048 with pre-defined arcs.
1049
1050 See Also
1051 --------
1052 :class:`ArcPolygonFromArcs`
1053
1054
1055 Examples
1056 --------
1057 .. manim:: SeveralArcPolygons
1058
1059 class SeveralArcPolygons(Scene):
1060 def construct(self):
1061 a = [0, 0, 0]
1062 b = [2, 0, 0]
1063 c = [0, 2, 0]
1064 ap1 = ArcPolygon(a, b, c, radius=2)
1065 ap2 = ArcPolygon(a, b, c, angle=45*DEGREES)
1066 ap3 = ArcPolygon(a, b, c, arc_config={'radius': 1.7, 'color': RED})
1067 ap4 = ArcPolygon(a, b, c, color=RED, fill_opacity=1,
1068 arc_config=[{'radius': 1.7, 'color': RED},
1069 {'angle': 20*DEGREES, 'color': BLUE},
1070 {'radius': 1}])
1071 ap_group = VGroup(ap1, ap2, ap3, ap4).arrange()
1072 self.play(*[Create(ap) for ap in [ap1, ap2, ap3, ap4]])
1073 self.wait()
1074
1075 For further examples see :class:`ArcPolygonFromArcs`.
1076 """
1077
1078 def __init__(
1079 self,
1080 *vertices: Point3D,
1081 angle: float = PI / 4,
1082 radius: float | None = None,
1083 arc_config: list[dict] | None = None,
1084 **kwargs,
1085 ) -> None:
1086 n = len(vertices)
1087 point_pairs = [(vertices[k], vertices[(k + 1) % n]) for k in range(n)]
1088
1089 if not arc_config:
1090 if radius:
1091 all_arc_configs = itertools.repeat({"radius": radius}, len(point_pairs))
1092 else:
1093 all_arc_configs = itertools.repeat({"angle": angle}, len(point_pairs))
1094 elif isinstance(arc_config, dict):
1095 all_arc_configs = itertools.repeat(arc_config, len(point_pairs))
1096 else:
1097 assert len(arc_config) == n
1098 all_arc_configs = arc_config
1099
1100 arcs = [
1101 ArcBetweenPoints(*pair, **conf)
1102 for (pair, conf) in zip(point_pairs, all_arc_configs)
1103 ]
1104
1105 super().__init__(**kwargs)
1106 # Adding the arcs like this makes ArcPolygon double as a VGroup.
1107 # Also makes changes to the ArcPolygon, such as scaling, affect
1108 # the arcs, so that their new values are usable.
1109 self.add(*arcs)
1110 for arc in arcs:
1111 self.append_points(arc.points)
1112
1113 # This enables the use of ArcPolygon.arcs as a convenience
1114 # because ArcPolygon[0] returns itself, not the first Arc.
1115 self.arcs = arcs
1116
1117
1118 class ArcPolygonFromArcs(VMobject, metaclass=ConvertToOpenGL):
1119 """A generalized polygon allowing for points to be connected with arcs.
1120
1121 This version takes in pre-defined arcs to generate the arcpolygon and introduces
1122 little new syntax. However unlike :class:`Polygon` it can't be created with points
1123 directly.
1124
1125 For proper appearance the passed arcs should connect seamlessly:
1126 ``[a,b][b,c][c,a]``
1127
1128 If there are any gaps between the arcs, those will be filled in
1129 with straight lines, which can be used deliberately for any straight
1130 sections. Arcs can also be passed as straight lines such as an arc
1131 initialized with ``angle=0``.
1132
1133 Parameters
1134 ----------
1135 arcs
1136 These are the arcs from which the arcpolygon is assembled.
1137 kwargs
1138 Keyword arguments that are passed to the constructor of
1139 :class:`~.VMobject`. Affects how the ArcPolygon itself is drawn,
1140 but doesn't affect passed arcs.
1141
1142 Attributes
1143 ----------
1144 arcs
1145 The arcs used to initialize the ArcPolygonFromArcs::
1146
1147 >>> from manim import ArcPolygonFromArcs, Arc, ArcBetweenPoints
1148 >>> ap = ArcPolygonFromArcs(Arc(), ArcBetweenPoints([1,0,0], [0,1,0]), Arc())
1149 >>> ap.arcs
1150 [Arc, ArcBetweenPoints, Arc]
1151
1152
1153 .. tip::
1154
1155 Two instances of :class:`ArcPolygon` can be transformed properly into
1156 one another as well. Be advised that any arc initialized with ``angle=0``
1157 will actually be a straight line, so if a straight section should seamlessly
1158 transform into an arced section or vice versa, initialize the straight
1159 section with a negligible angle instead (such as ``angle=0.0001``).
1160
1161 .. note::
1162 There is an alternative version (:class:`ArcPolygon`) that can be instantiated
1163 with points.
1164
1165 .. seealso::
1166 :class:`ArcPolygon`
1167
1168 Examples
1169 --------
1170 One example of an arcpolygon is the Reuleaux triangle.
1171 Instead of 3 straight lines connecting the outer points,
1172 a Reuleaux triangle has 3 arcs connecting those points,
1173 making a shape with constant width.
1174
1175 Passed arcs are stored as submobjects in the arcpolygon.
1176 This means that the arcs are changed along with the arcpolygon,
1177 for example when it's shifted, and these arcs can be manipulated
1178 after the arcpolygon has been initialized.
1179
1180 Also both the arcs contained in an :class:`~.ArcPolygonFromArcs`, as well as the
1181 arcpolygon itself are drawn, which affects draw time in :class:`~.Create`
1182 for example. In most cases the arcs themselves don't
1183 need to be drawn, in which case they can be passed as invisible.
1184
1185 .. manim:: ArcPolygonExample
1186
1187 class ArcPolygonExample(Scene):
1188 def construct(self):
1189 arc_conf = {"stroke_width": 0}
1190 poly_conf = {"stroke_width": 10, "stroke_color": BLUE,
1191 "fill_opacity": 1, "color": PURPLE}
1192 a = [-1, 0, 0]
1193 b = [1, 0, 0]
1194 c = [0, np.sqrt(3), 0]
1195 arc0 = ArcBetweenPoints(a, b, radius=2, **arc_conf)
1196 arc1 = ArcBetweenPoints(b, c, radius=2, **arc_conf)
1197 arc2 = ArcBetweenPoints(c, a, radius=2, **arc_conf)
1198 reuleaux_tri = ArcPolygonFromArcs(arc0, arc1, arc2, **poly_conf)
1199 self.play(FadeIn(reuleaux_tri))
1200 self.wait(2)
1201
1202 The arcpolygon itself can also be hidden so that instead only the contained
1203 arcs are drawn. This can be used to easily debug arcs or to highlight them.
1204
1205 .. manim:: ArcPolygonExample2
1206
1207 class ArcPolygonExample2(Scene):
1208 def construct(self):
1209 arc_conf = {"stroke_width": 3, "stroke_color": BLUE,
1210 "fill_opacity": 0.5, "color": GREEN}
1211 poly_conf = {"color": None}
1212 a = [-1, 0, 0]
1213 b = [1, 0, 0]
1214 c = [0, np.sqrt(3), 0]
1215 arc0 = ArcBetweenPoints(a, b, radius=2, **arc_conf)
1216 arc1 = ArcBetweenPoints(b, c, radius=2, **arc_conf)
1217 arc2 = ArcBetweenPoints(c, a, radius=2, stroke_color=RED)
1218 reuleaux_tri = ArcPolygonFromArcs(arc0, arc1, arc2, **poly_conf)
1219 self.play(FadeIn(reuleaux_tri))
1220 self.wait(2)
1221 """
1222
1223 def __init__(self, *arcs: Arc | ArcBetweenPoints, **kwargs) -> None:
1224 if not all(isinstance(m, (Arc, ArcBetweenPoints)) for m in arcs):
1225 raise ValueError(
1226 "All ArcPolygon submobjects must be of type Arc/ArcBetweenPoints",
1227 )
1228 super().__init__(**kwargs)
1229 # Adding the arcs like this makes ArcPolygonFromArcs double as a VGroup.
1230 # Also makes changes to the ArcPolygonFromArcs, such as scaling, affect
1231 # the arcs, so that their new values are usable.
1232 self.add(*arcs)
1233 # This enables the use of ArcPolygonFromArcs.arcs as a convenience
1234 # because ArcPolygonFromArcs[0] returns itself, not the first Arc.
1235 self.arcs = [*arcs]
1236 from .line import Line
1237
1238 for arc1, arc2 in adjacent_pairs(arcs):
1239 self.append_points(arc1.points)
1240 line = Line(arc1.get_end(), arc2.get_start())
1241 len_ratio = line.get_length() / arc1.get_arc_length()
1242 if np.isnan(len_ratio) or np.isinf(len_ratio):
1243 continue
1244 line.insert_n_curves(int(arc1.get_num_curves() * len_ratio))
1245 self.append_points(line.points)
1246
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/manim/mobject/geometry/arc.py b/manim/mobject/geometry/arc.py
--- a/manim/mobject/geometry/arc.py
+++ b/manim/mobject/geometry/arc.py
@@ -389,7 +389,7 @@
# For a1 and a2 to lie at the same point arc radius
# must be zero. Thus arc_center will also lie at
# that point.
- return a1
+ return np.copy(a1)
# Tangent vectors
t1 = h1 - a1
t2 = h2 - a2
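A rough sanity check one might run against a build with this one-line patch applied (sketch only; the assertions are illustrative):
```python
import numpy as np
from manim import PI, Sector

sector = Sector(outer_radius=2, start_angle=0, angle=PI / 6)
before = sector.points.copy()

center = sector.get_arc_center()            # with the patch this is an independent array
center += np.array([1.0, 0.0, 0.0])         # mutating the returned point...
assert np.allclose(sector.points, before)   # ...no longer shifts the sector itself

# rotating about the arc center now matches the "expected behavior" screenshot
sector.rotate(PI / 6, about_point=sector.get_arc_center())
```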
| {"golden_diff": "diff --git a/manim/mobject/geometry/arc.py b/manim/mobject/geometry/arc.py\n--- a/manim/mobject/geometry/arc.py\n+++ b/manim/mobject/geometry/arc.py\n@@ -389,7 +389,7 @@\n # For a1 and a2 to lie at the same point arc radius\n # must be zero. Thus arc_center will also lie at\n # that point.\n- return a1\n+ return np.copy(a1)\n # Tangent vectors\n t1 = h1 - a1\n t2 = h2 - a2\n", "issue": "Sector.get_arc_center() return a reference but not a copy, cause rotate unexpected behavior. \n- ## Description of bug / unexpected behavior\r\n\r\nmanim version: `0.18.0`\r\n\r\n```python\r\nclass SectorArcCenterRotate(Scene):\r\n def construct(self):\r\n self.add(NumberPlane())\r\n sector = Sector(outer_radius=2, start_angle=0, angle=PI / 6)\r\n sector.shift(LEFT * 3)\r\n self.add(sector)\r\n self.wait()\r\n self.add(sector.copy().set_color(RED).set_opacity(0.5))\r\n sector.rotate(PI / 6, about_point=sector.get_arc_center()) # unexcepted\r\n # sector.rotate(PI / 6, about_point=deepcopy(sector.get_arc_center()))\r\n self.wait()\r\n```\r\n\r\n\r\n## Expected behavior\r\n<!-- Add a clear and concise description of what you expected to happen. -->\r\nexpected behavior:\r\n<img width=\"572\" alt=\"image\" src=\"https://github.com/ManimCommunity/manim/assets/1728633/b134ee09-0450-48f8-9800-35cb882285e8\">\r\n\r\nthe actual behavior:\r\n<img width=\"591\" alt=\"image\" src=\"https://github.com/ManimCommunity/manim/assets/1728633/01519761-976a-450f-a9fd-530217915f78\">\r\n\r\n\r\n## System specifications\r\n\r\n<details><summary>System Details</summary>\r\n- OS (MacOS 14.2.1 (23C71)):\r\n- RAM:\r\n- Python version 3.11.5\r\n- Installed modules (provide output from `pip list`):\r\n</details>\r\n\r\nI think the \"problem\" is `get_arc_center` return a reference of Sector's point:\r\nhttps://github.com/ManimCommunity/manim/blob/3b496ea2e6f1a6ab7829398590b41e17bfbd34c1/manim/mobject/geometry/arc.py#L381-L392\r\nBut the other method, such as get_corner return a copy. \r\nNot sure it's a feature or a bug. Thanks. \n", "before_files": [{"content": "r\"\"\"Mobjects that are curved.\n\nExamples\n--------\n.. 
manim:: UsefulAnnotations\n :save_last_frame:\n\n class UsefulAnnotations(Scene):\n def construct(self):\n m0 = Dot()\n m1 = AnnotationDot()\n m2 = LabeledDot(\"ii\")\n m3 = LabeledDot(MathTex(r\"\\alpha\").set_color(ORANGE))\n m4 = CurvedArrow(2*LEFT, 2*RIGHT, radius= -5)\n m5 = CurvedArrow(2*LEFT, 2*RIGHT, radius= 8)\n m6 = CurvedDoubleArrow(ORIGIN, 2*RIGHT)\n\n self.add(m0, m1, m2, m3, m4, m5, m6)\n for i, mobj in enumerate(self.mobjects):\n mobj.shift(DOWN * (i-3))\n\n\"\"\"\n\nfrom __future__ import annotations\n\n__all__ = [\n \"TipableVMobject\",\n \"Arc\",\n \"ArcBetweenPoints\",\n \"CurvedArrow\",\n \"CurvedDoubleArrow\",\n \"Circle\",\n \"Dot\",\n \"AnnotationDot\",\n \"LabeledDot\",\n \"Ellipse\",\n \"AnnularSector\",\n \"Sector\",\n \"Annulus\",\n \"CubicBezier\",\n \"ArcPolygon\",\n \"ArcPolygonFromArcs\",\n]\n\nimport itertools\nimport warnings\nfrom typing import TYPE_CHECKING\n\nimport numpy as np\nfrom typing_extensions import Self\n\nfrom manim.constants import *\nfrom manim.mobject.opengl.opengl_compatibility import ConvertToOpenGL\nfrom manim.mobject.types.vectorized_mobject import VGroup, VMobject\nfrom manim.utils.color import BLACK, BLUE, RED, WHITE, ParsableManimColor\nfrom manim.utils.iterables import adjacent_pairs\nfrom manim.utils.space_ops import (\n angle_of_vector,\n cartesian_to_spherical,\n line_intersection,\n perpendicular_bisector,\n rotate_vector,\n)\n\nif TYPE_CHECKING:\n import manim.mobject.geometry.tips as tips\n from manim.mobject.mobject import Mobject\n from manim.mobject.text.tex_mobject import SingleStringMathTex, Tex\n from manim.mobject.text.text_mobject import Text\n from manim.typing import CubicBezierPoints, Point3D, QuadraticBezierPoints, Vector3D\n\n\nclass TipableVMobject(VMobject, metaclass=ConvertToOpenGL):\n \"\"\"Meant for shared functionality between Arc and Line.\n Functionality can be classified broadly into these groups:\n\n * Adding, Creating, Modifying tips\n - add_tip calls create_tip, before pushing the new tip\n into the TipableVMobject's list of submobjects\n - stylistic and positional configuration\n\n * Checking for tips\n - Boolean checks for whether the TipableVMobject has a tip\n and a starting tip\n\n * Getters\n - Straightforward accessors, returning information pertaining\n to the TipableVMobject instance's tip(s), its length etc\n \"\"\"\n\n def __init__(\n self,\n tip_length: float = DEFAULT_ARROW_TIP_LENGTH,\n normal_vector: Vector3D = OUT,\n tip_style: dict = {},\n **kwargs,\n ) -> None:\n self.tip_length: float = tip_length\n self.normal_vector: Vector3D = normal_vector\n self.tip_style: dict = tip_style\n super().__init__(**kwargs)\n\n # Adding, Creating, Modifying tips\n\n def add_tip(\n self,\n tip: tips.ArrowTip | None = None,\n tip_shape: type[tips.ArrowTip] | None = None,\n tip_length: float | None = None,\n tip_width: float | None = None,\n at_start: bool = False,\n ) -> Self:\n \"\"\"Adds a tip to the TipableVMobject instance, recognising\n that the endpoints might need to be switched if it's\n a 'starting tip' or not.\n \"\"\"\n if tip is None:\n tip = self.create_tip(tip_shape, tip_length, tip_width, at_start)\n else:\n self.position_tip(tip, at_start)\n self.reset_endpoints_based_on_tip(tip, at_start)\n self.asign_tip_attr(tip, at_start)\n self.add(tip)\n return self\n\n def create_tip(\n self,\n tip_shape: type[tips.ArrowTip] | None = None,\n tip_length: float = None,\n tip_width: float = None,\n at_start: bool = False,\n ):\n \"\"\"Stylises the tip, positions it spatially, and returns\n the newly 
instantiated tip to the caller.\n \"\"\"\n tip = self.get_unpositioned_tip(tip_shape, tip_length, tip_width)\n self.position_tip(tip, at_start)\n return tip\n\n def get_unpositioned_tip(\n self,\n tip_shape: type[tips.ArrowTip] | None = None,\n tip_length: float | None = None,\n tip_width: float | None = None,\n ):\n \"\"\"Returns a tip that has been stylistically configured,\n but has not yet been given a position in space.\n \"\"\"\n from manim.mobject.geometry.tips import ArrowTriangleFilledTip\n\n style = {}\n\n if tip_shape is None:\n tip_shape = ArrowTriangleFilledTip\n\n if tip_shape is ArrowTriangleFilledTip:\n if tip_width is None:\n tip_width = self.get_default_tip_length()\n style.update({\"width\": tip_width})\n if tip_length is None:\n tip_length = self.get_default_tip_length()\n\n color = self.get_color()\n style.update({\"fill_color\": color, \"stroke_color\": color})\n style.update(self.tip_style)\n tip = tip_shape(length=tip_length, **style)\n return tip\n\n def position_tip(self, tip: tips.ArrowTip, at_start: bool = False):\n # Last two control points, defining both\n # the end, and the tangency direction\n if at_start:\n anchor = self.get_start()\n handle = self.get_first_handle()\n else:\n handle = self.get_last_handle()\n anchor = self.get_end()\n angles = cartesian_to_spherical(handle - anchor)\n tip.rotate(\n angles[1] - PI - tip.tip_angle,\n ) # Rotates the tip along the azimuthal\n if not hasattr(self, \"_init_positioning_axis\"):\n axis = [\n np.sin(angles[1]),\n -np.cos(angles[1]),\n 0,\n ] # Obtains the perpendicular of the tip\n tip.rotate(\n -angles[2] + PI / 2,\n axis=axis,\n ) # Rotates the tip along the vertical wrt the axis\n self._init_positioning_axis = axis\n tip.shift(anchor - tip.tip_point)\n return tip\n\n def reset_endpoints_based_on_tip(self, tip: tips.ArrowTip, at_start: bool) -> Self:\n if self.get_length() == 0:\n # Zero length, put_start_and_end_on wouldn't work\n return self\n\n if at_start:\n self.put_start_and_end_on(tip.base, self.get_end())\n else:\n self.put_start_and_end_on(self.get_start(), tip.base)\n return self\n\n def asign_tip_attr(self, tip: tips.ArrowTip, at_start: bool) -> Self:\n if at_start:\n self.start_tip = tip\n else:\n self.tip = tip\n return self\n\n # Checking for tips\n\n def has_tip(self) -> bool:\n return hasattr(self, \"tip\") and self.tip in self\n\n def has_start_tip(self) -> bool:\n return hasattr(self, \"start_tip\") and self.start_tip in self\n\n # Getters\n\n def pop_tips(self) -> VGroup:\n start, end = self.get_start_and_end()\n result = self.get_group_class()()\n if self.has_tip():\n result.add(self.tip)\n self.remove(self.tip)\n if self.has_start_tip():\n result.add(self.start_tip)\n self.remove(self.start_tip)\n self.put_start_and_end_on(start, end)\n return result\n\n def get_tips(self) -> VGroup:\n \"\"\"Returns a VGroup (collection of VMobjects) containing\n the TipableVMObject instance's tips.\n \"\"\"\n result = self.get_group_class()()\n if hasattr(self, \"tip\"):\n result.add(self.tip)\n if hasattr(self, \"start_tip\"):\n result.add(self.start_tip)\n return result\n\n def get_tip(self):\n \"\"\"Returns the TipableVMobject instance's (first) tip,\n otherwise throws an exception.\"\"\"\n tips = self.get_tips()\n if len(tips) == 0:\n raise Exception(\"tip not found\")\n else:\n return tips[0]\n\n def get_default_tip_length(self) -> float:\n return self.tip_length\n\n def get_first_handle(self) -> Point3D:\n return self.points[1]\n\n def get_last_handle(self) -> Point3D:\n return self.points[-2]\n\n def 
get_end(self) -> Point3D:\n if self.has_tip():\n return self.tip.get_start()\n else:\n return super().get_end()\n\n def get_start(self) -> Point3D:\n if self.has_start_tip():\n return self.start_tip.get_start()\n else:\n return super().get_start()\n\n def get_length(self) -> np.floating:\n start, end = self.get_start_and_end()\n return np.linalg.norm(start - end)\n\n\nclass Arc(TipableVMobject):\n \"\"\"A circular arc.\n\n Examples\n --------\n A simple arc of angle Pi.\n\n .. manim:: ArcExample\n :save_last_frame:\n\n class ArcExample(Scene):\n def construct(self):\n self.add(Arc(angle=PI))\n \"\"\"\n\n def __init__(\n self,\n radius: float = 1.0,\n start_angle: float = 0,\n angle: float = TAU / 4,\n num_components: int = 9,\n arc_center: Point3D = ORIGIN,\n **kwargs,\n ):\n if radius is None: # apparently None is passed by ArcBetweenPoints\n radius = 1.0\n self.radius = radius\n self.num_components: int = num_components\n self.arc_center: Point3D = arc_center\n self.start_angle: float = start_angle\n self.angle: float = angle\n self._failed_to_get_center: bool = False\n super().__init__(**kwargs)\n\n def generate_points(self) -> None:\n self._set_pre_positioned_points()\n self.scale(self.radius, about_point=ORIGIN)\n self.shift(self.arc_center)\n\n # Points are set a bit differently when rendering via OpenGL.\n # TODO: refactor Arc so that only one strategy for setting points\n # has to be used.\n def init_points(self) -> None:\n self.set_points(\n Arc._create_quadratic_bezier_points(\n angle=self.angle,\n start_angle=self.start_angle,\n n_components=self.num_components,\n ),\n )\n self.scale(self.radius, about_point=ORIGIN)\n self.shift(self.arc_center)\n\n @staticmethod\n def _create_quadratic_bezier_points(\n angle: float, start_angle: float = 0, n_components: int = 8\n ) -> QuadraticBezierPoints:\n samples = np.array(\n [\n [np.cos(a), np.sin(a), 0]\n for a in np.linspace(\n start_angle,\n start_angle + angle,\n 2 * n_components + 1,\n )\n ],\n )\n theta = angle / n_components\n samples[1::2] /= np.cos(theta / 2)\n\n points = np.zeros((3 * n_components, 3))\n points[0::3] = samples[0:-1:2]\n points[1::3] = samples[1::2]\n points[2::3] = samples[2::2]\n return points\n\n def _set_pre_positioned_points(self) -> None:\n anchors = np.array(\n [\n np.cos(a) * RIGHT + np.sin(a) * UP\n for a in np.linspace(\n self.start_angle,\n self.start_angle + self.angle,\n self.num_components,\n )\n ],\n )\n # Figure out which control points will give the\n # Appropriate tangent lines to the circle\n d_theta = self.angle / (self.num_components - 1.0)\n tangent_vectors = np.zeros(anchors.shape)\n # Rotate all 90 degrees, via (x, y) -> (-y, x)\n tangent_vectors[:, 1] = anchors[:, 0]\n tangent_vectors[:, 0] = -anchors[:, 1]\n # Use tangent vectors to deduce anchors\n handles1 = anchors[:-1] + (d_theta / 3) * tangent_vectors[:-1]\n handles2 = anchors[1:] - (d_theta / 3) * tangent_vectors[1:]\n self.set_anchors_and_handles(anchors[:-1], handles1, handles2, anchors[1:])\n\n def get_arc_center(self, warning: bool = True) -> Point3D:\n \"\"\"Looks at the normals to the first two\n anchors, and finds their intersection points\n \"\"\"\n # First two anchors and handles\n a1, h1, h2, a2 = self.points[:4]\n\n if np.all(a1 == a2):\n # For a1 and a2 to lie at the same point arc radius\n # must be zero. 
Thus arc_center will also lie at\n # that point.\n return a1\n # Tangent vectors\n t1 = h1 - a1\n t2 = h2 - a2\n # Normals\n n1 = rotate_vector(t1, TAU / 4)\n n2 = rotate_vector(t2, TAU / 4)\n try:\n return line_intersection(line1=(a1, a1 + n1), line2=(a2, a2 + n2))\n except Exception:\n if warning:\n warnings.warn(\"Can't find Arc center, using ORIGIN instead\")\n self._failed_to_get_center = True\n return np.array(ORIGIN)\n\n def move_arc_center_to(self, point: Point3D) -> Self:\n self.shift(point - self.get_arc_center())\n return self\n\n def stop_angle(self) -> float:\n return angle_of_vector(self.points[-1] - self.get_arc_center()) % TAU\n\n\nclass ArcBetweenPoints(Arc):\n \"\"\"Inherits from Arc and additionally takes 2 points between which the arc is spanned.\n\n Example\n -------\n .. manim:: ArcBetweenPointsExample\n\n class ArcBetweenPointsExample(Scene):\n def construct(self):\n circle = Circle(radius=2, stroke_color=GREY)\n dot_1 = Dot(color=GREEN).move_to([2, 0, 0]).scale(0.5)\n dot_1_text = Tex(\"(2,0)\").scale(0.5).next_to(dot_1, RIGHT).set_color(BLUE)\n dot_2 = Dot(color=GREEN).move_to([0, 2, 0]).scale(0.5)\n dot_2_text = Tex(\"(0,2)\").scale(0.5).next_to(dot_2, UP).set_color(BLUE)\n arc= ArcBetweenPoints(start=2 * RIGHT, end=2 * UP, stroke_color=YELLOW)\n self.add(circle, dot_1, dot_2, dot_1_text, dot_2_text)\n self.play(Create(arc))\n \"\"\"\n\n def __init__(\n self,\n start: Point3D,\n end: Point3D,\n angle: float = TAU / 4,\n radius: float = None,\n **kwargs,\n ) -> None:\n if radius is not None:\n self.radius = radius\n if radius < 0:\n sign = -2\n radius *= -1\n else:\n sign = 2\n halfdist = np.linalg.norm(np.array(start) - np.array(end)) / 2\n if radius < halfdist:\n raise ValueError(\n \"\"\"ArcBetweenPoints called with a radius that is\n smaller than half the distance between the points.\"\"\",\n )\n arc_height = radius - np.sqrt(radius**2 - halfdist**2)\n angle = np.arccos((radius - arc_height) / radius) * sign\n\n super().__init__(radius=radius, angle=angle, **kwargs)\n if angle == 0:\n self.set_points_as_corners([LEFT, RIGHT])\n self.put_start_and_end_on(start, end)\n\n if radius is None:\n center = self.get_arc_center(warning=False)\n if not self._failed_to_get_center:\n self.radius = np.linalg.norm(np.array(start) - np.array(center))\n else:\n self.radius = np.inf\n\n\nclass CurvedArrow(ArcBetweenPoints):\n def __init__(self, start_point: Point3D, end_point: Point3D, **kwargs) -> None:\n from manim.mobject.geometry.tips import ArrowTriangleFilledTip\n\n tip_shape = kwargs.pop(\"tip_shape\", ArrowTriangleFilledTip)\n super().__init__(start_point, end_point, **kwargs)\n self.add_tip(tip_shape=tip_shape)\n\n\nclass CurvedDoubleArrow(CurvedArrow):\n def __init__(self, start_point: Point3D, end_point: Point3D, **kwargs) -> None:\n if \"tip_shape_end\" in kwargs:\n kwargs[\"tip_shape\"] = kwargs.pop(\"tip_shape_end\")\n from manim.mobject.geometry.tips import ArrowTriangleFilledTip\n\n tip_shape_start = kwargs.pop(\"tip_shape_start\", ArrowTriangleFilledTip)\n super().__init__(start_point, end_point, **kwargs)\n self.add_tip(at_start=True, tip_shape=tip_shape_start)\n\n\nclass Circle(Arc):\n \"\"\"A circle.\n\n Parameters\n ----------\n color\n The color of the shape.\n kwargs\n Additional arguments to be passed to :class:`Arc`\n\n Examples\n --------\n .. 
manim:: CircleExample\n :save_last_frame:\n\n class CircleExample(Scene):\n def construct(self):\n circle_1 = Circle(radius=1.0)\n circle_2 = Circle(radius=1.5, color=GREEN)\n circle_3 = Circle(radius=1.0, color=BLUE_B, fill_opacity=1)\n\n circle_group = Group(circle_1, circle_2, circle_3).arrange(buff=1)\n self.add(circle_group)\n \"\"\"\n\n def __init__(\n self,\n radius: float | None = None,\n color: ParsableManimColor = RED,\n **kwargs,\n ) -> None:\n super().__init__(\n radius=radius,\n start_angle=0,\n angle=TAU,\n color=color,\n **kwargs,\n )\n\n def surround(\n self,\n mobject: Mobject,\n dim_to_match: int = 0,\n stretch: bool = False,\n buffer_factor: float = 1.2,\n ) -> Self:\n \"\"\"Modifies a circle so that it surrounds a given mobject.\n\n Parameters\n ----------\n mobject\n The mobject that the circle will be surrounding.\n dim_to_match\n buffer_factor\n Scales the circle with respect to the mobject. A `buffer_factor` < 1 makes the circle smaller than the mobject.\n stretch\n Stretches the circle to fit more tightly around the mobject. Note: Does not work with :class:`Line`\n\n Examples\n --------\n .. manim:: CircleSurround\n :save_last_frame:\n\n class CircleSurround(Scene):\n def construct(self):\n triangle1 = Triangle()\n circle1 = Circle().surround(triangle1)\n group1 = Group(triangle1,circle1) # treat the two mobjects as one\n\n line2 = Line()\n circle2 = Circle().surround(line2, buffer_factor=2.0)\n group2 = Group(line2,circle2)\n\n # buffer_factor < 1, so the circle is smaller than the square\n square3 = Square()\n circle3 = Circle().surround(square3, buffer_factor=0.5)\n group3 = Group(square3, circle3)\n\n group = Group(group1, group2, group3).arrange(buff=1)\n self.add(group)\n \"\"\"\n\n # Ignores dim_to_match and stretch; result will always be a circle\n # TODO: Perhaps create an ellipse class to handle single-dimension stretching\n\n # Something goes wrong here when surrounding lines?\n # TODO: Figure out and fix\n self.replace(mobject, dim_to_match, stretch)\n\n self.width = np.sqrt(mobject.width**2 + mobject.height**2)\n return self.scale(buffer_factor)\n\n def point_at_angle(self, angle: float) -> Point3D:\n \"\"\"Returns the position of a point on the circle.\n\n Parameters\n ----------\n angle\n The angle of the point along the circle in radians.\n\n Returns\n -------\n :class:`numpy.ndarray`\n The location of the point along the circle's circumference.\n\n Examples\n --------\n .. manim:: PointAtAngleExample\n :save_last_frame:\n\n class PointAtAngleExample(Scene):\n def construct(self):\n circle = Circle(radius=2.0)\n p1 = circle.point_at_angle(PI/2)\n p2 = circle.point_at_angle(270*DEGREES)\n\n s1 = Square(side_length=0.25).move_to(p1)\n s2 = Square(side_length=0.25).move_to(p2)\n self.add(circle, s1, s2)\n\n \"\"\"\n\n start_angle = angle_of_vector(self.points[0] - self.get_center())\n proportion = (angle - start_angle) / TAU\n proportion -= np.floor(proportion)\n return self.point_from_proportion(proportion)\n\n @staticmethod\n def from_three_points(p1: Point3D, p2: Point3D, p3: Point3D, **kwargs) -> Self:\n \"\"\"Returns a circle passing through the specified\n three points.\n\n Example\n -------\n .. 
manim:: CircleFromPointsExample\n :save_last_frame:\n\n class CircleFromPointsExample(Scene):\n def construct(self):\n circle = Circle.from_three_points(LEFT, LEFT + UP, UP * 2, color=RED)\n dots = VGroup(\n Dot(LEFT),\n Dot(LEFT + UP),\n Dot(UP * 2),\n )\n self.add(NumberPlane(), circle, dots)\n \"\"\"\n center = line_intersection(\n perpendicular_bisector([p1, p2]),\n perpendicular_bisector([p2, p3]),\n )\n radius = np.linalg.norm(p1 - center)\n return Circle(radius=radius, **kwargs).shift(center)\n\n\nclass Dot(Circle):\n \"\"\"A circle with a very small radius.\n\n Parameters\n ----------\n point\n The location of the dot.\n radius\n The radius of the dot.\n stroke_width\n The thickness of the outline of the dot.\n fill_opacity\n The opacity of the dot's fill_colour\n color\n The color of the dot.\n kwargs\n Additional arguments to be passed to :class:`Circle`\n\n Examples\n --------\n .. manim:: DotExample\n :save_last_frame:\n\n class DotExample(Scene):\n def construct(self):\n dot1 = Dot(point=LEFT, radius=0.08)\n dot2 = Dot(point=ORIGIN)\n dot3 = Dot(point=RIGHT)\n self.add(dot1,dot2,dot3)\n \"\"\"\n\n def __init__(\n self,\n point: Point3D = ORIGIN,\n radius: float = DEFAULT_DOT_RADIUS,\n stroke_width: float = 0,\n fill_opacity: float = 1.0,\n color: ParsableManimColor = WHITE,\n **kwargs,\n ) -> None:\n super().__init__(\n arc_center=point,\n radius=radius,\n stroke_width=stroke_width,\n fill_opacity=fill_opacity,\n color=color,\n **kwargs,\n )\n\n\nclass AnnotationDot(Dot):\n \"\"\"A dot with bigger radius and bold stroke to annotate scenes.\"\"\"\n\n def __init__(\n self,\n radius: float = DEFAULT_DOT_RADIUS * 1.3,\n stroke_width: float = 5,\n stroke_color: ParsableManimColor = WHITE,\n fill_color: ParsableManimColor = BLUE,\n **kwargs,\n ) -> None:\n super().__init__(\n radius=radius,\n stroke_width=stroke_width,\n stroke_color=stroke_color,\n fill_color=fill_color,\n **kwargs,\n )\n\n\nclass LabeledDot(Dot):\n \"\"\"A :class:`Dot` containing a label in its center.\n\n Parameters\n ----------\n label\n The label of the :class:`Dot`. This is rendered as :class:`~.MathTex`\n by default (i.e., when passing a :class:`str`), but other classes\n representing rendered strings like :class:`~.Text` or :class:`~.Tex`\n can be passed as well.\n radius\n The radius of the :class:`Dot`. If ``None`` (the default), the radius\n is calculated based on the size of the ``label``.\n\n Examples\n --------\n .. 
manim:: SeveralLabeledDots\n :save_last_frame:\n\n class SeveralLabeledDots(Scene):\n def construct(self):\n sq = Square(fill_color=RED, fill_opacity=1)\n self.add(sq)\n dot1 = LabeledDot(Tex(\"42\", color=RED))\n dot2 = LabeledDot(MathTex(\"a\", color=GREEN))\n dot3 = LabeledDot(Text(\"ii\", color=BLUE))\n dot4 = LabeledDot(\"3\")\n dot1.next_to(sq, UL)\n dot2.next_to(sq, UR)\n dot3.next_to(sq, DL)\n dot4.next_to(sq, DR)\n self.add(dot1, dot2, dot3, dot4)\n \"\"\"\n\n def __init__(\n self,\n label: str | SingleStringMathTex | Text | Tex,\n radius: float | None = None,\n **kwargs,\n ) -> None:\n if isinstance(label, str):\n from manim import MathTex\n\n rendered_label = MathTex(label, color=BLACK)\n else:\n rendered_label = label\n\n if radius is None:\n radius = 0.1 + max(rendered_label.width, rendered_label.height) / 2\n super().__init__(radius=radius, **kwargs)\n rendered_label.move_to(self.get_center())\n self.add(rendered_label)\n\n\nclass Ellipse(Circle):\n \"\"\"A circular shape; oval, circle.\n\n Parameters\n ----------\n width\n The horizontal width of the ellipse.\n height\n The vertical height of the ellipse.\n kwargs\n Additional arguments to be passed to :class:`Circle`.\n\n Examples\n --------\n .. manim:: EllipseExample\n :save_last_frame:\n\n class EllipseExample(Scene):\n def construct(self):\n ellipse_1 = Ellipse(width=2.0, height=4.0, color=BLUE_B)\n ellipse_2 = Ellipse(width=4.0, height=1.0, color=BLUE_D)\n ellipse_group = Group(ellipse_1,ellipse_2).arrange(buff=1)\n self.add(ellipse_group)\n \"\"\"\n\n def __init__(self, width: float = 2, height: float = 1, **kwargs) -> None:\n super().__init__(**kwargs)\n self.stretch_to_fit_width(width)\n self.stretch_to_fit_height(height)\n\n\nclass AnnularSector(Arc):\n \"\"\"A sector of an annulus.\n\n\n Parameters\n ----------\n inner_radius\n The inside radius of the Annular Sector.\n outer_radius\n The outside radius of the Annular Sector.\n angle\n The clockwise angle of the Annular Sector.\n start_angle\n The starting clockwise angle of the Annular Sector.\n fill_opacity\n The opacity of the color filled in the Annular Sector.\n stroke_width\n The stroke width of the Annular Sector.\n color\n The color filled into the Annular Sector.\n\n Examples\n --------\n .. manim:: AnnularSectorExample\n :save_last_frame:\n\n class AnnularSectorExample(Scene):\n def construct(self):\n # Changes background color to clearly visualize changes in fill_opacity.\n self.camera.background_color = WHITE\n\n # The default parameter start_angle is 0, so the AnnularSector starts from the +x-axis.\n s1 = AnnularSector(color=YELLOW).move_to(2 * UL)\n\n # Different inner_radius and outer_radius than the default.\n s2 = AnnularSector(inner_radius=1.5, outer_radius=2, angle=45 * DEGREES, color=RED).move_to(2 * UR)\n\n # fill_opacity is typically a number > 0 and <= 1. 
If fill_opacity=0, the AnnularSector is transparent.\n s3 = AnnularSector(inner_radius=1, outer_radius=1.5, angle=PI, fill_opacity=0.25, color=BLUE).move_to(2 * DL)\n\n # With a negative value for the angle, the AnnularSector is drawn clockwise from the start value.\n s4 = AnnularSector(inner_radius=1, outer_radius=1.5, angle=-3 * PI / 2, color=GREEN).move_to(2 * DR)\n\n self.add(s1, s2, s3, s4)\n \"\"\"\n\n def __init__(\n self,\n inner_radius: float = 1,\n outer_radius: float = 2,\n angle: float = TAU / 4,\n start_angle: float = 0,\n fill_opacity: float = 1,\n stroke_width: float = 0,\n color: ParsableManimColor = WHITE,\n **kwargs,\n ) -> None:\n self.inner_radius = inner_radius\n self.outer_radius = outer_radius\n super().__init__(\n start_angle=start_angle,\n angle=angle,\n fill_opacity=fill_opacity,\n stroke_width=stroke_width,\n color=color,\n **kwargs,\n )\n\n def generate_points(self) -> None:\n inner_arc, outer_arc = (\n Arc(\n start_angle=self.start_angle,\n angle=self.angle,\n radius=radius,\n arc_center=self.arc_center,\n )\n for radius in (self.inner_radius, self.outer_radius)\n )\n outer_arc.reverse_points()\n self.append_points(inner_arc.points)\n self.add_line_to(outer_arc.points[0])\n self.append_points(outer_arc.points)\n self.add_line_to(inner_arc.points[0])\n\n init_points = generate_points\n\n\nclass Sector(AnnularSector):\n \"\"\"A sector of a circle.\n\n Examples\n --------\n .. manim:: ExampleSector\n :save_last_frame:\n\n class ExampleSector(Scene):\n def construct(self):\n sector = Sector(outer_radius=2, inner_radius=1)\n sector2 = Sector(outer_radius=2.5, inner_radius=0.8).move_to([-3, 0, 0])\n sector.set_color(RED)\n sector2.set_color(PINK)\n self.add(sector, sector2)\n \"\"\"\n\n def __init__(\n self, outer_radius: float = 1, inner_radius: float = 0, **kwargs\n ) -> None:\n super().__init__(inner_radius=inner_radius, outer_radius=outer_radius, **kwargs)\n\n\nclass Annulus(Circle):\n \"\"\"Region between two concentric :class:`Circles <.Circle>`.\n\n Parameters\n ----------\n inner_radius\n The radius of the inner :class:`Circle`.\n outer_radius\n The radius of the outer :class:`Circle`.\n kwargs\n Additional arguments to be passed to :class:`Annulus`\n\n Examples\n --------\n .. manim:: AnnulusExample\n :save_last_frame:\n\n class AnnulusExample(Scene):\n def construct(self):\n annulus_1 = Annulus(inner_radius=0.5, outer_radius=1).shift(UP)\n annulus_2 = Annulus(inner_radius=0.3, outer_radius=0.6, color=RED).next_to(annulus_1, DOWN)\n self.add(annulus_1, annulus_2)\n \"\"\"\n\n def __init__(\n self,\n inner_radius: float | None = 1,\n outer_radius: float | None = 2,\n fill_opacity: float = 1,\n stroke_width: float = 0,\n color: ParsableManimColor = WHITE,\n mark_paths_closed: bool = False,\n **kwargs,\n ) -> None:\n self.mark_paths_closed = mark_paths_closed # is this even used?\n self.inner_radius = inner_radius\n self.outer_radius = outer_radius\n super().__init__(\n fill_opacity=fill_opacity, stroke_width=stroke_width, color=color, **kwargs\n )\n\n def generate_points(self) -> None:\n self.radius = self.outer_radius\n outer_circle = Circle(radius=self.outer_radius)\n inner_circle = Circle(radius=self.inner_radius)\n inner_circle.reverse_points()\n self.append_points(outer_circle.points)\n self.append_points(inner_circle.points)\n self.shift(self.arc_center)\n\n init_points = generate_points\n\n\nclass CubicBezier(VMobject, metaclass=ConvertToOpenGL):\n \"\"\"A cubic B\u00e9zier curve.\n\n Example\n -------\n .. 
manim:: BezierSplineExample\n :save_last_frame:\n\n class BezierSplineExample(Scene):\n def construct(self):\n p1 = np.array([-3, 1, 0])\n p1b = p1 + [1, 0, 0]\n d1 = Dot(point=p1).set_color(BLUE)\n l1 = Line(p1, p1b)\n p2 = np.array([3, -1, 0])\n p2b = p2 - [1, 0, 0]\n d2 = Dot(point=p2).set_color(RED)\n l2 = Line(p2, p2b)\n bezier = CubicBezier(p1b, p1b + 3 * RIGHT, p2b - 3 * RIGHT, p2b)\n self.add(l1, d1, l2, d2, bezier)\n\n \"\"\"\n\n def __init__(\n self,\n start_anchor: CubicBezierPoints,\n start_handle: CubicBezierPoints,\n end_handle: CubicBezierPoints,\n end_anchor: CubicBezierPoints,\n **kwargs,\n ) -> None:\n super().__init__(**kwargs)\n self.add_cubic_bezier_curve(start_anchor, start_handle, end_handle, end_anchor)\n\n\nclass ArcPolygon(VMobject, metaclass=ConvertToOpenGL):\n \"\"\"A generalized polygon allowing for points to be connected with arcs.\n\n This version tries to stick close to the way :class:`Polygon` is used. Points\n can be passed to it directly which are used to generate the according arcs\n (using :class:`ArcBetweenPoints`). An angle or radius can be passed to it to\n use across all arcs, but to configure arcs individually an ``arc_config`` list\n has to be passed with the syntax explained below.\n\n Parameters\n ----------\n vertices\n A list of vertices, start and end points for the arc segments.\n angle\n The angle used for constructing the arcs. If no other parameters\n are set, this angle is used to construct all arcs.\n radius\n The circle radius used to construct the arcs. If specified,\n overrides the specified ``angle``.\n arc_config\n When passing a ``dict``, its content will be passed as keyword\n arguments to :class:`~.ArcBetweenPoints`. Otherwise, a list\n of dictionaries containing values that are passed as keyword\n arguments for every individual arc can be passed.\n kwargs\n Further keyword arguments that are passed to the constructor of\n :class:`~.VMobject`.\n\n Attributes\n ----------\n arcs : :class:`list`\n The arcs created from the input parameters::\n\n >>> from manim import ArcPolygon\n >>> ap = ArcPolygon([0, 0, 0], [2, 0, 0], [0, 2, 0])\n >>> ap.arcs\n [ArcBetweenPoints, ArcBetweenPoints, ArcBetweenPoints]\n\n\n .. tip::\n\n Two instances of :class:`ArcPolygon` can be transformed properly into one\n another as well. Be advised that any arc initialized with ``angle=0``\n will actually be a straight line, so if a straight section should seamlessly\n transform into an arced section or vice versa, initialize the straight section\n with a negligible angle instead (such as ``angle=0.0001``).\n\n .. note::\n There is an alternative version (:class:`ArcPolygonFromArcs`) that is instantiated\n with pre-defined arcs.\n\n See Also\n --------\n :class:`ArcPolygonFromArcs`\n\n\n Examples\n --------\n .. 
manim:: SeveralArcPolygons\n\n class SeveralArcPolygons(Scene):\n def construct(self):\n a = [0, 0, 0]\n b = [2, 0, 0]\n c = [0, 2, 0]\n ap1 = ArcPolygon(a, b, c, radius=2)\n ap2 = ArcPolygon(a, b, c, angle=45*DEGREES)\n ap3 = ArcPolygon(a, b, c, arc_config={'radius': 1.7, 'color': RED})\n ap4 = ArcPolygon(a, b, c, color=RED, fill_opacity=1,\n arc_config=[{'radius': 1.7, 'color': RED},\n {'angle': 20*DEGREES, 'color': BLUE},\n {'radius': 1}])\n ap_group = VGroup(ap1, ap2, ap3, ap4).arrange()\n self.play(*[Create(ap) for ap in [ap1, ap2, ap3, ap4]])\n self.wait()\n\n For further examples see :class:`ArcPolygonFromArcs`.\n \"\"\"\n\n def __init__(\n self,\n *vertices: Point3D,\n angle: float = PI / 4,\n radius: float | None = None,\n arc_config: list[dict] | None = None,\n **kwargs,\n ) -> None:\n n = len(vertices)\n point_pairs = [(vertices[k], vertices[(k + 1) % n]) for k in range(n)]\n\n if not arc_config:\n if radius:\n all_arc_configs = itertools.repeat({\"radius\": radius}, len(point_pairs))\n else:\n all_arc_configs = itertools.repeat({\"angle\": angle}, len(point_pairs))\n elif isinstance(arc_config, dict):\n all_arc_configs = itertools.repeat(arc_config, len(point_pairs))\n else:\n assert len(arc_config) == n\n all_arc_configs = arc_config\n\n arcs = [\n ArcBetweenPoints(*pair, **conf)\n for (pair, conf) in zip(point_pairs, all_arc_configs)\n ]\n\n super().__init__(**kwargs)\n # Adding the arcs like this makes ArcPolygon double as a VGroup.\n # Also makes changes to the ArcPolygon, such as scaling, affect\n # the arcs, so that their new values are usable.\n self.add(*arcs)\n for arc in arcs:\n self.append_points(arc.points)\n\n # This enables the use of ArcPolygon.arcs as a convenience\n # because ArcPolygon[0] returns itself, not the first Arc.\n self.arcs = arcs\n\n\nclass ArcPolygonFromArcs(VMobject, metaclass=ConvertToOpenGL):\n \"\"\"A generalized polygon allowing for points to be connected with arcs.\n\n This version takes in pre-defined arcs to generate the arcpolygon and introduces\n little new syntax. However unlike :class:`Polygon` it can't be created with points\n directly.\n\n For proper appearance the passed arcs should connect seamlessly:\n ``[a,b][b,c][c,a]``\n\n If there are any gaps between the arcs, those will be filled in\n with straight lines, which can be used deliberately for any straight\n sections. Arcs can also be passed as straight lines such as an arc\n initialized with ``angle=0``.\n\n Parameters\n ----------\n arcs\n These are the arcs from which the arcpolygon is assembled.\n kwargs\n Keyword arguments that are passed to the constructor of\n :class:`~.VMobject`. Affects how the ArcPolygon itself is drawn,\n but doesn't affect passed arcs.\n\n Attributes\n ----------\n arcs\n The arcs used to initialize the ArcPolygonFromArcs::\n\n >>> from manim import ArcPolygonFromArcs, Arc, ArcBetweenPoints\n >>> ap = ArcPolygonFromArcs(Arc(), ArcBetweenPoints([1,0,0], [0,1,0]), Arc())\n >>> ap.arcs\n [Arc, ArcBetweenPoints, Arc]\n\n\n .. tip::\n\n Two instances of :class:`ArcPolygon` can be transformed properly into\n one another as well. Be advised that any arc initialized with ``angle=0``\n will actually be a straight line, so if a straight section should seamlessly\n transform into an arced section or vice versa, initialize the straight\n section with a negligible angle instead (such as ``angle=0.0001``).\n\n .. note::\n There is an alternative version (:class:`ArcPolygon`) that can be instantiated\n with points.\n\n .. 
seealso::\n :class:`ArcPolygon`\n\n Examples\n --------\n One example of an arcpolygon is the Reuleaux triangle.\n Instead of 3 straight lines connecting the outer points,\n a Reuleaux triangle has 3 arcs connecting those points,\n making a shape with constant width.\n\n Passed arcs are stored as submobjects in the arcpolygon.\n This means that the arcs are changed along with the arcpolygon,\n for example when it's shifted, and these arcs can be manipulated\n after the arcpolygon has been initialized.\n\n Also both the arcs contained in an :class:`~.ArcPolygonFromArcs`, as well as the\n arcpolygon itself are drawn, which affects draw time in :class:`~.Create`\n for example. In most cases the arcs themselves don't\n need to be drawn, in which case they can be passed as invisible.\n\n .. manim:: ArcPolygonExample\n\n class ArcPolygonExample(Scene):\n def construct(self):\n arc_conf = {\"stroke_width\": 0}\n poly_conf = {\"stroke_width\": 10, \"stroke_color\": BLUE,\n \"fill_opacity\": 1, \"color\": PURPLE}\n a = [-1, 0, 0]\n b = [1, 0, 0]\n c = [0, np.sqrt(3), 0]\n arc0 = ArcBetweenPoints(a, b, radius=2, **arc_conf)\n arc1 = ArcBetweenPoints(b, c, radius=2, **arc_conf)\n arc2 = ArcBetweenPoints(c, a, radius=2, **arc_conf)\n reuleaux_tri = ArcPolygonFromArcs(arc0, arc1, arc2, **poly_conf)\n self.play(FadeIn(reuleaux_tri))\n self.wait(2)\n\n The arcpolygon itself can also be hidden so that instead only the contained\n arcs are drawn. This can be used to easily debug arcs or to highlight them.\n\n .. manim:: ArcPolygonExample2\n\n class ArcPolygonExample2(Scene):\n def construct(self):\n arc_conf = {\"stroke_width\": 3, \"stroke_color\": BLUE,\n \"fill_opacity\": 0.5, \"color\": GREEN}\n poly_conf = {\"color\": None}\n a = [-1, 0, 0]\n b = [1, 0, 0]\n c = [0, np.sqrt(3), 0]\n arc0 = ArcBetweenPoints(a, b, radius=2, **arc_conf)\n arc1 = ArcBetweenPoints(b, c, radius=2, **arc_conf)\n arc2 = ArcBetweenPoints(c, a, radius=2, stroke_color=RED)\n reuleaux_tri = ArcPolygonFromArcs(arc0, arc1, arc2, **poly_conf)\n self.play(FadeIn(reuleaux_tri))\n self.wait(2)\n \"\"\"\n\n def __init__(self, *arcs: Arc | ArcBetweenPoints, **kwargs) -> None:\n if not all(isinstance(m, (Arc, ArcBetweenPoints)) for m in arcs):\n raise ValueError(\n \"All ArcPolygon submobjects must be of type Arc/ArcBetweenPoints\",\n )\n super().__init__(**kwargs)\n # Adding the arcs like this makes ArcPolygonFromArcs double as a VGroup.\n # Also makes changes to the ArcPolygonFromArcs, such as scaling, affect\n # the arcs, so that their new values are usable.\n self.add(*arcs)\n # This enables the use of ArcPolygonFromArcs.arcs as a convenience\n # because ArcPolygonFromArcs[0] returns itself, not the first Arc.\n self.arcs = [*arcs]\n from .line import Line\n\n for arc1, arc2 in adjacent_pairs(arcs):\n self.append_points(arc1.points)\n line = Line(arc1.get_end(), arc2.get_start())\n len_ratio = line.get_length() / arc1.get_arc_length()\n if np.isnan(len_ratio) or np.isinf(len_ratio):\n continue\n line.insert_n_curves(int(arc1.get_num_curves() * len_ratio))\n self.append_points(line.points)\n", "path": "manim/mobject/geometry/arc.py"}], "after_files": [{"content": "r\"\"\"Mobjects that are curved.\n\nExamples\n--------\n.. 
manim:: UsefulAnnotations\n :save_last_frame:\n\n class UsefulAnnotations(Scene):\n def construct(self):\n m0 = Dot()\n m1 = AnnotationDot()\n m2 = LabeledDot(\"ii\")\n m3 = LabeledDot(MathTex(r\"\\alpha\").set_color(ORANGE))\n m4 = CurvedArrow(2*LEFT, 2*RIGHT, radius= -5)\n m5 = CurvedArrow(2*LEFT, 2*RIGHT, radius= 8)\n m6 = CurvedDoubleArrow(ORIGIN, 2*RIGHT)\n\n self.add(m0, m1, m2, m3, m4, m5, m6)\n for i, mobj in enumerate(self.mobjects):\n mobj.shift(DOWN * (i-3))\n\n\"\"\"\n\nfrom __future__ import annotations\n\n__all__ = [\n \"TipableVMobject\",\n \"Arc\",\n \"ArcBetweenPoints\",\n \"CurvedArrow\",\n \"CurvedDoubleArrow\",\n \"Circle\",\n \"Dot\",\n \"AnnotationDot\",\n \"LabeledDot\",\n \"Ellipse\",\n \"AnnularSector\",\n \"Sector\",\n \"Annulus\",\n \"CubicBezier\",\n \"ArcPolygon\",\n \"ArcPolygonFromArcs\",\n]\n\nimport itertools\nimport warnings\nfrom typing import TYPE_CHECKING\n\nimport numpy as np\nfrom typing_extensions import Self\n\nfrom manim.constants import *\nfrom manim.mobject.opengl.opengl_compatibility import ConvertToOpenGL\nfrom manim.mobject.types.vectorized_mobject import VGroup, VMobject\nfrom manim.utils.color import BLACK, BLUE, RED, WHITE, ParsableManimColor\nfrom manim.utils.iterables import adjacent_pairs\nfrom manim.utils.space_ops import (\n angle_of_vector,\n cartesian_to_spherical,\n line_intersection,\n perpendicular_bisector,\n rotate_vector,\n)\n\nif TYPE_CHECKING:\n import manim.mobject.geometry.tips as tips\n from manim.mobject.mobject import Mobject\n from manim.mobject.text.tex_mobject import SingleStringMathTex, Tex\n from manim.mobject.text.text_mobject import Text\n from manim.typing import CubicBezierPoints, Point3D, QuadraticBezierPoints, Vector3D\n\n\nclass TipableVMobject(VMobject, metaclass=ConvertToOpenGL):\n \"\"\"Meant for shared functionality between Arc and Line.\n Functionality can be classified broadly into these groups:\n\n * Adding, Creating, Modifying tips\n - add_tip calls create_tip, before pushing the new tip\n into the TipableVMobject's list of submobjects\n - stylistic and positional configuration\n\n * Checking for tips\n - Boolean checks for whether the TipableVMobject has a tip\n and a starting tip\n\n * Getters\n - Straightforward accessors, returning information pertaining\n to the TipableVMobject instance's tip(s), its length etc\n \"\"\"\n\n def __init__(\n self,\n tip_length: float = DEFAULT_ARROW_TIP_LENGTH,\n normal_vector: Vector3D = OUT,\n tip_style: dict = {},\n **kwargs,\n ) -> None:\n self.tip_length: float = tip_length\n self.normal_vector: Vector3D = normal_vector\n self.tip_style: dict = tip_style\n super().__init__(**kwargs)\n\n # Adding, Creating, Modifying tips\n\n def add_tip(\n self,\n tip: tips.ArrowTip | None = None,\n tip_shape: type[tips.ArrowTip] | None = None,\n tip_length: float | None = None,\n tip_width: float | None = None,\n at_start: bool = False,\n ) -> Self:\n \"\"\"Adds a tip to the TipableVMobject instance, recognising\n that the endpoints might need to be switched if it's\n a 'starting tip' or not.\n \"\"\"\n if tip is None:\n tip = self.create_tip(tip_shape, tip_length, tip_width, at_start)\n else:\n self.position_tip(tip, at_start)\n self.reset_endpoints_based_on_tip(tip, at_start)\n self.asign_tip_attr(tip, at_start)\n self.add(tip)\n return self\n\n def create_tip(\n self,\n tip_shape: type[tips.ArrowTip] | None = None,\n tip_length: float = None,\n tip_width: float = None,\n at_start: bool = False,\n ):\n \"\"\"Stylises the tip, positions it spatially, and returns\n the newly 
instantiated tip to the caller.\n \"\"\"\n tip = self.get_unpositioned_tip(tip_shape, tip_length, tip_width)\n self.position_tip(tip, at_start)\n return tip\n\n def get_unpositioned_tip(\n self,\n tip_shape: type[tips.ArrowTip] | None = None,\n tip_length: float | None = None,\n tip_width: float | None = None,\n ):\n \"\"\"Returns a tip that has been stylistically configured,\n but has not yet been given a position in space.\n \"\"\"\n from manim.mobject.geometry.tips import ArrowTriangleFilledTip\n\n style = {}\n\n if tip_shape is None:\n tip_shape = ArrowTriangleFilledTip\n\n if tip_shape is ArrowTriangleFilledTip:\n if tip_width is None:\n tip_width = self.get_default_tip_length()\n style.update({\"width\": tip_width})\n if tip_length is None:\n tip_length = self.get_default_tip_length()\n\n color = self.get_color()\n style.update({\"fill_color\": color, \"stroke_color\": color})\n style.update(self.tip_style)\n tip = tip_shape(length=tip_length, **style)\n return tip\n\n def position_tip(self, tip: tips.ArrowTip, at_start: bool = False):\n # Last two control points, defining both\n # the end, and the tangency direction\n if at_start:\n anchor = self.get_start()\n handle = self.get_first_handle()\n else:\n handle = self.get_last_handle()\n anchor = self.get_end()\n angles = cartesian_to_spherical(handle - anchor)\n tip.rotate(\n angles[1] - PI - tip.tip_angle,\n ) # Rotates the tip along the azimuthal\n if not hasattr(self, \"_init_positioning_axis\"):\n axis = [\n np.sin(angles[1]),\n -np.cos(angles[1]),\n 0,\n ] # Obtains the perpendicular of the tip\n tip.rotate(\n -angles[2] + PI / 2,\n axis=axis,\n ) # Rotates the tip along the vertical wrt the axis\n self._init_positioning_axis = axis\n tip.shift(anchor - tip.tip_point)\n return tip\n\n def reset_endpoints_based_on_tip(self, tip: tips.ArrowTip, at_start: bool) -> Self:\n if self.get_length() == 0:\n # Zero length, put_start_and_end_on wouldn't work\n return self\n\n if at_start:\n self.put_start_and_end_on(tip.base, self.get_end())\n else:\n self.put_start_and_end_on(self.get_start(), tip.base)\n return self\n\n def asign_tip_attr(self, tip: tips.ArrowTip, at_start: bool) -> Self:\n if at_start:\n self.start_tip = tip\n else:\n self.tip = tip\n return self\n\n # Checking for tips\n\n def has_tip(self) -> bool:\n return hasattr(self, \"tip\") and self.tip in self\n\n def has_start_tip(self) -> bool:\n return hasattr(self, \"start_tip\") and self.start_tip in self\n\n # Getters\n\n def pop_tips(self) -> VGroup:\n start, end = self.get_start_and_end()\n result = self.get_group_class()()\n if self.has_tip():\n result.add(self.tip)\n self.remove(self.tip)\n if self.has_start_tip():\n result.add(self.start_tip)\n self.remove(self.start_tip)\n self.put_start_and_end_on(start, end)\n return result\n\n def get_tips(self) -> VGroup:\n \"\"\"Returns a VGroup (collection of VMobjects) containing\n the TipableVMObject instance's tips.\n \"\"\"\n result = self.get_group_class()()\n if hasattr(self, \"tip\"):\n result.add(self.tip)\n if hasattr(self, \"start_tip\"):\n result.add(self.start_tip)\n return result\n\n def get_tip(self):\n \"\"\"Returns the TipableVMobject instance's (first) tip,\n otherwise throws an exception.\"\"\"\n tips = self.get_tips()\n if len(tips) == 0:\n raise Exception(\"tip not found\")\n else:\n return tips[0]\n\n def get_default_tip_length(self) -> float:\n return self.tip_length\n\n def get_first_handle(self) -> Point3D:\n return self.points[1]\n\n def get_last_handle(self) -> Point3D:\n return self.points[-2]\n\n def 
get_end(self) -> Point3D:\n if self.has_tip():\n return self.tip.get_start()\n else:\n return super().get_end()\n\n def get_start(self) -> Point3D:\n if self.has_start_tip():\n return self.start_tip.get_start()\n else:\n return super().get_start()\n\n def get_length(self) -> np.floating:\n start, end = self.get_start_and_end()\n return np.linalg.norm(start - end)\n\n\nclass Arc(TipableVMobject):\n \"\"\"A circular arc.\n\n Examples\n --------\n A simple arc of angle Pi.\n\n .. manim:: ArcExample\n :save_last_frame:\n\n class ArcExample(Scene):\n def construct(self):\n self.add(Arc(angle=PI))\n \"\"\"\n\n def __init__(\n self,\n radius: float = 1.0,\n start_angle: float = 0,\n angle: float = TAU / 4,\n num_components: int = 9,\n arc_center: Point3D = ORIGIN,\n **kwargs,\n ):\n if radius is None: # apparently None is passed by ArcBetweenPoints\n radius = 1.0\n self.radius = radius\n self.num_components: int = num_components\n self.arc_center: Point3D = arc_center\n self.start_angle: float = start_angle\n self.angle: float = angle\n self._failed_to_get_center: bool = False\n super().__init__(**kwargs)\n\n def generate_points(self) -> None:\n self._set_pre_positioned_points()\n self.scale(self.radius, about_point=ORIGIN)\n self.shift(self.arc_center)\n\n # Points are set a bit differently when rendering via OpenGL.\n # TODO: refactor Arc so that only one strategy for setting points\n # has to be used.\n def init_points(self) -> None:\n self.set_points(\n Arc._create_quadratic_bezier_points(\n angle=self.angle,\n start_angle=self.start_angle,\n n_components=self.num_components,\n ),\n )\n self.scale(self.radius, about_point=ORIGIN)\n self.shift(self.arc_center)\n\n @staticmethod\n def _create_quadratic_bezier_points(\n angle: float, start_angle: float = 0, n_components: int = 8\n ) -> QuadraticBezierPoints:\n samples = np.array(\n [\n [np.cos(a), np.sin(a), 0]\n for a in np.linspace(\n start_angle,\n start_angle + angle,\n 2 * n_components + 1,\n )\n ],\n )\n theta = angle / n_components\n samples[1::2] /= np.cos(theta / 2)\n\n points = np.zeros((3 * n_components, 3))\n points[0::3] = samples[0:-1:2]\n points[1::3] = samples[1::2]\n points[2::3] = samples[2::2]\n return points\n\n def _set_pre_positioned_points(self) -> None:\n anchors = np.array(\n [\n np.cos(a) * RIGHT + np.sin(a) * UP\n for a in np.linspace(\n self.start_angle,\n self.start_angle + self.angle,\n self.num_components,\n )\n ],\n )\n # Figure out which control points will give the\n # Appropriate tangent lines to the circle\n d_theta = self.angle / (self.num_components - 1.0)\n tangent_vectors = np.zeros(anchors.shape)\n # Rotate all 90 degrees, via (x, y) -> (-y, x)\n tangent_vectors[:, 1] = anchors[:, 0]\n tangent_vectors[:, 0] = -anchors[:, 1]\n # Use tangent vectors to deduce anchors\n handles1 = anchors[:-1] + (d_theta / 3) * tangent_vectors[:-1]\n handles2 = anchors[1:] - (d_theta / 3) * tangent_vectors[1:]\n self.set_anchors_and_handles(anchors[:-1], handles1, handles2, anchors[1:])\n\n def get_arc_center(self, warning: bool = True) -> Point3D:\n \"\"\"Looks at the normals to the first two\n anchors, and finds their intersection points\n \"\"\"\n # First two anchors and handles\n a1, h1, h2, a2 = self.points[:4]\n\n if np.all(a1 == a2):\n # For a1 and a2 to lie at the same point arc radius\n # must be zero. 
Thus arc_center will also lie at\n # that point.\n return np.copy(a1)\n # Tangent vectors\n t1 = h1 - a1\n t2 = h2 - a2\n # Normals\n n1 = rotate_vector(t1, TAU / 4)\n n2 = rotate_vector(t2, TAU / 4)\n try:\n return line_intersection(line1=(a1, a1 + n1), line2=(a2, a2 + n2))\n except Exception:\n if warning:\n warnings.warn(\"Can't find Arc center, using ORIGIN instead\")\n self._failed_to_get_center = True\n return np.array(ORIGIN)\n\n def move_arc_center_to(self, point: Point3D) -> Self:\n self.shift(point - self.get_arc_center())\n return self\n\n def stop_angle(self) -> float:\n return angle_of_vector(self.points[-1] - self.get_arc_center()) % TAU\n\n\nclass ArcBetweenPoints(Arc):\n \"\"\"Inherits from Arc and additionally takes 2 points between which the arc is spanned.\n\n Example\n -------\n .. manim:: ArcBetweenPointsExample\n\n class ArcBetweenPointsExample(Scene):\n def construct(self):\n circle = Circle(radius=2, stroke_color=GREY)\n dot_1 = Dot(color=GREEN).move_to([2, 0, 0]).scale(0.5)\n dot_1_text = Tex(\"(2,0)\").scale(0.5).next_to(dot_1, RIGHT).set_color(BLUE)\n dot_2 = Dot(color=GREEN).move_to([0, 2, 0]).scale(0.5)\n dot_2_text = Tex(\"(0,2)\").scale(0.5).next_to(dot_2, UP).set_color(BLUE)\n arc= ArcBetweenPoints(start=2 * RIGHT, end=2 * UP, stroke_color=YELLOW)\n self.add(circle, dot_1, dot_2, dot_1_text, dot_2_text)\n self.play(Create(arc))\n \"\"\"\n\n def __init__(\n self,\n start: Point3D,\n end: Point3D,\n angle: float = TAU / 4,\n radius: float = None,\n **kwargs,\n ) -> None:\n if radius is not None:\n self.radius = radius\n if radius < 0:\n sign = -2\n radius *= -1\n else:\n sign = 2\n halfdist = np.linalg.norm(np.array(start) - np.array(end)) / 2\n if radius < halfdist:\n raise ValueError(\n \"\"\"ArcBetweenPoints called with a radius that is\n smaller than half the distance between the points.\"\"\",\n )\n arc_height = radius - np.sqrt(radius**2 - halfdist**2)\n angle = np.arccos((radius - arc_height) / radius) * sign\n\n super().__init__(radius=radius, angle=angle, **kwargs)\n if angle == 0:\n self.set_points_as_corners([LEFT, RIGHT])\n self.put_start_and_end_on(start, end)\n\n if radius is None:\n center = self.get_arc_center(warning=False)\n if not self._failed_to_get_center:\n self.radius = np.linalg.norm(np.array(start) - np.array(center))\n else:\n self.radius = np.inf\n\n\nclass CurvedArrow(ArcBetweenPoints):\n def __init__(self, start_point: Point3D, end_point: Point3D, **kwargs) -> None:\n from manim.mobject.geometry.tips import ArrowTriangleFilledTip\n\n tip_shape = kwargs.pop(\"tip_shape\", ArrowTriangleFilledTip)\n super().__init__(start_point, end_point, **kwargs)\n self.add_tip(tip_shape=tip_shape)\n\n\nclass CurvedDoubleArrow(CurvedArrow):\n def __init__(self, start_point: Point3D, end_point: Point3D, **kwargs) -> None:\n if \"tip_shape_end\" in kwargs:\n kwargs[\"tip_shape\"] = kwargs.pop(\"tip_shape_end\")\n from manim.mobject.geometry.tips import ArrowTriangleFilledTip\n\n tip_shape_start = kwargs.pop(\"tip_shape_start\", ArrowTriangleFilledTip)\n super().__init__(start_point, end_point, **kwargs)\n self.add_tip(at_start=True, tip_shape=tip_shape_start)\n\n\nclass Circle(Arc):\n \"\"\"A circle.\n\n Parameters\n ----------\n color\n The color of the shape.\n kwargs\n Additional arguments to be passed to :class:`Arc`\n\n Examples\n --------\n .. 
manim:: CircleExample\n :save_last_frame:\n\n class CircleExample(Scene):\n def construct(self):\n circle_1 = Circle(radius=1.0)\n circle_2 = Circle(radius=1.5, color=GREEN)\n circle_3 = Circle(radius=1.0, color=BLUE_B, fill_opacity=1)\n\n circle_group = Group(circle_1, circle_2, circle_3).arrange(buff=1)\n self.add(circle_group)\n \"\"\"\n\n def __init__(\n self,\n radius: float | None = None,\n color: ParsableManimColor = RED,\n **kwargs,\n ) -> None:\n super().__init__(\n radius=radius,\n start_angle=0,\n angle=TAU,\n color=color,\n **kwargs,\n )\n\n def surround(\n self,\n mobject: Mobject,\n dim_to_match: int = 0,\n stretch: bool = False,\n buffer_factor: float = 1.2,\n ) -> Self:\n \"\"\"Modifies a circle so that it surrounds a given mobject.\n\n Parameters\n ----------\n mobject\n The mobject that the circle will be surrounding.\n dim_to_match\n buffer_factor\n Scales the circle with respect to the mobject. A `buffer_factor` < 1 makes the circle smaller than the mobject.\n stretch\n Stretches the circle to fit more tightly around the mobject. Note: Does not work with :class:`Line`\n\n Examples\n --------\n .. manim:: CircleSurround\n :save_last_frame:\n\n class CircleSurround(Scene):\n def construct(self):\n triangle1 = Triangle()\n circle1 = Circle().surround(triangle1)\n group1 = Group(triangle1,circle1) # treat the two mobjects as one\n\n line2 = Line()\n circle2 = Circle().surround(line2, buffer_factor=2.0)\n group2 = Group(line2,circle2)\n\n # buffer_factor < 1, so the circle is smaller than the square\n square3 = Square()\n circle3 = Circle().surround(square3, buffer_factor=0.5)\n group3 = Group(square3, circle3)\n\n group = Group(group1, group2, group3).arrange(buff=1)\n self.add(group)\n \"\"\"\n\n # Ignores dim_to_match and stretch; result will always be a circle\n # TODO: Perhaps create an ellipse class to handle single-dimension stretching\n\n # Something goes wrong here when surrounding lines?\n # TODO: Figure out and fix\n self.replace(mobject, dim_to_match, stretch)\n\n self.width = np.sqrt(mobject.width**2 + mobject.height**2)\n return self.scale(buffer_factor)\n\n def point_at_angle(self, angle: float) -> Point3D:\n \"\"\"Returns the position of a point on the circle.\n\n Parameters\n ----------\n angle\n The angle of the point along the circle in radians.\n\n Returns\n -------\n :class:`numpy.ndarray`\n The location of the point along the circle's circumference.\n\n Examples\n --------\n .. manim:: PointAtAngleExample\n :save_last_frame:\n\n class PointAtAngleExample(Scene):\n def construct(self):\n circle = Circle(radius=2.0)\n p1 = circle.point_at_angle(PI/2)\n p2 = circle.point_at_angle(270*DEGREES)\n\n s1 = Square(side_length=0.25).move_to(p1)\n s2 = Square(side_length=0.25).move_to(p2)\n self.add(circle, s1, s2)\n\n \"\"\"\n\n start_angle = angle_of_vector(self.points[0] - self.get_center())\n proportion = (angle - start_angle) / TAU\n proportion -= np.floor(proportion)\n return self.point_from_proportion(proportion)\n\n @staticmethod\n def from_three_points(p1: Point3D, p2: Point3D, p3: Point3D, **kwargs) -> Self:\n \"\"\"Returns a circle passing through the specified\n three points.\n\n Example\n -------\n .. 
manim:: CircleFromPointsExample\n :save_last_frame:\n\n class CircleFromPointsExample(Scene):\n def construct(self):\n circle = Circle.from_three_points(LEFT, LEFT + UP, UP * 2, color=RED)\n dots = VGroup(\n Dot(LEFT),\n Dot(LEFT + UP),\n Dot(UP * 2),\n )\n self.add(NumberPlane(), circle, dots)\n \"\"\"\n center = line_intersection(\n perpendicular_bisector([p1, p2]),\n perpendicular_bisector([p2, p3]),\n )\n radius = np.linalg.norm(p1 - center)\n return Circle(radius=radius, **kwargs).shift(center)\n\n\nclass Dot(Circle):\n \"\"\"A circle with a very small radius.\n\n Parameters\n ----------\n point\n The location of the dot.\n radius\n The radius of the dot.\n stroke_width\n The thickness of the outline of the dot.\n fill_opacity\n The opacity of the dot's fill_colour\n color\n The color of the dot.\n kwargs\n Additional arguments to be passed to :class:`Circle`\n\n Examples\n --------\n .. manim:: DotExample\n :save_last_frame:\n\n class DotExample(Scene):\n def construct(self):\n dot1 = Dot(point=LEFT, radius=0.08)\n dot2 = Dot(point=ORIGIN)\n dot3 = Dot(point=RIGHT)\n self.add(dot1,dot2,dot3)\n \"\"\"\n\n def __init__(\n self,\n point: Point3D = ORIGIN,\n radius: float = DEFAULT_DOT_RADIUS,\n stroke_width: float = 0,\n fill_opacity: float = 1.0,\n color: ParsableManimColor = WHITE,\n **kwargs,\n ) -> None:\n super().__init__(\n arc_center=point,\n radius=radius,\n stroke_width=stroke_width,\n fill_opacity=fill_opacity,\n color=color,\n **kwargs,\n )\n\n\nclass AnnotationDot(Dot):\n \"\"\"A dot with bigger radius and bold stroke to annotate scenes.\"\"\"\n\n def __init__(\n self,\n radius: float = DEFAULT_DOT_RADIUS * 1.3,\n stroke_width: float = 5,\n stroke_color: ParsableManimColor = WHITE,\n fill_color: ParsableManimColor = BLUE,\n **kwargs,\n ) -> None:\n super().__init__(\n radius=radius,\n stroke_width=stroke_width,\n stroke_color=stroke_color,\n fill_color=fill_color,\n **kwargs,\n )\n\n\nclass LabeledDot(Dot):\n \"\"\"A :class:`Dot` containing a label in its center.\n\n Parameters\n ----------\n label\n The label of the :class:`Dot`. This is rendered as :class:`~.MathTex`\n by default (i.e., when passing a :class:`str`), but other classes\n representing rendered strings like :class:`~.Text` or :class:`~.Tex`\n can be passed as well.\n radius\n The radius of the :class:`Dot`. If ``None`` (the default), the radius\n is calculated based on the size of the ``label``.\n\n Examples\n --------\n .. 
manim:: SeveralLabeledDots\n :save_last_frame:\n\n class SeveralLabeledDots(Scene):\n def construct(self):\n sq = Square(fill_color=RED, fill_opacity=1)\n self.add(sq)\n dot1 = LabeledDot(Tex(\"42\", color=RED))\n dot2 = LabeledDot(MathTex(\"a\", color=GREEN))\n dot3 = LabeledDot(Text(\"ii\", color=BLUE))\n dot4 = LabeledDot(\"3\")\n dot1.next_to(sq, UL)\n dot2.next_to(sq, UR)\n dot3.next_to(sq, DL)\n dot4.next_to(sq, DR)\n self.add(dot1, dot2, dot3, dot4)\n \"\"\"\n\n def __init__(\n self,\n label: str | SingleStringMathTex | Text | Tex,\n radius: float | None = None,\n **kwargs,\n ) -> None:\n if isinstance(label, str):\n from manim import MathTex\n\n rendered_label = MathTex(label, color=BLACK)\n else:\n rendered_label = label\n\n if radius is None:\n radius = 0.1 + max(rendered_label.width, rendered_label.height) / 2\n super().__init__(radius=radius, **kwargs)\n rendered_label.move_to(self.get_center())\n self.add(rendered_label)\n\n\nclass Ellipse(Circle):\n \"\"\"A circular shape; oval, circle.\n\n Parameters\n ----------\n width\n The horizontal width of the ellipse.\n height\n The vertical height of the ellipse.\n kwargs\n Additional arguments to be passed to :class:`Circle`.\n\n Examples\n --------\n .. manim:: EllipseExample\n :save_last_frame:\n\n class EllipseExample(Scene):\n def construct(self):\n ellipse_1 = Ellipse(width=2.0, height=4.0, color=BLUE_B)\n ellipse_2 = Ellipse(width=4.0, height=1.0, color=BLUE_D)\n ellipse_group = Group(ellipse_1,ellipse_2).arrange(buff=1)\n self.add(ellipse_group)\n \"\"\"\n\n def __init__(self, width: float = 2, height: float = 1, **kwargs) -> None:\n super().__init__(**kwargs)\n self.stretch_to_fit_width(width)\n self.stretch_to_fit_height(height)\n\n\nclass AnnularSector(Arc):\n \"\"\"A sector of an annulus.\n\n\n Parameters\n ----------\n inner_radius\n The inside radius of the Annular Sector.\n outer_radius\n The outside radius of the Annular Sector.\n angle\n The clockwise angle of the Annular Sector.\n start_angle\n The starting clockwise angle of the Annular Sector.\n fill_opacity\n The opacity of the color filled in the Annular Sector.\n stroke_width\n The stroke width of the Annular Sector.\n color\n The color filled into the Annular Sector.\n\n Examples\n --------\n .. manim:: AnnularSectorExample\n :save_last_frame:\n\n class AnnularSectorExample(Scene):\n def construct(self):\n # Changes background color to clearly visualize changes in fill_opacity.\n self.camera.background_color = WHITE\n\n # The default parameter start_angle is 0, so the AnnularSector starts from the +x-axis.\n s1 = AnnularSector(color=YELLOW).move_to(2 * UL)\n\n # Different inner_radius and outer_radius than the default.\n s2 = AnnularSector(inner_radius=1.5, outer_radius=2, angle=45 * DEGREES, color=RED).move_to(2 * UR)\n\n # fill_opacity is typically a number > 0 and <= 1. 
If fill_opacity=0, the AnnularSector is transparent.\n s3 = AnnularSector(inner_radius=1, outer_radius=1.5, angle=PI, fill_opacity=0.25, color=BLUE).move_to(2 * DL)\n\n # With a negative value for the angle, the AnnularSector is drawn clockwise from the start value.\n s4 = AnnularSector(inner_radius=1, outer_radius=1.5, angle=-3 * PI / 2, color=GREEN).move_to(2 * DR)\n\n self.add(s1, s2, s3, s4)\n \"\"\"\n\n def __init__(\n self,\n inner_radius: float = 1,\n outer_radius: float = 2,\n angle: float = TAU / 4,\n start_angle: float = 0,\n fill_opacity: float = 1,\n stroke_width: float = 0,\n color: ParsableManimColor = WHITE,\n **kwargs,\n ) -> None:\n self.inner_radius = inner_radius\n self.outer_radius = outer_radius\n super().__init__(\n start_angle=start_angle,\n angle=angle,\n fill_opacity=fill_opacity,\n stroke_width=stroke_width,\n color=color,\n **kwargs,\n )\n\n def generate_points(self) -> None:\n inner_arc, outer_arc = (\n Arc(\n start_angle=self.start_angle,\n angle=self.angle,\n radius=radius,\n arc_center=self.arc_center,\n )\n for radius in (self.inner_radius, self.outer_radius)\n )\n outer_arc.reverse_points()\n self.append_points(inner_arc.points)\n self.add_line_to(outer_arc.points[0])\n self.append_points(outer_arc.points)\n self.add_line_to(inner_arc.points[0])\n\n init_points = generate_points\n\n\nclass Sector(AnnularSector):\n \"\"\"A sector of a circle.\n\n Examples\n --------\n .. manim:: ExampleSector\n :save_last_frame:\n\n class ExampleSector(Scene):\n def construct(self):\n sector = Sector(outer_radius=2, inner_radius=1)\n sector2 = Sector(outer_radius=2.5, inner_radius=0.8).move_to([-3, 0, 0])\n sector.set_color(RED)\n sector2.set_color(PINK)\n self.add(sector, sector2)\n \"\"\"\n\n def __init__(\n self, outer_radius: float = 1, inner_radius: float = 0, **kwargs\n ) -> None:\n super().__init__(inner_radius=inner_radius, outer_radius=outer_radius, **kwargs)\n\n\nclass Annulus(Circle):\n \"\"\"Region between two concentric :class:`Circles <.Circle>`.\n\n Parameters\n ----------\n inner_radius\n The radius of the inner :class:`Circle`.\n outer_radius\n The radius of the outer :class:`Circle`.\n kwargs\n Additional arguments to be passed to :class:`Annulus`\n\n Examples\n --------\n .. manim:: AnnulusExample\n :save_last_frame:\n\n class AnnulusExample(Scene):\n def construct(self):\n annulus_1 = Annulus(inner_radius=0.5, outer_radius=1).shift(UP)\n annulus_2 = Annulus(inner_radius=0.3, outer_radius=0.6, color=RED).next_to(annulus_1, DOWN)\n self.add(annulus_1, annulus_2)\n \"\"\"\n\n def __init__(\n self,\n inner_radius: float | None = 1,\n outer_radius: float | None = 2,\n fill_opacity: float = 1,\n stroke_width: float = 0,\n color: ParsableManimColor = WHITE,\n mark_paths_closed: bool = False,\n **kwargs,\n ) -> None:\n self.mark_paths_closed = mark_paths_closed # is this even used?\n self.inner_radius = inner_radius\n self.outer_radius = outer_radius\n super().__init__(\n fill_opacity=fill_opacity, stroke_width=stroke_width, color=color, **kwargs\n )\n\n def generate_points(self) -> None:\n self.radius = self.outer_radius\n outer_circle = Circle(radius=self.outer_radius)\n inner_circle = Circle(radius=self.inner_radius)\n inner_circle.reverse_points()\n self.append_points(outer_circle.points)\n self.append_points(inner_circle.points)\n self.shift(self.arc_center)\n\n init_points = generate_points\n\n\nclass CubicBezier(VMobject, metaclass=ConvertToOpenGL):\n \"\"\"A cubic B\u00e9zier curve.\n\n Example\n -------\n .. 
manim:: BezierSplineExample\n :save_last_frame:\n\n class BezierSplineExample(Scene):\n def construct(self):\n p1 = np.array([-3, 1, 0])\n p1b = p1 + [1, 0, 0]\n d1 = Dot(point=p1).set_color(BLUE)\n l1 = Line(p1, p1b)\n p2 = np.array([3, -1, 0])\n p2b = p2 - [1, 0, 0]\n d2 = Dot(point=p2).set_color(RED)\n l2 = Line(p2, p2b)\n bezier = CubicBezier(p1b, p1b + 3 * RIGHT, p2b - 3 * RIGHT, p2b)\n self.add(l1, d1, l2, d2, bezier)\n\n \"\"\"\n\n def __init__(\n self,\n start_anchor: CubicBezierPoints,\n start_handle: CubicBezierPoints,\n end_handle: CubicBezierPoints,\n end_anchor: CubicBezierPoints,\n **kwargs,\n ) -> None:\n super().__init__(**kwargs)\n self.add_cubic_bezier_curve(start_anchor, start_handle, end_handle, end_anchor)\n\n\nclass ArcPolygon(VMobject, metaclass=ConvertToOpenGL):\n \"\"\"A generalized polygon allowing for points to be connected with arcs.\n\n This version tries to stick close to the way :class:`Polygon` is used. Points\n can be passed to it directly which are used to generate the according arcs\n (using :class:`ArcBetweenPoints`). An angle or radius can be passed to it to\n use across all arcs, but to configure arcs individually an ``arc_config`` list\n has to be passed with the syntax explained below.\n\n Parameters\n ----------\n vertices\n A list of vertices, start and end points for the arc segments.\n angle\n The angle used for constructing the arcs. If no other parameters\n are set, this angle is used to construct all arcs.\n radius\n The circle radius used to construct the arcs. If specified,\n overrides the specified ``angle``.\n arc_config\n When passing a ``dict``, its content will be passed as keyword\n arguments to :class:`~.ArcBetweenPoints`. Otherwise, a list\n of dictionaries containing values that are passed as keyword\n arguments for every individual arc can be passed.\n kwargs\n Further keyword arguments that are passed to the constructor of\n :class:`~.VMobject`.\n\n Attributes\n ----------\n arcs : :class:`list`\n The arcs created from the input parameters::\n\n >>> from manim import ArcPolygon\n >>> ap = ArcPolygon([0, 0, 0], [2, 0, 0], [0, 2, 0])\n >>> ap.arcs\n [ArcBetweenPoints, ArcBetweenPoints, ArcBetweenPoints]\n\n\n .. tip::\n\n Two instances of :class:`ArcPolygon` can be transformed properly into one\n another as well. Be advised that any arc initialized with ``angle=0``\n will actually be a straight line, so if a straight section should seamlessly\n transform into an arced section or vice versa, initialize the straight section\n with a negligible angle instead (such as ``angle=0.0001``).\n\n .. note::\n There is an alternative version (:class:`ArcPolygonFromArcs`) that is instantiated\n with pre-defined arcs.\n\n See Also\n --------\n :class:`ArcPolygonFromArcs`\n\n\n Examples\n --------\n .. 
manim:: SeveralArcPolygons\n\n class SeveralArcPolygons(Scene):\n def construct(self):\n a = [0, 0, 0]\n b = [2, 0, 0]\n c = [0, 2, 0]\n ap1 = ArcPolygon(a, b, c, radius=2)\n ap2 = ArcPolygon(a, b, c, angle=45*DEGREES)\n ap3 = ArcPolygon(a, b, c, arc_config={'radius': 1.7, 'color': RED})\n ap4 = ArcPolygon(a, b, c, color=RED, fill_opacity=1,\n arc_config=[{'radius': 1.7, 'color': RED},\n {'angle': 20*DEGREES, 'color': BLUE},\n {'radius': 1}])\n ap_group = VGroup(ap1, ap2, ap3, ap4).arrange()\n self.play(*[Create(ap) for ap in [ap1, ap2, ap3, ap4]])\n self.wait()\n\n For further examples see :class:`ArcPolygonFromArcs`.\n \"\"\"\n\n def __init__(\n self,\n *vertices: Point3D,\n angle: float = PI / 4,\n radius: float | None = None,\n arc_config: list[dict] | None = None,\n **kwargs,\n ) -> None:\n n = len(vertices)\n point_pairs = [(vertices[k], vertices[(k + 1) % n]) for k in range(n)]\n\n if not arc_config:\n if radius:\n all_arc_configs = itertools.repeat({\"radius\": radius}, len(point_pairs))\n else:\n all_arc_configs = itertools.repeat({\"angle\": angle}, len(point_pairs))\n elif isinstance(arc_config, dict):\n all_arc_configs = itertools.repeat(arc_config, len(point_pairs))\n else:\n assert len(arc_config) == n\n all_arc_configs = arc_config\n\n arcs = [\n ArcBetweenPoints(*pair, **conf)\n for (pair, conf) in zip(point_pairs, all_arc_configs)\n ]\n\n super().__init__(**kwargs)\n # Adding the arcs like this makes ArcPolygon double as a VGroup.\n # Also makes changes to the ArcPolygon, such as scaling, affect\n # the arcs, so that their new values are usable.\n self.add(*arcs)\n for arc in arcs:\n self.append_points(arc.points)\n\n # This enables the use of ArcPolygon.arcs as a convenience\n # because ArcPolygon[0] returns itself, not the first Arc.\n self.arcs = arcs\n\n\nclass ArcPolygonFromArcs(VMobject, metaclass=ConvertToOpenGL):\n \"\"\"A generalized polygon allowing for points to be connected with arcs.\n\n This version takes in pre-defined arcs to generate the arcpolygon and introduces\n little new syntax. However unlike :class:`Polygon` it can't be created with points\n directly.\n\n For proper appearance the passed arcs should connect seamlessly:\n ``[a,b][b,c][c,a]``\n\n If there are any gaps between the arcs, those will be filled in\n with straight lines, which can be used deliberately for any straight\n sections. Arcs can also be passed as straight lines such as an arc\n initialized with ``angle=0``.\n\n Parameters\n ----------\n arcs\n These are the arcs from which the arcpolygon is assembled.\n kwargs\n Keyword arguments that are passed to the constructor of\n :class:`~.VMobject`. Affects how the ArcPolygon itself is drawn,\n but doesn't affect passed arcs.\n\n Attributes\n ----------\n arcs\n The arcs used to initialize the ArcPolygonFromArcs::\n\n >>> from manim import ArcPolygonFromArcs, Arc, ArcBetweenPoints\n >>> ap = ArcPolygonFromArcs(Arc(), ArcBetweenPoints([1,0,0], [0,1,0]), Arc())\n >>> ap.arcs\n [Arc, ArcBetweenPoints, Arc]\n\n\n .. tip::\n\n Two instances of :class:`ArcPolygon` can be transformed properly into\n one another as well. Be advised that any arc initialized with ``angle=0``\n will actually be a straight line, so if a straight section should seamlessly\n transform into an arced section or vice versa, initialize the straight\n section with a negligible angle instead (such as ``angle=0.0001``).\n\n .. note::\n There is an alternative version (:class:`ArcPolygon`) that can be instantiated\n with points.\n\n .. 
seealso::\n :class:`ArcPolygon`\n\n Examples\n --------\n One example of an arcpolygon is the Reuleaux triangle.\n Instead of 3 straight lines connecting the outer points,\n a Reuleaux triangle has 3 arcs connecting those points,\n making a shape with constant width.\n\n Passed arcs are stored as submobjects in the arcpolygon.\n This means that the arcs are changed along with the arcpolygon,\n for example when it's shifted, and these arcs can be manipulated\n after the arcpolygon has been initialized.\n\n Also both the arcs contained in an :class:`~.ArcPolygonFromArcs`, as well as the\n arcpolygon itself are drawn, which affects draw time in :class:`~.Create`\n for example. In most cases the arcs themselves don't\n need to be drawn, in which case they can be passed as invisible.\n\n .. manim:: ArcPolygonExample\n\n class ArcPolygonExample(Scene):\n def construct(self):\n arc_conf = {\"stroke_width\": 0}\n poly_conf = {\"stroke_width\": 10, \"stroke_color\": BLUE,\n \"fill_opacity\": 1, \"color\": PURPLE}\n a = [-1, 0, 0]\n b = [1, 0, 0]\n c = [0, np.sqrt(3), 0]\n arc0 = ArcBetweenPoints(a, b, radius=2, **arc_conf)\n arc1 = ArcBetweenPoints(b, c, radius=2, **arc_conf)\n arc2 = ArcBetweenPoints(c, a, radius=2, **arc_conf)\n reuleaux_tri = ArcPolygonFromArcs(arc0, arc1, arc2, **poly_conf)\n self.play(FadeIn(reuleaux_tri))\n self.wait(2)\n\n The arcpolygon itself can also be hidden so that instead only the contained\n arcs are drawn. This can be used to easily debug arcs or to highlight them.\n\n .. manim:: ArcPolygonExample2\n\n class ArcPolygonExample2(Scene):\n def construct(self):\n arc_conf = {\"stroke_width\": 3, \"stroke_color\": BLUE,\n \"fill_opacity\": 0.5, \"color\": GREEN}\n poly_conf = {\"color\": None}\n a = [-1, 0, 0]\n b = [1, 0, 0]\n c = [0, np.sqrt(3), 0]\n arc0 = ArcBetweenPoints(a, b, radius=2, **arc_conf)\n arc1 = ArcBetweenPoints(b, c, radius=2, **arc_conf)\n arc2 = ArcBetweenPoints(c, a, radius=2, stroke_color=RED)\n reuleaux_tri = ArcPolygonFromArcs(arc0, arc1, arc2, **poly_conf)\n self.play(FadeIn(reuleaux_tri))\n self.wait(2)\n \"\"\"\n\n def __init__(self, *arcs: Arc | ArcBetweenPoints, **kwargs) -> None:\n if not all(isinstance(m, (Arc, ArcBetweenPoints)) for m in arcs):\n raise ValueError(\n \"All ArcPolygon submobjects must be of type Arc/ArcBetweenPoints\",\n )\n super().__init__(**kwargs)\n # Adding the arcs like this makes ArcPolygonFromArcs double as a VGroup.\n # Also makes changes to the ArcPolygonFromArcs, such as scaling, affect\n # the arcs, so that their new values are usable.\n self.add(*arcs)\n # This enables the use of ArcPolygonFromArcs.arcs as a convenience\n # because ArcPolygonFromArcs[0] returns itself, not the first Arc.\n self.arcs = [*arcs]\n from .line import Line\n\n for arc1, arc2 in adjacent_pairs(arcs):\n self.append_points(arc1.points)\n line = Line(arc1.get_end(), arc2.get_start())\n len_ratio = line.get_length() / arc1.get_arc_length()\n if np.isnan(len_ratio) or np.isinf(len_ratio):\n continue\n line.insert_n_curves(int(arc1.get_num_curves() * len_ratio))\n self.append_points(line.points)\n", "path": "manim/mobject/geometry/arc.py"}]} |
gh_patches_debug_1125 | rasdani/github-patches | git_diff | keras-team__keras-core-474 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
`weighted_metrics` not showing up next to training progress bar
Tried porting this example (https://keras.io/examples/generative/gpt2_text_generation_with_kerasnlp/) to Keras Core. `accuracy` (passed via `weighted_metrics`) doesn't show up in the training progress bar (across backends); a minimal sketch of this setup is given right after the issue text.
Reproduced it here (with JAX): https://colab.research.google.com/drive/1XByfu9-vaFLnXdUjW6tLypMYy36C8LVx?usp=sharing
This is with legacy Keras: https://colab.research.google.com/drive/1fucRLYK_7ZDMRN5GURMgRIe80JknaFhj?usp=sharing
--- END ISSUE ---
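For reference, a minimal sketch of the setup described in the issue might look like the following. The toy model, random data, and shapes are illustrative assumptions, not code taken from the linked notebooks or the GPT-2 guide; the only essential point is that `accuracy` is passed through `weighted_metrics` and not through `metrics`.

```python
# Minimal sketch (assumed model/data): compile with only `weighted_metrics`,
# then check which metric names the progress bar reports during fit().
import numpy as np
import keras_core as keras

model = keras.Sequential([keras.layers.Dense(2, activation="softmax")])
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    weighted_metrics=["accuracy"],  # no plain `metrics` argument
)

x = np.random.random((32, 8)).astype("float32")
y = np.random.randint(0, 2, size=(32,))

# Expected: the bar shows `loss` and `accuracy`; reported: only `loss` appears.
model.fit(x, y, batch_size=8, epochs=1)
```

Under legacy Keras an equivalent compile/fit call lists `accuracy` alongside `loss` in the progress bar, which is the behaviour the issue compares against.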
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `keras_core/trainers/trainer.py`
Content:
```
1 import platform
2 import warnings
3
4 from keras_core import backend
5 from keras_core import metrics as metrics_module
6 from keras_core import ops
7 from keras_core import optimizers
8 from keras_core.saving import serialization_lib
9 from keras_core.trainers.compile_utils import CompileLoss
10 from keras_core.trainers.compile_utils import CompileMetrics
11 from keras_core.utils import traceback_utils
12 from keras_core.utils import tracking
13
14
15 class Trainer:
16 def __init__(self):
17 self._lock = False
18 self._run_eagerly = False
19 self._jit_compile = None
20 self.compiled = False
21 self.steps_per_execution = 1
22
23 @traceback_utils.filter_traceback
24 @tracking.no_automatic_dependency_tracking
25 def compile(
26 self,
27 optimizer="rmsprop",
28 loss=None,
29 loss_weights=None,
30 metrics=None,
31 weighted_metrics=None,
32 run_eagerly=False,
33 steps_per_execution=1,
34 jit_compile="auto",
35 ):
36 self.optimizer = optimizers.get(optimizer)
37 if hasattr(self, "output_names"):
38 output_names = self.output_names
39 else:
40 output_names = None
41 if loss is not None:
42 self._compile_loss = CompileLoss(
43 loss, loss_weights, output_names=output_names
44 )
45 else:
46 self._compile_loss = None
47 if metrics is not None:
48 self._compile_metrics = CompileMetrics(
49 metrics, weighted_metrics, output_names=output_names
50 )
51 else:
52 self._compile_metrics = None
53 if jit_compile == "auto":
54 if run_eagerly:
55 jit_compile = False
56 else:
57 jit_compile = resolve_auto_jit_compile(self)
58 if jit_compile and run_eagerly:
59 jit_compile = False
60 warnings.warn(
61 "If `run_eagerly` is True, then `jit_compile` "
62 "cannot also be True. Disabling `jit_compile`.",
63 stacklevel=2,
64 )
65 if jit_compile and backend.backend() == "torch":
66 warnings.warn(
67 "`jit_compile` is not yet enabled for the PyTorch backend. "
68 "Proceeding with `jit_compile=False`."
69 )
70 jit_compile = False
71 self.jit_compile = jit_compile
72 self.run_eagerly = run_eagerly
73 self.stop_training = False
74 self.compiled = True
75 self._loss_tracker = metrics_module.Mean(name="loss")
76 self.steps_per_execution = steps_per_execution
77
78 self.train_function = None
79 self.test_function = None
80 self.predict_function = None
81
82 self._compile_config = serialization_lib.SerializableDict(
83 optimizer=optimizer,
84 loss=loss,
85 loss_weights=loss_weights,
86 metrics=metrics,
87 weighted_metrics=weighted_metrics,
88 run_eagerly=run_eagerly,
89 steps_per_execution=steps_per_execution,
90 jit_compile=jit_compile,
91 )
92
93 @property
94 def jit_compile(self):
95 if self._jit_compile is None:
96 # Value was never set. Resolve it now.
97 jit_compile = model_supports_jit(self)
98 self._jit_compile = jit_compile
99 return self._jit_compile
100
101 @jit_compile.setter
102 def jit_compile(self, value):
103 self._jit_compile = value
104
105 @property
106 def run_eagerly(self):
107 return self._run_eagerly
108
109 @run_eagerly.setter
110 def run_eagerly(self, value):
111 self._run_eagerly = value
112
113 @property
114 def metrics(self):
115 metrics = [self._loss_tracker]
116 metrics.extend(self._metrics[:])
117 if self._compile_metrics is not None:
118 metrics += [self._compile_metrics]
119 return metrics
120
121 @property
122 def metrics_names(self):
123 return [m.name for m in self.metrics]
124
125 @property
126 def metrics_variables(self):
127 vars = []
128 for metric in self.metrics:
129 vars.extend(metric.variables)
130 return vars
131
132 def reset_metrics(self):
133 for m in self.metrics:
134 m.reset_state()
135
136 def compute_loss(
137 self, x=None, y=None, y_pred=None, sample_weight=None, allow_empty=False
138 ):
139 """Compute the total loss, validate it, and return it.
140
141 Subclasses can optionally override this method to provide custom loss
142 computation logic.
143
144 Example:
145
146 ```python
147 class MyModel(Model):
148 def __init__(self, *args, **kwargs):
149 super().__init__(*args, **kwargs)
150 self.loss_tracker = metrics.Mean(name='loss')
151
152 def compute_loss(self, x, y, y_pred, sample_weight):
153                 loss = ops.mean((y_pred - y) ** 2)
154 loss += ops.sum(self.losses)
155 self.loss_tracker.update_state(loss)
156 return loss
157
158 def reset_metrics(self):
159 self.loss_tracker.reset_state()
160
161 @property
162 def metrics(self):
163 return [self.loss_tracker]
164
165 inputs = layers.Input(shape=(10,), name='my_input')
166 outputs = layers.Dense(10)(inputs)
167 model = MyModel(inputs, outputs)
168 model.add_loss(ops.sum(outputs))
169
170 optimizer = SGD()
171 model.compile(optimizer, loss='mse', steps_per_execution=10)
172 dataset = ...
173 model.fit(dataset, epochs=2, steps_per_epoch=10)
174 print(f"Custom loss: {model.loss_tracker.result()}")
175 ```
176
177 Args:
178 x: Input data.
179 y: Target data.
180 y_pred: Predictions returned by the model (output of `model(x)`)
181 sample_weight: Sample weights for weighting the loss function.
182 allow_empty: If `False`, the method will error out if
183 no loss has been computed by the model. If `True`, then
184 if no loss is computed, the method returns 0.
185
186 Returns:
187 The total loss as a scalar tensor, or `None` if no loss results
188 (which is the case when called by `Model.test_step`).
189 """
190 del x # The default implementation does not use `x`.
191 losses = []
192 if self._compile_loss is not None:
193 loss = self._compile_loss(y, y_pred, sample_weight)
194 if loss is not None:
195 losses.append(loss)
196 for loss in self.losses:
197 losses.append(ops.cast(loss, dtype=backend.floatx()))
198 if not allow_empty and len(losses) == 0:
199 raise ValueError(
200 "No loss to compute. Provide a `loss` argument in `compile()`."
201 )
202 if len(losses) == 1:
203 total_loss = losses[0]
204 elif len(losses) == 0:
205 total_loss = ops.zeros(())
206 else:
207 total_loss = ops.sum(losses)
208 return total_loss
209
210 def compute_metrics(self, x, y, y_pred, sample_weight=None):
211 """Update metric states and collect all metrics to be returned.
212
213 Subclasses can optionally override this method to provide custom metric
214 updating and collection logic.
215
216 Example:
217
218 ```python
219 class MyModel(Sequential):
220 def compute_metrics(self, x, y, y_pred, sample_weight):
221 # This super call updates `self.compiled_metrics` and returns
222 # results for all metrics listed in `self.metrics`.
223 metric_results = super().compute_metrics(
224 x, y, y_pred, sample_weight)
225
226 # Note that `self.custom_metric` is not listed
227 # in `self.metrics`.
228 self.custom_metric.update_state(x, y, y_pred, sample_weight)
229 metric_results['metric_name'] = self.custom_metric.result()
230 return metric_results
231 ```
232
233 Args:
234 x: Input data.
235 y: Target data.
236             y_pred: Predictions returned by the model (output of `model.call(x)`).
237 sample_weight: Sample weights for weighting the loss function.
238
239 Returns:
240 A `dict` containing values that will be passed to
241 `keras_core.callbacks.CallbackList.on_train_batch_end()`. Typically,
242 the values of the metrics listed in `self.metrics` are returned.
243 Example: `{'loss': 0.2, 'accuracy': 0.7}`.
244 """
245 del x # The default implementation does not use `x`.
246 if self._compile_metrics is not None:
247 self._compile_metrics.update_state(y, y_pred, sample_weight)
248 return self.get_metrics_result()
249
250 def get_metrics_result(self):
251 """Returns the model's metrics values as a dict.
252
253 If any of the metric result is a dict (containing multiple metrics),
254 each of them gets added to the top level returned dict of this method.
255
256 Returns:
257 A `dict` containing values of the metrics listed in `self.metrics`.
258 Example: `{'loss': 0.2, 'accuracy': 0.7}`.
259 """
260 return_metrics = {}
261 for metric in self.metrics:
262 result = metric.result()
263 if isinstance(result, dict):
264 return_metrics.update(result)
265 else:
266 return_metrics[metric.name] = result
267 return self._pythonify_logs(return_metrics)
268
269 def fit(
270 self,
271 x=None,
272 y=None,
273 batch_size=None,
274 epochs=1,
275 verbose="auto",
276 callbacks=None,
277 validation_split=0.0,
278 validation_data=None,
279 shuffle=True,
280 class_weight=None,
281 sample_weight=None,
282 initial_epoch=0,
283 steps_per_epoch=None,
284 validation_steps=None,
285 validation_batch_size=None,
286 validation_freq=1,
287 ):
288 """Trains the model for a fixed number of epochs (dataset iterations).
289
290 Args:
291 x: Input data. It could be:
292 - A NumPy array (or array-like), or a list of arrays
293 (in case the model has multiple inputs).
294 - A tensor, or a list of tensors
295 (in case the model has multiple inputs).
296 - A dict mapping input names to the corresponding array/tensors,
297 if the model has named inputs.
298 - A `tf.data.Dataset`. Should return a tuple
299 of either `(inputs, targets)` or
300 `(inputs, targets, sample_weights)`.
301 - A `keras_core.utils.PyDataset` returning `(inputs,
302 targets)` or `(inputs, targets, sample_weights)`.
303 y: Target data. Like the input data `x`,
304 it could be either NumPy array(s) or backend-native tensor(s).
305 If `x` is a dataset, generator,
306 or `keras_core.utils.PyDataset` instance, `y` should
307 not be specified (since targets will be obtained from `x`).
308 batch_size: Integer or `None`.
309 Number of samples per gradient update.
310 If unspecified, `batch_size` will default to 32.
311 Do not specify the `batch_size` if your data is in the
312 form of datasets, generators, or `keras_core.utils.PyDataset`
313 instances (since they generate batches).
314 epochs: Integer. Number of epochs to train the model.
315 An epoch is an iteration over the entire `x` and `y`
316 data provided
317 (unless the `steps_per_epoch` flag is set to
318 something other than None).
319 Note that in conjunction with `initial_epoch`,
320 `epochs` is to be understood as "final epoch".
321 The model is not trained for a number of iterations
322 given by `epochs`, but merely until the epoch
323 of index `epochs` is reached.
324 verbose: `"auto"`, 0, 1, or 2. Verbosity mode.
325 0 = silent, 1 = progress bar, 2 = one line per epoch.
326 "auto" becomes 1 for most cases.
327 Note that the progress bar is not
328 particularly useful when logged to a file,
329 so `verbose=2` is recommended when not running interactively
330 (e.g., in a production environment). Defaults to `"auto"`.
331 callbacks: List of `keras_core.callbacks.Callback` instances.
332 List of callbacks to apply during training.
333 See `keras_core.callbacks`. Note
334 `keras_core.callbacks.ProgbarLogger` and
335 `keras_core.callbacks.History` callbacks are created
336 automatically and need not be passed to `model.fit()`.
337 `keras_core.callbacks.ProgbarLogger` is created
338 or not based on the `verbose` argument in `model.fit()`.
339 validation_split: Float between 0 and 1.
340 Fraction of the training data to be used as validation data.
341 The model will set apart this fraction of the training data,
342 will not train on it, and will evaluate
343 the loss and any model metrics
344 on this data at the end of each epoch.
345 The validation data is selected from the last samples
346 in the `x` and `y` data provided, before shuffling. This
347 argument is not supported when `x` is a dataset, generator or
348 `keras_core.utils.PyDataset` instance.
349 If both `validation_data` and `validation_split` are provided,
350 `validation_data` will override `validation_split`.
351 validation_data: Data on which to evaluate
352 the loss and any model metrics at the end of each epoch.
353 The model will not be trained on this data. Thus, note the fact
354 that the validation loss of data provided using
355 `validation_split` or `validation_data` is not affected by
356 regularization layers like noise and dropout.
357 `validation_data` will override `validation_split`.
358 `validation_data` could be:
359 - A tuple `(x_val, y_val)` of NumPy arrays or tensors.
360 - A tuple `(x_val, y_val, val_sample_weights)` of NumPy
361 arrays.
362 - A `tf.data.Dataset`.
363 - A Python generator or `keras_core.utils.PyDataset` returning
364 `(inputs, targets)` or `(inputs, targets, sample_weights)`.
365 shuffle: Boolean, whether to shuffle the training data
366 before each epoch. This argument is
367 ignored when `x` is a generator or a `tf.data.Dataset`.
368 class_weight: Optional dictionary mapping class indices (integers)
369 to a weight (float) value, used for weighting the loss function
370 (during training only).
371 This can be useful to tell the model to
372 "pay more attention" to samples from
373 an under-represented class. When `class_weight` is specified
374 and targets have a rank of 2 or greater, either `y` must be
375 one-hot encoded, or an explicit final dimension of `1` must
376 be included for sparse class labels.
377 sample_weight: Optional NumPy array of weights for
378 the training samples, used for weighting the loss function
379 (during training only). You can either pass a flat (1D)
380 NumPy array with the same length as the input samples
381 (1:1 mapping between weights and samples),
382 or in the case of temporal data,
383 you can pass a 2D array with shape
384 `(samples, sequence_length)`,
385 to apply a different weight to every timestep of every sample.
386 This argument is not supported when `x` is a dataset, generator,
387 or `keras_core.utils.PyDataset` instance, instead provide the
388 sample_weights as the third element of `x`.
389 Note that sample weighting does not apply to metrics specified
390 via the `metrics` argument in `compile()`. To apply sample
391 weighting to your metrics, you can specify them via the
392 `weighted_metrics` in `compile()` instead.
393 initial_epoch: Integer.
394 Epoch at which to start training
395 (useful for resuming a previous training run).
396 steps_per_epoch: Integer or `None`.
397 Total number of steps (batches of samples)
398 before declaring one epoch finished and starting the
399 next epoch. When training with input tensors such as
400 backend-native tensors, the default `None` is equal to
401 the number of samples in your dataset divided by
402 the batch size, or 1 if that cannot be determined. If `x` is a
403 `tf.data.Dataset`, and `steps_per_epoch`
404 is `None`, the epoch will run until the input dataset is
405 exhausted. When passing an infinitely repeating dataset, you
406 must specify the `steps_per_epoch` argument. If
407 `steps_per_epoch=-1` the training will run indefinitely with an
408 infinitely repeating dataset.
409 validation_steps: Only relevant if `validation_data` is provided.
410 Total number of steps (batches of
411 samples) to draw before stopping when performing validation
412 at the end of every epoch. If `validation_steps` is `None`,
413 validation will run until the `validation_data` dataset is
414 exhausted. In the case of an infinitely repeated dataset, it
415 will run into an infinite loop. If `validation_steps` is
416 specified and only part of the dataset will be consumed, the
417 evaluation will start from the beginning of the dataset at each
418 epoch. This ensures that the same validation samples are used
419 every time.
420 validation_batch_size: Integer or `None`.
421 Number of samples per validation batch.
422 If unspecified, will default to `batch_size`.
423 Do not specify the `validation_batch_size` if your data is in
424 the form of datasets or `keras_core.utils.PyDataset`
425 instances (since they generate batches).
426 validation_freq: Only relevant if validation data is provided.
427 Specifies how many training epochs to run
428 before a new validation run is performed, e.g. `validation_freq=2`
429 runs validation every 2 epochs.
430
431 Unpacking behavior for iterator-like inputs:
432 A common pattern is to pass an iterator like object such as a
433 `tf.data.Dataset` or a `keras_core.utils.PyDataset` to `fit()`,
434 which will in fact yield not only features (`x`)
435 but optionally targets (`y`) and sample weights (`sample_weight`).
436 Keras requires that the output of such iterator-likes be
437 unambiguous. The iterator should return a tuple
438 of length 1, 2, or 3, where the optional second and third elements
439 will be used for `y` and `sample_weight` respectively.
440 Any other type provided will be wrapped in
441 a length-one tuple, effectively treating everything as `x`. When
442 yielding dicts, they should still adhere to the top-level tuple
443 structure,
444 e.g. `({"x0": x0, "x1": x1}, y)`. Keras will not attempt to separate
445 features, targets, and weights from the keys of a single dict.
446 A notable unsupported data type is the `namedtuple`. The reason is
447 that it behaves like both an ordered datatype (tuple) and a mapping
448 datatype (dict). So given a namedtuple of the form:
449 `namedtuple("example_tuple", ["y", "x"])`
450 it is ambiguous whether to reverse the order of the elements when
451 interpreting the value. Even worse is a tuple of the form:
452 `namedtuple("other_tuple", ["x", "y", "z"])`
453 where it is unclear if the tuple was intended to be unpacked
454 into `x`, `y`, and `sample_weight` or passed through
455 as a single element to `x`.
456
457 Returns:
458 A `History` object. Its `History.history` attribute is
459 a record of training loss values and metrics values
460 at successive epochs, as well as validation loss values
461 and validation metrics values (if applicable).
462 """
463 raise NotImplementedError
464
465 def evaluate(
466 self,
467 x=None,
468 y=None,
469 batch_size=None,
470 verbose="auto",
471 sample_weight=None,
472 steps=None,
473 callbacks=None,
474 return_dict=False,
475 **kwargs,
476 ):
477 """Returns the loss value & metrics values for the model in test mode.
478
479 Computation is done in batches (see the `batch_size` arg.)
480
481 Args:
482 x: Input data. It could be:
483 - A NumPy array (or array-like), or a list of arrays
484 (in case the model has multiple inputs).
485 - A tensor, or a list of tensors
486 (in case the model has multiple inputs).
487 - A dict mapping input names to the corresponding array/tensors,
488 if the model has named inputs.
489 - A `tf.data.Dataset`. Should return a tuple
490 of either `(inputs, targets)` or
491 `(inputs, targets, sample_weights)`.
492 - A generator or `keras_core.utils.PyDataset` returning
493 `(inputs, targets)` or `(inputs, targets, sample_weights)`.
494 y: Target data. Like the input data `x`, it could be either NumPy
495 array(s) or backend-native tensor(s).
496 If `x` is a `tf.data.Dataset` or `keras_core.utils.PyDataset`
497 instance, `y` should not be specified
498 (since targets will be obtained from the iterator/dataset).
499 batch_size: Integer or `None`. Number of samples per batch of
500 computation. If unspecified, `batch_size` will default to 32. Do
501 not specify the `batch_size` if your data is in the form of a
502 dataset, generators, or `keras_core.utils.PyDataset` instances
503 (since they generate batches).
504 verbose: `"auto"`, 0, 1, or 2. Verbosity mode.
505 0 = silent, 1 = progress bar, 2 = single line.
506 `"auto"` becomes 1 for most cases.
507 Note that the progress bar is not
508 particularly useful when logged to a file, so `verbose=2` is
509 recommended when not running interactively
510 (e.g. in a production environment). Defaults to `"auto"`.
511 sample_weight: Optional NumPy array of weights for the test samples,
512 used for weighting the loss function. You can either pass a flat
513 (1D) NumPy array with the same length as the input samples
514 (1:1 mapping between weights and samples), or in the case of
515 temporal data, you can pass a 2D array with shape `(samples,
516 sequence_length)`, to apply a different weight to every
517 timestep of every sample. This argument is not supported when
518 `x` is a dataset, instead pass sample weights as the third
519 element of `x`.
520 steps: Integer or `None`. Total number of steps (batches of samples)
521 before declaring the evaluation round finished. Ignored with the
522 default value of `None`. If `x` is a `tf.data.Dataset` and
523 `steps` is `None`, evaluation will run until the dataset
524 is exhausted.
525 callbacks: List of `keras_core.callbacks.Callback` instances.
526 List of callbacks to apply during evaluation.
527 return_dict: If `True`, loss and metric results are returned as a
528 dict, with each key being the name of the metric.
529 If `False`, they are returned as a list.
530
531 Returns:
532 Scalar test loss (if the model has a single output and no metrics)
533 or list of scalars (if the model has multiple outputs
534 and/or metrics). The attribute `model.metrics_names` will give you
535 the display labels for the scalar outputs.
536 """
537 raise NotImplementedError
538
539 def predict(
540 self, x, batch_size=None, verbose="auto", steps=None, callbacks=None
541 ):
542 """Generates output predictions for the input samples.
543
544 Computation is done in batches. This method is designed for batch
545 processing of large numbers of inputs. It is not intended for use inside
546 of loops that iterate over your data and process small numbers of inputs
547 at a time.
548
549 For small numbers of inputs that fit in one batch,
550 directly use `__call__()` for faster execution, e.g.,
551 `model(x)`, or `model(x, training=False)` if you have layers such as
552 `BatchNormalization` that behave differently during
553 inference.
554
555 Note: See [this FAQ entry](
556 https://keras.io/getting_started/faq/#whats-the-difference-between-model-methods-predict-and-call)
557 for more details about the difference between `Model` methods
558 `predict()` and `__call__()`.
559
560 Args:
561 x: Input samples. It could be:
562 - A NumPy array (or array-like), or a list of arrays
563 (in case the model has multiple inputs).
564 - A tensor, or a list of tensors
565 (in case the model has multiple inputs).
566 - A `tf.data.Dataset`.
567 - A `keras_core.utils.PyDataset` instance.
568 batch_size: Integer or `None`.
569 Number of samples per batch.
570 If unspecified, `batch_size` will default to 32.
571 Do not specify the `batch_size` if your data is in the
572 form of dataset, generators, or `keras_core.utils.PyDataset`
573 instances (since they generate batches).
574 verbose: `"auto"`, 0, 1, or 2. Verbosity mode.
575 0 = silent, 1 = progress bar, 2 = single line.
576 `"auto"` becomes 1 for most cases. Note that the progress bar
577 is not particularly useful when logged to a file,
578 so `verbose=2` is recommended when not running interactively
579 (e.g. in a production environment). Defaults to `"auto"`.
580 steps: Total number of steps (batches of samples)
581 before declaring the prediction round finished.
582 Ignored with the default value of `None`.
583 If `x` is a `tf.data.Dataset` and `steps` is `None`,
584 `predict()` will run until the input dataset is exhausted.
585 callbacks: List of `keras_core.callbacks.Callback` instances.
586 List of callbacks to apply during prediction.
587
588 Returns:
589 NumPy array(s) of predictions.
590 """
591 raise NotImplementedError
592
593 def train_on_batch(
594 self,
595 x,
596 y=None,
597 sample_weight=None,
598 class_weight=None,
599 return_dict=False,
600 ):
601 """Runs a single gradient update on a single batch of data.
602
603 Args:
604 x: Input data. Must be array-like.
605 y: Target data. Must be array-like.
606 sample_weight: Optional array of the same length as x, containing
607 weights to apply to the model's loss for each sample.
608 In the case of temporal data, you can pass a 2D array
609 with shape `(samples, sequence_length)`, to apply a different
610 weight to every timestep of every sample.
611 class_weight: Optional dictionary mapping class indices (integers)
612 to a weight (float) to apply to the model's loss for the samples
613 from this class during training. This can be useful to tell the
614 model to "pay more attention" to samples from an
615 under-represented class. When `class_weight` is specified
616 and targets have a rank of 2 or greater, either `y` must
617 be one-hot encoded, or an explicit final dimension of 1
618 must be included for sparse class labels.
619 return_dict: If `True`, loss and metric results are returned as a
620 dict, with each key being the name of the metric. If `False`,
621 they are returned as a list.
622
623 Returns:
624 A scalar loss value (when no metrics and `return_dict=False`),
625 a list of loss and metric values
626 (if there are metrics and `return_dict=False`), or a dict of
627 metric and loss values (if `return_dict=True`).
628 """
629 raise NotImplementedError
630
631 def test_on_batch(
632 self,
633 x,
634 y=None,
635 sample_weight=None,
636 return_dict=False,
637 ):
638 """Test the model on a single batch of samples.
639
640 Args:
641 x: Input data. Must be array-like.
642 y: Target data. Must be array-like.
643 sample_weight: Optional array of the same length as x, containing
644 weights to apply to the model's loss for each sample.
645 In the case of temporal data, you can pass a 2D array
646 with shape `(samples, sequence_length)`, to apply a different
647 weight to every timestep of every sample.
648 return_dict: If `True`, loss and metric results are returned as a
649 dict, with each key being the name of the metric. If `False`,
650 they are returned as a list.
651
652 Returns:
653 A scalar loss value (when no metrics and `return_dict=False`),
654 a list of loss and metric values
655 (if there are metrics and `return_dict=False`), or a dict of
656 metric and loss values (if `return_dict=True`).
657 """
658 raise NotImplementedError
659
660 def predict_on_batch(self, x):
661 """Returns predictions for a single batch of samples.
662
663 Args:
664 x: Input data. It must be array-like.
665
666 Returns:
667 NumPy array(s) of predictions.
668 """
669 raise NotImplementedError
670
671 def get_compile_config(self):
672 """Returns a serialized config with information for compiling the model.
673
674 This method returns a config dictionary containing all the information
675 (optimizer, loss, metrics, etc.) with which the model was compiled.
676
677 Returns:
678 A dict containing information for compiling the model.
679 """
680 if self.compiled and hasattr(self, "_compile_config"):
681 return self._compile_config.serialize()
682
683 def compile_from_config(self, config):
684 """Compiles the model with the information given in config.
685
686 This method uses the information in the config (optimizer, loss,
687 metrics, etc.) to compile the model.
688
689 Args:
690 config: Dict containing information for compiling the model.
691 """
692 has_overridden_compile = self.__class__.compile != Trainer.compile
693 if has_overridden_compile:
694 warnings.warn(
695 "`compile()` was not called as part of model loading "
696 "because the model's `compile()` method is custom. "
697 "All subclassed Models that have `compile()` "
698 "overridden should also override "
699 "`get_compile_config()` and `compile_from_config(config)`. "
700 "Alternatively, you can "
701 "call `compile()` manually after loading.",
702 stacklevel=2,
703 )
704 return
705 config = serialization_lib.deserialize_keras_object(config)
706 self.compile(**config)
707 if hasattr(self, "optimizer") and self.built:
708 # Create optimizer variables.
709 self.optimizer.build(self.trainable_variables)
710
711 def _should_eval(self, epoch, validation_freq):
712 epoch = epoch + 1 # one-index the user-facing epoch.
713 if isinstance(validation_freq, int):
714 return epoch % validation_freq == 0
715 elif isinstance(validation_freq, list):
716 return epoch in validation_freq
717 else:
718 raise ValueError(
719 "Expected `validation_freq` to be a list or int. "
720 f"Received: validation_freq={validation_freq} of the "
721 f"type {type(validation_freq)}."
722 )
723
724 def _pythonify_logs(self, logs):
725 result = {}
726 for key, value in sorted(logs.items()):
727 if isinstance(value, dict):
728 result.update(self._pythonify_logs(value))
729 else:
730 try:
731 value = float(value)
732 except:
733 pass
734 result[key] = value
735 return result
736
737 def _flatten_metrics_in_order(self, logs):
738 """Turns `logs` dict into a list as per key order of `metrics_names`."""
739 metric_names = [m.name for m in self.metrics]
740 results = []
741 for name in metric_names:
742 if name in logs:
743 results.append(logs[name])
744 for key in sorted(logs.keys()):
745 if key not in metric_names:
746 results.append(logs[key])
747 if len(results) == 1:
748 return results[0]
749 return results
750
751 def _assert_compile_called(self, method_name=None):
752 if not self.compiled:
753 msg = "You must call `compile()` before "
754 if metrics_module:
755 msg += "using the model."
756 else:
757 msg += f"calling `{method_name}()`."
758 raise ValueError(msg)
759
760
761 def resolve_auto_jit_compile(model):
762 if model_supports_jit(model):
763 if backend.backend() == "torch":
764 # Torch defaults to eager mode
765 # until torch compile is reliable
766 return False
767 return True
768 return False
769
770
771 def model_supports_jit(model):
772 if platform.system() == "Darwin" and "arm" in platform.processor().lower():
773 if backend.backend() == "tensorflow":
774 import tensorflow as tf
775
776 if tf.config.list_physical_devices("GPU"):
777 return False
778 if all(x.supports_jit for x in model._flatten_layers()):
779 return True
780 return False
781
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch in the `git diff` format, fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/keras_core/trainers/trainer.py b/keras_core/trainers/trainer.py
--- a/keras_core/trainers/trainer.py
+++ b/keras_core/trainers/trainer.py
@@ -44,7 +44,7 @@
)
else:
self._compile_loss = None
- if metrics is not None:
+ if metrics is not None or weighted_metrics is not None:
self._compile_metrics = CompileMetrics(
metrics, weighted_metrics, output_names=output_names
)
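The one-line change above matters whenever `compile()` receives only `weighted_metrics`: under the old guard `self._compile_metrics` stays `None`, so `compute_metrics()` has nothing to update and the weighted metric never reaches the progress bar. The following sketch is illustrative only: it assumes `keras_core` and NumPy are installed, and the toy model, data shapes, and the `"accuracy"` metric name are placeholders rather than anything taken from the report.

```python
import numpy as np
import keras_core

# Tiny functional model, mirroring the style of the docstring examples above.
inputs = keras_core.layers.Input(shape=(4,))
outputs = keras_core.layers.Dense(2, activation="softmax")(inputs)
model = keras_core.Model(inputs, outputs)

# Only `weighted_metrics` is passed; the plain `metrics` argument stays None.
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    weighted_metrics=["accuracy"],
)

x = np.random.random((32, 4)).astype("float32")
y = np.random.randint(0, 2, size=(32,))

# Before the patch: only `loss` is logged, because `_compile_metrics` is None.
# After the patch: a CompileMetrics container is built from `weighted_metrics`,
# so `accuracy` is updated in `compute_metrics()` and reported during training.
model.fit(x, y, epochs=1, verbose=2)
```

Because `compute_metrics()` and the `metrics` property already branch on `self._compile_metrics is not None`, building the `CompileMetrics` container whenever either `metrics` or `weighted_metrics` is given is enough to surface the weighted metrics; no other part of the training loop needs to change.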
| {"golden_diff": "diff --git a/keras_core/trainers/trainer.py b/keras_core/trainers/trainer.py\n--- a/keras_core/trainers/trainer.py\n+++ b/keras_core/trainers/trainer.py\n@@ -44,7 +44,7 @@\n )\n else:\n self._compile_loss = None\n- if metrics is not None:\n+ if metrics is not None or weighted_metrics is not None:\n self._compile_metrics = CompileMetrics(\n metrics, weighted_metrics, output_names=output_names\n )\n", "issue": "`weighted_metrics` not showing up next to training progress bar\nTried porting this example (https://keras.io/examples/generative/gpt2_text_generation_with_kerasnlp/) to Keras Core. `accuracy` (passed as a `weighted_metric`) doesn't show up in the training bar (across backends).\r\n\r\nReproduced it here (with JAX): https://colab.research.google.com/drive/1XByfu9-vaFLnXdUjW6tLypMYy36C8LVx?usp=sharing\r\nThis is with legacy Keras: https://colab.research.google.com/drive/1fucRLYK_7ZDMRN5GURMgRIe80JknaFhj?usp=sharing\n", "before_files": [{"content": "import platform\nimport warnings\n\nfrom keras_core import backend\nfrom keras_core import metrics as metrics_module\nfrom keras_core import ops\nfrom keras_core import optimizers\nfrom keras_core.saving import serialization_lib\nfrom keras_core.trainers.compile_utils import CompileLoss\nfrom keras_core.trainers.compile_utils import CompileMetrics\nfrom keras_core.utils import traceback_utils\nfrom keras_core.utils import tracking\n\n\nclass Trainer:\n def __init__(self):\n self._lock = False\n self._run_eagerly = False\n self._jit_compile = None\n self.compiled = False\n self.steps_per_execution = 1\n\n @traceback_utils.filter_traceback\n @tracking.no_automatic_dependency_tracking\n def compile(\n self,\n optimizer=\"rmsprop\",\n loss=None,\n loss_weights=None,\n metrics=None,\n weighted_metrics=None,\n run_eagerly=False,\n steps_per_execution=1,\n jit_compile=\"auto\",\n ):\n self.optimizer = optimizers.get(optimizer)\n if hasattr(self, \"output_names\"):\n output_names = self.output_names\n else:\n output_names = None\n if loss is not None:\n self._compile_loss = CompileLoss(\n loss, loss_weights, output_names=output_names\n )\n else:\n self._compile_loss = None\n if metrics is not None:\n self._compile_metrics = CompileMetrics(\n metrics, weighted_metrics, output_names=output_names\n )\n else:\n self._compile_metrics = None\n if jit_compile == \"auto\":\n if run_eagerly:\n jit_compile = False\n else:\n jit_compile = resolve_auto_jit_compile(self)\n if jit_compile and run_eagerly:\n jit_compile = False\n warnings.warn(\n \"If `run_eagerly` is True, then `jit_compile` \"\n \"cannot also be True. Disabling `jit_compile`.\",\n stacklevel=2,\n )\n if jit_compile and backend.backend() == \"torch\":\n warnings.warn(\n \"`jit_compile` is not yet enabled for the PyTorch backend. 
\"\n \"Proceeding with `jit_compile=False`.\"\n )\n jit_compile = False\n self.jit_compile = jit_compile\n self.run_eagerly = run_eagerly\n self.stop_training = False\n self.compiled = True\n self._loss_tracker = metrics_module.Mean(name=\"loss\")\n self.steps_per_execution = steps_per_execution\n\n self.train_function = None\n self.test_function = None\n self.predict_function = None\n\n self._compile_config = serialization_lib.SerializableDict(\n optimizer=optimizer,\n loss=loss,\n loss_weights=loss_weights,\n metrics=metrics,\n weighted_metrics=weighted_metrics,\n run_eagerly=run_eagerly,\n steps_per_execution=steps_per_execution,\n jit_compile=jit_compile,\n )\n\n @property\n def jit_compile(self):\n if self._jit_compile is None:\n # Value was never set. Resolve it now.\n jit_compile = model_supports_jit(self)\n self._jit_compile = jit_compile\n return self._jit_compile\n\n @jit_compile.setter\n def jit_compile(self, value):\n self._jit_compile = value\n\n @property\n def run_eagerly(self):\n return self._run_eagerly\n\n @run_eagerly.setter\n def run_eagerly(self, value):\n self._run_eagerly = value\n\n @property\n def metrics(self):\n metrics = [self._loss_tracker]\n metrics.extend(self._metrics[:])\n if self._compile_metrics is not None:\n metrics += [self._compile_metrics]\n return metrics\n\n @property\n def metrics_names(self):\n return [m.name for m in self.metrics]\n\n @property\n def metrics_variables(self):\n vars = []\n for metric in self.metrics:\n vars.extend(metric.variables)\n return vars\n\n def reset_metrics(self):\n for m in self.metrics:\n m.reset_state()\n\n def compute_loss(\n self, x=None, y=None, y_pred=None, sample_weight=None, allow_empty=False\n ):\n \"\"\"Compute the total loss, validate it, and return it.\n\n Subclasses can optionally override this method to provide custom loss\n computation logic.\n\n Example:\n\n ```python\n class MyModel(Model):\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.loss_tracker = metrics.Mean(name='loss')\n\n def compute_loss(self, x, y, y_pred, sample_weight):\n loss = ops.means((y_pred - y) ** 2)\n loss += ops.sum(self.losses)\n self.loss_tracker.update_state(loss)\n return loss\n\n def reset_metrics(self):\n self.loss_tracker.reset_state()\n\n @property\n def metrics(self):\n return [self.loss_tracker]\n\n inputs = layers.Input(shape=(10,), name='my_input')\n outputs = layers.Dense(10)(inputs)\n model = MyModel(inputs, outputs)\n model.add_loss(ops.sum(outputs))\n\n optimizer = SGD()\n model.compile(optimizer, loss='mse', steps_per_execution=10)\n dataset = ...\n model.fit(dataset, epochs=2, steps_per_epoch=10)\n print(f\"Custom loss: {model.loss_tracker.result()}\")\n ```\n\n Args:\n x: Input data.\n y: Target data.\n y_pred: Predictions returned by the model (output of `model(x)`)\n sample_weight: Sample weights for weighting the loss function.\n allow_empty: If `False`, the method will error out if\n no loss has been computed by the model. 
If `True`, then\n if no loss is computed, the method returns 0.\n\n Returns:\n The total loss as a scalar tensor, or `None` if no loss results\n (which is the case when called by `Model.test_step`).\n \"\"\"\n del x # The default implementation does not use `x`.\n losses = []\n if self._compile_loss is not None:\n loss = self._compile_loss(y, y_pred, sample_weight)\n if loss is not None:\n losses.append(loss)\n for loss in self.losses:\n losses.append(ops.cast(loss, dtype=backend.floatx()))\n if not allow_empty and len(losses) == 0:\n raise ValueError(\n \"No loss to compute. Provide a `loss` argument in `compile()`.\"\n )\n if len(losses) == 1:\n total_loss = losses[0]\n elif len(losses) == 0:\n total_loss = ops.zeros(())\n else:\n total_loss = ops.sum(losses)\n return total_loss\n\n def compute_metrics(self, x, y, y_pred, sample_weight=None):\n \"\"\"Update metric states and collect all metrics to be returned.\n\n Subclasses can optionally override this method to provide custom metric\n updating and collection logic.\n\n Example:\n\n ```python\n class MyModel(Sequential):\n def compute_metrics(self, x, y, y_pred, sample_weight):\n # This super call updates `self.compiled_metrics` and returns\n # results for all metrics listed in `self.metrics`.\n metric_results = super().compute_metrics(\n x, y, y_pred, sample_weight)\n\n # Note that `self.custom_metric` is not listed\n # in `self.metrics`.\n self.custom_metric.update_state(x, y, y_pred, sample_weight)\n metric_results['metric_name'] = self.custom_metric.result()\n return metric_results\n ```\n\n Args:\n x: Input data.\n y: Target data.\n y_pred: Predictions returned by the model output of `model.call(x)`.\n sample_weight: Sample weights for weighting the loss function.\n\n Returns:\n A `dict` containing values that will be passed to\n `keras_core.callbacks.CallbackList.on_train_batch_end()`. Typically,\n the values of the metrics listed in `self.metrics` are returned.\n Example: `{'loss': 0.2, 'accuracy': 0.7}`.\n \"\"\"\n del x # The default implementation does not use `x`.\n if self._compile_metrics is not None:\n self._compile_metrics.update_state(y, y_pred, sample_weight)\n return self.get_metrics_result()\n\n def get_metrics_result(self):\n \"\"\"Returns the model's metrics values as a dict.\n\n If any of the metric result is a dict (containing multiple metrics),\n each of them gets added to the top level returned dict of this method.\n\n Returns:\n A `dict` containing values of the metrics listed in `self.metrics`.\n Example: `{'loss': 0.2, 'accuracy': 0.7}`.\n \"\"\"\n return_metrics = {}\n for metric in self.metrics:\n result = metric.result()\n if isinstance(result, dict):\n return_metrics.update(result)\n else:\n return_metrics[metric.name] = result\n return self._pythonify_logs(return_metrics)\n\n def fit(\n self,\n x=None,\n y=None,\n batch_size=None,\n epochs=1,\n verbose=\"auto\",\n callbacks=None,\n validation_split=0.0,\n validation_data=None,\n shuffle=True,\n class_weight=None,\n sample_weight=None,\n initial_epoch=0,\n steps_per_epoch=None,\n validation_steps=None,\n validation_batch_size=None,\n validation_freq=1,\n ):\n \"\"\"Trains the model for a fixed number of epochs (dataset iterations).\n\n Args:\n x: Input data. 
It could be:\n - A NumPy array (or array-like), or a list of arrays\n (in case the model has multiple inputs).\n - A tensor, or a list of tensors\n (in case the model has multiple inputs).\n - A dict mapping input names to the corresponding array/tensors,\n if the model has named inputs.\n - A `tf.data.Dataset`. Should return a tuple\n of either `(inputs, targets)` or\n `(inputs, targets, sample_weights)`.\n - A `keras_core.utils.PyDataset` returning `(inputs,\n targets)` or `(inputs, targets, sample_weights)`.\n y: Target data. Like the input data `x`,\n it could be either NumPy array(s) or backend-native tensor(s).\n If `x` is a dataset, generator,\n or `keras_core.utils.PyDataset` instance, `y` should\n not be specified (since targets will be obtained from `x`).\n batch_size: Integer or `None`.\n Number of samples per gradient update.\n If unspecified, `batch_size` will default to 32.\n Do not specify the `batch_size` if your data is in the\n form of datasets, generators, or `keras_core.utils.PyDataset`\n instances (since they generate batches).\n epochs: Integer. Number of epochs to train the model.\n An epoch is an iteration over the entire `x` and `y`\n data provided\n (unless the `steps_per_epoch` flag is set to\n something other than None).\n Note that in conjunction with `initial_epoch`,\n `epochs` is to be understood as \"final epoch\".\n The model is not trained for a number of iterations\n given by `epochs`, but merely until the epoch\n of index `epochs` is reached.\n verbose: `\"auto\"`, 0, 1, or 2. Verbosity mode.\n 0 = silent, 1 = progress bar, 2 = one line per epoch.\n \"auto\" becomes 1 for most cases.\n Note that the progress bar is not\n particularly useful when logged to a file,\n so `verbose=2` is recommended when not running interactively\n (e.g., in a production environment). Defaults to `\"auto\"`.\n callbacks: List of `keras_core.callbacks.Callback` instances.\n List of callbacks to apply during training.\n See `keras_core.callbacks`. Note\n `keras_core.callbacks.ProgbarLogger` and\n `keras_core.callbacks.History` callbacks are created\n automatically and need not be passed to `model.fit()`.\n `keras_core.callbacks.ProgbarLogger` is created\n or not based on the `verbose` argument in `model.fit()`.\n validation_split: Float between 0 and 1.\n Fraction of the training data to be used as validation data.\n The model will set apart this fraction of the training data,\n will not train on it, and will evaluate\n the loss and any model metrics\n on this data at the end of each epoch.\n The validation data is selected from the last samples\n in the `x` and `y` data provided, before shuffling. This\n argument is not supported when `x` is a dataset, generator or\n `keras_core.utils.PyDataset` instance.\n If both `validation_data` and `validation_split` are provided,\n `validation_data` will override `validation_split`.\n validation_data: Data on which to evaluate\n the loss and any model metrics at the end of each epoch.\n The model will not be trained on this data. 
Thus, note the fact\n that the validation loss of data provided using\n `validation_split` or `validation_data` is not affected by\n regularization layers like noise and dropout.\n `validation_data` will override `validation_split`.\n `validation_data` could be:\n - A tuple `(x_val, y_val)` of NumPy arrays or tensors.\n - A tuple `(x_val, y_val, val_sample_weights)` of NumPy\n arrays.\n - A `tf.data.Dataset`.\n - A Python generator or `keras_core.utils.PyDataset` returning\n `(inputs, targets)` or `(inputs, targets, sample_weights)`.\n shuffle: Boolean, whether to shuffle the training data\n before each epoch. This argument is\n ignored when `x` is a generator or a `tf.data.Dataset`.\n class_weight: Optional dictionary mapping class indices (integers)\n to a weight (float) value, used for weighting the loss function\n (during training only).\n This can be useful to tell the model to\n \"pay more attention\" to samples from\n an under-represented class. When `class_weight` is specified\n and targets have a rank of 2 or greater, either `y` must be\n one-hot encoded, or an explicit final dimension of `1` must\n be included for sparse class labels.\n sample_weight: Optional NumPy array of weights for\n the training samples, used for weighting the loss function\n (during training only). You can either pass a flat (1D)\n NumPy array with the same length as the input samples\n (1:1 mapping between weights and samples),\n or in the case of temporal data,\n you can pass a 2D array with shape\n `(samples, sequence_length)`,\n to apply a different weight to every timestep of every sample.\n This argument is not supported when `x` is a dataset, generator,\n or `keras_core.utils.PyDataset` instance, instead provide the\n sample_weights as the third element of `x`.\n Note that sample weighting does not apply to metrics specified\n via the `metrics` argument in `compile()`. To apply sample\n weighting to your metrics, you can specify them via the\n `weighted_metrics` in `compile()` instead.\n initial_epoch: Integer.\n Epoch at which to start training\n (useful for resuming a previous training run).\n steps_per_epoch: Integer or `None`.\n Total number of steps (batches of samples)\n before declaring one epoch finished and starting the\n next epoch. When training with input tensors such as\n backend-native tensors, the default `None` is equal to\n the number of samples in your dataset divided by\n the batch size, or 1 if that cannot be determined. If `x` is a\n `tf.data.Dataset`, and `steps_per_epoch`\n is `None`, the epoch will run until the input dataset is\n exhausted. When passing an infinitely repeating dataset, you\n must specify the `steps_per_epoch` argument. If\n `steps_per_epoch=-1` the training will run indefinitely with an\n infinitely repeating dataset.\n validation_steps: Only relevant if `validation_data` is provided.\n Total number of steps (batches of\n samples) to draw before stopping when performing validation\n at the end of every epoch. If `validation_steps` is `None`,\n validation will run until the `validation_data` dataset is\n exhausted. In the case of an infinitely repeated dataset, it\n will run into an infinite loop. If `validation_steps` is\n specified and only part of the dataset will be consumed, the\n evaluation will start from the beginning of the dataset at each\n epoch. 
This ensures that the same validation samples are used\n every time.\n validation_batch_size: Integer or `None`.\n Number of samples per validation batch.\n If unspecified, will default to `batch_size`.\n Do not specify the `validation_batch_size` if your data is in\n the form of datasets or `keras_core.utils.PyDataset`\n instances (since they generate batches).\n validation_freq: Only relevant if validation data is provided.\n Specifies how many training epochs to run\n before a new validation run is performed, e.g. `validation_freq=2`\n runs validation every 2 epochs.\n\n Unpacking behavior for iterator-like inputs:\n A common pattern is to pass an iterator like object such as a\n `tf.data.Dataset` or a `keras_core.utils.PyDataset` to `fit()`,\n which will in fact yield not only features (`x`)\n but optionally targets (`y`) and sample weights (`sample_weight`).\n Keras requires that the output of such iterator-likes be\n unambiguous. The iterator should return a tuple\n of length 1, 2, or 3, where the optional second and third elements\n will be used for `y` and `sample_weight` respectively.\n Any other type provided will be wrapped in\n a length-one tuple, effectively treating everything as `x`. When\n yielding dicts, they should still adhere to the top-level tuple\n structure,\n e.g. `({\"x0\": x0, \"x1\": x1}, y)`. Keras will not attempt to separate\n features, targets, and weights from the keys of a single dict.\n A notable unsupported data type is the `namedtuple`. The reason is\n that it behaves like both an ordered datatype (tuple) and a mapping\n datatype (dict). So given a namedtuple of the form:\n `namedtuple(\"example_tuple\", [\"y\", \"x\"])`\n it is ambiguous whether to reverse the order of the elements when\n interpreting the value. Even worse is a tuple of the form:\n `namedtuple(\"other_tuple\", [\"x\", \"y\", \"z\"])`\n where it is unclear if the tuple was intended to be unpacked\n into `x`, `y`, and `sample_weight` or passed through\n as a single element to `x`.\n\n Returns:\n A `History` object. Its `History.history` attribute is\n a record of training loss values and metrics values\n at successive epochs, as well as validation loss values\n and validation metrics values (if applicable).\n \"\"\"\n raise NotImplementedError\n\n def evaluate(\n self,\n x=None,\n y=None,\n batch_size=None,\n verbose=\"auto\",\n sample_weight=None,\n steps=None,\n callbacks=None,\n return_dict=False,\n **kwargs,\n ):\n \"\"\"Returns the loss value & metrics values for the model in test mode.\n\n Computation is done in batches (see the `batch_size` arg.)\n\n Args:\n x: Input data. It could be:\n - A NumPy array (or array-like), or a list of arrays\n (in case the model has multiple inputs).\n - A tensor, or a list of tensors\n (in case the model has multiple inputs).\n - A dict mapping input names to the corresponding array/tensors,\n if the model has named inputs.\n - A `tf.data.Dataset`. Should return a tuple\n of either `(inputs, targets)` or\n `(inputs, targets, sample_weights)`.\n - A generator or `keras_core.utils.PyDataset` returning\n `(inputs, targets)` or `(inputs, targets, sample_weights)`.\n y: Target data. Like the input data `x`, it could be either NumPy\n array(s) or backend-native tensor(s).\n If `x` is a `tf.data.Dataset` or `keras_core.utils.PyDataset`\n instance, `y` should not be specified\n (since targets will be obtained from the iterator/dataset).\n batch_size: Integer or `None`. Number of samples per batch of\n computation. 
If unspecified, `batch_size` will default to 32. Do\n not specify the `batch_size` if your data is in the form of a\n dataset, generators, or `keras_core.utils.PyDataset` instances\n (since they generate batches).\n verbose: `\"auto\"`, 0, 1, or 2. Verbosity mode.\n 0 = silent, 1 = progress bar, 2 = single line.\n `\"auto\"` becomes 1 for most cases.\n Note that the progress bar is not\n particularly useful when logged to a file, so `verbose=2` is\n recommended when not running interactively\n (e.g. in a production environment). Defaults to `\"auto\"`.\n sample_weight: Optional NumPy array of weights for the test samples,\n used for weighting the loss function. You can either pass a flat\n (1D) NumPy array with the same length as the input samples\n (1:1 mapping between weights and samples), or in the case of\n temporal data, you can pass a 2D array with shape `(samples,\n sequence_length)`, to apply a different weight to every\n timestep of every sample. This argument is not supported when\n `x` is a dataset, instead pass sample weights as the third\n element of `x`.\n steps: Integer or `None`. Total number of steps (batches of samples)\n before declaring the evaluation round finished. Ignored with the\n default value of `None`. If `x` is a `tf.data.Dataset` and\n `steps` is `None`, evaluation will run until the dataset\n is exhausted.\n callbacks: List of `keras_core.callbacks.Callback` instances.\n List of callbacks to apply during evaluation.\n return_dict: If `True`, loss and metric results are returned as a\n dict, with each key being the name of the metric.\n If `False`, they are returned as a list.\n\n Returns:\n Scalar test loss (if the model has a single output and no metrics)\n or list of scalars (if the model has multiple outputs\n and/or metrics). The attribute `model.metrics_names` will give you\n the display labels for the scalar outputs.\n \"\"\"\n raise NotImplementedError\n\n def predict(\n self, x, batch_size=None, verbose=\"auto\", steps=None, callbacks=None\n ):\n \"\"\"Generates output predictions for the input samples.\n\n Computation is done in batches. This method is designed for batch\n processing of large numbers of inputs. It is not intended for use inside\n of loops that iterate over your data and process small numbers of inputs\n at a time.\n\n For small numbers of inputs that fit in one batch,\n directly use `__call__()` for faster execution, e.g.,\n `model(x)`, or `model(x, training=False)` if you have layers such as\n `BatchNormalization` that behave differently during\n inference.\n\n Note: See [this FAQ entry](\n https://keras.io/getting_started/faq/#whats-the-difference-between-model-methods-predict-and-call)\n for more details about the difference between `Model` methods\n `predict()` and `__call__()`.\n\n Args:\n x: Input samples. It could be:\n - A NumPy array (or array-like), or a list of arrays\n (in case the model has multiple inputs).\n - A tensor, or a list of tensors\n (in case the model has multiple inputs).\n - A `tf.data.Dataset`.\n - A `keras_core.utils.PyDataset` instance.\n batch_size: Integer or `None`.\n Number of samples per batch.\n If unspecified, `batch_size` will default to 32.\n Do not specify the `batch_size` if your data is in the\n form of dataset, generators, or `keras_core.utils.PyDataset`\n instances (since they generate batches).\n verbose: `\"auto\"`, 0, 1, or 2. Verbosity mode.\n 0 = silent, 1 = progress bar, 2 = single line.\n `\"auto\"` becomes 1 for most cases. 
Note that the progress bar\n is not particularly useful when logged to a file,\n so `verbose=2` is recommended when not running interactively\n (e.g. in a production environment). Defaults to `\"auto\"`.\n steps: Total number of steps (batches of samples)\n before declaring the prediction round finished.\n Ignored with the default value of `None`.\n If `x` is a `tf.data.Dataset` and `steps` is `None`,\n `predict()` will run until the input dataset is exhausted.\n callbacks: List of `keras_core.callbacks.Callback` instances.\n List of callbacks to apply during prediction.\n\n Returns:\n NumPy array(s) of predictions.\n \"\"\"\n raise NotImplementedError\n\n def train_on_batch(\n self,\n x,\n y=None,\n sample_weight=None,\n class_weight=None,\n return_dict=False,\n ):\n \"\"\"Runs a single gradient update on a single batch of data.\n\n Args:\n x: Input data. Must be array-like.\n y: Target data. Must be array-like.\n sample_weight: Optional array of the same length as x, containing\n weights to apply to the model's loss for each sample.\n In the case of temporal data, you can pass a 2D array\n with shape `(samples, sequence_length)`, to apply a different\n weight to every timestep of every sample.\n class_weight: Optional dictionary mapping class indices (integers)\n to a weight (float) to apply to the model's loss for the samples\n from this class during training. This can be useful to tell the\n model to \"pay more attention\" to samples from an\n under-represented class. When `class_weight` is specified\n and targets have a rank of 2 or greater, either `y` must\n be one-hot encoded, or an explicit final dimension of 1\n must be included for sparse class labels.\n return_dict: If `True`, loss and metric results are returned as a\n dict, with each key being the name of the metric. If `False`,\n they are returned as a list.\n\n Returns:\n A scalar loss value (when no metrics and `return_dict=False`),\n a list of loss and metric values\n (if there are metrics and `return_dict=False`), or a dict of\n metric and loss values (if `return_dict=True`).\n \"\"\"\n raise NotImplementedError\n\n def test_on_batch(\n self,\n x,\n y=None,\n sample_weight=None,\n return_dict=False,\n ):\n \"\"\"Test the model on a single batch of samples.\n\n Args:\n x: Input data. Must be array-like.\n y: Target data. Must be array-like.\n sample_weight: Optional array of the same length as x, containing\n weights to apply to the model's loss for each sample.\n In the case of temporal data, you can pass a 2D array\n with shape `(samples, sequence_length)`, to apply a different\n weight to every timestep of every sample.\n return_dict: If `True`, loss and metric results are returned as a\n dict, with each key being the name of the metric. If `False`,\n they are returned as a list.\n\n Returns:\n A scalar loss value (when no metrics and `return_dict=False`),\n a list of loss and metric values\n (if there are metrics and `return_dict=False`), or a dict of\n metric and loss values (if `return_dict=True`).\n \"\"\"\n raise NotImplementedError\n\n def predict_on_batch(self, x):\n \"\"\"Returns predictions for a single batch of samples.\n\n Args:\n x: Input data. It must be array-like.\n\n Returns:\n NumPy array(s) of predictions.\n \"\"\"\n raise NotImplementedError\n\n def get_compile_config(self):\n \"\"\"Returns a serialized config with information for compiling the model.\n\n This method returns a config dictionary containing all the information\n (optimizer, loss, metrics, etc.) 
with which the model was compiled.\n\n Returns:\n A dict containing information for compiling the model.\n \"\"\"\n if self.compiled and hasattr(self, \"_compile_config\"):\n return self._compile_config.serialize()\n\n def compile_from_config(self, config):\n \"\"\"Compiles the model with the information given in config.\n\n This method uses the information in the config (optimizer, loss,\n metrics, etc.) to compile the model.\n\n Args:\n config: Dict containing information for compiling the model.\n \"\"\"\n has_overridden_compile = self.__class__.compile != Trainer.compile\n if has_overridden_compile:\n warnings.warn(\n \"`compile()` was not called as part of model loading \"\n \"because the model's `compile()` method is custom. \"\n \"All subclassed Models that have `compile()` \"\n \"overridden should also override \"\n \"`get_compile_config()` and `compile_from_config(config)`. \"\n \"Alternatively, you can \"\n \"call `compile()` manually after loading.\",\n stacklevel=2,\n )\n return\n config = serialization_lib.deserialize_keras_object(config)\n self.compile(**config)\n if hasattr(self, \"optimizer\") and self.built:\n # Create optimizer variables.\n self.optimizer.build(self.trainable_variables)\n\n def _should_eval(self, epoch, validation_freq):\n epoch = epoch + 1 # one-index the user-facing epoch.\n if isinstance(validation_freq, int):\n return epoch % validation_freq == 0\n elif isinstance(validation_freq, list):\n return epoch in validation_freq\n else:\n raise ValueError(\n \"Expected `validation_freq` to be a list or int. \"\n f\"Received: validation_freq={validation_freq} of the \"\n f\"type {type(validation_freq)}.\"\n )\n\n def _pythonify_logs(self, logs):\n result = {}\n for key, value in sorted(logs.items()):\n if isinstance(value, dict):\n result.update(self._pythonify_logs(value))\n else:\n try:\n value = float(value)\n except:\n pass\n result[key] = value\n return result\n\n def _flatten_metrics_in_order(self, logs):\n \"\"\"Turns `logs` dict into a list as per key order of `metrics_names`.\"\"\"\n metric_names = [m.name for m in self.metrics]\n results = []\n for name in metric_names:\n if name in logs:\n results.append(logs[name])\n for key in sorted(logs.keys()):\n if key not in metric_names:\n results.append(logs[key])\n if len(results) == 1:\n return results[0]\n return results\n\n def _assert_compile_called(self, method_name=None):\n if not self.compiled:\n msg = \"You must call `compile()` before \"\n if metrics_module:\n msg += \"using the model.\"\n else:\n msg += f\"calling `{method_name}()`.\"\n raise ValueError(msg)\n\n\ndef resolve_auto_jit_compile(model):\n if model_supports_jit(model):\n if backend.backend() == \"torch\":\n # Torch defaults to eager mode\n # until torch compile is reliable\n return False\n return True\n return False\n\n\ndef model_supports_jit(model):\n if platform.system() == \"Darwin\" and \"arm\" in platform.processor().lower():\n if backend.backend() == \"tensorflow\":\n import tensorflow as tf\n\n if tf.config.list_physical_devices(\"GPU\"):\n return False\n if all(x.supports_jit for x in model._flatten_layers()):\n return True\n return False\n", "path": "keras_core/trainers/trainer.py"}], "after_files": [{"content": "import platform\nimport warnings\n\nfrom keras_core import backend\nfrom keras_core import metrics as metrics_module\nfrom keras_core import ops\nfrom keras_core import optimizers\nfrom keras_core.saving import serialization_lib\nfrom keras_core.trainers.compile_utils import CompileLoss\nfrom 
keras_core.trainers.compile_utils import CompileMetrics\nfrom keras_core.utils import traceback_utils\nfrom keras_core.utils import tracking\n\n\nclass Trainer:\n def __init__(self):\n self._lock = False\n self._run_eagerly = False\n self._jit_compile = None\n self.compiled = False\n self.steps_per_execution = 1\n\n @traceback_utils.filter_traceback\n @tracking.no_automatic_dependency_tracking\n def compile(\n self,\n optimizer=\"rmsprop\",\n loss=None,\n loss_weights=None,\n metrics=None,\n weighted_metrics=None,\n run_eagerly=False,\n steps_per_execution=1,\n jit_compile=\"auto\",\n ):\n self.optimizer = optimizers.get(optimizer)\n if hasattr(self, \"output_names\"):\n output_names = self.output_names\n else:\n output_names = None\n if loss is not None:\n self._compile_loss = CompileLoss(\n loss, loss_weights, output_names=output_names\n )\n else:\n self._compile_loss = None\n if metrics is not None or weighted_metrics is not None:\n self._compile_metrics = CompileMetrics(\n metrics, weighted_metrics, output_names=output_names\n )\n else:\n self._compile_metrics = None\n if jit_compile == \"auto\":\n if run_eagerly:\n jit_compile = False\n else:\n jit_compile = resolve_auto_jit_compile(self)\n if jit_compile and run_eagerly:\n jit_compile = False\n warnings.warn(\n \"If `run_eagerly` is True, then `jit_compile` \"\n \"cannot also be True. Disabling `jit_compile`.\",\n stacklevel=2,\n )\n if jit_compile and backend.backend() == \"torch\":\n warnings.warn(\n \"`jit_compile` is not yet enabled for the PyTorch backend. \"\n \"Proceeding with `jit_compile=False`.\"\n )\n jit_compile = False\n self.jit_compile = jit_compile\n self.run_eagerly = run_eagerly\n self.stop_training = False\n self.compiled = True\n self._loss_tracker = metrics_module.Mean(name=\"loss\")\n self.steps_per_execution = steps_per_execution\n\n self.train_function = None\n self.test_function = None\n self.predict_function = None\n\n self._compile_config = serialization_lib.SerializableDict(\n optimizer=optimizer,\n loss=loss,\n loss_weights=loss_weights,\n metrics=metrics,\n weighted_metrics=weighted_metrics,\n run_eagerly=run_eagerly,\n steps_per_execution=steps_per_execution,\n jit_compile=jit_compile,\n )\n\n @property\n def jit_compile(self):\n if self._jit_compile is None:\n # Value was never set. 
Resolve it now.\n jit_compile = model_supports_jit(self)\n self._jit_compile = jit_compile\n return self._jit_compile\n\n @jit_compile.setter\n def jit_compile(self, value):\n self._jit_compile = value\n\n @property\n def run_eagerly(self):\n return self._run_eagerly\n\n @run_eagerly.setter\n def run_eagerly(self, value):\n self._run_eagerly = value\n\n @property\n def metrics(self):\n metrics = [self._loss_tracker]\n metrics.extend(self._metrics[:])\n if self._compile_metrics is not None:\n metrics += [self._compile_metrics]\n return metrics\n\n @property\n def metrics_names(self):\n return [m.name for m in self.metrics]\n\n @property\n def metrics_variables(self):\n vars = []\n for metric in self.metrics:\n vars.extend(metric.variables)\n return vars\n\n def reset_metrics(self):\n for m in self.metrics:\n m.reset_state()\n\n def compute_loss(\n self, x=None, y=None, y_pred=None, sample_weight=None, allow_empty=False\n ):\n \"\"\"Compute the total loss, validate it, and return it.\n\n Subclasses can optionally override this method to provide custom loss\n computation logic.\n\n Example:\n\n ```python\n class MyModel(Model):\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.loss_tracker = metrics.Mean(name='loss')\n\n def compute_loss(self, x, y, y_pred, sample_weight):\n loss = ops.means((y_pred - y) ** 2)\n loss += ops.sum(self.losses)\n self.loss_tracker.update_state(loss)\n return loss\n\n def reset_metrics(self):\n self.loss_tracker.reset_state()\n\n @property\n def metrics(self):\n return [self.loss_tracker]\n\n inputs = layers.Input(shape=(10,), name='my_input')\n outputs = layers.Dense(10)(inputs)\n model = MyModel(inputs, outputs)\n model.add_loss(ops.sum(outputs))\n\n optimizer = SGD()\n model.compile(optimizer, loss='mse', steps_per_execution=10)\n dataset = ...\n model.fit(dataset, epochs=2, steps_per_epoch=10)\n print(f\"Custom loss: {model.loss_tracker.result()}\")\n ```\n\n Args:\n x: Input data.\n y: Target data.\n y_pred: Predictions returned by the model (output of `model(x)`)\n sample_weight: Sample weights for weighting the loss function.\n allow_empty: If `False`, the method will error out if\n no loss has been computed by the model. If `True`, then\n if no loss is computed, the method returns 0.\n\n Returns:\n The total loss as a scalar tensor, or `None` if no loss results\n (which is the case when called by `Model.test_step`).\n \"\"\"\n del x # The default implementation does not use `x`.\n losses = []\n if self._compile_loss is not None:\n loss = self._compile_loss(y, y_pred, sample_weight)\n if loss is not None:\n losses.append(loss)\n for loss in self.losses:\n losses.append(ops.cast(loss, dtype=backend.floatx()))\n if not allow_empty and len(losses) == 0:\n raise ValueError(\n \"No loss to compute. 
Provide a `loss` argument in `compile()`.\"\n )\n if len(losses) == 1:\n total_loss = losses[0]\n elif len(losses) == 0:\n total_loss = ops.zeros(())\n else:\n total_loss = ops.sum(losses)\n return total_loss\n\n def compute_metrics(self, x, y, y_pred, sample_weight=None):\n \"\"\"Update metric states and collect all metrics to be returned.\n\n Subclasses can optionally override this method to provide custom metric\n updating and collection logic.\n\n Example:\n\n ```python\n class MyModel(Sequential):\n def compute_metrics(self, x, y, y_pred, sample_weight):\n # This super call updates `self.compiled_metrics` and returns\n # results for all metrics listed in `self.metrics`.\n metric_results = super().compute_metrics(\n x, y, y_pred, sample_weight)\n\n # Note that `self.custom_metric` is not listed\n # in `self.metrics`.\n self.custom_metric.update_state(x, y, y_pred, sample_weight)\n metric_results['metric_name'] = self.custom_metric.result()\n return metric_results\n ```\n\n Args:\n x: Input data.\n y: Target data.\n y_pred: Predictions returned by the model output of `model.call(x)`.\n sample_weight: Sample weights for weighting the loss function.\n\n Returns:\n A `dict` containing values that will be passed to\n `keras_core.callbacks.CallbackList.on_train_batch_end()`. Typically,\n the values of the metrics listed in `self.metrics` are returned.\n Example: `{'loss': 0.2, 'accuracy': 0.7}`.\n \"\"\"\n del x # The default implementation does not use `x`.\n if self._compile_metrics is not None:\n self._compile_metrics.update_state(y, y_pred, sample_weight)\n return self.get_metrics_result()\n\n def get_metrics_result(self):\n \"\"\"Returns the model's metrics values as a dict.\n\n If any of the metric result is a dict (containing multiple metrics),\n each of them gets added to the top level returned dict of this method.\n\n Returns:\n A `dict` containing values of the metrics listed in `self.metrics`.\n Example: `{'loss': 0.2, 'accuracy': 0.7}`.\n \"\"\"\n return_metrics = {}\n for metric in self.metrics:\n result = metric.result()\n if isinstance(result, dict):\n return_metrics.update(result)\n else:\n return_metrics[metric.name] = result\n return self._pythonify_logs(return_metrics)\n\n def fit(\n self,\n x=None,\n y=None,\n batch_size=None,\n epochs=1,\n verbose=\"auto\",\n callbacks=None,\n validation_split=0.0,\n validation_data=None,\n shuffle=True,\n class_weight=None,\n sample_weight=None,\n initial_epoch=0,\n steps_per_epoch=None,\n validation_steps=None,\n validation_batch_size=None,\n validation_freq=1,\n ):\n \"\"\"Trains the model for a fixed number of epochs (dataset iterations).\n\n Args:\n x: Input data. It could be:\n - A NumPy array (or array-like), or a list of arrays\n (in case the model has multiple inputs).\n - A tensor, or a list of tensors\n (in case the model has multiple inputs).\n - A dict mapping input names to the corresponding array/tensors,\n if the model has named inputs.\n - A `tf.data.Dataset`. Should return a tuple\n of either `(inputs, targets)` or\n `(inputs, targets, sample_weights)`.\n - A `keras_core.utils.PyDataset` returning `(inputs,\n targets)` or `(inputs, targets, sample_weights)`.\n y: Target data. 
Like the input data `x`,\n it could be either NumPy array(s) or backend-native tensor(s).\n If `x` is a dataset, generator,\n or `keras_core.utils.PyDataset` instance, `y` should\n not be specified (since targets will be obtained from `x`).\n batch_size: Integer or `None`.\n Number of samples per gradient update.\n If unspecified, `batch_size` will default to 32.\n Do not specify the `batch_size` if your data is in the\n form of datasets, generators, or `keras_core.utils.PyDataset`\n instances (since they generate batches).\n epochs: Integer. Number of epochs to train the model.\n An epoch is an iteration over the entire `x` and `y`\n data provided\n (unless the `steps_per_epoch` flag is set to\n something other than None).\n Note that in conjunction with `initial_epoch`,\n `epochs` is to be understood as \"final epoch\".\n The model is not trained for a number of iterations\n given by `epochs`, but merely until the epoch\n of index `epochs` is reached.\n verbose: `\"auto\"`, 0, 1, or 2. Verbosity mode.\n 0 = silent, 1 = progress bar, 2 = one line per epoch.\n \"auto\" becomes 1 for most cases.\n Note that the progress bar is not\n particularly useful when logged to a file,\n so `verbose=2` is recommended when not running interactively\n (e.g., in a production environment). Defaults to `\"auto\"`.\n callbacks: List of `keras_core.callbacks.Callback` instances.\n List of callbacks to apply during training.\n See `keras_core.callbacks`. Note\n `keras_core.callbacks.ProgbarLogger` and\n `keras_core.callbacks.History` callbacks are created\n automatically and need not be passed to `model.fit()`.\n `keras_core.callbacks.ProgbarLogger` is created\n or not based on the `verbose` argument in `model.fit()`.\n validation_split: Float between 0 and 1.\n Fraction of the training data to be used as validation data.\n The model will set apart this fraction of the training data,\n will not train on it, and will evaluate\n the loss and any model metrics\n on this data at the end of each epoch.\n The validation data is selected from the last samples\n in the `x` and `y` data provided, before shuffling. This\n argument is not supported when `x` is a dataset, generator or\n `keras_core.utils.PyDataset` instance.\n If both `validation_data` and `validation_split` are provided,\n `validation_data` will override `validation_split`.\n validation_data: Data on which to evaluate\n the loss and any model metrics at the end of each epoch.\n The model will not be trained on this data. Thus, note the fact\n that the validation loss of data provided using\n `validation_split` or `validation_data` is not affected by\n regularization layers like noise and dropout.\n `validation_data` will override `validation_split`.\n `validation_data` could be:\n - A tuple `(x_val, y_val)` of NumPy arrays or tensors.\n - A tuple `(x_val, y_val, val_sample_weights)` of NumPy\n arrays.\n - A `tf.data.Dataset`.\n - A Python generator or `keras_core.utils.PyDataset` returning\n `(inputs, targets)` or `(inputs, targets, sample_weights)`.\n shuffle: Boolean, whether to shuffle the training data\n before each epoch. This argument is\n ignored when `x` is a generator or a `tf.data.Dataset`.\n class_weight: Optional dictionary mapping class indices (integers)\n to a weight (float) value, used for weighting the loss function\n (during training only).\n This can be useful to tell the model to\n \"pay more attention\" to samples from\n an under-represented class. 
When `class_weight` is specified\n and targets have a rank of 2 or greater, either `y` must be\n one-hot encoded, or an explicit final dimension of `1` must\n be included for sparse class labels.\n sample_weight: Optional NumPy array of weights for\n the training samples, used for weighting the loss function\n (during training only). You can either pass a flat (1D)\n NumPy array with the same length as the input samples\n (1:1 mapping between weights and samples),\n or in the case of temporal data,\n you can pass a 2D array with shape\n `(samples, sequence_length)`,\n to apply a different weight to every timestep of every sample.\n This argument is not supported when `x` is a dataset, generator,\n or `keras_core.utils.PyDataset` instance, instead provide the\n sample_weights as the third element of `x`.\n Note that sample weighting does not apply to metrics specified\n via the `metrics` argument in `compile()`. To apply sample\n weighting to your metrics, you can specify them via the\n `weighted_metrics` in `compile()` instead.\n initial_epoch: Integer.\n Epoch at which to start training\n (useful for resuming a previous training run).\n steps_per_epoch: Integer or `None`.\n Total number of steps (batches of samples)\n before declaring one epoch finished and starting the\n next epoch. When training with input tensors such as\n backend-native tensors, the default `None` is equal to\n the number of samples in your dataset divided by\n the batch size, or 1 if that cannot be determined. If `x` is a\n `tf.data.Dataset`, and `steps_per_epoch`\n is `None`, the epoch will run until the input dataset is\n exhausted. When passing an infinitely repeating dataset, you\n must specify the `steps_per_epoch` argument. If\n `steps_per_epoch=-1` the training will run indefinitely with an\n infinitely repeating dataset.\n validation_steps: Only relevant if `validation_data` is provided.\n Total number of steps (batches of\n samples) to draw before stopping when performing validation\n at the end of every epoch. If `validation_steps` is `None`,\n validation will run until the `validation_data` dataset is\n exhausted. In the case of an infinitely repeated dataset, it\n will run into an infinite loop. If `validation_steps` is\n specified and only part of the dataset will be consumed, the\n evaluation will start from the beginning of the dataset at each\n epoch. This ensures that the same validation samples are used\n every time.\n validation_batch_size: Integer or `None`.\n Number of samples per validation batch.\n If unspecified, will default to `batch_size`.\n Do not specify the `validation_batch_size` if your data is in\n the form of datasets or `keras_core.utils.PyDataset`\n instances (since they generate batches).\n validation_freq: Only relevant if validation data is provided.\n Specifies how many training epochs to run\n before a new validation run is performed, e.g. `validation_freq=2`\n runs validation every 2 epochs.\n\n Unpacking behavior for iterator-like inputs:\n A common pattern is to pass an iterator like object such as a\n `tf.data.Dataset` or a `keras_core.utils.PyDataset` to `fit()`,\n which will in fact yield not only features (`x`)\n but optionally targets (`y`) and sample weights (`sample_weight`).\n Keras requires that the output of such iterator-likes be\n unambiguous. 
The iterator should return a tuple\n of length 1, 2, or 3, where the optional second and third elements\n will be used for `y` and `sample_weight` respectively.\n Any other type provided will be wrapped in\n a length-one tuple, effectively treating everything as `x`. When\n yielding dicts, they should still adhere to the top-level tuple\n structure,\n e.g. `({\"x0\": x0, \"x1\": x1}, y)`. Keras will not attempt to separate\n features, targets, and weights from the keys of a single dict.\n A notable unsupported data type is the `namedtuple`. The reason is\n that it behaves like both an ordered datatype (tuple) and a mapping\n datatype (dict). So given a namedtuple of the form:\n `namedtuple(\"example_tuple\", [\"y\", \"x\"])`\n it is ambiguous whether to reverse the order of the elements when\n interpreting the value. Even worse is a tuple of the form:\n `namedtuple(\"other_tuple\", [\"x\", \"y\", \"z\"])`\n where it is unclear if the tuple was intended to be unpacked\n into `x`, `y`, and `sample_weight` or passed through\n as a single element to `x`.\n\n Returns:\n A `History` object. Its `History.history` attribute is\n a record of training loss values and metrics values\n at successive epochs, as well as validation loss values\n and validation metrics values (if applicable).\n \"\"\"\n raise NotImplementedError\n\n def evaluate(\n self,\n x=None,\n y=None,\n batch_size=None,\n verbose=\"auto\",\n sample_weight=None,\n steps=None,\n callbacks=None,\n return_dict=False,\n **kwargs,\n ):\n \"\"\"Returns the loss value & metrics values for the model in test mode.\n\n Computation is done in batches (see the `batch_size` arg.)\n\n Args:\n x: Input data. It could be:\n - A NumPy array (or array-like), or a list of arrays\n (in case the model has multiple inputs).\n - A tensor, or a list of tensors\n (in case the model has multiple inputs).\n - A dict mapping input names to the corresponding array/tensors,\n if the model has named inputs.\n - A `tf.data.Dataset`. Should return a tuple\n of either `(inputs, targets)` or\n `(inputs, targets, sample_weights)`.\n - A generator or `keras_core.utils.PyDataset` returning\n `(inputs, targets)` or `(inputs, targets, sample_weights)`.\n y: Target data. Like the input data `x`, it could be either NumPy\n array(s) or backend-native tensor(s).\n If `x` is a `tf.data.Dataset` or `keras_core.utils.PyDataset`\n instance, `y` should not be specified\n (since targets will be obtained from the iterator/dataset).\n batch_size: Integer or `None`. Number of samples per batch of\n computation. If unspecified, `batch_size` will default to 32. Do\n not specify the `batch_size` if your data is in the form of a\n dataset, generators, or `keras_core.utils.PyDataset` instances\n (since they generate batches).\n verbose: `\"auto\"`, 0, 1, or 2. Verbosity mode.\n 0 = silent, 1 = progress bar, 2 = single line.\n `\"auto\"` becomes 1 for most cases.\n Note that the progress bar is not\n particularly useful when logged to a file, so `verbose=2` is\n recommended when not running interactively\n (e.g. in a production environment). Defaults to `\"auto\"`.\n sample_weight: Optional NumPy array of weights for the test samples,\n used for weighting the loss function. You can either pass a flat\n (1D) NumPy array with the same length as the input samples\n (1:1 mapping between weights and samples), or in the case of\n temporal data, you can pass a 2D array with shape `(samples,\n sequence_length)`, to apply a different weight to every\n timestep of every sample. 
This argument is not supported when\n `x` is a dataset, instead pass sample weights as the third\n element of `x`.\n steps: Integer or `None`. Total number of steps (batches of samples)\n before declaring the evaluation round finished. Ignored with the\n default value of `None`. If `x` is a `tf.data.Dataset` and\n `steps` is `None`, evaluation will run until the dataset\n is exhausted.\n callbacks: List of `keras_core.callbacks.Callback` instances.\n List of callbacks to apply during evaluation.\n return_dict: If `True`, loss and metric results are returned as a\n dict, with each key being the name of the metric.\n If `False`, they are returned as a list.\n\n Returns:\n Scalar test loss (if the model has a single output and no metrics)\n or list of scalars (if the model has multiple outputs\n and/or metrics). The attribute `model.metrics_names` will give you\n the display labels for the scalar outputs.\n \"\"\"\n raise NotImplementedError\n\n def predict(\n self, x, batch_size=None, verbose=\"auto\", steps=None, callbacks=None\n ):\n \"\"\"Generates output predictions for the input samples.\n\n Computation is done in batches. This method is designed for batch\n processing of large numbers of inputs. It is not intended for use inside\n of loops that iterate over your data and process small numbers of inputs\n at a time.\n\n For small numbers of inputs that fit in one batch,\n directly use `__call__()` for faster execution, e.g.,\n `model(x)`, or `model(x, training=False)` if you have layers such as\n `BatchNormalization` that behave differently during\n inference.\n\n Note: See [this FAQ entry](\n https://keras.io/getting_started/faq/#whats-the-difference-between-model-methods-predict-and-call)\n for more details about the difference between `Model` methods\n `predict()` and `__call__()`.\n\n Args:\n x: Input samples. It could be:\n - A NumPy array (or array-like), or a list of arrays\n (in case the model has multiple inputs).\n - A tensor, or a list of tensors\n (in case the model has multiple inputs).\n - A `tf.data.Dataset`.\n - A `keras_core.utils.PyDataset` instance.\n batch_size: Integer or `None`.\n Number of samples per batch.\n If unspecified, `batch_size` will default to 32.\n Do not specify the `batch_size` if your data is in the\n form of dataset, generators, or `keras_core.utils.PyDataset`\n instances (since they generate batches).\n verbose: `\"auto\"`, 0, 1, or 2. Verbosity mode.\n 0 = silent, 1 = progress bar, 2 = single line.\n `\"auto\"` becomes 1 for most cases. Note that the progress bar\n is not particularly useful when logged to a file,\n so `verbose=2` is recommended when not running interactively\n (e.g. in a production environment). Defaults to `\"auto\"`.\n steps: Total number of steps (batches of samples)\n before declaring the prediction round finished.\n Ignored with the default value of `None`.\n If `x` is a `tf.data.Dataset` and `steps` is `None`,\n `predict()` will run until the input dataset is exhausted.\n callbacks: List of `keras_core.callbacks.Callback` instances.\n List of callbacks to apply during prediction.\n\n Returns:\n NumPy array(s) of predictions.\n \"\"\"\n raise NotImplementedError\n\n def train_on_batch(\n self,\n x,\n y=None,\n sample_weight=None,\n class_weight=None,\n return_dict=False,\n ):\n \"\"\"Runs a single gradient update on a single batch of data.\n\n Args:\n x: Input data. Must be array-like.\n y: Target data. 
Must be array-like.\n sample_weight: Optional array of the same length as x, containing\n weights to apply to the model's loss for each sample.\n In the case of temporal data, you can pass a 2D array\n with shape `(samples, sequence_length)`, to apply a different\n weight to every timestep of every sample.\n class_weight: Optional dictionary mapping class indices (integers)\n to a weight (float) to apply to the model's loss for the samples\n from this class during training. This can be useful to tell the\n model to \"pay more attention\" to samples from an\n under-represented class. When `class_weight` is specified\n and targets have a rank of 2 or greater, either `y` must\n be one-hot encoded, or an explicit final dimension of 1\n must be included for sparse class labels.\n return_dict: If `True`, loss and metric results are returned as a\n dict, with each key being the name of the metric. If `False`,\n they are returned as a list.\n\n Returns:\n A scalar loss value (when no metrics and `return_dict=False`),\n a list of loss and metric values\n (if there are metrics and `return_dict=False`), or a dict of\n metric and loss values (if `return_dict=True`).\n \"\"\"\n raise NotImplementedError\n\n def test_on_batch(\n self,\n x,\n y=None,\n sample_weight=None,\n return_dict=False,\n ):\n \"\"\"Test the model on a single batch of samples.\n\n Args:\n x: Input data. Must be array-like.\n y: Target data. Must be array-like.\n sample_weight: Optional array of the same length as x, containing\n weights to apply to the model's loss for each sample.\n In the case of temporal data, you can pass a 2D array\n with shape `(samples, sequence_length)`, to apply a different\n weight to every timestep of every sample.\n return_dict: If `True`, loss and metric results are returned as a\n dict, with each key being the name of the metric. If `False`,\n they are returned as a list.\n\n Returns:\n A scalar loss value (when no metrics and `return_dict=False`),\n a list of loss and metric values\n (if there are metrics and `return_dict=False`), or a dict of\n metric and loss values (if `return_dict=True`).\n \"\"\"\n raise NotImplementedError\n\n def predict_on_batch(self, x):\n \"\"\"Returns predictions for a single batch of samples.\n\n Args:\n x: Input data. It must be array-like.\n\n Returns:\n NumPy array(s) of predictions.\n \"\"\"\n raise NotImplementedError\n\n def get_compile_config(self):\n \"\"\"Returns a serialized config with information for compiling the model.\n\n This method returns a config dictionary containing all the information\n (optimizer, loss, metrics, etc.) with which the model was compiled.\n\n Returns:\n A dict containing information for compiling the model.\n \"\"\"\n if self.compiled and hasattr(self, \"_compile_config\"):\n return self._compile_config.serialize()\n\n def compile_from_config(self, config):\n \"\"\"Compiles the model with the information given in config.\n\n This method uses the information in the config (optimizer, loss,\n metrics, etc.) to compile the model.\n\n Args:\n config: Dict containing information for compiling the model.\n \"\"\"\n has_overridden_compile = self.__class__.compile != Trainer.compile\n if has_overridden_compile:\n warnings.warn(\n \"`compile()` was not called as part of model loading \"\n \"because the model's `compile()` method is custom. \"\n \"All subclassed Models that have `compile()` \"\n \"overridden should also override \"\n \"`get_compile_config()` and `compile_from_config(config)`. 
\"\n \"Alternatively, you can \"\n \"call `compile()` manually after loading.\",\n stacklevel=2,\n )\n return\n config = serialization_lib.deserialize_keras_object(config)\n self.compile(**config)\n if hasattr(self, \"optimizer\") and self.built:\n # Create optimizer variables.\n self.optimizer.build(self.trainable_variables)\n\n def _should_eval(self, epoch, validation_freq):\n epoch = epoch + 1 # one-index the user-facing epoch.\n if isinstance(validation_freq, int):\n return epoch % validation_freq == 0\n elif isinstance(validation_freq, list):\n return epoch in validation_freq\n else:\n raise ValueError(\n \"Expected `validation_freq` to be a list or int. \"\n f\"Received: validation_freq={validation_freq} of the \"\n f\"type {type(validation_freq)}.\"\n )\n\n def _pythonify_logs(self, logs):\n result = {}\n for key, value in sorted(logs.items()):\n if isinstance(value, dict):\n result.update(self._pythonify_logs(value))\n else:\n try:\n value = float(value)\n except:\n pass\n result[key] = value\n return result\n\n def _flatten_metrics_in_order(self, logs):\n \"\"\"Turns `logs` dict into a list as per key order of `metrics_names`.\"\"\"\n metric_names = [m.name for m in self.metrics]\n results = []\n for name in metric_names:\n if name in logs:\n results.append(logs[name])\n for key in sorted(logs.keys()):\n if key not in metric_names:\n results.append(logs[key])\n if len(results) == 1:\n return results[0]\n return results\n\n def _assert_compile_called(self, method_name=None):\n if not self.compiled:\n msg = \"You must call `compile()` before \"\n if metrics_module:\n msg += \"using the model.\"\n else:\n msg += f\"calling `{method_name}()`.\"\n raise ValueError(msg)\n\n\ndef resolve_auto_jit_compile(model):\n if model_supports_jit(model):\n if backend.backend() == \"torch\":\n # Torch defaults to eager mode\n # until torch compile is reliable\n return False\n return True\n return False\n\n\ndef model_supports_jit(model):\n if platform.system() == \"Darwin\" and \"arm\" in platform.processor().lower():\n if backend.backend() == \"tensorflow\":\n import tensorflow as tf\n\n if tf.config.list_physical_devices(\"GPU\"):\n return False\n if all(x.supports_jit for x in model._flatten_layers()):\n return True\n return False\n", "path": "keras_core/trainers/trainer.py"}]} |
gh_patches_debug_1126 | rasdani/github-patches | git_diff | Netflix__lemur-61 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Cannot edit owner with no associated role
```
2015-08-26 20:33:36,751 ERROR: 'NoneType' object has no attribute 'name' [in /apps/lemur/lemur/common/utils.py:60]
Traceback (most recent call last):
File "/apps/lemur/lemur/common/utils.py", line 46, in wrapper
resp = f(*args, **kwargs)
File "/apps/lemur/lemur/certificates/views.py", line 575, in put
permission = UpdateCertificatePermission(certificate_id, role.name)
AttributeError: 'NoneType' object has no attribute 'name'
2015-08-26 20:34:08,236 ERROR: 'NoneType' object has no attribute 'name' [in /apps/lemur/lemur/common/utils.py:60]
Traceback (most recent call last):
File "/apps/lemur/lemur/common/utils.py", line 46, in wrapper
resp = f(*args, **kwargs)
File "/apps/lemur/lemur/certificates/views.py", line 575, in put
permission = UpdateCertificatePermission(certificate_id, role.name)
AttributeError: 'NoneType' object has no attribute 'name'
2015-08-26 20:37:19,147 ERROR: 'NoneType' object has no attribute 'name' [in /apps/lemur/lemur/common/utils.py:60]
Traceback (most recent call last):
File "/apps/lemur/lemur/common/utils.py", line 46, in wrapper
resp = f(*args, **kwargs)
File "/apps/lemur/lemur/certificates/views.py", line 575, in put
permission = UpdateCertificatePermission(certificate_id, role.name)
AttributeError: 'NoneType' object has no attribute 'name'
```
If a user enters an owner that has no associated role, they are unable to edit the owner.
--- END ISSUE ---
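For orientation before the code listing, here is a minimal, self-contained sketch of the failure mode in the traceback above and of the guarded attribute lookup that the same file already uses in `CertificatePrivateKey.get` further down in the listing. The sketch is illustrative only and is not part of the lemur codebase; `FakeRoleService` is a hypothetical stand-in for `lemur.roles.service`.

```python
# Illustrative sketch (assumption: not lemur code). get_by_name() returns
# None when the owner email has no matching role, which is the situation
# reported in the issue.
class FakeRoleService:
    def get_by_name(self, owner_email):
        return None  # no role associated with this owner


role_service = FakeRoleService()
role = role_service.get_by_name("[email protected]")

# Direct attribute access fails exactly like the traceback:
#   AttributeError: 'NoneType' object has no attribute 'name'
# role_name = role.name

# A getattr() guard degrades gracefully to None instead of raising.
role_name = getattr(role, "name", None)
print(role_name)  # -> None
```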
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lemur/certificates/views.py`
Content:
```
1 """
2 .. module: lemur.certificates.views
3 :platform: Unix
4 :copyright: (c) 2015 by Netflix Inc., see AUTHORS for more
5 :license: Apache, see LICENSE for more details.
6 .. moduleauthor:: Kevin Glisson <[email protected]>
7 """
8 from builtins import str
9
10 from flask import Blueprint, current_app, make_response, jsonify
11 from flask.ext.restful import reqparse, Api, fields
12
13 from cryptography import x509
14 from cryptography.hazmat.backends import default_backend
15 from cryptography.hazmat.primitives import serialization
16
17 from lemur.certificates import service
18 from lemur.authorities.models import Authority
19
20 from lemur.auth.service import AuthenticatedResource
21 from lemur.auth.permissions import ViewKeyPermission, AuthorityPermission, UpdateCertificatePermission
22
23 from lemur.roles import service as role_service
24
25 from lemur.common.utils import marshal_items, paginated_parser
26
27
28 mod = Blueprint('certificates', __name__)
29 api = Api(mod)
30
31
32 FIELDS = {
33 'name': fields.String,
34 'id': fields.Integer,
35 'bits': fields.Integer,
36 'deleted': fields.String,
37 'issuer': fields.String,
38 'serial': fields.String,
39 'owner': fields.String,
40 'chain': fields.String,
41 'san': fields.String,
42 'active': fields.Boolean,
43 'description': fields.String,
44 'notBefore': fields.DateTime(dt_format='iso8601', attribute='not_before'),
45 'notAfter': fields.DateTime(dt_format='iso8601', attribute='not_after'),
46 'cn': fields.String,
47 'status': fields.String,
48 'body': fields.String
49 }
50
51
52 def valid_authority(authority_options):
53 """
54 Defends against invalid authorities
55
56 :param authority_options:
57 :return: :raise ValueError:
58 """
59 name = authority_options['name']
60 authority = Authority.query.filter(Authority.name == name).one()
61
62 if not authority:
63 raise ValueError("Unable to find authority specified")
64
65 if not authority.active:
66 raise ValueError("Selected authority [{0}] is not currently active".format(name))
67
68 return authority
69
70
71 def pem_str(value, name):
72 """
73 Used to validate that the given string is a PEM formatted string
74
75 :param value:
76 :param name:
77 :return: :raise ValueError:
78 """
79 try:
80 x509.load_pem_x509_certificate(bytes(value), default_backend())
81 except Exception:
82 raise ValueError("The parameter '{0}' needs to be a valid PEM string".format(name))
83 return value
84
85
86 def private_key_str(value, name):
87 """
88 User to validate that a given string is a RSA private key
89
90 :param value:
91 :param name:
92 :return: :raise ValueError:
93 """
94 try:
95 serialization.load_pem_private_key(bytes(value), None, backend=default_backend())
96 except Exception:
97 raise ValueError("The parameter '{0}' needs to be a valid RSA private key".format(name))
98 return value
99
100
101 class CertificatesList(AuthenticatedResource):
102 """ Defines the 'certificates' endpoint """
103 def __init__(self):
104 self.reqparse = reqparse.RequestParser()
105 super(CertificatesList, self).__init__()
106
107 @marshal_items(FIELDS)
108 def get(self):
109 """
110 .. http:get:: /certificates
111
112 The current list of certificates
113
114 **Example request**:
115
116 .. sourcecode:: http
117
118 GET /certificates HTTP/1.1
119 Host: example.com
120 Accept: application/json, text/javascript
121
122 **Example response**:
123
124 .. sourcecode:: http
125
126 HTTP/1.1 200 OK
127 Vary: Accept
128 Content-Type: text/javascript
129
130 {
131 "items": [
132 {
133 "id": 1,
134 "name": "cert1",
135 "description": "this is cert1",
136 "bits": 2048,
137 "deleted": false,
138 "issuer": "ExampeInc.",
139 "serial": "123450",
140 "chain": "-----Begin ...",
141 "body": "-----Begin ...",
142 "san": true,
143 "owner": '[email protected]",
144 "active": true,
145 "notBefore": "2015-06-05T17:09:39",
146 "notAfter": "2015-06-10T17:09:39",
147 "cn": "example.com",
148 "status": "unknown"
149 }
150 ]
151 "total": 1
152 }
153
154 :query sortBy: field to sort on
155 :query sortDir: acs or desc
156 :query page: int. default is 1
157 :query filter: key value pair. format is k=v;
158 :query limit: limit number. default is 10
159 :reqheader Authorization: OAuth token to authenticate
160 :statuscode 200: no error
161 :statuscode 403: unauthenticated
162 """
163 parser = paginated_parser.copy()
164 parser.add_argument('timeRange', type=int, dest='time_range', location='args')
165 parser.add_argument('owner', type=bool, location='args')
166 parser.add_argument('id', type=str, location='args')
167 parser.add_argument('active', type=bool, location='args')
168 parser.add_argument('destinationId', type=int, dest="destination_id", location='args')
169 parser.add_argument('creator', type=str, location='args')
170 parser.add_argument('show', type=str, location='args')
171
172 args = parser.parse_args()
173 return service.render(args)
174
175 @marshal_items(FIELDS)
176 def post(self):
177 """
178 .. http:post:: /certificates
179
180 Creates a new certificate
181
182 **Example request**:
183
184 .. sourcecode:: http
185
186 POST /certificates HTTP/1.1
187 Host: example.com
188 Accept: application/json, text/javascript
189
190 {
191 "country": "US",
192 "state": "CA",
193 "location": "A Place",
194 "organization": "ExampleInc.",
195 "organizationalUnit": "Operations",
196 "owner": "[email protected]",
197 "description": "test",
198 "selectedAuthority": "timetest2",
199 "authority": {
200 "body": "-----BEGIN...",
201 "name": "timetest2",
202 "chain": "",
203 "notBefore": "2015-06-05T15:20:59",
204 "active": true,
205 "id": 50,
206 "notAfter": "2015-06-17T15:21:08",
207 "description": "dsfdsf"
208 },
209 "extensions": {
210 "basicConstraints": {},
211 "keyUsage": {
212 "isCritical": true,
213 "useKeyEncipherment": true,
214 "useDigitalSignature": true
215 },
216 "extendedKeyUsage": {
217 "isCritical": true,
218 "useServerAuthentication": true
219 },
220 "subjectKeyIdentifier": {
221 "includeSKI": true
222 },
223 "subAltNames": {
224 "names": []
225 }
226 },
227 "commonName": "test",
228 "validityStart": "2015-06-05T07:00:00.000Z",
229 "validityEnd": "2015-06-16T07:00:00.000Z"
230 }
231
232 **Example response**:
233
234 .. sourcecode:: http
235
236 HTTP/1.1 200 OK
237 Vary: Accept
238 Content-Type: text/javascript
239
240 {
241 "id": 1,
242 "name": "cert1",
243 "description": "this is cert1",
244 "bits": 2048,
245 "deleted": false,
246 "issuer": "ExampeInc.",
247 "serial": "123450",
248 "chain": "-----Begin ...",
249 "body": "-----Begin ...",
250 "san": true,
251 "owner": "[email protected]",
252 "active": false,
253 "notBefore": "2015-06-05T17:09:39",
254 "notAfter": "2015-06-10T17:09:39",
255 "cn": "example.com",
256 "status": "unknown"
257 }
258
259 :arg extensions: extensions to be used in the certificate
260 :arg description: description for new certificate
261 :arg owner: owner email
262 :arg validityStart: when the certificate should start being valid
263 :arg validityEnd: when the certificate should expire
264 :arg authority: authority that should issue the certificate
265 :arg country: country for the CSR
266 :arg state: state for the CSR
267 :arg location: location for the CSR
268 :arg organization: organization for CSR
269 :arg commonName: certiifcate common name
270 :reqheader Authorization: OAuth token to authenticate
271 :statuscode 200: no error
272 :statuscode 403: unauthenticated
273 """
274 self.reqparse.add_argument('extensions', type=dict, location='json')
275 self.reqparse.add_argument('destinations', type=list, default=[], location='json')
276 self.reqparse.add_argument('notifications', type=list, default=[], location='json')
277 self.reqparse.add_argument('owner', type=str, location='json')
278 self.reqparse.add_argument('validityStart', type=str, location='json') # TODO validate
279 self.reqparse.add_argument('validityEnd', type=str, location='json') # TODO validate
280 self.reqparse.add_argument('authority', type=valid_authority, location='json')
281 self.reqparse.add_argument('description', type=str, location='json')
282 self.reqparse.add_argument('country', type=str, location='json')
283 self.reqparse.add_argument('state', type=str, location='json')
284 self.reqparse.add_argument('location', type=str, location='json')
285 self.reqparse.add_argument('organization', type=str, location='json')
286 self.reqparse.add_argument('organizationalUnit', type=str, location='json')
287 self.reqparse.add_argument('owner', type=str, location='json')
288 self.reqparse.add_argument('commonName', type=str, location='json')
289
290 args = self.reqparse.parse_args()
291
292 authority = args['authority']
293 role = role_service.get_by_name(authority.owner)
294
295 # all the authority role members should be allowed
296 roles = [x.name for x in authority.roles]
297
298 # allow "owner" roles by team DL
299 roles.append(role)
300 permission = AuthorityPermission(authority.id, roles)
301
302 if permission.can():
303 return service.create(**args)
304
305 return dict(message="You are not authorized to use {0}".format(args['authority'].name)), 403
306
307
308 class CertificatesUpload(AuthenticatedResource):
309 """ Defines the 'certificates' upload endpoint """
310 def __init__(self):
311 self.reqparse = reqparse.RequestParser()
312 super(CertificatesUpload, self).__init__()
313
314 @marshal_items(FIELDS)
315 def post(self):
316 """
317 .. http:post:: /certificates/upload
318
319 Upload a certificate
320
321 **Example request**:
322
323 .. sourcecode:: http
324
325 POST /certificates/upload HTTP/1.1
326 Host: example.com
327 Accept: application/json, text/javascript
328
329 {
330 "owner": "[email protected]",
331 "publicCert": "---Begin Public...",
332 "intermediateCert": "---Begin Public...",
333 "privateKey": "---Begin Private..."
334 "destinations": [],
335 "notifications": [],
336 "name": "cert1"
337 }
338
339 **Example response**:
340
341 .. sourcecode:: http
342
343 HTTP/1.1 200 OK
344 Vary: Accept
345 Content-Type: text/javascript
346
347 {
348 "id": 1,
349 "name": "cert1",
350 "description": "this is cert1",
351 "bits": 2048,
352 "deleted": false,
353 "issuer": "ExampeInc.",
354 "serial": "123450",
355 "chain": "-----Begin ...",
356 "body": "-----Begin ...",
357 "san": true,
358 "owner": "[email protected]",
359 "active": true,
360 "notBefore": "2015-06-05T17:09:39",
361 "notAfter": "2015-06-10T17:09:39",
362 "cn": "example.com",
363 "status": "unknown"
364 }
365
366 :arg owner: owner email for certificate
367 :arg publicCert: valid PEM public key for certificate
368 :arg intermediateCert valid PEM intermediate key for certificate
369 :arg privateKey: valid PEM private key for certificate
370 :arg destinations: list of aws destinations to upload the certificate to
371 :reqheader Authorization: OAuth token to authenticate
372 :statuscode 403: unauthenticated
373 :statuscode 200: no error
374 """
375 self.reqparse.add_argument('description', type=str, location='json')
376 self.reqparse.add_argument('owner', type=str, required=True, location='json')
377 self.reqparse.add_argument('name', type=str, location='json')
378 self.reqparse.add_argument('publicCert', type=pem_str, required=True, dest='public_cert', location='json')
379 self.reqparse.add_argument('destinations', type=list, default=[], dest='destinations', location='json')
380 self.reqparse.add_argument('notifications', type=list, default=[], dest='notifications', location='json')
381 self.reqparse.add_argument('intermediateCert', type=pem_str, dest='intermediate_cert', location='json')
382 self.reqparse.add_argument('privateKey', type=private_key_str, dest='private_key', location='json')
383
384 args = self.reqparse.parse_args()
385 if args.get('destinations'):
386 if args.get('private_key'):
387 return service.upload(**args)
388 else:
389 raise Exception("Private key must be provided in order to upload certificate to AWS")
390 return service.upload(**args)
391
392
393 class CertificatesStats(AuthenticatedResource):
394 """ Defines the 'certificates' stats endpoint """
395 def __init__(self):
396 self.reqparse = reqparse.RequestParser()
397 super(CertificatesStats, self).__init__()
398
399 def get(self):
400 self.reqparse.add_argument('metric', type=str, location='args')
401 self.reqparse.add_argument('range', default=32, type=int, location='args')
402 self.reqparse.add_argument('destinationId', dest='destination_id', location='args')
403 self.reqparse.add_argument('active', type=str, default='true', location='args')
404
405 args = self.reqparse.parse_args()
406
407 items = service.stats(**args)
408 return dict(items=items, total=len(items))
409
410
411 class CertificatePrivateKey(AuthenticatedResource):
412 def __init__(self):
413 super(CertificatePrivateKey, self).__init__()
414
415 def get(self, certificate_id):
416 """
417 .. http:get:: /certificates/1/key
418
419 Retrieves the private key for a given certificate
420
421 **Example request**:
422
423 .. sourcecode:: http
424
425 GET /certificates/1/key HTTP/1.1
426 Host: example.com
427 Accept: application/json, text/javascript
428
429 **Example response**:
430
431 .. sourcecode:: http
432
433 HTTP/1.1 200 OK
434 Vary: Accept
435 Content-Type: text/javascript
436
437 {
438 "key": "----Begin ...",
439 }
440
441 :reqheader Authorization: OAuth token to authenticate
442 :statuscode 200: no error
443 :statuscode 403: unauthenticated
444 """
445 cert = service.get(certificate_id)
446 if not cert:
447 return dict(message="Cannot find specified certificate"), 404
448
449 role = role_service.get_by_name(cert.owner)
450
451 permission = ViewKeyPermission(certificate_id, getattr(role, 'name', None))
452
453 if permission.can():
454 response = make_response(jsonify(key=cert.private_key), 200)
455 response.headers['cache-control'] = 'private, max-age=0, no-cache, no-store'
456 response.headers['pragma'] = 'no-cache'
457 return response
458
459 return dict(message='You are not authorized to view this key'), 403
460
461
462 class Certificates(AuthenticatedResource):
463 def __init__(self):
464 self.reqparse = reqparse.RequestParser()
465 super(Certificates, self).__init__()
466
467 @marshal_items(FIELDS)
468 def get(self, certificate_id):
469 """
470 .. http:get:: /certificates/1
471
472 One certificate
473
474 **Example request**:
475
476 .. sourcecode:: http
477
478 GET /certificates/1 HTTP/1.1
479 Host: example.com
480 Accept: application/json, text/javascript
481
482 **Example response**:
483
484 .. sourcecode:: http
485
486 HTTP/1.1 200 OK
487 Vary: Accept
488 Content-Type: text/javascript
489
490 {
491 "id": 1,
492 "name": "cert1",
493 "description": "this is cert1",
494 "bits": 2048,
495 "deleted": false,
496 "issuer": "ExampeInc.",
497 "serial": "123450",
498 "chain": "-----Begin ...",
499 "body": "-----Begin ...",
500 "san": true,
501 "owner": "[email protected]",
502 "active": true,
503 "notBefore": "2015-06-05T17:09:39",
504 "notAfter": "2015-06-10T17:09:39",
505 "cn": "example.com",
506 "status": "unknown"
507 }
508
509 :reqheader Authorization: OAuth token to authenticate
510 :statuscode 200: no error
511 :statuscode 403: unauthenticated
512 """
513 return service.get(certificate_id)
514
515 @marshal_items(FIELDS)
516 def put(self, certificate_id):
517 """
518 .. http:put:: /certificates/1
519
520 Update a certificate
521
522 **Example request**:
523
524 .. sourcecode:: http
525
526 PUT /certificates/1 HTTP/1.1
527 Host: example.com
528 Accept: application/json, text/javascript
529
530 {
531 "owner": "[email protected]",
532 "active": false
533 "notifications": [],
534 "destinations": []
535 }
536
537 **Example response**:
538
539 .. sourcecode:: http
540
541 HTTP/1.1 200 OK
542 Vary: Accept
543 Content-Type: text/javascript
544
545 {
546 "id": 1,
547 "name": "cert1",
548 "description": "this is cert1",
549 "bits": 2048,
550 "deleted": false,
551 "issuer": "ExampeInc.",
552 "serial": "123450",
553 "chain": "-----Begin ...",
554 "body": "-----Begin ...",
555 "san": true,
556 "owner": "[email protected]",
557 "active": false,
558 "notBefore": "2015-06-05T17:09:39",
559 "notAfter": "2015-06-10T17:09:39",
560 "cn": "example.com",
561 "status": "unknown",
562 }
563
564 :reqheader Authorization: OAuth token to authenticate
565 :statuscode 200: no error
566 :statuscode 403: unauthenticated
567 """
568 self.reqparse.add_argument('active', type=bool, location='json')
569 self.reqparse.add_argument('owner', type=str, location='json')
570 self.reqparse.add_argument('description', type=str, location='json')
571 self.reqparse.add_argument('destinations', type=list, default=[], location='json')
572 self.reqparse.add_argument('notifications', type=list, default=[], location='json')
573 args = self.reqparse.parse_args()
574
575 cert = service.get(certificate_id)
576 role = role_service.get_by_name(cert.owner)
577 permission = UpdateCertificatePermission(certificate_id, role.name)
578
579 if permission.can():
580 return service.update(
581 certificate_id,
582 args['owner'],
583 args['description'],
584 args['active'],
585 args['destinations'],
586 args['notifications']
587 )
588
589 return dict(message='You are not authorized to update this certificate'), 403
590
591
592 class NotificationCertificatesList(AuthenticatedResource):
593 """ Defines the 'certificates' endpoint """
594 def __init__(self):
595 self.reqparse = reqparse.RequestParser()
596 super(NotificationCertificatesList, self).__init__()
597
598 @marshal_items(FIELDS)
599 def get(self, notification_id):
600 """
601 .. http:get:: /notifications/1/certificates
602
603 The current list of certificates for a given notification
604
605 **Example request**:
606
607 .. sourcecode:: http
608
609 GET /notifications/1/certificates HTTP/1.1
610 Host: example.com
611 Accept: application/json, text/javascript
612
613 **Example response**:
614
615 .. sourcecode:: http
616
617 HTTP/1.1 200 OK
618 Vary: Accept
619 Content-Type: text/javascript
620
621 {
622 "items": [
623 {
624 "id": 1,
625 "name": "cert1",
626 "description": "this is cert1",
627 "bits": 2048,
628 "deleted": false,
629 "issuer": "ExampeInc.",
630 "serial": "123450",
631 "chain": "-----Begin ...",
632 "body": "-----Begin ...",
633 "san": true,
634 "owner": '[email protected]",
635 "active": true,
636 "notBefore": "2015-06-05T17:09:39",
637 "notAfter": "2015-06-10T17:09:39",
638 "cn": "example.com",
639 "status": "unknown"
640 }
641 ]
642 "total": 1
643 }
644
645 :query sortBy: field to sort on
646 :query sortDir: acs or desc
647 :query page: int. default is 1
648 :query filter: key value pair. format is k=v;
649 :query limit: limit number. default is 10
650 :reqheader Authorization: OAuth token to authenticate
651 :statuscode 200: no error
652 :statuscode 403: unauthenticated
653 """
654 parser = paginated_parser.copy()
655 parser.add_argument('timeRange', type=int, dest='time_range', location='args')
656 parser.add_argument('owner', type=bool, location='args')
657 parser.add_argument('id', type=str, location='args')
658 parser.add_argument('active', type=bool, location='args')
659 parser.add_argument('destinationId', type=int, dest="destination_id", location='args')
660 parser.add_argument('creator', type=str, location='args')
661 parser.add_argument('show', type=str, location='args')
662
663 args = parser.parse_args()
664 args['notification_id'] = notification_id
665 return service.render(args)
666
667
668 class CertificatesDefaults(AuthenticatedResource):
669 """ Defineds the 'certificates' defaults endpoint """
670 def __init__(self):
671 super(CertificatesDefaults)
672
673 def get(self):
674 """
675 .. http:get:: /certificates/defaults
676
677 Returns defaults needed to generate CSRs
678
679 **Example request**:
680
681 .. sourcecode:: http
682
683 GET /certificates/defaults HTTP/1.1
684 Host: example.com
685 Accept: application/json, text/javascript
686
687 **Example response**:
688
689 .. sourcecode:: http
690
691 HTTP/1.1 200 OK
692 Vary: Accept
693 Content-Type: text/javascript
694
695 {
696 "country": "US",
697 "state": "CA",
698 "location": "Los Gatos",
699 "organization": "Netflix",
700 "organizationalUnit": "Operations"
701 }
702
703 :reqheader Authorization: OAuth token to authenticate
704 :statuscode 200: no error
705 :statuscode 403: unauthenticated
706 """
707 return dict(
708 country=current_app.config.get('LEMUR_DEFAULT_COUNTRY'),
709 state=current_app.config.get('LEMUR_DEFAULT_STATE'),
710 location=current_app.config.get('LEMUR_DEFAULT_LOCATION'),
711 organization=current_app.config.get('LEMUR_DEFAULT_ORGANIZATION'),
712 organizationalUnit=current_app.config.get('LEMUR_DEFAULT_ORGANIZATIONAL_UNIT')
713 )
714
715
716 api.add_resource(CertificatesList, '/certificates', endpoint='certificates')
717 api.add_resource(Certificates, '/certificates/<int:certificate_id>', endpoint='certificate')
718 api.add_resource(CertificatesStats, '/certificates/stats', endpoint='certificateStats')
719 api.add_resource(CertificatesUpload, '/certificates/upload', endpoint='certificateUpload')
720 api.add_resource(CertificatePrivateKey, '/certificates/<int:certificate_id>/key', endpoint='privateKeyCertificates')
721 api.add_resource(NotificationCertificatesList, '/notifications/<int:notification_id>/certificates', endpoint='notificationCertificates')
722 api.add_resource(CertificatesDefaults, '/certificates/defaults', endpoint='certificatesDefault')
723
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lemur/certificates/views.py b/lemur/certificates/views.py
--- a/lemur/certificates/views.py
+++ b/lemur/certificates/views.py
@@ -572,7 +572,8 @@
cert = service.get(certificate_id)
role = role_service.get_by_name(cert.owner)
- permission = UpdateCertificatePermission(certificate_id, role.name)
+
+ permission = UpdateCertificatePermission(certificate_id, getattr(role, 'name', None))
if permission.can():
return service.update(
| {"golden_diff": "diff --git a/lemur/certificates/views.py b/lemur/certificates/views.py\n--- a/lemur/certificates/views.py\n+++ b/lemur/certificates/views.py\n@@ -572,7 +572,8 @@\n \n cert = service.get(certificate_id)\n role = role_service.get_by_name(cert.owner)\n- permission = UpdateCertificatePermission(certificate_id, role.name)\n+\n+ permission = UpdateCertificatePermission(certificate_id, getattr(role, 'name', None))\n \n if permission.can():\n return service.update(\n", "issue": "Cannot edit owner with no associated role\n```\n2015-08-26 20:33:36,751 ERROR: 'NoneType' object has no attribute 'name' [in /apps/lemur/lemur/common/utils.py:60]\nTraceback (most recent call last):\n File \"/apps/lemur/lemur/common/utils.py\", line 46, in wrapper\n resp = f(*args, **kwargs)\n File \"/apps/lemur/lemur/certificates/views.py\", line 575, in put\n permission = UpdateCertificatePermission(certificate_id, role.name)\nAttributeError: 'NoneType' object has no attribute 'name'\n2015-08-26 20:34:08,236 ERROR: 'NoneType' object has no attribute 'name' [in /apps/lemur/lemur/common/utils.py:60]\nTraceback (most recent call last):\n File \"/apps/lemur/lemur/common/utils.py\", line 46, in wrapper\n resp = f(*args, **kwargs)\n File \"/apps/lemur/lemur/certificates/views.py\", line 575, in put\n permission = UpdateCertificatePermission(certificate_id, role.name)\nAttributeError: 'NoneType' object has no attribute 'name'\n2015-08-26 20:37:19,147 ERROR: 'NoneType' object has no attribute 'name' [in /apps/lemur/lemur/common/utils.py:60]\nTraceback (most recent call last):\n File \"/apps/lemur/lemur/common/utils.py\", line 46, in wrapper\n resp = f(*args, **kwargs)\n File \"/apps/lemur/lemur/certificates/views.py\", line 575, in put\n permission = UpdateCertificatePermission(certificate_id, role.name)\nAttributeError: 'NoneType' object has no attribute 'name'\n```\n\nIf user enters a owner that has no associated role with it, they are unable to edit the owner.\n\n", "before_files": [{"content": "\"\"\"\n.. module: lemur.certificates.views\n :platform: Unix\n :copyright: (c) 2015 by Netflix Inc., see AUTHORS for more\n :license: Apache, see LICENSE for more details.\n.. 
moduleauthor:: Kevin Glisson <[email protected]>\n\"\"\"\nfrom builtins import str\n\nfrom flask import Blueprint, current_app, make_response, jsonify\nfrom flask.ext.restful import reqparse, Api, fields\n\nfrom cryptography import x509\nfrom cryptography.hazmat.backends import default_backend\nfrom cryptography.hazmat.primitives import serialization\n\nfrom lemur.certificates import service\nfrom lemur.authorities.models import Authority\n\nfrom lemur.auth.service import AuthenticatedResource\nfrom lemur.auth.permissions import ViewKeyPermission, AuthorityPermission, UpdateCertificatePermission\n\nfrom lemur.roles import service as role_service\n\nfrom lemur.common.utils import marshal_items, paginated_parser\n\n\nmod = Blueprint('certificates', __name__)\napi = Api(mod)\n\n\nFIELDS = {\n 'name': fields.String,\n 'id': fields.Integer,\n 'bits': fields.Integer,\n 'deleted': fields.String,\n 'issuer': fields.String,\n 'serial': fields.String,\n 'owner': fields.String,\n 'chain': fields.String,\n 'san': fields.String,\n 'active': fields.Boolean,\n 'description': fields.String,\n 'notBefore': fields.DateTime(dt_format='iso8601', attribute='not_before'),\n 'notAfter': fields.DateTime(dt_format='iso8601', attribute='not_after'),\n 'cn': fields.String,\n 'status': fields.String,\n 'body': fields.String\n}\n\n\ndef valid_authority(authority_options):\n \"\"\"\n Defends against invalid authorities\n\n :param authority_options:\n :return: :raise ValueError:\n \"\"\"\n name = authority_options['name']\n authority = Authority.query.filter(Authority.name == name).one()\n\n if not authority:\n raise ValueError(\"Unable to find authority specified\")\n\n if not authority.active:\n raise ValueError(\"Selected authority [{0}] is not currently active\".format(name))\n\n return authority\n\n\ndef pem_str(value, name):\n \"\"\"\n Used to validate that the given string is a PEM formatted string\n\n :param value:\n :param name:\n :return: :raise ValueError:\n \"\"\"\n try:\n x509.load_pem_x509_certificate(bytes(value), default_backend())\n except Exception:\n raise ValueError(\"The parameter '{0}' needs to be a valid PEM string\".format(name))\n return value\n\n\ndef private_key_str(value, name):\n \"\"\"\n User to validate that a given string is a RSA private key\n\n :param value:\n :param name:\n :return: :raise ValueError:\n \"\"\"\n try:\n serialization.load_pem_private_key(bytes(value), None, backend=default_backend())\n except Exception:\n raise ValueError(\"The parameter '{0}' needs to be a valid RSA private key\".format(name))\n return value\n\n\nclass CertificatesList(AuthenticatedResource):\n \"\"\" Defines the 'certificates' endpoint \"\"\"\n def __init__(self):\n self.reqparse = reqparse.RequestParser()\n super(CertificatesList, self).__init__()\n\n @marshal_items(FIELDS)\n def get(self):\n \"\"\"\n .. http:get:: /certificates\n\n The current list of certificates\n\n **Example request**:\n\n .. sourcecode:: http\n\n GET /certificates HTTP/1.1\n Host: example.com\n Accept: application/json, text/javascript\n\n **Example response**:\n\n .. 
sourcecode:: http\n\n HTTP/1.1 200 OK\n Vary: Accept\n Content-Type: text/javascript\n\n {\n \"items\": [\n {\n \"id\": 1,\n \"name\": \"cert1\",\n \"description\": \"this is cert1\",\n \"bits\": 2048,\n \"deleted\": false,\n \"issuer\": \"ExampeInc.\",\n \"serial\": \"123450\",\n \"chain\": \"-----Begin ...\",\n \"body\": \"-----Begin ...\",\n \"san\": true,\n \"owner\": '[email protected]\",\n \"active\": true,\n \"notBefore\": \"2015-06-05T17:09:39\",\n \"notAfter\": \"2015-06-10T17:09:39\",\n \"cn\": \"example.com\",\n \"status\": \"unknown\"\n }\n ]\n \"total\": 1\n }\n\n :query sortBy: field to sort on\n :query sortDir: acs or desc\n :query page: int. default is 1\n :query filter: key value pair. format is k=v;\n :query limit: limit number. default is 10\n :reqheader Authorization: OAuth token to authenticate\n :statuscode 200: no error\n :statuscode 403: unauthenticated\n \"\"\"\n parser = paginated_parser.copy()\n parser.add_argument('timeRange', type=int, dest='time_range', location='args')\n parser.add_argument('owner', type=bool, location='args')\n parser.add_argument('id', type=str, location='args')\n parser.add_argument('active', type=bool, location='args')\n parser.add_argument('destinationId', type=int, dest=\"destination_id\", location='args')\n parser.add_argument('creator', type=str, location='args')\n parser.add_argument('show', type=str, location='args')\n\n args = parser.parse_args()\n return service.render(args)\n\n @marshal_items(FIELDS)\n def post(self):\n \"\"\"\n .. http:post:: /certificates\n\n Creates a new certificate\n\n **Example request**:\n\n .. sourcecode:: http\n\n POST /certificates HTTP/1.1\n Host: example.com\n Accept: application/json, text/javascript\n\n {\n \"country\": \"US\",\n \"state\": \"CA\",\n \"location\": \"A Place\",\n \"organization\": \"ExampleInc.\",\n \"organizationalUnit\": \"Operations\",\n \"owner\": \"[email protected]\",\n \"description\": \"test\",\n \"selectedAuthority\": \"timetest2\",\n \"authority\": {\n \"body\": \"-----BEGIN...\",\n \"name\": \"timetest2\",\n \"chain\": \"\",\n \"notBefore\": \"2015-06-05T15:20:59\",\n \"active\": true,\n \"id\": 50,\n \"notAfter\": \"2015-06-17T15:21:08\",\n \"description\": \"dsfdsf\"\n },\n \"extensions\": {\n \"basicConstraints\": {},\n \"keyUsage\": {\n \"isCritical\": true,\n \"useKeyEncipherment\": true,\n \"useDigitalSignature\": true\n },\n \"extendedKeyUsage\": {\n \"isCritical\": true,\n \"useServerAuthentication\": true\n },\n \"subjectKeyIdentifier\": {\n \"includeSKI\": true\n },\n \"subAltNames\": {\n \"names\": []\n }\n },\n \"commonName\": \"test\",\n \"validityStart\": \"2015-06-05T07:00:00.000Z\",\n \"validityEnd\": \"2015-06-16T07:00:00.000Z\"\n }\n\n **Example response**:\n\n .. 
sourcecode:: http\n\n HTTP/1.1 200 OK\n Vary: Accept\n Content-Type: text/javascript\n\n {\n \"id\": 1,\n \"name\": \"cert1\",\n \"description\": \"this is cert1\",\n \"bits\": 2048,\n \"deleted\": false,\n \"issuer\": \"ExampeInc.\",\n \"serial\": \"123450\",\n \"chain\": \"-----Begin ...\",\n \"body\": \"-----Begin ...\",\n \"san\": true,\n \"owner\": \"[email protected]\",\n \"active\": false,\n \"notBefore\": \"2015-06-05T17:09:39\",\n \"notAfter\": \"2015-06-10T17:09:39\",\n \"cn\": \"example.com\",\n \"status\": \"unknown\"\n }\n\n :arg extensions: extensions to be used in the certificate\n :arg description: description for new certificate\n :arg owner: owner email\n :arg validityStart: when the certificate should start being valid\n :arg validityEnd: when the certificate should expire\n :arg authority: authority that should issue the certificate\n :arg country: country for the CSR\n :arg state: state for the CSR\n :arg location: location for the CSR\n :arg organization: organization for CSR\n :arg commonName: certiifcate common name\n :reqheader Authorization: OAuth token to authenticate\n :statuscode 200: no error\n :statuscode 403: unauthenticated\n \"\"\"\n self.reqparse.add_argument('extensions', type=dict, location='json')\n self.reqparse.add_argument('destinations', type=list, default=[], location='json')\n self.reqparse.add_argument('notifications', type=list, default=[], location='json')\n self.reqparse.add_argument('owner', type=str, location='json')\n self.reqparse.add_argument('validityStart', type=str, location='json') # TODO validate\n self.reqparse.add_argument('validityEnd', type=str, location='json') # TODO validate\n self.reqparse.add_argument('authority', type=valid_authority, location='json')\n self.reqparse.add_argument('description', type=str, location='json')\n self.reqparse.add_argument('country', type=str, location='json')\n self.reqparse.add_argument('state', type=str, location='json')\n self.reqparse.add_argument('location', type=str, location='json')\n self.reqparse.add_argument('organization', type=str, location='json')\n self.reqparse.add_argument('organizationalUnit', type=str, location='json')\n self.reqparse.add_argument('owner', type=str, location='json')\n self.reqparse.add_argument('commonName', type=str, location='json')\n\n args = self.reqparse.parse_args()\n\n authority = args['authority']\n role = role_service.get_by_name(authority.owner)\n\n # all the authority role members should be allowed\n roles = [x.name for x in authority.roles]\n\n # allow \"owner\" roles by team DL\n roles.append(role)\n permission = AuthorityPermission(authority.id, roles)\n\n if permission.can():\n return service.create(**args)\n\n return dict(message=\"You are not authorized to use {0}\".format(args['authority'].name)), 403\n\n\nclass CertificatesUpload(AuthenticatedResource):\n \"\"\" Defines the 'certificates' upload endpoint \"\"\"\n def __init__(self):\n self.reqparse = reqparse.RequestParser()\n super(CertificatesUpload, self).__init__()\n\n @marshal_items(FIELDS)\n def post(self):\n \"\"\"\n .. http:post:: /certificates/upload\n\n Upload a certificate\n\n **Example request**:\n\n .. 
sourcecode:: http\n\n POST /certificates/upload HTTP/1.1\n Host: example.com\n Accept: application/json, text/javascript\n\n {\n \"owner\": \"[email protected]\",\n \"publicCert\": \"---Begin Public...\",\n \"intermediateCert\": \"---Begin Public...\",\n \"privateKey\": \"---Begin Private...\"\n \"destinations\": [],\n \"notifications\": [],\n \"name\": \"cert1\"\n }\n\n **Example response**:\n\n .. sourcecode:: http\n\n HTTP/1.1 200 OK\n Vary: Accept\n Content-Type: text/javascript\n\n {\n \"id\": 1,\n \"name\": \"cert1\",\n \"description\": \"this is cert1\",\n \"bits\": 2048,\n \"deleted\": false,\n \"issuer\": \"ExampeInc.\",\n \"serial\": \"123450\",\n \"chain\": \"-----Begin ...\",\n \"body\": \"-----Begin ...\",\n \"san\": true,\n \"owner\": \"[email protected]\",\n \"active\": true,\n \"notBefore\": \"2015-06-05T17:09:39\",\n \"notAfter\": \"2015-06-10T17:09:39\",\n \"cn\": \"example.com\",\n \"status\": \"unknown\"\n }\n\n :arg owner: owner email for certificate\n :arg publicCert: valid PEM public key for certificate\n :arg intermediateCert valid PEM intermediate key for certificate\n :arg privateKey: valid PEM private key for certificate\n :arg destinations: list of aws destinations to upload the certificate to\n :reqheader Authorization: OAuth token to authenticate\n :statuscode 403: unauthenticated\n :statuscode 200: no error\n \"\"\"\n self.reqparse.add_argument('description', type=str, location='json')\n self.reqparse.add_argument('owner', type=str, required=True, location='json')\n self.reqparse.add_argument('name', type=str, location='json')\n self.reqparse.add_argument('publicCert', type=pem_str, required=True, dest='public_cert', location='json')\n self.reqparse.add_argument('destinations', type=list, default=[], dest='destinations', location='json')\n self.reqparse.add_argument('notifications', type=list, default=[], dest='notifications', location='json')\n self.reqparse.add_argument('intermediateCert', type=pem_str, dest='intermediate_cert', location='json')\n self.reqparse.add_argument('privateKey', type=private_key_str, dest='private_key', location='json')\n\n args = self.reqparse.parse_args()\n if args.get('destinations'):\n if args.get('private_key'):\n return service.upload(**args)\n else:\n raise Exception(\"Private key must be provided in order to upload certificate to AWS\")\n return service.upload(**args)\n\n\nclass CertificatesStats(AuthenticatedResource):\n \"\"\" Defines the 'certificates' stats endpoint \"\"\"\n def __init__(self):\n self.reqparse = reqparse.RequestParser()\n super(CertificatesStats, self).__init__()\n\n def get(self):\n self.reqparse.add_argument('metric', type=str, location='args')\n self.reqparse.add_argument('range', default=32, type=int, location='args')\n self.reqparse.add_argument('destinationId', dest='destination_id', location='args')\n self.reqparse.add_argument('active', type=str, default='true', location='args')\n\n args = self.reqparse.parse_args()\n\n items = service.stats(**args)\n return dict(items=items, total=len(items))\n\n\nclass CertificatePrivateKey(AuthenticatedResource):\n def __init__(self):\n super(CertificatePrivateKey, self).__init__()\n\n def get(self, certificate_id):\n \"\"\"\n .. http:get:: /certificates/1/key\n\n Retrieves the private key for a given certificate\n\n **Example request**:\n\n .. sourcecode:: http\n\n GET /certificates/1/key HTTP/1.1\n Host: example.com\n Accept: application/json, text/javascript\n\n **Example response**:\n\n .. 
sourcecode:: http\n\n HTTP/1.1 200 OK\n Vary: Accept\n Content-Type: text/javascript\n\n {\n \"key\": \"----Begin ...\",\n }\n\n :reqheader Authorization: OAuth token to authenticate\n :statuscode 200: no error\n :statuscode 403: unauthenticated\n \"\"\"\n cert = service.get(certificate_id)\n if not cert:\n return dict(message=\"Cannot find specified certificate\"), 404\n\n role = role_service.get_by_name(cert.owner)\n\n permission = ViewKeyPermission(certificate_id, getattr(role, 'name', None))\n\n if permission.can():\n response = make_response(jsonify(key=cert.private_key), 200)\n response.headers['cache-control'] = 'private, max-age=0, no-cache, no-store'\n response.headers['pragma'] = 'no-cache'\n return response\n\n return dict(message='You are not authorized to view this key'), 403\n\n\nclass Certificates(AuthenticatedResource):\n def __init__(self):\n self.reqparse = reqparse.RequestParser()\n super(Certificates, self).__init__()\n\n @marshal_items(FIELDS)\n def get(self, certificate_id):\n \"\"\"\n .. http:get:: /certificates/1\n\n One certificate\n\n **Example request**:\n\n .. sourcecode:: http\n\n GET /certificates/1 HTTP/1.1\n Host: example.com\n Accept: application/json, text/javascript\n\n **Example response**:\n\n .. sourcecode:: http\n\n HTTP/1.1 200 OK\n Vary: Accept\n Content-Type: text/javascript\n\n {\n \"id\": 1,\n \"name\": \"cert1\",\n \"description\": \"this is cert1\",\n \"bits\": 2048,\n \"deleted\": false,\n \"issuer\": \"ExampeInc.\",\n \"serial\": \"123450\",\n \"chain\": \"-----Begin ...\",\n \"body\": \"-----Begin ...\",\n \"san\": true,\n \"owner\": \"[email protected]\",\n \"active\": true,\n \"notBefore\": \"2015-06-05T17:09:39\",\n \"notAfter\": \"2015-06-10T17:09:39\",\n \"cn\": \"example.com\",\n \"status\": \"unknown\"\n }\n\n :reqheader Authorization: OAuth token to authenticate\n :statuscode 200: no error\n :statuscode 403: unauthenticated\n \"\"\"\n return service.get(certificate_id)\n\n @marshal_items(FIELDS)\n def put(self, certificate_id):\n \"\"\"\n .. http:put:: /certificates/1\n\n Update a certificate\n\n **Example request**:\n\n .. sourcecode:: http\n\n PUT /certificates/1 HTTP/1.1\n Host: example.com\n Accept: application/json, text/javascript\n\n {\n \"owner\": \"[email protected]\",\n \"active\": false\n \"notifications\": [],\n \"destinations\": []\n }\n\n **Example response**:\n\n .. 
sourcecode:: http\n\n HTTP/1.1 200 OK\n Vary: Accept\n Content-Type: text/javascript\n\n {\n \"id\": 1,\n \"name\": \"cert1\",\n \"description\": \"this is cert1\",\n \"bits\": 2048,\n \"deleted\": false,\n \"issuer\": \"ExampeInc.\",\n \"serial\": \"123450\",\n \"chain\": \"-----Begin ...\",\n \"body\": \"-----Begin ...\",\n \"san\": true,\n \"owner\": \"[email protected]\",\n \"active\": false,\n \"notBefore\": \"2015-06-05T17:09:39\",\n \"notAfter\": \"2015-06-10T17:09:39\",\n \"cn\": \"example.com\",\n \"status\": \"unknown\",\n }\n\n :reqheader Authorization: OAuth token to authenticate\n :statuscode 200: no error\n :statuscode 403: unauthenticated\n \"\"\"\n self.reqparse.add_argument('active', type=bool, location='json')\n self.reqparse.add_argument('owner', type=str, location='json')\n self.reqparse.add_argument('description', type=str, location='json')\n self.reqparse.add_argument('destinations', type=list, default=[], location='json')\n self.reqparse.add_argument('notifications', type=list, default=[], location='json')\n args = self.reqparse.parse_args()\n\n cert = service.get(certificate_id)\n role = role_service.get_by_name(cert.owner)\n permission = UpdateCertificatePermission(certificate_id, role.name)\n\n if permission.can():\n return service.update(\n certificate_id,\n args['owner'],\n args['description'],\n args['active'],\n args['destinations'],\n args['notifications']\n )\n\n return dict(message='You are not authorized to update this certificate'), 403\n\n\nclass NotificationCertificatesList(AuthenticatedResource):\n \"\"\" Defines the 'certificates' endpoint \"\"\"\n def __init__(self):\n self.reqparse = reqparse.RequestParser()\n super(NotificationCertificatesList, self).__init__()\n\n @marshal_items(FIELDS)\n def get(self, notification_id):\n \"\"\"\n .. http:get:: /notifications/1/certificates\n\n The current list of certificates for a given notification\n\n **Example request**:\n\n .. sourcecode:: http\n\n GET /notifications/1/certificates HTTP/1.1\n Host: example.com\n Accept: application/json, text/javascript\n\n **Example response**:\n\n .. sourcecode:: http\n\n HTTP/1.1 200 OK\n Vary: Accept\n Content-Type: text/javascript\n\n {\n \"items\": [\n {\n \"id\": 1,\n \"name\": \"cert1\",\n \"description\": \"this is cert1\",\n \"bits\": 2048,\n \"deleted\": false,\n \"issuer\": \"ExampeInc.\",\n \"serial\": \"123450\",\n \"chain\": \"-----Begin ...\",\n \"body\": \"-----Begin ...\",\n \"san\": true,\n \"owner\": '[email protected]\",\n \"active\": true,\n \"notBefore\": \"2015-06-05T17:09:39\",\n \"notAfter\": \"2015-06-10T17:09:39\",\n \"cn\": \"example.com\",\n \"status\": \"unknown\"\n }\n ]\n \"total\": 1\n }\n\n :query sortBy: field to sort on\n :query sortDir: acs or desc\n :query page: int. default is 1\n :query filter: key value pair. format is k=v;\n :query limit: limit number. 
default is 10\n :reqheader Authorization: OAuth token to authenticate\n :statuscode 200: no error\n :statuscode 403: unauthenticated\n \"\"\"\n parser = paginated_parser.copy()\n parser.add_argument('timeRange', type=int, dest='time_range', location='args')\n parser.add_argument('owner', type=bool, location='args')\n parser.add_argument('id', type=str, location='args')\n parser.add_argument('active', type=bool, location='args')\n parser.add_argument('destinationId', type=int, dest=\"destination_id\", location='args')\n parser.add_argument('creator', type=str, location='args')\n parser.add_argument('show', type=str, location='args')\n\n args = parser.parse_args()\n args['notification_id'] = notification_id\n return service.render(args)\n\n\nclass CertificatesDefaults(AuthenticatedResource):\n \"\"\" Defineds the 'certificates' defaults endpoint \"\"\"\n def __init__(self):\n super(CertificatesDefaults)\n\n def get(self):\n \"\"\"\n .. http:get:: /certificates/defaults\n\n Returns defaults needed to generate CSRs\n\n **Example request**:\n\n .. sourcecode:: http\n\n GET /certificates/defaults HTTP/1.1\n Host: example.com\n Accept: application/json, text/javascript\n\n **Example response**:\n\n .. sourcecode:: http\n\n HTTP/1.1 200 OK\n Vary: Accept\n Content-Type: text/javascript\n\n {\n \"country\": \"US\",\n \"state\": \"CA\",\n \"location\": \"Los Gatos\",\n \"organization\": \"Netflix\",\n \"organizationalUnit\": \"Operations\"\n }\n\n :reqheader Authorization: OAuth token to authenticate\n :statuscode 200: no error\n :statuscode 403: unauthenticated\n \"\"\"\n return dict(\n country=current_app.config.get('LEMUR_DEFAULT_COUNTRY'),\n state=current_app.config.get('LEMUR_DEFAULT_STATE'),\n location=current_app.config.get('LEMUR_DEFAULT_LOCATION'),\n organization=current_app.config.get('LEMUR_DEFAULT_ORGANIZATION'),\n organizationalUnit=current_app.config.get('LEMUR_DEFAULT_ORGANIZATIONAL_UNIT')\n )\n\n\napi.add_resource(CertificatesList, '/certificates', endpoint='certificates')\napi.add_resource(Certificates, '/certificates/<int:certificate_id>', endpoint='certificate')\napi.add_resource(CertificatesStats, '/certificates/stats', endpoint='certificateStats')\napi.add_resource(CertificatesUpload, '/certificates/upload', endpoint='certificateUpload')\napi.add_resource(CertificatePrivateKey, '/certificates/<int:certificate_id>/key', endpoint='privateKeyCertificates')\napi.add_resource(NotificationCertificatesList, '/notifications/<int:notification_id>/certificates', endpoint='notificationCertificates')\napi.add_resource(CertificatesDefaults, '/certificates/defaults', endpoint='certificatesDefault')\n", "path": "lemur/certificates/views.py"}], "after_files": [{"content": "\"\"\"\n.. module: lemur.certificates.views\n :platform: Unix\n :copyright: (c) 2015 by Netflix Inc., see AUTHORS for more\n :license: Apache, see LICENSE for more details.\n.. 
moduleauthor:: Kevin Glisson <[email protected]>\n\"\"\"\nfrom builtins import str\n\nfrom flask import Blueprint, current_app, make_response, jsonify\nfrom flask.ext.restful import reqparse, Api, fields\n\nfrom cryptography import x509\nfrom cryptography.hazmat.backends import default_backend\nfrom cryptography.hazmat.primitives import serialization\n\nfrom lemur.certificates import service\nfrom lemur.authorities.models import Authority\n\nfrom lemur.auth.service import AuthenticatedResource\nfrom lemur.auth.permissions import ViewKeyPermission, AuthorityPermission, UpdateCertificatePermission\n\nfrom lemur.roles import service as role_service\n\nfrom lemur.common.utils import marshal_items, paginated_parser\n\n\nmod = Blueprint('certificates', __name__)\napi = Api(mod)\n\n\nFIELDS = {\n 'name': fields.String,\n 'id': fields.Integer,\n 'bits': fields.Integer,\n 'deleted': fields.String,\n 'issuer': fields.String,\n 'serial': fields.String,\n 'owner': fields.String,\n 'chain': fields.String,\n 'san': fields.String,\n 'active': fields.Boolean,\n 'description': fields.String,\n 'notBefore': fields.DateTime(dt_format='iso8601', attribute='not_before'),\n 'notAfter': fields.DateTime(dt_format='iso8601', attribute='not_after'),\n 'cn': fields.String,\n 'status': fields.String,\n 'body': fields.String\n}\n\n\ndef valid_authority(authority_options):\n \"\"\"\n Defends against invalid authorities\n\n :param authority_options:\n :return: :raise ValueError:\n \"\"\"\n name = authority_options['name']\n authority = Authority.query.filter(Authority.name == name).one()\n\n if not authority:\n raise ValueError(\"Unable to find authority specified\")\n\n if not authority.active:\n raise ValueError(\"Selected authority [{0}] is not currently active\".format(name))\n\n return authority\n\n\ndef pem_str(value, name):\n \"\"\"\n Used to validate that the given string is a PEM formatted string\n\n :param value:\n :param name:\n :return: :raise ValueError:\n \"\"\"\n try:\n x509.load_pem_x509_certificate(bytes(value), default_backend())\n except Exception:\n raise ValueError(\"The parameter '{0}' needs to be a valid PEM string\".format(name))\n return value\n\n\ndef private_key_str(value, name):\n \"\"\"\n User to validate that a given string is a RSA private key\n\n :param value:\n :param name:\n :return: :raise ValueError:\n \"\"\"\n try:\n serialization.load_pem_private_key(bytes(value), None, backend=default_backend())\n except Exception:\n raise ValueError(\"The parameter '{0}' needs to be a valid RSA private key\".format(name))\n return value\n\n\nclass CertificatesList(AuthenticatedResource):\n \"\"\" Defines the 'certificates' endpoint \"\"\"\n def __init__(self):\n self.reqparse = reqparse.RequestParser()\n super(CertificatesList, self).__init__()\n\n @marshal_items(FIELDS)\n def get(self):\n \"\"\"\n .. http:get:: /certificates\n\n The current list of certificates\n\n **Example request**:\n\n .. sourcecode:: http\n\n GET /certificates HTTP/1.1\n Host: example.com\n Accept: application/json, text/javascript\n\n **Example response**:\n\n .. 
sourcecode:: http\n\n HTTP/1.1 200 OK\n Vary: Accept\n Content-Type: text/javascript\n\n {\n \"items\": [\n {\n \"id\": 1,\n \"name\": \"cert1\",\n \"description\": \"this is cert1\",\n \"bits\": 2048,\n \"deleted\": false,\n \"issuer\": \"ExampeInc.\",\n \"serial\": \"123450\",\n \"chain\": \"-----Begin ...\",\n \"body\": \"-----Begin ...\",\n \"san\": true,\n \"owner\": '[email protected]\",\n \"active\": true,\n \"notBefore\": \"2015-06-05T17:09:39\",\n \"notAfter\": \"2015-06-10T17:09:39\",\n \"cn\": \"example.com\",\n \"status\": \"unknown\"\n }\n ]\n \"total\": 1\n }\n\n :query sortBy: field to sort on\n :query sortDir: acs or desc\n :query page: int. default is 1\n :query filter: key value pair. format is k=v;\n :query limit: limit number. default is 10\n :reqheader Authorization: OAuth token to authenticate\n :statuscode 200: no error\n :statuscode 403: unauthenticated\n \"\"\"\n parser = paginated_parser.copy()\n parser.add_argument('timeRange', type=int, dest='time_range', location='args')\n parser.add_argument('owner', type=bool, location='args')\n parser.add_argument('id', type=str, location='args')\n parser.add_argument('active', type=bool, location='args')\n parser.add_argument('destinationId', type=int, dest=\"destination_id\", location='args')\n parser.add_argument('creator', type=str, location='args')\n parser.add_argument('show', type=str, location='args')\n\n args = parser.parse_args()\n return service.render(args)\n\n @marshal_items(FIELDS)\n def post(self):\n \"\"\"\n .. http:post:: /certificates\n\n Creates a new certificate\n\n **Example request**:\n\n .. sourcecode:: http\n\n POST /certificates HTTP/1.1\n Host: example.com\n Accept: application/json, text/javascript\n\n {\n \"country\": \"US\",\n \"state\": \"CA\",\n \"location\": \"A Place\",\n \"organization\": \"ExampleInc.\",\n \"organizationalUnit\": \"Operations\",\n \"owner\": \"[email protected]\",\n \"description\": \"test\",\n \"selectedAuthority\": \"timetest2\",\n \"authority\": {\n \"body\": \"-----BEGIN...\",\n \"name\": \"timetest2\",\n \"chain\": \"\",\n \"notBefore\": \"2015-06-05T15:20:59\",\n \"active\": true,\n \"id\": 50,\n \"notAfter\": \"2015-06-17T15:21:08\",\n \"description\": \"dsfdsf\"\n },\n \"extensions\": {\n \"basicConstraints\": {},\n \"keyUsage\": {\n \"isCritical\": true,\n \"useKeyEncipherment\": true,\n \"useDigitalSignature\": true\n },\n \"extendedKeyUsage\": {\n \"isCritical\": true,\n \"useServerAuthentication\": true\n },\n \"subjectKeyIdentifier\": {\n \"includeSKI\": true\n },\n \"subAltNames\": {\n \"names\": []\n }\n },\n \"commonName\": \"test\",\n \"validityStart\": \"2015-06-05T07:00:00.000Z\",\n \"validityEnd\": \"2015-06-16T07:00:00.000Z\"\n }\n\n **Example response**:\n\n .. 
sourcecode:: http\n\n HTTP/1.1 200 OK\n Vary: Accept\n Content-Type: text/javascript\n\n {\n \"id\": 1,\n \"name\": \"cert1\",\n \"description\": \"this is cert1\",\n \"bits\": 2048,\n \"deleted\": false,\n \"issuer\": \"ExampeInc.\",\n \"serial\": \"123450\",\n \"chain\": \"-----Begin ...\",\n \"body\": \"-----Begin ...\",\n \"san\": true,\n \"owner\": \"[email protected]\",\n \"active\": false,\n \"notBefore\": \"2015-06-05T17:09:39\",\n \"notAfter\": \"2015-06-10T17:09:39\",\n \"cn\": \"example.com\",\n \"status\": \"unknown\"\n }\n\n :arg extensions: extensions to be used in the certificate\n :arg description: description for new certificate\n :arg owner: owner email\n :arg validityStart: when the certificate should start being valid\n :arg validityEnd: when the certificate should expire\n :arg authority: authority that should issue the certificate\n :arg country: country for the CSR\n :arg state: state for the CSR\n :arg location: location for the CSR\n :arg organization: organization for CSR\n :arg commonName: certiifcate common name\n :reqheader Authorization: OAuth token to authenticate\n :statuscode 200: no error\n :statuscode 403: unauthenticated\n \"\"\"\n self.reqparse.add_argument('extensions', type=dict, location='json')\n self.reqparse.add_argument('destinations', type=list, default=[], location='json')\n self.reqparse.add_argument('notifications', type=list, default=[], location='json')\n self.reqparse.add_argument('owner', type=str, location='json')\n self.reqparse.add_argument('validityStart', type=str, location='json') # TODO validate\n self.reqparse.add_argument('validityEnd', type=str, location='json') # TODO validate\n self.reqparse.add_argument('authority', type=valid_authority, location='json')\n self.reqparse.add_argument('description', type=str, location='json')\n self.reqparse.add_argument('country', type=str, location='json')\n self.reqparse.add_argument('state', type=str, location='json')\n self.reqparse.add_argument('location', type=str, location='json')\n self.reqparse.add_argument('organization', type=str, location='json')\n self.reqparse.add_argument('organizationalUnit', type=str, location='json')\n self.reqparse.add_argument('owner', type=str, location='json')\n self.reqparse.add_argument('commonName', type=str, location='json')\n\n args = self.reqparse.parse_args()\n\n authority = args['authority']\n role = role_service.get_by_name(authority.owner)\n\n # all the authority role members should be allowed\n roles = [x.name for x in authority.roles]\n\n # allow \"owner\" roles by team DL\n roles.append(role)\n permission = AuthorityPermission(authority.id, roles)\n\n if permission.can():\n return service.create(**args)\n\n return dict(message=\"You are not authorized to use {0}\".format(args['authority'].name)), 403\n\n\nclass CertificatesUpload(AuthenticatedResource):\n \"\"\" Defines the 'certificates' upload endpoint \"\"\"\n def __init__(self):\n self.reqparse = reqparse.RequestParser()\n super(CertificatesUpload, self).__init__()\n\n @marshal_items(FIELDS)\n def post(self):\n \"\"\"\n .. http:post:: /certificates/upload\n\n Upload a certificate\n\n **Example request**:\n\n .. sourcecode:: http\n\n POST /certificates/upload HTTP/1.1\n Host: example.com\n Accept: application/json, text/javascript\n\n {\n \"owner\": \"[email protected]\",\n \"publicCert\": \"---Begin Public...\",\n \"intermediateCert\": \"---Begin Public...\",\n \"privateKey\": \"---Begin Private...\"\n \"destinations\": [],\n \"notifications\": []\n }\n\n **Example response**:\n\n .. 
sourcecode:: http\n\n HTTP/1.1 200 OK\n Vary: Accept\n Content-Type: text/javascript\n\n {\n \"id\": 1,\n \"name\": \"cert1\",\n \"description\": \"this is cert1\",\n \"bits\": 2048,\n \"deleted\": false,\n \"issuer\": \"ExampeInc.\",\n \"serial\": \"123450\",\n \"chain\": \"-----Begin ...\",\n \"body\": \"-----Begin ...\",\n \"san\": true,\n \"owner\": \"[email protected]\",\n \"active\": true,\n \"notBefore\": \"2015-06-05T17:09:39\",\n \"notAfter\": \"2015-06-10T17:09:39\",\n \"cn\": \"example.com\",\n \"status\": \"unknown\"\n }\n\n :arg owner: owner email for certificate\n :arg publicCert: valid PEM public key for certificate\n :arg intermediateCert valid PEM intermediate key for certificate\n :arg privateKey: valid PEM private key for certificate\n :arg destinations: list of aws destinations to upload the certificate to\n :reqheader Authorization: OAuth token to authenticate\n :statuscode 403: unauthenticated\n :statuscode 200: no error\n \"\"\"\n self.reqparse.add_argument('description', type=str, location='json')\n self.reqparse.add_argument('owner', type=str, required=True, location='json')\n self.reqparse.add_argument('publicCert', type=pem_str, required=True, dest='public_cert', location='json')\n self.reqparse.add_argument('destinations', type=list, default=[], dest='destinations', location='json')\n self.reqparse.add_argument('notifications', type=list, default=[], dest='notifications', location='json')\n self.reqparse.add_argument('intermediateCert', type=pem_str, dest='intermediate_cert', location='json')\n self.reqparse.add_argument('privateKey', type=private_key_str, dest='private_key', location='json')\n\n args = self.reqparse.parse_args()\n if args.get('destinations'):\n if args.get('private_key'):\n return service.upload(**args)\n else:\n raise Exception(\"Private key must be provided in order to upload certificate to AWS\")\n return service.upload(**args)\n\n\nclass CertificatesStats(AuthenticatedResource):\n \"\"\" Defines the 'certificates' stats endpoint \"\"\"\n def __init__(self):\n self.reqparse = reqparse.RequestParser()\n super(CertificatesStats, self).__init__()\n\n def get(self):\n self.reqparse.add_argument('metric', type=str, location='args')\n self.reqparse.add_argument('range', default=32, type=int, location='args')\n self.reqparse.add_argument('destinationId', dest='destination_id', location='args')\n self.reqparse.add_argument('active', type=str, default='true', location='args')\n\n args = self.reqparse.parse_args()\n\n items = service.stats(**args)\n return dict(items=items, total=len(items))\n\n\nclass CertificatePrivateKey(AuthenticatedResource):\n def __init__(self):\n super(CertificatePrivateKey, self).__init__()\n\n def get(self, certificate_id):\n \"\"\"\n .. http:get:: /certificates/1/key\n\n Retrieves the private key for a given certificate\n\n **Example request**:\n\n .. sourcecode:: http\n\n GET /certificates/1/key HTTP/1.1\n Host: example.com\n Accept: application/json, text/javascript\n\n **Example response**:\n\n .. 
sourcecode:: http\n\n HTTP/1.1 200 OK\n Vary: Accept\n Content-Type: text/javascript\n\n {\n \"key\": \"----Begin ...\",\n }\n\n :reqheader Authorization: OAuth token to authenticate\n :statuscode 200: no error\n :statuscode 403: unauthenticated\n \"\"\"\n cert = service.get(certificate_id)\n if not cert:\n return dict(message=\"Cannot find specified certificate\"), 404\n\n role = role_service.get_by_name(cert.owner)\n\n permission = ViewKeyPermission(certificate_id, getattr(role, 'name', None))\n\n if permission.can():\n response = make_response(jsonify(key=cert.private_key), 200)\n response.headers['cache-control'] = 'private, max-age=0, no-cache, no-store'\n response.headers['pragma'] = 'no-cache'\n return response\n\n return dict(message='You are not authorized to view this key'), 403\n\n\nclass Certificates(AuthenticatedResource):\n def __init__(self):\n self.reqparse = reqparse.RequestParser()\n super(Certificates, self).__init__()\n\n @marshal_items(FIELDS)\n def get(self, certificate_id):\n \"\"\"\n .. http:get:: /certificates/1\n\n One certificate\n\n **Example request**:\n\n .. sourcecode:: http\n\n GET /certificates/1 HTTP/1.1\n Host: example.com\n Accept: application/json, text/javascript\n\n **Example response**:\n\n .. sourcecode:: http\n\n HTTP/1.1 200 OK\n Vary: Accept\n Content-Type: text/javascript\n\n {\n \"id\": 1,\n \"name\": \"cert1\",\n \"description\": \"this is cert1\",\n \"bits\": 2048,\n \"deleted\": false,\n \"issuer\": \"ExampeInc.\",\n \"serial\": \"123450\",\n \"chain\": \"-----Begin ...\",\n \"body\": \"-----Begin ...\",\n \"san\": true,\n \"owner\": \"[email protected]\",\n \"active\": true,\n \"notBefore\": \"2015-06-05T17:09:39\",\n \"notAfter\": \"2015-06-10T17:09:39\",\n \"cn\": \"example.com\",\n \"status\": \"unknown\"\n }\n\n :reqheader Authorization: OAuth token to authenticate\n :statuscode 200: no error\n :statuscode 403: unauthenticated\n \"\"\"\n return service.get(certificate_id)\n\n @marshal_items(FIELDS)\n def put(self, certificate_id):\n \"\"\"\n .. http:put:: /certificates/1\n\n Update a certificate\n\n **Example request**:\n\n .. sourcecode:: http\n\n PUT /certificates/1 HTTP/1.1\n Host: example.com\n Accept: application/json, text/javascript\n\n {\n \"owner\": \"[email protected]\",\n \"active\": false\n \"notifications\": [],\n \"destinations\": []\n }\n\n **Example response**:\n\n .. 
sourcecode:: http\n\n HTTP/1.1 200 OK\n Vary: Accept\n Content-Type: text/javascript\n\n {\n \"id\": 1,\n \"name\": \"cert1\",\n \"description\": \"this is cert1\",\n \"bits\": 2048,\n \"deleted\": false,\n \"issuer\": \"ExampeInc.\",\n \"serial\": \"123450\",\n \"chain\": \"-----Begin ...\",\n \"body\": \"-----Begin ...\",\n \"san\": true,\n \"owner\": \"[email protected]\",\n \"active\": false,\n \"notBefore\": \"2015-06-05T17:09:39\",\n \"notAfter\": \"2015-06-10T17:09:39\",\n \"cn\": \"example.com\",\n \"status\": \"unknown\",\n }\n\n :reqheader Authorization: OAuth token to authenticate\n :statuscode 200: no error\n :statuscode 403: unauthenticated\n \"\"\"\n self.reqparse.add_argument('active', type=bool, location='json')\n self.reqparse.add_argument('owner', type=str, location='json')\n self.reqparse.add_argument('description', type=str, location='json')\n self.reqparse.add_argument('destinations', type=list, default=[], location='json')\n self.reqparse.add_argument('notifications', type=list, default=[], location='json')\n args = self.reqparse.parse_args()\n\n cert = service.get(certificate_id)\n role = role_service.get_by_name(cert.owner)\n\n permission = UpdateCertificatePermission(certificate_id, getattr(role, 'name', None))\n\n if permission.can():\n return service.update(\n certificate_id,\n args['owner'],\n args['description'],\n args['active'],\n args['destinations'],\n args['notifications']\n )\n\n return dict(message='You are not authorized to update this certificate'), 403\n\n\nclass NotificationCertificatesList(AuthenticatedResource):\n \"\"\" Defines the 'certificates' endpoint \"\"\"\n def __init__(self):\n self.reqparse = reqparse.RequestParser()\n super(NotificationCertificatesList, self).__init__()\n\n @marshal_items(FIELDS)\n def get(self, notification_id):\n \"\"\"\n .. http:get:: /notifications/1/certificates\n\n The current list of certificates for a given notification\n\n **Example request**:\n\n .. sourcecode:: http\n\n GET /notifications/1/certificates HTTP/1.1\n Host: example.com\n Accept: application/json, text/javascript\n\n **Example response**:\n\n .. sourcecode:: http\n\n HTTP/1.1 200 OK\n Vary: Accept\n Content-Type: text/javascript\n\n {\n \"items\": [\n {\n \"id\": 1,\n \"name\": \"cert1\",\n \"description\": \"this is cert1\",\n \"bits\": 2048,\n \"deleted\": false,\n \"issuer\": \"ExampeInc.\",\n \"serial\": \"123450\",\n \"chain\": \"-----Begin ...\",\n \"body\": \"-----Begin ...\",\n \"san\": true,\n \"owner\": '[email protected]\",\n \"active\": true,\n \"notBefore\": \"2015-06-05T17:09:39\",\n \"notAfter\": \"2015-06-10T17:09:39\",\n \"cn\": \"example.com\",\n \"status\": \"unknown\"\n }\n ]\n \"total\": 1\n }\n\n :query sortBy: field to sort on\n :query sortDir: acs or desc\n :query page: int. default is 1\n :query filter: key value pair. format is k=v;\n :query limit: limit number. 
default is 10\n :reqheader Authorization: OAuth token to authenticate\n :statuscode 200: no error\n :statuscode 403: unauthenticated\n \"\"\"\n parser = paginated_parser.copy()\n parser.add_argument('timeRange', type=int, dest='time_range', location='args')\n parser.add_argument('owner', type=bool, location='args')\n parser.add_argument('id', type=str, location='args')\n parser.add_argument('active', type=bool, location='args')\n parser.add_argument('destinationId', type=int, dest=\"destination_id\", location='args')\n parser.add_argument('creator', type=str, location='args')\n parser.add_argument('show', type=str, location='args')\n\n args = parser.parse_args()\n args['notification_id'] = notification_id\n return service.render(args)\n\n\nclass CertificatesDefaults(AuthenticatedResource):\n \"\"\" Defineds the 'certificates' defaults endpoint \"\"\"\n def __init__(self):\n super(CertificatesDefaults)\n\n def get(self):\n \"\"\"\n .. http:get:: /certificates/defaults\n\n Returns defaults needed to generate CSRs\n\n **Example request**:\n\n .. sourcecode:: http\n\n GET /certificates/defaults HTTP/1.1\n Host: example.com\n Accept: application/json, text/javascript\n\n **Example response**:\n\n .. sourcecode:: http\n\n HTTP/1.1 200 OK\n Vary: Accept\n Content-Type: text/javascript\n\n {\n \"country\": \"US\",\n \"state\": \"CA\",\n \"location\": \"Los Gatos\",\n \"organization\": \"Netflix\",\n \"organizationalUnit\": \"Operations\"\n }\n\n :reqheader Authorization: OAuth token to authenticate\n :statuscode 200: no error\n :statuscode 403: unauthenticated\n \"\"\"\n return dict(\n country=current_app.config.get('LEMUR_DEFAULT_COUNTRY'),\n state=current_app.config.get('LEMUR_DEFAULT_STATE'),\n location=current_app.config.get('LEMUR_DEFAULT_LOCATION'),\n organization=current_app.config.get('LEMUR_DEFAULT_ORGANIZATION'),\n organizationalUnit=current_app.config.get('LEMUR_DEFAULT_ORGANIZATIONAL_UNIT')\n )\n\n\napi.add_resource(CertificatesList, '/certificates', endpoint='certificates')\napi.add_resource(Certificates, '/certificates/<int:certificate_id>', endpoint='certificate')\napi.add_resource(CertificatesStats, '/certificates/stats', endpoint='certificateStats')\napi.add_resource(CertificatesUpload, '/certificates/upload', endpoint='certificateUpload')\napi.add_resource(CertificatePrivateKey, '/certificates/<int:certificate_id>/key', endpoint='privateKeyCertificates')\napi.add_resource(NotificationCertificatesList, '/notifications/<int:notification_id>/certificates', endpoint='notificationCertificates')\napi.add_resource(CertificatesDefaults, '/certificates/defaults', endpoint='certificatesDefault')\n", "path": "lemur/certificates/views.py"}]} |
gh_patches_debug_1127 | rasdani/github-patches | git_diff | conda__conda-6784 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Conda DAND Udacity yaml file error
snadar@ubuntu-desktop:~$ conda env create -f dand-env-linux.yaml
Using Anaconda API: https://api.anaconda.org
Fetching package metadata ...........
Solving package specifications: .
icu-54.1-0.tar 100% |################################| Time: 0:01:11 165.48 kB/s
jpeg-8d-2.tar. 100% |################################| Time: 0:00:06 122.05 kB/s
libgcc-ng-7.2. 100% |################################| Time: 0:00:47 133.59 kB/s
libgfortran-3. 100% |################################| Time: 0:00:02 127.21 kB/s
libstdcxx-ng-7 100% |################################| Time: 0:00:18 140.57 kB/s
mkl-11.3.3-0.t 100% |################################| Time: 0:08:42 245.18 kB/s
openssl-1.0.2j 100% |################################| Time: 0:00:06 496.81 kB/s
pixman-0.32.6- 100% |################################| Time: 0:00:06 413.05 kB/s
readline-6.2-2 100% |################################| Time: 0:00:01 333.28 kB/s
sqlite-3.13.0- 100% |################################| Time: 0:00:08 497.93 kB/s
tk-8.5.18-0.ta 100% |################################| Time: 0:00:03 529.07 kB/s
yaml-0.1.6-0.t 100% |################################| Time: 0:00:00 532.09 kB/s
zlib-1.2.8-3.t 100% |################################| Time: 0:00:00 548.04 kB/s
libgcc-7.2.0-h 100% |################################| Time: 0:00:00 523.41 kB/s
libiconv-1.15- 100% |################################| Time: 0:00:03 606.23 kB/s
libpng-1.6.22- 100% |################################| Time: 0:00:00 579.81 kB/s
libxcb-1.12-h8 100% |################################| Time: 0:00:00 638.65 kB/s
pcre-8.39-1.ta 100% |################################| Time: 0:00:01 619.63 kB/s
python-2.7.12- 100% |################################| Time: 0:00:18 677.69 kB/s
backports-1.0- 100% |################################| Time: 0:00:00 953.06 kB/s
backports_abc- 100% |################################| Time: 0:00:00 1.20 MB/s
beautifulsoup4 100% |################################| Time: 0:00:00 458.56 kB/s
dbus-1.10.20-0 100% |################################| Time: 0:00:02 555.57 kB/s
decorator-4.0. 100% |################################| Time: 0:00:00 2.49 MB/s
enum34-1.1.6-p 100% |################################| Time: 0:00:00 921.07 kB/s
freetype-2.5.5 100% |################################| Time: 0:00:06 433.13 kB/s
functools32-3. 100% |################################| Time: 0:00:00 1.26 MB/s
glib-2.50.2-1. 100% |################################| Time: 0:00:16 361.10 kB/s
ipython_genuti 100% |################################| Time: 0:00:00 326.48 kB/s
libxml2-2.9.4- 100% |################################| Time: 0:00:13 294.28 kB/s
markupsafe-0.2 100% |################################| Time: 0:00:00 376.16 kB/s
mistune-0.7.4- 100% |################################| Time: 0:00:01 393.87 kB/s
nltk-3.2.1-py2 100% |################################| Time: 0:00:06 295.22 kB/s
numpy-1.11.2-p 100% |################################| Time: 0:00:18 346.04 kB/s
path.py-8.2.1- 100% |################################| Time: 0:00:00 132.93 kB/s
ptyprocess-0.5 100% |################################| Time: 0:00:00 305.77 kB/s
pygments-2.1.3 100% |################################| Time: 0:00:04 289.69 kB/s
pymongo-3.3.0- 100% |################################| Time: 0:00:02 171.89 kB/s
pyparsing-2.1. 100% |################################| Time: 0:00:00 153.55 kB/s
pytz-2016.10-p 100% |################################| Time: 0:00:01 147.06 kB/s
pyyaml-3.12-py 100% |################################| Time: 0:00:01 195.65 kB/s
requests-2.12. 100% |################################| Time: 0:00:02 309.94 kB/s
setuptools-27. 100% |################################| Time: 0:00:01 337.28 kB/s
simplegeneric- 100% |################################| Time: 0:00:00 5.86 MB/s
sip-4.18-py27_ 100% |################################| Time: 0:00:00 489.63 kB/s
six-1.10.0-py2 100% |################################| Time: 0:00:00 10.14 MB/s
unicodecsv-0.1 100% |################################| Time: 0:00:00 15.37 MB/s
wcwidth-0.1.7- 100% |################################| Time: 0:00:00 5.09 MB/s
wheel-0.29.0-p 100% |################################| Time: 0:00:00 565.34 kB/s
xlrd-1.0.0-py2 100% |################################| Time: 0:00:00 419.97 kB/s
zeromq-4.1.5-0 100% |################################| Time: 0:00:16 270.52 kB/s
backports.shut 100% |################################| Time: 0:00:00 510.08 kB/s
clyent-1.2.2-p 100% |################################| Time: 0:00:00 613.19 kB/s
configparser-3 100% |################################| Time: 0:00:00 559.03 kB/s
cycler-0.10.0- 100% |################################| Time: 0:00:00 4.23 MB/s
fontconfig-2.1 100% |################################| Time: 0:00:01 351.49 kB/s
get_terminal_s 100% |################################| Time: 0:00:00 4.24 MB/s
gstreamer-1.8. 100% |################################| Time: 0:00:07 368.44 kB/s
jinja2-2.8-py2 100% |################################| Time: 0:00:01 185.39 kB/s
jsonschema-2.5 100% |################################| Time: 0:00:00 135.51 kB/s
pathlib2-2.1.0 100% |################################| Time: 0:00:00 498.12 kB/s
pexpect-4.0.1- 100% |################################| Time: 0:00:00 83.23 kB/s
pip-9.0.1-py27 100% |################################| Time: 0:00:09 174.59 kB/s
prompt_toolkit 100% |################################| Time: 0:00:01 172.84 kB/s
python-dateuti 100% |################################| Time: 0:00:00 373.96 kB/s
pyzmq-16.0.2-p 100% |################################| Time: 0:00:02 322.33 kB/s
scipy-0.18.1-n 100% |################################| Time: 0:01:29 363.25 kB/s
singledispatch 100% |################################| Time: 0:00:00 449.26 kB/s
ssl_match_host 100% |################################| Time: 0:00:00 1.53 MB/s
traitlets-4.3. 100% |################################| Time: 0:00:00 133.42 kB/s
anaconda-clien 100% |################################| Time: 0:00:01 100.87 kB/s
cairo-1.12.18- 100% |################################| Time: 0:00:02 296.19 kB/s
entrypoints-0. 100% |################################| Time: 0:00:00 2.84 MB/s
gst-plugins-ba 100% |################################| Time: 0:00:07 449.87 kB/s
jupyter_core-4 100% |################################| Time: 0:00:00 167.95 kB/s
pandas-0.19.1- 100% |################################| Time: 0:01:03 246.90 kB/s
pickleshare-0. 100% |################################| Time: 0:00:00 579.01 kB/s
scikit-learn-0 100% |################################| Time: 0:00:38 232.56 kB/s
tornado-4.4.2- 100% |################################| Time: 0:00:04 140.01 kB/s
ipython-5.1.0- 100% |################################| Time: 0:00:05 189.17 kB/s
jupyter_client 100% |################################| Time: 0:00:00 114.47 kB/s
nbformat-4.2.0 100% |################################| Time: 0:00:01 99.19 kB/s
pycairo-1.10.0 100% |################################| Time: 0:00:00 207.15 kB/s
qt-5.6.2-0.tar 100% |################################| Time: 0:02:43 277.77 kB/s
terminado-0.6- 100% |################################| Time: 0:00:00 325.08 kB/s
ipykernel-4.5. 100% |################################| Time: 0:00:02 59.41 kB/s
nbconvert-4.2. 100% |################################| Time: 0:00:02 156.67 kB/s
pyqt-5.6.0-py2 100% |################################| Time: 0:00:11 471.43 kB/s
jupyter_consol 100% |################################| Time: 0:00:00 698.52 kB/s
matplotlib-1.5 100% |################################| Time: 0:00:22 373.96 kB/s
notebook-4.3.0 100% |################################| Time: 0:00:16 338.69 kB/s
qtconsole-4.2. 100% |################################| Time: 0:00:01 133.56 kB/s
seaborn-0.7.1- 100% |################################| Time: 0:00:00 347.93 kB/s
widgetsnbexten 100% |################################| Time: 0:00:04 254.80 kB/s
ipywidgets-5.2 100% |################################| Time: 0:00:00 79.79 kB/s
jupyter-1.0.0- 100% |################################| Time: 0:00:00 2.42 MB/s
nb_anacondaclo 100% |################################| Time: 0:00:00 1.02 MB/s
nb_conda_kerne 100% |################################| Time: 0:00:00 46.93 kB/s
nb_conda-2.0.0 100% |################################| Time: 0:00:00 88.22 kB/s
_nb_ext_conf-0 100% |################################| Time: 0:00:00 632.75 kB/s
nbpresent-3.0. 100% |################################| Time: 0:00:02 190.61 kB/s
An unexpected error has occurred.
Please consider posting the following information to the
conda GitHub issue tracker at:
https://github.com/conda/conda/issues
Current conda install:
platform : linux-64
conda version : 4.3.30
conda is private : False
conda-env version : 4.3.30
conda-build version : 3.0.27
python version : 3.6.2.final.0
requests version : 2.18.4
root environment : /home/snadar/anaconda3 (writable)
default environment : /home/snadar/anaconda3
envs directories : /home/snadar/anaconda3/envs
/home/snadar/.conda/envs
package cache : /home/snadar/anaconda3/pkgs
/home/snadar/.conda/pkgs
channel URLs : https://repo.continuum.io/pkgs/main/linux-64
https://repo.continuum.io/pkgs/main/noarch
https://repo.continuum.io/pkgs/free/linux-64
https://repo.continuum.io/pkgs/free/noarch
https://repo.continuum.io/pkgs/r/linux-64
https://repo.continuum.io/pkgs/r/noarch
https://repo.continuum.io/pkgs/pro/linux-64
https://repo.continuum.io/pkgs/pro/noarch
config file : None
netrc file : None
offline mode : False
user-agent : conda/4.3.30 requests/2.18.4 CPython/3.6.2 Linux/4.13.0-16-generic debian/stretch/sid glibc/2.26
UID:GID : 1000:1000
`$ /home/snadar/anaconda3/bin/conda-env create -f dand-env-linux.yaml`
Traceback (most recent call last):
File "/home/snadar/anaconda3/lib/python3.6/site-packages/conda/exceptions.py", line 640, in conda_exception_handler
return_value = func(*args, **kwargs)
File "/home/snadar/anaconda3/lib/python3.6/site-packages/conda_env/cli/main_create.py", line 108, in execute
installer.install(prefix, pkg_specs, args, env)
File "/home/snadar/anaconda3/lib/python3.6/site-packages/conda_env/installers/pip.py", line 8, in install
pip_cmd = pip_args(prefix) + ['install', ] + specs
TypeError: can only concatenate list (not "NoneType") to list
--- END ISSUE ---
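The traceback above fails on the list concatenation in `conda_env/installers/pip.py`; the message Python reports means one operand of the `+` chain is `None`. Below is a minimal, self-contained sketch of that failure mode and of a defensive guard. `build_pip_command` and its parameters are hypothetical stand-ins for illustration, not conda's actual API.

```python
def build_pip_command(pip_argv, specs):
    # `pip_argv` / `specs` stand in for the values used on the failing line
    # (`pip_args(prefix)` and the pip dependency list); either one being None
    # breaks the concatenation. The names and the guard are illustrative only.
    if pip_argv is None or specs is None:
        return None  # bail out instead of concatenating None
    return list(pip_argv) + ['install'] + list(specs)


print(build_pip_command(['pip'], ['nltk==3.2.1']))  # ['pip', 'install', 'nltk==3.2.1']
print(build_pip_command(None, ['nltk==3.2.1']))     # None -- guarded

# Unguarded, the concatenation reproduces the reported message:
try:
    ['pip', 'install'] + None
except TypeError as exc:
    print(exc)  # can only concatenate list (not "NoneType") to list
```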
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conda_env/installers/pip.py`
Content:
```
1 from __future__ import absolute_import
2
3 import os
4 import os.path as op
5 import subprocess
6 import tempfile
7 from conda_env.pip_util import pip_args
8 from conda.exceptions import CondaValueError
9
10
11 def _pip_install_via_requirements(prefix, specs, args, *_, **kwargs):
12 """
13 Installs the pip dependencies in specs using a temporary pip requirements file.
14
15 Args
16 ----
17 prefix: string
18 The path to the python and pip executables.
19
20 specs: iterable of strings
21 Each element should be a valid pip dependency.
22 See: https://pip.pypa.io/en/stable/user_guide/#requirements-files
23 https://pip.pypa.io/en/stable/reference/pip_install/#requirements-file-format
24 """
25 try:
26 pip_workdir = op.dirname(op.abspath(args.file))
27 except AttributeError:
28 pip_workdir = None
29 requirements = None
30 try:
31 # Generate the temporary requirements file
32 requirements = tempfile.NamedTemporaryFile(mode='w',
33 prefix='condaenv.',
34 suffix='.requirements.txt',
35 dir=pip_workdir,
36 delete=False)
37 requirements.write('\n'.join(specs))
38 requirements.close()
39 # pip command line...
40 args, pip_version = pip_args(prefix)
41 pip_cmd = args + ['install', '-r', requirements.name]
42 # ...run it
43 process = subprocess.Popen(pip_cmd,
44 cwd=pip_workdir,
45 universal_newlines=True)
46 process.communicate()
47 if process.returncode != 0:
48 raise CondaValueError("pip returned an error")
49 finally:
50 # Win/Appveyor does not like it if we use context manager + delete=True.
51 # So we delete the temporary file in a finally block.
52 if requirements is not None and op.isfile(requirements.name):
53 os.remove(requirements.name)
54
55
56 # Conform to Installers API
57 install = _pip_install_via_requirements
58
```
--- END FILES ---
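For context, `_pip_install_via_requirements` above follows a simple pattern: write the pip specs to a temporary requirements file, invoke `pip install -r` on it, and remove the file in a `finally` block. A stripped-down, standalone sketch of that pattern is shown below; it does not use conda's helpers, and the `pip_argv` parameter (e.g. `['python', '-m', 'pip']`) is an assumption for illustration.

```python
import os
import subprocess
import tempfile


def install_via_requirements(pip_argv, specs, workdir=None):
    # Write the specs to a temporary requirements file, run
    # `pip install -r <file>`, and always remove the file afterwards.
    req = tempfile.NamedTemporaryFile(mode='w', prefix='condaenv.',
                                      suffix='.requirements.txt',
                                      dir=workdir, delete=False)
    try:
        req.write('\n'.join(specs))
        req.close()
        cmd = list(pip_argv) + ['install', '-r', req.name]
        return subprocess.call(cmd, cwd=workdir)
    finally:
        if os.path.isfile(req.name):
            os.remove(req.name)


# Example invocation (assumes a Python with pip available on PATH):
# install_via_requirements(['python', '-m', 'pip'], ['requests==2.12.0'])
```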
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/conda_env/installers/pip.py b/conda_env/installers/pip.py
--- a/conda_env/installers/pip.py
+++ b/conda_env/installers/pip.py
@@ -38,6 +38,8 @@
requirements.close()
# pip command line...
args, pip_version = pip_args(prefix)
+ if args is None:
+ return
pip_cmd = args + ['install', '-r', requirements.name]
# ...run it
process = subprocess.Popen(pip_cmd,
| {"golden_diff": "diff --git a/conda_env/installers/pip.py b/conda_env/installers/pip.py\n--- a/conda_env/installers/pip.py\n+++ b/conda_env/installers/pip.py\n@@ -38,6 +38,8 @@\n requirements.close()\n # pip command line...\n args, pip_version = pip_args(prefix)\n+ if args is None:\n+ return\n pip_cmd = args + ['install', '-r', requirements.name]\n # ...run it\n process = subprocess.Popen(pip_cmd,\n", "issue": "Conda DAND Udacity yaml file error\nsnadar@ubuntu-desktop:~$ conda env create -f dand-env-linux.yaml\r\nUsing Anaconda API: https://api.anaconda.org\r\nFetching package metadata ...........\r\nSolving package specifications: .\r\nicu-54.1-0.tar 100% |################################| Time: 0:01:11 165.48 kB/s\r\njpeg-8d-2.tar. 100% |################################| Time: 0:00:06 122.05 kB/s\r\nlibgcc-ng-7.2. 100% |################################| Time: 0:00:47 133.59 kB/s\r\nlibgfortran-3. 100% |################################| Time: 0:00:02 127.21 kB/s\r\nlibstdcxx-ng-7 100% |################################| Time: 0:00:18 140.57 kB/s\r\nmkl-11.3.3-0.t 100% |################################| Time: 0:08:42 245.18 kB/s\r\nopenssl-1.0.2j 100% |################################| Time: 0:00:06 496.81 kB/s\r\npixman-0.32.6- 100% |################################| Time: 0:00:06 413.05 kB/s\r\nreadline-6.2-2 100% |################################| Time: 0:00:01 333.28 kB/s\r\nsqlite-3.13.0- 100% |################################| Time: 0:00:08 497.93 kB/s\r\ntk-8.5.18-0.ta 100% |################################| Time: 0:00:03 529.07 kB/s\r\nyaml-0.1.6-0.t 100% |################################| Time: 0:00:00 532.09 kB/s\r\nzlib-1.2.8-3.t 100% |################################| Time: 0:00:00 548.04 kB/s\r\nlibgcc-7.2.0-h 100% |################################| Time: 0:00:00 523.41 kB/s\r\nlibiconv-1.15- 100% |################################| Time: 0:00:03 606.23 kB/s\r\nlibpng-1.6.22- 100% |################################| Time: 0:00:00 579.81 kB/s\r\nlibxcb-1.12-h8 100% |################################| Time: 0:00:00 638.65 kB/s\r\npcre-8.39-1.ta 100% |################################| Time: 0:00:01 619.63 kB/s\r\npython-2.7.12- 100% |################################| Time: 0:00:18 677.69 kB/s\r\nbackports-1.0- 100% |################################| Time: 0:00:00 953.06 kB/s\r\nbackports_abc- 100% |################################| Time: 0:00:00 1.20 MB/s\r\nbeautifulsoup4 100% |################################| Time: 0:00:00 458.56 kB/s\r\ndbus-1.10.20-0 100% |################################| Time: 0:00:02 555.57 kB/s\r\ndecorator-4.0. 100% |################################| Time: 0:00:00 2.49 MB/s\r\nenum34-1.1.6-p 100% |################################| Time: 0:00:00 921.07 kB/s\r\nfreetype-2.5.5 100% |################################| Time: 0:00:06 433.13 kB/s\r\nfunctools32-3. 100% |################################| Time: 0:00:00 1.26 MB/s\r\nglib-2.50.2-1. 
100% |################################| Time: 0:00:16 361.10 kB/s\r\nipython_genuti 100% |################################| Time: 0:00:00 326.48 kB/s\r\nlibxml2-2.9.4- 100% |################################| Time: 0:00:13 294.28 kB/s\r\nmarkupsafe-0.2 100% |################################| Time: 0:00:00 376.16 kB/s\r\nmistune-0.7.4- 100% |################################| Time: 0:00:01 393.87 kB/s\r\nnltk-3.2.1-py2 100% |################################| Time: 0:00:06 295.22 kB/s\r\nnumpy-1.11.2-p 100% |################################| Time: 0:00:18 346.04 kB/s\r\npath.py-8.2.1- 100% |################################| Time: 0:00:00 132.93 kB/s\r\nptyprocess-0.5 100% |################################| Time: 0:00:00 305.77 kB/s\r\npygments-2.1.3 100% |################################| Time: 0:00:04 289.69 kB/s\r\npymongo-3.3.0- 100% |################################| Time: 0:00:02 171.89 kB/s\r\npyparsing-2.1. 100% |################################| Time: 0:00:00 153.55 kB/s\r\npytz-2016.10-p 100% |################################| Time: 0:00:01 147.06 kB/s\r\npyyaml-3.12-py 100% |################################| Time: 0:00:01 195.65 kB/s\r\nrequests-2.12. 100% |################################| Time: 0:00:02 309.94 kB/s\r\nsetuptools-27. 100% |################################| Time: 0:00:01 337.28 kB/s\r\nsimplegeneric- 100% |################################| Time: 0:00:00 5.86 MB/s\r\nsip-4.18-py27_ 100% |################################| Time: 0:00:00 489.63 kB/s\r\nsix-1.10.0-py2 100% |################################| Time: 0:00:00 10.14 MB/s\r\nunicodecsv-0.1 100% |################################| Time: 0:00:00 15.37 MB/s\r\nwcwidth-0.1.7- 100% |################################| Time: 0:00:00 5.09 MB/s\r\nwheel-0.29.0-p 100% |################################| Time: 0:00:00 565.34 kB/s\r\nxlrd-1.0.0-py2 100% |################################| Time: 0:00:00 419.97 kB/s\r\nzeromq-4.1.5-0 100% |################################| Time: 0:00:16 270.52 kB/s\r\nbackports.shut 100% |################################| Time: 0:00:00 510.08 kB/s\r\nclyent-1.2.2-p 100% |################################| Time: 0:00:00 613.19 kB/s\r\nconfigparser-3 100% |################################| Time: 0:00:00 559.03 kB/s\r\ncycler-0.10.0- 100% |################################| Time: 0:00:00 4.23 MB/s\r\nfontconfig-2.1 100% |################################| Time: 0:00:01 351.49 kB/s\r\nget_terminal_s 100% |################################| Time: 0:00:00 4.24 MB/s\r\ngstreamer-1.8. 100% |################################| Time: 0:00:07 368.44 kB/s\r\njinja2-2.8-py2 100% |################################| Time: 0:00:01 185.39 kB/s\r\njsonschema-2.5 100% |################################| Time: 0:00:00 135.51 kB/s\r\npathlib2-2.1.0 100% |################################| Time: 0:00:00 498.12 kB/s\r\npexpect-4.0.1- 100% |################################| Time: 0:00:00 83.23 kB/s\r\npip-9.0.1-py27 100% |################################| Time: 0:00:09 174.59 kB/s\r\nprompt_toolkit 100% |################################| Time: 0:00:01 172.84 kB/s\r\npython-dateuti 100% |################################| Time: 0:00:00 373.96 kB/s\r\npyzmq-16.0.2-p 100% |################################| Time: 0:00:02 322.33 kB/s\r\nscipy-0.18.1-n 100% |################################| Time: 0:01:29 363.25 kB/s\r\nsingledispatch 100% |################################| Time: 0:00:00 449.26 kB/s\r\nssl_match_host 100% |################################| Time: 0:00:00 1.53 MB/s\r\ntraitlets-4.3. 
100% |################################| Time: 0:00:00 133.42 kB/s\r\nanaconda-clien 100% |################################| Time: 0:00:01 100.87 kB/s\r\ncairo-1.12.18- 100% |################################| Time: 0:00:02 296.19 kB/s\r\nentrypoints-0. 100% |################################| Time: 0:00:00 2.84 MB/s\r\ngst-plugins-ba 100% |################################| Time: 0:00:07 449.87 kB/s\r\njupyter_core-4 100% |################################| Time: 0:00:00 167.95 kB/s\r\npandas-0.19.1- 100% |################################| Time: 0:01:03 246.90 kB/s\r\npickleshare-0. 100% |################################| Time: 0:00:00 579.01 kB/s\r\nscikit-learn-0 100% |################################| Time: 0:00:38 232.56 kB/s\r\ntornado-4.4.2- 100% |################################| Time: 0:00:04 140.01 kB/s\r\nipython-5.1.0- 100% |################################| Time: 0:00:05 189.17 kB/s\r\njupyter_client 100% |################################| Time: 0:00:00 114.47 kB/s\r\nnbformat-4.2.0 100% |################################| Time: 0:00:01 99.19 kB/s\r\npycairo-1.10.0 100% |################################| Time: 0:00:00 207.15 kB/s\r\nqt-5.6.2-0.tar 100% |################################| Time: 0:02:43 277.77 kB/s\r\nterminado-0.6- 100% |################################| Time: 0:00:00 325.08 kB/s\r\nipykernel-4.5. 100% |################################| Time: 0:00:02 59.41 kB/s\r\nnbconvert-4.2. 100% |################################| Time: 0:00:02 156.67 kB/s\r\npyqt-5.6.0-py2 100% |################################| Time: 0:00:11 471.43 kB/s\r\njupyter_consol 100% |################################| Time: 0:00:00 698.52 kB/s\r\nmatplotlib-1.5 100% |################################| Time: 0:00:22 373.96 kB/s\r\nnotebook-4.3.0 100% |################################| Time: 0:00:16 338.69 kB/s\r\nqtconsole-4.2. 100% |################################| Time: 0:00:01 133.56 kB/s\r\nseaborn-0.7.1- 100% |################################| Time: 0:00:00 347.93 kB/s\r\nwidgetsnbexten 100% |################################| Time: 0:00:04 254.80 kB/s\r\nipywidgets-5.2 100% |################################| Time: 0:00:00 79.79 kB/s\r\njupyter-1.0.0- 100% |################################| Time: 0:00:00 2.42 MB/s\r\nnb_anacondaclo 100% |################################| Time: 0:00:00 1.02 MB/s\r\nnb_conda_kerne 100% |################################| Time: 0:00:00 46.93 kB/s\r\nnb_conda-2.0.0 100% |################################| Time: 0:00:00 88.22 kB/s\r\n_nb_ext_conf-0 100% |################################| Time: 0:00:00 632.75 kB/s\r\nnbpresent-3.0. 
100% |################################| Time: 0:00:02 190.61 kB/s\r\nAn unexpected error has occurred.\r\nPlease consider posting the following information to the\r\nconda GitHub issue tracker at:\r\n\r\n https://github.com/conda/conda/issues\r\n\r\n\r\n\r\nCurrent conda install:\r\n\r\n platform : linux-64\r\n conda version : 4.3.30\r\n conda is private : False\r\n conda-env version : 4.3.30\r\n conda-build version : 3.0.27\r\n python version : 3.6.2.final.0\r\n requests version : 2.18.4\r\n root environment : /home/snadar/anaconda3 (writable)\r\n default environment : /home/snadar/anaconda3\r\n envs directories : /home/snadar/anaconda3/envs\r\n /home/snadar/.conda/envs\r\n package cache : /home/snadar/anaconda3/pkgs\r\n /home/snadar/.conda/pkgs\r\n channel URLs : https://repo.continuum.io/pkgs/main/linux-64\r\n https://repo.continuum.io/pkgs/main/noarch\r\n https://repo.continuum.io/pkgs/free/linux-64\r\n https://repo.continuum.io/pkgs/free/noarch\r\n https://repo.continuum.io/pkgs/r/linux-64\r\n https://repo.continuum.io/pkgs/r/noarch\r\n https://repo.continuum.io/pkgs/pro/linux-64\r\n https://repo.continuum.io/pkgs/pro/noarch\r\n config file : None\r\n netrc file : None\r\n offline mode : False\r\n user-agent : conda/4.3.30 requests/2.18.4 CPython/3.6.2 Linux/4.13.0-16-generic debian/stretch/sid glibc/2.26 \r\n UID:GID : 1000:1000\r\n\r\n`$ /home/snadar/anaconda3/bin/conda-env create -f dand-env-linux.yaml`\r\n\r\n\r\n\r\n\r\n Traceback (most recent call last):\r\n File \"/home/snadar/anaconda3/lib/python3.6/site-packages/conda/exceptions.py\", line 640, in conda_exception_handler\r\n return_value = func(*args, **kwargs)\r\n File \"/home/snadar/anaconda3/lib/python3.6/site-packages/conda_env/cli/main_create.py\", line 108, in execute\r\n installer.install(prefix, pkg_specs, args, env)\r\n File \"/home/snadar/anaconda3/lib/python3.6/site-packages/conda_env/installers/pip.py\", line 8, in install\r\n pip_cmd = pip_args(prefix) + ['install', ] + specs\r\n TypeError: can only concatenate list (not \"NoneType\") to list\r\n\n", "before_files": [{"content": "from __future__ import absolute_import\n\nimport os\nimport os.path as op\nimport subprocess\nimport tempfile\nfrom conda_env.pip_util import pip_args\nfrom conda.exceptions import CondaValueError\n\n\ndef _pip_install_via_requirements(prefix, specs, args, *_, **kwargs):\n \"\"\"\n Installs the pip dependencies in specs using a temporary pip requirements file.\n\n Args\n ----\n prefix: string\n The path to the python and pip executables.\n\n specs: iterable of strings\n Each element should be a valid pip dependency.\n See: https://pip.pypa.io/en/stable/user_guide/#requirements-files\n https://pip.pypa.io/en/stable/reference/pip_install/#requirements-file-format\n \"\"\"\n try:\n pip_workdir = op.dirname(op.abspath(args.file))\n except AttributeError:\n pip_workdir = None\n requirements = None\n try:\n # Generate the temporary requirements file\n requirements = tempfile.NamedTemporaryFile(mode='w',\n prefix='condaenv.',\n suffix='.requirements.txt',\n dir=pip_workdir,\n delete=False)\n requirements.write('\\n'.join(specs))\n requirements.close()\n # pip command line...\n args, pip_version = pip_args(prefix)\n pip_cmd = args + ['install', '-r', requirements.name]\n # ...run it\n process = subprocess.Popen(pip_cmd,\n cwd=pip_workdir,\n universal_newlines=True)\n process.communicate()\n if process.returncode != 0:\n raise CondaValueError(\"pip returned an error\")\n finally:\n # Win/Appveyor does not like it if we use context manager + 
delete=True.\n # So we delete the temporary file in a finally block.\n if requirements is not None and op.isfile(requirements.name):\n os.remove(requirements.name)\n\n\n# Conform to Installers API\ninstall = _pip_install_via_requirements\n", "path": "conda_env/installers/pip.py"}], "after_files": [{"content": "from __future__ import absolute_import\n\nimport os\nimport os.path as op\nimport subprocess\nimport tempfile\nfrom conda_env.pip_util import pip_args\nfrom conda.exceptions import CondaValueError\n\n\ndef _pip_install_via_requirements(prefix, specs, args, *_, **kwargs):\n \"\"\"\n Installs the pip dependencies in specs using a temporary pip requirements file.\n\n Args\n ----\n prefix: string\n The path to the python and pip executables.\n\n specs: iterable of strings\n Each element should be a valid pip dependency.\n See: https://pip.pypa.io/en/stable/user_guide/#requirements-files\n https://pip.pypa.io/en/stable/reference/pip_install/#requirements-file-format\n \"\"\"\n try:\n pip_workdir = op.dirname(op.abspath(args.file))\n except AttributeError:\n pip_workdir = None\n requirements = None\n try:\n # Generate the temporary requirements file\n requirements = tempfile.NamedTemporaryFile(mode='w',\n prefix='condaenv.',\n suffix='.requirements.txt',\n dir=pip_workdir,\n delete=False)\n requirements.write('\\n'.join(specs))\n requirements.close()\n # pip command line...\n args, pip_version = pip_args(prefix)\n if args is None:\n return\n pip_cmd = args + ['install', '-r', requirements.name]\n # ...run it\n process = subprocess.Popen(pip_cmd,\n cwd=pip_workdir,\n universal_newlines=True)\n process.communicate()\n if process.returncode != 0:\n raise CondaValueError(\"pip returned an error\")\n finally:\n # Win/Appveyor does not like it if we use context manager + delete=True.\n # So we delete the temporary file in a finally block.\n if requirements is not None and op.isfile(requirements.name):\n os.remove(requirements.name)\n\n\n# Conform to Installers API\ninstall = _pip_install_via_requirements\n", "path": "conda_env/installers/pip.py"}]} |
gh_patches_debug_1128 | rasdani/github-patches | git_diff | sanic-org__sanic-1654 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
The response.content_type is not added to headers in ASGI
Perhaps the response.content_type should be added to the headers here.
--- END ISSUE ---
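For orientation, a minimal sketch of where such a header could be appended inside `ASGIApp.stream_callback` of `sanic/asgi.py` (shown below). It simply mirrors the content-length handling that method already performs; treat it as an illustration rather than the confirmed fix:

```python
# Sketch: after the content-length block in ASGIApp.stream_callback.
# If the response headers carry no explicit Content-Type, fall back to the
# HTTPResponse.content_type attribute so ASGI servers also receive it.
if "content-type" not in response.headers:
    headers += [
        (b"content-type", str(response.content_type).encode("latin-1"))
    ]
```

The `latin-1` encoding matches how the other header values are encoded in the same method.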
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sanic/asgi.py`
Content:
```
1 import asyncio
2 import warnings
3
4 from inspect import isawaitable
5 from typing import Any, Awaitable, Callable, MutableMapping, Union
6 from urllib.parse import quote
7
8 from multidict import CIMultiDict
9
10 from sanic.exceptions import InvalidUsage, ServerError
11 from sanic.log import logger
12 from sanic.request import Request
13 from sanic.response import HTTPResponse, StreamingHTTPResponse
14 from sanic.server import StreamBuffer
15 from sanic.websocket import WebSocketConnection
16
17
18 ASGIScope = MutableMapping[str, Any]
19 ASGIMessage = MutableMapping[str, Any]
20 ASGISend = Callable[[ASGIMessage], Awaitable[None]]
21 ASGIReceive = Callable[[], Awaitable[ASGIMessage]]
22
23
24 class MockProtocol:
25 def __init__(self, transport: "MockTransport", loop):
26 self.transport = transport
27 self._not_paused = asyncio.Event(loop=loop)
28 self._not_paused.set()
29 self._complete = asyncio.Event(loop=loop)
30
31 def pause_writing(self) -> None:
32 self._not_paused.clear()
33
34 def resume_writing(self) -> None:
35 self._not_paused.set()
36
37 async def complete(self) -> None:
38 self._not_paused.set()
39 await self.transport.send(
40 {"type": "http.response.body", "body": b"", "more_body": False}
41 )
42
43 @property
44 def is_complete(self) -> bool:
45 return self._complete.is_set()
46
47 async def push_data(self, data: bytes) -> None:
48 if not self.is_complete:
49 await self.transport.send(
50 {"type": "http.response.body", "body": data, "more_body": True}
51 )
52
53 async def drain(self) -> None:
54 await self._not_paused.wait()
55
56
57 class MockTransport:
58 def __init__(
59 self, scope: ASGIScope, receive: ASGIReceive, send: ASGISend
60 ) -> None:
61 self.scope = scope
62 self._receive = receive
63 self._send = send
64 self._protocol = None
65 self.loop = None
66
67 def get_protocol(self) -> MockProtocol:
68 if not self._protocol:
69 self._protocol = MockProtocol(self, self.loop)
70 return self._protocol
71
72 def get_extra_info(self, info: str) -> Union[str, bool]:
73 if info == "peername":
74 return self.scope.get("server")
75 elif info == "sslcontext":
76 return self.scope.get("scheme") in ["https", "wss"]
77
78 def get_websocket_connection(self) -> WebSocketConnection:
79 try:
80 return self._websocket_connection
81 except AttributeError:
82 raise InvalidUsage("Improper websocket connection.")
83
84 def create_websocket_connection(
85 self, send: ASGISend, receive: ASGIReceive
86 ) -> WebSocketConnection:
87 self._websocket_connection = WebSocketConnection(send, receive)
88 return self._websocket_connection
89
90 def add_task(self) -> None:
91 raise NotImplementedError
92
93 async def send(self, data) -> None:
94 # TODO:
95 # - Validation on data and that it is formatted properly and is valid
96 await self._send(data)
97
98 async def receive(self) -> ASGIMessage:
99 return await self._receive()
100
101
102 class Lifespan:
103 def __init__(self, asgi_app: "ASGIApp") -> None:
104 self.asgi_app = asgi_app
105
106 if "before_server_start" in self.asgi_app.sanic_app.listeners:
107 warnings.warn(
108 'You have set a listener for "before_server_start" '
109 "in ASGI mode. "
110 "It will be executed as early as possible, but not before "
111 "the ASGI server is started."
112 )
113 if "after_server_stop" in self.asgi_app.sanic_app.listeners:
114 warnings.warn(
115 'You have set a listener for "after_server_stop" '
116 "in ASGI mode. "
117 "It will be executed as late as possible, but not after "
118 "the ASGI server is stopped."
119 )
120
121 async def startup(self) -> None:
122 """
123 Gather the listeners to fire on server start.
124 Because we are using a third-party server and not Sanic server, we do
125 not have access to fire anything BEFORE the server starts.
126 Therefore, we fire before_server_start and after_server_start
127 in sequence since the ASGI lifespan protocol only supports a single
128 startup event.
129 """
130 listeners = self.asgi_app.sanic_app.listeners.get(
131 "before_server_start", []
132 ) + self.asgi_app.sanic_app.listeners.get("after_server_start", [])
133
134 for handler in listeners:
135 response = handler(
136 self.asgi_app.sanic_app, self.asgi_app.sanic_app.loop
137 )
138 if isawaitable(response):
139 await response
140
141 async def shutdown(self) -> None:
142 """
143 Gather the listeners to fire on server stop.
144 Because we are using a third-party server and not Sanic server, we do
145 not have access to fire anything AFTER the server stops.
146 Therefore, we fire before_server_stop and after_server_stop
147 in sequence since the ASGI lifespan protocol only supports a single
148 shutdown event.
149 """
150 listeners = self.asgi_app.sanic_app.listeners.get(
151 "before_server_stop", []
152 ) + self.asgi_app.sanic_app.listeners.get("after_server_stop", [])
153
154 for handler in listeners:
155 response = handler(
156 self.asgi_app.sanic_app, self.asgi_app.sanic_app.loop
157 )
158 if isawaitable(response):
159 await response
160
161 async def __call__(
162 self, scope: ASGIScope, receive: ASGIReceive, send: ASGISend
163 ) -> None:
164 message = await receive()
165 if message["type"] == "lifespan.startup":
166 await self.startup()
167 await send({"type": "lifespan.startup.complete"})
168
169 message = await receive()
170 if message["type"] == "lifespan.shutdown":
171 await self.shutdown()
172 await send({"type": "lifespan.shutdown.complete"})
173
174
175 class ASGIApp:
176 def __init__(self) -> None:
177 self.ws = None
178
179 @classmethod
180 async def create(
181 cls, sanic_app, scope: ASGIScope, receive: ASGIReceive, send: ASGISend
182 ) -> "ASGIApp":
183 instance = cls()
184 instance.sanic_app = sanic_app
185 instance.transport = MockTransport(scope, receive, send)
186 instance.transport.add_task = sanic_app.loop.create_task
187 instance.transport.loop = sanic_app.loop
188
189 headers = CIMultiDict(
190 [
191 (key.decode("latin-1"), value.decode("latin-1"))
192 for key, value in scope.get("headers", [])
193 ]
194 )
195 instance.do_stream = (
196 True if headers.get("expect") == "100-continue" else False
197 )
198 instance.lifespan = Lifespan(instance)
199
200 if scope["type"] == "lifespan":
201 await instance.lifespan(scope, receive, send)
202 else:
203 url_bytes = scope.get("root_path", "") + quote(scope["path"])
204 url_bytes = url_bytes.encode("latin-1")
205 url_bytes += b"?" + scope["query_string"]
206
207 if scope["type"] == "http":
208 version = scope["http_version"]
209 method = scope["method"]
210 elif scope["type"] == "websocket":
211 version = "1.1"
212 method = "GET"
213
214 instance.ws = instance.transport.create_websocket_connection(
215 send, receive
216 )
217 await instance.ws.accept()
218 else:
219 pass
220 # TODO:
221 # - close connection
222
223 request_class = sanic_app.request_class or Request
224 instance.request = request_class(
225 url_bytes,
226 headers,
227 version,
228 method,
229 instance.transport,
230 sanic_app,
231 )
232
233 if sanic_app.is_request_stream:
234 is_stream_handler = sanic_app.router.is_stream_handler(
235 instance.request
236 )
237 if is_stream_handler:
238 instance.request.stream = StreamBuffer(
239 sanic_app.config.REQUEST_BUFFER_QUEUE_SIZE
240 )
241 instance.do_stream = True
242
243 return instance
244
245 async def read_body(self) -> bytes:
246 """
247 Read and return the entire body from an incoming ASGI message.
248 """
249 body = b""
250 more_body = True
251 while more_body:
252 message = await self.transport.receive()
253 body += message.get("body", b"")
254 more_body = message.get("more_body", False)
255
256 return body
257
258 async def stream_body(self) -> None:
259 """
260 Read and stream the body in chunks from an incoming ASGI message.
261 """
262 more_body = True
263
264 while more_body:
265 message = await self.transport.receive()
266 chunk = message.get("body", b"")
267 await self.request.stream.put(chunk)
268
269 more_body = message.get("more_body", False)
270
271 await self.request.stream.put(None)
272
273 async def __call__(self) -> None:
274 """
275 Handle the incoming request.
276 """
277 if not self.do_stream:
278 self.request.body = await self.read_body()
279 else:
280 self.sanic_app.loop.create_task(self.stream_body())
281
282 handler = self.sanic_app.handle_request
283 callback = None if self.ws else self.stream_callback
284 await handler(self.request, None, callback)
285
286 async def stream_callback(self, response: HTTPResponse) -> None:
287 """
288 Write the response.
289 """
290 headers = []
291 cookies = {}
292 try:
293 cookies = {
294 v.key: v
295 for _, v in list(
296 filter(
297 lambda item: item[0].lower() == "set-cookie",
298 response.headers.items(),
299 )
300 )
301 }
302 headers += [
303 (str(name).encode("latin-1"), str(value).encode("latin-1"))
304 for name, value in response.headers.items()
305 if name.lower() not in ["set-cookie"]
306 ]
307 except AttributeError:
308 logger.error(
309 "Invalid response object for url %s, "
310 "Expected Type: HTTPResponse, Actual Type: %s",
311 self.request.url,
312 type(response),
313 )
314 exception = ServerError("Invalid response type")
315 response = self.sanic_app.error_handler.response(
316 self.request, exception
317 )
318 headers = [
319 (str(name).encode("latin-1"), str(value).encode("latin-1"))
320 for name, value in response.headers.items()
321 if name not in (b"Set-Cookie",)
322 ]
323
324 if "content-length" not in response.headers and not isinstance(
325 response, StreamingHTTPResponse
326 ):
327 headers += [
328 (b"content-length", str(len(response.body)).encode("latin-1"))
329 ]
330
331 if response.cookies:
332 cookies.update(
333 {
334 v.key: v
335 for _, v in response.cookies.items()
336 if v.key not in cookies.keys()
337 }
338 )
339
340 headers += [
341 (b"set-cookie", cookie.encode("utf-8"))
342 for k, cookie in cookies.items()
343 ]
344
345 await self.transport.send(
346 {
347 "type": "http.response.start",
348 "status": response.status,
349 "headers": headers,
350 }
351 )
352
353 if isinstance(response, StreamingHTTPResponse):
354 response.protocol = self.transport.get_protocol()
355 await response.stream()
356 await response.protocol.complete()
357
358 else:
359 await self.transport.send(
360 {
361 "type": "http.response.body",
362 "body": response.body,
363 "more_body": False,
364 }
365 )
366
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sanic/asgi.py b/sanic/asgi.py
--- a/sanic/asgi.py
+++ b/sanic/asgi.py
@@ -328,6 +328,11 @@
(b"content-length", str(len(response.body)).encode("latin-1"))
]
+ if "content-type" not in response.headers:
+ headers += [
+ (b"content-type", str(response.content_type).encode("latin-1"))
+ ]
+
if response.cookies:
cookies.update(
{
| {"golden_diff": "diff --git a/sanic/asgi.py b/sanic/asgi.py\n--- a/sanic/asgi.py\n+++ b/sanic/asgi.py\n@@ -328,6 +328,11 @@\n (b\"content-length\", str(len(response.body)).encode(\"latin-1\"))\n ]\n \n+ if \"content-type\" not in response.headers:\n+ headers += [\n+ (b\"content-type\", str(response.content_type).encode(\"latin-1\"))\n+ ]\n+\n if response.cookies:\n cookies.update(\n {\n", "issue": "The response.content_type is not add to headers in ASGI\nPerhaps the response.content_type is add to headers here.\n", "before_files": [{"content": "import asyncio\nimport warnings\n\nfrom inspect import isawaitable\nfrom typing import Any, Awaitable, Callable, MutableMapping, Union\nfrom urllib.parse import quote\n\nfrom multidict import CIMultiDict\n\nfrom sanic.exceptions import InvalidUsage, ServerError\nfrom sanic.log import logger\nfrom sanic.request import Request\nfrom sanic.response import HTTPResponse, StreamingHTTPResponse\nfrom sanic.server import StreamBuffer\nfrom sanic.websocket import WebSocketConnection\n\n\nASGIScope = MutableMapping[str, Any]\nASGIMessage = MutableMapping[str, Any]\nASGISend = Callable[[ASGIMessage], Awaitable[None]]\nASGIReceive = Callable[[], Awaitable[ASGIMessage]]\n\n\nclass MockProtocol:\n def __init__(self, transport: \"MockTransport\", loop):\n self.transport = transport\n self._not_paused = asyncio.Event(loop=loop)\n self._not_paused.set()\n self._complete = asyncio.Event(loop=loop)\n\n def pause_writing(self) -> None:\n self._not_paused.clear()\n\n def resume_writing(self) -> None:\n self._not_paused.set()\n\n async def complete(self) -> None:\n self._not_paused.set()\n await self.transport.send(\n {\"type\": \"http.response.body\", \"body\": b\"\", \"more_body\": False}\n )\n\n @property\n def is_complete(self) -> bool:\n return self._complete.is_set()\n\n async def push_data(self, data: bytes) -> None:\n if not self.is_complete:\n await self.transport.send(\n {\"type\": \"http.response.body\", \"body\": data, \"more_body\": True}\n )\n\n async def drain(self) -> None:\n await self._not_paused.wait()\n\n\nclass MockTransport:\n def __init__(\n self, scope: ASGIScope, receive: ASGIReceive, send: ASGISend\n ) -> None:\n self.scope = scope\n self._receive = receive\n self._send = send\n self._protocol = None\n self.loop = None\n\n def get_protocol(self) -> MockProtocol:\n if not self._protocol:\n self._protocol = MockProtocol(self, self.loop)\n return self._protocol\n\n def get_extra_info(self, info: str) -> Union[str, bool]:\n if info == \"peername\":\n return self.scope.get(\"server\")\n elif info == \"sslcontext\":\n return self.scope.get(\"scheme\") in [\"https\", \"wss\"]\n\n def get_websocket_connection(self) -> WebSocketConnection:\n try:\n return self._websocket_connection\n except AttributeError:\n raise InvalidUsage(\"Improper websocket connection.\")\n\n def create_websocket_connection(\n self, send: ASGISend, receive: ASGIReceive\n ) -> WebSocketConnection:\n self._websocket_connection = WebSocketConnection(send, receive)\n return self._websocket_connection\n\n def add_task(self) -> None:\n raise NotImplementedError\n\n async def send(self, data) -> None:\n # TODO:\n # - Validation on data and that it is formatted properly and is valid\n await self._send(data)\n\n async def receive(self) -> ASGIMessage:\n return await self._receive()\n\n\nclass Lifespan:\n def __init__(self, asgi_app: \"ASGIApp\") -> None:\n self.asgi_app = asgi_app\n\n if \"before_server_start\" in self.asgi_app.sanic_app.listeners:\n warnings.warn(\n 'You have set a 
listener for \"before_server_start\" '\n \"in ASGI mode. \"\n \"It will be executed as early as possible, but not before \"\n \"the ASGI server is started.\"\n )\n if \"after_server_stop\" in self.asgi_app.sanic_app.listeners:\n warnings.warn(\n 'You have set a listener for \"after_server_stop\" '\n \"in ASGI mode. \"\n \"It will be executed as late as possible, but not after \"\n \"the ASGI server is stopped.\"\n )\n\n async def startup(self) -> None:\n \"\"\"\n Gather the listeners to fire on server start.\n Because we are using a third-party server and not Sanic server, we do\n not have access to fire anything BEFORE the server starts.\n Therefore, we fire before_server_start and after_server_start\n in sequence since the ASGI lifespan protocol only supports a single\n startup event.\n \"\"\"\n listeners = self.asgi_app.sanic_app.listeners.get(\n \"before_server_start\", []\n ) + self.asgi_app.sanic_app.listeners.get(\"after_server_start\", [])\n\n for handler in listeners:\n response = handler(\n self.asgi_app.sanic_app, self.asgi_app.sanic_app.loop\n )\n if isawaitable(response):\n await response\n\n async def shutdown(self) -> None:\n \"\"\"\n Gather the listeners to fire on server stop.\n Because we are using a third-party server and not Sanic server, we do\n not have access to fire anything AFTER the server stops.\n Therefore, we fire before_server_stop and after_server_stop\n in sequence since the ASGI lifespan protocol only supports a single\n shutdown event.\n \"\"\"\n listeners = self.asgi_app.sanic_app.listeners.get(\n \"before_server_stop\", []\n ) + self.asgi_app.sanic_app.listeners.get(\"after_server_stop\", [])\n\n for handler in listeners:\n response = handler(\n self.asgi_app.sanic_app, self.asgi_app.sanic_app.loop\n )\n if isawaitable(response):\n await response\n\n async def __call__(\n self, scope: ASGIScope, receive: ASGIReceive, send: ASGISend\n ) -> None:\n message = await receive()\n if message[\"type\"] == \"lifespan.startup\":\n await self.startup()\n await send({\"type\": \"lifespan.startup.complete\"})\n\n message = await receive()\n if message[\"type\"] == \"lifespan.shutdown\":\n await self.shutdown()\n await send({\"type\": \"lifespan.shutdown.complete\"})\n\n\nclass ASGIApp:\n def __init__(self) -> None:\n self.ws = None\n\n @classmethod\n async def create(\n cls, sanic_app, scope: ASGIScope, receive: ASGIReceive, send: ASGISend\n ) -> \"ASGIApp\":\n instance = cls()\n instance.sanic_app = sanic_app\n instance.transport = MockTransport(scope, receive, send)\n instance.transport.add_task = sanic_app.loop.create_task\n instance.transport.loop = sanic_app.loop\n\n headers = CIMultiDict(\n [\n (key.decode(\"latin-1\"), value.decode(\"latin-1\"))\n for key, value in scope.get(\"headers\", [])\n ]\n )\n instance.do_stream = (\n True if headers.get(\"expect\") == \"100-continue\" else False\n )\n instance.lifespan = Lifespan(instance)\n\n if scope[\"type\"] == \"lifespan\":\n await instance.lifespan(scope, receive, send)\n else:\n url_bytes = scope.get(\"root_path\", \"\") + quote(scope[\"path\"])\n url_bytes = url_bytes.encode(\"latin-1\")\n url_bytes += b\"?\" + scope[\"query_string\"]\n\n if scope[\"type\"] == \"http\":\n version = scope[\"http_version\"]\n method = scope[\"method\"]\n elif scope[\"type\"] == \"websocket\":\n version = \"1.1\"\n method = \"GET\"\n\n instance.ws = instance.transport.create_websocket_connection(\n send, receive\n )\n await instance.ws.accept()\n else:\n pass\n # TODO:\n # - close connection\n\n request_class = 
sanic_app.request_class or Request\n instance.request = request_class(\n url_bytes,\n headers,\n version,\n method,\n instance.transport,\n sanic_app,\n )\n\n if sanic_app.is_request_stream:\n is_stream_handler = sanic_app.router.is_stream_handler(\n instance.request\n )\n if is_stream_handler:\n instance.request.stream = StreamBuffer(\n sanic_app.config.REQUEST_BUFFER_QUEUE_SIZE\n )\n instance.do_stream = True\n\n return instance\n\n async def read_body(self) -> bytes:\n \"\"\"\n Read and return the entire body from an incoming ASGI message.\n \"\"\"\n body = b\"\"\n more_body = True\n while more_body:\n message = await self.transport.receive()\n body += message.get(\"body\", b\"\")\n more_body = message.get(\"more_body\", False)\n\n return body\n\n async def stream_body(self) -> None:\n \"\"\"\n Read and stream the body in chunks from an incoming ASGI message.\n \"\"\"\n more_body = True\n\n while more_body:\n message = await self.transport.receive()\n chunk = message.get(\"body\", b\"\")\n await self.request.stream.put(chunk)\n\n more_body = message.get(\"more_body\", False)\n\n await self.request.stream.put(None)\n\n async def __call__(self) -> None:\n \"\"\"\n Handle the incoming request.\n \"\"\"\n if not self.do_stream:\n self.request.body = await self.read_body()\n else:\n self.sanic_app.loop.create_task(self.stream_body())\n\n handler = self.sanic_app.handle_request\n callback = None if self.ws else self.stream_callback\n await handler(self.request, None, callback)\n\n async def stream_callback(self, response: HTTPResponse) -> None:\n \"\"\"\n Write the response.\n \"\"\"\n headers = []\n cookies = {}\n try:\n cookies = {\n v.key: v\n for _, v in list(\n filter(\n lambda item: item[0].lower() == \"set-cookie\",\n response.headers.items(),\n )\n )\n }\n headers += [\n (str(name).encode(\"latin-1\"), str(value).encode(\"latin-1\"))\n for name, value in response.headers.items()\n if name.lower() not in [\"set-cookie\"]\n ]\n except AttributeError:\n logger.error(\n \"Invalid response object for url %s, \"\n \"Expected Type: HTTPResponse, Actual Type: %s\",\n self.request.url,\n type(response),\n )\n exception = ServerError(\"Invalid response type\")\n response = self.sanic_app.error_handler.response(\n self.request, exception\n )\n headers = [\n (str(name).encode(\"latin-1\"), str(value).encode(\"latin-1\"))\n for name, value in response.headers.items()\n if name not in (b\"Set-Cookie\",)\n ]\n\n if \"content-length\" not in response.headers and not isinstance(\n response, StreamingHTTPResponse\n ):\n headers += [\n (b\"content-length\", str(len(response.body)).encode(\"latin-1\"))\n ]\n\n if response.cookies:\n cookies.update(\n {\n v.key: v\n for _, v in response.cookies.items()\n if v.key not in cookies.keys()\n }\n )\n\n headers += [\n (b\"set-cookie\", cookie.encode(\"utf-8\"))\n for k, cookie in cookies.items()\n ]\n\n await self.transport.send(\n {\n \"type\": \"http.response.start\",\n \"status\": response.status,\n \"headers\": headers,\n }\n )\n\n if isinstance(response, StreamingHTTPResponse):\n response.protocol = self.transport.get_protocol()\n await response.stream()\n await response.protocol.complete()\n\n else:\n await self.transport.send(\n {\n \"type\": \"http.response.body\",\n \"body\": response.body,\n \"more_body\": False,\n }\n )\n", "path": "sanic/asgi.py"}], "after_files": [{"content": "import asyncio\nimport warnings\n\nfrom inspect import isawaitable\nfrom typing import Any, Awaitable, Callable, MutableMapping, Union\nfrom urllib.parse import quote\n\nfrom 
multidict import CIMultiDict\n\nfrom sanic.exceptions import InvalidUsage, ServerError\nfrom sanic.log import logger\nfrom sanic.request import Request\nfrom sanic.response import HTTPResponse, StreamingHTTPResponse\nfrom sanic.server import StreamBuffer\nfrom sanic.websocket import WebSocketConnection\n\n\nASGIScope = MutableMapping[str, Any]\nASGIMessage = MutableMapping[str, Any]\nASGISend = Callable[[ASGIMessage], Awaitable[None]]\nASGIReceive = Callable[[], Awaitable[ASGIMessage]]\n\n\nclass MockProtocol:\n def __init__(self, transport: \"MockTransport\", loop):\n self.transport = transport\n self._not_paused = asyncio.Event(loop=loop)\n self._not_paused.set()\n self._complete = asyncio.Event(loop=loop)\n\n def pause_writing(self) -> None:\n self._not_paused.clear()\n\n def resume_writing(self) -> None:\n self._not_paused.set()\n\n async def complete(self) -> None:\n self._not_paused.set()\n await self.transport.send(\n {\"type\": \"http.response.body\", \"body\": b\"\", \"more_body\": False}\n )\n\n @property\n def is_complete(self) -> bool:\n return self._complete.is_set()\n\n async def push_data(self, data: bytes) -> None:\n if not self.is_complete:\n await self.transport.send(\n {\"type\": \"http.response.body\", \"body\": data, \"more_body\": True}\n )\n\n async def drain(self) -> None:\n await self._not_paused.wait()\n\n\nclass MockTransport:\n def __init__(\n self, scope: ASGIScope, receive: ASGIReceive, send: ASGISend\n ) -> None:\n self.scope = scope\n self._receive = receive\n self._send = send\n self._protocol = None\n self.loop = None\n\n def get_protocol(self) -> MockProtocol:\n if not self._protocol:\n self._protocol = MockProtocol(self, self.loop)\n return self._protocol\n\n def get_extra_info(self, info: str) -> Union[str, bool]:\n if info == \"peername\":\n return self.scope.get(\"server\")\n elif info == \"sslcontext\":\n return self.scope.get(\"scheme\") in [\"https\", \"wss\"]\n\n def get_websocket_connection(self) -> WebSocketConnection:\n try:\n return self._websocket_connection\n except AttributeError:\n raise InvalidUsage(\"Improper websocket connection.\")\n\n def create_websocket_connection(\n self, send: ASGISend, receive: ASGIReceive\n ) -> WebSocketConnection:\n self._websocket_connection = WebSocketConnection(send, receive)\n return self._websocket_connection\n\n def add_task(self) -> None:\n raise NotImplementedError\n\n async def send(self, data) -> None:\n # TODO:\n # - Validation on data and that it is formatted properly and is valid\n await self._send(data)\n\n async def receive(self) -> ASGIMessage:\n return await self._receive()\n\n\nclass Lifespan:\n def __init__(self, asgi_app: \"ASGIApp\") -> None:\n self.asgi_app = asgi_app\n\n if \"before_server_start\" in self.asgi_app.sanic_app.listeners:\n warnings.warn(\n 'You have set a listener for \"before_server_start\" '\n \"in ASGI mode. \"\n \"It will be executed as early as possible, but not before \"\n \"the ASGI server is started.\"\n )\n if \"after_server_stop\" in self.asgi_app.sanic_app.listeners:\n warnings.warn(\n 'You have set a listener for \"after_server_stop\" '\n \"in ASGI mode. 
\"\n \"It will be executed as late as possible, but not after \"\n \"the ASGI server is stopped.\"\n )\n\n async def startup(self) -> None:\n \"\"\"\n Gather the listeners to fire on server start.\n Because we are using a third-party server and not Sanic server, we do\n not have access to fire anything BEFORE the server starts.\n Therefore, we fire before_server_start and after_server_start\n in sequence since the ASGI lifespan protocol only supports a single\n startup event.\n \"\"\"\n listeners = self.asgi_app.sanic_app.listeners.get(\n \"before_server_start\", []\n ) + self.asgi_app.sanic_app.listeners.get(\"after_server_start\", [])\n\n for handler in listeners:\n response = handler(\n self.asgi_app.sanic_app, self.asgi_app.sanic_app.loop\n )\n if isawaitable(response):\n await response\n\n async def shutdown(self) -> None:\n \"\"\"\n Gather the listeners to fire on server stop.\n Because we are using a third-party server and not Sanic server, we do\n not have access to fire anything AFTER the server stops.\n Therefore, we fire before_server_stop and after_server_stop\n in sequence since the ASGI lifespan protocol only supports a single\n shutdown event.\n \"\"\"\n listeners = self.asgi_app.sanic_app.listeners.get(\n \"before_server_stop\", []\n ) + self.asgi_app.sanic_app.listeners.get(\"after_server_stop\", [])\n\n for handler in listeners:\n response = handler(\n self.asgi_app.sanic_app, self.asgi_app.sanic_app.loop\n )\n if isawaitable(response):\n await response\n\n async def __call__(\n self, scope: ASGIScope, receive: ASGIReceive, send: ASGISend\n ) -> None:\n message = await receive()\n if message[\"type\"] == \"lifespan.startup\":\n await self.startup()\n await send({\"type\": \"lifespan.startup.complete\"})\n\n message = await receive()\n if message[\"type\"] == \"lifespan.shutdown\":\n await self.shutdown()\n await send({\"type\": \"lifespan.shutdown.complete\"})\n\n\nclass ASGIApp:\n def __init__(self) -> None:\n self.ws = None\n\n @classmethod\n async def create(\n cls, sanic_app, scope: ASGIScope, receive: ASGIReceive, send: ASGISend\n ) -> \"ASGIApp\":\n instance = cls()\n instance.sanic_app = sanic_app\n instance.transport = MockTransport(scope, receive, send)\n instance.transport.add_task = sanic_app.loop.create_task\n instance.transport.loop = sanic_app.loop\n\n headers = CIMultiDict(\n [\n (key.decode(\"latin-1\"), value.decode(\"latin-1\"))\n for key, value in scope.get(\"headers\", [])\n ]\n )\n instance.do_stream = (\n True if headers.get(\"expect\") == \"100-continue\" else False\n )\n instance.lifespan = Lifespan(instance)\n\n if scope[\"type\"] == \"lifespan\":\n await instance.lifespan(scope, receive, send)\n else:\n url_bytes = scope.get(\"root_path\", \"\") + quote(scope[\"path\"])\n url_bytes = url_bytes.encode(\"latin-1\")\n url_bytes += b\"?\" + scope[\"query_string\"]\n\n if scope[\"type\"] == \"http\":\n version = scope[\"http_version\"]\n method = scope[\"method\"]\n elif scope[\"type\"] == \"websocket\":\n version = \"1.1\"\n method = \"GET\"\n\n instance.ws = instance.transport.create_websocket_connection(\n send, receive\n )\n await instance.ws.accept()\n else:\n pass\n # TODO:\n # - close connection\n\n request_class = sanic_app.request_class or Request\n instance.request = request_class(\n url_bytes,\n headers,\n version,\n method,\n instance.transport,\n sanic_app,\n )\n\n if sanic_app.is_request_stream:\n is_stream_handler = sanic_app.router.is_stream_handler(\n instance.request\n )\n if is_stream_handler:\n instance.request.stream = 
StreamBuffer(\n sanic_app.config.REQUEST_BUFFER_QUEUE_SIZE\n )\n instance.do_stream = True\n\n return instance\n\n async def read_body(self) -> bytes:\n \"\"\"\n Read and return the entire body from an incoming ASGI message.\n \"\"\"\n body = b\"\"\n more_body = True\n while more_body:\n message = await self.transport.receive()\n body += message.get(\"body\", b\"\")\n more_body = message.get(\"more_body\", False)\n\n return body\n\n async def stream_body(self) -> None:\n \"\"\"\n Read and stream the body in chunks from an incoming ASGI message.\n \"\"\"\n more_body = True\n\n while more_body:\n message = await self.transport.receive()\n chunk = message.get(\"body\", b\"\")\n await self.request.stream.put(chunk)\n\n more_body = message.get(\"more_body\", False)\n\n await self.request.stream.put(None)\n\n async def __call__(self) -> None:\n \"\"\"\n Handle the incoming request.\n \"\"\"\n if not self.do_stream:\n self.request.body = await self.read_body()\n else:\n self.sanic_app.loop.create_task(self.stream_body())\n\n handler = self.sanic_app.handle_request\n callback = None if self.ws else self.stream_callback\n await handler(self.request, None, callback)\n\n async def stream_callback(self, response: HTTPResponse) -> None:\n \"\"\"\n Write the response.\n \"\"\"\n headers = []\n cookies = {}\n try:\n cookies = {\n v.key: v\n for _, v in list(\n filter(\n lambda item: item[0].lower() == \"set-cookie\",\n response.headers.items(),\n )\n )\n }\n headers += [\n (str(name).encode(\"latin-1\"), str(value).encode(\"latin-1\"))\n for name, value in response.headers.items()\n if name.lower() not in [\"set-cookie\"]\n ]\n except AttributeError:\n logger.error(\n \"Invalid response object for url %s, \"\n \"Expected Type: HTTPResponse, Actual Type: %s\",\n self.request.url,\n type(response),\n )\n exception = ServerError(\"Invalid response type\")\n response = self.sanic_app.error_handler.response(\n self.request, exception\n )\n headers = [\n (str(name).encode(\"latin-1\"), str(value).encode(\"latin-1\"))\n for name, value in response.headers.items()\n if name not in (b\"Set-Cookie\",)\n ]\n\n if \"content-length\" not in response.headers and not isinstance(\n response, StreamingHTTPResponse\n ):\n headers += [\n (b\"content-length\", str(len(response.body)).encode(\"latin-1\"))\n ]\n\n if \"content-type\" not in response.headers:\n headers += [\n (b\"content-type\", str(response.content_type).encode(\"latin-1\"))\n ]\n\n if response.cookies:\n cookies.update(\n {\n v.key: v\n for _, v in response.cookies.items()\n if v.key not in cookies.keys()\n }\n )\n\n headers += [\n (b\"set-cookie\", cookie.encode(\"utf-8\"))\n for k, cookie in cookies.items()\n ]\n\n await self.transport.send(\n {\n \"type\": \"http.response.start\",\n \"status\": response.status,\n \"headers\": headers,\n }\n )\n\n if isinstance(response, StreamingHTTPResponse):\n response.protocol = self.transport.get_protocol()\n await response.stream()\n await response.protocol.complete()\n\n else:\n await self.transport.send(\n {\n \"type\": \"http.response.body\",\n \"body\": response.body,\n \"more_body\": False,\n }\n )\n", "path": "sanic/asgi.py"}]} |
gh_patches_debug_1129 | rasdani/github-patches | git_diff | wright-group__WrightTools-576 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
remove_variable doesn't work if implied is False
https://github.com/wright-group/WrightTools/blob/7803e4ae618b670c4f4d5811eddac9746fa045dd/WrightTools/data/_data.py#L938-L948
--- END ISSUE ---
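For context, a minimal usage sketch of the reported failure (the exact signature of `remove_variable` is assumed from the issue title and the linked lines; `data` stands for an existing `WrightTools.Data` object and `"dummy"` is a hypothetical variable name):

```python
import numpy as np

# `data` is an existing WrightTools.Data object; "dummy" is a hypothetical
# variable that no axis expression depends on.
data.create_variable("dummy", values=np.zeros(data.shape))
data.remove_variable("dummy", implied=False)  # reported not to work when implied is False
```

The issue title implies the default `implied=True` path behaves as expected, so the problem is specific to the `implied=False` branch in the linked lines.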
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `WrightTools/data/_data.py`
Content:
```
1 """Central data class and associated."""
2
3
4 # --- import --------------------------------------------------------------------------------------
5
6
7 import collections
8 import operator
9 import functools
10 import warnings
11
12 import numpy as np
13
14 import h5py
15
16 import scipy
17 from scipy.interpolate import griddata, interp1d
18
19 from .._group import Group
20 from .. import collection as wt_collection
21 from .. import exceptions as wt_exceptions
22 from .. import kit as wt_kit
23 from .. import units as wt_units
24 from ._axis import Axis, identifier_to_operator
25 from ._channel import Channel
26 from ._variable import Variable
27
28
29 # --- define --------------------------------------------------------------------------------------
30
31
32 __all__ = ['Data']
33
34
35 # --- class ---------------------------------------------------------------------------------------
36
37
38 class Data(Group):
39 """Multidimensional dataset."""
40
41 class_name = 'Data'
42
43 def __init__(self, *args, **kwargs):
44 self._axes = []
45 Group.__init__(self, *args, **kwargs)
46 # populate axes from attrs string
47 for identifier in self.attrs.get('axes', []):
48 identifier = identifier.decode()
49 expression, units = identifier.split('{')
50 units = units.replace('}', '')
51 for i in identifier_to_operator.keys():
52 expression = expression.replace(i, identifier_to_operator[i])
53 expression = expression.replace(' ', '') # remove all whitespace
54 axis = Axis(self, expression, units.strip())
55 self._axes.append(axis)
56 self._current_axis_identities_in_natural_namespace = []
57 self._on_axes_updated()
58 # the following are populated if not already recorded
59 self.channel_names
60 self.source
61 self.variable_names
62
63 def __repr__(self):
64 return '<WrightTools.Data \'{0}\' {1} at {2}>'.format(
65 self.natural_name, str(self.axis_names), '::'.join([self.filepath, self.name]))
66
67 @property
68 def axes(self):
69 return tuple(self._axes)
70
71 @property
72 def axis_expressions(self):
73 """Axis expressions."""
74 return tuple(a.expression for a in self._axes)
75
76 @property
77 def axis_names(self):
78 """Axis names."""
79 return tuple(a.natural_name for a in self._axes)
80
81 @property
82 def channel_names(self):
83 """Channel names."""
84 if 'channel_names' not in self.attrs.keys():
85 self.attrs['channel_names'] = np.array([], dtype='S')
86 return tuple(s.decode() for s in self.attrs['channel_names'])
87
88 @channel_names.setter
89 def channel_names(self, value):
90 """Set channel names."""
91 self.attrs['channel_names'] = np.array(value, dtype='S')
92
93 @property
94 def channels(self):
95 """Channels."""
96 return tuple(self[n] for n in self.channel_names)
97
98 @property
99 def datasets(self):
100 """Datasets."""
101 return tuple(v for _, v in self.items() if isinstance(v, h5py.Dataset))
102
103 @property
104 def kind(self):
105 """Kind."""
106 if 'kind' not in self.attrs.keys():
107 self.attrs['kind'] = 'None'
108 value = self.attrs['kind']
109 return value if not value == 'None' else None
110
111 @property
112 def ndim(self):
113 """Get number of dimensions."""
114 try:
115 assert self._ndim is not None
116 except (AssertionError, AttributeError):
117 if len(self.variables) == 0:
118 self._ndim = 0
119 else:
120 self._ndim = self.variables[0].ndim
121 finally:
122 return self._ndim
123
124 @property
125 def shape(self):
126 """Shape."""
127 try:
128 assert self._shape is not None
129 except (AssertionError, AttributeError):
130 self._shape = wt_kit.joint_shape(*self.variables)
131 finally:
132 return self._shape
133
134 @property
135 def size(self):
136 """Size."""
137 return functools.reduce(operator.mul, self.shape)
138
139 @property
140 def source(self):
141 """Source."""
142 if 'source' not in self.attrs.keys():
143 self.attrs['source'] = 'None'
144 value = self.attrs['source']
145 return value if not value == 'None' else None
146
147 @property
148 def units(self):
149 """All axis units."""
150 return tuple(a.units for a in self._axes)
151
152 @property
153 def variable_names(self):
154 """Variable names."""
155 if 'variable_names' not in self.attrs.keys():
156 self.attrs['variable_names'] = np.array([], dtype='S')
157 return tuple(s.decode() for s in self.attrs['variable_names'])
158
159 @variable_names.setter
160 def variable_names(self, value):
161 """Set variable names."""
162 self.attrs['variable_names'] = np.array(value, dtype='S')
163
164 @property
165 def variables(self):
166 """Variables."""
167 try:
168 assert self._variables is not None
169 except (AssertionError, AttributeError):
170 self._variables = [self[n] for n in self.variable_names]
171 finally:
172 return self._variables
173
174 @property
175 def _leaf(self):
176 return '{0} {1}'.format(self.natural_name, self.shape)
177
178 def _on_axes_updated(self):
179 """Method to run when axes are changed in any way.
180
181 Propagates updated axes properly.
182 """
183 # update attrs
184 self.attrs['axes'] = [a.identity.encode() for a in self._axes]
185 # remove old attributes
186 while len(self._current_axis_identities_in_natural_namespace) > 0:
187 key = self._current_axis_identities_in_natural_namespace.pop(0)
188 self.__dict__.pop(key)
189 # populate new attributes
190 for a in self._axes:
191 key = a.natural_name
192 setattr(self, key, a)
193 self._current_axis_identities_in_natural_namespace.append(key)
194
195 def _print_branch(self, prefix, depth, verbose):
196
197 def print_leaves(prefix, lis, vline=True):
198 for i, item in enumerate(lis):
199 if vline:
200 a = '│ '
201 else:
202 a = ' '
203 if i + 1 == len(lis):
204 b = '└── '
205 else:
206 b = '├── '
207 s = prefix + a + b + '{0}: {1}'.format(i, item._leaf)
208 print(s)
209
210 if verbose:
211 # axes
212 print(prefix + '├── axes')
213 print_leaves(prefix, self.axes)
214 # variables
215 print(prefix + '├── variables')
216 print_leaves(prefix, self.variables)
217 # channels
218 print(prefix + '└── channels')
219 print_leaves(prefix, self.channels, vline=False)
220 else:
221 # axes
222 s = 'axes: '
223 s += ', '.join(['{0} ({1})'.format(a.expression, a.units) for a in self.axes])
224 print(prefix + '├── ' + s)
225 # channels
226 s = 'channels: '
227 s += ', '.join(self.channel_names)
228 print(prefix + '└── ' + s)
229
230 def bring_to_front(self, channel):
231 """Bring a specific channel to the zero-indexed position in channels.
232
233 All other channels get pushed back but remain in order.
234
235 Parameters
236 ----------
237 channel : int or str
238 Channel index or name.
239 """
240 channel_index = wt_kit.get_index(self.channel_names, channel)
241 new = list(self.channel_names)
242 new.insert(0, new.pop(channel_index))
243 self.channel_names = new
244
245 def chop(self, *args, at={}, parent=None, verbose=True):
246 """Divide the dataset into its lower-dimensionality components.
247
248 Parameters
249 ----------
250 axis : str or int (args)
251 Axes of the returned data objects. Strings refer to the names of
252 axes in this object, integers refer to their index. Provide multiple
253 axes to return multidimensional data objects.
254 at : dict (optional)
255 Choice of position along an axis. Keys are axis names, values are lists
256 ``[position, input units]``. If exact position does not exist,
257 the closest valid position is used.
258 parent : WrightTools Collection instance (optional)
259 Collection to place the new "chop" collection within. Default is
260 None (new parent).
261 verbose : bool (optional)
262 Toggle talkback. Default is True.
263
264 Returns
265 -------
266 WrightTools Collection
267 Collection of chopped data objects.
268
269 Examples
270 --------
271 >>> data.axis_names
272 ['d2', 'w1', 'w2']
273
274 Get all w1 wigners.
275
276 >>> datas = data.chop('d2', 'w1')
277 >>> len(datas)
278 51
279
280 Get 2D frequency at d2=0 fs.
281
282 >>> datas = data.chop('w1', 'w2', at={'d2': [0, 'fs']})
283 >>> len(datas)
284 0
285 >>> datas[0].axis_names
286 ['w1', 'w2']
287 >>> datas[0].d2[:]
288 0.
289
290 See Also
291 --------
292 collapse
293 Collapse the dataset along one axis.
294 split
295 Split the dataset while maintaining its dimensionality.
296 """
297 # parse args
298 args = list(args)
299 for i, arg in enumerate(args):
300 if isinstance(arg, int):
301 args[i] = self._axes[arg].expression
302 # get output collection
303 out = wt_collection.Collection(name='chop', parent=parent)
304 # get output shape
305 kept = args + list(at.keys())
306 kept_axes = [self._axes[self.axis_expressions.index(a)] for a in kept]
307 removed_axes = [a for a in self._axes if a not in kept_axes]
308 removed_shape = wt_kit.joint_shape(*removed_axes)
309 if removed_shape == ():
310 removed_shape = (1,) * self.ndim
311 # iterate
312 i = 0
313 for idx in np.ndindex(removed_shape):
314 idx = np.array(idx, dtype=object)
315 idx[np.array(removed_shape) == 1] = slice(None)
316 for axis, point in at.items():
317 point, units = point
318 destination_units = self._axes[self.axis_names.index(axis)].units
319 point = wt_units.converter(point, units, destination_units)
320 axis_index = self.axis_names.index(axis)
321 axis = self._axes[axis_index]
322 idx[axis_index] = np.argmin(np.abs(axis[tuple(idx)] - point))
323 data = out.create_data(name='chop%03i' % i)
324 for v in self.variables:
325 kwargs = {}
326 kwargs['name'] = v.natural_name
327 kwargs['values'] = v[idx]
328 kwargs['units'] = v.units
329 kwargs['label'] = v.label
330 kwargs.update(v.attrs)
331 data.create_variable(**kwargs)
332 for c in self.channels:
333 kwargs = {}
334 kwargs['name'] = c.natural_name
335 kwargs['values'] = c[idx]
336 kwargs['units'] = c.units
337 kwargs['label'] = c.label
338 kwargs['signed'] = c.signed
339 kwargs.update(c.attrs)
340 data.create_channel(**kwargs)
341 new_axes = [a.expression for a in kept_axes if a.expression not in at.keys()]
342 new_axis_units = [a.units for a in kept_axes if a.expression not in at.keys()]
343 data.transform(*new_axes)
344 for j, units in enumerate(new_axis_units):
345 data.axes[j].convert(units)
346 i += 1
347 out.flush()
348 # return
349 if verbose:
350 es = [a.expression for a in kept_axes]
351 print('chopped data into %d piece(s)' % len(out), 'in', es)
352 return out
353
354 def collapse(self, axis, method='integrate'):
355 """
356 Collapse the dataset along one axis.
357
358 Parameters
359 ----------
360 axis : int or str
361 The axis to collapse along.
362 method : {'integrate', 'average', 'sum', 'max', 'min'} (optional)
363 The method of collapsing the given axis. Method may also be list
364 of methods corresponding to the channels of the object. Default
365 is integrate. All methods but integrate disregard NANs.
366
367 See Also
368 --------
369 chop
370 Divide the dataset into its lower-dimensionality components.
371 split
372 Split the dataset while maintaining its dimensionality.
373 """
374 raise NotImplementedError
375 # get axis index --------------------------------------------------------------------------
376 if isinstance(axis, int):
377 axis_index = axis
378 elif isinstance(axis, str):
379 axis_index = self.axis_names.index(axis)
380 else:
381 raise TypeError("axis: expected {int, str}, got %s" % type(axis))
382 # methods ---------------------------------------------------------------------------------
383 if isinstance(method, list):
384 if len(method) == len(self.channels):
385 methods = method
386 else:
387 print('method argument incompatible in data.collapse')
388 elif isinstance(method, str):
389 methods = [method for _ in self.channels]
390 # collapse --------------------------------------------------------------------------------
391 for method, channel in zip(methods, self.channels):
392 if method in ['int', 'integrate']:
393 channel[:] = np.trapz(
394 y=channel[:], x=self._axes[axis_index][:], axis=axis_index)
395 elif method == 'sum':
396 channel[:] = np.nansum(channel[:], axis=axis_index)
397 elif method in ['max', 'maximum']:
398 channel[:] = np.nanmax(channel[:], axis=axis_index)
399 elif method in ['min', 'minimum']:
400 channel[:] = np.nanmin(channel[:], axis=axis_index)
401 elif method in ['ave', 'average', 'mean']:
402 channel[:] = np.nanmean(channel[:], axis=axis_index)
403 else:
404 print('method not recognized in data.collapse')
405 # cleanup ---------------------------------------------------------------------------------
406 self._axes.pop(axis_index)
407
408 def convert(self, destination_units, *, convert_variables=False, verbose=True):
409 """Convert all compatable axes to given units.
410
411 Parameters
412 ----------
413 destination_units : str
414 Destination units.
415 convert_variables : boolean (optional)
416 Toggle conversion of stored arrays. Default is False
417 verbose : bool (optional)
418 Toggle talkback. Default is True.
419
420 See Also
421 --------
422 Axis.convert
423 Convert a single axis object to compatable units. Call on an
424 axis object in data.axes.
425 """
426 # get kind of units
427 units_kind = wt_units.kind(destination_units)
428 # apply to all compatible axes
429 for axis in self.axes:
430 if axis.units_kind == units_kind:
431 axis.convert(destination_units, convert_variables=convert_variables)
432 if verbose:
433 print('axis', axis.expression, 'converted')
434 if convert_variables:
435 for var in self.variables:
436 if wt_units.kind(var.units) == units_kind:
437 var.convert(destination_units)
438
439 if verbose:
440 print('variable', var.natural_name, 'converted')
441 self._on_axes_updated()
442
443 def create_channel(self, name, values=None, shape=None, units=None, **kwargs):
444 """Append a new channel.
445
446 Parameters
447 ----------
448 name : string
449 Unique name for this channel.
450 values : array (optional)
451 Array. If None, an empty array equaling the data shape is
452 created. Default is None.
453 shape : tuple of int
454 Shape to use. must broadcast with the full shape.
455 Only used if `values` is None.
456 Default is the full shape of self.
457 units : string (optional)
458 Channel units. Default is None.
459 kwargs : dict
460 Additional keyword arguments passed to Channel instantiation.
461
462 Returns
463 -------
464 Channel
465 Created channel.
466 """
467 require_kwargs = {}
468 if values is None:
469 if shape is None:
470 require_kwargs['shape'] = self.shape
471 else:
472 require_kwargs['shape'] = shape
473 require_kwargs['dtype'] = np.float64
474 else:
475 require_kwargs['data'] = values
476 require_kwargs['shape'] = values.shape
477 require_kwargs['dtype'] = values.dtype
478 # create dataset
479 dataset_id = self.require_dataset(name=name, chunks=True, **require_kwargs).id
480 channel = Channel(self, dataset_id, units=units, **kwargs)
481 # finish
482 self.attrs['channel_names'] = np.append(self.attrs['channel_names'], name.encode())
483 return channel
484
485 def create_variable(self, name, values=None, shape=None, units=None, **kwargs):
486 """Add new child variable.
487
488 Parameters
489 ----------
490 name : string
491 Unique identifier.
492 values : array-like (optional)
493 Array to populate variable with. If None, an variable will be filled with NaN.
494 Default is None.
495 shape : tuple of int
496 Shape to use. must broadcast with the full shape.
497 Only used if `values` is None.
498 Default is the full shape of self.
499 units : string (optional)
500 Variable units. Default is None.
501 kwargs
502 Additional kwargs to variable instantiation.
503
504 Returns
505 -------
506 WrightTools Variable
507 New child variable.
508 """
509 if values is None:
510 if shape is None:
511 shape = self.shape
512 dtype = np.float64
513 else:
514 shape = values.shape
515 dtype = values.dtype
516 # create dataset
517 id = self.require_dataset(name=name, data=values, shape=shape, dtype=dtype).id
518 variable = Variable(self, id, units=units, **kwargs)
519 # finish
520 self.variables.append(variable)
521 self.attrs['variable_names'] = np.append(self.attrs['variable_names'], name.encode())
522 return variable
523
524 def flush(self):
525 super().flush()
526
527 def get_nadir(self, channel=0):
528 """Get the coordinates in units of the minimum in a channel.
529
530 Parameters
531 ----------
532 channel : int or str (optional)
533 Channel. Default is 0.
534
535 Returns
536 -------
537 generator of numbers
538 Coordinates in units for each axis.
539 """
540 # get channel
541 if isinstance(channel, int):
542 channel_index = channel
543 elif isinstance(channel, str):
544 channel_index = self.channel_names.index(channel)
545 else:
546 raise TypeError("channel: expected {int, str}, got %s" % type(channel))
547 channel = self.channels[channel_index]
548 # get indicies
549 idx = channel.argmin()
550 # finish
551 return tuple(a[idx] for a in self._axes)
552
553 def get_zenith(self, channel=0):
554 """Get the coordinates in units of the maximum in a channel.
555
556 Parameters
557 ----------
558 channel : int or str (optional)
559 Channel. Default is 0.
560
561 Returns
562 -------
563 generator of numbers
564 Coordinates in units for each axis.
565 """
566 # get channel
567 if isinstance(channel, int):
568 channel_index = channel
569 elif isinstance(channel, str):
570 channel_index = self.channel_names.index(channel)
571 else:
572 raise TypeError("channel: expected {int, str}, got %s" % type(channel))
573 channel = self.channels[channel_index]
574 # get indicies
575 idx = channel.argmax()
576 # finish
577 return tuple(a[idx] for a in self._axes)
578
579 def heal(self, channel=0, method='linear', fill_value=np.nan,
580 verbose=True):
581 """
582 Remove nans from channel using interpolation.
583
584 Parameters
585 ----------
586 channel : int or str (optional)
587 Channel to heal. Default is 0.
588 method : {'linear', 'nearest', 'cubic'} (optional)
589 The interpolation method. Note that cubic interpolation is only
590 possible for 1D and 2D data. See `griddata`__ for more information.
591 Default is linear.
592 fill_value : number-like (optional)
593 The value written to pixels that cannot be filled by interpolation.
594 Default is nan.
595 verbose : bool (optional)
596 Toggle talkback. Default is True.
597
598
599 __ http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.griddata.html
600
601
602 .. note:: Healing may take several minutes for large datasets.
603 Interpolation time goes as nearest, linear, then cubic.
604
605
606 """
607 warnings.warn('heal', category=wt_exceptions.EntireDatasetInMemoryWarning)
608 timer = wt_kit.Timer(verbose=False)
609 with timer:
610 # channel
611 if isinstance(channel, int):
612 channel_index = channel
613 elif isinstance(channel, str):
614 channel_index = self.channel_names.index(channel)
615 else:
616 raise TypeError("channel: expected {int, str}, got %s" % type(channel))
617 channel = self.channels[channel_index]
618 values = self.channels[channel_index][:]
619 points = [axis[:] for axis in self._axes]
620 xi = tuple(np.meshgrid(*points, indexing='ij'))
621 # 'undo' gridding
622 arr = np.zeros((len(self._axes) + 1, values.size))
623 for i in range(len(self._axes)):
624 arr[i] = xi[i].flatten()
625 arr[-1] = values.flatten()
626 # remove nans
627 arr = arr[:, ~np.isnan(arr).any(axis=0)]
628 # grid data wants tuples
629 tup = tuple([arr[i] for i in range(len(arr) - 1)])
630 # grid data
631 out = griddata(tup, arr[-1], xi, method=method, fill_value=fill_value)
632 self.channels[channel_index][:] = out
633 # print
634 if verbose:
635 print('channel {0} healed in {1} seconds'.format(
636 channel.name, np.around(timer.interval, decimals=3)))
637
638 def level(self, channel, axis, npts, *, verbose=True):
639 """Subtract the average value of npts at the edge of a given axis.
640
641 Parameters
642 ----------
643 channel : int or str
644 Channel to level.
645 axis : int
646 Axis to level along.
647 npts : int
648 Number of points to average for each slice. Positive numbers
649 take points at leading indicies and negative numbers take points
650 at trailing indicies.
651 verbose : bool (optional)
652 Toggle talkback. Default is True.
653 """
654 warnings.warn('level', category=wt_exceptions.EntireDatasetInMemoryWarning)
655 channel_index = wt_kit.get_index(self.channel_names, channel)
656 channel = self.channels[channel_index]
657 # verify npts not zero
658 npts = int(npts)
659 if npts == 0:
660 raise wt_exceptions.ValueError('npts must not be zero')
661 # get subtrahend
662 ss = [slice(None)] * self.ndim
663 if npts > 0:
664 ss[axis] = slice(0, npts, None)
665 else:
666 ss[axis] = slice(npts, None, None)
667 subtrahend = np.nanmean(channel[ss], axis=axis)
668 if self.ndim > 1:
669 subtrahend = np.expand_dims(subtrahend, axis=axis)
670 # level
671 channel[:] = channel[:] - subtrahend # verbose
672 # finish
673 channel._null = 0
674 if verbose:
675 print('channel {0} leveled along axis {1}'.format(channel.natural_name, axis))
676
677 def map_variable(self, variable, points, input_units='same', *, name=None, parent=None,
678 verbose=True):
679 """Map points of an axis to new points using linear interpolation.
680
681 Out-of-bounds points are written nan.
682
683 Parameters
684 ----------
685 variable : string
686 The variable to map onto.
687 points : array-like or int
688 If array, the new points. If int, new points will have the same
689 limits, with int defining the number of evenly spaced points
690 between.
691 input_units : str (optional)
692 The units of the new points. Default is same, which assumes
693 the new points have the same units as the axis.
694 name : string (optional)
695 The name of the new data object. If None, generated from
696 natural_name. Default is None.
697 parent : WrightTools.Collection (optional)
698 Parent of new data object. If None, data is made at root of a
699 new temporary file.
700 verbose : bool (optional)
701 Toggle talkback. Default is True.
702
703 Returns
704 -------
705 WrightTools.Data
706 New data object.
707 """
708 # get variable index
709 variable_index = wt_kit.get_index(self.variable_names, variable)
710 variable = self.variables[variable_index]
711 # get points
712 if isinstance(points, int):
713 points = np.linspace(variable.min(), variable.max(), points)
714 points = np.array(points)
715 # points dimensionality
716 if points.ndim < variable.ndim:
717 for i, d in enumerate(variable.shape):
718 if d == 1:
719 points = np.expand_dims(points, axis=i)
720 # convert points
721 if input_units == 'same':
722 pass
723 else:
724 points = wt_units.converter(points, input_units, variable.units)
725 # construct new data object
726 special = ['name', 'axes', 'channel_names', 'variable_names']
727 kwargs = {k: v for k, v in self.attrs.items() if k not in special}
728 if name is None:
729 name = '{0}_{1}_mapped'.format(self.natural_name, variable.natural_name)
730 kwargs['name'] = name
731 kwargs['parent'] = parent
732 out = Data(**kwargs)
733 # mapped variable
734 values = points
735 out.create_variable(values=values, **variable.attrs)
736 # orthogonal variables
737 for v in self.variables:
738 if wt_kit.orthogonal(v.shape, variable.shape):
739 out.create_variable(values=v[:], **v.attrs)
740 out.transform(*self.axis_expressions)
741 # interpolate
742 if self.ndim == 1:
743
744 def interpolate(dataset, points):
745 function = scipy.interpolate.interp1d(variable[:], dataset[:], bounds_error=False)
746 return function(points)
747
748 else:
749 pts = np.array([a.full.flatten() for a in self.axes]).T
750 out_pts = np.array([a.full.flatten() for a in out.axes]).T
751
752 def interpolate(dataset, points):
753 values = dataset.full.flatten()
754 function = scipy.interpolate.LinearNDInterpolator(pts, values, rescale=True)
755 new = function(out_pts)
756 new.shape = out.shape
757 return new
758
759 for v in self.variables:
760 if v.natural_name not in out.variable_names:
761 out.create_variable(values=interpolate(v, points), **v.attrs)
762 out.variable_names = self.variable_names # enforce old order
763 out._variables = None # force regeneration of variables @property
764 for channel in self.channels:
765 out.create_channel(values=interpolate(channel, points), **channel.attrs)
766 # finish
767 if verbose:
768 print('data mapped from {0} to {1}'.format(self.shape, out.shape))
769 return out
770
771 def offset(self, points, offsets, along, offset_axis,
772 units='same', offset_units='same', mode='valid',
773 method='linear', verbose=True):
774 """Offset one axis based on another axis' values.
775
776 Useful for correcting instrumental artifacts such as zerotune.
777
778 Parameters
779 ----------
780 points : 1D array-like
781 Points.
782 offsets : 1D array-like
783 Offsets.
784 along : str or int
785 Axis that points array lies along.
786 offset_axis : str or int
787 Axis to offset using offsets.
788 units : str (optional)
789 Units of points array.
790 offset_units : str (optional)
791 Units of offsets aray.
792 mode : {'valid', 'full', 'old'} (optional)
793 Define how far the new axis will extend. Points outside of valid
794 interpolation range will be written nan.
795 method : {'linear', 'nearest', 'cubic'} (optional)
796 The interpolation method. Note that cubic interpolation is only
797 possible for 1D and 2D data. See `griddata`__ for more information.
798 Default is linear.
799 verbose : bool (optional)
800 Toggle talkback. Default is True.
801
802
803 __ http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.griddata.html
804
805 >>> points # an array of w1 points
806 >>> offsets # an array of d1 corrections
807 >>> data.offset(points, offsets, 'w1', 'd1')
808
809 """
810 raise NotImplementedError
811 # axis ------------------------------------------------------------------------------------
812 if isinstance(along, int):
813 axis_index = along
814 elif isinstance(along, str):
815 axis_index = self.axis_names.index(along)
816 else:
817 raise TypeError("along: expected {int, str}, got %s" % type(along))
818 axis = self._axes[axis_index]
819 # values & points -------------------------------------------------------------------------
820 # get values, points, units
821 if units == 'same':
822 input_units = axis.units
823 else:
824 input_units = units
825 # check offsets is 1D or 0D
826 if len(offsets.shape) == 1:
827 pass
828 else:
829 raise RuntimeError('values must be 1D or 0D in offset!')
830 # check if units is compatible, convert
831 dictionary = getattr(wt_units, axis.units_kind)
832 if input_units in dictionary.keys():
833 pass
834 else:
835 raise RuntimeError('units incompatible in offset!')
836 points = wt_units.converter(points, input_units, axis.units)
837 # create correction array
838 function = interp1d(points, offsets, bounds_error=False)
839 corrections = function(axis[:])
840 # remove nans
841 finite_indicies = np.where(np.isfinite(corrections))[0]
842 left_pad_width = finite_indicies[0]
843 right_pad_width = len(corrections) - finite_indicies[-1] - 1
844 corrections = np.pad(corrections[np.isfinite(corrections)],
845 (int(left_pad_width), int(right_pad_width)), mode='edge')
846 # do correction ---------------------------------------------------------------------------
847 # transpose so axis is last
848 transpose_order = np.arange(len(self._axes))
849 transpose_order[axis_index] = len(self._axes) - 1
850 transpose_order[-1] = axis_index
851 self.transpose(transpose_order, verbose=False)
852 # get offset axis index
853 if isinstance(offset_axis, int):
854 offset_axis_index = offset_axis
855 elif isinstance(offset_axis, str):
856 offset_axis_index = self.axis_names.index(offset_axis)
857 else:
858 raise TypeError("offset_axis: expected {int, str}, got %s" % type(offset_axis))
859 # new points
860 new_points = [a[:] for a in self._axes]
861 old_offset_axis_points = self._axes[offset_axis_index][:]
862 spacing = abs((old_offset_axis_points.max() - old_offset_axis_points.min()) /
863 float(len(old_offset_axis_points)))
864 if mode == 'old':
865 new_offset_axis_points = old_offset_axis_points
866 elif mode == 'valid':
867 _max = old_offset_axis_points.max() + corrections.min()
868 _min = old_offset_axis_points.min() + corrections.max()
869 n = int(abs(np.ceil((_max - _min) / spacing)))
870 new_offset_axis_points = np.linspace(_min, _max, n)
871 elif mode == 'full':
872 _max = old_offset_axis_points.max() + corrections.max()
873 _min = old_offset_axis_points.min() + corrections.min()
874 n = np.ceil((_max - _min) / spacing)
875 new_offset_axis_points = np.linspace(_min, _max, n)
876 new_points[offset_axis_index] = new_offset_axis_points
877 new_xi = tuple(np.meshgrid(*new_points, indexing='ij'))
878 xi = tuple(np.meshgrid(*[a[:] for a in self._axes], indexing='ij'))
879 for channel in self.channels:
880 # 'undo' gridding
881 arr = np.zeros((len(self._axes) + 1, channel[:].size))
882 for i in range(len(self._axes)):
883 arr[i] = xi[i].flatten()
884 arr[-1] = channel[:].flatten()
885 # do corrections
886 corrections = list(corrections)
887 corrections = corrections * int((len(arr[0]) / len(corrections)))
888 arr[offset_axis_index] += corrections
889 # grid data
890 tup = tuple([arr[i] for i in range(len(arr) - 1)])
891 # note that rescale is crucial in this operation
892 out = griddata(tup, arr[-1], new_xi, method=method,
893 fill_value=np.nan, rescale=True)
894 channel[:] = out
895 self._axes[offset_axis_index][:] = new_offset_axis_points
896 # transpose out
897 self.transpose(transpose_order, verbose=False)
898
899 def print_tree(self, *, verbose=True):
900 """Print a ascii-formatted tree representation of the data contents."""
901 print('{0} ({1})'.format(self.natural_name, self.filepath))
902 self._print_branch('', depth=0, verbose=verbose)
903
904 def remove_channel(self, channel, *, verbose=True):
905 """Remove channel from data.
906
907 Parameters
908 ----------
909 channel : int or str
910 Channel index or name to remove.
911 verbose : boolean (optional)
912 Toggle talkback. Default is True.
913 """
914 channel_index = wt_kit.get_index(self.channel_names, channel)
915 new = list(self.channel_names)
916 name = new.pop(channel_index)
917 del self[name]
918 self.channel_names = new
919 if verbose:
920 print('channel {0} removed'.format(name))
921
922 def remove_variable(self, variable, *, implied=True, verbose=True):
923 """Remove variable from data.
924
925 Parameters
926 ----------
927 variable : int or str
928 Variable index or name to remove.
929 implied : boolean (optional)
930 Toggle deletion of other variables that start with the same
931 name. Default is True.
932 verbose : boolean (optional)
933 Toggle talkback. Default is True.
934 """
935 if isinstance(variable, int):
936 variable = self.variable_names[variable]
937 # find all of the implied variables
938 removed = []
939 if implied:
940 for n in self.variable_names:
941 if n.startswith(variable):
942 removed.append(n)
943 # check that axes will not be ruined
944 for n in removed:
945 for a in self._axes:
946 if n in a.expression:
947 message = '{0} is contained in axis {1}'.format(n, a.expression)
948 raise RuntimeError(message)
949 # do removal
950 for n in removed:
951 variable_index = wt_kit.get_index(self.variable_names, n)
952 new = list(self.variable_names)
953 name = new.pop(variable_index)
954 del self[name]
955 self.variable_names = new
956 # finish
957 if verbose:
958 print('{0} variable(s) removed:'.format(len(removed)))
959 for n in removed:
960 print(' {0}'.format(n))
961
962 def rename_channels(self, *, verbose=True, **kwargs):
963 """Rename a set of channels.
964
965 Parameters
966 ----------
967 kwargs
968 Keyword arguments of the form current:'new'.
969 verbose : boolean (optional)
970 Toggle talkback. Default is True
971 """
972 # ensure that items will remain unique
973 changed = kwargs.keys()
974 for k, v in kwargs.items():
975 if v not in changed and v in self.keys():
976 raise wt_exceptions.NameNotUniqueError(v)
977 # compile references to items that are changing
978 new = {}
979 for k, v in kwargs.items():
980 obj = self[k]
981 index = self.channel_names.index(k)
982 # rename
983 new[v] = obj, index
984 obj.instances.pop(obj.fullpath, None)
985 obj.natural_name = str(v)
986 # remove old references
987 del self[k]
988 # apply new references
989 names = list(self.channel_names)
990 for v, value in new.items():
991 obj, index = value
992 self[v] = obj
993 names[index] = v
994 self.channel_names = names
995 # finish
996 if verbose:
997 print('{0} channel(s) renamed:'.format(len(kwargs)))
998 for k, v in kwargs.items():
999 print(' {0} --> {1}'.format(k, v))
1000
1001 def rename_variables(self, *, implied=True, verbose=True, **kwargs):
1002 """Rename a set of variables.
1003
1004 Parameters
1005 ----------
1006 kwargs
1007 Keyword arguments of the form current:'new'.
1008 implied : boolean (optional)
1009 Toggle inclusion of other variables that start with the same
1010 name. Default is True.
1011 verbose : boolean (optional)
1012 Toggle talkback. Default is True
1013 """
1014 # find all of the implied variables
1015 kwargs = collections.OrderedDict(kwargs)
1016 if implied:
1017 new = collections.OrderedDict()
1018 for k, v in kwargs.items():
1019 for n in self.variable_names:
1020 if n.startswith(k):
1021 new[n] = n.replace(k, v, 1)
1022 kwargs = new
1023 # ensure that items will remain unique
1024 changed = kwargs.keys()
1025 for k, v in kwargs.items():
1026 if v not in changed and v in self.keys():
1027 raise wt_exceptions.NameNotUniqueError(v)
1028 # compile references to items that are changing
1029 new = {}
1030 for k, v in kwargs.items():
1031 obj = self[k]
1032 index = self.variable_names.index(k)
1033 # rename
1034 new[v] = obj, index
1035 obj.instances.pop(obj.fullpath, None)
1036 obj.natural_name = str(v)
1037 # remove old references
1038 del self[k]
1039 # apply new references
1040 names = list(self.variable_names)
1041 for v, value in new.items():
1042 obj, index = value
1043 self[v] = obj
1044 names[index] = v
1045 self.variable_names = names
1046 # update axes
1047 units = self.units
1048 new = list(self.axis_expressions)
1049 for i, v in enumerate(kwargs.keys()):
1050 for j, n in enumerate(new):
1051 new[j] = n.replace(v, '{%i}' % i)
1052 for i, n in enumerate(new):
1053 new[i] = n.format(*kwargs.values())
1054 self.transform(*new)
1055 for a, u in zip(self._axes, units):
1056 a.convert(u)
1057 # finish
1058 if verbose:
1059 print('{0} variable(s) renamed:'.format(len(kwargs)))
1060 for k, v in kwargs.items():
1061 print(' {0} --> {1}'.format(k, v))
1062
1063 def share_nans(self):
1064 """Share not-a-numbers between all channels.
1065
1066 If any channel is nan at a given index, all channels will be nan
1067 at that index after this operation.
1068
1069 Uses the share_nans method found in wt.kit.
1070 """
1071 def f(_, s, channels):
1072 outs = wt_kit.share_nans(*[c[s] for c in channels])
1073 for c, o in zip(channels, outs):
1074 c[s] = o
1075
1076 self.channels[0].chunkwise(f, self.channels)
1077
1078 def smooth(self, factors, channel=None, verbose=True):
1079 """Smooth a channel using an n-dimenional `kaiser window`__.
1080
1081 Note, all arrays are loaded into memory.
1082
1083 __ https://en.wikipedia.org/wiki/Kaiser_window
1084
1085 Parameters
1086 ----------
1087 factors : int or list of int
1088 The smoothing factor. You may provide a list of smoothing factors
1089 for each axis.
1090 channel : int or str or None (optional)
1091 The channel to smooth. If None, all channels will be smoothed.
1092 Default is None.
1093 verbose : bool (optional)
1094 Toggle talkback. Default is True.
1095 """
1096 warnings.warn('smooth', category=wt_exceptions.EntireDatasetInMemoryWarning)
1097 # get factors -----------------------------------------------------------------------------
1098
1099 if isinstance(factors, list):
1100 pass
1101 else:
1102 dummy = np.zeros(len(self._axes))
1103 dummy[::] = factors
1104 factors = list(dummy)
1105 # get channels ----------------------------------------------------------------------------
1106 if channel is None:
1107 channels = self.channels
1108 else:
1109 if isinstance(channel, int):
1110 channel_index = channel
1111 elif isinstance(channel, str):
1112 channel_index = self.channel_names.index(channel)
1113 else:
1114 raise TypeError("channel: expected {int, str}, got %s" % type(channel))
1115 channels = [self.channels[channel_index]]
1116 # smooth ----------------------------------------------------------------------------------
1117 for channel in channels:
1118 values = channel[:]
1119 for axis_index in range(len(factors)):
1120 factor = factors[axis_index]
1121 # transpose so the axis of interest is last
1122 transpose_order = range(len(values.shape))
1123 # replace axis_index with zero
1124 transpose_order = [len(values.shape) - 1 if i ==
1125 axis_index else i for i in transpose_order]
1126 transpose_order[len(values.shape) - 1] = axis_index
1127 values = values.transpose(transpose_order)
1128 # get kaiser window
1129 beta = 5.0
1130 w = np.kaiser(2 * factor + 1, beta)
1131 # for all slices...
1132 for index in np.ndindex(values[..., 0].shape):
1133 current_slice = values[index]
1134 temp_slice = np.pad(current_slice, int(factor), mode=str('edge'))
1135 values[index] = np.convolve(temp_slice, w / w.sum(), mode=str('valid'))
1136 # transpose out
1137 values = values.transpose(transpose_order)
1138 # return array to channel object
1139 channel[:] = values
1140 if verbose:
1141 print('smoothed data')
1142
1143 def split(self, axis, positions, units='same', direction='below', parent=None, verbose=True):
1144 """
1145 Split the data object along a given axis, in units.
1146
1147 Parameters
1148 ----------
1149 axis : int or str
1150 The axis to split along.
1151 positions : number-type or 1D array-type
1152 The position(s) to split at, in units. If a non-exact position is
1153 given, the closest valid axis position will be used.
1154 units : str (optional)
1155 The units of the given positions. Default is same, which assumes
1156 input units are identical to axis units.
1157 direction : {'below', 'above'} (optional)
1158 Choose which group of data the points at positions remains with.
1159 This decision is based on the value, not the index.
1160 Consider points [0, 1, 2, 3, 4, 5] and split value [3]. If direction
1161 is above the returned objects are [0, 1, 2] and [3, 4, 5]. If
1162 direction is below the returned objects are [0, 1, 2, 3] and
1163 [4, 5]. Default is below.
1164 parent : WrightTools.Collection
1165 The parent collection in which to place the 'split' collection.
1166 verbose : bool (optional)
1167 Toggle talkback. Default is True.
1168
1169 Returns
1170 -------
1171 WrightTools.collection.Collection
1172 A Collection of data objects.
1173 The order of the objects is such that the axis points retain their original order.
1174
1175 See Also
1176 --------
1177 chop
1178 Divide the dataset into its lower-dimensionality components.
1179 collapse
1180 Collapse the dataset along one axis.
1181 """
1182 raise NotImplementedError
1183 # axis ------------------------------------------------------------------------------------
1184 if isinstance(axis, int):
1185 axis_index = axis
1186 elif isinstance(axis, str):
1187 axis_index = self.axis_names.index(axis)
1188 else:
1189 raise TypeError("axis: expected {int, str}, got %s" % type(axis))
1190 axis = self._axes[axis_index]
1191 # indicies --------------------------------------------------------------------------------
1192 # positions must be iterable and should be a numpy array
1193 if type(positions) in [int, float]:
1194 positions = [positions]
1195 positions = np.array(positions)
1196 # positions should be in the data units
1197 if units != 'same':
1198 positions = wt_units.converter(positions, units, axis.units)
1199 # get indicies of split
1200 indicies = []
1201 for position in positions:
1202 idx = np.argmin(abs(axis[:] - position))
1203 indicies.append(idx)
1204 indicies.sort()
1205 # set direction according to units
1206 flip = direction == 'above'
1207 if axis[-1] < axis[0]:
1208 flip = not flip
1209 if flip:
1210 indicies = [i - 1 for i in indicies]
1211 # process ---------------------------------------------------------------------------------
1212 outs = wt_collection.Collection(name='split', parent=parent,
1213 edit_local=parent is not None)
1214 start = 0
1215 stop = 0
1216 for i in range(len(indicies) + 1):
1217 # get start and stop
1218 start = stop # previous value
1219 if i == len(indicies):
1220 stop = len(axis)
1221 else:
1222 stop = indicies[i] + 1
1223 # new data object prepare
1224 new_name = "split%03d" % i
1225 if stop - start < 1:
1226 outs.create_data("")
1227 elif stop - start == 1:
1228 attrs = dict(self.attrs)
1229 attrs.pop('name', None)
1230 new_data = outs.create_data(new_name, **attrs)
1231 for ax in self._axes:
1232 if ax != axis:
1233 attrs = dict(ax.attrs)
1234 attrs.pop('name', None)
1235 attrs.pop('units', None)
1236 new_data.create_axis(ax.natural_name, ax[:], ax.units, **attrs)
1237 slc = [slice(None)] * len(self.shape)
1238 slc[axis_index] = start
1239 for ch in self.channels:
1240 attrs = dict(ch.attrs)
1241 attrs.pop('name', None)
1242 attrs.pop('units', None)
1243 new_data.create_channel(ch.natural_name, ch[:][slc], ch.units, **attrs)
1244 else:
1245 attrs = dict(self.attrs)
1246 attrs.pop('name', None)
1247 new_data = outs.create_data(new_name, **attrs)
1248 for ax in self._axes:
1249 if ax == axis:
1250 slc = slice(start, stop)
1251 else:
1252 slc = slice(None)
1253 attrs = dict(ax.attrs)
1254 attrs.pop('name', None)
1255 attrs.pop('units', None)
1256 new_data.create_axis(ax.natural_name, ax[slc], ax.units, **attrs)
1257 slc = [slice(None)] * len(self.shape)
1258 slc[axis_index] = slice(start, stop)
1259 for ch in self.channels:
1260 attrs = dict(ch.attrs)
1261 attrs.pop('name', None)
1262 attrs.pop('units', None)
1263 new_data.create_channel(ch.natural_name, ch[slc], ch.units, **attrs)
1264 # post process ----------------------------------------------------------------------------
1265 if verbose:
1266 print('split data into {0} pieces along {1}:'.format(len(indicies) + 1,
1267 axis.natural_name))
1268 for i in range(len(outs)):
1269 new_data = outs[i]
1270 if new_data is None:
1271 print(' {0} : None'.format(i))
1272 elif len(new_data.shape) < len(self.shape):
1273 print(' {0} : {1} {2}(constant)'.format(i, axis.natural_name, axis.units))
1274 else:
1275 new_axis = new_data.axes[axis_index]
1276 print(' {0} : {1} to {2} {3} (length {4})'.format(i, new_axis[0],
1277 new_axis[-1],
1278 new_axis.units,
1279 new_axis.size))
1280 return outs
1281
1282 def transform(self, *axes, verbose=True):
1283 """Transform the data.
1284
1285 Parameters
1286 ----------
1287 axes : strings
1288 Expressions for the new set of axes.
1289 verbose : boolean (optional)
1290 Toggle talkback. Default is True
1291 """
1292 # TODO: ensure that transform does not break data
1293 # create
1294 new = []
1295 current = {a.expression: a for a in self._axes}
1296 for expression in axes:
1297 axis = current.get(expression, Axis(self, expression))
1298 new.append(axis)
1299 self._axes = new
1300 # units
1301 for a in self._axes:
1302 if a.units is None:
1303 a.convert(a.variables[0].units)
1304 # finish
1305 self.flush()
1306 self._on_axes_updated()
1307
1308 def zoom(self, factor, order=1, verbose=True):
1309 """Zoom the data array using spline interpolation of the requested order.
1310
1311 The number of points along each axis is increased by factor.
1312 See `scipy ndimage`__ for more info.
1313
1314 __ http://docs.scipy.org/doc/scipy/reference/
1315 generated/scipy.ndimage.interpolation.zoom.html
1316
1317 Parameters
1318 ----------
1319 factor : float
1320 The number of points along each axis will increase by this factor.
1321 order : int (optional)
1322 The order of the spline used to interpolate onto new points.
1323 verbose : bool (optional)
1324 Toggle talkback. Default is True.
1325 """
1326 raise NotImplementedError
1327 import scipy.ndimage
1328 # axes
1329 for axis in self._axes:
1330 axis[:] = scipy.ndimage.interpolation.zoom(axis[:], factor, order=order)
1331 # channels
1332 for channel in self.channels:
1333 channel[:] = scipy.ndimage.interpolation.zoom(channel[:], factor, order=order)
1334 # return
1335 if verbose:
1336 print('data zoomed to new shape:', self.shape)
1337
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/WrightTools/data/_data.py b/WrightTools/data/_data.py
--- a/WrightTools/data/_data.py
+++ b/WrightTools/data/_data.py
@@ -940,6 +940,8 @@
for n in self.variable_names:
if n.startswith(variable):
removed.append(n)
+ else:
+ removed = [variable]
# check that axes will not be ruined
for n in removed:
for a in self._axes:
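For context, the added `else` branch matters because `remove_variable` builds its deletion list only inside the `if implied:` block; with `implied=False` the `removed` list stays empty, so the later loops delete nothing and the call silently no-ops. The standalone sketch below (hypothetical helper name `collect_removed`, not WrightTools API and not part of this dataset row) mirrors just that name-collection logic to show the behaviour before and after the patch.

```python
# Minimal sketch of the name-collection logic inside Data.remove_variable.
# `collect_removed` is a hypothetical stand-in, not WrightTools API.

def collect_removed(variable_names, variable, implied=True, patched=False):
    """Return the names that remove_variable would go on to delete."""
    removed = []
    if implied:
        # implied=True: gather every variable whose name starts with `variable`
        for n in variable_names:
            if n.startswith(variable):
                removed.append(n)
    elif patched:
        # the golden diff's else-branch: fall back to the single named variable
        removed = [variable]
    # unpatched and implied=False: `removed` stays [], so nothing gets deleted
    return removed


if __name__ == "__main__":
    names = ["w1", "w1_points", "w2", "d1"]
    print(collect_removed(names, "w1", implied=True))                 # ['w1', 'w1_points']
    print(collect_removed(names, "w1", implied=False))                # [] -> reported bug
    print(collect_removed(names, "w1", implied=False, patched=True))  # ['w1'] -> patched behaviour
```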
| {"golden_diff": "diff --git a/WrightTools/data/_data.py b/WrightTools/data/_data.py\n--- a/WrightTools/data/_data.py\n+++ b/WrightTools/data/_data.py\n@@ -940,6 +940,8 @@\n for n in self.variable_names:\n if n.startswith(variable):\n removed.append(n)\n+ else:\n+ removed = [variable]\n # check that axes will not be ruined\n for n in removed:\n for a in self._axes:\n", "issue": "remove_variable doesn't work if implied is False\nhttps://github.com/wright-group/WrightTools/blob/7803e4ae618b670c4f4d5811eddac9746fa045dd/WrightTools/data/_data.py#L938-L948\n", "before_files": [{"content": "\"\"\"Central data class and associated.\"\"\"\n\n\n# --- import --------------------------------------------------------------------------------------\n\n\nimport collections\nimport operator\nimport functools\nimport warnings\n\nimport numpy as np\n\nimport h5py\n\nimport scipy\nfrom scipy.interpolate import griddata, interp1d\n\nfrom .._group import Group\nfrom .. import collection as wt_collection\nfrom .. import exceptions as wt_exceptions\nfrom .. import kit as wt_kit\nfrom .. import units as wt_units\nfrom ._axis import Axis, identifier_to_operator\nfrom ._channel import Channel\nfrom ._variable import Variable\n\n\n# --- define --------------------------------------------------------------------------------------\n\n\n__all__ = ['Data']\n\n\n# --- class ---------------------------------------------------------------------------------------\n\n\nclass Data(Group):\n \"\"\"Multidimensional dataset.\"\"\"\n\n class_name = 'Data'\n\n def __init__(self, *args, **kwargs):\n self._axes = []\n Group.__init__(self, *args, **kwargs)\n # populate axes from attrs string\n for identifier in self.attrs.get('axes', []):\n identifier = identifier.decode()\n expression, units = identifier.split('{')\n units = units.replace('}', '')\n for i in identifier_to_operator.keys():\n expression = expression.replace(i, identifier_to_operator[i])\n expression = expression.replace(' ', '') # remove all whitespace\n axis = Axis(self, expression, units.strip())\n self._axes.append(axis)\n self._current_axis_identities_in_natural_namespace = []\n self._on_axes_updated()\n # the following are populated if not already recorded\n self.channel_names\n self.source\n self.variable_names\n\n def __repr__(self):\n return '<WrightTools.Data \\'{0}\\' {1} at {2}>'.format(\n self.natural_name, str(self.axis_names), '::'.join([self.filepath, self.name]))\n\n @property\n def axes(self):\n return tuple(self._axes)\n\n @property\n def axis_expressions(self):\n \"\"\"Axis expressions.\"\"\"\n return tuple(a.expression for a in self._axes)\n\n @property\n def axis_names(self):\n \"\"\"Axis names.\"\"\"\n return tuple(a.natural_name for a in self._axes)\n\n @property\n def channel_names(self):\n \"\"\"Channel names.\"\"\"\n if 'channel_names' not in self.attrs.keys():\n self.attrs['channel_names'] = np.array([], dtype='S')\n return tuple(s.decode() for s in self.attrs['channel_names'])\n\n @channel_names.setter\n def channel_names(self, value):\n \"\"\"Set channel names.\"\"\"\n self.attrs['channel_names'] = np.array(value, dtype='S')\n\n @property\n def channels(self):\n \"\"\"Channels.\"\"\"\n return tuple(self[n] for n in self.channel_names)\n\n @property\n def datasets(self):\n \"\"\"Datasets.\"\"\"\n return tuple(v for _, v in self.items() if isinstance(v, h5py.Dataset))\n\n @property\n def kind(self):\n \"\"\"Kind.\"\"\"\n if 'kind' not in self.attrs.keys():\n self.attrs['kind'] = 'None'\n value = self.attrs['kind']\n return value if not value 
== 'None' else None\n\n @property\n def ndim(self):\n \"\"\"Get number of dimensions.\"\"\"\n try:\n assert self._ndim is not None\n except (AssertionError, AttributeError):\n if len(self.variables) == 0:\n self._ndim = 0\n else:\n self._ndim = self.variables[0].ndim\n finally:\n return self._ndim\n\n @property\n def shape(self):\n \"\"\"Shape.\"\"\"\n try:\n assert self._shape is not None\n except (AssertionError, AttributeError):\n self._shape = wt_kit.joint_shape(*self.variables)\n finally:\n return self._shape\n\n @property\n def size(self):\n \"\"\"Size.\"\"\"\n return functools.reduce(operator.mul, self.shape)\n\n @property\n def source(self):\n \"\"\"Source.\"\"\"\n if 'source' not in self.attrs.keys():\n self.attrs['source'] = 'None'\n value = self.attrs['source']\n return value if not value == 'None' else None\n\n @property\n def units(self):\n \"\"\"All axis units.\"\"\"\n return tuple(a.units for a in self._axes)\n\n @property\n def variable_names(self):\n \"\"\"Variable names.\"\"\"\n if 'variable_names' not in self.attrs.keys():\n self.attrs['variable_names'] = np.array([], dtype='S')\n return tuple(s.decode() for s in self.attrs['variable_names'])\n\n @variable_names.setter\n def variable_names(self, value):\n \"\"\"Set variable names.\"\"\"\n self.attrs['variable_names'] = np.array(value, dtype='S')\n\n @property\n def variables(self):\n \"\"\"Variables.\"\"\"\n try:\n assert self._variables is not None\n except (AssertionError, AttributeError):\n self._variables = [self[n] for n in self.variable_names]\n finally:\n return self._variables\n\n @property\n def _leaf(self):\n return '{0} {1}'.format(self.natural_name, self.shape)\n\n def _on_axes_updated(self):\n \"\"\"Method to run when axes are changed in any way.\n\n Propagates updated axes properly.\n \"\"\"\n # update attrs\n self.attrs['axes'] = [a.identity.encode() for a in self._axes]\n # remove old attributes\n while len(self._current_axis_identities_in_natural_namespace) > 0:\n key = self._current_axis_identities_in_natural_namespace.pop(0)\n self.__dict__.pop(key)\n # populate new attributes\n for a in self._axes:\n key = a.natural_name\n setattr(self, key, a)\n self._current_axis_identities_in_natural_namespace.append(key)\n\n def _print_branch(self, prefix, depth, verbose):\n\n def print_leaves(prefix, lis, vline=True):\n for i, item in enumerate(lis):\n if vline:\n a = '\u2502 '\n else:\n a = ' '\n if i + 1 == len(lis):\n b = '\u2514\u2500\u2500 '\n else:\n b = '\u251c\u2500\u2500 '\n s = prefix + a + b + '{0}: {1}'.format(i, item._leaf)\n print(s)\n\n if verbose:\n # axes\n print(prefix + '\u251c\u2500\u2500 axes')\n print_leaves(prefix, self.axes)\n # variables\n print(prefix + '\u251c\u2500\u2500 variables')\n print_leaves(prefix, self.variables)\n # channels\n print(prefix + '\u2514\u2500\u2500 channels')\n print_leaves(prefix, self.channels, vline=False)\n else:\n # axes\n s = 'axes: '\n s += ', '.join(['{0} ({1})'.format(a.expression, a.units) for a in self.axes])\n print(prefix + '\u251c\u2500\u2500 ' + s)\n # channels\n s = 'channels: '\n s += ', '.join(self.channel_names)\n print(prefix + '\u2514\u2500\u2500 ' + s)\n\n def bring_to_front(self, channel):\n \"\"\"Bring a specific channel to the zero-indexed position in channels.\n\n All other channels get pushed back but remain in order.\n\n Parameters\n ----------\n channel : int or str\n Channel index or name.\n \"\"\"\n channel_index = wt_kit.get_index(self.channel_names, channel)\n new = list(self.channel_names)\n new.insert(0, new.pop(channel_index))\n 
self.channel_names = new\n\n def chop(self, *args, at={}, parent=None, verbose=True):\n \"\"\"Divide the dataset into its lower-dimensionality components.\n\n Parameters\n ----------\n axis : str or int (args)\n Axes of the returned data objects. Strings refer to the names of\n axes in this object, integers refer to their index. Provide multiple\n axes to return multidimensional data objects.\n at : dict (optional)\n Choice of position along an axis. Keys are axis names, values are lists\n ``[position, input units]``. If exact position does not exist,\n the closest valid position is used.\n parent : WrightTools Collection instance (optional)\n Collection to place the new \"chop\" collection within. Default is\n None (new parent).\n verbose : bool (optional)\n Toggle talkback. Default is True.\n\n Returns\n -------\n WrightTools Collection\n Collection of chopped data objects.\n\n Examples\n --------\n >>> data.axis_names\n ['d2', 'w1', 'w2']\n\n Get all w1 wigners.\n\n >>> datas = data.chop('d2', 'w1')\n >>> len(datas)\n 51\n\n Get 2D frequency at d2=0 fs.\n\n >>> datas = data.chop('w1', 'w2', at={'d2': [0, 'fs']})\n >>> len(datas)\n 0\n >>> datas[0].axis_names\n ['w1', 'w2']\n >>> datas[0].d2[:]\n 0.\n\n See Also\n --------\n collapse\n Collapse the dataset along one axis.\n split\n Split the dataset while maintaining its dimensionality.\n \"\"\"\n # parse args\n args = list(args)\n for i, arg in enumerate(args):\n if isinstance(arg, int):\n args[i] = self._axes[arg].expression\n # get output collection\n out = wt_collection.Collection(name='chop', parent=parent)\n # get output shape\n kept = args + list(at.keys())\n kept_axes = [self._axes[self.axis_expressions.index(a)] for a in kept]\n removed_axes = [a for a in self._axes if a not in kept_axes]\n removed_shape = wt_kit.joint_shape(*removed_axes)\n if removed_shape == ():\n removed_shape = (1,) * self.ndim\n # iterate\n i = 0\n for idx in np.ndindex(removed_shape):\n idx = np.array(idx, dtype=object)\n idx[np.array(removed_shape) == 1] = slice(None)\n for axis, point in at.items():\n point, units = point\n destination_units = self._axes[self.axis_names.index(axis)].units\n point = wt_units.converter(point, units, destination_units)\n axis_index = self.axis_names.index(axis)\n axis = self._axes[axis_index]\n idx[axis_index] = np.argmin(np.abs(axis[tuple(idx)] - point))\n data = out.create_data(name='chop%03i' % i)\n for v in self.variables:\n kwargs = {}\n kwargs['name'] = v.natural_name\n kwargs['values'] = v[idx]\n kwargs['units'] = v.units\n kwargs['label'] = v.label\n kwargs.update(v.attrs)\n data.create_variable(**kwargs)\n for c in self.channels:\n kwargs = {}\n kwargs['name'] = c.natural_name\n kwargs['values'] = c[idx]\n kwargs['units'] = c.units\n kwargs['label'] = c.label\n kwargs['signed'] = c.signed\n kwargs.update(c.attrs)\n data.create_channel(**kwargs)\n new_axes = [a.expression for a in kept_axes if a.expression not in at.keys()]\n new_axis_units = [a.units for a in kept_axes if a.expression not in at.keys()]\n data.transform(*new_axes)\n for j, units in enumerate(new_axis_units):\n data.axes[j].convert(units)\n i += 1\n out.flush()\n # return\n if verbose:\n es = [a.expression for a in kept_axes]\n print('chopped data into %d piece(s)' % len(out), 'in', es)\n return out\n\n def collapse(self, axis, method='integrate'):\n \"\"\"\n Collapse the dataset along one axis.\n\n Parameters\n ----------\n axis : int or str\n The axis to collapse along.\n method : {'integrate', 'average', 'sum', 'max', 'min'} (optional)\n The 
method of collapsing the given axis. Method may also be list\n of methods corresponding to the channels of the object. Default\n is integrate. All methods but integrate disregard NANs.\n\n See Also\n --------\n chop\n Divide the dataset into its lower-dimensionality components.\n split\n Split the dataset while maintaining its dimensionality.\n \"\"\"\n raise NotImplementedError\n # get axis index --------------------------------------------------------------------------\n if isinstance(axis, int):\n axis_index = axis\n elif isinstance(axis, str):\n axis_index = self.axis_names.index(axis)\n else:\n raise TypeError(\"axis: expected {int, str}, got %s\" % type(axis))\n # methods ---------------------------------------------------------------------------------\n if isinstance(method, list):\n if len(method) == len(self.channels):\n methods = method\n else:\n print('method argument incompatible in data.collapse')\n elif isinstance(method, str):\n methods = [method for _ in self.channels]\n # collapse --------------------------------------------------------------------------------\n for method, channel in zip(methods, self.channels):\n if method in ['int', 'integrate']:\n channel[:] = np.trapz(\n y=channel[:], x=self._axes[axis_index][:], axis=axis_index)\n elif method == 'sum':\n channel[:] = np.nansum(channel[:], axis=axis_index)\n elif method in ['max', 'maximum']:\n channel[:] = np.nanmax(channel[:], axis=axis_index)\n elif method in ['min', 'minimum']:\n channel[:] = np.nanmin(channel[:], axis=axis_index)\n elif method in ['ave', 'average', 'mean']:\n channel[:] = np.nanmean(channel[:], axis=axis_index)\n else:\n print('method not recognized in data.collapse')\n # cleanup ---------------------------------------------------------------------------------\n self._axes.pop(axis_index)\n\n def convert(self, destination_units, *, convert_variables=False, verbose=True):\n \"\"\"Convert all compatable axes to given units.\n\n Parameters\n ----------\n destination_units : str\n Destination units.\n convert_variables : boolean (optional)\n Toggle conversion of stored arrays. Default is False\n verbose : bool (optional)\n Toggle talkback. Default is True.\n\n See Also\n --------\n Axis.convert\n Convert a single axis object to compatable units. Call on an\n axis object in data.axes.\n \"\"\"\n # get kind of units\n units_kind = wt_units.kind(destination_units)\n # apply to all compatible axes\n for axis in self.axes:\n if axis.units_kind == units_kind:\n axis.convert(destination_units, convert_variables=convert_variables)\n if verbose:\n print('axis', axis.expression, 'converted')\n if convert_variables:\n for var in self.variables:\n if wt_units.kind(var.units) == units_kind:\n var.convert(destination_units)\n\n if verbose:\n print('variable', var.natural_name, 'converted')\n self._on_axes_updated()\n\n def create_channel(self, name, values=None, shape=None, units=None, **kwargs):\n \"\"\"Append a new channel.\n\n Parameters\n ----------\n name : string\n Unique name for this channel.\n values : array (optional)\n Array. If None, an empty array equaling the data shape is\n created. Default is None.\n shape : tuple of int\n Shape to use. must broadcast with the full shape.\n Only used if `values` is None.\n Default is the full shape of self.\n units : string (optional)\n Channel units. 
Default is None.\n kwargs : dict\n Additional keyword arguments passed to Channel instantiation.\n\n Returns\n -------\n Channel\n Created channel.\n \"\"\"\n require_kwargs = {}\n if values is None:\n if shape is None:\n require_kwargs['shape'] = self.shape\n else:\n require_kwargs['shape'] = shape\n require_kwargs['dtype'] = np.float64\n else:\n require_kwargs['data'] = values\n require_kwargs['shape'] = values.shape\n require_kwargs['dtype'] = values.dtype\n # create dataset\n dataset_id = self.require_dataset(name=name, chunks=True, **require_kwargs).id\n channel = Channel(self, dataset_id, units=units, **kwargs)\n # finish\n self.attrs['channel_names'] = np.append(self.attrs['channel_names'], name.encode())\n return channel\n\n def create_variable(self, name, values=None, shape=None, units=None, **kwargs):\n \"\"\"Add new child variable.\n\n Parameters\n ----------\n name : string\n Unique identifier.\n values : array-like (optional)\n Array to populate variable with. If None, an variable will be filled with NaN.\n Default is None.\n shape : tuple of int\n Shape to use. must broadcast with the full shape.\n Only used if `values` is None.\n Default is the full shape of self.\n units : string (optional)\n Variable units. Default is None.\n kwargs\n Additional kwargs to variable instantiation.\n\n Returns\n -------\n WrightTools Variable\n New child variable.\n \"\"\"\n if values is None:\n if shape is None:\n shape = self.shape\n dtype = np.float64\n else:\n shape = values.shape\n dtype = values.dtype\n # create dataset\n id = self.require_dataset(name=name, data=values, shape=shape, dtype=dtype).id\n variable = Variable(self, id, units=units, **kwargs)\n # finish\n self.variables.append(variable)\n self.attrs['variable_names'] = np.append(self.attrs['variable_names'], name.encode())\n return variable\n\n def flush(self):\n super().flush()\n\n def get_nadir(self, channel=0):\n \"\"\"Get the coordinates in units of the minimum in a channel.\n\n Parameters\n ----------\n channel : int or str (optional)\n Channel. Default is 0.\n\n Returns\n -------\n generator of numbers\n Coordinates in units for each axis.\n \"\"\"\n # get channel\n if isinstance(channel, int):\n channel_index = channel\n elif isinstance(channel, str):\n channel_index = self.channel_names.index(channel)\n else:\n raise TypeError(\"channel: expected {int, str}, got %s\" % type(channel))\n channel = self.channels[channel_index]\n # get indicies\n idx = channel.argmin()\n # finish\n return tuple(a[idx] for a in self._axes)\n\n def get_zenith(self, channel=0):\n \"\"\"Get the coordinates in units of the maximum in a channel.\n\n Parameters\n ----------\n channel : int or str (optional)\n Channel. Default is 0.\n\n Returns\n -------\n generator of numbers\n Coordinates in units for each axis.\n \"\"\"\n # get channel\n if isinstance(channel, int):\n channel_index = channel\n elif isinstance(channel, str):\n channel_index = self.channel_names.index(channel)\n else:\n raise TypeError(\"channel: expected {int, str}, got %s\" % type(channel))\n channel = self.channels[channel_index]\n # get indicies\n idx = channel.argmax()\n # finish\n return tuple(a[idx] for a in self._axes)\n\n def heal(self, channel=0, method='linear', fill_value=np.nan,\n verbose=True):\n \"\"\"\n Remove nans from channel using interpolation.\n\n Parameters\n ----------\n channel : int or str (optional)\n Channel to heal. Default is 0.\n method : {'linear', 'nearest', 'cubic'} (optional)\n The interpolation method. 
Note that cubic interpolation is only\n possible for 1D and 2D data. See `griddata`__ for more information.\n Default is linear.\n fill_value : number-like (optional)\n The value written to pixels that cannot be filled by interpolation.\n Default is nan.\n verbose : bool (optional)\n Toggle talkback. Default is True.\n\n\n __ http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.griddata.html\n\n\n .. note:: Healing may take several minutes for large datasets.\n Interpolation time goes as nearest, linear, then cubic.\n\n\n \"\"\"\n warnings.warn('heal', category=wt_exceptions.EntireDatasetInMemoryWarning)\n timer = wt_kit.Timer(verbose=False)\n with timer:\n # channel\n if isinstance(channel, int):\n channel_index = channel\n elif isinstance(channel, str):\n channel_index = self.channel_names.index(channel)\n else:\n raise TypeError(\"channel: expected {int, str}, got %s\" % type(channel))\n channel = self.channels[channel_index]\n values = self.channels[channel_index][:]\n points = [axis[:] for axis in self._axes]\n xi = tuple(np.meshgrid(*points, indexing='ij'))\n # 'undo' gridding\n arr = np.zeros((len(self._axes) + 1, values.size))\n for i in range(len(self._axes)):\n arr[i] = xi[i].flatten()\n arr[-1] = values.flatten()\n # remove nans\n arr = arr[:, ~np.isnan(arr).any(axis=0)]\n # grid data wants tuples\n tup = tuple([arr[i] for i in range(len(arr) - 1)])\n # grid data\n out = griddata(tup, arr[-1], xi, method=method, fill_value=fill_value)\n self.channels[channel_index][:] = out\n # print\n if verbose:\n print('channel {0} healed in {1} seconds'.format(\n channel.name, np.around(timer.interval, decimals=3)))\n\n def level(self, channel, axis, npts, *, verbose=True):\n \"\"\"Subtract the average value of npts at the edge of a given axis.\n\n Parameters\n ----------\n channel : int or str\n Channel to level.\n axis : int\n Axis to level along.\n npts : int\n Number of points to average for each slice. Positive numbers\n take points at leading indicies and negative numbers take points\n at trailing indicies.\n verbose : bool (optional)\n Toggle talkback. Default is True.\n \"\"\"\n warnings.warn('level', category=wt_exceptions.EntireDatasetInMemoryWarning)\n channel_index = wt_kit.get_index(self.channel_names, channel)\n channel = self.channels[channel_index]\n # verify npts not zero\n npts = int(npts)\n if npts == 0:\n raise wt_exceptions.ValueError('npts must not be zero')\n # get subtrahend\n ss = [slice(None)] * self.ndim\n if npts > 0:\n ss[axis] = slice(0, npts, None)\n else:\n ss[axis] = slice(npts, None, None)\n subtrahend = np.nanmean(channel[ss], axis=axis)\n if self.ndim > 1:\n subtrahend = np.expand_dims(subtrahend, axis=axis)\n # level\n channel[:] = channel[:] - subtrahend # verbose\n # finish\n channel._null = 0\n if verbose:\n print('channel {0} leveled along axis {1}'.format(channel.natural_name, axis))\n\n def map_variable(self, variable, points, input_units='same', *, name=None, parent=None,\n verbose=True):\n \"\"\"Map points of an axis to new points using linear interpolation.\n\n Out-of-bounds points are written nan.\n\n Parameters\n ----------\n variable : string\n The variable to map onto.\n points : array-like or int\n If array, the new points. If int, new points will have the same\n limits, with int defining the number of evenly spaced points\n between.\n input_units : str (optional)\n The units of the new points. 
Default is same, which assumes\n the new points have the same units as the axis.\n name : string (optional)\n The name of the new data object. If None, generated from\n natural_name. Default is None.\n parent : WrightTools.Collection (optional)\n Parent of new data object. If None, data is made at root of a\n new temporary file.\n verbose : bool (optional)\n Toggle talkback. Default is True.\n\n Returns\n -------\n WrightTools.Data\n New data object.\n \"\"\"\n # get variable index\n variable_index = wt_kit.get_index(self.variable_names, variable)\n variable = self.variables[variable_index]\n # get points\n if isinstance(points, int):\n points = np.linspace(variable.min(), variable.max(), points)\n points = np.array(points)\n # points dimensionality\n if points.ndim < variable.ndim:\n for i, d in enumerate(variable.shape):\n if d == 1:\n points = np.expand_dims(points, axis=i)\n # convert points\n if input_units == 'same':\n pass\n else:\n points = wt_units.converter(points, input_units, variable.units)\n # construct new data object\n special = ['name', 'axes', 'channel_names', 'variable_names']\n kwargs = {k: v for k, v in self.attrs.items() if k not in special}\n if name is None:\n name = '{0}_{1}_mapped'.format(self.natural_name, variable.natural_name)\n kwargs['name'] = name\n kwargs['parent'] = parent\n out = Data(**kwargs)\n # mapped variable\n values = points\n out.create_variable(values=values, **variable.attrs)\n # orthogonal variables\n for v in self.variables:\n if wt_kit.orthogonal(v.shape, variable.shape):\n out.create_variable(values=v[:], **v.attrs)\n out.transform(*self.axis_expressions)\n # interpolate\n if self.ndim == 1:\n\n def interpolate(dataset, points):\n function = scipy.interpolate.interp1d(variable[:], dataset[:], bounds_error=False)\n return function(points)\n\n else:\n pts = np.array([a.full.flatten() for a in self.axes]).T\n out_pts = np.array([a.full.flatten() for a in out.axes]).T\n\n def interpolate(dataset, points):\n values = dataset.full.flatten()\n function = scipy.interpolate.LinearNDInterpolator(pts, values, rescale=True)\n new = function(out_pts)\n new.shape = out.shape\n return new\n\n for v in self.variables:\n if v.natural_name not in out.variable_names:\n out.create_variable(values=interpolate(v, points), **v.attrs)\n out.variable_names = self.variable_names # enforce old order\n out._variables = None # force regeneration of variables @property\n for channel in self.channels:\n out.create_channel(values=interpolate(channel, points), **channel.attrs)\n # finish\n if verbose:\n print('data mapped from {0} to {1}'.format(self.shape, out.shape))\n return out\n\n def offset(self, points, offsets, along, offset_axis,\n units='same', offset_units='same', mode='valid',\n method='linear', verbose=True):\n \"\"\"Offset one axis based on another axis' values.\n\n Useful for correcting instrumental artifacts such as zerotune.\n\n Parameters\n ----------\n points : 1D array-like\n Points.\n offsets : 1D array-like\n Offsets.\n along : str or int\n Axis that points array lies along.\n offset_axis : str or int\n Axis to offset using offsets.\n units : str (optional)\n Units of points array.\n offset_units : str (optional)\n Units of offsets aray.\n mode : {'valid', 'full', 'old'} (optional)\n Define how far the new axis will extend. Points outside of valid\n interpolation range will be written nan.\n method : {'linear', 'nearest', 'cubic'} (optional)\n The interpolation method. Note that cubic interpolation is only\n possible for 1D and 2D data. 
See `griddata`__ for more information.\n Default is linear.\n verbose : bool (optional)\n Toggle talkback. Default is True.\n\n\n __ http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.griddata.html\n\n >>> points # an array of w1 points\n >>> offsets # an array of d1 corrections\n >>> data.offset(points, offsets, 'w1', 'd1')\n\n \"\"\"\n raise NotImplementedError\n # axis ------------------------------------------------------------------------------------\n if isinstance(along, int):\n axis_index = along\n elif isinstance(along, str):\n axis_index = self.axis_names.index(along)\n else:\n raise TypeError(\"along: expected {int, str}, got %s\" % type(along))\n axis = self._axes[axis_index]\n # values & points -------------------------------------------------------------------------\n # get values, points, units\n if units == 'same':\n input_units = axis.units\n else:\n input_units = units\n # check offsets is 1D or 0D\n if len(offsets.shape) == 1:\n pass\n else:\n raise RuntimeError('values must be 1D or 0D in offset!')\n # check if units is compatible, convert\n dictionary = getattr(wt_units, axis.units_kind)\n if input_units in dictionary.keys():\n pass\n else:\n raise RuntimeError('units incompatible in offset!')\n points = wt_units.converter(points, input_units, axis.units)\n # create correction array\n function = interp1d(points, offsets, bounds_error=False)\n corrections = function(axis[:])\n # remove nans\n finite_indicies = np.where(np.isfinite(corrections))[0]\n left_pad_width = finite_indicies[0]\n right_pad_width = len(corrections) - finite_indicies[-1] - 1\n corrections = np.pad(corrections[np.isfinite(corrections)],\n (int(left_pad_width), int(right_pad_width)), mode='edge')\n # do correction ---------------------------------------------------------------------------\n # transpose so axis is last\n transpose_order = np.arange(len(self._axes))\n transpose_order[axis_index] = len(self._axes) - 1\n transpose_order[-1] = axis_index\n self.transpose(transpose_order, verbose=False)\n # get offset axis index\n if isinstance(offset_axis, int):\n offset_axis_index = offset_axis\n elif isinstance(offset_axis, str):\n offset_axis_index = self.axis_names.index(offset_axis)\n else:\n raise TypeError(\"offset_axis: expected {int, str}, got %s\" % type(offset_axis))\n # new points\n new_points = [a[:] for a in self._axes]\n old_offset_axis_points = self._axes[offset_axis_index][:]\n spacing = abs((old_offset_axis_points.max() - old_offset_axis_points.min()) /\n float(len(old_offset_axis_points)))\n if mode == 'old':\n new_offset_axis_points = old_offset_axis_points\n elif mode == 'valid':\n _max = old_offset_axis_points.max() + corrections.min()\n _min = old_offset_axis_points.min() + corrections.max()\n n = int(abs(np.ceil((_max - _min) / spacing)))\n new_offset_axis_points = np.linspace(_min, _max, n)\n elif mode == 'full':\n _max = old_offset_axis_points.max() + corrections.max()\n _min = old_offset_axis_points.min() + corrections.min()\n n = np.ceil((_max - _min) / spacing)\n new_offset_axis_points = np.linspace(_min, _max, n)\n new_points[offset_axis_index] = new_offset_axis_points\n new_xi = tuple(np.meshgrid(*new_points, indexing='ij'))\n xi = tuple(np.meshgrid(*[a[:] for a in self._axes], indexing='ij'))\n for channel in self.channels:\n # 'undo' gridding\n arr = np.zeros((len(self._axes) + 1, channel[:].size))\n for i in range(len(self._axes)):\n arr[i] = xi[i].flatten()\n arr[-1] = channel[:].flatten()\n # do corrections\n corrections = list(corrections)\n 
corrections = corrections * int((len(arr[0]) / len(corrections)))\n arr[offset_axis_index] += corrections\n # grid data\n tup = tuple([arr[i] for i in range(len(arr) - 1)])\n # note that rescale is crucial in this operation\n out = griddata(tup, arr[-1], new_xi, method=method,\n fill_value=np.nan, rescale=True)\n channel[:] = out\n self._axes[offset_axis_index][:] = new_offset_axis_points\n # transpose out\n self.transpose(transpose_order, verbose=False)\n\n def print_tree(self, *, verbose=True):\n \"\"\"Print a ascii-formatted tree representation of the data contents.\"\"\"\n print('{0} ({1})'.format(self.natural_name, self.filepath))\n self._print_branch('', depth=0, verbose=verbose)\n\n def remove_channel(self, channel, *, verbose=True):\n \"\"\"Remove channel from data.\n\n Parameters\n ----------\n channel : int or str\n Channel index or name to remove.\n verbose : boolean (optional)\n Toggle talkback. Default is True.\n \"\"\"\n channel_index = wt_kit.get_index(self.channel_names, channel)\n new = list(self.channel_names)\n name = new.pop(channel_index)\n del self[name]\n self.channel_names = new\n if verbose:\n print('channel {0} removed'.format(name))\n\n def remove_variable(self, variable, *, implied=True, verbose=True):\n \"\"\"Remove variable from data.\n\n Parameters\n ----------\n variable : int or str\n Variable index or name to remove.\n implied : boolean (optional)\n Toggle deletion of other variables that start with the same\n name. Default is True.\n verbose : boolean (optional)\n Toggle talkback. Default is True.\n \"\"\"\n if isinstance(variable, int):\n variable = self.variable_names[variable]\n # find all of the implied variables\n removed = []\n if implied:\n for n in self.variable_names:\n if n.startswith(variable):\n removed.append(n)\n # check that axes will not be ruined\n for n in removed:\n for a in self._axes:\n if n in a.expression:\n message = '{0} is contained in axis {1}'.format(n, a.expression)\n raise RuntimeError(message)\n # do removal\n for n in removed:\n variable_index = wt_kit.get_index(self.variable_names, n)\n new = list(self.variable_names)\n name = new.pop(variable_index)\n del self[name]\n self.variable_names = new\n # finish\n if verbose:\n print('{0} variable(s) removed:'.format(len(removed)))\n for n in removed:\n print(' {0}'.format(n))\n\n def rename_channels(self, *, verbose=True, **kwargs):\n \"\"\"Rename a set of channels.\n\n Parameters\n ----------\n kwargs\n Keyword arguments of the form current:'new'.\n verbose : boolean (optional)\n Toggle talkback. 
Default is True\n \"\"\"\n # ensure that items will remain unique\n changed = kwargs.keys()\n for k, v in kwargs.items():\n if v not in changed and v in self.keys():\n raise wt_exceptions.NameNotUniqueError(v)\n # compile references to items that are changing\n new = {}\n for k, v in kwargs.items():\n obj = self[k]\n index = self.channel_names.index(k)\n # rename\n new[v] = obj, index\n obj.instances.pop(obj.fullpath, None)\n obj.natural_name = str(v)\n # remove old references\n del self[k]\n # apply new references\n names = list(self.channel_names)\n for v, value in new.items():\n obj, index = value\n self[v] = obj\n names[index] = v\n self.channel_names = names\n # finish\n if verbose:\n print('{0} channel(s) renamed:'.format(len(kwargs)))\n for k, v in kwargs.items():\n print(' {0} --> {1}'.format(k, v))\n\n def rename_variables(self, *, implied=True, verbose=True, **kwargs):\n \"\"\"Rename a set of variables.\n\n Parameters\n ----------\n kwargs\n Keyword arguments of the form current:'new'.\n implied : boolean (optional)\n Toggle inclusion of other variables that start with the same\n name. Default is True.\n verbose : boolean (optional)\n Toggle talkback. Default is True\n \"\"\"\n # find all of the implied variables\n kwargs = collections.OrderedDict(kwargs)\n if implied:\n new = collections.OrderedDict()\n for k, v in kwargs.items():\n for n in self.variable_names:\n if n.startswith(k):\n new[n] = n.replace(k, v, 1)\n kwargs = new\n # ensure that items will remain unique\n changed = kwargs.keys()\n for k, v in kwargs.items():\n if v not in changed and v in self.keys():\n raise wt_exceptions.NameNotUniqueError(v)\n # compile references to items that are changing\n new = {}\n for k, v in kwargs.items():\n obj = self[k]\n index = self.variable_names.index(k)\n # rename\n new[v] = obj, index\n obj.instances.pop(obj.fullpath, None)\n obj.natural_name = str(v)\n # remove old references\n del self[k]\n # apply new references\n names = list(self.variable_names)\n for v, value in new.items():\n obj, index = value\n self[v] = obj\n names[index] = v\n self.variable_names = names\n # update axes\n units = self.units\n new = list(self.axis_expressions)\n for i, v in enumerate(kwargs.keys()):\n for j, n in enumerate(new):\n new[j] = n.replace(v, '{%i}' % i)\n for i, n in enumerate(new):\n new[i] = n.format(*kwargs.values())\n self.transform(*new)\n for a, u in zip(self._axes, units):\n a.convert(u)\n # finish\n if verbose:\n print('{0} variable(s) renamed:'.format(len(kwargs)))\n for k, v in kwargs.items():\n print(' {0} --> {1}'.format(k, v))\n\n def share_nans(self):\n \"\"\"Share not-a-numbers between all channels.\n\n If any channel is nan at a given index, all channels will be nan\n at that index after this operation.\n\n Uses the share_nans method found in wt.kit.\n \"\"\"\n def f(_, s, channels):\n outs = wt_kit.share_nans(*[c[s] for c in channels])\n for c, o in zip(channels, outs):\n c[s] = o\n\n self.channels[0].chunkwise(f, self.channels)\n\n def smooth(self, factors, channel=None, verbose=True):\n \"\"\"Smooth a channel using an n-dimenional `kaiser window`__.\n\n Note, all arrays are loaded into memory.\n\n __ https://en.wikipedia.org/wiki/Kaiser_window\n\n Parameters\n ----------\n factors : int or list of int\n The smoothing factor. You may provide a list of smoothing factors\n for each axis.\n channel : int or str or None (optional)\n The channel to smooth. If None, all channels will be smoothed.\n Default is None.\n verbose : bool (optional)\n Toggle talkback. 
Default is True.\n \"\"\"\n warnings.warn('smooth', category=wt_exceptions.EntireDatasetInMemoryWarning)\n # get factors -----------------------------------------------------------------------------\n\n if isinstance(factors, list):\n pass\n else:\n dummy = np.zeros(len(self._axes))\n dummy[::] = factors\n factors = list(dummy)\n # get channels ----------------------------------------------------------------------------\n if channel is None:\n channels = self.channels\n else:\n if isinstance(channel, int):\n channel_index = channel\n elif isinstance(channel, str):\n channel_index = self.channel_names.index(channel)\n else:\n raise TypeError(\"channel: expected {int, str}, got %s\" % type(channel))\n channels = [self.channels[channel_index]]\n # smooth ----------------------------------------------------------------------------------\n for channel in channels:\n values = channel[:]\n for axis_index in range(len(factors)):\n factor = factors[axis_index]\n # transpose so the axis of interest is last\n transpose_order = range(len(values.shape))\n # replace axis_index with zero\n transpose_order = [len(values.shape) - 1 if i ==\n axis_index else i for i in transpose_order]\n transpose_order[len(values.shape) - 1] = axis_index\n values = values.transpose(transpose_order)\n # get kaiser window\n beta = 5.0\n w = np.kaiser(2 * factor + 1, beta)\n # for all slices...\n for index in np.ndindex(values[..., 0].shape):\n current_slice = values[index]\n temp_slice = np.pad(current_slice, int(factor), mode=str('edge'))\n values[index] = np.convolve(temp_slice, w / w.sum(), mode=str('valid'))\n # transpose out\n values = values.transpose(transpose_order)\n # return array to channel object\n channel[:] = values\n if verbose:\n print('smoothed data')\n\n def split(self, axis, positions, units='same', direction='below', parent=None, verbose=True):\n \"\"\"\n Split the data object along a given axis, in units.\n\n Parameters\n ----------\n axis : int or str\n The axis to split along.\n positions : number-type or 1D array-type\n The position(s) to split at, in units. If a non-exact position is\n given, the closest valid axis position will be used.\n units : str (optional)\n The units of the given positions. Default is same, which assumes\n input units are identical to axis units.\n direction : {'below', 'above'} (optional)\n Choose which group of data the points at positions remains with.\n This decision is based on the value, not the index.\n Consider points [0, 1, 2, 3, 4, 5] and split value [3]. If direction\n is above the returned objects are [0, 1, 2] and [3, 4, 5]. If\n direction is below the returned objects are [0, 1, 2, 3] and\n [4, 5]. Default is below.\n parent : WrightTools.Collection\n The parent collection in which to place the 'split' collection.\n verbose : bool (optional)\n Toggle talkback. 
Default is True.\n\n Returns\n -------\n WrightTools.collection.Collection\n A Collection of data objects.\n The order of the objects is such that the axis points retain their original order.\n\n See Also\n --------\n chop\n Divide the dataset into its lower-dimensionality components.\n collapse\n Collapse the dataset along one axis.\n \"\"\"\n raise NotImplementedError\n # axis ------------------------------------------------------------------------------------\n if isinstance(axis, int):\n axis_index = axis\n elif isinstance(axis, str):\n axis_index = self.axis_names.index(axis)\n else:\n raise TypeError(\"axis: expected {int, str}, got %s\" % type(axis))\n axis = self._axes[axis_index]\n # indicies --------------------------------------------------------------------------------\n # positions must be iterable and should be a numpy array\n if type(positions) in [int, float]:\n positions = [positions]\n positions = np.array(positions)\n # positions should be in the data units\n if units != 'same':\n positions = wt_units.converter(positions, units, axis.units)\n # get indicies of split\n indicies = []\n for position in positions:\n idx = np.argmin(abs(axis[:] - position))\n indicies.append(idx)\n indicies.sort()\n # set direction according to units\n flip = direction == 'above'\n if axis[-1] < axis[0]:\n flip = not flip\n if flip:\n indicies = [i - 1 for i in indicies]\n # process ---------------------------------------------------------------------------------\n outs = wt_collection.Collection(name='split', parent=parent,\n edit_local=parent is not None)\n start = 0\n stop = 0\n for i in range(len(indicies) + 1):\n # get start and stop\n start = stop # previous value\n if i == len(indicies):\n stop = len(axis)\n else:\n stop = indicies[i] + 1\n # new data object prepare\n new_name = \"split%03d\" % i\n if stop - start < 1:\n outs.create_data(\"\")\n elif stop - start == 1:\n attrs = dict(self.attrs)\n attrs.pop('name', None)\n new_data = outs.create_data(new_name, **attrs)\n for ax in self._axes:\n if ax != axis:\n attrs = dict(ax.attrs)\n attrs.pop('name', None)\n attrs.pop('units', None)\n new_data.create_axis(ax.natural_name, ax[:], ax.units, **attrs)\n slc = [slice(None)] * len(self.shape)\n slc[axis_index] = start\n for ch in self.channels:\n attrs = dict(ch.attrs)\n attrs.pop('name', None)\n attrs.pop('units', None)\n new_data.create_channel(ch.natural_name, ch[:][slc], ch.units, **attrs)\n else:\n attrs = dict(self.attrs)\n attrs.pop('name', None)\n new_data = outs.create_data(new_name, **attrs)\n for ax in self._axes:\n if ax == axis:\n slc = slice(start, stop)\n else:\n slc = slice(None)\n attrs = dict(ax.attrs)\n attrs.pop('name', None)\n attrs.pop('units', None)\n new_data.create_axis(ax.natural_name, ax[slc], ax.units, **attrs)\n slc = [slice(None)] * len(self.shape)\n slc[axis_index] = slice(start, stop)\n for ch in self.channels:\n attrs = dict(ch.attrs)\n attrs.pop('name', None)\n attrs.pop('units', None)\n new_data.create_channel(ch.natural_name, ch[slc], ch.units, **attrs)\n # post process ----------------------------------------------------------------------------\n if verbose:\n print('split data into {0} pieces along {1}:'.format(len(indicies) + 1,\n axis.natural_name))\n for i in range(len(outs)):\n new_data = outs[i]\n if new_data is None:\n print(' {0} : None'.format(i))\n elif len(new_data.shape) < len(self.shape):\n print(' {0} : {1} {2}(constant)'.format(i, axis.natural_name, axis.units))\n else:\n new_axis = new_data.axes[axis_index]\n print(' {0} : {1} to {2} 
{3} (length {4})'.format(i, new_axis[0],\n new_axis[-1],\n new_axis.units,\n new_axis.size))\n return outs\n\n def transform(self, *axes, verbose=True):\n \"\"\"Transform the data.\n\n Parameters\n ----------\n axes : strings\n Expressions for the new set of axes.\n verbose : boolean (optional)\n Toggle talkback. Default is True\n \"\"\"\n # TODO: ensure that transform does not break data\n # create\n new = []\n current = {a.expression: a for a in self._axes}\n for expression in axes:\n axis = current.get(expression, Axis(self, expression))\n new.append(axis)\n self._axes = new\n # units\n for a in self._axes:\n if a.units is None:\n a.convert(a.variables[0].units)\n # finish\n self.flush()\n self._on_axes_updated()\n\n def zoom(self, factor, order=1, verbose=True):\n \"\"\"Zoom the data array using spline interpolation of the requested order.\n\n The number of points along each axis is increased by factor.\n See `scipy ndimage`__ for more info.\n\n __ http://docs.scipy.org/doc/scipy/reference/\n generated/scipy.ndimage.interpolation.zoom.html\n\n Parameters\n ----------\n factor : float\n The number of points along each axis will increase by this factor.\n order : int (optional)\n The order of the spline used to interpolate onto new points.\n verbose : bool (optional)\n Toggle talkback. Default is True.\n \"\"\"\n raise NotImplementedError\n import scipy.ndimage\n # axes\n for axis in self._axes:\n axis[:] = scipy.ndimage.interpolation.zoom(axis[:], factor, order=order)\n # channels\n for channel in self.channels:\n channel[:] = scipy.ndimage.interpolation.zoom(channel[:], factor, order=order)\n # return\n if verbose:\n print('data zoomed to new shape:', self.shape)\n", "path": "WrightTools/data/_data.py"}], "after_files": [{"content": "\"\"\"Central data class and associated.\"\"\"\n\n\n# --- import --------------------------------------------------------------------------------------\n\n\nimport collections\nimport operator\nimport functools\nimport warnings\n\nimport numpy as np\n\nimport h5py\n\nimport scipy\nfrom scipy.interpolate import griddata, interp1d\n\nfrom .._group import Group\nfrom .. import collection as wt_collection\nfrom .. import exceptions as wt_exceptions\nfrom .. import kit as wt_kit\nfrom .. 
import units as wt_units\nfrom ._axis import Axis, identifier_to_operator\nfrom ._channel import Channel\nfrom ._variable import Variable\n\n\n# --- define --------------------------------------------------------------------------------------\n\n\n__all__ = ['Data']\n\n\n# --- class ---------------------------------------------------------------------------------------\n\n\nclass Data(Group):\n \"\"\"Multidimensional dataset.\"\"\"\n\n class_name = 'Data'\n\n def __init__(self, *args, **kwargs):\n self._axes = []\n Group.__init__(self, *args, **kwargs)\n # populate axes from attrs string\n for identifier in self.attrs.get('axes', []):\n identifier = identifier.decode()\n expression, units = identifier.split('{')\n units = units.replace('}', '')\n for i in identifier_to_operator.keys():\n expression = expression.replace(i, identifier_to_operator[i])\n expression = expression.replace(' ', '') # remove all whitespace\n axis = Axis(self, expression, units.strip())\n self._axes.append(axis)\n self._current_axis_identities_in_natural_namespace = []\n self._on_axes_updated()\n # the following are populated if not already recorded\n self.channel_names\n self.source\n self.variable_names\n\n def __repr__(self):\n return '<WrightTools.Data \\'{0}\\' {1} at {2}>'.format(\n self.natural_name, str(self.axis_names), '::'.join([self.filepath, self.name]))\n\n @property\n def axes(self):\n return tuple(self._axes)\n\n @property\n def axis_expressions(self):\n \"\"\"Axis expressions.\"\"\"\n return tuple(a.expression for a in self._axes)\n\n @property\n def axis_names(self):\n \"\"\"Axis names.\"\"\"\n return tuple(a.natural_name for a in self._axes)\n\n @property\n def channel_names(self):\n \"\"\"Channel names.\"\"\"\n if 'channel_names' not in self.attrs.keys():\n self.attrs['channel_names'] = np.array([], dtype='S')\n return tuple(s.decode() for s in self.attrs['channel_names'])\n\n @channel_names.setter\n def channel_names(self, value):\n \"\"\"Set channel names.\"\"\"\n self.attrs['channel_names'] = np.array(value, dtype='S')\n\n @property\n def channels(self):\n \"\"\"Channels.\"\"\"\n return tuple(self[n] for n in self.channel_names)\n\n @property\n def datasets(self):\n \"\"\"Datasets.\"\"\"\n return tuple(v for _, v in self.items() if isinstance(v, h5py.Dataset))\n\n @property\n def kind(self):\n \"\"\"Kind.\"\"\"\n if 'kind' not in self.attrs.keys():\n self.attrs['kind'] = 'None'\n value = self.attrs['kind']\n return value if not value == 'None' else None\n\n @property\n def ndim(self):\n \"\"\"Get number of dimensions.\"\"\"\n try:\n assert self._ndim is not None\n except (AssertionError, AttributeError):\n if len(self.variables) == 0:\n self._ndim = 0\n else:\n self._ndim = self.variables[0].ndim\n finally:\n return self._ndim\n\n @property\n def shape(self):\n \"\"\"Shape.\"\"\"\n try:\n assert self._shape is not None\n except (AssertionError, AttributeError):\n self._shape = wt_kit.joint_shape(*self.variables)\n finally:\n return self._shape\n\n @property\n def size(self):\n \"\"\"Size.\"\"\"\n return functools.reduce(operator.mul, self.shape)\n\n @property\n def source(self):\n \"\"\"Source.\"\"\"\n if 'source' not in self.attrs.keys():\n self.attrs['source'] = 'None'\n value = self.attrs['source']\n return value if not value == 'None' else None\n\n @property\n def units(self):\n \"\"\"All axis units.\"\"\"\n return tuple(a.units for a in self._axes)\n\n @property\n def variable_names(self):\n \"\"\"Variable names.\"\"\"\n if 'variable_names' not in self.attrs.keys():\n 
self.attrs['variable_names'] = np.array([], dtype='S')\n return tuple(s.decode() for s in self.attrs['variable_names'])\n\n @variable_names.setter\n def variable_names(self, value):\n \"\"\"Set variable names.\"\"\"\n self.attrs['variable_names'] = np.array(value, dtype='S')\n\n @property\n def variables(self):\n \"\"\"Variables.\"\"\"\n try:\n assert self._variables is not None\n except (AssertionError, AttributeError):\n self._variables = [self[n] for n in self.variable_names]\n finally:\n return self._variables\n\n @property\n def _leaf(self):\n return '{0} {1}'.format(self.natural_name, self.shape)\n\n def _on_axes_updated(self):\n \"\"\"Method to run when axes are changed in any way.\n\n Propagates updated axes properly.\n \"\"\"\n # update attrs\n self.attrs['axes'] = [a.identity.encode() for a in self._axes]\n # remove old attributes\n while len(self._current_axis_identities_in_natural_namespace) > 0:\n key = self._current_axis_identities_in_natural_namespace.pop(0)\n self.__dict__.pop(key)\n # populate new attributes\n for a in self._axes:\n key = a.natural_name\n setattr(self, key, a)\n self._current_axis_identities_in_natural_namespace.append(key)\n\n def _print_branch(self, prefix, depth, verbose):\n\n def print_leaves(prefix, lis, vline=True):\n for i, item in enumerate(lis):\n if vline:\n a = '\u2502 '\n else:\n a = ' '\n if i + 1 == len(lis):\n b = '\u2514\u2500\u2500 '\n else:\n b = '\u251c\u2500\u2500 '\n s = prefix + a + b + '{0}: {1}'.format(i, item._leaf)\n print(s)\n\n if verbose:\n # axes\n print(prefix + '\u251c\u2500\u2500 axes')\n print_leaves(prefix, self.axes)\n # variables\n print(prefix + '\u251c\u2500\u2500 variables')\n print_leaves(prefix, self.variables)\n # channels\n print(prefix + '\u2514\u2500\u2500 channels')\n print_leaves(prefix, self.channels, vline=False)\n else:\n # axes\n s = 'axes: '\n s += ', '.join(['{0} ({1})'.format(a.expression, a.units) for a in self.axes])\n print(prefix + '\u251c\u2500\u2500 ' + s)\n # channels\n s = 'channels: '\n s += ', '.join(self.channel_names)\n print(prefix + '\u2514\u2500\u2500 ' + s)\n\n def bring_to_front(self, channel):\n \"\"\"Bring a specific channel to the zero-indexed position in channels.\n\n All other channels get pushed back but remain in order.\n\n Parameters\n ----------\n channel : int or str\n Channel index or name.\n \"\"\"\n channel_index = wt_kit.get_index(self.channel_names, channel)\n new = list(self.channel_names)\n new.insert(0, new.pop(channel_index))\n self.channel_names = new\n\n def chop(self, *args, at={}, parent=None, verbose=True):\n \"\"\"Divide the dataset into its lower-dimensionality components.\n\n Parameters\n ----------\n axis : str or int (args)\n Axes of the returned data objects. Strings refer to the names of\n axes in this object, integers refer to their index. Provide multiple\n axes to return multidimensional data objects.\n at : dict (optional)\n Choice of position along an axis. Keys are axis names, values are lists\n ``[position, input units]``. If exact position does not exist,\n the closest valid position is used.\n parent : WrightTools Collection instance (optional)\n Collection to place the new \"chop\" collection within. Default is\n None (new parent).\n verbose : bool (optional)\n Toggle talkback. 
Default is True.\n\n Returns\n -------\n WrightTools Collection\n Collection of chopped data objects.\n\n Examples\n --------\n >>> data.axis_names\n ['d2', 'w1', 'w2']\n\n Get all w1 wigners.\n\n >>> datas = data.chop('d2', 'w1')\n >>> len(datas)\n 51\n\n Get 2D frequency at d2=0 fs.\n\n >>> datas = data.chop('w1', 'w2', at={'d2': [0, 'fs']})\n >>> len(datas)\n 0\n >>> datas[0].axis_names\n ['w1', 'w2']\n >>> datas[0].d2[:]\n 0.\n\n See Also\n --------\n collapse\n Collapse the dataset along one axis.\n split\n Split the dataset while maintaining its dimensionality.\n \"\"\"\n # parse args\n args = list(args)\n for i, arg in enumerate(args):\n if isinstance(arg, int):\n args[i] = self._axes[arg].expression\n # get output collection\n out = wt_collection.Collection(name='chop', parent=parent)\n # get output shape\n kept = args + list(at.keys())\n kept_axes = [self._axes[self.axis_expressions.index(a)] for a in kept]\n removed_axes = [a for a in self._axes if a not in kept_axes]\n removed_shape = wt_kit.joint_shape(*removed_axes)\n if removed_shape == ():\n removed_shape = (1,) * self.ndim\n # iterate\n i = 0\n for idx in np.ndindex(removed_shape):\n idx = np.array(idx, dtype=object)\n idx[np.array(removed_shape) == 1] = slice(None)\n for axis, point in at.items():\n point, units = point\n destination_units = self._axes[self.axis_names.index(axis)].units\n point = wt_units.converter(point, units, destination_units)\n axis_index = self.axis_names.index(axis)\n axis = self._axes[axis_index]\n idx[axis_index] = np.argmin(np.abs(axis[tuple(idx)] - point))\n data = out.create_data(name='chop%03i' % i)\n for v in self.variables:\n kwargs = {}\n kwargs['name'] = v.natural_name\n kwargs['values'] = v[idx]\n kwargs['units'] = v.units\n kwargs['label'] = v.label\n kwargs.update(v.attrs)\n data.create_variable(**kwargs)\n for c in self.channels:\n kwargs = {}\n kwargs['name'] = c.natural_name\n kwargs['values'] = c[idx]\n kwargs['units'] = c.units\n kwargs['label'] = c.label\n kwargs['signed'] = c.signed\n kwargs.update(c.attrs)\n data.create_channel(**kwargs)\n new_axes = [a.expression for a in kept_axes if a.expression not in at.keys()]\n new_axis_units = [a.units for a in kept_axes if a.expression not in at.keys()]\n data.transform(*new_axes)\n for j, units in enumerate(new_axis_units):\n data.axes[j].convert(units)\n i += 1\n out.flush()\n # return\n if verbose:\n es = [a.expression for a in kept_axes]\n print('chopped data into %d piece(s)' % len(out), 'in', es)\n return out\n\n def collapse(self, axis, method='integrate'):\n \"\"\"\n Collapse the dataset along one axis.\n\n Parameters\n ----------\n axis : int or str\n The axis to collapse along.\n method : {'integrate', 'average', 'sum', 'max', 'min'} (optional)\n The method of collapsing the given axis. Method may also be list\n of methods corresponding to the channels of the object. Default\n is integrate. 
All methods but integrate disregard NANs.\n\n See Also\n --------\n chop\n Divide the dataset into its lower-dimensionality components.\n split\n Split the dataset while maintaining its dimensionality.\n \"\"\"\n raise NotImplementedError\n # get axis index --------------------------------------------------------------------------\n if isinstance(axis, int):\n axis_index = axis\n elif isinstance(axis, str):\n axis_index = self.axis_names.index(axis)\n else:\n raise TypeError(\"axis: expected {int, str}, got %s\" % type(axis))\n # methods ---------------------------------------------------------------------------------\n if isinstance(method, list):\n if len(method) == len(self.channels):\n methods = method\n else:\n print('method argument incompatible in data.collapse')\n elif isinstance(method, str):\n methods = [method for _ in self.channels]\n # collapse --------------------------------------------------------------------------------\n for method, channel in zip(methods, self.channels):\n if method in ['int', 'integrate']:\n channel[:] = np.trapz(\n y=channel[:], x=self._axes[axis_index][:], axis=axis_index)\n elif method == 'sum':\n channel[:] = np.nansum(channel[:], axis=axis_index)\n elif method in ['max', 'maximum']:\n channel[:] = np.nanmax(channel[:], axis=axis_index)\n elif method in ['min', 'minimum']:\n channel[:] = np.nanmin(channel[:], axis=axis_index)\n elif method in ['ave', 'average', 'mean']:\n channel[:] = np.nanmean(channel[:], axis=axis_index)\n else:\n print('method not recognized in data.collapse')\n # cleanup ---------------------------------------------------------------------------------\n self._axes.pop(axis_index)\n\n def convert(self, destination_units, *, convert_variables=False, verbose=True):\n \"\"\"Convert all compatable axes to given units.\n\n Parameters\n ----------\n destination_units : str\n Destination units.\n convert_variables : boolean (optional)\n Toggle conversion of stored arrays. Default is False\n verbose : bool (optional)\n Toggle talkback. Default is True.\n\n See Also\n --------\n Axis.convert\n Convert a single axis object to compatable units. Call on an\n axis object in data.axes.\n \"\"\"\n # get kind of units\n units_kind = wt_units.kind(destination_units)\n # apply to all compatible axes\n for axis in self.axes:\n if axis.units_kind == units_kind:\n axis.convert(destination_units, convert_variables=convert_variables)\n if verbose:\n print('axis', axis.expression, 'converted')\n if convert_variables:\n for var in self.variables:\n if wt_units.kind(var.units) == units_kind:\n var.convert(destination_units)\n\n if verbose:\n print('variable', var.natural_name, 'converted')\n self._on_axes_updated()\n\n def create_channel(self, name, values=None, shape=None, units=None, **kwargs):\n \"\"\"Append a new channel.\n\n Parameters\n ----------\n name : string\n Unique name for this channel.\n values : array (optional)\n Array. If None, an empty array equaling the data shape is\n created. Default is None.\n shape : tuple of int\n Shape to use. must broadcast with the full shape.\n Only used if `values` is None.\n Default is the full shape of self.\n units : string (optional)\n Channel units. 
Default is None.\n kwargs : dict\n Additional keyword arguments passed to Channel instantiation.\n\n Returns\n -------\n Channel\n Created channel.\n \"\"\"\n require_kwargs = {}\n if values is None:\n if shape is None:\n require_kwargs['shape'] = self.shape\n else:\n require_kwargs['shape'] = shape\n require_kwargs['dtype'] = np.float64\n else:\n require_kwargs['data'] = values\n require_kwargs['shape'] = values.shape\n require_kwargs['dtype'] = values.dtype\n # create dataset\n dataset_id = self.require_dataset(name=name, chunks=True, **require_kwargs).id\n channel = Channel(self, dataset_id, units=units, **kwargs)\n # finish\n self.attrs['channel_names'] = np.append(self.attrs['channel_names'], name.encode())\n return channel\n\n def create_variable(self, name, values=None, shape=None, units=None, **kwargs):\n \"\"\"Add new child variable.\n\n Parameters\n ----------\n name : string\n Unique identifier.\n values : array-like (optional)\n Array to populate variable with. If None, an variable will be filled with NaN.\n Default is None.\n shape : tuple of int\n Shape to use. must broadcast with the full shape.\n Only used if `values` is None.\n Default is the full shape of self.\n units : string (optional)\n Variable units. Default is None.\n kwargs\n Additional kwargs to variable instantiation.\n\n Returns\n -------\n WrightTools Variable\n New child variable.\n \"\"\"\n if values is None:\n if shape is None:\n shape = self.shape\n dtype = np.float64\n else:\n shape = values.shape\n dtype = values.dtype\n # create dataset\n id = self.require_dataset(name=name, data=values, shape=shape, dtype=dtype).id\n variable = Variable(self, id, units=units, **kwargs)\n # finish\n self.variables.append(variable)\n self.attrs['variable_names'] = np.append(self.attrs['variable_names'], name.encode())\n return variable\n\n def flush(self):\n super().flush()\n\n def get_nadir(self, channel=0):\n \"\"\"Get the coordinates in units of the minimum in a channel.\n\n Parameters\n ----------\n channel : int or str (optional)\n Channel. Default is 0.\n\n Returns\n -------\n generator of numbers\n Coordinates in units for each axis.\n \"\"\"\n # get channel\n if isinstance(channel, int):\n channel_index = channel\n elif isinstance(channel, str):\n channel_index = self.channel_names.index(channel)\n else:\n raise TypeError(\"channel: expected {int, str}, got %s\" % type(channel))\n channel = self.channels[channel_index]\n # get indicies\n idx = channel.argmin()\n # finish\n return tuple(a[idx] for a in self._axes)\n\n def get_zenith(self, channel=0):\n \"\"\"Get the coordinates in units of the maximum in a channel.\n\n Parameters\n ----------\n channel : int or str (optional)\n Channel. Default is 0.\n\n Returns\n -------\n generator of numbers\n Coordinates in units for each axis.\n \"\"\"\n # get channel\n if isinstance(channel, int):\n channel_index = channel\n elif isinstance(channel, str):\n channel_index = self.channel_names.index(channel)\n else:\n raise TypeError(\"channel: expected {int, str}, got %s\" % type(channel))\n channel = self.channels[channel_index]\n # get indicies\n idx = channel.argmax()\n # finish\n return tuple(a[idx] for a in self._axes)\n\n def heal(self, channel=0, method='linear', fill_value=np.nan,\n verbose=True):\n \"\"\"\n Remove nans from channel using interpolation.\n\n Parameters\n ----------\n channel : int or str (optional)\n Channel to heal. Default is 0.\n method : {'linear', 'nearest', 'cubic'} (optional)\n The interpolation method. 
Note that cubic interpolation is only\n possible for 1D and 2D data. See `griddata`__ for more information.\n Default is linear.\n fill_value : number-like (optional)\n The value written to pixels that cannot be filled by interpolation.\n Default is nan.\n verbose : bool (optional)\n Toggle talkback. Default is True.\n\n\n __ http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.griddata.html\n\n\n .. note:: Healing may take several minutes for large datasets.\n Interpolation time goes as nearest, linear, then cubic.\n\n\n \"\"\"\n warnings.warn('heal', category=wt_exceptions.EntireDatasetInMemoryWarning)\n timer = wt_kit.Timer(verbose=False)\n with timer:\n # channel\n if isinstance(channel, int):\n channel_index = channel\n elif isinstance(channel, str):\n channel_index = self.channel_names.index(channel)\n else:\n raise TypeError(\"channel: expected {int, str}, got %s\" % type(channel))\n channel = self.channels[channel_index]\n values = self.channels[channel_index][:]\n points = [axis[:] for axis in self._axes]\n xi = tuple(np.meshgrid(*points, indexing='ij'))\n # 'undo' gridding\n arr = np.zeros((len(self._axes) + 1, values.size))\n for i in range(len(self._axes)):\n arr[i] = xi[i].flatten()\n arr[-1] = values.flatten()\n # remove nans\n arr = arr[:, ~np.isnan(arr).any(axis=0)]\n # grid data wants tuples\n tup = tuple([arr[i] for i in range(len(arr) - 1)])\n # grid data\n out = griddata(tup, arr[-1], xi, method=method, fill_value=fill_value)\n self.channels[channel_index][:] = out\n # print\n if verbose:\n print('channel {0} healed in {1} seconds'.format(\n channel.name, np.around(timer.interval, decimals=3)))\n\n def level(self, channel, axis, npts, *, verbose=True):\n \"\"\"Subtract the average value of npts at the edge of a given axis.\n\n Parameters\n ----------\n channel : int or str\n Channel to level.\n axis : int\n Axis to level along.\n npts : int\n Number of points to average for each slice. Positive numbers\n take points at leading indicies and negative numbers take points\n at trailing indicies.\n verbose : bool (optional)\n Toggle talkback. Default is True.\n \"\"\"\n warnings.warn('level', category=wt_exceptions.EntireDatasetInMemoryWarning)\n channel_index = wt_kit.get_index(self.channel_names, channel)\n channel = self.channels[channel_index]\n # verify npts not zero\n npts = int(npts)\n if npts == 0:\n raise wt_exceptions.ValueError('npts must not be zero')\n # get subtrahend\n ss = [slice(None)] * self.ndim\n if npts > 0:\n ss[axis] = slice(0, npts, None)\n else:\n ss[axis] = slice(npts, None, None)\n subtrahend = np.nanmean(channel[ss], axis=axis)\n if self.ndim > 1:\n subtrahend = np.expand_dims(subtrahend, axis=axis)\n # level\n channel[:] = channel[:] - subtrahend # verbose\n # finish\n channel._null = 0\n if verbose:\n print('channel {0} leveled along axis {1}'.format(channel.natural_name, axis))\n\n def map_variable(self, variable, points, input_units='same', *, name=None, parent=None,\n verbose=True):\n \"\"\"Map points of an axis to new points using linear interpolation.\n\n Out-of-bounds points are written nan.\n\n Parameters\n ----------\n variable : string\n The variable to map onto.\n points : array-like or int\n If array, the new points. If int, new points will have the same\n limits, with int defining the number of evenly spaced points\n between.\n input_units : str (optional)\n The units of the new points. 
Default is same, which assumes\n the new points have the same units as the axis.\n name : string (optional)\n The name of the new data object. If None, generated from\n natural_name. Default is None.\n parent : WrightTools.Collection (optional)\n Parent of new data object. If None, data is made at root of a\n new temporary file.\n verbose : bool (optional)\n Toggle talkback. Default is True.\n\n Returns\n -------\n WrightTools.Data\n New data object.\n \"\"\"\n # get variable index\n variable_index = wt_kit.get_index(self.variable_names, variable)\n variable = self.variables[variable_index]\n # get points\n if isinstance(points, int):\n points = np.linspace(variable.min(), variable.max(), points)\n points = np.array(points)\n # points dimensionality\n if points.ndim < variable.ndim:\n for i, d in enumerate(variable.shape):\n if d == 1:\n points = np.expand_dims(points, axis=i)\n # convert points\n if input_units == 'same':\n pass\n else:\n points = wt_units.converter(points, input_units, variable.units)\n # construct new data object\n special = ['name', 'axes', 'channel_names', 'variable_names']\n kwargs = {k: v for k, v in self.attrs.items() if k not in special}\n if name is None:\n name = '{0}_{1}_mapped'.format(self.natural_name, variable.natural_name)\n kwargs['name'] = name\n kwargs['parent'] = parent\n out = Data(**kwargs)\n # mapped variable\n values = points\n out.create_variable(values=values, **variable.attrs)\n # orthogonal variables\n for v in self.variables:\n if wt_kit.orthogonal(v.shape, variable.shape):\n out.create_variable(values=v[:], **v.attrs)\n out.transform(*self.axis_expressions)\n # interpolate\n if self.ndim == 1:\n\n def interpolate(dataset, points):\n function = scipy.interpolate.interp1d(variable[:], dataset[:], bounds_error=False)\n return function(points)\n\n else:\n pts = np.array([a.full.flatten() for a in self.axes]).T\n out_pts = np.array([a.full.flatten() for a in out.axes]).T\n\n def interpolate(dataset, points):\n values = dataset.full.flatten()\n function = scipy.interpolate.LinearNDInterpolator(pts, values, rescale=True)\n new = function(out_pts)\n new.shape = out.shape\n return new\n\n for v in self.variables:\n if v.natural_name not in out.variable_names:\n out.create_variable(values=interpolate(v, points), **v.attrs)\n out.variable_names = self.variable_names # enforce old order\n out._variables = None # force regeneration of variables @property\n for channel in self.channels:\n out.create_channel(values=interpolate(channel, points), **channel.attrs)\n # finish\n if verbose:\n print('data mapped from {0} to {1}'.format(self.shape, out.shape))\n return out\n\n def offset(self, points, offsets, along, offset_axis,\n units='same', offset_units='same', mode='valid',\n method='linear', verbose=True):\n \"\"\"Offset one axis based on another axis' values.\n\n Useful for correcting instrumental artifacts such as zerotune.\n\n Parameters\n ----------\n points : 1D array-like\n Points.\n offsets : 1D array-like\n Offsets.\n along : str or int\n Axis that points array lies along.\n offset_axis : str or int\n Axis to offset using offsets.\n units : str (optional)\n Units of points array.\n offset_units : str (optional)\n Units of offsets aray.\n mode : {'valid', 'full', 'old'} (optional)\n Define how far the new axis will extend. Points outside of valid\n interpolation range will be written nan.\n method : {'linear', 'nearest', 'cubic'} (optional)\n The interpolation method. Note that cubic interpolation is only\n possible for 1D and 2D data. 
See `griddata`__ for more information.\n Default is linear.\n verbose : bool (optional)\n Toggle talkback. Default is True.\n\n\n __ http://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.griddata.html\n\n >>> points # an array of w1 points\n >>> offsets # an array of d1 corrections\n >>> data.offset(points, offsets, 'w1', 'd1')\n\n \"\"\"\n raise NotImplementedError\n # axis ------------------------------------------------------------------------------------\n if isinstance(along, int):\n axis_index = along\n elif isinstance(along, str):\n axis_index = self.axis_names.index(along)\n else:\n raise TypeError(\"along: expected {int, str}, got %s\" % type(along))\n axis = self._axes[axis_index]\n # values & points -------------------------------------------------------------------------\n # get values, points, units\n if units == 'same':\n input_units = axis.units\n else:\n input_units = units\n # check offsets is 1D or 0D\n if len(offsets.shape) == 1:\n pass\n else:\n raise RuntimeError('values must be 1D or 0D in offset!')\n # check if units is compatible, convert\n dictionary = getattr(wt_units, axis.units_kind)\n if input_units in dictionary.keys():\n pass\n else:\n raise RuntimeError('units incompatible in offset!')\n points = wt_units.converter(points, input_units, axis.units)\n # create correction array\n function = interp1d(points, offsets, bounds_error=False)\n corrections = function(axis[:])\n # remove nans\n finite_indicies = np.where(np.isfinite(corrections))[0]\n left_pad_width = finite_indicies[0]\n right_pad_width = len(corrections) - finite_indicies[-1] - 1\n corrections = np.pad(corrections[np.isfinite(corrections)],\n (int(left_pad_width), int(right_pad_width)), mode='edge')\n # do correction ---------------------------------------------------------------------------\n # transpose so axis is last\n transpose_order = np.arange(len(self._axes))\n transpose_order[axis_index] = len(self._axes) - 1\n transpose_order[-1] = axis_index\n self.transpose(transpose_order, verbose=False)\n # get offset axis index\n if isinstance(offset_axis, int):\n offset_axis_index = offset_axis\n elif isinstance(offset_axis, str):\n offset_axis_index = self.axis_names.index(offset_axis)\n else:\n raise TypeError(\"offset_axis: expected {int, str}, got %s\" % type(offset_axis))\n # new points\n new_points = [a[:] for a in self._axes]\n old_offset_axis_points = self._axes[offset_axis_index][:]\n spacing = abs((old_offset_axis_points.max() - old_offset_axis_points.min()) /\n float(len(old_offset_axis_points)))\n if mode == 'old':\n new_offset_axis_points = old_offset_axis_points\n elif mode == 'valid':\n _max = old_offset_axis_points.max() + corrections.min()\n _min = old_offset_axis_points.min() + corrections.max()\n n = int(abs(np.ceil((_max - _min) / spacing)))\n new_offset_axis_points = np.linspace(_min, _max, n)\n elif mode == 'full':\n _max = old_offset_axis_points.max() + corrections.max()\n _min = old_offset_axis_points.min() + corrections.min()\n n = np.ceil((_max - _min) / spacing)\n new_offset_axis_points = np.linspace(_min, _max, n)\n new_points[offset_axis_index] = new_offset_axis_points\n new_xi = tuple(np.meshgrid(*new_points, indexing='ij'))\n xi = tuple(np.meshgrid(*[a[:] for a in self._axes], indexing='ij'))\n for channel in self.channels:\n # 'undo' gridding\n arr = np.zeros((len(self._axes) + 1, channel[:].size))\n for i in range(len(self._axes)):\n arr[i] = xi[i].flatten()\n arr[-1] = channel[:].flatten()\n # do corrections\n corrections = list(corrections)\n 
corrections = corrections * int((len(arr[0]) / len(corrections)))\n arr[offset_axis_index] += corrections\n # grid data\n tup = tuple([arr[i] for i in range(len(arr) - 1)])\n # note that rescale is crucial in this operation\n out = griddata(tup, arr[-1], new_xi, method=method,\n fill_value=np.nan, rescale=True)\n channel[:] = out\n self._axes[offset_axis_index][:] = new_offset_axis_points\n # transpose out\n self.transpose(transpose_order, verbose=False)\n\n def print_tree(self, *, verbose=True):\n \"\"\"Print a ascii-formatted tree representation of the data contents.\"\"\"\n print('{0} ({1})'.format(self.natural_name, self.filepath))\n self._print_branch('', depth=0, verbose=verbose)\n\n def remove_channel(self, channel, *, verbose=True):\n \"\"\"Remove channel from data.\n\n Parameters\n ----------\n channel : int or str\n Channel index or name to remove.\n verbose : boolean (optional)\n Toggle talkback. Default is True.\n \"\"\"\n channel_index = wt_kit.get_index(self.channel_names, channel)\n new = list(self.channel_names)\n name = new.pop(channel_index)\n del self[name]\n self.channel_names = new\n if verbose:\n print('channel {0} removed'.format(name))\n\n def remove_variable(self, variable, *, implied=True, verbose=True):\n \"\"\"Remove variable from data.\n\n Parameters\n ----------\n variable : int or str\n Variable index or name to remove.\n implied : boolean (optional)\n Toggle deletion of other variables that start with the same\n name. Default is True.\n verbose : boolean (optional)\n Toggle talkback. Default is True.\n \"\"\"\n if isinstance(variable, int):\n variable = self.variable_names[variable]\n # find all of the implied variables\n removed = []\n if implied:\n for n in self.variable_names:\n if n.startswith(variable):\n removed.append(n)\n else:\n removed = [variable]\n # check that axes will not be ruined\n for n in removed:\n for a in self._axes:\n if n in a.expression:\n message = '{0} is contained in axis {1}'.format(n, a.expression)\n raise RuntimeError(message)\n # do removal\n for n in removed:\n variable_index = wt_kit.get_index(self.variable_names, n)\n new = list(self.variable_names)\n name = new.pop(variable_index)\n del self[name]\n self.variable_names = new\n # finish\n if verbose:\n print('{0} variable(s) removed:'.format(len(removed)))\n for n in removed:\n print(' {0}'.format(n))\n\n def rename_channels(self, *, verbose=True, **kwargs):\n \"\"\"Rename a set of channels.\n\n Parameters\n ----------\n kwargs\n Keyword arguments of the form current:'new'.\n verbose : boolean (optional)\n Toggle talkback. 
Default is True\n \"\"\"\n # ensure that items will remain unique\n changed = kwargs.keys()\n for k, v in kwargs.items():\n if v not in changed and v in self.keys():\n raise wt_exceptions.NameNotUniqueError(v)\n # compile references to items that are changing\n new = {}\n for k, v in kwargs.items():\n obj = self[k]\n index = self.channel_names.index(k)\n # rename\n new[v] = obj, index\n obj.instances.pop(obj.fullpath, None)\n obj.natural_name = str(v)\n # remove old references\n del self[k]\n # apply new references\n names = list(self.channel_names)\n for v, value in new.items():\n obj, index = value\n self[v] = obj\n names[index] = v\n self.channel_names = names\n # finish\n if verbose:\n print('{0} channel(s) renamed:'.format(len(kwargs)))\n for k, v in kwargs.items():\n print(' {0} --> {1}'.format(k, v))\n\n def rename_variables(self, *, implied=True, verbose=True, **kwargs):\n \"\"\"Rename a set of variables.\n\n Parameters\n ----------\n kwargs\n Keyword arguments of the form current:'new'.\n implied : boolean (optional)\n Toggle inclusion of other variables that start with the same\n name. Default is True.\n verbose : boolean (optional)\n Toggle talkback. Default is True\n \"\"\"\n # find all of the implied variables\n kwargs = collections.OrderedDict(kwargs)\n if implied:\n new = collections.OrderedDict()\n for k, v in kwargs.items():\n for n in self.variable_names:\n if n.startswith(k):\n new[n] = n.replace(k, v, 1)\n kwargs = new\n # ensure that items will remain unique\n changed = kwargs.keys()\n for k, v in kwargs.items():\n if v not in changed and v in self.keys():\n raise wt_exceptions.NameNotUniqueError(v)\n # compile references to items that are changing\n new = {}\n for k, v in kwargs.items():\n obj = self[k]\n index = self.variable_names.index(k)\n # rename\n new[v] = obj, index\n obj.instances.pop(obj.fullpath, None)\n obj.natural_name = str(v)\n # remove old references\n del self[k]\n # apply new references\n names = list(self.variable_names)\n for v, value in new.items():\n obj, index = value\n self[v] = obj\n names[index] = v\n self.variable_names = names\n # update axes\n units = self.units\n new = list(self.axis_expressions)\n for i, v in enumerate(kwargs.keys()):\n for j, n in enumerate(new):\n new[j] = n.replace(v, '{%i}' % i)\n for i, n in enumerate(new):\n new[i] = n.format(*kwargs.values())\n self.transform(*new)\n for a, u in zip(self._axes, units):\n a.convert(u)\n # finish\n if verbose:\n print('{0} variable(s) renamed:'.format(len(kwargs)))\n for k, v in kwargs.items():\n print(' {0} --> {1}'.format(k, v))\n\n def share_nans(self):\n \"\"\"Share not-a-numbers between all channels.\n\n If any channel is nan at a given index, all channels will be nan\n at that index after this operation.\n\n Uses the share_nans method found in wt.kit.\n \"\"\"\n def f(_, s, channels):\n outs = wt_kit.share_nans(*[c[s] for c in channels])\n for c, o in zip(channels, outs):\n c[s] = o\n\n self.channels[0].chunkwise(f, self.channels)\n\n def smooth(self, factors, channel=None, verbose=True):\n \"\"\"Smooth a channel using an n-dimenional `kaiser window`__.\n\n Note, all arrays are loaded into memory.\n\n __ https://en.wikipedia.org/wiki/Kaiser_window\n\n Parameters\n ----------\n factors : int or list of int\n The smoothing factor. You may provide a list of smoothing factors\n for each axis.\n channel : int or str or None (optional)\n The channel to smooth. If None, all channels will be smoothed.\n Default is None.\n verbose : bool (optional)\n Toggle talkback. 
Default is True.\n \"\"\"\n warnings.warn('smooth', category=wt_exceptions.EntireDatasetInMemoryWarning)\n # get factors -----------------------------------------------------------------------------\n\n if isinstance(factors, list):\n pass\n else:\n dummy = np.zeros(len(self._axes))\n dummy[::] = factors\n factors = list(dummy)\n # get channels ----------------------------------------------------------------------------\n if channel is None:\n channels = self.channels\n else:\n if isinstance(channel, int):\n channel_index = channel\n elif isinstance(channel, str):\n channel_index = self.channel_names.index(channel)\n else:\n raise TypeError(\"channel: expected {int, str}, got %s\" % type(channel))\n channels = [self.channels[channel_index]]\n # smooth ----------------------------------------------------------------------------------\n for channel in channels:\n values = channel[:]\n for axis_index in range(len(factors)):\n factor = factors[axis_index]\n # transpose so the axis of interest is last\n transpose_order = range(len(values.shape))\n # replace axis_index with zero\n transpose_order = [len(values.shape) - 1 if i ==\n axis_index else i for i in transpose_order]\n transpose_order[len(values.shape) - 1] = axis_index\n values = values.transpose(transpose_order)\n # get kaiser window\n beta = 5.0\n w = np.kaiser(2 * factor + 1, beta)\n # for all slices...\n for index in np.ndindex(values[..., 0].shape):\n current_slice = values[index]\n temp_slice = np.pad(current_slice, int(factor), mode=str('edge'))\n values[index] = np.convolve(temp_slice, w / w.sum(), mode=str('valid'))\n # transpose out\n values = values.transpose(transpose_order)\n # return array to channel object\n channel[:] = values\n if verbose:\n print('smoothed data')\n\n def split(self, axis, positions, units='same', direction='below', parent=None, verbose=True):\n \"\"\"\n Split the data object along a given axis, in units.\n\n Parameters\n ----------\n axis : int or str\n The axis to split along.\n positions : number-type or 1D array-type\n The position(s) to split at, in units. If a non-exact position is\n given, the closest valid axis position will be used.\n units : str (optional)\n The units of the given positions. Default is same, which assumes\n input units are identical to axis units.\n direction : {'below', 'above'} (optional)\n Choose which group of data the points at positions remains with.\n This decision is based on the value, not the index.\n Consider points [0, 1, 2, 3, 4, 5] and split value [3]. If direction\n is above the returned objects are [0, 1, 2] and [3, 4, 5]. If\n direction is below the returned objects are [0, 1, 2, 3] and\n [4, 5]. Default is below.\n parent : WrightTools.Collection\n The parent collection in which to place the 'split' collection.\n verbose : bool (optional)\n Toggle talkback. 
Default is True.\n\n Returns\n -------\n WrightTools.collection.Collection\n A Collection of data objects.\n The order of the objects is such that the axis points retain their original order.\n\n See Also\n --------\n chop\n Divide the dataset into its lower-dimensionality components.\n collapse\n Collapse the dataset along one axis.\n \"\"\"\n raise NotImplementedError\n # axis ------------------------------------------------------------------------------------\n if isinstance(axis, int):\n axis_index = axis\n elif isinstance(axis, str):\n axis_index = self.axis_names.index(axis)\n else:\n raise TypeError(\"axis: expected {int, str}, got %s\" % type(axis))\n axis = self._axes[axis_index]\n # indicies --------------------------------------------------------------------------------\n # positions must be iterable and should be a numpy array\n if type(positions) in [int, float]:\n positions = [positions]\n positions = np.array(positions)\n # positions should be in the data units\n if units != 'same':\n positions = wt_units.converter(positions, units, axis.units)\n # get indicies of split\n indicies = []\n for position in positions:\n idx = np.argmin(abs(axis[:] - position))\n indicies.append(idx)\n indicies.sort()\n # set direction according to units\n flip = direction == 'above'\n if axis[-1] < axis[0]:\n flip = not flip\n if flip:\n indicies = [i - 1 for i in indicies]\n # process ---------------------------------------------------------------------------------\n outs = wt_collection.Collection(name='split', parent=parent,\n edit_local=parent is not None)\n start = 0\n stop = 0\n for i in range(len(indicies) + 1):\n # get start and stop\n start = stop # previous value\n if i == len(indicies):\n stop = len(axis)\n else:\n stop = indicies[i] + 1\n # new data object prepare\n new_name = \"split%03d\" % i\n if stop - start < 1:\n outs.create_data(\"\")\n elif stop - start == 1:\n attrs = dict(self.attrs)\n attrs.pop('name', None)\n new_data = outs.create_data(new_name, **attrs)\n for ax in self._axes:\n if ax != axis:\n attrs = dict(ax.attrs)\n attrs.pop('name', None)\n attrs.pop('units', None)\n new_data.create_axis(ax.natural_name, ax[:], ax.units, **attrs)\n slc = [slice(None)] * len(self.shape)\n slc[axis_index] = start\n for ch in self.channels:\n attrs = dict(ch.attrs)\n attrs.pop('name', None)\n attrs.pop('units', None)\n new_data.create_channel(ch.natural_name, ch[:][slc], ch.units, **attrs)\n else:\n attrs = dict(self.attrs)\n attrs.pop('name', None)\n new_data = outs.create_data(new_name, **attrs)\n for ax in self._axes:\n if ax == axis:\n slc = slice(start, stop)\n else:\n slc = slice(None)\n attrs = dict(ax.attrs)\n attrs.pop('name', None)\n attrs.pop('units', None)\n new_data.create_axis(ax.natural_name, ax[slc], ax.units, **attrs)\n slc = [slice(None)] * len(self.shape)\n slc[axis_index] = slice(start, stop)\n for ch in self.channels:\n attrs = dict(ch.attrs)\n attrs.pop('name', None)\n attrs.pop('units', None)\n new_data.create_channel(ch.natural_name, ch[slc], ch.units, **attrs)\n # post process ----------------------------------------------------------------------------\n if verbose:\n print('split data into {0} pieces along {1}:'.format(len(indicies) + 1,\n axis.natural_name))\n for i in range(len(outs)):\n new_data = outs[i]\n if new_data is None:\n print(' {0} : None'.format(i))\n elif len(new_data.shape) < len(self.shape):\n print(' {0} : {1} {2}(constant)'.format(i, axis.natural_name, axis.units))\n else:\n new_axis = new_data.axes[axis_index]\n print(' {0} : {1} to {2} 
{3} (length {4})'.format(i, new_axis[0],\n new_axis[-1],\n new_axis.units,\n new_axis.size))\n return outs\n\n def transform(self, *axes, verbose=True):\n \"\"\"Transform the data.\n\n Parameters\n ----------\n axes : strings\n Expressions for the new set of axes.\n verbose : boolean (optional)\n Toggle talkback. Default is True\n \"\"\"\n # TODO: ensure that transform does not break data\n # create\n new = []\n current = {a.expression: a for a in self._axes}\n for expression in axes:\n axis = current.get(expression, Axis(self, expression))\n new.append(axis)\n self._axes = new\n # units\n for a in self._axes:\n if a.units is None:\n a.convert(a.variables[0].units)\n # finish\n self.flush()\n self._on_axes_updated()\n\n def zoom(self, factor, order=1, verbose=True):\n \"\"\"Zoom the data array using spline interpolation of the requested order.\n\n The number of points along each axis is increased by factor.\n See `scipy ndimage`__ for more info.\n\n __ http://docs.scipy.org/doc/scipy/reference/\n generated/scipy.ndimage.interpolation.zoom.html\n\n Parameters\n ----------\n factor : float\n The number of points along each axis will increase by this factor.\n order : int (optional)\n The order of the spline used to interpolate onto new points.\n verbose : bool (optional)\n Toggle talkback. Default is True.\n \"\"\"\n raise NotImplementedError\n import scipy.ndimage\n # axes\n for axis in self._axes:\n axis[:] = scipy.ndimage.interpolation.zoom(axis[:], factor, order=order)\n # channels\n for channel in self.channels:\n channel[:] = scipy.ndimage.interpolation.zoom(channel[:], factor, order=order)\n # return\n if verbose:\n print('data zoomed to new shape:', self.shape)\n", "path": "WrightTools/data/_data.py"}]} |
gh_patches_debug_1130 | rasdani/github-patches | git_diff | encode__starlette-88 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
CORSMiddleware is sending an extra 'http.response.body'
It seems that even with all tests passing and cors being successfully applied, CORSMiddleware still raises a runtime error.
Code being tested:
```python
app = Starlette()
app.add_middleware(CORSMiddleware, allow_origins=["*"])
@app.route("/")
async def homepage(request):
return PlainTextResponse('Hello', status_code=200)
if __name__ == "__main__":
uvicorn.run(app, host="0.0.0.0", port=8000)
```
And the error being produced:
```
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/home/alexbotello/.local/share/virtualenvs/starlette-dshJy1CJ/lib/python3.7/site-packages/uvicorn/protocols/http/httptools_impl.py", line 384, in run_asgi
result = await asgi(self.receive, self.send)
File "/home/alexbotello/Code/starlette/starlette/exceptions.py", line 60, in app
raise exc from None
File "/home/alexbotello/Code/starlette/starlette/exceptions.py", line 52, in app
await instance(receive, sender)
File "/home/alexbotello/Code/starlette/starlette/middleware/cors.py", line 116, in simple_response
await inner(receive, send)
File "/home/alexbotello/Code/starlette/starlette/applications.py", line 26, in awaitable
await response(receive, send)
File "/home/alexbotello/Code/starlette/starlette/responses.py", line 100, in __call__
await send({"type": "http.response.body", "body": self.body})
File "/home/alexbotello/Code/starlette/starlette/middleware/cors.py", line 130, in send
await send(message)
File "/home/alexbotello/Code/starlette/starlette/exceptions.py", line 47, in sender
await send(message)
File "/home/alexbotello/.local/share/virtualenvs/starlette-dshJy1CJ/lib/python3.7/site-packages/uvicorn/protocols/http/httptools_impl.py", line 518, in send
raise RuntimeError(msg % message_type)
RuntimeError: Unexpected ASGI message 'http.response.body' sent, after response already completed.
```
It seems the issue is originating from `send`. Specifically:
```python
if message["type"] != "http.response.start":
await send(message)
```
Removing this fixes the issue and does not break any tests.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `starlette/middleware/cors.py`
Content:
```
1 from starlette.datastructures import Headers, MutableHeaders, URL
2 from starlette.responses import PlainTextResponse
3 from starlette.types import ASGIApp, ASGIInstance, Scope
4 import functools
5 import typing
6
7
8 ALL_METHODS = ("DELETE", "GET", "OPTIONS", "PATCH", "POST", "PUT")
9
10
11 class CORSMiddleware:
12 def __init__(
13 self,
14 app: ASGIApp,
15 allow_origins: typing.Sequence[str] = (),
16 allow_methods: typing.Sequence[str] = ("GET",),
17 allow_headers: typing.Sequence[str] = (),
18 allow_credentials: bool = False,
19 expose_headers: typing.Sequence[str] = (),
20 max_age: int = 600,
21 ):
22
23 if "*" in allow_methods:
24 allow_methods = ALL_METHODS
25
26 simple_headers = {}
27 if "*" in allow_origins:
28 simple_headers["Access-Control-Allow-Origin"] = "*"
29 if allow_credentials:
30 simple_headers["Access-Control-Allow-Credentials"] = "true"
31 if expose_headers:
32 simple_headers["Access-Control-Expose-Headers"] = ", ".join(expose_headers)
33
34 preflight_headers = {}
35 if "*" in allow_origins:
36 preflight_headers["Access-Control-Allow-Origin"] = "*"
37 else:
38 preflight_headers["Vary"] = "Origin"
39 preflight_headers.update(
40 {
41 "Access-Control-Allow-Methods": ", ".join(allow_methods),
42 "Access-Control-Max-Age": str(max_age),
43 }
44 )
45 if allow_headers and "*" not in allow_headers:
46 preflight_headers["Access-Control-Allow-Headers"] = ", ".join(allow_headers)
47 if allow_credentials:
48 preflight_headers["Access-Control-Allow-Credentials"] = "true"
49
50 self.app = app
51 self.allow_origins = allow_origins
52 self.allow_methods = allow_methods
53 self.allow_headers = allow_headers
54 self.allow_all_origins = "*" in allow_origins
55 self.allow_all_headers = "*" in allow_headers
56 self.simple_headers = simple_headers
57 self.preflight_headers = preflight_headers
58
59 def __call__(self, scope: Scope):
60 if scope["type"] == "http":
61 method = scope["method"]
62 headers = Headers(scope["headers"])
63 origin = headers.get("origin")
64
65 if origin is not None:
66 if method == "OPTIONS" and "access-control-request-method" in headers:
67 return self.preflight_response(request_headers=headers)
68 else:
69 return functools.partial(
70 self.simple_response, scope=scope, origin=origin
71 )
72
73 return self.app(scope)
74
75 def preflight_response(self, request_headers):
76 requested_origin = request_headers["origin"]
77 requested_method = request_headers["access-control-request-method"]
78 requested_headers = request_headers.get("access-control-request-headers")
79 requested_cookie = "cookie" in request_headers
80
81 headers = dict(self.preflight_headers)
82 failures = []
83
84 # If we only allow specific origins, then we have to mirror back
85 # the Origin header in the response.
86 if not self.allow_all_origins:
87 if requested_origin in self.allow_origins:
88 headers["Access-Control-Allow-Origin"] = requested_origin
89 else:
90 failures.append("origin")
91
92 if requested_method not in self.allow_methods:
93 failures.append("method")
94
95 # If we allow all headers, then we have to mirror back any requested
96 # headers in the response.
97 if self.allow_all_headers and requested_headers is not None:
98 headers["Access-Control-Allow-Headers"] = requested_headers
99 elif requested_headers is not None:
100 for header in requested_headers.split(","):
101 if header.strip() not in self.allow_headers:
102 failures.append("headers")
103
104 # We don't strictly need to use 400 responses here, since its up to
105 # the browser to enforce the CORS policy, but its more informative
106 # if we do.
107 if failures:
108 failure_text = "Disallowed CORS " + ", ".join(failures)
109 return PlainTextResponse(failure_text, status_code=400, headers=headers)
110
111 return PlainTextResponse("OK", status_code=200, headers=headers)
112
113 async def simple_response(self, receive, send, scope=None, origin=None):
114 inner = self.app(scope)
115 send = functools.partial(self.send, send=send, origin=origin)
116 await inner(receive, send)
117
118 async def send(self, message, send=None, origin=None):
119 if message["type"] != "http.response.start":
120 await send(message)
121
122 message.setdefault("headers", [])
123 headers = MutableHeaders(message["headers"])
124
125 # If we only allow specific origins, then we have to mirror back
126 # the Origin header in the response.
127 if not self.allow_all_origins and origin in self.allow_origins:
128 headers["Access-Control-Allow-Origin"] = origin
129 headers.update(self.simple_headers)
130 await send(message)
131
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/starlette/middleware/cors.py b/starlette/middleware/cors.py
--- a/starlette/middleware/cors.py
+++ b/starlette/middleware/cors.py
@@ -118,6 +118,7 @@
async def send(self, message, send=None, origin=None):
if message["type"] != "http.response.start":
await send(message)
+ return
message.setdefault("headers", [])
headers = MutableHeaders(message["headers"])
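For readability, here is a minimal sketch of the patched `send` wrapper with the one-line `return` from the diff above applied. It is reconstructed from the file excerpt shown earlier in this record, not an authoritative copy of the Starlette repository; attributes such as `allow_all_origins` and `simple_headers` come from the constructor shown above and are assumed unchanged.

```python
from starlette.datastructures import MutableHeaders

class CORSMiddleware:
    # Only the patched coroutine is sketched; the rest of the class is unchanged.
    async def send(self, message, send=None, origin=None):
        if message["type"] != "http.response.start":
            # e.g. 'http.response.body': forward it once and stop. Without the
            # early return, execution fell through and re-sent the message,
            # which produced the duplicate-body RuntimeError from the issue.
            await send(message)
            return

        message.setdefault("headers", [])
        headers = MutableHeaders(message["headers"])
        # Mirror the Origin header back when only specific origins are allowed.
        if not self.allow_all_origins and origin in self.allow_origins:
            headers["Access-Control-Allow-Origin"] = origin
        headers.update(self.simple_headers)
        await send(message)
```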
| {"golden_diff": "diff --git a/starlette/middleware/cors.py b/starlette/middleware/cors.py\n--- a/starlette/middleware/cors.py\n+++ b/starlette/middleware/cors.py\n@@ -118,6 +118,7 @@\n async def send(self, message, send=None, origin=None):\n if message[\"type\"] != \"http.response.start\":\n await send(message)\n+ return\n \n message.setdefault(\"headers\", [])\n headers = MutableHeaders(message[\"headers\"])\n", "issue": "CORSMiddleware is sending an extra 'http.response.body'\nIt seems that even with all tests passing and cors being successfully applied, CORSMiddleware still raises a runtime error.\r\n\r\nCode being tested:\r\n```python\r\napp = Starlette()\r\n\r\napp.add_middleware(CORSMiddleware, allow_origins=[\"*\"])\r\n\r\[email protected](\"/\")\r\nasync def homepage(request):\r\n return PlainTextResponse('Hello', status_code=200)\r\n\r\nif __name__ == \"__main__\":\r\n uvicorn.run(app, host=\"0.0.0.0\", port=8000)\r\n```\r\n\r\nAnd the error being produced:\r\n```\r\nERROR: Exception in ASGI application\r\nTraceback (most recent call last):\r\n File \"/home/alexbotello/.local/share/virtualenvs/starlette-dshJy1CJ/lib/python3.7/site-packages/uvicorn/protocols/http/httptools_impl.py\", line 384, in run_asgi\r\n result = await asgi(self.receive, self.send)\r\n File \"/home/alexbotello/Code/starlette/starlette/exceptions.py\", line 60, in app\r\n raise exc from None\r\n File \"/home/alexbotello/Code/starlette/starlette/exceptions.py\", line 52, in app\r\n await instance(receive, sender)\r\n File \"/home/alexbotello/Code/starlette/starlette/middleware/cors.py\", line 116, in simple_response\r\n await inner(receive, send)\r\n File \"/home/alexbotello/Code/starlette/starlette/applications.py\", line 26, in awaitable\r\n await response(receive, send)\r\n File \"/home/alexbotello/Code/starlette/starlette/responses.py\", line 100, in __call__\r\n await send({\"type\": \"http.response.body\", \"body\": self.body})\r\n File \"/home/alexbotello/Code/starlette/starlette/middleware/cors.py\", line 130, in send\r\n await send(message)\r\n File \"/home/alexbotello/Code/starlette/starlette/exceptions.py\", line 47, in sender\r\n await send(message)\r\n File \"/home/alexbotello/.local/share/virtualenvs/starlette-dshJy1CJ/lib/python3.7/site-packages/uvicorn/protocols/http/httptools_impl.py\", line 518, in send\r\n raise RuntimeError(msg % message_type)\r\nRuntimeError: Unexpected ASGI message 'http.response.body' sent, after response already completed.\r\n```\r\nIt seems the issue is originating from `send`. 
Specifically:\r\n```python\r\nif message[\"type\"] != \"http.response.start\":\r\n await send(message)\r\n```\r\nRemoving this fixes the issue and does not break any tests.\n", "before_files": [{"content": "from starlette.datastructures import Headers, MutableHeaders, URL\nfrom starlette.responses import PlainTextResponse\nfrom starlette.types import ASGIApp, ASGIInstance, Scope\nimport functools\nimport typing\n\n\nALL_METHODS = (\"DELETE\", \"GET\", \"OPTIONS\", \"PATCH\", \"POST\", \"PUT\")\n\n\nclass CORSMiddleware:\n def __init__(\n self,\n app: ASGIApp,\n allow_origins: typing.Sequence[str] = (),\n allow_methods: typing.Sequence[str] = (\"GET\",),\n allow_headers: typing.Sequence[str] = (),\n allow_credentials: bool = False,\n expose_headers: typing.Sequence[str] = (),\n max_age: int = 600,\n ):\n\n if \"*\" in allow_methods:\n allow_methods = ALL_METHODS\n\n simple_headers = {}\n if \"*\" in allow_origins:\n simple_headers[\"Access-Control-Allow-Origin\"] = \"*\"\n if allow_credentials:\n simple_headers[\"Access-Control-Allow-Credentials\"] = \"true\"\n if expose_headers:\n simple_headers[\"Access-Control-Expose-Headers\"] = \", \".join(expose_headers)\n\n preflight_headers = {}\n if \"*\" in allow_origins:\n preflight_headers[\"Access-Control-Allow-Origin\"] = \"*\"\n else:\n preflight_headers[\"Vary\"] = \"Origin\"\n preflight_headers.update(\n {\n \"Access-Control-Allow-Methods\": \", \".join(allow_methods),\n \"Access-Control-Max-Age\": str(max_age),\n }\n )\n if allow_headers and \"*\" not in allow_headers:\n preflight_headers[\"Access-Control-Allow-Headers\"] = \", \".join(allow_headers)\n if allow_credentials:\n preflight_headers[\"Access-Control-Allow-Credentials\"] = \"true\"\n\n self.app = app\n self.allow_origins = allow_origins\n self.allow_methods = allow_methods\n self.allow_headers = allow_headers\n self.allow_all_origins = \"*\" in allow_origins\n self.allow_all_headers = \"*\" in allow_headers\n self.simple_headers = simple_headers\n self.preflight_headers = preflight_headers\n\n def __call__(self, scope: Scope):\n if scope[\"type\"] == \"http\":\n method = scope[\"method\"]\n headers = Headers(scope[\"headers\"])\n origin = headers.get(\"origin\")\n\n if origin is not None:\n if method == \"OPTIONS\" and \"access-control-request-method\" in headers:\n return self.preflight_response(request_headers=headers)\n else:\n return functools.partial(\n self.simple_response, scope=scope, origin=origin\n )\n\n return self.app(scope)\n\n def preflight_response(self, request_headers):\n requested_origin = request_headers[\"origin\"]\n requested_method = request_headers[\"access-control-request-method\"]\n requested_headers = request_headers.get(\"access-control-request-headers\")\n requested_cookie = \"cookie\" in request_headers\n\n headers = dict(self.preflight_headers)\n failures = []\n\n # If we only allow specific origins, then we have to mirror back\n # the Origin header in the response.\n if not self.allow_all_origins:\n if requested_origin in self.allow_origins:\n headers[\"Access-Control-Allow-Origin\"] = requested_origin\n else:\n failures.append(\"origin\")\n\n if requested_method not in self.allow_methods:\n failures.append(\"method\")\n\n # If we allow all headers, then we have to mirror back any requested\n # headers in the response.\n if self.allow_all_headers and requested_headers is not None:\n headers[\"Access-Control-Allow-Headers\"] = requested_headers\n elif requested_headers is not None:\n for header in requested_headers.split(\",\"):\n if header.strip() 
not in self.allow_headers:\n failures.append(\"headers\")\n\n # We don't strictly need to use 400 responses here, since its up to\n # the browser to enforce the CORS policy, but its more informative\n # if we do.\n if failures:\n failure_text = \"Disallowed CORS \" + \", \".join(failures)\n return PlainTextResponse(failure_text, status_code=400, headers=headers)\n\n return PlainTextResponse(\"OK\", status_code=200, headers=headers)\n\n async def simple_response(self, receive, send, scope=None, origin=None):\n inner = self.app(scope)\n send = functools.partial(self.send, send=send, origin=origin)\n await inner(receive, send)\n\n async def send(self, message, send=None, origin=None):\n if message[\"type\"] != \"http.response.start\":\n await send(message)\n\n message.setdefault(\"headers\", [])\n headers = MutableHeaders(message[\"headers\"])\n\n # If we only allow specific origins, then we have to mirror back\n # the Origin header in the response.\n if not self.allow_all_origins and origin in self.allow_origins:\n headers[\"Access-Control-Allow-Origin\"] = origin\n headers.update(self.simple_headers)\n await send(message)\n", "path": "starlette/middleware/cors.py"}], "after_files": [{"content": "from starlette.datastructures import Headers, MutableHeaders, URL\nfrom starlette.responses import PlainTextResponse\nfrom starlette.types import ASGIApp, ASGIInstance, Scope\nimport functools\nimport typing\n\n\nALL_METHODS = (\"DELETE\", \"GET\", \"OPTIONS\", \"PATCH\", \"POST\", \"PUT\")\n\n\nclass CORSMiddleware:\n def __init__(\n self,\n app: ASGIApp,\n allow_origins: typing.Sequence[str] = (),\n allow_methods: typing.Sequence[str] = (\"GET\",),\n allow_headers: typing.Sequence[str] = (),\n allow_credentials: bool = False,\n expose_headers: typing.Sequence[str] = (),\n max_age: int = 600,\n ):\n\n if \"*\" in allow_methods:\n allow_methods = ALL_METHODS\n\n simple_headers = {}\n if \"*\" in allow_origins:\n simple_headers[\"Access-Control-Allow-Origin\"] = \"*\"\n if allow_credentials:\n simple_headers[\"Access-Control-Allow-Credentials\"] = \"true\"\n if expose_headers:\n simple_headers[\"Access-Control-Expose-Headers\"] = \", \".join(expose_headers)\n\n preflight_headers = {}\n if \"*\" in allow_origins:\n preflight_headers[\"Access-Control-Allow-Origin\"] = \"*\"\n else:\n preflight_headers[\"Vary\"] = \"Origin\"\n preflight_headers.update(\n {\n \"Access-Control-Allow-Methods\": \", \".join(allow_methods),\n \"Access-Control-Max-Age\": str(max_age),\n }\n )\n if allow_headers and \"*\" not in allow_headers:\n preflight_headers[\"Access-Control-Allow-Headers\"] = \", \".join(allow_headers)\n if allow_credentials:\n preflight_headers[\"Access-Control-Allow-Credentials\"] = \"true\"\n\n self.app = app\n self.allow_origins = allow_origins\n self.allow_methods = allow_methods\n self.allow_headers = allow_headers\n self.allow_all_origins = \"*\" in allow_origins\n self.allow_all_headers = \"*\" in allow_headers\n self.simple_headers = simple_headers\n self.preflight_headers = preflight_headers\n\n def __call__(self, scope: Scope):\n if scope[\"type\"] == \"http\":\n method = scope[\"method\"]\n headers = Headers(scope[\"headers\"])\n origin = headers.get(\"origin\")\n\n if origin is not None:\n if method == \"OPTIONS\" and \"access-control-request-method\" in headers:\n return self.preflight_response(request_headers=headers)\n else:\n return functools.partial(\n self.simple_response, scope=scope, origin=origin\n )\n\n return self.app(scope)\n\n def preflight_response(self, request_headers):\n 
requested_origin = request_headers[\"origin\"]\n requested_method = request_headers[\"access-control-request-method\"]\n requested_headers = request_headers.get(\"access-control-request-headers\")\n requested_cookie = \"cookie\" in request_headers\n\n headers = dict(self.preflight_headers)\n failures = []\n\n # If we only allow specific origins, then we have to mirror back\n # the Origin header in the response.\n if not self.allow_all_origins:\n if requested_origin in self.allow_origins:\n headers[\"Access-Control-Allow-Origin\"] = requested_origin\n else:\n failures.append(\"origin\")\n\n if requested_method not in self.allow_methods:\n failures.append(\"method\")\n\n # If we allow all headers, then we have to mirror back any requested\n # headers in the response.\n if self.allow_all_headers and requested_headers is not None:\n headers[\"Access-Control-Allow-Headers\"] = requested_headers\n elif requested_headers is not None:\n for header in requested_headers.split(\",\"):\n if header.strip() not in self.allow_headers:\n failures.append(\"headers\")\n\n # We don't strictly need to use 400 responses here, since its up to\n # the browser to enforce the CORS policy, but its more informative\n # if we do.\n if failures:\n failure_text = \"Disallowed CORS \" + \", \".join(failures)\n return PlainTextResponse(failure_text, status_code=400, headers=headers)\n\n return PlainTextResponse(\"OK\", status_code=200, headers=headers)\n\n async def simple_response(self, receive, send, scope=None, origin=None):\n inner = self.app(scope)\n send = functools.partial(self.send, send=send, origin=origin)\n await inner(receive, send)\n\n async def send(self, message, send=None, origin=None):\n if message[\"type\"] != \"http.response.start\":\n await send(message)\n return\n\n message.setdefault(\"headers\", [])\n headers = MutableHeaders(message[\"headers\"])\n\n # If we only allow specific origins, then we have to mirror back\n # the Origin header in the response.\n if not self.allow_all_origins and origin in self.allow_origins:\n headers[\"Access-Control-Allow-Origin\"] = origin\n headers.update(self.simple_headers)\n await send(message)\n", "path": "starlette/middleware/cors.py"}]} |
gh_patches_debug_1131 | rasdani/github-patches | git_diff | kserve__kserve-2835 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
No matches for kind \"HorizontalPodAutoscaler\" in version \"autoscaling/v2beta2\
/kind bug
**What steps did you take and what happened:**
Deploy KServe in raw mode on Kubernetes 1.26, where autoscaling/v2beta2 is no longer available.
**What did you expect to happen:**
KServe should support v2 of the API.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `hack/python-sdk/update_release_version_helper.py`
Content:
```
1 #!/usr/bin/env python3
2
3 # Copyright 2023 The KServe Authors.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16
17 import tomlkit
18 import argparse
19
20 parser = argparse.ArgumentParser(description="Update release version in python toml files")
21 parser.add_argument("version", type=str, help="release version")
22 args, _ = parser.parse_known_args()
23
24 toml_files = [
25 "python/kserve/pyproject.toml",
26 "python/aiffairness/pyproject.toml",
27 "python/aixexplainer/pyproject.toml",
28 "python/alibiexplainer/pyproject.toml",
29 "python/artexplainer/pyproject.toml",
30 "python/custom_model/pyproject.toml",
31 "python/custom_transformer/pyproject.toml",
32 "python/lgbserver/pyproject.toml",
33 "python/paddleserver/pyproject.toml",
34 "python/pmmlserver/pyproject.toml",
35 "python/sklearnserver/pyproject.toml",
36 "python/xgbserver/pyproject.toml",
37 ]
38
39 for toml_file in toml_files:
40 with open(toml_file, "r") as file:
41 toml_config = tomlkit.load(file)
42 toml_config['tool']['poetry']['version'] = args.version
43
44 with open(toml_file, "w") as file:
45 tomlkit.dump(toml_config, file)
46
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/hack/python-sdk/update_release_version_helper.py b/hack/python-sdk/update_release_version_helper.py
--- a/hack/python-sdk/update_release_version_helper.py
+++ b/hack/python-sdk/update_release_version_helper.py
@@ -24,7 +24,6 @@
toml_files = [
"python/kserve/pyproject.toml",
"python/aiffairness/pyproject.toml",
- "python/aixexplainer/pyproject.toml",
"python/alibiexplainer/pyproject.toml",
"python/artexplainer/pyproject.toml",
"python/custom_model/pyproject.toml",
| {"golden_diff": "diff --git a/hack/python-sdk/update_release_version_helper.py b/hack/python-sdk/update_release_version_helper.py\n--- a/hack/python-sdk/update_release_version_helper.py\n+++ b/hack/python-sdk/update_release_version_helper.py\n@@ -24,7 +24,6 @@\n toml_files = [\n \"python/kserve/pyproject.toml\",\n \"python/aiffairness/pyproject.toml\",\n- \"python/aixexplainer/pyproject.toml\",\n \"python/alibiexplainer/pyproject.toml\",\n \"python/artexplainer/pyproject.toml\",\n \"python/custom_model/pyproject.toml\",\n", "issue": "No matches for kind \\\"HorizontalPodAutoscaler\\\" in version \\\"autoscaling/v2beta2\\\n/kind bug\r\n\r\n**What steps did you take and what happened:**\r\nDeploy kserve in raw mode on kubernetes 1.26 where autoscaling/v2beta2 is no longer available\r\n\r\n\r\n**What did you expect to happen:**\r\nKserve should support v2 of the api\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright 2023 The KServe Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport tomlkit\nimport argparse\n\nparser = argparse.ArgumentParser(description=\"Update release version in python toml files\")\nparser.add_argument(\"version\", type=str, help=\"release version\")\nargs, _ = parser.parse_known_args()\n\ntoml_files = [\n \"python/kserve/pyproject.toml\",\n \"python/aiffairness/pyproject.toml\",\n \"python/aixexplainer/pyproject.toml\",\n \"python/alibiexplainer/pyproject.toml\",\n \"python/artexplainer/pyproject.toml\",\n \"python/custom_model/pyproject.toml\",\n \"python/custom_transformer/pyproject.toml\",\n \"python/lgbserver/pyproject.toml\",\n \"python/paddleserver/pyproject.toml\",\n \"python/pmmlserver/pyproject.toml\",\n \"python/sklearnserver/pyproject.toml\",\n \"python/xgbserver/pyproject.toml\",\n]\n\nfor toml_file in toml_files:\n with open(toml_file, \"r\") as file:\n toml_config = tomlkit.load(file)\n toml_config['tool']['poetry']['version'] = args.version\n\n with open(toml_file, \"w\") as file:\n tomlkit.dump(toml_config, file)\n", "path": "hack/python-sdk/update_release_version_helper.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright 2023 The KServe Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport tomlkit\nimport argparse\n\nparser = argparse.ArgumentParser(description=\"Update release version in python toml files\")\nparser.add_argument(\"version\", type=str, help=\"release version\")\nargs, _ = parser.parse_known_args()\n\ntoml_files = [\n \"python/kserve/pyproject.toml\",\n 
\"python/aiffairness/pyproject.toml\",\n \"python/alibiexplainer/pyproject.toml\",\n \"python/artexplainer/pyproject.toml\",\n \"python/custom_model/pyproject.toml\",\n \"python/custom_transformer/pyproject.toml\",\n \"python/lgbserver/pyproject.toml\",\n \"python/paddleserver/pyproject.toml\",\n \"python/pmmlserver/pyproject.toml\",\n \"python/sklearnserver/pyproject.toml\",\n \"python/xgbserver/pyproject.toml\",\n]\n\nfor toml_file in toml_files:\n with open(toml_file, \"r\") as file:\n toml_config = tomlkit.load(file)\n toml_config['tool']['poetry']['version'] = args.version\n\n with open(toml_file, \"w\") as file:\n tomlkit.dump(toml_config, file)\n", "path": "hack/python-sdk/update_release_version_helper.py"}]} |
gh_patches_debug_1132 | rasdani/github-patches | git_diff | ckan__ckan-3962 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Package resources not included when installing from source in non-editable mode.
Known to affect:
- CKAN 2.7.2; Python 2.7.14; Ubuntu 16.04 & 17.10;
```
$ virtualenv --version
15.1.0
$ virtualenv --no-site-packages test-venv
<snip>
$ ./test-venv/bin/pip freeze --all
pip==9.0.1
setuptools==38.2.4
wheel==0.30.0
```
### Expected behaviour
Checking out the repository and installing it (without the editable flag) should install the required package resources like `ckan/migration/migrate.cfg`.
## Actual behaviour
Installing the package without the editable flag does not include the package resources, meaning all JavaScript, CSS, templates, and the migration config noted above are missing.
This makes the package non-functional.
For me the problem arose because `ckan/migration/migrate.cfg` did not exist in my install directory, and hence the database could not be created. There are numerous other files listed in `MANIFEST.in` that are also required for CKAN to run but are not included.
### What steps can be taken to reproduce the issue?
Following the [installing from source](http://docs.ckan.org/en/latest/maintaining/installing/install-from-source.html) instructions, omitting the `-e` flag from step **c**
## Resolution.
The solution to this is to add `include_package_data=True` to the `setup` call in `setup.py`
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # encoding: utf-8
2
3 import os
4 import os.path
5
6 # Avoid problem releasing to pypi from vagrant
7 if os.environ.get('USER', '') == 'vagrant':
8 del os.link
9
10 try:
11 from setuptools import (setup, find_packages,
12 __version__ as setuptools_version)
13 except ImportError:
14 from ez_setup import use_setuptools
15 use_setuptools()
16 from setuptools import (setup, find_packages,
17 __version__ as setuptools_version)
18
19 from ckan import (__version__, __description__, __long_description__,
20 __license__)
21
22
23 #
24 # Check setuptools version
25 #
26
27 def parse_version(s):
28 return map(int, s.split('.'))
29
30 HERE = os.path.dirname(__file__)
31 with open(os.path.join(HERE, 'requirement-setuptools.txt')) as f:
32 setuptools_requirement = f.read().strip()
33 min_setuptools_version = parse_version(setuptools_requirement.split('==')[1])
34 if parse_version(setuptools_version) < min_setuptools_version:
35 raise AssertionError(
36 'setuptools version error\n'
37 'You need a newer version of setuptools.\n'
38 'Install the recommended version:\n'
39 ' pip install -r requirement-setuptools.txt\n'
40 'and then try again to install ckan into your python environment.'
41 )
42
43
44 entry_points = {
45 'nose.plugins.0.10': [
46 'main = ckan.ckan_nose_plugin:CkanNose',
47 ],
48 'paste.app_factory': [
49 'main = ckan.config.middleware:make_app',
50 ],
51 'paste.app_install': [
52 'main = ckan.config.install:CKANInstaller',
53 ],
54 'paste.paster_command': [
55 'db = ckan.lib.cli:ManageDb',
56 'create-test-data = ckan.lib.cli:CreateTestDataCommand',
57 'sysadmin = ckan.lib.cli:Sysadmin',
58 'user = ckan.lib.cli:UserCmd',
59 'dataset = ckan.lib.cli:DatasetCmd',
60 'search-index = ckan.lib.cli:SearchIndexCommand',
61 'ratings = ckan.lib.cli:Ratings',
62 'notify = ckan.lib.cli:Notification',
63 'celeryd = ckan.lib.cli:Celery',
64 'rdf-export = ckan.lib.cli:RDFExport',
65 'tracking = ckan.lib.cli:Tracking',
66 'plugin-info = ckan.lib.cli:PluginInfo',
67 'profile = ckan.lib.cli:Profile',
68 'color = ckan.lib.cli:CreateColorSchemeCommand',
69 'check-po-files = ckan.i18n.check_po_files:CheckPoFiles',
70 'trans = ckan.lib.cli:TranslationsCommand',
71 'minify = ckan.lib.cli:MinifyCommand',
72 'less = ckan.lib.cli:LessCommand',
73 'datastore = ckanext.datastore.commands:datastore_group',
74 'datapusher = ckanext.datapusher.cli:DatapusherCommand',
75 'front-end-build = ckan.lib.cli:FrontEndBuildCommand',
76 'views = ckan.lib.cli:ViewsCommand',
77 'config-tool = ckan.lib.cli:ConfigToolCommand',
78 'jobs = ckan.lib.cli:JobsCommand',
79 ],
80 'console_scripts': [
81 'ckan-admin = bin.ckan_admin:Command',
82 ],
83 'paste.paster_create_template': [
84 'ckanext = ckan.pastertemplates:CkanextTemplate',
85 ],
86 'ckan.forms': [
87 'standard = ckan.forms.package:get_standard_fieldset',
88 'package = ckan.forms.package:get_standard_fieldset',
89 'group = ckan.forms.group:get_group_fieldset',
90 'package_group = ckan.forms.group:get_package_group_fieldset',
91 ],
92 'ckan.search': [
93 'sql = ckan.lib.search.sql:SqlSearchBackend',
94 'solr = ckan.lib.search.solr_backend:SolrSearchBackend',
95 ],
96 'ckan.plugins': [
97 'synchronous_search = ckan.lib.search:SynchronousSearchPlugin',
98 'stats = ckanext.stats.plugin:StatsPlugin',
99 'publisher_form = ckanext.publisher_form.forms:PublisherForm',
100 'publisher_dataset_form = ckanext.publisher_form.forms:PublisherDatasetForm',
101 'multilingual_dataset = ckanext.multilingual.plugin:MultilingualDataset',
102 'multilingual_group = ckanext.multilingual.plugin:MultilingualGroup',
103 'multilingual_tag = ckanext.multilingual.plugin:MultilingualTag',
104 'multilingual_resource = ckanext.multilingual.plugin:MultilingualResource',
105 'organizations = ckanext.organizations.forms:OrganizationForm',
106 'organizations_dataset = ckanext.organizations.forms:OrganizationDatasetForm',
107 'datastore = ckanext.datastore.plugin:DatastorePlugin',
108 'datapusher=ckanext.datapusher.plugin:DatapusherPlugin',
109 'test_tag_vocab_plugin = ckanext.test_tag_vocab_plugin:MockVocabTagsPlugin',
110 'resource_proxy = ckanext.resourceproxy.plugin:ResourceProxy',
111 'text_view = ckanext.textview.plugin:TextView',
112 'recline_view = ckanext.reclineview.plugin:ReclineView',
113 'recline_grid_view = ckanext.reclineview.plugin:ReclineGridView',
114 'recline_graph_view = ckanext.reclineview.plugin:ReclineGraphView',
115 'recline_map_view = ckanext.reclineview.plugin:ReclineMapView',
116 'datatables_view = ckanext.datatablesview.plugin:DataTablesView',
117 'image_view = ckanext.imageview.plugin:ImageView',
118 'webpage_view = ckanext.webpageview.plugin:WebPageView',
119 # FIXME: Remove deprecated resource previews below. You should use the
120 # versions as *_view instead.
121 'text_preview = ckanext.textview.plugin:TextView',
122 'recline_preview = ckanext.reclineview.plugin:ReclineView',
123 'recline_grid = ckanext.reclineview.plugin:ReclineGridView',
124 'recline_graph = ckanext.reclineview.plugin:ReclineGraphView',
125 'recline_map = ckanext.reclineview.plugin:ReclineMapView',
126 # End of deprecated previews
127 'example_itemplatehelpers = ckanext.example_itemplatehelpers.plugin:ExampleITemplateHelpersPlugin',
128 'example_idatasetform = ckanext.example_idatasetform.plugin:ExampleIDatasetFormPlugin',
129 'example_idatasetform_v1 = ckanext.example_idatasetform.plugin_v1:ExampleIDatasetFormPlugin',
130 'example_idatasetform_v2 = ckanext.example_idatasetform.plugin_v2:ExampleIDatasetFormPlugin',
131 'example_idatasetform_v3 = ckanext.example_idatasetform.plugin_v3:ExampleIDatasetFormPlugin',
132 'example_idatasetform_v4 = ckanext.example_idatasetform.plugin_v4:ExampleIDatasetFormPlugin',
133 'example_igroupform = ckanext.example_igroupform.plugin:ExampleIGroupFormPlugin',
134 'example_igroupform_default_group_type = ckanext.example_igroupform.plugin:ExampleIGroupFormPlugin_DefaultGroupType',
135 'example_igroupform_organization = ckanext.example_igroupform.plugin:ExampleIGroupFormOrganizationPlugin',
136 'example_iauthfunctions_v1 = ckanext.example_iauthfunctions.plugin_v1:ExampleIAuthFunctionsPlugin',
137 'example_iauthfunctions_v2 = ckanext.example_iauthfunctions.plugin_v2:ExampleIAuthFunctionsPlugin',
138 'example_iauthfunctions_v3 = ckanext.example_iauthfunctions.plugin_v3:ExampleIAuthFunctionsPlugin',
139 'example_iauthfunctions_v4 = ckanext.example_iauthfunctions.plugin_v4:ExampleIAuthFunctionsPlugin',
140 'example_iauthfunctions_v5_custom_config_setting = ckanext.example_iauthfunctions.plugin_v5_custom_config_setting:ExampleIAuthFunctionsPlugin',
141 'example_iauthfunctions_v6_parent_auth_functions = ckanext.example_iauthfunctions.plugin_v6_parent_auth_functions:ExampleIAuthFunctionsPlugin',
142 'example_theme_v01_empty_extension = ckanext.example_theme_docs.v01_empty_extension.plugin:ExampleThemePlugin',
143 'example_theme_v02_empty_template = ckanext.example_theme_docs.v02_empty_template.plugin:ExampleThemePlugin',
144 'example_theme_v03_jinja = ckanext.example_theme_docs.v03_jinja.plugin:ExampleThemePlugin',
145 'example_theme_v04_ckan_extends = ckanext.example_theme_docs.v04_ckan_extends.plugin:ExampleThemePlugin',
146 'example_theme_v05_block = ckanext.example_theme_docs.v05_block.plugin:ExampleThemePlugin',
147 'example_theme_v06_super = ckanext.example_theme_docs.v06_super.plugin:ExampleThemePlugin',
148 'example_theme_v07_helper_function = ckanext.example_theme_docs.v07_helper_function.plugin:ExampleThemePlugin',
149 'example_theme_v08_custom_helper_function = ckanext.example_theme_docs.v08_custom_helper_function.plugin:ExampleThemePlugin',
150 'example_theme_v09_snippet = ckanext.example_theme_docs.v09_snippet.plugin:ExampleThemePlugin',
151 'example_theme_v10_custom_snippet = ckanext.example_theme_docs.v10_custom_snippet.plugin:ExampleThemePlugin',
152 'example_theme_v11_HTML_and_CSS = ckanext.example_theme_docs.v11_HTML_and_CSS.plugin:ExampleThemePlugin',
153 'example_theme_v12_extra_public_dir = ckanext.example_theme_docs.v12_extra_public_dir.plugin:ExampleThemePlugin',
154 'example_theme_v13_custom_css = ckanext.example_theme_docs.v13_custom_css.plugin:ExampleThemePlugin',
155 'example_theme_v14_more_custom_css = ckanext.example_theme_docs.v14_more_custom_css.plugin:ExampleThemePlugin',
156 'example_theme_v15_fanstatic = ckanext.example_theme_docs.v15_fanstatic.plugin:ExampleThemePlugin',
157 'example_theme_v16_initialize_a_javascript_module = ckanext.example_theme_docs.v16_initialize_a_javascript_module.plugin:ExampleThemePlugin',
158 'example_theme_v17_popover = ckanext.example_theme_docs.v17_popover.plugin:ExampleThemePlugin',
159 'example_theme_v18_snippet_api = ckanext.example_theme_docs.v18_snippet_api.plugin:ExampleThemePlugin',
160 'example_theme_v19_01_error = ckanext.example_theme_docs.v19_01_error.plugin:ExampleThemePlugin',
161 'example_theme_v19_02_error_handling = ckanext.example_theme_docs.v19_02_error_handling.plugin:ExampleThemePlugin',
162 'example_theme_v20_pubsub = ckanext.example_theme_docs.v20_pubsub.plugin:ExampleThemePlugin',
163 'example_theme_v21_custom_jquery_plugin = ckanext.example_theme_docs.v21_custom_jquery_plugin.plugin:ExampleThemePlugin',
164 'example_theme_custom_config_setting = ckanext.example_theme_docs.custom_config_setting.plugin:ExampleThemePlugin',
165 'example_theme_custom_emails = ckanext.example_theme_docs.custom_emails.plugin:ExampleCustomEmailsPlugin',
166 'example_iresourcecontroller = ckanext.example_iresourcecontroller.plugin:ExampleIResourceControllerPlugin',
167 'example_ivalidators = ckanext.example_ivalidators.plugin:ExampleIValidatorsPlugin',
168 'example_iconfigurer = ckanext.example_iconfigurer.plugin:ExampleIConfigurerPlugin',
169 'example_itranslation = ckanext.example_itranslation.plugin:ExampleITranslationPlugin',
170 'example_iconfigurer_v1 = ckanext.example_iconfigurer.plugin_v1:ExampleIConfigurerPlugin',
171 'example_iconfigurer_v2 = ckanext.example_iconfigurer.plugin_v2:ExampleIConfigurerPlugin',
172 'example_flask_iblueprint = ckanext.example_flask_iblueprint.plugin:ExampleFlaskIBlueprintPlugin',
173 'example_iuploader = ckanext.example_iuploader.plugin:ExampleIUploader',
174 'example_idatastorebackend = ckanext.example_idatastorebackend.plugin:ExampleIDatastoreBackendPlugin',
175 'example_ipermissionlabels = ckanext.example_ipermissionlabels.plugin:ExampleIPermissionLabelsPlugin',
176 ],
177 'ckan.system_plugins': [
178 'domain_object_mods = ckan.model.modification:DomainObjectModificationExtension',
179 ],
180 'ckan.test_plugins': [
181 'routes_plugin = tests.legacy.ckantestplugins:RoutesPlugin',
182 'mapper_plugin = tests.legacy.ckantestplugins:MapperPlugin',
183 'session_plugin = tests.legacy.ckantestplugins:SessionPlugin',
184 'mapper_plugin2 = tests.legacy.ckantestplugins:MapperPlugin2',
185 'authorizer_plugin = tests.legacy.ckantestplugins:AuthorizerPlugin',
186 'test_observer_plugin = tests.legacy.ckantestplugins:PluginObserverPlugin',
187 'action_plugin = tests.legacy.ckantestplugins:ActionPlugin',
188 'auth_plugin = tests.legacy.ckantestplugins:AuthPlugin',
189 'test_group_plugin = tests.legacy.ckantestplugins:MockGroupControllerPlugin',
190 'test_package_controller_plugin = tests.legacy.ckantestplugins:MockPackageControllerPlugin',
191 'test_resource_preview = tests.legacy.ckantestplugins:MockResourcePreviewExtension',
192 'test_json_resource_preview = tests.legacy.ckantestplugins:JsonMockResourcePreviewExtension',
193 'sample_datastore_plugin = ckanext.datastore.tests.sample_datastore_plugin:SampleDataStorePlugin',
194 'example_datastore_deleted_with_count_plugin = ckanext.datastore.tests.test_chained_action:ExampleDataStoreDeletedWithCountPlugin',
195 'test_datastore_view = ckan.tests.lib.test_datapreview:MockDatastoreBasedResourceView',
196 'test_datapusher_plugin = ckanext.datapusher.tests.test_interfaces:FakeDataPusherPlugin',
197 'test_routing_plugin = ckan.tests.config.test_middleware:MockRoutingPlugin',
198 'test_flash_plugin = ckan.tests.config.test_sessions:FlashMessagePlugin',
199 'test_helpers_plugin = ckan.tests.lib.test_helpers:TestHelpersPlugin',
200 'test_feed_plugin = ckan.tests.controllers.test_feed:MockFeedPlugin',
201 'test_js_translations_plugin = ckan.tests.lib.test_i18n:TestJSTranslationsPlugin',
202 ],
203 'babel.extractors': [
204 'ckan = ckan.lib.extract:extract_ckan',
205 ],
206 }
207
208 setup(
209 name='ckan',
210 version=__version__,
211 author='https://github.com/ckan/ckan/graphs/contributors',
212 author_email='[email protected]',
213 license=__license__,
214 url='http://ckan.org/',
215 description=__description__,
216 keywords='data packaging component tool server',
217 long_description=__long_description__,
218 zip_safe=False,
219 packages=find_packages(exclude=['ez_setup']),
220 namespace_packages=['ckanext', 'ckanext.stats'],
221 message_extractors={
222 'ckan': [
223 ('**.py', 'python', None),
224 ('**.js', 'javascript', None),
225 ('templates/importer/**', 'ignore', None),
226 ('templates/**.html', 'ckan', None),
227 ('templates/**.txt', 'ckan', None),
228 ('templates_legacy/**.html', 'ckan', None),
229 ('public/**', 'ignore', None),
230 ],
231 'ckanext': [
232 ('**.py', 'python', None),
233 ('**.js', 'javascript', None),
234 ('**.html', 'ckan', None),
235 ('multilingual/solr/*.txt', 'ignore', None),
236 ]
237 },
238 entry_points=entry_points,
239 # setup.py test command needs a TestSuite so does not work with py.test
240 # test_suite = 'nose.collector',
241 # tests_require=[ 'py >= 0.8.0-alpha2' ]
242 classifiers=[
243 # https://pypi.python.org/pypi?%3Aaction=list_classifiers
244 'Development Status :: 5 - Production/Stable',
245 'License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)',
246 'Programming Language :: Python',
247 'Programming Language :: Python :: 2 :: Only',
248 'Programming Language :: Python :: 2.7',
249 ],
250 )
251
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -216,6 +216,7 @@
keywords='data packaging component tool server',
long_description=__long_description__,
zip_safe=False,
+ include_package_data=True,
packages=find_packages(exclude=['ez_setup']),
namespace_packages=['ckanext', 'ckanext.stats'],
message_extractors={
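As a hedged illustration of the resolution above (not CKAN's actual `setup.py`, which is shown in full earlier in this record), the key change is passing `include_package_data=True` to `setup()` so that files declared in `MANIFEST.in` are shipped with a regular, non-editable install:

```python
from setuptools import setup, find_packages

# Minimal sketch only; the name and version are placeholders, not CKAN's real metadata.
setup(
    name="ckan",
    version="0.0.0",
    packages=find_packages(exclude=["ez_setup"]),
    # With this flag, files matched by MANIFEST.in (templates, JS/CSS,
    # ckan/migration/migrate.cfg, ...) are copied into the installed package,
    # so `pip install .` without -e produces a working install.
    include_package_data=True,
    zip_safe=False,
)
```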
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -216,6 +216,7 @@\n keywords='data packaging component tool server',\n long_description=__long_description__,\n zip_safe=False,\n+ include_package_data=True,\n packages=find_packages(exclude=['ez_setup']),\n namespace_packages=['ckanext', 'ckanext.stats'],\n message_extractors={\n", "issue": "Package resources not included when installing from source in non-editable mode.\nKnown affects:\r\n - CKAN 2.7.2; Python 2.7.14; Ubuntu 16.04 & 17.10; \r\n ```\r\n $ virtualenv --version\r\n 15.1.0\r\n $ virtualenv --no-site-packages test-venv\r\n <snip>\r\n $ ./test-venv/bin/pip freeze --all \r\n pip==9.0.1\r\n setuptools==38.2.4\r\n wheel==0.30.0\r\n ```\r\n\r\n### Expected behaviour\r\nChecking out the repository and installing it(without the editable flag) should install the required package resources like `ckan/migration/migrate.cfg`.\r\n\r\n## Actual behaviour\r\nInstalling the package without the editable flag does not include the package resources meaning all JavaScript, CSS, templates and the migration config noted above are not included.\r\n\r\nThis makes the package non-functional. \r\n\r\nFor me the problem arose because `ckan/migration/migrate.cfg` did not exist in my install directory, and hence the database could not be created. There are numerous other files listed in `MANIFEST.in` which are also required for CKAN to run which are not included.\r\n\r\n### What steps can be taken to reproduce the issue? \r\n\r\nFollowing the [installing from source](http://docs.ckan.org/en/latest/maintaining/installing/install-from-source.html) instructions, omitting the `-e` flag from step **c**\r\n\r\n## Resolution.\r\n\r\nThe solution to this is to add `include_package_data=True` to the `setup` call in `setup.py`\r\n\r\n\n", "before_files": [{"content": "# encoding: utf-8\n\nimport os\nimport os.path\n\n# Avoid problem releasing to pypi from vagrant\nif os.environ.get('USER', '') == 'vagrant':\n del os.link\n\ntry:\n from setuptools import (setup, find_packages,\n __version__ as setuptools_version)\nexcept ImportError:\n from ez_setup import use_setuptools\n use_setuptools()\n from setuptools import (setup, find_packages,\n __version__ as setuptools_version)\n\nfrom ckan import (__version__, __description__, __long_description__,\n __license__)\n\n\n#\n# Check setuptools version\n#\n\ndef parse_version(s):\n return map(int, s.split('.'))\n\nHERE = os.path.dirname(__file__)\nwith open(os.path.join(HERE, 'requirement-setuptools.txt')) as f:\n setuptools_requirement = f.read().strip()\nmin_setuptools_version = parse_version(setuptools_requirement.split('==')[1])\nif parse_version(setuptools_version) < min_setuptools_version:\n raise AssertionError(\n 'setuptools version error\\n'\n 'You need a newer version of setuptools.\\n'\n 'Install the recommended version:\\n'\n ' pip install -r requirement-setuptools.txt\\n'\n 'and then try again to install ckan into your python environment.'\n )\n\n\nentry_points = {\n 'nose.plugins.0.10': [\n 'main = ckan.ckan_nose_plugin:CkanNose',\n ],\n 'paste.app_factory': [\n 'main = ckan.config.middleware:make_app',\n ],\n 'paste.app_install': [\n 'main = ckan.config.install:CKANInstaller',\n ],\n 'paste.paster_command': [\n 'db = ckan.lib.cli:ManageDb',\n 'create-test-data = ckan.lib.cli:CreateTestDataCommand',\n 'sysadmin = ckan.lib.cli:Sysadmin',\n 'user = ckan.lib.cli:UserCmd',\n 'dataset = ckan.lib.cli:DatasetCmd',\n 'search-index = ckan.lib.cli:SearchIndexCommand',\n 'ratings = 
ckan.lib.cli:Ratings',\n 'notify = ckan.lib.cli:Notification',\n 'celeryd = ckan.lib.cli:Celery',\n 'rdf-export = ckan.lib.cli:RDFExport',\n 'tracking = ckan.lib.cli:Tracking',\n 'plugin-info = ckan.lib.cli:PluginInfo',\n 'profile = ckan.lib.cli:Profile',\n 'color = ckan.lib.cli:CreateColorSchemeCommand',\n 'check-po-files = ckan.i18n.check_po_files:CheckPoFiles',\n 'trans = ckan.lib.cli:TranslationsCommand',\n 'minify = ckan.lib.cli:MinifyCommand',\n 'less = ckan.lib.cli:LessCommand',\n 'datastore = ckanext.datastore.commands:datastore_group',\n 'datapusher = ckanext.datapusher.cli:DatapusherCommand',\n 'front-end-build = ckan.lib.cli:FrontEndBuildCommand',\n 'views = ckan.lib.cli:ViewsCommand',\n 'config-tool = ckan.lib.cli:ConfigToolCommand',\n 'jobs = ckan.lib.cli:JobsCommand',\n ],\n 'console_scripts': [\n 'ckan-admin = bin.ckan_admin:Command',\n ],\n 'paste.paster_create_template': [\n 'ckanext = ckan.pastertemplates:CkanextTemplate',\n ],\n 'ckan.forms': [\n 'standard = ckan.forms.package:get_standard_fieldset',\n 'package = ckan.forms.package:get_standard_fieldset',\n 'group = ckan.forms.group:get_group_fieldset',\n 'package_group = ckan.forms.group:get_package_group_fieldset',\n ],\n 'ckan.search': [\n 'sql = ckan.lib.search.sql:SqlSearchBackend',\n 'solr = ckan.lib.search.solr_backend:SolrSearchBackend',\n ],\n 'ckan.plugins': [\n 'synchronous_search = ckan.lib.search:SynchronousSearchPlugin',\n 'stats = ckanext.stats.plugin:StatsPlugin',\n 'publisher_form = ckanext.publisher_form.forms:PublisherForm',\n 'publisher_dataset_form = ckanext.publisher_form.forms:PublisherDatasetForm',\n 'multilingual_dataset = ckanext.multilingual.plugin:MultilingualDataset',\n 'multilingual_group = ckanext.multilingual.plugin:MultilingualGroup',\n 'multilingual_tag = ckanext.multilingual.plugin:MultilingualTag',\n 'multilingual_resource = ckanext.multilingual.plugin:MultilingualResource',\n 'organizations = ckanext.organizations.forms:OrganizationForm',\n 'organizations_dataset = ckanext.organizations.forms:OrganizationDatasetForm',\n 'datastore = ckanext.datastore.plugin:DatastorePlugin',\n 'datapusher=ckanext.datapusher.plugin:DatapusherPlugin',\n 'test_tag_vocab_plugin = ckanext.test_tag_vocab_plugin:MockVocabTagsPlugin',\n 'resource_proxy = ckanext.resourceproxy.plugin:ResourceProxy',\n 'text_view = ckanext.textview.plugin:TextView',\n 'recline_view = ckanext.reclineview.plugin:ReclineView',\n 'recline_grid_view = ckanext.reclineview.plugin:ReclineGridView',\n 'recline_graph_view = ckanext.reclineview.plugin:ReclineGraphView',\n 'recline_map_view = ckanext.reclineview.plugin:ReclineMapView',\n 'datatables_view = ckanext.datatablesview.plugin:DataTablesView',\n 'image_view = ckanext.imageview.plugin:ImageView',\n 'webpage_view = ckanext.webpageview.plugin:WebPageView',\n # FIXME: Remove deprecated resource previews below. 
You should use the\n # versions as *_view instead.\n 'text_preview = ckanext.textview.plugin:TextView',\n 'recline_preview = ckanext.reclineview.plugin:ReclineView',\n 'recline_grid = ckanext.reclineview.plugin:ReclineGridView',\n 'recline_graph = ckanext.reclineview.plugin:ReclineGraphView',\n 'recline_map = ckanext.reclineview.plugin:ReclineMapView',\n # End of deprecated previews\n 'example_itemplatehelpers = ckanext.example_itemplatehelpers.plugin:ExampleITemplateHelpersPlugin',\n 'example_idatasetform = ckanext.example_idatasetform.plugin:ExampleIDatasetFormPlugin',\n 'example_idatasetform_v1 = ckanext.example_idatasetform.plugin_v1:ExampleIDatasetFormPlugin',\n 'example_idatasetform_v2 = ckanext.example_idatasetform.plugin_v2:ExampleIDatasetFormPlugin',\n 'example_idatasetform_v3 = ckanext.example_idatasetform.plugin_v3:ExampleIDatasetFormPlugin',\n 'example_idatasetform_v4 = ckanext.example_idatasetform.plugin_v4:ExampleIDatasetFormPlugin',\n 'example_igroupform = ckanext.example_igroupform.plugin:ExampleIGroupFormPlugin',\n 'example_igroupform_default_group_type = ckanext.example_igroupform.plugin:ExampleIGroupFormPlugin_DefaultGroupType',\n 'example_igroupform_organization = ckanext.example_igroupform.plugin:ExampleIGroupFormOrganizationPlugin',\n 'example_iauthfunctions_v1 = ckanext.example_iauthfunctions.plugin_v1:ExampleIAuthFunctionsPlugin',\n 'example_iauthfunctions_v2 = ckanext.example_iauthfunctions.plugin_v2:ExampleIAuthFunctionsPlugin',\n 'example_iauthfunctions_v3 = ckanext.example_iauthfunctions.plugin_v3:ExampleIAuthFunctionsPlugin',\n 'example_iauthfunctions_v4 = ckanext.example_iauthfunctions.plugin_v4:ExampleIAuthFunctionsPlugin',\n 'example_iauthfunctions_v5_custom_config_setting = ckanext.example_iauthfunctions.plugin_v5_custom_config_setting:ExampleIAuthFunctionsPlugin',\n 'example_iauthfunctions_v6_parent_auth_functions = ckanext.example_iauthfunctions.plugin_v6_parent_auth_functions:ExampleIAuthFunctionsPlugin',\n 'example_theme_v01_empty_extension = ckanext.example_theme_docs.v01_empty_extension.plugin:ExampleThemePlugin',\n 'example_theme_v02_empty_template = ckanext.example_theme_docs.v02_empty_template.plugin:ExampleThemePlugin',\n 'example_theme_v03_jinja = ckanext.example_theme_docs.v03_jinja.plugin:ExampleThemePlugin',\n 'example_theme_v04_ckan_extends = ckanext.example_theme_docs.v04_ckan_extends.plugin:ExampleThemePlugin',\n 'example_theme_v05_block = ckanext.example_theme_docs.v05_block.plugin:ExampleThemePlugin',\n 'example_theme_v06_super = ckanext.example_theme_docs.v06_super.plugin:ExampleThemePlugin',\n 'example_theme_v07_helper_function = ckanext.example_theme_docs.v07_helper_function.plugin:ExampleThemePlugin',\n 'example_theme_v08_custom_helper_function = ckanext.example_theme_docs.v08_custom_helper_function.plugin:ExampleThemePlugin',\n 'example_theme_v09_snippet = ckanext.example_theme_docs.v09_snippet.plugin:ExampleThemePlugin',\n 'example_theme_v10_custom_snippet = ckanext.example_theme_docs.v10_custom_snippet.plugin:ExampleThemePlugin',\n 'example_theme_v11_HTML_and_CSS = ckanext.example_theme_docs.v11_HTML_and_CSS.plugin:ExampleThemePlugin',\n 'example_theme_v12_extra_public_dir = ckanext.example_theme_docs.v12_extra_public_dir.plugin:ExampleThemePlugin',\n 'example_theme_v13_custom_css = ckanext.example_theme_docs.v13_custom_css.plugin:ExampleThemePlugin',\n 'example_theme_v14_more_custom_css = ckanext.example_theme_docs.v14_more_custom_css.plugin:ExampleThemePlugin',\n 'example_theme_v15_fanstatic = 
ckanext.example_theme_docs.v15_fanstatic.plugin:ExampleThemePlugin',\n 'example_theme_v16_initialize_a_javascript_module = ckanext.example_theme_docs.v16_initialize_a_javascript_module.plugin:ExampleThemePlugin',\n 'example_theme_v17_popover = ckanext.example_theme_docs.v17_popover.plugin:ExampleThemePlugin',\n 'example_theme_v18_snippet_api = ckanext.example_theme_docs.v18_snippet_api.plugin:ExampleThemePlugin',\n 'example_theme_v19_01_error = ckanext.example_theme_docs.v19_01_error.plugin:ExampleThemePlugin',\n 'example_theme_v19_02_error_handling = ckanext.example_theme_docs.v19_02_error_handling.plugin:ExampleThemePlugin',\n 'example_theme_v20_pubsub = ckanext.example_theme_docs.v20_pubsub.plugin:ExampleThemePlugin',\n 'example_theme_v21_custom_jquery_plugin = ckanext.example_theme_docs.v21_custom_jquery_plugin.plugin:ExampleThemePlugin',\n 'example_theme_custom_config_setting = ckanext.example_theme_docs.custom_config_setting.plugin:ExampleThemePlugin',\n 'example_theme_custom_emails = ckanext.example_theme_docs.custom_emails.plugin:ExampleCustomEmailsPlugin',\n 'example_iresourcecontroller = ckanext.example_iresourcecontroller.plugin:ExampleIResourceControllerPlugin',\n 'example_ivalidators = ckanext.example_ivalidators.plugin:ExampleIValidatorsPlugin',\n 'example_iconfigurer = ckanext.example_iconfigurer.plugin:ExampleIConfigurerPlugin',\n 'example_itranslation = ckanext.example_itranslation.plugin:ExampleITranslationPlugin',\n 'example_iconfigurer_v1 = ckanext.example_iconfigurer.plugin_v1:ExampleIConfigurerPlugin',\n 'example_iconfigurer_v2 = ckanext.example_iconfigurer.plugin_v2:ExampleIConfigurerPlugin',\n 'example_flask_iblueprint = ckanext.example_flask_iblueprint.plugin:ExampleFlaskIBlueprintPlugin',\n 'example_iuploader = ckanext.example_iuploader.plugin:ExampleIUploader',\n 'example_idatastorebackend = ckanext.example_idatastorebackend.plugin:ExampleIDatastoreBackendPlugin',\n 'example_ipermissionlabels = ckanext.example_ipermissionlabels.plugin:ExampleIPermissionLabelsPlugin',\n ],\n 'ckan.system_plugins': [\n 'domain_object_mods = ckan.model.modification:DomainObjectModificationExtension',\n ],\n 'ckan.test_plugins': [\n 'routes_plugin = tests.legacy.ckantestplugins:RoutesPlugin',\n 'mapper_plugin = tests.legacy.ckantestplugins:MapperPlugin',\n 'session_plugin = tests.legacy.ckantestplugins:SessionPlugin',\n 'mapper_plugin2 = tests.legacy.ckantestplugins:MapperPlugin2',\n 'authorizer_plugin = tests.legacy.ckantestplugins:AuthorizerPlugin',\n 'test_observer_plugin = tests.legacy.ckantestplugins:PluginObserverPlugin',\n 'action_plugin = tests.legacy.ckantestplugins:ActionPlugin',\n 'auth_plugin = tests.legacy.ckantestplugins:AuthPlugin',\n 'test_group_plugin = tests.legacy.ckantestplugins:MockGroupControllerPlugin',\n 'test_package_controller_plugin = tests.legacy.ckantestplugins:MockPackageControllerPlugin',\n 'test_resource_preview = tests.legacy.ckantestplugins:MockResourcePreviewExtension',\n 'test_json_resource_preview = tests.legacy.ckantestplugins:JsonMockResourcePreviewExtension',\n 'sample_datastore_plugin = ckanext.datastore.tests.sample_datastore_plugin:SampleDataStorePlugin',\n 'example_datastore_deleted_with_count_plugin = ckanext.datastore.tests.test_chained_action:ExampleDataStoreDeletedWithCountPlugin',\n 'test_datastore_view = ckan.tests.lib.test_datapreview:MockDatastoreBasedResourceView',\n 'test_datapusher_plugin = ckanext.datapusher.tests.test_interfaces:FakeDataPusherPlugin',\n 'test_routing_plugin = 
ckan.tests.config.test_middleware:MockRoutingPlugin',\n 'test_flash_plugin = ckan.tests.config.test_sessions:FlashMessagePlugin',\n 'test_helpers_plugin = ckan.tests.lib.test_helpers:TestHelpersPlugin',\n 'test_feed_plugin = ckan.tests.controllers.test_feed:MockFeedPlugin',\n 'test_js_translations_plugin = ckan.tests.lib.test_i18n:TestJSTranslationsPlugin',\n ],\n 'babel.extractors': [\n 'ckan = ckan.lib.extract:extract_ckan',\n ],\n}\n\nsetup(\n name='ckan',\n version=__version__,\n author='https://github.com/ckan/ckan/graphs/contributors',\n author_email='[email protected]',\n license=__license__,\n url='http://ckan.org/',\n description=__description__,\n keywords='data packaging component tool server',\n long_description=__long_description__,\n zip_safe=False,\n packages=find_packages(exclude=['ez_setup']),\n namespace_packages=['ckanext', 'ckanext.stats'],\n message_extractors={\n 'ckan': [\n ('**.py', 'python', None),\n ('**.js', 'javascript', None),\n ('templates/importer/**', 'ignore', None),\n ('templates/**.html', 'ckan', None),\n ('templates/**.txt', 'ckan', None),\n ('templates_legacy/**.html', 'ckan', None),\n ('public/**', 'ignore', None),\n ],\n 'ckanext': [\n ('**.py', 'python', None),\n ('**.js', 'javascript', None),\n ('**.html', 'ckan', None),\n ('multilingual/solr/*.txt', 'ignore', None),\n ]\n },\n entry_points=entry_points,\n # setup.py test command needs a TestSuite so does not work with py.test\n # test_suite = 'nose.collector',\n # tests_require=[ 'py >= 0.8.0-alpha2' ]\n classifiers=[\n # https://pypi.python.org/pypi?%3Aaction=list_classifiers\n 'Development Status :: 5 - Production/Stable',\n 'License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2 :: Only',\n 'Programming Language :: Python :: 2.7',\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "# encoding: utf-8\n\nimport os\nimport os.path\n\n# Avoid problem releasing to pypi from vagrant\nif os.environ.get('USER', '') == 'vagrant':\n del os.link\n\ntry:\n from setuptools import (setup, find_packages,\n __version__ as setuptools_version)\nexcept ImportError:\n from ez_setup import use_setuptools\n use_setuptools()\n from setuptools import (setup, find_packages,\n __version__ as setuptools_version)\n\nfrom ckan import (__version__, __description__, __long_description__,\n __license__)\n\n\n#\n# Check setuptools version\n#\n\ndef parse_version(s):\n return map(int, s.split('.'))\n\nHERE = os.path.dirname(__file__)\nwith open(os.path.join(HERE, 'requirement-setuptools.txt')) as f:\n setuptools_requirement = f.read().strip()\nmin_setuptools_version = parse_version(setuptools_requirement.split('==')[1])\nif parse_version(setuptools_version) < min_setuptools_version:\n raise AssertionError(\n 'setuptools version error\\n'\n 'You need a newer version of setuptools.\\n'\n 'Install the recommended version:\\n'\n ' pip install -r requirement-setuptools.txt\\n'\n 'and then try again to install ckan into your python environment.'\n )\n\n\nentry_points = {\n 'nose.plugins.0.10': [\n 'main = ckan.ckan_nose_plugin:CkanNose',\n ],\n 'paste.app_factory': [\n 'main = ckan.config.middleware:make_app',\n ],\n 'paste.app_install': [\n 'main = ckan.config.install:CKANInstaller',\n ],\n 'paste.paster_command': [\n 'db = ckan.lib.cli:ManageDb',\n 'create-test-data = ckan.lib.cli:CreateTestDataCommand',\n 'sysadmin = ckan.lib.cli:Sysadmin',\n 'user = ckan.lib.cli:UserCmd',\n 'dataset = ckan.lib.cli:DatasetCmd',\n 
'search-index = ckan.lib.cli:SearchIndexCommand',\n 'ratings = ckan.lib.cli:Ratings',\n 'notify = ckan.lib.cli:Notification',\n 'celeryd = ckan.lib.cli:Celery',\n 'rdf-export = ckan.lib.cli:RDFExport',\n 'tracking = ckan.lib.cli:Tracking',\n 'plugin-info = ckan.lib.cli:PluginInfo',\n 'profile = ckan.lib.cli:Profile',\n 'color = ckan.lib.cli:CreateColorSchemeCommand',\n 'check-po-files = ckan.i18n.check_po_files:CheckPoFiles',\n 'trans = ckan.lib.cli:TranslationsCommand',\n 'minify = ckan.lib.cli:MinifyCommand',\n 'less = ckan.lib.cli:LessCommand',\n 'datastore = ckanext.datastore.commands:datastore_group',\n 'datapusher = ckanext.datapusher.cli:DatapusherCommand',\n 'front-end-build = ckan.lib.cli:FrontEndBuildCommand',\n 'views = ckan.lib.cli:ViewsCommand',\n 'config-tool = ckan.lib.cli:ConfigToolCommand',\n 'jobs = ckan.lib.cli:JobsCommand',\n ],\n 'console_scripts': [\n 'ckan-admin = bin.ckan_admin:Command',\n ],\n 'paste.paster_create_template': [\n 'ckanext = ckan.pastertemplates:CkanextTemplate',\n ],\n 'ckan.forms': [\n 'standard = ckan.forms.package:get_standard_fieldset',\n 'package = ckan.forms.package:get_standard_fieldset',\n 'group = ckan.forms.group:get_group_fieldset',\n 'package_group = ckan.forms.group:get_package_group_fieldset',\n ],\n 'ckan.search': [\n 'sql = ckan.lib.search.sql:SqlSearchBackend',\n 'solr = ckan.lib.search.solr_backend:SolrSearchBackend',\n ],\n 'ckan.plugins': [\n 'synchronous_search = ckan.lib.search:SynchronousSearchPlugin',\n 'stats = ckanext.stats.plugin:StatsPlugin',\n 'publisher_form = ckanext.publisher_form.forms:PublisherForm',\n 'publisher_dataset_form = ckanext.publisher_form.forms:PublisherDatasetForm',\n 'multilingual_dataset = ckanext.multilingual.plugin:MultilingualDataset',\n 'multilingual_group = ckanext.multilingual.plugin:MultilingualGroup',\n 'multilingual_tag = ckanext.multilingual.plugin:MultilingualTag',\n 'multilingual_resource = ckanext.multilingual.plugin:MultilingualResource',\n 'organizations = ckanext.organizations.forms:OrganizationForm',\n 'organizations_dataset = ckanext.organizations.forms:OrganizationDatasetForm',\n 'datastore = ckanext.datastore.plugin:DatastorePlugin',\n 'datapusher=ckanext.datapusher.plugin:DatapusherPlugin',\n 'test_tag_vocab_plugin = ckanext.test_tag_vocab_plugin:MockVocabTagsPlugin',\n 'resource_proxy = ckanext.resourceproxy.plugin:ResourceProxy',\n 'text_view = ckanext.textview.plugin:TextView',\n 'recline_view = ckanext.reclineview.plugin:ReclineView',\n 'recline_grid_view = ckanext.reclineview.plugin:ReclineGridView',\n 'recline_graph_view = ckanext.reclineview.plugin:ReclineGraphView',\n 'recline_map_view = ckanext.reclineview.plugin:ReclineMapView',\n 'datatables_view = ckanext.datatablesview.plugin:DataTablesView',\n 'image_view = ckanext.imageview.plugin:ImageView',\n 'webpage_view = ckanext.webpageview.plugin:WebPageView',\n # FIXME: Remove deprecated resource previews below. 
You should use the\n # versions as *_view instead.\n 'text_preview = ckanext.textview.plugin:TextView',\n 'recline_preview = ckanext.reclineview.plugin:ReclineView',\n 'recline_grid = ckanext.reclineview.plugin:ReclineGridView',\n 'recline_graph = ckanext.reclineview.plugin:ReclineGraphView',\n 'recline_map = ckanext.reclineview.plugin:ReclineMapView',\n # End of deprecated previews\n 'example_itemplatehelpers = ckanext.example_itemplatehelpers.plugin:ExampleITemplateHelpersPlugin',\n 'example_idatasetform = ckanext.example_idatasetform.plugin:ExampleIDatasetFormPlugin',\n 'example_idatasetform_v1 = ckanext.example_idatasetform.plugin_v1:ExampleIDatasetFormPlugin',\n 'example_idatasetform_v2 = ckanext.example_idatasetform.plugin_v2:ExampleIDatasetFormPlugin',\n 'example_idatasetform_v3 = ckanext.example_idatasetform.plugin_v3:ExampleIDatasetFormPlugin',\n 'example_idatasetform_v4 = ckanext.example_idatasetform.plugin_v4:ExampleIDatasetFormPlugin',\n 'example_igroupform = ckanext.example_igroupform.plugin:ExampleIGroupFormPlugin',\n 'example_igroupform_default_group_type = ckanext.example_igroupform.plugin:ExampleIGroupFormPlugin_DefaultGroupType',\n 'example_igroupform_organization = ckanext.example_igroupform.plugin:ExampleIGroupFormOrganizationPlugin',\n 'example_iauthfunctions_v1 = ckanext.example_iauthfunctions.plugin_v1:ExampleIAuthFunctionsPlugin',\n 'example_iauthfunctions_v2 = ckanext.example_iauthfunctions.plugin_v2:ExampleIAuthFunctionsPlugin',\n 'example_iauthfunctions_v3 = ckanext.example_iauthfunctions.plugin_v3:ExampleIAuthFunctionsPlugin',\n 'example_iauthfunctions_v4 = ckanext.example_iauthfunctions.plugin_v4:ExampleIAuthFunctionsPlugin',\n 'example_iauthfunctions_v5_custom_config_setting = ckanext.example_iauthfunctions.plugin_v5_custom_config_setting:ExampleIAuthFunctionsPlugin',\n 'example_iauthfunctions_v6_parent_auth_functions = ckanext.example_iauthfunctions.plugin_v6_parent_auth_functions:ExampleIAuthFunctionsPlugin',\n 'example_theme_v01_empty_extension = ckanext.example_theme_docs.v01_empty_extension.plugin:ExampleThemePlugin',\n 'example_theme_v02_empty_template = ckanext.example_theme_docs.v02_empty_template.plugin:ExampleThemePlugin',\n 'example_theme_v03_jinja = ckanext.example_theme_docs.v03_jinja.plugin:ExampleThemePlugin',\n 'example_theme_v04_ckan_extends = ckanext.example_theme_docs.v04_ckan_extends.plugin:ExampleThemePlugin',\n 'example_theme_v05_block = ckanext.example_theme_docs.v05_block.plugin:ExampleThemePlugin',\n 'example_theme_v06_super = ckanext.example_theme_docs.v06_super.plugin:ExampleThemePlugin',\n 'example_theme_v07_helper_function = ckanext.example_theme_docs.v07_helper_function.plugin:ExampleThemePlugin',\n 'example_theme_v08_custom_helper_function = ckanext.example_theme_docs.v08_custom_helper_function.plugin:ExampleThemePlugin',\n 'example_theme_v09_snippet = ckanext.example_theme_docs.v09_snippet.plugin:ExampleThemePlugin',\n 'example_theme_v10_custom_snippet = ckanext.example_theme_docs.v10_custom_snippet.plugin:ExampleThemePlugin',\n 'example_theme_v11_HTML_and_CSS = ckanext.example_theme_docs.v11_HTML_and_CSS.plugin:ExampleThemePlugin',\n 'example_theme_v12_extra_public_dir = ckanext.example_theme_docs.v12_extra_public_dir.plugin:ExampleThemePlugin',\n 'example_theme_v13_custom_css = ckanext.example_theme_docs.v13_custom_css.plugin:ExampleThemePlugin',\n 'example_theme_v14_more_custom_css = ckanext.example_theme_docs.v14_more_custom_css.plugin:ExampleThemePlugin',\n 'example_theme_v15_fanstatic = 
ckanext.example_theme_docs.v15_fanstatic.plugin:ExampleThemePlugin',\n 'example_theme_v16_initialize_a_javascript_module = ckanext.example_theme_docs.v16_initialize_a_javascript_module.plugin:ExampleThemePlugin',\n 'example_theme_v17_popover = ckanext.example_theme_docs.v17_popover.plugin:ExampleThemePlugin',\n 'example_theme_v18_snippet_api = ckanext.example_theme_docs.v18_snippet_api.plugin:ExampleThemePlugin',\n 'example_theme_v19_01_error = ckanext.example_theme_docs.v19_01_error.plugin:ExampleThemePlugin',\n 'example_theme_v19_02_error_handling = ckanext.example_theme_docs.v19_02_error_handling.plugin:ExampleThemePlugin',\n 'example_theme_v20_pubsub = ckanext.example_theme_docs.v20_pubsub.plugin:ExampleThemePlugin',\n 'example_theme_v21_custom_jquery_plugin = ckanext.example_theme_docs.v21_custom_jquery_plugin.plugin:ExampleThemePlugin',\n 'example_theme_custom_config_setting = ckanext.example_theme_docs.custom_config_setting.plugin:ExampleThemePlugin',\n 'example_theme_custom_emails = ckanext.example_theme_docs.custom_emails.plugin:ExampleCustomEmailsPlugin',\n 'example_iresourcecontroller = ckanext.example_iresourcecontroller.plugin:ExampleIResourceControllerPlugin',\n 'example_ivalidators = ckanext.example_ivalidators.plugin:ExampleIValidatorsPlugin',\n 'example_iconfigurer = ckanext.example_iconfigurer.plugin:ExampleIConfigurerPlugin',\n 'example_itranslation = ckanext.example_itranslation.plugin:ExampleITranslationPlugin',\n 'example_iconfigurer_v1 = ckanext.example_iconfigurer.plugin_v1:ExampleIConfigurerPlugin',\n 'example_iconfigurer_v2 = ckanext.example_iconfigurer.plugin_v2:ExampleIConfigurerPlugin',\n 'example_flask_iblueprint = ckanext.example_flask_iblueprint.plugin:ExampleFlaskIBlueprintPlugin',\n 'example_iuploader = ckanext.example_iuploader.plugin:ExampleIUploader',\n 'example_idatastorebackend = ckanext.example_idatastorebackend.plugin:ExampleIDatastoreBackendPlugin',\n 'example_ipermissionlabels = ckanext.example_ipermissionlabels.plugin:ExampleIPermissionLabelsPlugin',\n ],\n 'ckan.system_plugins': [\n 'domain_object_mods = ckan.model.modification:DomainObjectModificationExtension',\n ],\n 'ckan.test_plugins': [\n 'routes_plugin = tests.legacy.ckantestplugins:RoutesPlugin',\n 'mapper_plugin = tests.legacy.ckantestplugins:MapperPlugin',\n 'session_plugin = tests.legacy.ckantestplugins:SessionPlugin',\n 'mapper_plugin2 = tests.legacy.ckantestplugins:MapperPlugin2',\n 'authorizer_plugin = tests.legacy.ckantestplugins:AuthorizerPlugin',\n 'test_observer_plugin = tests.legacy.ckantestplugins:PluginObserverPlugin',\n 'action_plugin = tests.legacy.ckantestplugins:ActionPlugin',\n 'auth_plugin = tests.legacy.ckantestplugins:AuthPlugin',\n 'test_group_plugin = tests.legacy.ckantestplugins:MockGroupControllerPlugin',\n 'test_package_controller_plugin = tests.legacy.ckantestplugins:MockPackageControllerPlugin',\n 'test_resource_preview = tests.legacy.ckantestplugins:MockResourcePreviewExtension',\n 'test_json_resource_preview = tests.legacy.ckantestplugins:JsonMockResourcePreviewExtension',\n 'sample_datastore_plugin = ckanext.datastore.tests.sample_datastore_plugin:SampleDataStorePlugin',\n 'example_datastore_deleted_with_count_plugin = ckanext.datastore.tests.test_chained_action:ExampleDataStoreDeletedWithCountPlugin',\n 'test_datastore_view = ckan.tests.lib.test_datapreview:MockDatastoreBasedResourceView',\n 'test_datapusher_plugin = ckanext.datapusher.tests.test_interfaces:FakeDataPusherPlugin',\n 'test_routing_plugin = 
ckan.tests.config.test_middleware:MockRoutingPlugin',\n 'test_flash_plugin = ckan.tests.config.test_sessions:FlashMessagePlugin',\n 'test_helpers_plugin = ckan.tests.lib.test_helpers:TestHelpersPlugin',\n 'test_feed_plugin = ckan.tests.controllers.test_feed:MockFeedPlugin',\n 'test_js_translations_plugin = ckan.tests.lib.test_i18n:TestJSTranslationsPlugin',\n ],\n 'babel.extractors': [\n 'ckan = ckan.lib.extract:extract_ckan',\n ],\n}\n\nsetup(\n name='ckan',\n version=__version__,\n author='https://github.com/ckan/ckan/graphs/contributors',\n author_email='[email protected]',\n license=__license__,\n url='http://ckan.org/',\n description=__description__,\n keywords='data packaging component tool server',\n long_description=__long_description__,\n zip_safe=False,\n include_package_data=True,\n packages=find_packages(exclude=['ez_setup']),\n namespace_packages=['ckanext', 'ckanext.stats'],\n message_extractors={\n 'ckan': [\n ('**.py', 'python', None),\n ('**.js', 'javascript', None),\n ('templates/importer/**', 'ignore', None),\n ('templates/**.html', 'ckan', None),\n ('templates/**.txt', 'ckan', None),\n ('templates_legacy/**.html', 'ckan', None),\n ('public/**', 'ignore', None),\n ],\n 'ckanext': [\n ('**.py', 'python', None),\n ('**.js', 'javascript', None),\n ('**.html', 'ckan', None),\n ('multilingual/solr/*.txt', 'ignore', None),\n ]\n },\n entry_points=entry_points,\n # setup.py test command needs a TestSuite so does not work with py.test\n # test_suite = 'nose.collector',\n # tests_require=[ 'py >= 0.8.0-alpha2' ]\n classifiers=[\n # https://pypi.python.org/pypi?%3Aaction=list_classifiers\n 'Development Status :: 5 - Production/Stable',\n 'License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2 :: Only',\n 'Programming Language :: Python :: 2.7',\n ],\n)\n", "path": "setup.py"}]} |
gh_patches_debug_1133 | rasdani/github-patches | git_diff | librosa__librosa-1673 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Confused by example in mel_to_stft doc
In the example for [mel_to_stft](https://librosa.org/doc/main/generated/librosa.feature.inverse.mel_to_stft.html) we have the following code:
```
y, sr = librosa.load(librosa.ex('trumpet'))
S = np.abs(librosa.stft(y))
mel_spec = librosa.feature.melspectrogram(S=S, sr=sr)
S_inv = librosa.feature.inverse.mel_to_stft(mel_spec, sr=sr)
```
S is the magnitude, mel_spec is therefore a mel magnitude spectrogram (not mel power spectrogram). However, the code for `librosa.feature.inverse.mel_to_stft` does the following:
```
M : np.ndarray [shape=(..., n_mels, n), non-negative]
The spectrogram as produced by `feature.melspectrogram`
inverse = nnls(mel_basis, M)
return np.power(inverse, 1.0 / power, out=inverse)
```
The `power` parameter is not passed in the example, so it takes the default value `power = 2.0`. This would be correct when assuming a power spectrogram as input, which is not the case here.
My understanding is that `S_inv` is now the approximate square root of `S`. However, in the example we proceed to compare the two assuming they should be equal, apart from the approximation error.
IMO we either need to convert magnitude into power with `S = np.abs(librosa.stft(y))**2` or pass the `power` parameter in `S_inv = librosa.feature.inverse.mel_to_stft(mel_spec, sr=sr, power=1)`.
--- END ISSUE ---
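For illustration, a minimal sketch of the two consistent pairings proposed in the issue; the variable names (`S_power`, `S_mag`, `S_inv_a`, `S_inv_b`) are chosen here for clarity and are not part of the original example:
```
import numpy as np
import librosa

y, sr = librosa.load(librosa.ex('trumpet'))

# Option A: feed a true power spectrogram, matching the default power=2.0
S_power = np.abs(librosa.stft(y)) ** 2
mel_power = librosa.feature.melspectrogram(S=S_power, sr=sr)
S_inv_a = librosa.feature.inverse.mel_to_stft(mel_power, sr=sr)  # default power=2.0

# Option B: keep the magnitude spectrogram, and say so via power=1
S_mag = np.abs(librosa.stft(y))
mel_mag = librosa.feature.melspectrogram(S=S_mag, sr=sr)
S_inv_b = librosa.feature.inverse.mel_to_stft(mel_mag, sr=sr, power=1)

# Either way the result approximates the linear magnitude spectrogram and can
# be compared against np.abs(librosa.stft(y)) as the docstring does.
```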
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `librosa/feature/inverse.py`
Content:
```
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 """Feature inversion"""
4
5 import warnings
6 import numpy as np
7 import scipy.fftpack
8
9 from ..util.exceptions import ParameterError
10 from ..core.spectrum import griffinlim
11 from ..core.spectrum import db_to_power
12 from ..util.utils import tiny
13 from .. import filters
14 from ..util import nnls, expand_to
15 from numpy.typing import DTypeLike
16 from typing import Any, Callable, Optional, Union
17 from .._typing import _WindowSpec, _PadModeSTFT
18
19 __all__ = ["mel_to_stft", "mel_to_audio", "mfcc_to_mel", "mfcc_to_audio"]
20
21
22 def mel_to_stft(
23 M: np.ndarray,
24 *,
25 sr: float = 22050,
26 n_fft: int = 2048,
27 power: float = 2.0,
28 **kwargs: Any,
29 ) -> np.ndarray:
30 """Approximate STFT magnitude from a Mel power spectrogram.
31
32 Parameters
33 ----------
34 M : np.ndarray [shape=(..., n_mels, n), non-negative]
35 The spectrogram as produced by `feature.melspectrogram`
36 sr : number > 0 [scalar]
37 sampling rate of the underlying signal
38 n_fft : int > 0 [scalar]
39 number of FFT components in the resulting STFT
40 power : float > 0 [scalar]
41 Exponent for the magnitude melspectrogram
42 **kwargs : additional keyword arguments for Mel filter bank parameters
43 n_mels : int > 0 [scalar]
44 number of Mel bands to generate
45 fmin : float >= 0 [scalar]
46 lowest frequency (in Hz)
47 fmax : float >= 0 [scalar]
48 highest frequency (in Hz).
49 If `None`, use ``fmax = sr / 2.0``
50 htk : bool [scalar]
51 use HTK formula instead of Slaney
52 norm : {None, 'slaney', or number} [scalar]
53 If 'slaney', divide the triangular mel weights by the width of
54 the mel band (area normalization).
55 If numeric, use `librosa.util.normalize` to normalize each filter
56 by to unit l_p norm. See `librosa.util.normalize` for a full
57 description of supported norm values (including `+-np.inf`).
58 Otherwise, leave all the triangles aiming for a peak value of 1.0
59 dtype : np.dtype
60 The data type of the output basis.
61 By default, uses 32-bit (single-precision) floating point.
62
63 Returns
64 -------
65 S : np.ndarray [shape=(..., n_fft, t), non-negative]
66 An approximate linear magnitude spectrogram
67
68 See Also
69 --------
70 librosa.feature.melspectrogram
71 librosa.stft
72 librosa.filters.mel
73 librosa.util.nnls
74
75 Examples
76 --------
77 >>> y, sr = librosa.load(librosa.ex('trumpet'))
78 >>> S = np.abs(librosa.stft(y))
79 >>> mel_spec = librosa.feature.melspectrogram(S=S, sr=sr)
80 >>> S_inv = librosa.feature.inverse.mel_to_stft(mel_spec, sr=sr)
81
82 Compare the results visually
83
84 >>> import matplotlib.pyplot as plt
85 >>> fig, ax = plt.subplots(nrows=3, sharex=True, sharey=True)
86 >>> img = librosa.display.specshow(librosa.amplitude_to_db(S, ref=np.max, top_db=None),
87 ... y_axis='log', x_axis='time', ax=ax[0])
88 >>> ax[0].set(title='Original STFT')
89 >>> ax[0].label_outer()
90 >>> librosa.display.specshow(librosa.amplitude_to_db(S_inv, ref=np.max, top_db=None),
91 ... y_axis='log', x_axis='time', ax=ax[1])
92 >>> ax[1].set(title='Reconstructed STFT')
93 >>> ax[1].label_outer()
94 >>> librosa.display.specshow(librosa.amplitude_to_db(np.abs(S_inv - S),
95 ... ref=S.max(), top_db=None),
96 ... vmax=0, y_axis='log', x_axis='time', cmap='magma', ax=ax[2])
97 >>> ax[2].set(title='Residual error (dB)')
98 >>> fig.colorbar(img, ax=ax, format="%+2.f dB")
99 """
100
101 # Construct a mel basis with dtype matching the input data
102 mel_basis = filters.mel(
103 sr=sr, n_fft=n_fft, n_mels=M.shape[-2], dtype=M.dtype, **kwargs
104 )
105
106 # Find the non-negative least squares solution, and apply
107 # the inverse exponent.
108 # We'll do the exponentiation in-place.
109 inverse = nnls(mel_basis, M)
110 return np.power(inverse, 1.0 / power, out=inverse)
111
112
113 def mel_to_audio(
114 M: np.ndarray,
115 *,
116 sr: float = 22050,
117 n_fft: int = 2048,
118 hop_length: Optional[int] = None,
119 win_length: Optional[int] = None,
120 window: _WindowSpec = "hann",
121 center: bool = True,
122 pad_mode: _PadModeSTFT = "constant",
123 power: float = 2.0,
124 n_iter: int = 32,
125 length: Optional[int] = None,
126 dtype: DTypeLike = np.float32,
127 **kwargs: Any,
128 ) -> np.ndarray:
129 """Invert a mel power spectrogram to audio using Griffin-Lim.
130
131 This is primarily a convenience wrapper for:
132
133 >>> S = librosa.feature.inverse.mel_to_stft(M)
134 >>> y = librosa.griffinlim(S)
135
136 Parameters
137 ----------
138 M : np.ndarray [shape=(..., n_mels, n), non-negative]
139 The spectrogram as produced by `feature.melspectrogram`
140 sr : number > 0 [scalar]
141 sampling rate of the underlying signal
142 n_fft : int > 0 [scalar]
143 number of FFT components in the resulting STFT
144 hop_length : None or int > 0
145 The hop length of the STFT. If not provided, it will default to ``n_fft // 4``
146 win_length : None or int > 0
147 The window length of the STFT. By default, it will equal ``n_fft``
148 window : string, tuple, number, function, or np.ndarray [shape=(n_fft,)]
149 A window specification as supported by `stft` or `istft`
150 center : boolean
151 If `True`, the STFT is assumed to use centered frames.
152 If `False`, the STFT is assumed to use left-aligned frames.
153 pad_mode : string
154 If ``center=True``, the padding mode to use at the edges of the signal.
155 By default, STFT uses zero padding.
156 power : float > 0 [scalar]
157 Exponent for the magnitude melspectrogram
158 n_iter : int > 0
159 The number of iterations for Griffin-Lim
160 length : None or int > 0
161 If provided, the output ``y`` is zero-padded or clipped to exactly ``length``
162 samples.
163 dtype : np.dtype
164 Real numeric type for the time-domain signal. Default is 32-bit float.
165 **kwargs : additional keyword arguments for Mel filter bank parameters
166 n_mels : int > 0 [scalar]
167 number of Mel bands to generate
168 fmin : float >= 0 [scalar]
169 lowest frequency (in Hz)
170 fmax : float >= 0 [scalar]
171 highest frequency (in Hz).
172 If `None`, use ``fmax = sr / 2.0``
173 htk : bool [scalar]
174 use HTK formula instead of Slaney
175 norm : {None, 'slaney', or number} [scalar]
176 If 'slaney', divide the triangular mel weights by the width of
177 the mel band (area normalization).
178 If numeric, use `librosa.util.normalize` to normalize each filter
179 by to unit l_p norm. See `librosa.util.normalize` for a full
180 description of supported norm values (including `+-np.inf`).
181 Otherwise, leave all the triangles aiming for a peak value of 1.0
182
183 Returns
184 -------
185 y : np.ndarray [shape(..., n,)]
186 time-domain signal reconstructed from ``M``
187
188 See Also
189 --------
190 librosa.griffinlim
191 librosa.feature.melspectrogram
192 librosa.filters.mel
193 librosa.feature.inverse.mel_to_stft
194 """
195
196 stft = mel_to_stft(M, sr=sr, n_fft=n_fft, power=power, **kwargs)
197
198 return griffinlim(
199 stft,
200 n_iter=n_iter,
201 hop_length=hop_length,
202 win_length=win_length,
203 n_fft=n_fft,
204 window=window,
205 center=center,
206 dtype=dtype,
207 length=length,
208 pad_mode=pad_mode,
209 )
210
211
212 def mfcc_to_mel(
213 mfcc: np.ndarray,
214 *,
215 n_mels: int = 128,
216 dct_type: int = 2,
217 norm: Optional[str] = "ortho",
218 ref: float = 1.0,
219 lifter: float = 0,
220 ) -> np.ndarray:
221 """Invert Mel-frequency cepstral coefficients to approximate a Mel power
222 spectrogram.
223
224 This inversion proceeds in two steps:
225
226 1. The inverse DCT is applied to the MFCCs
227 2. `librosa.db_to_power` is applied to map the dB-scaled result to a power spectrogram
228
229 Parameters
230 ----------
231 mfcc : np.ndarray [shape=(..., n_mfcc, n)]
232 The Mel-frequency cepstral coefficients
233 n_mels : int > 0
234 The number of Mel frequencies
235 dct_type : {1, 2, 3}
236 Discrete cosine transform (DCT) type
237 By default, DCT type-2 is used.
238 norm : None or 'ortho'
239 If ``dct_type`` is `2 or 3`, setting ``norm='ortho'`` uses an orthonormal
240 DCT basis.
241 Normalization is not supported for `dct_type=1`.
242 ref : float
243 Reference power for (inverse) decibel calculation
244 lifter : number >= 0
245 If ``lifter>0``, apply inverse liftering (inverse cepstral filtering)::
246 M[n, :] <- M[n, :] / (1 + sin(pi * (n + 1) / lifter) * lifter / 2)
247
248 Returns
249 -------
250 M : np.ndarray [shape=(..., n_mels, n)]
251 An approximate Mel power spectrum recovered from ``mfcc``
252
253 Warns
254 -----
255 UserWarning
256 due to critical values in lifter array that invokes underflow.
257
258 See Also
259 --------
260 librosa.feature.mfcc
261 librosa.feature.melspectrogram
262 scipy.fftpack.dct
263 """
264 if lifter > 0:
265 n_mfcc = mfcc.shape[-2]
266 idx = np.arange(1, 1 + n_mfcc, dtype=mfcc.dtype)
267 idx = expand_to(idx, ndim=mfcc.ndim, axes=-2)
268 lifter_sine = 1 + lifter * 0.5 * np.sin(np.pi * idx / lifter)
269
270 # raise a UserWarning if lifter array includes critical values
271 if np.any(np.abs(lifter_sine) < np.finfo(lifter_sine.dtype).eps):
272 warnings.warn(
273 message="lifter array includes critical values that may invoke underflow.",
274 category=UserWarning,
275 stacklevel=2,
276 )
277
278 # lifter mfcc values
279 mfcc = mfcc / (lifter_sine + tiny(mfcc))
280
281 elif lifter != 0:
282 raise ParameterError("MFCC to mel lifter must be a non-negative number.")
283
284 logmel = scipy.fftpack.idct(mfcc, axis=-2, type=dct_type, norm=norm, n=n_mels)
285 return db_to_power(logmel, ref=ref)
286
287
288 def mfcc_to_audio(
289 mfcc: np.ndarray,
290 *,
291 n_mels: int = 128,
292 dct_type: int = 2,
293 norm: Optional[str] = "ortho",
294 ref: float = 1.0,
295 lifter: float = 0,
296 **kwargs: Any,
297 ) -> np.ndarray:
298 """Convert Mel-frequency cepstral coefficients to a time-domain audio signal
299
300 This function is primarily a convenience wrapper for the following steps:
301
302 1. Convert mfcc to Mel power spectrum (`mfcc_to_mel`)
303 2. Convert Mel power spectrum to time-domain audio (`mel_to_audio`)
304
305 Parameters
306 ----------
307 mfcc : np.ndarray [shape=(..., n_mfcc, n)]
308 The Mel-frequency cepstral coefficients
309 n_mels : int > 0
310 The number of Mel frequencies
311 dct_type : {1, 2, 3}
312 Discrete cosine transform (DCT) type
313 By default, DCT type-2 is used.
314 norm : None or 'ortho'
315 If ``dct_type`` is `2 or 3`, setting ``norm='ortho'`` uses an orthonormal
316 DCT basis.
317 Normalization is not supported for ``dct_type=1``.
318 ref : float
319 Reference power for (inverse) decibel calculation
320 lifter : number >= 0
321 If ``lifter>0``, apply inverse liftering (inverse cepstral filtering)::
322 M[n, :] <- M[n, :] / (1 + sin(pi * (n + 1) / lifter)) * lifter / 2
323 **kwargs : additional keyword arguments to pass through to `mel_to_audio`
324 M : np.ndarray [shape=(..., n_mels, n), non-negative]
325 The spectrogram as produced by `feature.melspectrogram`
326 sr : number > 0 [scalar]
327 sampling rate of the underlying signal
328 n_fft : int > 0 [scalar]
329 number of FFT components in the resulting STFT
330 hop_length : None or int > 0
331 The hop length of the STFT. If not provided, it will default to ``n_fft // 4``
332 win_length : None or int > 0
333 The window length of the STFT. By default, it will equal ``n_fft``
334 window : string, tuple, number, function, or np.ndarray [shape=(n_fft,)]
335 A window specification as supported by `stft` or `istft`
336 center : boolean
337 If `True`, the STFT is assumed to use centered frames.
338 If `False`, the STFT is assumed to use left-aligned frames.
339 pad_mode : string
340 If ``center=True``, the padding mode to use at the edges of the signal.
341 By default, STFT uses zero padding.
342 power : float > 0 [scalar]
343 Exponent for the magnitude melspectrogram
344 n_iter : int > 0
345 The number of iterations for Griffin-Lim
346 length : None or int > 0
347 If provided, the output ``y`` is zero-padded or clipped to exactly ``length``
348 samples.
349 dtype : np.dtype
350 Real numeric type for the time-domain signal. Default is 32-bit float.
351 **kwargs : additional keyword arguments for Mel filter bank parameters
352 fmin : float >= 0 [scalar]
353 lowest frequency (in Hz)
354 fmax : float >= 0 [scalar]
355 highest frequency (in Hz).
356 If `None`, use ``fmax = sr / 2.0``
357 htk : bool [scalar]
358 use HTK formula instead of Slaney
359
360 Returns
361 -------
362 y : np.ndarray [shape=(..., n)]
363 A time-domain signal reconstructed from `mfcc`
364
365 See Also
366 --------
367 mfcc_to_mel
368 mel_to_audio
369 librosa.feature.mfcc
370 librosa.griffinlim
371 scipy.fftpack.dct
372 """
373 mel_spec = mfcc_to_mel(
374 mfcc, n_mels=n_mels, dct_type=dct_type, norm=norm, ref=ref, lifter=lifter
375 )
376
377 return mel_to_audio(mel_spec, **kwargs)
378
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/librosa/feature/inverse.py b/librosa/feature/inverse.py
--- a/librosa/feature/inverse.py
+++ b/librosa/feature/inverse.py
@@ -75,7 +75,7 @@
Examples
--------
>>> y, sr = librosa.load(librosa.ex('trumpet'))
- >>> S = np.abs(librosa.stft(y))
+ >>> S = librosa.util.abs2(librosa.stft(y))
>>> mel_spec = librosa.feature.melspectrogram(S=S, sr=sr)
>>> S_inv = librosa.feature.inverse.mel_to_stft(mel_spec, sr=sr)
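As a note on the fix above: `librosa.util.abs2` (available in librosa 0.10 and later) computes the element-wise squared magnitude, so the revised docstring example feeds a genuine power spectrogram that matches the default `power=2.0` of `mel_to_stft`. A minimal sketch of the equivalence, assuming that librosa version:
```
import numpy as np
import librosa

y, sr = librosa.load(librosa.ex('trumpet'))
D = librosa.stft(y)

S_power = librosa.util.abs2(D)               # squared magnitude, |D|**2
assert np.allclose(S_power, np.abs(D) ** 2)  # same values up to float rounding
```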
| {"golden_diff": "diff --git a/librosa/feature/inverse.py b/librosa/feature/inverse.py\n--- a/librosa/feature/inverse.py\n+++ b/librosa/feature/inverse.py\n@@ -75,7 +75,7 @@\n Examples\n --------\n >>> y, sr = librosa.load(librosa.ex('trumpet'))\n- >>> S = np.abs(librosa.stft(y))\n+ >>> S = librosa.util.abs2(librosa.stft(y))\n >>> mel_spec = librosa.feature.melspectrogram(S=S, sr=sr)\n >>> S_inv = librosa.feature.inverse.mel_to_stft(mel_spec, sr=sr)\n", "issue": "Confused by example in mel_to_stft doc\nIn the example for [mel_to_stft](https://librosa.org/doc/main/generated/librosa.feature.inverse.mel_to_stft.html) we have the following code:\r\n```\r\ny, sr = librosa.load(librosa.ex('trumpet'))\r\nS = np.abs(librosa.stft(y))\r\nmel_spec = librosa.feature.melspectrogram(S=S, sr=sr)\r\nS_inv = librosa.feature.inverse.mel_to_stft(mel_spec, sr=sr)\r\n```\r\n\r\nS is the magnitude, mel_spec is therefore a mel magnitude spectrogram (not mel power spectrogram). However, the code for `librosa.feature.inverse.mel_to_stft` does the following:\r\n```\r\nM : np.ndarray [shape=(..., n_mels, n), non-negative]\r\n The spectrogram as produced by `feature.melspectrogram`\r\ninverse = nnls(mel_basis, M)\r\nreturn np.power(inverse, 1.0 / power, out=inverse)\r\n```\r\n\r\nThe `power` variable is not passed in the example and is therefore the default value `power = 2.0`. This would be correct when assuming a power spectrogram as input, which is not the case here.\r\n\r\nMy understanding is, that `S_inv` is now the approximate square root of `S`? However, in the example we proceed to compare the two assuming they should be equal, apart from the approximation error.\r\n\r\nIMO we either need to convert magnitude into power with `S = np.abs(librosa.stft(y))**2` or pass the `power` parameter in `S_inv = librosa.feature.inverse.mel_to_stft(mel_spec, sr=sr, power=1)`.\nConfused by example in mel_to_stft doc\nIn the example for [mel_to_stft](https://librosa.org/doc/main/generated/librosa.feature.inverse.mel_to_stft.html) we have the following code:\r\n```\r\ny, sr = librosa.load(librosa.ex('trumpet'))\r\nS = np.abs(librosa.stft(y))\r\nmel_spec = librosa.feature.melspectrogram(S=S, sr=sr)\r\nS_inv = librosa.feature.inverse.mel_to_stft(mel_spec, sr=sr)\r\n```\r\n\r\nS is the magnitude, mel_spec is therefore a mel magnitude spectrogram (not mel power spectrogram). However, the code for `librosa.feature.inverse.mel_to_stft` does the following:\r\n```\r\nM : np.ndarray [shape=(..., n_mels, n), non-negative]\r\n The spectrogram as produced by `feature.melspectrogram`\r\ninverse = nnls(mel_basis, M)\r\nreturn np.power(inverse, 1.0 / power, out=inverse)\r\n```\r\n\r\nThe `power` variable is not passed in the example and is therefore the default value `power = 2.0`. This would be correct when assuming a power spectrogram as input, which is not the case here.\r\n\r\nMy understanding is, that `S_inv` is now the approximate square root of `S`? 
However, in the example we proceed to compare the two assuming they should be equal, apart from the approximation error.\r\n\r\nIMO we either need to convert magnitude into power with `S = np.abs(librosa.stft(y))**2` or pass the `power` parameter in `S_inv = librosa.feature.inverse.mel_to_stft(mel_spec, sr=sr, power=1)`.\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\"\"\"Feature inversion\"\"\"\n\nimport warnings\nimport numpy as np\nimport scipy.fftpack\n\nfrom ..util.exceptions import ParameterError\nfrom ..core.spectrum import griffinlim\nfrom ..core.spectrum import db_to_power\nfrom ..util.utils import tiny\nfrom .. import filters\nfrom ..util import nnls, expand_to\nfrom numpy.typing import DTypeLike\nfrom typing import Any, Callable, Optional, Union\nfrom .._typing import _WindowSpec, _PadModeSTFT\n\n__all__ = [\"mel_to_stft\", \"mel_to_audio\", \"mfcc_to_mel\", \"mfcc_to_audio\"]\n\n\ndef mel_to_stft(\n M: np.ndarray,\n *,\n sr: float = 22050,\n n_fft: int = 2048,\n power: float = 2.0,\n **kwargs: Any,\n) -> np.ndarray:\n \"\"\"Approximate STFT magnitude from a Mel power spectrogram.\n\n Parameters\n ----------\n M : np.ndarray [shape=(..., n_mels, n), non-negative]\n The spectrogram as produced by `feature.melspectrogram`\n sr : number > 0 [scalar]\n sampling rate of the underlying signal\n n_fft : int > 0 [scalar]\n number of FFT components in the resulting STFT\n power : float > 0 [scalar]\n Exponent for the magnitude melspectrogram\n **kwargs : additional keyword arguments for Mel filter bank parameters\n n_mels : int > 0 [scalar]\n number of Mel bands to generate\n fmin : float >= 0 [scalar]\n lowest frequency (in Hz)\n fmax : float >= 0 [scalar]\n highest frequency (in Hz).\n If `None`, use ``fmax = sr / 2.0``\n htk : bool [scalar]\n use HTK formula instead of Slaney\n norm : {None, 'slaney', or number} [scalar]\n If 'slaney', divide the triangular mel weights by the width of\n the mel band (area normalization).\n If numeric, use `librosa.util.normalize` to normalize each filter\n by to unit l_p norm. See `librosa.util.normalize` for a full\n description of supported norm values (including `+-np.inf`).\n Otherwise, leave all the triangles aiming for a peak value of 1.0\n dtype : np.dtype\n The data type of the output basis.\n By default, uses 32-bit (single-precision) floating point.\n\n Returns\n -------\n S : np.ndarray [shape=(..., n_fft, t), non-negative]\n An approximate linear magnitude spectrogram\n\n See Also\n --------\n librosa.feature.melspectrogram\n librosa.stft\n librosa.filters.mel\n librosa.util.nnls\n\n Examples\n --------\n >>> y, sr = librosa.load(librosa.ex('trumpet'))\n >>> S = np.abs(librosa.stft(y))\n >>> mel_spec = librosa.feature.melspectrogram(S=S, sr=sr)\n >>> S_inv = librosa.feature.inverse.mel_to_stft(mel_spec, sr=sr)\n\n Compare the results visually\n\n >>> import matplotlib.pyplot as plt\n >>> fig, ax = plt.subplots(nrows=3, sharex=True, sharey=True)\n >>> img = librosa.display.specshow(librosa.amplitude_to_db(S, ref=np.max, top_db=None),\n ... y_axis='log', x_axis='time', ax=ax[0])\n >>> ax[0].set(title='Original STFT')\n >>> ax[0].label_outer()\n >>> librosa.display.specshow(librosa.amplitude_to_db(S_inv, ref=np.max, top_db=None),\n ... y_axis='log', x_axis='time', ax=ax[1])\n >>> ax[1].set(title='Reconstructed STFT')\n >>> ax[1].label_outer()\n >>> librosa.display.specshow(librosa.amplitude_to_db(np.abs(S_inv - S),\n ... ref=S.max(), top_db=None),\n ... 
vmax=0, y_axis='log', x_axis='time', cmap='magma', ax=ax[2])\n >>> ax[2].set(title='Residual error (dB)')\n >>> fig.colorbar(img, ax=ax, format=\"%+2.f dB\")\n \"\"\"\n\n # Construct a mel basis with dtype matching the input data\n mel_basis = filters.mel(\n sr=sr, n_fft=n_fft, n_mels=M.shape[-2], dtype=M.dtype, **kwargs\n )\n\n # Find the non-negative least squares solution, and apply\n # the inverse exponent.\n # We'll do the exponentiation in-place.\n inverse = nnls(mel_basis, M)\n return np.power(inverse, 1.0 / power, out=inverse)\n\n\ndef mel_to_audio(\n M: np.ndarray,\n *,\n sr: float = 22050,\n n_fft: int = 2048,\n hop_length: Optional[int] = None,\n win_length: Optional[int] = None,\n window: _WindowSpec = \"hann\",\n center: bool = True,\n pad_mode: _PadModeSTFT = \"constant\",\n power: float = 2.0,\n n_iter: int = 32,\n length: Optional[int] = None,\n dtype: DTypeLike = np.float32,\n **kwargs: Any,\n) -> np.ndarray:\n \"\"\"Invert a mel power spectrogram to audio using Griffin-Lim.\n\n This is primarily a convenience wrapper for:\n\n >>> S = librosa.feature.inverse.mel_to_stft(M)\n >>> y = librosa.griffinlim(S)\n\n Parameters\n ----------\n M : np.ndarray [shape=(..., n_mels, n), non-negative]\n The spectrogram as produced by `feature.melspectrogram`\n sr : number > 0 [scalar]\n sampling rate of the underlying signal\n n_fft : int > 0 [scalar]\n number of FFT components in the resulting STFT\n hop_length : None or int > 0\n The hop length of the STFT. If not provided, it will default to ``n_fft // 4``\n win_length : None or int > 0\n The window length of the STFT. By default, it will equal ``n_fft``\n window : string, tuple, number, function, or np.ndarray [shape=(n_fft,)]\n A window specification as supported by `stft` or `istft`\n center : boolean\n If `True`, the STFT is assumed to use centered frames.\n If `False`, the STFT is assumed to use left-aligned frames.\n pad_mode : string\n If ``center=True``, the padding mode to use at the edges of the signal.\n By default, STFT uses zero padding.\n power : float > 0 [scalar]\n Exponent for the magnitude melspectrogram\n n_iter : int > 0\n The number of iterations for Griffin-Lim\n length : None or int > 0\n If provided, the output ``y`` is zero-padded or clipped to exactly ``length``\n samples.\n dtype : np.dtype\n Real numeric type for the time-domain signal. Default is 32-bit float.\n **kwargs : additional keyword arguments for Mel filter bank parameters\n n_mels : int > 0 [scalar]\n number of Mel bands to generate\n fmin : float >= 0 [scalar]\n lowest frequency (in Hz)\n fmax : float >= 0 [scalar]\n highest frequency (in Hz).\n If `None`, use ``fmax = sr / 2.0``\n htk : bool [scalar]\n use HTK formula instead of Slaney\n norm : {None, 'slaney', or number} [scalar]\n If 'slaney', divide the triangular mel weights by the width of\n the mel band (area normalization).\n If numeric, use `librosa.util.normalize` to normalize each filter\n by to unit l_p norm. 
See `librosa.util.normalize` for a full\n description of supported norm values (including `+-np.inf`).\n Otherwise, leave all the triangles aiming for a peak value of 1.0\n\n Returns\n -------\n y : np.ndarray [shape(..., n,)]\n time-domain signal reconstructed from ``M``\n\n See Also\n --------\n librosa.griffinlim\n librosa.feature.melspectrogram\n librosa.filters.mel\n librosa.feature.inverse.mel_to_stft\n \"\"\"\n\n stft = mel_to_stft(M, sr=sr, n_fft=n_fft, power=power, **kwargs)\n\n return griffinlim(\n stft,\n n_iter=n_iter,\n hop_length=hop_length,\n win_length=win_length,\n n_fft=n_fft,\n window=window,\n center=center,\n dtype=dtype,\n length=length,\n pad_mode=pad_mode,\n )\n\n\ndef mfcc_to_mel(\n mfcc: np.ndarray,\n *,\n n_mels: int = 128,\n dct_type: int = 2,\n norm: Optional[str] = \"ortho\",\n ref: float = 1.0,\n lifter: float = 0,\n) -> np.ndarray:\n \"\"\"Invert Mel-frequency cepstral coefficients to approximate a Mel power\n spectrogram.\n\n This inversion proceeds in two steps:\n\n 1. The inverse DCT is applied to the MFCCs\n 2. `librosa.db_to_power` is applied to map the dB-scaled result to a power spectrogram\n\n Parameters\n ----------\n mfcc : np.ndarray [shape=(..., n_mfcc, n)]\n The Mel-frequency cepstral coefficients\n n_mels : int > 0\n The number of Mel frequencies\n dct_type : {1, 2, 3}\n Discrete cosine transform (DCT) type\n By default, DCT type-2 is used.\n norm : None or 'ortho'\n If ``dct_type`` is `2 or 3`, setting ``norm='ortho'`` uses an orthonormal\n DCT basis.\n Normalization is not supported for `dct_type=1`.\n ref : float\n Reference power for (inverse) decibel calculation\n lifter : number >= 0\n If ``lifter>0``, apply inverse liftering (inverse cepstral filtering)::\n M[n, :] <- M[n, :] / (1 + sin(pi * (n + 1) / lifter) * lifter / 2)\n\n Returns\n -------\n M : np.ndarray [shape=(..., n_mels, n)]\n An approximate Mel power spectrum recovered from ``mfcc``\n\n Warns\n -----\n UserWarning\n due to critical values in lifter array that invokes underflow.\n\n See Also\n --------\n librosa.feature.mfcc\n librosa.feature.melspectrogram\n scipy.fftpack.dct\n \"\"\"\n if lifter > 0:\n n_mfcc = mfcc.shape[-2]\n idx = np.arange(1, 1 + n_mfcc, dtype=mfcc.dtype)\n idx = expand_to(idx, ndim=mfcc.ndim, axes=-2)\n lifter_sine = 1 + lifter * 0.5 * np.sin(np.pi * idx / lifter)\n\n # raise a UserWarning if lifter array includes critical values\n if np.any(np.abs(lifter_sine) < np.finfo(lifter_sine.dtype).eps):\n warnings.warn(\n message=\"lifter array includes critical values that may invoke underflow.\",\n category=UserWarning,\n stacklevel=2,\n )\n\n # lifter mfcc values\n mfcc = mfcc / (lifter_sine + tiny(mfcc))\n\n elif lifter != 0:\n raise ParameterError(\"MFCC to mel lifter must be a non-negative number.\")\n\n logmel = scipy.fftpack.idct(mfcc, axis=-2, type=dct_type, norm=norm, n=n_mels)\n return db_to_power(logmel, ref=ref)\n\n\ndef mfcc_to_audio(\n mfcc: np.ndarray,\n *,\n n_mels: int = 128,\n dct_type: int = 2,\n norm: Optional[str] = \"ortho\",\n ref: float = 1.0,\n lifter: float = 0,\n **kwargs: Any,\n) -> np.ndarray:\n \"\"\"Convert Mel-frequency cepstral coefficients to a time-domain audio signal\n\n This function is primarily a convenience wrapper for the following steps:\n\n 1. Convert mfcc to Mel power spectrum (`mfcc_to_mel`)\n 2. 
Convert Mel power spectrum to time-domain audio (`mel_to_audio`)\n\n Parameters\n ----------\n mfcc : np.ndarray [shape=(..., n_mfcc, n)]\n The Mel-frequency cepstral coefficients\n n_mels : int > 0\n The number of Mel frequencies\n dct_type : {1, 2, 3}\n Discrete cosine transform (DCT) type\n By default, DCT type-2 is used.\n norm : None or 'ortho'\n If ``dct_type`` is `2 or 3`, setting ``norm='ortho'`` uses an orthonormal\n DCT basis.\n Normalization is not supported for ``dct_type=1``.\n ref : float\n Reference power for (inverse) decibel calculation\n lifter : number >= 0\n If ``lifter>0``, apply inverse liftering (inverse cepstral filtering)::\n M[n, :] <- M[n, :] / (1 + sin(pi * (n + 1) / lifter)) * lifter / 2\n **kwargs : additional keyword arguments to pass through to `mel_to_audio`\n M : np.ndarray [shape=(..., n_mels, n), non-negative]\n The spectrogram as produced by `feature.melspectrogram`\n sr : number > 0 [scalar]\n sampling rate of the underlying signal\n n_fft : int > 0 [scalar]\n number of FFT components in the resulting STFT\n hop_length : None or int > 0\n The hop length of the STFT. If not provided, it will default to ``n_fft // 4``\n win_length : None or int > 0\n The window length of the STFT. By default, it will equal ``n_fft``\n window : string, tuple, number, function, or np.ndarray [shape=(n_fft,)]\n A window specification as supported by `stft` or `istft`\n center : boolean\n If `True`, the STFT is assumed to use centered frames.\n If `False`, the STFT is assumed to use left-aligned frames.\n pad_mode : string\n If ``center=True``, the padding mode to use at the edges of the signal.\n By default, STFT uses zero padding.\n power : float > 0 [scalar]\n Exponent for the magnitude melspectrogram\n n_iter : int > 0\n The number of iterations for Griffin-Lim\n length : None or int > 0\n If provided, the output ``y`` is zero-padded or clipped to exactly ``length``\n samples.\n dtype : np.dtype\n Real numeric type for the time-domain signal. Default is 32-bit float.\n **kwargs : additional keyword arguments for Mel filter bank parameters\n fmin : float >= 0 [scalar]\n lowest frequency (in Hz)\n fmax : float >= 0 [scalar]\n highest frequency (in Hz).\n If `None`, use ``fmax = sr / 2.0``\n htk : bool [scalar]\n use HTK formula instead of Slaney\n\n Returns\n -------\n y : np.ndarray [shape=(..., n)]\n A time-domain signal reconstructed from `mfcc`\n\n See Also\n --------\n mfcc_to_mel\n mel_to_audio\n librosa.feature.mfcc\n librosa.griffinlim\n scipy.fftpack.dct\n \"\"\"\n mel_spec = mfcc_to_mel(\n mfcc, n_mels=n_mels, dct_type=dct_type, norm=norm, ref=ref, lifter=lifter\n )\n\n return mel_to_audio(mel_spec, **kwargs)\n", "path": "librosa/feature/inverse.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\"\"\"Feature inversion\"\"\"\n\nimport warnings\nimport numpy as np\nimport scipy.fftpack\n\nfrom ..util.exceptions import ParameterError\nfrom ..core.spectrum import griffinlim\nfrom ..core.spectrum import db_to_power\nfrom ..util.utils import tiny\nfrom .. 
import filters\nfrom ..util import nnls, expand_to\nfrom numpy.typing import DTypeLike\nfrom typing import Any, Callable, Optional, Union\nfrom .._typing import _WindowSpec, _PadModeSTFT\n\n__all__ = [\"mel_to_stft\", \"mel_to_audio\", \"mfcc_to_mel\", \"mfcc_to_audio\"]\n\n\ndef mel_to_stft(\n M: np.ndarray,\n *,\n sr: float = 22050,\n n_fft: int = 2048,\n power: float = 2.0,\n **kwargs: Any,\n) -> np.ndarray:\n \"\"\"Approximate STFT magnitude from a Mel power spectrogram.\n\n Parameters\n ----------\n M : np.ndarray [shape=(..., n_mels, n), non-negative]\n The spectrogram as produced by `feature.melspectrogram`\n sr : number > 0 [scalar]\n sampling rate of the underlying signal\n n_fft : int > 0 [scalar]\n number of FFT components in the resulting STFT\n power : float > 0 [scalar]\n Exponent for the magnitude melspectrogram\n **kwargs : additional keyword arguments for Mel filter bank parameters\n n_mels : int > 0 [scalar]\n number of Mel bands to generate\n fmin : float >= 0 [scalar]\n lowest frequency (in Hz)\n fmax : float >= 0 [scalar]\n highest frequency (in Hz).\n If `None`, use ``fmax = sr / 2.0``\n htk : bool [scalar]\n use HTK formula instead of Slaney\n norm : {None, 'slaney', or number} [scalar]\n If 'slaney', divide the triangular mel weights by the width of\n the mel band (area normalization).\n If numeric, use `librosa.util.normalize` to normalize each filter\n by to unit l_p norm. See `librosa.util.normalize` for a full\n description of supported norm values (including `+-np.inf`).\n Otherwise, leave all the triangles aiming for a peak value of 1.0\n dtype : np.dtype\n The data type of the output basis.\n By default, uses 32-bit (single-precision) floating point.\n\n Returns\n -------\n S : np.ndarray [shape=(..., n_fft, t), non-negative]\n An approximate linear magnitude spectrogram\n\n See Also\n --------\n librosa.feature.melspectrogram\n librosa.stft\n librosa.filters.mel\n librosa.util.nnls\n\n Examples\n --------\n >>> y, sr = librosa.load(librosa.ex('trumpet'))\n >>> S = librosa.util.abs2(librosa.stft(y))\n >>> mel_spec = librosa.feature.melspectrogram(S=S, sr=sr)\n >>> S_inv = librosa.feature.inverse.mel_to_stft(mel_spec, sr=sr)\n\n Compare the results visually\n\n >>> import matplotlib.pyplot as plt\n >>> fig, ax = plt.subplots(nrows=3, sharex=True, sharey=True)\n >>> img = librosa.display.specshow(librosa.amplitude_to_db(S, ref=np.max, top_db=None),\n ... y_axis='log', x_axis='time', ax=ax[0])\n >>> ax[0].set(title='Original STFT')\n >>> ax[0].label_outer()\n >>> librosa.display.specshow(librosa.amplitude_to_db(S_inv, ref=np.max, top_db=None),\n ... y_axis='log', x_axis='time', ax=ax[1])\n >>> ax[1].set(title='Reconstructed STFT')\n >>> ax[1].label_outer()\n >>> librosa.display.specshow(librosa.amplitude_to_db(np.abs(S_inv - S),\n ... ref=S.max(), top_db=None),\n ... 
vmax=0, y_axis='log', x_axis='time', cmap='magma', ax=ax[2])\n >>> ax[2].set(title='Residual error (dB)')\n >>> fig.colorbar(img, ax=ax, format=\"%+2.f dB\")\n \"\"\"\n\n # Construct a mel basis with dtype matching the input data\n mel_basis = filters.mel(\n sr=sr, n_fft=n_fft, n_mels=M.shape[-2], dtype=M.dtype, **kwargs\n )\n\n # Find the non-negative least squares solution, and apply\n # the inverse exponent.\n # We'll do the exponentiation in-place.\n inverse = nnls(mel_basis, M)\n return np.power(inverse, 1.0 / power, out=inverse)\n\n\ndef mel_to_audio(\n M: np.ndarray,\n *,\n sr: float = 22050,\n n_fft: int = 2048,\n hop_length: Optional[int] = None,\n win_length: Optional[int] = None,\n window: _WindowSpec = \"hann\",\n center: bool = True,\n pad_mode: _PadModeSTFT = \"constant\",\n power: float = 2.0,\n n_iter: int = 32,\n length: Optional[int] = None,\n dtype: DTypeLike = np.float32,\n **kwargs: Any,\n) -> np.ndarray:\n \"\"\"Invert a mel power spectrogram to audio using Griffin-Lim.\n\n This is primarily a convenience wrapper for:\n\n >>> S = librosa.feature.inverse.mel_to_stft(M)\n >>> y = librosa.griffinlim(S)\n\n Parameters\n ----------\n M : np.ndarray [shape=(..., n_mels, n), non-negative]\n The spectrogram as produced by `feature.melspectrogram`\n sr : number > 0 [scalar]\n sampling rate of the underlying signal\n n_fft : int > 0 [scalar]\n number of FFT components in the resulting STFT\n hop_length : None or int > 0\n The hop length of the STFT. If not provided, it will default to ``n_fft // 4``\n win_length : None or int > 0\n The window length of the STFT. By default, it will equal ``n_fft``\n window : string, tuple, number, function, or np.ndarray [shape=(n_fft,)]\n A window specification as supported by `stft` or `istft`\n center : boolean\n If `True`, the STFT is assumed to use centered frames.\n If `False`, the STFT is assumed to use left-aligned frames.\n pad_mode : string\n If ``center=True``, the padding mode to use at the edges of the signal.\n By default, STFT uses zero padding.\n power : float > 0 [scalar]\n Exponent for the magnitude melspectrogram\n n_iter : int > 0\n The number of iterations for Griffin-Lim\n length : None or int > 0\n If provided, the output ``y`` is zero-padded or clipped to exactly ``length``\n samples.\n dtype : np.dtype\n Real numeric type for the time-domain signal. Default is 32-bit float.\n **kwargs : additional keyword arguments for Mel filter bank parameters\n n_mels : int > 0 [scalar]\n number of Mel bands to generate\n fmin : float >= 0 [scalar]\n lowest frequency (in Hz)\n fmax : float >= 0 [scalar]\n highest frequency (in Hz).\n If `None`, use ``fmax = sr / 2.0``\n htk : bool [scalar]\n use HTK formula instead of Slaney\n norm : {None, 'slaney', or number} [scalar]\n If 'slaney', divide the triangular mel weights by the width of\n the mel band (area normalization).\n If numeric, use `librosa.util.normalize` to normalize each filter\n by to unit l_p norm. 
See `librosa.util.normalize` for a full\n description of supported norm values (including `+-np.inf`).\n Otherwise, leave all the triangles aiming for a peak value of 1.0\n\n Returns\n -------\n y : np.ndarray [shape(..., n,)]\n time-domain signal reconstructed from ``M``\n\n See Also\n --------\n librosa.griffinlim\n librosa.feature.melspectrogram\n librosa.filters.mel\n librosa.feature.inverse.mel_to_stft\n \"\"\"\n\n stft = mel_to_stft(M, sr=sr, n_fft=n_fft, power=power, **kwargs)\n\n return griffinlim(\n stft,\n n_iter=n_iter,\n hop_length=hop_length,\n win_length=win_length,\n n_fft=n_fft,\n window=window,\n center=center,\n dtype=dtype,\n length=length,\n pad_mode=pad_mode,\n )\n\n\ndef mfcc_to_mel(\n mfcc: np.ndarray,\n *,\n n_mels: int = 128,\n dct_type: int = 2,\n norm: Optional[str] = \"ortho\",\n ref: float = 1.0,\n lifter: float = 0,\n) -> np.ndarray:\n \"\"\"Invert Mel-frequency cepstral coefficients to approximate a Mel power\n spectrogram.\n\n This inversion proceeds in two steps:\n\n 1. The inverse DCT is applied to the MFCCs\n 2. `librosa.db_to_power` is applied to map the dB-scaled result to a power spectrogram\n\n Parameters\n ----------\n mfcc : np.ndarray [shape=(..., n_mfcc, n)]\n The Mel-frequency cepstral coefficients\n n_mels : int > 0\n The number of Mel frequencies\n dct_type : {1, 2, 3}\n Discrete cosine transform (DCT) type\n By default, DCT type-2 is used.\n norm : None or 'ortho'\n If ``dct_type`` is `2 or 3`, setting ``norm='ortho'`` uses an orthonormal\n DCT basis.\n Normalization is not supported for `dct_type=1`.\n ref : float\n Reference power for (inverse) decibel calculation\n lifter : number >= 0\n If ``lifter>0``, apply inverse liftering (inverse cepstral filtering)::\n M[n, :] <- M[n, :] / (1 + sin(pi * (n + 1) / lifter) * lifter / 2)\n\n Returns\n -------\n M : np.ndarray [shape=(..., n_mels, n)]\n An approximate Mel power spectrum recovered from ``mfcc``\n\n Warns\n -----\n UserWarning\n due to critical values in lifter array that invokes underflow.\n\n See Also\n --------\n librosa.feature.mfcc\n librosa.feature.melspectrogram\n scipy.fftpack.dct\n \"\"\"\n if lifter > 0:\n n_mfcc = mfcc.shape[-2]\n idx = np.arange(1, 1 + n_mfcc, dtype=mfcc.dtype)\n idx = expand_to(idx, ndim=mfcc.ndim, axes=-2)\n lifter_sine = 1 + lifter * 0.5 * np.sin(np.pi * idx / lifter)\n\n # raise a UserWarning if lifter array includes critical values\n if np.any(np.abs(lifter_sine) < np.finfo(lifter_sine.dtype).eps):\n warnings.warn(\n message=\"lifter array includes critical values that may invoke underflow.\",\n category=UserWarning,\n stacklevel=2,\n )\n\n # lifter mfcc values\n mfcc = mfcc / (lifter_sine + tiny(mfcc))\n\n elif lifter != 0:\n raise ParameterError(\"MFCC to mel lifter must be a non-negative number.\")\n\n logmel = scipy.fftpack.idct(mfcc, axis=-2, type=dct_type, norm=norm, n=n_mels)\n return db_to_power(logmel, ref=ref)\n\n\ndef mfcc_to_audio(\n mfcc: np.ndarray,\n *,\n n_mels: int = 128,\n dct_type: int = 2,\n norm: Optional[str] = \"ortho\",\n ref: float = 1.0,\n lifter: float = 0,\n **kwargs: Any,\n) -> np.ndarray:\n \"\"\"Convert Mel-frequency cepstral coefficients to a time-domain audio signal\n\n This function is primarily a convenience wrapper for the following steps:\n\n 1. Convert mfcc to Mel power spectrum (`mfcc_to_mel`)\n 2. 
Convert Mel power spectrum to time-domain audio (`mel_to_audio`)\n\n Parameters\n ----------\n mfcc : np.ndarray [shape=(..., n_mfcc, n)]\n The Mel-frequency cepstral coefficients\n n_mels : int > 0\n The number of Mel frequencies\n dct_type : {1, 2, 3}\n Discrete cosine transform (DCT) type\n By default, DCT type-2 is used.\n norm : None or 'ortho'\n If ``dct_type`` is `2 or 3`, setting ``norm='ortho'`` uses an orthonormal\n DCT basis.\n Normalization is not supported for ``dct_type=1``.\n ref : float\n Reference power for (inverse) decibel calculation\n lifter : number >= 0\n If ``lifter>0``, apply inverse liftering (inverse cepstral filtering)::\n M[n, :] <- M[n, :] / (1 + sin(pi * (n + 1) / lifter)) * lifter / 2\n **kwargs : additional keyword arguments to pass through to `mel_to_audio`\n M : np.ndarray [shape=(..., n_mels, n), non-negative]\n The spectrogram as produced by `feature.melspectrogram`\n sr : number > 0 [scalar]\n sampling rate of the underlying signal\n n_fft : int > 0 [scalar]\n number of FFT components in the resulting STFT\n hop_length : None or int > 0\n The hop length of the STFT. If not provided, it will default to ``n_fft // 4``\n win_length : None or int > 0\n The window length of the STFT. By default, it will equal ``n_fft``\n window : string, tuple, number, function, or np.ndarray [shape=(n_fft,)]\n A window specification as supported by `stft` or `istft`\n center : boolean\n If `True`, the STFT is assumed to use centered frames.\n If `False`, the STFT is assumed to use left-aligned frames.\n pad_mode : string\n If ``center=True``, the padding mode to use at the edges of the signal.\n By default, STFT uses zero padding.\n power : float > 0 [scalar]\n Exponent for the magnitude melspectrogram\n n_iter : int > 0\n The number of iterations for Griffin-Lim\n length : None or int > 0\n If provided, the output ``y`` is zero-padded or clipped to exactly ``length``\n samples.\n dtype : np.dtype\n Real numeric type for the time-domain signal. Default is 32-bit float.\n **kwargs : additional keyword arguments for Mel filter bank parameters\n fmin : float >= 0 [scalar]\n lowest frequency (in Hz)\n fmax : float >= 0 [scalar]\n highest frequency (in Hz).\n If `None`, use ``fmax = sr / 2.0``\n htk : bool [scalar]\n use HTK formula instead of Slaney\n\n Returns\n -------\n y : np.ndarray [shape=(..., n)]\n A time-domain signal reconstructed from `mfcc`\n\n See Also\n --------\n mfcc_to_mel\n mel_to_audio\n librosa.feature.mfcc\n librosa.griffinlim\n scipy.fftpack.dct\n \"\"\"\n mel_spec = mfcc_to_mel(\n mfcc, n_mels=n_mels, dct_type=dct_type, norm=norm, ref=ref, lifter=lifter\n )\n\n return mel_to_audio(mel_spec, **kwargs)\n", "path": "librosa/feature/inverse.py"}]} |
gh_patches_debug_1134 | rasdani/github-patches | git_diff | docarray__docarray-60 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
fix: fix tags type after pydantic model
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docarray/document/pydantic_model.py`
Content:
```
1 from typing import Optional, List, Dict, Any, TYPE_CHECKING, Union
2
3 from pydantic import BaseModel, validator
4
5 from ..math.ndarray import to_list
6
7 if TYPE_CHECKING:
8 from ..types import ArrayType
9
10 _ProtoValueType = Optional[Union[str, bool, float]]
11 _StructValueType = Union[
12 _ProtoValueType, List[_ProtoValueType], Dict[str, _ProtoValueType]
13 ]
14
15
16 def _convert_ndarray_to_list(v: 'ArrayType'):
17 if v is not None:
18 return to_list(v)
19
20
21 class PydanticDocument(BaseModel):
22 id: str
23 parent_id: Optional[str]
24 granularity: Optional[int]
25 adjacency: Optional[int]
26 blob: Optional[bytes]
27 tensor: Optional[Any]
28 mime_type: Optional[str]
29 text: Optional[str]
30 weight: Optional[float]
31 uri: Optional[str]
32 tags: Optional[Dict[str, '_StructValueType']]
33 offset: Optional[float]
34 location: Optional[List[float]]
35 embedding: Optional[Any]
36 modality: Optional[str]
37 evaluations: Optional[Dict[str, Dict[str, '_StructValueType']]]
38 scores: Optional[Dict[str, Dict[str, '_StructValueType']]]
39 chunks: Optional[List['PydanticDocument']]
40 matches: Optional[List['PydanticDocument']]
41
42 _tensor2list = validator('tensor', allow_reuse=True)(_convert_ndarray_to_list)
43 _embedding2list = validator('embedding', allow_reuse=True)(_convert_ndarray_to_list)
44
45
46 PydanticDocument.update_forward_refs()
47
48 PydanticDocumentArray = List[PydanticDocument]
49
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docarray/document/pydantic_model.py b/docarray/document/pydantic_model.py
--- a/docarray/document/pydantic_model.py
+++ b/docarray/document/pydantic_model.py
@@ -7,7 +7,8 @@
if TYPE_CHECKING:
from ..types import ArrayType
-_ProtoValueType = Optional[Union[str, bool, float]]
+# this order must be preserved: https://pydantic-docs.helpmanual.io/usage/types/#unions
+_ProtoValueType = Optional[Union[bool, float, str]]
_StructValueType = Union[
_ProtoValueType, List[_ProtoValueType], Dict[str, _ProtoValueType]
]
| {"golden_diff": "diff --git a/docarray/document/pydantic_model.py b/docarray/document/pydantic_model.py\n--- a/docarray/document/pydantic_model.py\n+++ b/docarray/document/pydantic_model.py\n@@ -7,7 +7,8 @@\n if TYPE_CHECKING:\n from ..types import ArrayType\n \n-_ProtoValueType = Optional[Union[str, bool, float]]\n+# this order must be preserved: https://pydantic-docs.helpmanual.io/usage/types/#unions\n+_ProtoValueType = Optional[Union[bool, float, str]]\n _StructValueType = Union[\n _ProtoValueType, List[_ProtoValueType], Dict[str, _ProtoValueType]\n ]\n", "issue": "fix: fix tags type after pydantic model\n\n", "before_files": [{"content": "from typing import Optional, List, Dict, Any, TYPE_CHECKING, Union\n\nfrom pydantic import BaseModel, validator\n\nfrom ..math.ndarray import to_list\n\nif TYPE_CHECKING:\n from ..types import ArrayType\n\n_ProtoValueType = Optional[Union[str, bool, float]]\n_StructValueType = Union[\n _ProtoValueType, List[_ProtoValueType], Dict[str, _ProtoValueType]\n]\n\n\ndef _convert_ndarray_to_list(v: 'ArrayType'):\n if v is not None:\n return to_list(v)\n\n\nclass PydanticDocument(BaseModel):\n id: str\n parent_id: Optional[str]\n granularity: Optional[int]\n adjacency: Optional[int]\n blob: Optional[bytes]\n tensor: Optional[Any]\n mime_type: Optional[str]\n text: Optional[str]\n weight: Optional[float]\n uri: Optional[str]\n tags: Optional[Dict[str, '_StructValueType']]\n offset: Optional[float]\n location: Optional[List[float]]\n embedding: Optional[Any]\n modality: Optional[str]\n evaluations: Optional[Dict[str, Dict[str, '_StructValueType']]]\n scores: Optional[Dict[str, Dict[str, '_StructValueType']]]\n chunks: Optional[List['PydanticDocument']]\n matches: Optional[List['PydanticDocument']]\n\n _tensor2list = validator('tensor', allow_reuse=True)(_convert_ndarray_to_list)\n _embedding2list = validator('embedding', allow_reuse=True)(_convert_ndarray_to_list)\n\n\nPydanticDocument.update_forward_refs()\n\nPydanticDocumentArray = List[PydanticDocument]\n", "path": "docarray/document/pydantic_model.py"}], "after_files": [{"content": "from typing import Optional, List, Dict, Any, TYPE_CHECKING, Union\n\nfrom pydantic import BaseModel, validator\n\nfrom ..math.ndarray import to_list\n\nif TYPE_CHECKING:\n from ..types import ArrayType\n\n# this order must be preserved: https://pydantic-docs.helpmanual.io/usage/types/#unions\n_ProtoValueType = Optional[Union[bool, float, str]]\n_StructValueType = Union[\n _ProtoValueType, List[_ProtoValueType], Dict[str, _ProtoValueType]\n]\n\n\ndef _convert_ndarray_to_list(v: 'ArrayType'):\n if v is not None:\n return to_list(v)\n\n\nclass PydanticDocument(BaseModel):\n id: str\n parent_id: Optional[str]\n granularity: Optional[int]\n adjacency: Optional[int]\n blob: Optional[bytes]\n tensor: Optional[Any]\n mime_type: Optional[str]\n text: Optional[str]\n weight: Optional[float]\n uri: Optional[str]\n tags: Optional[Dict[str, '_StructValueType']]\n offset: Optional[float]\n location: Optional[List[float]]\n embedding: Optional[Any]\n modality: Optional[str]\n evaluations: Optional[Dict[str, Dict[str, '_StructValueType']]]\n scores: Optional[Dict[str, Dict[str, '_StructValueType']]]\n chunks: Optional[List['PydanticDocument']]\n matches: Optional[List['PydanticDocument']]\n\n _tensor2list = validator('tensor', allow_reuse=True)(_convert_ndarray_to_list)\n _embedding2list = validator('embedding', allow_reuse=True)(_convert_ndarray_to_list)\n\n\nPydanticDocument.update_forward_refs()\n\nPydanticDocumentArray = 
List[PydanticDocument]\n", "path": "docarray/document/pydantic_model.py"}]} |
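The golden diff above works because pydantic (v1 semantics assumed here) validates a `Union` by trying its members left to right: with `Union[str, bool, float]`, the `str` member coerces booleans and floats to strings before the stricter types are tried, which is exactly what the reordering avoids. The sketch below is only an illustration and is not part of the dataset record; the model name `TagDemo` and the sample values are hypothetical, and only the two union orderings are taken from the diff.

```python
# Minimal sketch, assuming pydantic v1. Only the union orderings mirror the patch;
# TagDemo and the sample values are made up for illustration.
from typing import Dict, Optional, Union

from pydantic import BaseModel


class TagDemo(BaseModel):
    # Original order: str is tried first, so bools/floats get coerced to strings.
    loose: Optional[Dict[str, Union[str, bool, float]]]
    # Patched order: bool and float are tried before str, preserving the types.
    strict: Optional[Dict[str, Union[bool, float, str]]]


demo = TagDemo(loose={"flag": True, "score": 0.5}, strict={"flag": True, "score": 0.5})
print(demo.loose)   # {'flag': 'True', 'score': '0.5'}  (values silently stringified)
print(demo.strict)  # {'flag': True, 'score': 0.5}      (original types preserved)
```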
gh_patches_debug_1135 | rasdani/github-patches | git_diff | Cog-Creators__Red-DiscordBot-2754 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[3.1.2][Core] Conflicting commands empty error message
# Command bugs
#### Command name
`[p]load`
#### What cog is this command from?
Core
#### What were you expecting to happen?
When a cog with a conflicting command is loaded, it should show a related error message
#### What actually happened?
No specific error is shown; only the generic wrapping message "This package could not be loaded for the following reason:" appears, with an empty reason after it.
#### How can we reproduce this issue?
1) Load a cog whose command conflicts with a command from an already loaded cog

Tried force-reinstalling Red with --no-cache-dir, re-adding the repo, reinstalling the cog, etc.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `redbot/core/bot.py`
Content:
```
1 import asyncio
2 import inspect
3 import os
4 import logging
5 from collections import Counter
6 from enum import Enum
7 from importlib.machinery import ModuleSpec
8 from pathlib import Path
9 from typing import Optional, Union, List
10
11 import discord
12 import sys
13 from discord.ext.commands import when_mentioned_or
14
15 from . import Config, i18n, commands, errors
16 from .cog_manager import CogManager
17
18 from .rpc import RPCMixin
19 from .utils import common_filters
20
21 CUSTOM_GROUPS = "CUSTOM_GROUPS"
22
23 log = logging.getLogger("redbot")
24
25
26 def _is_submodule(parent, child):
27 return parent == child or child.startswith(parent + ".")
28
29
30 class RedBase(commands.GroupMixin, commands.bot.BotBase, RPCMixin):
31 """Mixin for the main bot class.
32
33 This exists because `Red` inherits from `discord.AutoShardedClient`, which
34 is something other bot classes may not want to have as a parent class.
35 """
36
37 def __init__(self, *args, cli_flags=None, bot_dir: Path = Path.cwd(), **kwargs):
38 self._shutdown_mode = ExitCodes.CRITICAL
39 self.db = Config.get_core_conf(force_registration=True)
40 self._co_owners = cli_flags.co_owner
41 self.rpc_enabled = cli_flags.rpc
42 self._last_exception = None
43 self.db.register_global(
44 token=None,
45 prefix=[],
46 packages=[],
47 owner=None,
48 whitelist=[],
49 blacklist=[],
50 locale="en-US",
51 embeds=True,
52 color=15158332,
53 fuzzy=False,
54 custom_info=None,
55 help__page_char_limit=1000,
56 help__max_pages_in_guild=2,
57 help__use_menus=False,
58 help__show_hidden=False,
59 help__verify_checks=True,
60 help__verify_exists=False,
61 help__tagline="",
62 disabled_commands=[],
63 disabled_command_msg="That command is disabled.",
64 api_tokens={},
65 extra_owner_destinations=[],
66 owner_opt_out_list=[],
67 )
68
69 self.db.register_guild(
70 prefix=[],
71 whitelist=[],
72 blacklist=[],
73 admin_role=None,
74 mod_role=None,
75 embeds=None,
76 use_bot_color=False,
77 fuzzy=False,
78 disabled_commands=[],
79 autoimmune_ids=[],
80 )
81
82 self.db.register_user(embeds=None)
83
84 self.db.init_custom(CUSTOM_GROUPS, 2)
85 self.db.register_custom(CUSTOM_GROUPS)
86
87 async def prefix_manager(bot, message):
88 if not cli_flags.prefix:
89 global_prefix = await bot.db.prefix()
90 else:
91 global_prefix = cli_flags.prefix
92 if message.guild is None:
93 return global_prefix
94 server_prefix = await bot.db.guild(message.guild).prefix()
95 if cli_flags.mentionable:
96 return (
97 when_mentioned_or(*server_prefix)(bot, message)
98 if server_prefix
99 else when_mentioned_or(*global_prefix)(bot, message)
100 )
101 else:
102 return server_prefix if server_prefix else global_prefix
103
104 if "command_prefix" not in kwargs:
105 kwargs["command_prefix"] = prefix_manager
106
107 if cli_flags.owner and "owner_id" not in kwargs:
108 kwargs["owner_id"] = cli_flags.owner
109
110 if "owner_id" not in kwargs:
111 loop = asyncio.get_event_loop()
112 loop.run_until_complete(self._dict_abuse(kwargs))
113
114 if "command_not_found" not in kwargs:
115 kwargs["command_not_found"] = "Command {} not found.\n{}"
116
117 self.counter = Counter()
118 self.uptime = None
119 self.checked_time_accuracy = None
120 self.color = discord.Embed.Empty # This is needed or color ends up 0x000000
121
122 self.main_dir = bot_dir
123
124 self.cog_mgr = CogManager()
125
126 super().__init__(*args, help_command=None, **kwargs)
127 # Do not manually use the help formatter attribute here, see `send_help_for`,
128 # for a documented API. The internals of this object are still subject to change.
129 self._help_formatter = commands.help.RedHelpFormatter()
130 self.add_command(commands.help.red_help)
131
132 self._permissions_hooks: List[commands.CheckPredicate] = []
133
134 async def send_help_for(
135 self, ctx: commands.Context, help_for: Union[commands.Command, commands.GroupMixin, str]
136 ):
137 """
138 Invokes Red's helpformatter for a given context and object.
139 """
140 return await self._help_formatter.send_help(ctx, help_for)
141
142 async def _dict_abuse(self, indict):
143 """
144 Please blame <@269933075037814786> for this.
145
146 :param indict:
147 :return:
148 """
149
150 indict["owner_id"] = await self.db.owner()
151 i18n.set_locale(await self.db.locale())
152
153 async def embed_requested(self, channel, user, command=None) -> bool:
154 """
155 Determine if an embed is requested for a response.
156
157 Parameters
158 ----------
159 channel : `discord.abc.GuildChannel` or `discord.abc.PrivateChannel`
160 The channel to check embed settings for.
161 user : `discord.abc.User`
162 The user to check embed settings for.
163 command
164 (Optional) the command ran.
165
166 Returns
167 -------
168 bool
169 :code:`True` if an embed is requested
170 """
171 if isinstance(channel, discord.abc.PrivateChannel) or (
172 command and command == self.get_command("help")
173 ):
174 user_setting = await self.db.user(user).embeds()
175 if user_setting is not None:
176 return user_setting
177 else:
178 guild_setting = await self.db.guild(channel.guild).embeds()
179 if guild_setting is not None:
180 return guild_setting
181 global_setting = await self.db.embeds()
182 return global_setting
183
184 async def is_owner(self, user):
185 if user.id in self._co_owners:
186 return True
187 return await super().is_owner(user)
188
189 async def is_admin(self, member: discord.Member):
190 """Checks if a member is an admin of their guild."""
191 admin_role = await self.db.guild(member.guild).admin_role()
192 try:
193 if any(role.id == admin_role for role in member.roles):
194 return True
195 except AttributeError: # someone passed a webhook to this
196 pass
197 return False
198
199 async def is_mod(self, member: discord.Member):
200 """Checks if a member is a mod or admin of their guild."""
201 mod_role = await self.db.guild(member.guild).mod_role()
202 admin_role = await self.db.guild(member.guild).admin_role()
203 try:
204 if any(role.id in (mod_role, admin_role) for role in member.roles):
205 return True
206 except AttributeError: # someone passed a webhook to this
207 pass
208 return False
209
210 async def get_context(self, message, *, cls=commands.Context):
211 return await super().get_context(message, cls=cls)
212
213 async def process_commands(self, message: discord.Message):
214 """
215 modification from the base to do the same thing in the command case
216
217 but dispatch an additional event for cogs which want to handle normal messages
218 differently to command messages,
219 without the overhead of additional get_context calls per cog
220 """
221 if not message.author.bot:
222 ctx = await self.get_context(message)
223 if ctx.valid:
224 return await self.invoke(ctx)
225
226 self.dispatch("message_without_command", message)
227
228 @staticmethod
229 def list_packages():
230 """Lists packages present in the cogs the folder"""
231 return os.listdir("cogs")
232
233 async def save_packages_status(self, packages):
234 await self.db.packages.set(packages)
235
236 async def add_loaded_package(self, pkg_name: str):
237 async with self.db.packages() as curr_pkgs:
238 if pkg_name not in curr_pkgs:
239 curr_pkgs.append(pkg_name)
240
241 async def remove_loaded_package(self, pkg_name: str):
242 async with self.db.packages() as curr_pkgs:
243 while pkg_name in curr_pkgs:
244 curr_pkgs.remove(pkg_name)
245
246 async def load_extension(self, spec: ModuleSpec):
247 # NB: this completely bypasses `discord.ext.commands.Bot._load_from_module_spec`
248 name = spec.name.split(".")[-1]
249 if name in self.extensions:
250 raise errors.PackageAlreadyLoaded(spec)
251
252 lib = spec.loader.load_module()
253 if not hasattr(lib, "setup"):
254 del lib
255 raise discord.ClientException(f"extension {name} does not have a setup function")
256
257 try:
258 if asyncio.iscoroutinefunction(lib.setup):
259 await lib.setup(self)
260 else:
261 lib.setup(self)
262 except Exception as e:
263 self._remove_module_references(lib.__name__)
264 self._call_module_finalizers(lib, name)
265 raise errors.CogLoadError() from e
266 else:
267 self._BotBase__extensions[name] = lib
268
269 def remove_cog(self, cogname: str):
270 cog = self.get_cog(cogname)
271 if cog is None:
272 return
273
274 for cls in inspect.getmro(cog.__class__):
275 try:
276 hook = getattr(cog, f"_{cls.__name__}__permissions_hook")
277 except AttributeError:
278 pass
279 else:
280 self.remove_permissions_hook(hook)
281
282 super().remove_cog(cogname)
283
284 for meth in self.rpc_handlers.pop(cogname.upper(), ()):
285 self.unregister_rpc_handler(meth)
286
287 async def is_automod_immune(
288 self, to_check: Union[discord.Message, commands.Context, discord.abc.User, discord.Role]
289 ) -> bool:
290 """
291 Checks if the user, message, context, or role should be considered immune from automated
292 moderation actions.
293
294 This will return ``False`` in direct messages.
295
296 Parameters
297 ----------
298 to_check : `discord.Message` or `commands.Context` or `discord.abc.User` or `discord.Role`
299 Something to check if it would be immune
300
301 Returns
302 -------
303 bool
304 ``True`` if immune
305
306 """
307 guild = to_check.guild
308 if not guild:
309 return False
310
311 if isinstance(to_check, discord.Role):
312 ids_to_check = [to_check.id]
313 else:
314 author = getattr(to_check, "author", to_check)
315 try:
316 ids_to_check = [r.id for r in author.roles]
317 except AttributeError:
318 # webhook messages are a user not member,
319 # cheaper than isinstance
320 return True # webhooks require significant permissions to enable.
321 else:
322 ids_to_check.append(author.id)
323
324 immune_ids = await self.db.guild(guild).autoimmune_ids()
325
326 return any(i in immune_ids for i in ids_to_check)
327
328 @staticmethod
329 async def send_filtered(
330 destination: discord.abc.Messageable,
331 filter_mass_mentions=True,
332 filter_invite_links=True,
333 filter_all_links=False,
334 **kwargs,
335 ):
336 """
337 This is a convienience wrapper around
338
339 discord.abc.Messageable.send
340
341 It takes the destination you'd like to send to, which filters to apply
342 (defaults on mass mentions, and invite links) and any other parameters
343 normally accepted by destination.send
344
345 This should realistically only be used for responding using user provided
346 input. (unfortunately, including usernames)
347 Manually crafted messages which dont take any user input have no need of this
348 """
349
350 content = kwargs.pop("content", None)
351
352 if content:
353 if filter_mass_mentions:
354 content = common_filters.filter_mass_mentions(content)
355 if filter_invite_links:
356 content = common_filters.filter_invites(content)
357 if filter_all_links:
358 content = common_filters.filter_urls(content)
359
360 await destination.send(content=content, **kwargs)
361
362 def add_cog(self, cog: commands.Cog):
363 if not isinstance(cog, commands.Cog):
364 raise RuntimeError(
365 f"The {cog.__class__.__name__} cog in the {cog.__module__} package does "
366 f"not inherit from the commands.Cog base class. The cog author must update "
367 f"the cog to adhere to this requirement."
368 )
369 if not hasattr(cog, "requires"):
370 commands.Cog.__init__(cog)
371
372 for cls in inspect.getmro(cog.__class__):
373 try:
374 hook = getattr(cog, f"_{cls.__name__}__permissions_hook")
375 except AttributeError:
376 pass
377 else:
378 self.add_permissions_hook(hook)
379
380 for command in cog.__cog_commands__:
381
382 if not isinstance(command, commands.Command):
383 raise RuntimeError(
384 f"The {cog.__class__.__name__} cog in the {cog.__module__} package,"
385 " is not using Red's command module, and cannot be added. "
386 "If this is your cog, please use `from redbot.core import commands`"
387 "in place of `from discord.ext import commands`. For more details on "
388 "this requirement, see this page: "
389 "http://red-discordbot.readthedocs.io/en/v3-develop/framework_commands.html"
390 )
391 super().add_cog(cog)
392 self.dispatch("cog_add", cog)
393 for command in cog.__cog_commands__:
394 self.dispatch("command_add", command)
395
396 def clear_permission_rules(self, guild_id: Optional[int]) -> None:
397 """Clear all permission overrides in a scope.
398
399 Parameters
400 ----------
401 guild_id : Optional[int]
402 The guild ID to wipe permission overrides for. If
403 ``None``, this will clear all global rules and leave all
404 guild rules untouched.
405
406 """
407 for cog in self.cogs.values():
408 cog.requires.clear_all_rules(guild_id)
409 for command in self.walk_commands():
410 command.requires.clear_all_rules(guild_id)
411
412 def add_permissions_hook(self, hook: commands.CheckPredicate) -> None:
413 """Add a permissions hook.
414
415 Permissions hooks are check predicates which are called before
416 calling `Requires.verify`, and they can optionally return an
417 override: ``True`` to allow, ``False`` to deny, and ``None`` to
418 default to normal behaviour.
419
420 Parameters
421 ----------
422 hook
423 A command check predicate which returns ``True``, ``False``
424 or ``None``.
425
426 """
427 self._permissions_hooks.append(hook)
428
429 def remove_permissions_hook(self, hook: commands.CheckPredicate) -> None:
430 """Remove a permissions hook.
431
432 Parameters are the same as those in `add_permissions_hook`.
433
434 Raises
435 ------
436 ValueError
437 If the permissions hook has not been added.
438
439 """
440 self._permissions_hooks.remove(hook)
441
442 async def verify_permissions_hooks(self, ctx: commands.Context) -> Optional[bool]:
443 """Run permissions hooks.
444
445 Parameters
446 ----------
447 ctx : commands.Context
448 The context for the command being invoked.
449
450 Returns
451 -------
452 Optional[bool]
453 ``False`` if any hooks returned ``False``, ``True`` if any
454 hooks return ``True`` and none returned ``False``, ``None``
455 otherwise.
456
457 """
458 hook_results = []
459 for hook in self._permissions_hooks:
460 result = await discord.utils.maybe_coroutine(hook, ctx)
461 if result is not None:
462 hook_results.append(result)
463 if hook_results:
464 if all(hook_results):
465 ctx.permission_state = commands.PermState.ALLOWED_BY_HOOK
466 return True
467 else:
468 ctx.permission_state = commands.PermState.DENIED_BY_HOOK
469 return False
470
471 async def get_owner_notification_destinations(self) -> List[discord.abc.Messageable]:
472 """
473 Gets the users and channels to send to
474 """
475 destinations = []
476 opt_outs = await self.db.owner_opt_out_list()
477 for user_id in (self.owner_id, *self._co_owners):
478 if user_id not in opt_outs:
479 user = self.get_user(user_id)
480 if user:
481 destinations.append(user)
482
483 channel_ids = await self.db.extra_owner_destinations()
484 for channel_id in channel_ids:
485 channel = self.get_channel(channel_id)
486 if channel:
487 destinations.append(channel)
488
489 return destinations
490
491 async def send_to_owners(self, content=None, **kwargs):
492 """
493 This sends something to all owners and their configured extra destinations.
494
495 This takes the same arguments as discord.abc.Messageable.send
496
497 This logs failing sends
498 """
499 destinations = await self.get_owner_notification_destinations()
500
501 async def wrapped_send(location, content=None, **kwargs):
502 try:
503 await location.send(content, **kwargs)
504 except Exception as _exc:
505 log.exception(
506 f"I could not send an owner notification to ({location.id}){location}"
507 )
508
509 sends = [wrapped_send(d, content, **kwargs) for d in destinations]
510 await asyncio.gather(*sends)
511
512
513 class Red(RedBase, discord.AutoShardedClient):
514 """
515 You're welcome Caleb.
516 """
517
518 async def logout(self):
519 """Logs out of Discord and closes all connections."""
520
521 await super().logout()
522
523 async def shutdown(self, *, restart: bool = False):
524 """Gracefully quit Red.
525
526 The program will exit with code :code:`0` by default.
527
528 Parameters
529 ----------
530 restart : bool
531 If :code:`True`, the program will exit with code :code:`26`. If the
532 launcher sees this, it will attempt to restart the bot.
533
534 """
535 if not restart:
536 self._shutdown_mode = ExitCodes.SHUTDOWN
537 else:
538 self._shutdown_mode = ExitCodes.RESTART
539
540 await self.logout()
541
542
543 class ExitCodes(Enum):
544 CRITICAL = 1
545 SHUTDOWN = 0
546 RESTART = 26
547
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/redbot/core/bot.py b/redbot/core/bot.py
--- a/redbot/core/bot.py
+++ b/redbot/core/bot.py
@@ -262,7 +262,7 @@
except Exception as e:
self._remove_module_references(lib.__name__)
self._call_module_finalizers(lib, name)
- raise errors.CogLoadError() from e
+ raise errors.CogLoadError(e) from e
else:
self._BotBase__extensions[name] = lib
| {"golden_diff": "diff --git a/redbot/core/bot.py b/redbot/core/bot.py\n--- a/redbot/core/bot.py\n+++ b/redbot/core/bot.py\n@@ -262,7 +262,7 @@\n except Exception as e:\n self._remove_module_references(lib.__name__)\n self._call_module_finalizers(lib, name)\n- raise errors.CogLoadError() from e\n+ raise errors.CogLoadError(e) from e\n else:\n self._BotBase__extensions[name] = lib\n", "issue": "[3.1.2][Core] Conflicting commands empty error message\n# Command bugs\r\n\r\n#### Command name\r\n\r\n`[p]load`\r\n\r\n#### What cog is this command from?\r\n\r\nCore\r\n\r\n#### What were you expecting to happen?\r\n\r\nWhen a cog with a conflicting command is loaded, it should show a related error message\r\n\r\n#### What actually happened?\r\n\r\nNo specific error is shown, however only the wrapping \"This package could not be loaded for the following reason:\" error seems to be shown.\r\n\r\n#### How can we reproduce this issue?\r\n\r\n1) Load a cog with a conflicting command with an already loaded cog\r\n\r\n\r\n\r\n\r\nTried force-reinstalling Red with --no-cache-dir included, tried re-adding repo, reinstalling cog etc.\n", "before_files": [{"content": "import asyncio\nimport inspect\nimport os\nimport logging\nfrom collections import Counter\nfrom enum import Enum\nfrom importlib.machinery import ModuleSpec\nfrom pathlib import Path\nfrom typing import Optional, Union, List\n\nimport discord\nimport sys\nfrom discord.ext.commands import when_mentioned_or\n\nfrom . import Config, i18n, commands, errors\nfrom .cog_manager import CogManager\n\nfrom .rpc import RPCMixin\nfrom .utils import common_filters\n\nCUSTOM_GROUPS = \"CUSTOM_GROUPS\"\n\nlog = logging.getLogger(\"redbot\")\n\n\ndef _is_submodule(parent, child):\n return parent == child or child.startswith(parent + \".\")\n\n\nclass RedBase(commands.GroupMixin, commands.bot.BotBase, RPCMixin):\n \"\"\"Mixin for the main bot class.\n\n This exists because `Red` inherits from `discord.AutoShardedClient`, which\n is something other bot classes may not want to have as a parent class.\n \"\"\"\n\n def __init__(self, *args, cli_flags=None, bot_dir: Path = Path.cwd(), **kwargs):\n self._shutdown_mode = ExitCodes.CRITICAL\n self.db = Config.get_core_conf(force_registration=True)\n self._co_owners = cli_flags.co_owner\n self.rpc_enabled = cli_flags.rpc\n self._last_exception = None\n self.db.register_global(\n token=None,\n prefix=[],\n packages=[],\n owner=None,\n whitelist=[],\n blacklist=[],\n locale=\"en-US\",\n embeds=True,\n color=15158332,\n fuzzy=False,\n custom_info=None,\n help__page_char_limit=1000,\n help__max_pages_in_guild=2,\n help__use_menus=False,\n help__show_hidden=False,\n help__verify_checks=True,\n help__verify_exists=False,\n help__tagline=\"\",\n disabled_commands=[],\n disabled_command_msg=\"That command is disabled.\",\n api_tokens={},\n extra_owner_destinations=[],\n owner_opt_out_list=[],\n )\n\n self.db.register_guild(\n prefix=[],\n whitelist=[],\n blacklist=[],\n admin_role=None,\n mod_role=None,\n embeds=None,\n use_bot_color=False,\n fuzzy=False,\n disabled_commands=[],\n autoimmune_ids=[],\n )\n\n self.db.register_user(embeds=None)\n\n self.db.init_custom(CUSTOM_GROUPS, 2)\n self.db.register_custom(CUSTOM_GROUPS)\n\n async def prefix_manager(bot, message):\n if not cli_flags.prefix:\n global_prefix = await bot.db.prefix()\n else:\n global_prefix = cli_flags.prefix\n if message.guild is None:\n return global_prefix\n server_prefix = await bot.db.guild(message.guild).prefix()\n if cli_flags.mentionable:\n return (\n 
when_mentioned_or(*server_prefix)(bot, message)\n if server_prefix\n else when_mentioned_or(*global_prefix)(bot, message)\n )\n else:\n return server_prefix if server_prefix else global_prefix\n\n if \"command_prefix\" not in kwargs:\n kwargs[\"command_prefix\"] = prefix_manager\n\n if cli_flags.owner and \"owner_id\" not in kwargs:\n kwargs[\"owner_id\"] = cli_flags.owner\n\n if \"owner_id\" not in kwargs:\n loop = asyncio.get_event_loop()\n loop.run_until_complete(self._dict_abuse(kwargs))\n\n if \"command_not_found\" not in kwargs:\n kwargs[\"command_not_found\"] = \"Command {} not found.\\n{}\"\n\n self.counter = Counter()\n self.uptime = None\n self.checked_time_accuracy = None\n self.color = discord.Embed.Empty # This is needed or color ends up 0x000000\n\n self.main_dir = bot_dir\n\n self.cog_mgr = CogManager()\n\n super().__init__(*args, help_command=None, **kwargs)\n # Do not manually use the help formatter attribute here, see `send_help_for`,\n # for a documented API. The internals of this object are still subject to change.\n self._help_formatter = commands.help.RedHelpFormatter()\n self.add_command(commands.help.red_help)\n\n self._permissions_hooks: List[commands.CheckPredicate] = []\n\n async def send_help_for(\n self, ctx: commands.Context, help_for: Union[commands.Command, commands.GroupMixin, str]\n ):\n \"\"\"\n Invokes Red's helpformatter for a given context and object.\n \"\"\"\n return await self._help_formatter.send_help(ctx, help_for)\n\n async def _dict_abuse(self, indict):\n \"\"\"\n Please blame <@269933075037814786> for this.\n\n :param indict:\n :return:\n \"\"\"\n\n indict[\"owner_id\"] = await self.db.owner()\n i18n.set_locale(await self.db.locale())\n\n async def embed_requested(self, channel, user, command=None) -> bool:\n \"\"\"\n Determine if an embed is requested for a response.\n\n Parameters\n ----------\n channel : `discord.abc.GuildChannel` or `discord.abc.PrivateChannel`\n The channel to check embed settings for.\n user : `discord.abc.User`\n The user to check embed settings for.\n command\n (Optional) the command ran.\n\n Returns\n -------\n bool\n :code:`True` if an embed is requested\n \"\"\"\n if isinstance(channel, discord.abc.PrivateChannel) or (\n command and command == self.get_command(\"help\")\n ):\n user_setting = await self.db.user(user).embeds()\n if user_setting is not None:\n return user_setting\n else:\n guild_setting = await self.db.guild(channel.guild).embeds()\n if guild_setting is not None:\n return guild_setting\n global_setting = await self.db.embeds()\n return global_setting\n\n async def is_owner(self, user):\n if user.id in self._co_owners:\n return True\n return await super().is_owner(user)\n\n async def is_admin(self, member: discord.Member):\n \"\"\"Checks if a member is an admin of their guild.\"\"\"\n admin_role = await self.db.guild(member.guild).admin_role()\n try:\n if any(role.id == admin_role for role in member.roles):\n return True\n except AttributeError: # someone passed a webhook to this\n pass\n return False\n\n async def is_mod(self, member: discord.Member):\n \"\"\"Checks if a member is a mod or admin of their guild.\"\"\"\n mod_role = await self.db.guild(member.guild).mod_role()\n admin_role = await self.db.guild(member.guild).admin_role()\n try:\n if any(role.id in (mod_role, admin_role) for role in member.roles):\n return True\n except AttributeError: # someone passed a webhook to this\n pass\n return False\n\n async def get_context(self, message, *, cls=commands.Context):\n return await 
super().get_context(message, cls=cls)\n\n async def process_commands(self, message: discord.Message):\n \"\"\"\n modification from the base to do the same thing in the command case\n \n but dispatch an additional event for cogs which want to handle normal messages\n differently to command messages, \n without the overhead of additional get_context calls per cog\n \"\"\"\n if not message.author.bot:\n ctx = await self.get_context(message)\n if ctx.valid:\n return await self.invoke(ctx)\n\n self.dispatch(\"message_without_command\", message)\n\n @staticmethod\n def list_packages():\n \"\"\"Lists packages present in the cogs the folder\"\"\"\n return os.listdir(\"cogs\")\n\n async def save_packages_status(self, packages):\n await self.db.packages.set(packages)\n\n async def add_loaded_package(self, pkg_name: str):\n async with self.db.packages() as curr_pkgs:\n if pkg_name not in curr_pkgs:\n curr_pkgs.append(pkg_name)\n\n async def remove_loaded_package(self, pkg_name: str):\n async with self.db.packages() as curr_pkgs:\n while pkg_name in curr_pkgs:\n curr_pkgs.remove(pkg_name)\n\n async def load_extension(self, spec: ModuleSpec):\n # NB: this completely bypasses `discord.ext.commands.Bot._load_from_module_spec`\n name = spec.name.split(\".\")[-1]\n if name in self.extensions:\n raise errors.PackageAlreadyLoaded(spec)\n\n lib = spec.loader.load_module()\n if not hasattr(lib, \"setup\"):\n del lib\n raise discord.ClientException(f\"extension {name} does not have a setup function\")\n\n try:\n if asyncio.iscoroutinefunction(lib.setup):\n await lib.setup(self)\n else:\n lib.setup(self)\n except Exception as e:\n self._remove_module_references(lib.__name__)\n self._call_module_finalizers(lib, name)\n raise errors.CogLoadError() from e\n else:\n self._BotBase__extensions[name] = lib\n\n def remove_cog(self, cogname: str):\n cog = self.get_cog(cogname)\n if cog is None:\n return\n\n for cls in inspect.getmro(cog.__class__):\n try:\n hook = getattr(cog, f\"_{cls.__name__}__permissions_hook\")\n except AttributeError:\n pass\n else:\n self.remove_permissions_hook(hook)\n\n super().remove_cog(cogname)\n\n for meth in self.rpc_handlers.pop(cogname.upper(), ()):\n self.unregister_rpc_handler(meth)\n\n async def is_automod_immune(\n self, to_check: Union[discord.Message, commands.Context, discord.abc.User, discord.Role]\n ) -> bool:\n \"\"\"\n Checks if the user, message, context, or role should be considered immune from automated\n moderation actions.\n\n This will return ``False`` in direct messages.\n\n Parameters\n ----------\n to_check : `discord.Message` or `commands.Context` or `discord.abc.User` or `discord.Role`\n Something to check if it would be immune\n\n Returns\n -------\n bool\n ``True`` if immune\n\n \"\"\"\n guild = to_check.guild\n if not guild:\n return False\n\n if isinstance(to_check, discord.Role):\n ids_to_check = [to_check.id]\n else:\n author = getattr(to_check, \"author\", to_check)\n try:\n ids_to_check = [r.id for r in author.roles]\n except AttributeError:\n # webhook messages are a user not member,\n # cheaper than isinstance\n return True # webhooks require significant permissions to enable.\n else:\n ids_to_check.append(author.id)\n\n immune_ids = await self.db.guild(guild).autoimmune_ids()\n\n return any(i in immune_ids for i in ids_to_check)\n\n @staticmethod\n async def send_filtered(\n destination: discord.abc.Messageable,\n filter_mass_mentions=True,\n filter_invite_links=True,\n filter_all_links=False,\n **kwargs,\n ):\n \"\"\"\n This is a convienience wrapper 
around\n\n discord.abc.Messageable.send\n\n It takes the destination you'd like to send to, which filters to apply\n (defaults on mass mentions, and invite links) and any other parameters\n normally accepted by destination.send\n\n This should realistically only be used for responding using user provided\n input. (unfortunately, including usernames)\n Manually crafted messages which dont take any user input have no need of this\n \"\"\"\n\n content = kwargs.pop(\"content\", None)\n\n if content:\n if filter_mass_mentions:\n content = common_filters.filter_mass_mentions(content)\n if filter_invite_links:\n content = common_filters.filter_invites(content)\n if filter_all_links:\n content = common_filters.filter_urls(content)\n\n await destination.send(content=content, **kwargs)\n\n def add_cog(self, cog: commands.Cog):\n if not isinstance(cog, commands.Cog):\n raise RuntimeError(\n f\"The {cog.__class__.__name__} cog in the {cog.__module__} package does \"\n f\"not inherit from the commands.Cog base class. The cog author must update \"\n f\"the cog to adhere to this requirement.\"\n )\n if not hasattr(cog, \"requires\"):\n commands.Cog.__init__(cog)\n\n for cls in inspect.getmro(cog.__class__):\n try:\n hook = getattr(cog, f\"_{cls.__name__}__permissions_hook\")\n except AttributeError:\n pass\n else:\n self.add_permissions_hook(hook)\n\n for command in cog.__cog_commands__:\n\n if not isinstance(command, commands.Command):\n raise RuntimeError(\n f\"The {cog.__class__.__name__} cog in the {cog.__module__} package,\"\n \" is not using Red's command module, and cannot be added. \"\n \"If this is your cog, please use `from redbot.core import commands`\"\n \"in place of `from discord.ext import commands`. For more details on \"\n \"this requirement, see this page: \"\n \"http://red-discordbot.readthedocs.io/en/v3-develop/framework_commands.html\"\n )\n super().add_cog(cog)\n self.dispatch(\"cog_add\", cog)\n for command in cog.__cog_commands__:\n self.dispatch(\"command_add\", command)\n\n def clear_permission_rules(self, guild_id: Optional[int]) -> None:\n \"\"\"Clear all permission overrides in a scope.\n\n Parameters\n ----------\n guild_id : Optional[int]\n The guild ID to wipe permission overrides for. 
If\n ``None``, this will clear all global rules and leave all\n guild rules untouched.\n\n \"\"\"\n for cog in self.cogs.values():\n cog.requires.clear_all_rules(guild_id)\n for command in self.walk_commands():\n command.requires.clear_all_rules(guild_id)\n\n def add_permissions_hook(self, hook: commands.CheckPredicate) -> None:\n \"\"\"Add a permissions hook.\n\n Permissions hooks are check predicates which are called before\n calling `Requires.verify`, and they can optionally return an\n override: ``True`` to allow, ``False`` to deny, and ``None`` to\n default to normal behaviour.\n\n Parameters\n ----------\n hook\n A command check predicate which returns ``True``, ``False``\n or ``None``.\n\n \"\"\"\n self._permissions_hooks.append(hook)\n\n def remove_permissions_hook(self, hook: commands.CheckPredicate) -> None:\n \"\"\"Remove a permissions hook.\n\n Parameters are the same as those in `add_permissions_hook`.\n\n Raises\n ------\n ValueError\n If the permissions hook has not been added.\n\n \"\"\"\n self._permissions_hooks.remove(hook)\n\n async def verify_permissions_hooks(self, ctx: commands.Context) -> Optional[bool]:\n \"\"\"Run permissions hooks.\n\n Parameters\n ----------\n ctx : commands.Context\n The context for the command being invoked.\n\n Returns\n -------\n Optional[bool]\n ``False`` if any hooks returned ``False``, ``True`` if any\n hooks return ``True`` and none returned ``False``, ``None``\n otherwise.\n\n \"\"\"\n hook_results = []\n for hook in self._permissions_hooks:\n result = await discord.utils.maybe_coroutine(hook, ctx)\n if result is not None:\n hook_results.append(result)\n if hook_results:\n if all(hook_results):\n ctx.permission_state = commands.PermState.ALLOWED_BY_HOOK\n return True\n else:\n ctx.permission_state = commands.PermState.DENIED_BY_HOOK\n return False\n\n async def get_owner_notification_destinations(self) -> List[discord.abc.Messageable]:\n \"\"\"\n Gets the users and channels to send to\n \"\"\"\n destinations = []\n opt_outs = await self.db.owner_opt_out_list()\n for user_id in (self.owner_id, *self._co_owners):\n if user_id not in opt_outs:\n user = self.get_user(user_id)\n if user:\n destinations.append(user)\n\n channel_ids = await self.db.extra_owner_destinations()\n for channel_id in channel_ids:\n channel = self.get_channel(channel_id)\n if channel:\n destinations.append(channel)\n\n return destinations\n\n async def send_to_owners(self, content=None, **kwargs):\n \"\"\"\n This sends something to all owners and their configured extra destinations.\n\n This takes the same arguments as discord.abc.Messageable.send\n\n This logs failing sends\n \"\"\"\n destinations = await self.get_owner_notification_destinations()\n\n async def wrapped_send(location, content=None, **kwargs):\n try:\n await location.send(content, **kwargs)\n except Exception as _exc:\n log.exception(\n f\"I could not send an owner notification to ({location.id}){location}\"\n )\n\n sends = [wrapped_send(d, content, **kwargs) for d in destinations]\n await asyncio.gather(*sends)\n\n\nclass Red(RedBase, discord.AutoShardedClient):\n \"\"\"\n You're welcome Caleb.\n \"\"\"\n\n async def logout(self):\n \"\"\"Logs out of Discord and closes all connections.\"\"\"\n\n await super().logout()\n\n async def shutdown(self, *, restart: bool = False):\n \"\"\"Gracefully quit Red.\n\n The program will exit with code :code:`0` by default.\n\n Parameters\n ----------\n restart : bool\n If :code:`True`, the program will exit with code :code:`26`. 
If the\n launcher sees this, it will attempt to restart the bot.\n\n \"\"\"\n if not restart:\n self._shutdown_mode = ExitCodes.SHUTDOWN\n else:\n self._shutdown_mode = ExitCodes.RESTART\n\n await self.logout()\n\n\nclass ExitCodes(Enum):\n CRITICAL = 1\n SHUTDOWN = 0\n RESTART = 26\n", "path": "redbot/core/bot.py"}], "after_files": [{"content": "import asyncio\nimport inspect\nimport os\nimport logging\nfrom collections import Counter\nfrom enum import Enum\nfrom importlib.machinery import ModuleSpec\nfrom pathlib import Path\nfrom typing import Optional, Union, List\n\nimport discord\nimport sys\nfrom discord.ext.commands import when_mentioned_or\n\nfrom . import Config, i18n, commands, errors\nfrom .cog_manager import CogManager\n\nfrom .rpc import RPCMixin\nfrom .utils import common_filters\n\nCUSTOM_GROUPS = \"CUSTOM_GROUPS\"\n\nlog = logging.getLogger(\"redbot\")\n\n\ndef _is_submodule(parent, child):\n return parent == child or child.startswith(parent + \".\")\n\n\nclass RedBase(commands.GroupMixin, commands.bot.BotBase, RPCMixin):\n \"\"\"Mixin for the main bot class.\n\n This exists because `Red` inherits from `discord.AutoShardedClient`, which\n is something other bot classes may not want to have as a parent class.\n \"\"\"\n\n def __init__(self, *args, cli_flags=None, bot_dir: Path = Path.cwd(), **kwargs):\n self._shutdown_mode = ExitCodes.CRITICAL\n self.db = Config.get_core_conf(force_registration=True)\n self._co_owners = cli_flags.co_owner\n self.rpc_enabled = cli_flags.rpc\n self._last_exception = None\n self.db.register_global(\n token=None,\n prefix=[],\n packages=[],\n owner=None,\n whitelist=[],\n blacklist=[],\n locale=\"en-US\",\n embeds=True,\n color=15158332,\n fuzzy=False,\n custom_info=None,\n help__page_char_limit=1000,\n help__max_pages_in_guild=2,\n help__use_menus=False,\n help__show_hidden=False,\n help__verify_checks=True,\n help__verify_exists=False,\n help__tagline=\"\",\n disabled_commands=[],\n disabled_command_msg=\"That command is disabled.\",\n api_tokens={},\n extra_owner_destinations=[],\n owner_opt_out_list=[],\n )\n\n self.db.register_guild(\n prefix=[],\n whitelist=[],\n blacklist=[],\n admin_role=None,\n mod_role=None,\n embeds=None,\n use_bot_color=False,\n fuzzy=False,\n disabled_commands=[],\n autoimmune_ids=[],\n )\n\n self.db.register_user(embeds=None)\n\n self.db.init_custom(CUSTOM_GROUPS, 2)\n self.db.register_custom(CUSTOM_GROUPS)\n\n async def prefix_manager(bot, message):\n if not cli_flags.prefix:\n global_prefix = await bot.db.prefix()\n else:\n global_prefix = cli_flags.prefix\n if message.guild is None:\n return global_prefix\n server_prefix = await bot.db.guild(message.guild).prefix()\n if cli_flags.mentionable:\n return (\n when_mentioned_or(*server_prefix)(bot, message)\n if server_prefix\n else when_mentioned_or(*global_prefix)(bot, message)\n )\n else:\n return server_prefix if server_prefix else global_prefix\n\n if \"command_prefix\" not in kwargs:\n kwargs[\"command_prefix\"] = prefix_manager\n\n if cli_flags.owner and \"owner_id\" not in kwargs:\n kwargs[\"owner_id\"] = cli_flags.owner\n\n if \"owner_id\" not in kwargs:\n loop = asyncio.get_event_loop()\n loop.run_until_complete(self._dict_abuse(kwargs))\n\n if \"command_not_found\" not in kwargs:\n kwargs[\"command_not_found\"] = \"Command {} not found.\\n{}\"\n\n self.counter = Counter()\n self.uptime = None\n self.checked_time_accuracy = None\n self.color = discord.Embed.Empty # This is needed or color ends up 0x000000\n\n self.main_dir = bot_dir\n\n self.cog_mgr = 
CogManager()\n\n super().__init__(*args, help_command=None, **kwargs)\n # Do not manually use the help formatter attribute here, see `send_help_for`,\n # for a documented API. The internals of this object are still subject to change.\n self._help_formatter = commands.help.RedHelpFormatter()\n self.add_command(commands.help.red_help)\n\n self._permissions_hooks: List[commands.CheckPredicate] = []\n\n async def send_help_for(\n self, ctx: commands.Context, help_for: Union[commands.Command, commands.GroupMixin, str]\n ):\n \"\"\"\n Invokes Red's helpformatter for a given context and object.\n \"\"\"\n return await self._help_formatter.send_help(ctx, help_for)\n\n async def _dict_abuse(self, indict):\n \"\"\"\n Please blame <@269933075037814786> for this.\n\n :param indict:\n :return:\n \"\"\"\n\n indict[\"owner_id\"] = await self.db.owner()\n i18n.set_locale(await self.db.locale())\n\n async def embed_requested(self, channel, user, command=None) -> bool:\n \"\"\"\n Determine if an embed is requested for a response.\n\n Parameters\n ----------\n channel : `discord.abc.GuildChannel` or `discord.abc.PrivateChannel`\n The channel to check embed settings for.\n user : `discord.abc.User`\n The user to check embed settings for.\n command\n (Optional) the command ran.\n\n Returns\n -------\n bool\n :code:`True` if an embed is requested\n \"\"\"\n if isinstance(channel, discord.abc.PrivateChannel) or (\n command and command == self.get_command(\"help\")\n ):\n user_setting = await self.db.user(user).embeds()\n if user_setting is not None:\n return user_setting\n else:\n guild_setting = await self.db.guild(channel.guild).embeds()\n if guild_setting is not None:\n return guild_setting\n global_setting = await self.db.embeds()\n return global_setting\n\n async def is_owner(self, user):\n if user.id in self._co_owners:\n return True\n return await super().is_owner(user)\n\n async def is_admin(self, member: discord.Member):\n \"\"\"Checks if a member is an admin of their guild.\"\"\"\n admin_role = await self.db.guild(member.guild).admin_role()\n try:\n if any(role.id == admin_role for role in member.roles):\n return True\n except AttributeError: # someone passed a webhook to this\n pass\n return False\n\n async def is_mod(self, member: discord.Member):\n \"\"\"Checks if a member is a mod or admin of their guild.\"\"\"\n mod_role = await self.db.guild(member.guild).mod_role()\n admin_role = await self.db.guild(member.guild).admin_role()\n try:\n if any(role.id in (mod_role, admin_role) for role in member.roles):\n return True\n except AttributeError: # someone passed a webhook to this\n pass\n return False\n\n async def get_context(self, message, *, cls=commands.Context):\n return await super().get_context(message, cls=cls)\n\n async def process_commands(self, message: discord.Message):\n \"\"\"\n modification from the base to do the same thing in the command case\n \n but dispatch an additional event for cogs which want to handle normal messages\n differently to command messages, \n without the overhead of additional get_context calls per cog\n \"\"\"\n if not message.author.bot:\n ctx = await self.get_context(message)\n if ctx.valid:\n return await self.invoke(ctx)\n\n self.dispatch(\"message_without_command\", message)\n\n @staticmethod\n def list_packages():\n \"\"\"Lists packages present in the cogs the folder\"\"\"\n return os.listdir(\"cogs\")\n\n async def save_packages_status(self, packages):\n await self.db.packages.set(packages)\n\n async def add_loaded_package(self, pkg_name: str):\n async with 
self.db.packages() as curr_pkgs:\n if pkg_name not in curr_pkgs:\n curr_pkgs.append(pkg_name)\n\n async def remove_loaded_package(self, pkg_name: str):\n async with self.db.packages() as curr_pkgs:\n while pkg_name in curr_pkgs:\n curr_pkgs.remove(pkg_name)\n\n async def load_extension(self, spec: ModuleSpec):\n # NB: this completely bypasses `discord.ext.commands.Bot._load_from_module_spec`\n name = spec.name.split(\".\")[-1]\n if name in self.extensions:\n raise errors.PackageAlreadyLoaded(spec)\n\n lib = spec.loader.load_module()\n if not hasattr(lib, \"setup\"):\n del lib\n raise discord.ClientException(f\"extension {name} does not have a setup function\")\n\n try:\n if asyncio.iscoroutinefunction(lib.setup):\n await lib.setup(self)\n else:\n lib.setup(self)\n except Exception as e:\n self._remove_module_references(lib.__name__)\n self._call_module_finalizers(lib, name)\n raise errors.CogLoadError(e) from e\n else:\n self._BotBase__extensions[name] = lib\n\n def remove_cog(self, cogname: str):\n cog = self.get_cog(cogname)\n if cog is None:\n return\n\n for cls in inspect.getmro(cog.__class__):\n try:\n hook = getattr(cog, f\"_{cls.__name__}__permissions_hook\")\n except AttributeError:\n pass\n else:\n self.remove_permissions_hook(hook)\n\n super().remove_cog(cogname)\n\n for meth in self.rpc_handlers.pop(cogname.upper(), ()):\n self.unregister_rpc_handler(meth)\n\n async def is_automod_immune(\n self, to_check: Union[discord.Message, commands.Context, discord.abc.User, discord.Role]\n ) -> bool:\n \"\"\"\n Checks if the user, message, context, or role should be considered immune from automated\n moderation actions.\n\n This will return ``False`` in direct messages.\n\n Parameters\n ----------\n to_check : `discord.Message` or `commands.Context` or `discord.abc.User` or `discord.Role`\n Something to check if it would be immune\n\n Returns\n -------\n bool\n ``True`` if immune\n\n \"\"\"\n guild = to_check.guild\n if not guild:\n return False\n\n if isinstance(to_check, discord.Role):\n ids_to_check = [to_check.id]\n else:\n author = getattr(to_check, \"author\", to_check)\n try:\n ids_to_check = [r.id for r in author.roles]\n except AttributeError:\n # webhook messages are a user not member,\n # cheaper than isinstance\n return True # webhooks require significant permissions to enable.\n else:\n ids_to_check.append(author.id)\n\n immune_ids = await self.db.guild(guild).autoimmune_ids()\n\n return any(i in immune_ids for i in ids_to_check)\n\n @staticmethod\n async def send_filtered(\n destination: discord.abc.Messageable,\n filter_mass_mentions=True,\n filter_invite_links=True,\n filter_all_links=False,\n **kwargs,\n ):\n \"\"\"\n This is a convienience wrapper around\n\n discord.abc.Messageable.send\n\n It takes the destination you'd like to send to, which filters to apply\n (defaults on mass mentions, and invite links) and any other parameters\n normally accepted by destination.send\n\n This should realistically only be used for responding using user provided\n input. 
(unfortunately, including usernames)\n Manually crafted messages which dont take any user input have no need of this\n \"\"\"\n\n content = kwargs.pop(\"content\", None)\n\n if content:\n if filter_mass_mentions:\n content = common_filters.filter_mass_mentions(content)\n if filter_invite_links:\n content = common_filters.filter_invites(content)\n if filter_all_links:\n content = common_filters.filter_urls(content)\n\n await destination.send(content=content, **kwargs)\n\n def add_cog(self, cog: commands.Cog):\n if not isinstance(cog, commands.Cog):\n raise RuntimeError(\n f\"The {cog.__class__.__name__} cog in the {cog.__module__} package does \"\n f\"not inherit from the commands.Cog base class. The cog author must update \"\n f\"the cog to adhere to this requirement.\"\n )\n if not hasattr(cog, \"requires\"):\n commands.Cog.__init__(cog)\n\n for cls in inspect.getmro(cog.__class__):\n try:\n hook = getattr(cog, f\"_{cls.__name__}__permissions_hook\")\n except AttributeError:\n pass\n else:\n self.add_permissions_hook(hook)\n\n for command in cog.__cog_commands__:\n\n if not isinstance(command, commands.Command):\n raise RuntimeError(\n f\"The {cog.__class__.__name__} cog in the {cog.__module__} package,\"\n \" is not using Red's command module, and cannot be added. \"\n \"If this is your cog, please use `from redbot.core import commands`\"\n \"in place of `from discord.ext import commands`. For more details on \"\n \"this requirement, see this page: \"\n \"http://red-discordbot.readthedocs.io/en/v3-develop/framework_commands.html\"\n )\n super().add_cog(cog)\n self.dispatch(\"cog_add\", cog)\n for command in cog.__cog_commands__:\n self.dispatch(\"command_add\", command)\n\n def clear_permission_rules(self, guild_id: Optional[int]) -> None:\n \"\"\"Clear all permission overrides in a scope.\n\n Parameters\n ----------\n guild_id : Optional[int]\n The guild ID to wipe permission overrides for. 
If\n ``None``, this will clear all global rules and leave all\n guild rules untouched.\n\n \"\"\"\n for cog in self.cogs.values():\n cog.requires.clear_all_rules(guild_id)\n for command in self.walk_commands():\n command.requires.clear_all_rules(guild_id)\n\n def add_permissions_hook(self, hook: commands.CheckPredicate) -> None:\n \"\"\"Add a permissions hook.\n\n Permissions hooks are check predicates which are called before\n calling `Requires.verify`, and they can optionally return an\n override: ``True`` to allow, ``False`` to deny, and ``None`` to\n default to normal behaviour.\n\n Parameters\n ----------\n hook\n A command check predicate which returns ``True``, ``False``\n or ``None``.\n\n \"\"\"\n self._permissions_hooks.append(hook)\n\n def remove_permissions_hook(self, hook: commands.CheckPredicate) -> None:\n \"\"\"Remove a permissions hook.\n\n Parameters are the same as those in `add_permissions_hook`.\n\n Raises\n ------\n ValueError\n If the permissions hook has not been added.\n\n \"\"\"\n self._permissions_hooks.remove(hook)\n\n async def verify_permissions_hooks(self, ctx: commands.Context) -> Optional[bool]:\n \"\"\"Run permissions hooks.\n\n Parameters\n ----------\n ctx : commands.Context\n The context for the command being invoked.\n\n Returns\n -------\n Optional[bool]\n ``False`` if any hooks returned ``False``, ``True`` if any\n hooks return ``True`` and none returned ``False``, ``None``\n otherwise.\n\n \"\"\"\n hook_results = []\n for hook in self._permissions_hooks:\n result = await discord.utils.maybe_coroutine(hook, ctx)\n if result is not None:\n hook_results.append(result)\n if hook_results:\n if all(hook_results):\n ctx.permission_state = commands.PermState.ALLOWED_BY_HOOK\n return True\n else:\n ctx.permission_state = commands.PermState.DENIED_BY_HOOK\n return False\n\n async def get_owner_notification_destinations(self) -> List[discord.abc.Messageable]:\n \"\"\"\n Gets the users and channels to send to\n \"\"\"\n destinations = []\n opt_outs = await self.db.owner_opt_out_list()\n for user_id in (self.owner_id, *self._co_owners):\n if user_id not in opt_outs:\n user = self.get_user(user_id)\n if user:\n destinations.append(user)\n\n channel_ids = await self.db.extra_owner_destinations()\n for channel_id in channel_ids:\n channel = self.get_channel(channel_id)\n if channel:\n destinations.append(channel)\n\n return destinations\n\n async def send_to_owners(self, content=None, **kwargs):\n \"\"\"\n This sends something to all owners and their configured extra destinations.\n\n This takes the same arguments as discord.abc.Messageable.send\n\n This logs failing sends\n \"\"\"\n destinations = await self.get_owner_notification_destinations()\n\n async def wrapped_send(location, content=None, **kwargs):\n try:\n await location.send(content, **kwargs)\n except Exception as _exc:\n log.exception(\n f\"I could not send an owner notification to ({location.id}){location}\"\n )\n\n sends = [wrapped_send(d, content, **kwargs) for d in destinations]\n await asyncio.gather(*sends)\n\n\nclass Red(RedBase, discord.AutoShardedClient):\n \"\"\"\n You're welcome Caleb.\n \"\"\"\n\n async def logout(self):\n \"\"\"Logs out of Discord and closes all connections.\"\"\"\n\n await super().logout()\n\n async def shutdown(self, *, restart: bool = False):\n \"\"\"Gracefully quit Red.\n\n The program will exit with code :code:`0` by default.\n\n Parameters\n ----------\n restart : bool\n If :code:`True`, the program will exit with code :code:`26`. 
If the\n launcher sees this, it will attempt to restart the bot.\n\n \"\"\"\n if not restart:\n self._shutdown_mode = ExitCodes.SHUTDOWN\n else:\n self._shutdown_mode = ExitCodes.RESTART\n\n await self.logout()\n\n\nclass ExitCodes(Enum):\n CRITICAL = 1\n SHUTDOWN = 0\n RESTART = 26\n", "path": "redbot/core/bot.py"}]} |
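The one-line change in the golden diff matters because an exception constructed with no arguments stringifies to an empty string, which is what left the wrapped load error blank; passing the caught exception through gives `CogLoadError` a reason to display. The sketch below illustrates that difference under stated assumptions: `CogLoadError` is modelled as a bare `Exception` subclass and the conflict message is hypothetical, while the real class and wording live in Red itself.

```python
# Minimal sketch (assumptions: CogLoadError modelled as a bare Exception subclass,
# and the "already registered" message below is hypothetical).
class CogLoadError(Exception):
    pass


def setup_that_conflicts():
    # Stand-in for a cog setup that fails because of a duplicate command.
    raise RuntimeError("Command 'foo' is already registered.")


for pass_cause in (False, True):
    try:
        try:
            setup_that_conflicts()
        except Exception as e:
            # old behaviour: raise CogLoadError() from e   -> str(err) == ""
            # patched:       raise CogLoadError(e) from e  -> str(err) carries the reason
            raise (CogLoadError(e) if pass_cause else CogLoadError()) from e
    except CogLoadError as err:
        print(repr(str(err)))  # "" on the first pass, the full reason on the second
```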
gh_patches_debug_1136 | rasdani/github-patches | git_diff | googleapis__python-bigquery-587 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
loosen opentelemetry dependencies
See Spanner PR: https://github.com/googleapis/python-spanner/pull/298
--- END ISSUE ---
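For context, "loosening" a dependency here means replacing an exact pin (`==`) with a minimum-version range (`>=`) so that compatible newer releases are not blocked by the resolver. A minimal, self-contained sketch of the difference using the `packaging` library (which `setup.py` already depends on); the version numbers are chosen for illustration only:

```python
# Illustrative only: how an exact pin vs. a minimum-version range behaves.
from packaging.specifiers import SpecifierSet

pinned = SpecifierSet("==0.11b0")
loosened = SpecifierSet(">=0.11b0", prereleases=True)

print("0.12b0" in pinned)    # False -- the exact pin rejects any newer release
print("0.12b0" in loosened)  # True  -- the range accepts newer compatible releases
```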
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import io
16 import os
17
18 import setuptools
19
20
21 # Package metadata.
22
23 name = "google-cloud-bigquery"
24 description = "Google BigQuery API client library"
25
26 # Should be one of:
27 # 'Development Status :: 3 - Alpha'
28 # 'Development Status :: 4 - Beta'
29 # 'Development Status :: 5 - Production/Stable'
30 release_status = "Development Status :: 5 - Production/Stable"
31 dependencies = [
32 "google-api-core[grpc] >= 1.23.0, < 2.0.0dev",
33 "proto-plus >= 1.10.0",
34 "google-cloud-core >= 1.4.1, < 2.0dev",
35 "google-resumable-media >= 0.6.0, < 2.0dev",
36 "packaging >= 14.3",
37 "protobuf >= 3.12.0",
38 "requests >= 2.18.0, < 3.0.0dev",
39 ]
40 extras = {
41 "bqstorage": [
42 "google-cloud-bigquery-storage >= 2.0.0, <3.0.0dev",
43 # Due to an issue in pip's dependency resolver, the `grpc` extra is not
44 # installed, even though `google-cloud-bigquery-storage` specifies it
45 # as `google-api-core[grpc]`. We thus need to explicitly specify it here.
46 # See: https://github.com/googleapis/python-bigquery/issues/83 The
47 # grpc.Channel.close() method isn't added until 1.32.0.
48 # https://github.com/grpc/grpc/pull/15254
49 "grpcio >= 1.32.0, < 2.0dev",
50 "pyarrow >= 1.0.0, < 4.0dev",
51 ],
52 "pandas": ["pandas>=0.23.0", "pyarrow >= 1.0.0, < 4.0dev"],
53 "bignumeric_type": ["pyarrow >= 3.0.0, < 4.0dev"],
54 "tqdm": ["tqdm >= 4.7.4, <5.0.0dev"],
55 "opentelemetry": [
56 "opentelemetry-api==0.11b0",
57 "opentelemetry-sdk==0.11b0",
58 "opentelemetry-instrumentation==0.11b0",
59 ],
60 }
61
62 all_extras = []
63
64 for extra in extras:
65 # Exclude this extra from all to avoid overly strict dependencies on core
66 # libraries such as pyarrow.
67 # https://github.com/googleapis/python-bigquery/issues/563
68 if extra in {"bignumeric_type"}:
69 continue
70 all_extras.extend(extras[extra])
71
72 extras["all"] = all_extras
73
74 # Setup boilerplate below this line.
75
76 package_root = os.path.abspath(os.path.dirname(__file__))
77
78 readme_filename = os.path.join(package_root, "README.rst")
79 with io.open(readme_filename, encoding="utf-8") as readme_file:
80 readme = readme_file.read()
81
82 version = {}
83 with open(os.path.join(package_root, "google/cloud/bigquery/version.py")) as fp:
84 exec(fp.read(), version)
85 version = version["__version__"]
86
87 # Only include packages under the 'google' namespace. Do not include tests,
88 # benchmarks, etc.
89 packages = [
90 package
91 for package in setuptools.PEP420PackageFinder.find()
92 if package.startswith("google")
93 ]
94
95 # Determine which namespaces are needed.
96 namespaces = ["google"]
97 if "google.cloud" in packages:
98 namespaces.append("google.cloud")
99
100
101 setuptools.setup(
102 name=name,
103 version=version,
104 description=description,
105 long_description=readme,
106 author="Google LLC",
107 author_email="[email protected]",
108 license="Apache 2.0",
109 url="https://github.com/googleapis/python-bigquery",
110 classifiers=[
111 release_status,
112 "Intended Audience :: Developers",
113 "License :: OSI Approved :: Apache Software License",
114 "Programming Language :: Python",
115 "Programming Language :: Python :: 3",
116 "Programming Language :: Python :: 3.6",
117 "Programming Language :: Python :: 3.7",
118 "Programming Language :: Python :: 3.8",
119 "Programming Language :: Python :: 3.9",
120 "Operating System :: OS Independent",
121 "Topic :: Internet",
122 ],
123 platforms="Posix; MacOS X; Windows",
124 packages=packages,
125 namespace_packages=namespaces,
126 install_requires=dependencies,
127 extras_require=extras,
128 python_requires=">=3.6, <3.10",
129 include_package_data=True,
130 zip_safe=False,
131 )
132
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -53,9 +53,9 @@
"bignumeric_type": ["pyarrow >= 3.0.0, < 4.0dev"],
"tqdm": ["tqdm >= 4.7.4, <5.0.0dev"],
"opentelemetry": [
- "opentelemetry-api==0.11b0",
- "opentelemetry-sdk==0.11b0",
- "opentelemetry-instrumentation==0.11b0",
+ "opentelemetry-api >= 0.11b0",
+ "opentelemetry-sdk >= 0.11b0",
+ "opentelemetry-instrumentation >= 0.11b0",
],
}
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -53,9 +53,9 @@\n \"bignumeric_type\": [\"pyarrow >= 3.0.0, < 4.0dev\"],\n \"tqdm\": [\"tqdm >= 4.7.4, <5.0.0dev\"],\n \"opentelemetry\": [\n- \"opentelemetry-api==0.11b0\",\n- \"opentelemetry-sdk==0.11b0\",\n- \"opentelemetry-instrumentation==0.11b0\",\n+ \"opentelemetry-api >= 0.11b0\",\n+ \"opentelemetry-sdk >= 0.11b0\",\n+ \"opentelemetry-instrumentation >= 0.11b0\",\n ],\n }\n", "issue": "loosen opentelemetry dependencies\nSee Spanner PR: https://github.com/googleapis/python-spanner/pull/298\n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\n\nimport setuptools\n\n\n# Package metadata.\n\nname = \"google-cloud-bigquery\"\ndescription = \"Google BigQuery API client library\"\n\n# Should be one of:\n# 'Development Status :: 3 - Alpha'\n# 'Development Status :: 4 - Beta'\n# 'Development Status :: 5 - Production/Stable'\nrelease_status = \"Development Status :: 5 - Production/Stable\"\ndependencies = [\n \"google-api-core[grpc] >= 1.23.0, < 2.0.0dev\",\n \"proto-plus >= 1.10.0\",\n \"google-cloud-core >= 1.4.1, < 2.0dev\",\n \"google-resumable-media >= 0.6.0, < 2.0dev\",\n \"packaging >= 14.3\",\n \"protobuf >= 3.12.0\",\n \"requests >= 2.18.0, < 3.0.0dev\",\n]\nextras = {\n \"bqstorage\": [\n \"google-cloud-bigquery-storage >= 2.0.0, <3.0.0dev\",\n # Due to an issue in pip's dependency resolver, the `grpc` extra is not\n # installed, even though `google-cloud-bigquery-storage` specifies it\n # as `google-api-core[grpc]`. We thus need to explicitly specify it here.\n # See: https://github.com/googleapis/python-bigquery/issues/83 The\n # grpc.Channel.close() method isn't added until 1.32.0.\n # https://github.com/grpc/grpc/pull/15254\n \"grpcio >= 1.32.0, < 2.0dev\",\n \"pyarrow >= 1.0.0, < 4.0dev\",\n ],\n \"pandas\": [\"pandas>=0.23.0\", \"pyarrow >= 1.0.0, < 4.0dev\"],\n \"bignumeric_type\": [\"pyarrow >= 3.0.0, < 4.0dev\"],\n \"tqdm\": [\"tqdm >= 4.7.4, <5.0.0dev\"],\n \"opentelemetry\": [\n \"opentelemetry-api==0.11b0\",\n \"opentelemetry-sdk==0.11b0\",\n \"opentelemetry-instrumentation==0.11b0\",\n ],\n}\n\nall_extras = []\n\nfor extra in extras:\n # Exclude this extra from all to avoid overly strict dependencies on core\n # libraries such as pyarrow.\n # https://github.com/googleapis/python-bigquery/issues/563\n if extra in {\"bignumeric_type\"}:\n continue\n all_extras.extend(extras[extra])\n\nextras[\"all\"] = all_extras\n\n# Setup boilerplate below this line.\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, \"README.rst\")\nwith io.open(readme_filename, encoding=\"utf-8\") as readme_file:\n readme = readme_file.read()\n\nversion = {}\nwith open(os.path.join(package_root, \"google/cloud/bigquery/version.py\")) as fp:\n exec(fp.read(), version)\nversion = version[\"__version__\"]\n\n# Only include packages under the 'google' namespace. 
Do not include tests,\n# benchmarks, etc.\npackages = [\n package\n for package in setuptools.PEP420PackageFinder.find()\n if package.startswith(\"google\")\n]\n\n# Determine which namespaces are needed.\nnamespaces = [\"google\"]\nif \"google.cloud\" in packages:\n namespaces.append(\"google.cloud\")\n\n\nsetuptools.setup(\n name=name,\n version=version,\n description=description,\n long_description=readme,\n author=\"Google LLC\",\n author_email=\"[email protected]\",\n license=\"Apache 2.0\",\n url=\"https://github.com/googleapis/python-bigquery\",\n classifiers=[\n release_status,\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet\",\n ],\n platforms=\"Posix; MacOS X; Windows\",\n packages=packages,\n namespace_packages=namespaces,\n install_requires=dependencies,\n extras_require=extras,\n python_requires=\">=3.6, <3.10\",\n include_package_data=True,\n zip_safe=False,\n)\n", "path": "setup.py"}], "after_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\n\nimport setuptools\n\n\n# Package metadata.\n\nname = \"google-cloud-bigquery\"\ndescription = \"Google BigQuery API client library\"\n\n# Should be one of:\n# 'Development Status :: 3 - Alpha'\n# 'Development Status :: 4 - Beta'\n# 'Development Status :: 5 - Production/Stable'\nrelease_status = \"Development Status :: 5 - Production/Stable\"\ndependencies = [\n \"google-api-core[grpc] >= 1.23.0, < 2.0.0dev\",\n \"proto-plus >= 1.10.0\",\n \"google-cloud-core >= 1.4.1, < 2.0dev\",\n \"google-resumable-media >= 0.6.0, < 2.0dev\",\n \"packaging >= 14.3\",\n \"protobuf >= 3.12.0\",\n \"requests >= 2.18.0, < 3.0.0dev\",\n]\nextras = {\n \"bqstorage\": [\n \"google-cloud-bigquery-storage >= 2.0.0, <3.0.0dev\",\n # Due to an issue in pip's dependency resolver, the `grpc` extra is not\n # installed, even though `google-cloud-bigquery-storage` specifies it\n # as `google-api-core[grpc]`. 
We thus need to explicitly specify it here.\n # See: https://github.com/googleapis/python-bigquery/issues/83 The\n # grpc.Channel.close() method isn't added until 1.32.0.\n # https://github.com/grpc/grpc/pull/15254\n \"grpcio >= 1.32.0, < 2.0dev\",\n \"pyarrow >= 1.0.0, < 4.0dev\",\n ],\n \"pandas\": [\"pandas>=0.23.0\", \"pyarrow >= 1.0.0, < 4.0dev\"],\n \"bignumeric_type\": [\"pyarrow >= 3.0.0, < 4.0dev\"],\n \"tqdm\": [\"tqdm >= 4.7.4, <5.0.0dev\"],\n \"opentelemetry\": [\n \"opentelemetry-api >= 0.11b0\",\n \"opentelemetry-sdk >= 0.11b0\",\n \"opentelemetry-instrumentation >= 0.11b0\",\n ],\n}\n\nall_extras = []\n\nfor extra in extras:\n # Exclude this extra from all to avoid overly strict dependencies on core\n # libraries such as pyarrow.\n # https://github.com/googleapis/python-bigquery/issues/563\n if extra in {\"bignumeric_type\"}:\n continue\n all_extras.extend(extras[extra])\n\nextras[\"all\"] = all_extras\n\n# Setup boilerplate below this line.\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, \"README.rst\")\nwith io.open(readme_filename, encoding=\"utf-8\") as readme_file:\n readme = readme_file.read()\n\nversion = {}\nwith open(os.path.join(package_root, \"google/cloud/bigquery/version.py\")) as fp:\n exec(fp.read(), version)\nversion = version[\"__version__\"]\n\n# Only include packages under the 'google' namespace. Do not include tests,\n# benchmarks, etc.\npackages = [\n package\n for package in setuptools.PEP420PackageFinder.find()\n if package.startswith(\"google\")\n]\n\n# Determine which namespaces are needed.\nnamespaces = [\"google\"]\nif \"google.cloud\" in packages:\n namespaces.append(\"google.cloud\")\n\n\nsetuptools.setup(\n name=name,\n version=version,\n description=description,\n long_description=readme,\n author=\"Google LLC\",\n author_email=\"[email protected]\",\n license=\"Apache 2.0\",\n url=\"https://github.com/googleapis/python-bigquery\",\n classifiers=[\n release_status,\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet\",\n ],\n platforms=\"Posix; MacOS X; Windows\",\n packages=packages,\n namespace_packages=namespaces,\n install_requires=dependencies,\n extras_require=extras,\n python_requires=\">=3.6, <3.10\",\n include_package_data=True,\n zip_safe=False,\n)\n", "path": "setup.py"}]} |
gh_patches_debug_1137 | rasdani/github-patches | git_diff | django-cms__django-cms-3171 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug in plugin_rendering.py
I'm currently facing an extremely weird problem where the publish button and the content/structure mode switch are not shown in the toolbar - effectively rendering the whole CMS useless. Unfortunately I don't know when this started, so I'm having a very hard time pinning down whether this is my fault or not.
Anyways... while debugging, I found this:
https://github.com/divio/django-cms/blob/develop/cms/plugin_rendering.py#L100
That seems to be a bug to me. Shouldn't it be
```
if not hasattr(request, 'placeholders'):
```
Note: `placeholders` should be plural, no?
--- END ISSUE ---
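To see why the singular attribute name matters: the code only ever sets `request.placeholders` (plural), so a guard that checks for the singular name never finds it and the list is re-created on every call, dropping previously appended placeholders. A minimal sketch of that behaviour with a hypothetical request stand-in (not CMS code):

```python
# Minimal stand-in for a Django HttpRequest (illustrative only, not CMS code).
class FakeRequest:
    pass

def track_placeholder(request, placeholder, attr_to_check):
    # Mirrors the guard in render_placeholder(): only create the list when
    # the request does not already carry one.
    if not hasattr(request, attr_to_check):
        request.placeholders = []
    request.placeholders.append(placeholder)

buggy, fixed = FakeRequest(), FakeRequest()
for slot in ("content", "sidebar"):
    track_placeholder(buggy, slot, "placeholder")   # singular name: never found, list reset each call
    track_placeholder(fixed, slot, "placeholders")  # plural name: found on the second call

print(buggy.placeholders)  # ['sidebar']  -- the first placeholder was silently dropped
print(fixed.placeholders)  # ['content', 'sidebar']
```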
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `cms/plugin_rendering.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 from cms.models.placeholdermodel import Placeholder
3 from cms.plugin_processors import (plugin_meta_context_processor, mark_safe_plugin_processor)
4 from cms.utils import get_language_from_request
5 from cms.utils.compat.type_checks import string_types
6 from cms.utils.conf import get_cms_setting
7 from cms.utils.django_load import iterload_objects
8 from cms.utils.placeholder import get_placeholder_conf, restore_sekizai_context
9 from django.template import Template, Context
10 from django.template.loader import render_to_string
11 from django.utils.safestring import mark_safe
12
13
14 # these are always called before all other plugin context processors
15 from sekizai.helpers import Watcher
16
17 DEFAULT_PLUGIN_CONTEXT_PROCESSORS = (
18 plugin_meta_context_processor,
19 )
20
21 # these are always called after all other plugin processors
22 DEFAULT_PLUGIN_PROCESSORS = (
23 mark_safe_plugin_processor,
24 )
25
26
27 class PluginContext(Context):
28 """
29 This subclass of template.Context automatically populates itself using
30 the processors defined in CMS_PLUGIN_CONTEXT_PROCESSORS.
31 Additional processors can be specified as a list of callables
32 using the "processors" keyword argument.
33 """
34
35 def __init__(self, dict, instance, placeholder, processors=None, current_app=None):
36 super(PluginContext, self).__init__(dict, current_app=current_app)
37 if not processors:
38 processors = []
39 for processor in DEFAULT_PLUGIN_CONTEXT_PROCESSORS:
40 self.update(processor(instance, placeholder, self))
41 for processor in iterload_objects(get_cms_setting('PLUGIN_CONTEXT_PROCESSORS')):
42 self.update(processor(instance, placeholder, self))
43 for processor in processors:
44 self.update(processor(instance, placeholder, self))
45
46
47 def render_plugin(context, instance, placeholder, template, processors=None, current_app=None):
48 """
49 Renders a single plugin and applies the post processors to it's rendered
50 content.
51 """
52 if not processors:
53 processors = []
54 if isinstance(template, string_types):
55 content = render_to_string(template, context_instance=context)
56 elif isinstance(template, Template):
57 content = template.render(context)
58 else:
59 content = ''
60 for processor in iterload_objects(get_cms_setting('PLUGIN_PROCESSORS')):
61 content = processor(instance, placeholder, content, context)
62 for processor in processors:
63 content = processor(instance, placeholder, content, context)
64 for processor in DEFAULT_PLUGIN_PROCESSORS:
65 content = processor(instance, placeholder, content, context)
66 return content
67
68
69 def render_plugins(plugins, context, placeholder, processors=None):
70 """
71 Renders a collection of plugins with the given context, using the appropriate processors
72 for a given placeholder name, and returns a list containing a "rendered content" string
73 for each plugin.
74
75 This is the main plugin rendering utility function, use this function rather than
76 Plugin.render_plugin().
77 """
78 out = []
79 total = len(plugins)
80 for index, plugin in enumerate(plugins):
81 plugin._render_meta.total = total
82 plugin._render_meta.index = index
83 context.push()
84 out.append(plugin.render_plugin(context, placeholder, processors=processors))
85 context.pop()
86 return out
87
88
89 def render_placeholder(placeholder, context_to_copy, name_fallback="Placeholder", lang=None, default=None):
90 """
91 Renders plugins for a placeholder on the given page using shallow copies of the
92 given context, and returns a string containing the rendered output.
93 """
94 if not placeholder:
95 return
96 from cms.utils.plugins import get_plugins
97 context = context_to_copy
98 context.push()
99 request = context['request']
100 if not hasattr(request, 'placeholder'):
101 request.placeholders = []
102 request.placeholders.append(placeholder)
103 if hasattr(placeholder, 'content_cache'):
104 return mark_safe(placeholder.content_cache)
105 page = placeholder.page if placeholder else None
106 # It's kind of duplicate of the similar call in `get_plugins`, but it's required
107 # to have a valid language in this function for `get_fallback_languages` to work
108 if lang:
109 save_language = lang
110 else:
111 lang = get_language_from_request(request)
112 save_language = lang
113
114 # Prepend frontedit toolbar output if applicable
115 edit = False
116 toolbar = getattr(request, 'toolbar', None)
117
118 if getattr(toolbar, 'edit_mode', False):
119 edit = True
120 if edit:
121 from cms.middleware.toolbar import toolbar_plugin_processor
122
123 processors = (toolbar_plugin_processor,)
124 else:
125 processors = None
126 from django.core.cache import cache
127 if get_cms_setting('PLACEHOLDER_CACHE'):
128 cache_key = placeholder.get_cache_key(lang)
129 if not edit and placeholder and not hasattr(placeholder, 'cache_checked'):
130 cached_value = cache.get(cache_key)
131 if not cached_value is None:
132 restore_sekizai_context(context, cached_value['sekizai'])
133 return mark_safe(cached_value['content'])
134 if page:
135 template = page.template
136 else:
137 template = None
138
139 plugins = [plugin for plugin in get_plugins(request, placeholder, template, lang=lang)]
140
141 # Add extra context as defined in settings, but do not overwrite existing context variables,
142 # since settings are general and database/template are specific
143 # TODO this should actually happen as a plugin context processor, but these currently overwrite
144 # existing context -- maybe change this order?
145 slot = getattr(placeholder, 'slot', None)
146 extra_context = {}
147 if slot:
148 extra_context = get_placeholder_conf("extra_context", slot, template, {})
149 for key, value in extra_context.items():
150 if key not in context:
151 context[key] = value
152
153 content = []
154 watcher = Watcher(context)
155 content.extend(render_plugins(plugins, context, placeholder, processors))
156 toolbar_content = ''
157
158 if edit:
159 if not hasattr(request.toolbar, 'placeholders'):
160 request.toolbar.placeholders = {}
161 if placeholder.pk not in request.toolbar.placeholders:
162 request.toolbar.placeholders[placeholder.pk] = placeholder
163 if edit:
164 toolbar_content = mark_safe(render_placeholder_toolbar(placeholder, context, name_fallback, save_language))
165 if content:
166 content = mark_safe("".join(content))
167 elif default:
168 #should be nodelist from a template
169 content = mark_safe(default.render(context_to_copy))
170 else:
171 content = ''
172 context['content'] = content
173 context['placeholder'] = toolbar_content
174 context['edit'] = edit
175 result = render_to_string("cms/toolbar/content.html", context)
176 changes = watcher.get_changes()
177 if placeholder and not edit and placeholder.cache_placeholder and get_cms_setting('PLACEHOLDER_CACHE'):
178 cache.set(cache_key, {'content': result, 'sekizai': changes}, get_cms_setting('CACHE_DURATIONS')['content'])
179 context.pop()
180 return result
181
182
183 def render_placeholder_toolbar(placeholder, context, name_fallback, save_language):
184 from cms.plugin_pool import plugin_pool
185 request = context['request']
186 page = placeholder.page if placeholder else None
187 if not page:
188 page = getattr(request, 'current_page', None)
189 if page:
190 if name_fallback and not placeholder:
191 placeholder = Placeholder.objects.create(slot=name_fallback)
192 page.placeholders.add(placeholder)
193 placeholder.page = page
194 if placeholder:
195 slot = placeholder.slot
196 else:
197 slot = None
198 context.push()
199
200 # to restrict child-only plugins from draggables..
201 context['allowed_plugins'] = [cls.__name__ for cls in plugin_pool.get_all_plugins(slot, page)]
202 context['placeholder'] = placeholder
203 context['language'] = save_language
204 context['page'] = page
205 toolbar = render_to_string("cms/toolbar/placeholder.html", context)
206 context.pop()
207 return toolbar
208
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/cms/plugin_rendering.py b/cms/plugin_rendering.py
--- a/cms/plugin_rendering.py
+++ b/cms/plugin_rendering.py
@@ -97,7 +97,7 @@
context = context_to_copy
context.push()
request = context['request']
- if not hasattr(request, 'placeholder'):
+ if not hasattr(request, 'placeholders'):
request.placeholders = []
request.placeholders.append(placeholder)
if hasattr(placeholder, 'content_cache'):
| {"golden_diff": "diff --git a/cms/plugin_rendering.py b/cms/plugin_rendering.py\n--- a/cms/plugin_rendering.py\n+++ b/cms/plugin_rendering.py\n@@ -97,7 +97,7 @@\n context = context_to_copy\n context.push()\n request = context['request']\n- if not hasattr(request, 'placeholder'):\n+ if not hasattr(request, 'placeholders'):\n request.placeholders = []\n request.placeholders.append(placeholder)\n if hasattr(placeholder, 'content_cache'):\n", "issue": "Bug in plugin_rendering.py\nI'm currently facing an extremely weird problem where the publish button and content/structure mode switch is not shown in the toolbar - effectively rendering the whole CMS useless. Unfortunately I don't know when this started, so I'm having a very hard time to pin down if this is my fault or not.\n\nAnyways... while debugging, I found this:\n\nhttps://github.com/divio/django-cms/blob/develop/cms/plugin_rendering.py#L100\n\nThat seems to be a bug to me. Shouldn't it be \n\n```\nif not hasattr(request, 'placeholders'):\n```\n\nNote: `placeholders` should be plural, no?\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom cms.models.placeholdermodel import Placeholder\nfrom cms.plugin_processors import (plugin_meta_context_processor, mark_safe_plugin_processor)\nfrom cms.utils import get_language_from_request\nfrom cms.utils.compat.type_checks import string_types\nfrom cms.utils.conf import get_cms_setting\nfrom cms.utils.django_load import iterload_objects\nfrom cms.utils.placeholder import get_placeholder_conf, restore_sekizai_context\nfrom django.template import Template, Context\nfrom django.template.loader import render_to_string\nfrom django.utils.safestring import mark_safe\n\n\n# these are always called before all other plugin context processors\nfrom sekizai.helpers import Watcher\n\nDEFAULT_PLUGIN_CONTEXT_PROCESSORS = (\n plugin_meta_context_processor,\n)\n\n# these are always called after all other plugin processors\nDEFAULT_PLUGIN_PROCESSORS = (\n mark_safe_plugin_processor,\n)\n\n\nclass PluginContext(Context):\n \"\"\"\n This subclass of template.Context automatically populates itself using\n the processors defined in CMS_PLUGIN_CONTEXT_PROCESSORS.\n Additional processors can be specified as a list of callables\n using the \"processors\" keyword argument.\n \"\"\"\n\n def __init__(self, dict, instance, placeholder, processors=None, current_app=None):\n super(PluginContext, self).__init__(dict, current_app=current_app)\n if not processors:\n processors = []\n for processor in DEFAULT_PLUGIN_CONTEXT_PROCESSORS:\n self.update(processor(instance, placeholder, self))\n for processor in iterload_objects(get_cms_setting('PLUGIN_CONTEXT_PROCESSORS')):\n self.update(processor(instance, placeholder, self))\n for processor in processors:\n self.update(processor(instance, placeholder, self))\n\n\ndef render_plugin(context, instance, placeholder, template, processors=None, current_app=None):\n \"\"\"\n Renders a single plugin and applies the post processors to it's rendered\n content.\n \"\"\"\n if not processors:\n processors = []\n if isinstance(template, string_types):\n content = render_to_string(template, context_instance=context)\n elif isinstance(template, Template):\n content = template.render(context)\n else:\n content = ''\n for processor in iterload_objects(get_cms_setting('PLUGIN_PROCESSORS')):\n content = processor(instance, placeholder, content, context)\n for processor in processors:\n content = processor(instance, placeholder, content, context)\n for processor in 
DEFAULT_PLUGIN_PROCESSORS:\n content = processor(instance, placeholder, content, context)\n return content\n\n\ndef render_plugins(plugins, context, placeholder, processors=None):\n \"\"\"\n Renders a collection of plugins with the given context, using the appropriate processors\n for a given placeholder name, and returns a list containing a \"rendered content\" string\n for each plugin.\n\n This is the main plugin rendering utility function, use this function rather than\n Plugin.render_plugin().\n \"\"\"\n out = []\n total = len(plugins)\n for index, plugin in enumerate(plugins):\n plugin._render_meta.total = total\n plugin._render_meta.index = index\n context.push()\n out.append(plugin.render_plugin(context, placeholder, processors=processors))\n context.pop()\n return out\n\n\ndef render_placeholder(placeholder, context_to_copy, name_fallback=\"Placeholder\", lang=None, default=None):\n \"\"\"\n Renders plugins for a placeholder on the given page using shallow copies of the\n given context, and returns a string containing the rendered output.\n \"\"\"\n if not placeholder:\n return\n from cms.utils.plugins import get_plugins\n context = context_to_copy\n context.push()\n request = context['request']\n if not hasattr(request, 'placeholder'):\n request.placeholders = []\n request.placeholders.append(placeholder)\n if hasattr(placeholder, 'content_cache'):\n return mark_safe(placeholder.content_cache)\n page = placeholder.page if placeholder else None\n # It's kind of duplicate of the similar call in `get_plugins`, but it's required\n # to have a valid language in this function for `get_fallback_languages` to work\n if lang:\n save_language = lang\n else:\n lang = get_language_from_request(request)\n save_language = lang\n\n # Prepend frontedit toolbar output if applicable\n edit = False\n toolbar = getattr(request, 'toolbar', None)\n\n if getattr(toolbar, 'edit_mode', False):\n edit = True\n if edit:\n from cms.middleware.toolbar import toolbar_plugin_processor\n\n processors = (toolbar_plugin_processor,)\n else:\n processors = None\n from django.core.cache import cache\n if get_cms_setting('PLACEHOLDER_CACHE'):\n cache_key = placeholder.get_cache_key(lang)\n if not edit and placeholder and not hasattr(placeholder, 'cache_checked'):\n cached_value = cache.get(cache_key)\n if not cached_value is None:\n restore_sekizai_context(context, cached_value['sekizai'])\n return mark_safe(cached_value['content'])\n if page:\n template = page.template\n else:\n template = None\n\n plugins = [plugin for plugin in get_plugins(request, placeholder, template, lang=lang)]\n\n # Add extra context as defined in settings, but do not overwrite existing context variables,\n # since settings are general and database/template are specific\n # TODO this should actually happen as a plugin context processor, but these currently overwrite\n # existing context -- maybe change this order?\n slot = getattr(placeholder, 'slot', None)\n extra_context = {}\n if slot:\n extra_context = get_placeholder_conf(\"extra_context\", slot, template, {})\n for key, value in extra_context.items():\n if key not in context:\n context[key] = value\n\n content = []\n watcher = Watcher(context)\n content.extend(render_plugins(plugins, context, placeholder, processors))\n toolbar_content = ''\n\n if edit:\n if not hasattr(request.toolbar, 'placeholders'):\n request.toolbar.placeholders = {}\n if placeholder.pk not in request.toolbar.placeholders:\n request.toolbar.placeholders[placeholder.pk] = placeholder\n if edit:\n toolbar_content = 
mark_safe(render_placeholder_toolbar(placeholder, context, name_fallback, save_language))\n if content:\n content = mark_safe(\"\".join(content))\n elif default:\n #should be nodelist from a template\n content = mark_safe(default.render(context_to_copy))\n else:\n content = ''\n context['content'] = content\n context['placeholder'] = toolbar_content\n context['edit'] = edit\n result = render_to_string(\"cms/toolbar/content.html\", context)\n changes = watcher.get_changes()\n if placeholder and not edit and placeholder.cache_placeholder and get_cms_setting('PLACEHOLDER_CACHE'):\n cache.set(cache_key, {'content': result, 'sekizai': changes}, get_cms_setting('CACHE_DURATIONS')['content'])\n context.pop()\n return result\n\n\ndef render_placeholder_toolbar(placeholder, context, name_fallback, save_language):\n from cms.plugin_pool import plugin_pool\n request = context['request']\n page = placeholder.page if placeholder else None\n if not page:\n page = getattr(request, 'current_page', None)\n if page:\n if name_fallback and not placeholder:\n placeholder = Placeholder.objects.create(slot=name_fallback)\n page.placeholders.add(placeholder)\n placeholder.page = page\n if placeholder:\n slot = placeholder.slot\n else:\n slot = None\n context.push()\n\n # to restrict child-only plugins from draggables..\n context['allowed_plugins'] = [cls.__name__ for cls in plugin_pool.get_all_plugins(slot, page)]\n context['placeholder'] = placeholder\n context['language'] = save_language\n context['page'] = page\n toolbar = render_to_string(\"cms/toolbar/placeholder.html\", context)\n context.pop()\n return toolbar\n", "path": "cms/plugin_rendering.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nfrom cms.models.placeholdermodel import Placeholder\nfrom cms.plugin_processors import (plugin_meta_context_processor, mark_safe_plugin_processor)\nfrom cms.utils import get_language_from_request\nfrom cms.utils.compat.type_checks import string_types\nfrom cms.utils.conf import get_cms_setting\nfrom cms.utils.django_load import iterload_objects\nfrom cms.utils.placeholder import get_placeholder_conf, restore_sekizai_context\nfrom django.template import Template, Context\nfrom django.template.loader import render_to_string\nfrom django.utils.safestring import mark_safe\n\n\n# these are always called before all other plugin context processors\nfrom sekizai.helpers import Watcher\n\nDEFAULT_PLUGIN_CONTEXT_PROCESSORS = (\n plugin_meta_context_processor,\n)\n\n# these are always called after all other plugin processors\nDEFAULT_PLUGIN_PROCESSORS = (\n mark_safe_plugin_processor,\n)\n\n\nclass PluginContext(Context):\n \"\"\"\n This subclass of template.Context automatically populates itself using\n the processors defined in CMS_PLUGIN_CONTEXT_PROCESSORS.\n Additional processors can be specified as a list of callables\n using the \"processors\" keyword argument.\n \"\"\"\n\n def __init__(self, dict, instance, placeholder, processors=None, current_app=None):\n super(PluginContext, self).__init__(dict, current_app=current_app)\n if not processors:\n processors = []\n for processor in DEFAULT_PLUGIN_CONTEXT_PROCESSORS:\n self.update(processor(instance, placeholder, self))\n for processor in iterload_objects(get_cms_setting('PLUGIN_CONTEXT_PROCESSORS')):\n self.update(processor(instance, placeholder, self))\n for processor in processors:\n self.update(processor(instance, placeholder, self))\n\n\ndef render_plugin(context, instance, placeholder, template, processors=None, current_app=None):\n \"\"\"\n Renders a 
single plugin and applies the post processors to it's rendered\n content.\n \"\"\"\n if not processors:\n processors = []\n if isinstance(template, string_types):\n content = render_to_string(template, context_instance=context)\n elif isinstance(template, Template):\n content = template.render(context)\n else:\n content = ''\n for processor in iterload_objects(get_cms_setting('PLUGIN_PROCESSORS')):\n content = processor(instance, placeholder, content, context)\n for processor in processors:\n content = processor(instance, placeholder, content, context)\n for processor in DEFAULT_PLUGIN_PROCESSORS:\n content = processor(instance, placeholder, content, context)\n return content\n\n\ndef render_plugins(plugins, context, placeholder, processors=None):\n \"\"\"\n Renders a collection of plugins with the given context, using the appropriate processors\n for a given placeholder name, and returns a list containing a \"rendered content\" string\n for each plugin.\n\n This is the main plugin rendering utility function, use this function rather than\n Plugin.render_plugin().\n \"\"\"\n out = []\n total = len(plugins)\n for index, plugin in enumerate(plugins):\n plugin._render_meta.total = total\n plugin._render_meta.index = index\n context.push()\n out.append(plugin.render_plugin(context, placeholder, processors=processors))\n context.pop()\n return out\n\n\ndef render_placeholder(placeholder, context_to_copy, name_fallback=\"Placeholder\", lang=None, default=None):\n \"\"\"\n Renders plugins for a placeholder on the given page using shallow copies of the\n given context, and returns a string containing the rendered output.\n \"\"\"\n if not placeholder:\n return\n from cms.utils.plugins import get_plugins\n context = context_to_copy\n context.push()\n request = context['request']\n if not hasattr(request, 'placeholders'):\n request.placeholders = []\n request.placeholders.append(placeholder)\n if hasattr(placeholder, 'content_cache'):\n return mark_safe(placeholder.content_cache)\n page = placeholder.page if placeholder else None\n # It's kind of duplicate of the similar call in `get_plugins`, but it's required\n # to have a valid language in this function for `get_fallback_languages` to work\n if lang:\n save_language = lang\n else:\n lang = get_language_from_request(request)\n save_language = lang\n\n # Prepend frontedit toolbar output if applicable\n edit = False\n toolbar = getattr(request, 'toolbar', None)\n\n if getattr(toolbar, 'edit_mode', False):\n edit = True\n if edit:\n from cms.middleware.toolbar import toolbar_plugin_processor\n\n processors = (toolbar_plugin_processor,)\n else:\n processors = None\n from django.core.cache import cache\n if get_cms_setting('PLACEHOLDER_CACHE'):\n cache_key = placeholder.get_cache_key(lang)\n if not edit and placeholder and not hasattr(placeholder, 'cache_checked'):\n cached_value = cache.get(cache_key)\n if not cached_value is None:\n restore_sekizai_context(context, cached_value['sekizai'])\n return mark_safe(cached_value['content'])\n if page:\n template = page.template\n else:\n template = None\n\n plugins = [plugin for plugin in get_plugins(request, placeholder, template, lang=lang)]\n\n # Add extra context as defined in settings, but do not overwrite existing context variables,\n # since settings are general and database/template are specific\n # TODO this should actually happen as a plugin context processor, but these currently overwrite\n # existing context -- maybe change this order?\n slot = getattr(placeholder, 'slot', None)\n extra_context = 
{}\n if slot:\n extra_context = get_placeholder_conf(\"extra_context\", slot, template, {})\n for key, value in extra_context.items():\n if key not in context:\n context[key] = value\n\n content = []\n watcher = Watcher(context)\n content.extend(render_plugins(plugins, context, placeholder, processors))\n toolbar_content = ''\n\n if edit:\n if not hasattr(request.toolbar, 'placeholders'):\n request.toolbar.placeholders = {}\n if placeholder.pk not in request.toolbar.placeholders:\n request.toolbar.placeholders[placeholder.pk] = placeholder\n if edit:\n toolbar_content = mark_safe(render_placeholder_toolbar(placeholder, context, name_fallback, save_language))\n if content:\n content = mark_safe(\"\".join(content))\n elif default:\n #should be nodelist from a template\n content = mark_safe(default.render(context_to_copy))\n else:\n content = ''\n context['content'] = content\n context['placeholder'] = toolbar_content\n context['edit'] = edit\n result = render_to_string(\"cms/toolbar/content.html\", context)\n changes = watcher.get_changes()\n if placeholder and not edit and placeholder.cache_placeholder and get_cms_setting('PLACEHOLDER_CACHE'):\n cache.set(cache_key, {'content': result, 'sekizai': changes}, get_cms_setting('CACHE_DURATIONS')['content'])\n context.pop()\n return result\n\n\ndef render_placeholder_toolbar(placeholder, context, name_fallback, save_language):\n from cms.plugin_pool import plugin_pool\n request = context['request']\n page = placeholder.page if placeholder else None\n if not page:\n page = getattr(request, 'current_page', None)\n if page:\n if name_fallback and not placeholder:\n placeholder = Placeholder.objects.create(slot=name_fallback)\n page.placeholders.add(placeholder)\n placeholder.page = page\n if placeholder:\n slot = placeholder.slot\n else:\n slot = None\n context.push()\n\n # to restrict child-only plugins from draggables..\n context['allowed_plugins'] = [cls.__name__ for cls in plugin_pool.get_all_plugins(slot, page)]\n context['placeholder'] = placeholder\n context['language'] = save_language\n context['page'] = page\n toolbar = render_to_string(\"cms/toolbar/placeholder.html\", context)\n context.pop()\n return toolbar\n", "path": "cms/plugin_rendering.py"}]} |
gh_patches_debug_1138 | rasdani/github-patches | git_diff | e-valuation__EvaP-1531 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Access denied on manager login
Currently, after logging in, a manager is redirected to /staff/, but staff mode will not be active, so they will get a 403 access denied.
@janno42 what behavior do we want here? Redirect as if they weren't a manager or enable staff mode?
--- END ISSUE ---
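One way to read the reported behaviour: after a successful email/password login, the `index` view falls through to `redirect_user_to_start_page()`, which sends reviewers to the staff area even though staff mode has not been activated for the session, hence the 403. A sketch of the kind of fix being discussed, assuming the intended behaviour is to send the freshly logged-in user back through the index view instead; the helper name below is a hypothetical extraction for illustration, not the project's actual code:

```python
# Sketch only: the password-login branch of evap/evaluation/views.py::index(),
# pulled out into a hypothetical helper for readability.
from django.contrib import auth
from django.shortcuts import redirect

def _handle_password_login(request, login_email_form):
    """Illustrative extraction of the password-login branch."""
    if login_email_form.is_valid():
        auth.login(request, login_email_form.get_user())
        if request.session.test_cookie_worked():
            request.session.delete_test_cookie()
        # Returning here routes the user back through the index view instead of
        # falling through to redirect_user_to_start_page(), which would point
        # reviewers at /staff/ while staff mode is still inactive.
        return redirect('evaluation:index')
    return None
```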
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `evap/evaluation/views.py`
Content:
```
1 import logging
2 from datetime import date, timedelta
3
4 from django.conf import settings
5 from django.contrib import messages, auth
6 from django.contrib.auth.decorators import login_required
7 from django.core.mail import EmailMessage
8 from django.http import HttpResponse, HttpResponseBadRequest
9 from django.shortcuts import redirect, render
10 from django.utils.translation import gettext as _
11 from django.views.decorators.http import require_POST
12 from django.views.decorators.debug import sensitive_post_parameters
13 from django.views.i18n import set_language
14
15 from evap.evaluation.forms import NewKeyForm, LoginEmailForm
16 from evap.middleware import no_login_required
17 from evap.evaluation.models import FaqSection, EmailTemplate, Semester
18
19 logger = logging.getLogger(__name__)
20
21
22 def redirect_user_to_start_page(user):
23 # pylint: disable=too-many-return-statements
24 active_semester = Semester.active_semester()
25
26 if user.is_reviewer:
27 if active_semester is not None:
28 return redirect('staff:semester_view', active_semester.id)
29 return redirect('staff:index')
30
31 if user.is_grade_publisher:
32 if active_semester is not None:
33 return redirect('grades:semester_view', active_semester.id)
34 return redirect('grades:index')
35
36 if user.is_student:
37 return redirect('student:index')
38 if user.is_responsible_or_contributor_or_delegate:
39 return redirect('contributor:index')
40
41 return redirect('results:index')
42
43
44 @no_login_required
45 @sensitive_post_parameters("password")
46 def index(request):
47 """Main entry page into EvaP providing all the login options available. The OpenID login is thought to be used for
48 internal users. The login key mechanism is meant to be used to include external participants, e.g. visiting
49 students or visiting contributors. A login with email and password is available if OpenID is deactivated.
50 """
51
52 # parse the form data into the respective form
53 submit_type = request.POST.get("submit_type", "no_submit")
54 new_key_form = NewKeyForm(request.POST if submit_type == "new_key" else None)
55 login_email_form = LoginEmailForm(request, request.POST if submit_type == "login_email" else None)
56
57 # process form data
58 if request.method == 'POST':
59 if new_key_form.is_valid():
60 # user wants a new login key
61 profile = new_key_form.get_user()
62 profile.ensure_valid_login_key()
63 profile.save()
64
65 EmailTemplate.send_login_url_to_user(new_key_form.get_user())
66
67 messages.success(request, _("We sent you an email with a one-time login URL. Please check your inbox."))
68 return redirect('evaluation:index')
69
70 if login_email_form.is_valid():
71 # user would like to login with email and password and passed password test
72 auth.login(request, login_email_form.get_user())
73
74 # clean up our test cookie
75 if request.session.test_cookie_worked():
76 request.session.delete_test_cookie()
77
78 # if not logged in by now, render form
79 if not request.user.is_authenticated:
80 # set test cookie to verify whether they work in the next step
81 request.session.set_test_cookie()
82
83 template_data = dict(
84 new_key_form=new_key_form,
85 login_email_form=login_email_form,
86 openid_active=settings.ACTIVATE_OPEN_ID_LOGIN,
87 )
88 return render(request, "index.html", template_data)
89
90 # check for redirect variable
91 redirect_to = request.GET.get("next", None)
92 if redirect_to is not None:
93 return redirect(redirect_to)
94
95 return redirect_user_to_start_page(request.user)
96
97
98 @no_login_required
99 def login_key_authentication(request, key):
100 user = auth.authenticate(request, key=key)
101
102 if user and not user.is_active:
103 messages.error(request, _("Inactive users are not allowed to login."))
104 return redirect('evaluation:index')
105
106 # If we already have an authenticated user don't try to login a new user. Show an error message if another user
107 # tries to login with a URL in this situation.
108 if request.user.is_authenticated:
109 if user != request.user:
110 messages.error(request, _("Another user is currently logged in. Please logout first and then use the login URL again."))
111 return redirect('evaluation:index')
112
113 if user and user.login_key_valid_until >= date.today():
114 if request.method != "POST":
115 template_data = {
116 'username': user.full_name
117 }
118 return render(request, "external_user_confirm_login.html", template_data)
119
120 # User is valid. Set request.user and persist user in the session by logging the user in.
121 request.user = user
122 auth.login(request, user)
123 messages.success(request, _("Logged in as %s.") % user.full_name)
124 # Invalidate the login key, but keep it stored so we can later identify the user that is trying to login and send a new link
125 user.login_key_valid_until = date.today() - timedelta(1)
126 user.save()
127 elif user:
128 # A user exists, but the login key is not valid anymore. Send the user a new one.
129 user.ensure_valid_login_key()
130 EmailTemplate.send_login_url_to_user(user)
131 messages.warning(request, _("The login URL is not valid anymore. We sent you a new one to your email address."))
132 else:
133 messages.warning(request, _("Invalid login URL. Please request a new one below."))
134
135 return redirect('evaluation:index')
136
137
138 @no_login_required
139 def faq(request):
140 return render(request, "faq.html", dict(sections=FaqSection.objects.all()))
141
142
143 @no_login_required
144 def legal_notice(request):
145 return render(request, "legal_notice.html", dict())
146
147
148 @require_POST
149 @login_required
150 def contact(request):
151 message = request.POST.get("message")
152 title = request.POST.get("title")
153 email = request.user.email or f"User {request.user.id}"
154 subject = f"[EvaP] Message from {email}"
155
156 if message:
157 mail = EmailMessage(
158 subject=subject,
159 body="{}\n{}\n\n{}".format(title, request.user.email, message),
160 to=[settings.CONTACT_EMAIL])
161 try:
162 mail.send()
163 logger.info('Sent contact email: \n{}\n'.format(mail.message()))
164 return HttpResponse()
165 except Exception:
166 logger.exception('An exception occurred when sending the following contact email:\n{}\n'.format(mail.message()))
167 raise
168
169 return HttpResponseBadRequest()
170
171
172 @no_login_required
173 @require_POST
174 def set_lang(request):
175 if request.user.is_authenticated:
176 user = request.user
177 user.language = request.POST['language']
178 user.save()
179
180 return set_language(request)
181
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/evap/evaluation/views.py b/evap/evaluation/views.py
--- a/evap/evaluation/views.py
+++ b/evap/evaluation/views.py
@@ -74,6 +74,7 @@
# clean up our test cookie
if request.session.test_cookie_worked():
request.session.delete_test_cookie()
+ return redirect('evaluation:index')
# if not logged in by now, render form
if not request.user.is_authenticated:
| {"golden_diff": "diff --git a/evap/evaluation/views.py b/evap/evaluation/views.py\n--- a/evap/evaluation/views.py\n+++ b/evap/evaluation/views.py\n@@ -74,6 +74,7 @@\n # clean up our test cookie\n if request.session.test_cookie_worked():\n request.session.delete_test_cookie()\n+ return redirect('evaluation:index')\n \n # if not logged in by now, render form\n if not request.user.is_authenticated:\n", "issue": "Access denied on manager login\nCurrently, after logging in, a manager is redirected to /staff/, but staff mode will not be active, so they will get a 403 access denied.\r\n\r\n@janno42 what behavior do we want here? Redirect as if they weren't a manager or enable staff mode?\n", "before_files": [{"content": "import logging\nfrom datetime import date, timedelta\n\nfrom django.conf import settings\nfrom django.contrib import messages, auth\nfrom django.contrib.auth.decorators import login_required\nfrom django.core.mail import EmailMessage\nfrom django.http import HttpResponse, HttpResponseBadRequest\nfrom django.shortcuts import redirect, render\nfrom django.utils.translation import gettext as _\nfrom django.views.decorators.http import require_POST\nfrom django.views.decorators.debug import sensitive_post_parameters\nfrom django.views.i18n import set_language\n\nfrom evap.evaluation.forms import NewKeyForm, LoginEmailForm\nfrom evap.middleware import no_login_required\nfrom evap.evaluation.models import FaqSection, EmailTemplate, Semester\n\nlogger = logging.getLogger(__name__)\n\n\ndef redirect_user_to_start_page(user):\n # pylint: disable=too-many-return-statements\n active_semester = Semester.active_semester()\n\n if user.is_reviewer:\n if active_semester is not None:\n return redirect('staff:semester_view', active_semester.id)\n return redirect('staff:index')\n\n if user.is_grade_publisher:\n if active_semester is not None:\n return redirect('grades:semester_view', active_semester.id)\n return redirect('grades:index')\n\n if user.is_student:\n return redirect('student:index')\n if user.is_responsible_or_contributor_or_delegate:\n return redirect('contributor:index')\n\n return redirect('results:index')\n\n\n@no_login_required\n@sensitive_post_parameters(\"password\")\ndef index(request):\n \"\"\"Main entry page into EvaP providing all the login options available. The OpenID login is thought to be used for\n internal users. The login key mechanism is meant to be used to include external participants, e.g. visiting\n students or visiting contributors. A login with email and password is available if OpenID is deactivated.\n \"\"\"\n\n # parse the form data into the respective form\n submit_type = request.POST.get(\"submit_type\", \"no_submit\")\n new_key_form = NewKeyForm(request.POST if submit_type == \"new_key\" else None)\n login_email_form = LoginEmailForm(request, request.POST if submit_type == \"login_email\" else None)\n\n # process form data\n if request.method == 'POST':\n if new_key_form.is_valid():\n # user wants a new login key\n profile = new_key_form.get_user()\n profile.ensure_valid_login_key()\n profile.save()\n\n EmailTemplate.send_login_url_to_user(new_key_form.get_user())\n\n messages.success(request, _(\"We sent you an email with a one-time login URL. 
Please check your inbox.\"))\n return redirect('evaluation:index')\n\n if login_email_form.is_valid():\n # user would like to login with email and password and passed password test\n auth.login(request, login_email_form.get_user())\n\n # clean up our test cookie\n if request.session.test_cookie_worked():\n request.session.delete_test_cookie()\n\n # if not logged in by now, render form\n if not request.user.is_authenticated:\n # set test cookie to verify whether they work in the next step\n request.session.set_test_cookie()\n\n template_data = dict(\n new_key_form=new_key_form,\n login_email_form=login_email_form,\n openid_active=settings.ACTIVATE_OPEN_ID_LOGIN,\n )\n return render(request, \"index.html\", template_data)\n\n # check for redirect variable\n redirect_to = request.GET.get(\"next\", None)\n if redirect_to is not None:\n return redirect(redirect_to)\n\n return redirect_user_to_start_page(request.user)\n\n\n@no_login_required\ndef login_key_authentication(request, key):\n user = auth.authenticate(request, key=key)\n\n if user and not user.is_active:\n messages.error(request, _(\"Inactive users are not allowed to login.\"))\n return redirect('evaluation:index')\n\n # If we already have an authenticated user don't try to login a new user. Show an error message if another user\n # tries to login with a URL in this situation.\n if request.user.is_authenticated:\n if user != request.user:\n messages.error(request, _(\"Another user is currently logged in. Please logout first and then use the login URL again.\"))\n return redirect('evaluation:index')\n\n if user and user.login_key_valid_until >= date.today():\n if request.method != \"POST\":\n template_data = {\n 'username': user.full_name\n }\n return render(request, \"external_user_confirm_login.html\", template_data)\n\n # User is valid. Set request.user and persist user in the session by logging the user in.\n request.user = user\n auth.login(request, user)\n messages.success(request, _(\"Logged in as %s.\") % user.full_name)\n # Invalidate the login key, but keep it stored so we can later identify the user that is trying to login and send a new link\n user.login_key_valid_until = date.today() - timedelta(1)\n user.save()\n elif user:\n # A user exists, but the login key is not valid anymore. Send the user a new one.\n user.ensure_valid_login_key()\n EmailTemplate.send_login_url_to_user(user)\n messages.warning(request, _(\"The login URL is not valid anymore. We sent you a new one to your email address.\"))\n else:\n messages.warning(request, _(\"Invalid login URL. 
Please request a new one below.\"))\n\n return redirect('evaluation:index')\n\n\n@no_login_required\ndef faq(request):\n return render(request, \"faq.html\", dict(sections=FaqSection.objects.all()))\n\n\n@no_login_required\ndef legal_notice(request):\n return render(request, \"legal_notice.html\", dict())\n\n\n@require_POST\n@login_required\ndef contact(request):\n message = request.POST.get(\"message\")\n title = request.POST.get(\"title\")\n email = request.user.email or f\"User {request.user.id}\"\n subject = f\"[EvaP] Message from {email}\"\n\n if message:\n mail = EmailMessage(\n subject=subject,\n body=\"{}\\n{}\\n\\n{}\".format(title, request.user.email, message),\n to=[settings.CONTACT_EMAIL])\n try:\n mail.send()\n logger.info('Sent contact email: \\n{}\\n'.format(mail.message()))\n return HttpResponse()\n except Exception:\n logger.exception('An exception occurred when sending the following contact email:\\n{}\\n'.format(mail.message()))\n raise\n\n return HttpResponseBadRequest()\n\n\n@no_login_required\n@require_POST\ndef set_lang(request):\n if request.user.is_authenticated:\n user = request.user\n user.language = request.POST['language']\n user.save()\n\n return set_language(request)\n", "path": "evap/evaluation/views.py"}], "after_files": [{"content": "import logging\nfrom datetime import date, timedelta\n\nfrom django.conf import settings\nfrom django.contrib import messages, auth\nfrom django.contrib.auth.decorators import login_required\nfrom django.core.mail import EmailMessage\nfrom django.http import HttpResponse, HttpResponseBadRequest\nfrom django.shortcuts import redirect, render\nfrom django.utils.translation import gettext as _\nfrom django.views.decorators.http import require_POST\nfrom django.views.decorators.debug import sensitive_post_parameters\nfrom django.views.i18n import set_language\n\nfrom evap.evaluation.forms import NewKeyForm, LoginEmailForm\nfrom evap.middleware import no_login_required\nfrom evap.evaluation.models import FaqSection, EmailTemplate, Semester\n\nlogger = logging.getLogger(__name__)\n\n\ndef redirect_user_to_start_page(user):\n # pylint: disable=too-many-return-statements\n active_semester = Semester.active_semester()\n\n if user.is_reviewer:\n if active_semester is not None:\n return redirect('staff:semester_view', active_semester.id)\n return redirect('staff:index')\n\n if user.is_grade_publisher:\n if active_semester is not None:\n return redirect('grades:semester_view', active_semester.id)\n return redirect('grades:index')\n\n if user.is_student:\n return redirect('student:index')\n if user.is_responsible_or_contributor_or_delegate:\n return redirect('contributor:index')\n\n return redirect('results:index')\n\n\n@no_login_required\n@sensitive_post_parameters(\"password\")\ndef index(request):\n \"\"\"Main entry page into EvaP providing all the login options available. The OpenID login is thought to be used for\n internal users. The login key mechanism is meant to be used to include external participants, e.g. visiting\n students or visiting contributors. 
A login with email and password is available if OpenID is deactivated.\n \"\"\"\n\n # parse the form data into the respective form\n submit_type = request.POST.get(\"submit_type\", \"no_submit\")\n new_key_form = NewKeyForm(request.POST if submit_type == \"new_key\" else None)\n login_email_form = LoginEmailForm(request, request.POST if submit_type == \"login_email\" else None)\n\n # process form data\n if request.method == 'POST':\n if new_key_form.is_valid():\n # user wants a new login key\n profile = new_key_form.get_user()\n profile.ensure_valid_login_key()\n profile.save()\n\n EmailTemplate.send_login_url_to_user(new_key_form.get_user())\n\n messages.success(request, _(\"We sent you an email with a one-time login URL. Please check your inbox.\"))\n return redirect('evaluation:index')\n\n if login_email_form.is_valid():\n # user would like to login with email and password and passed password test\n auth.login(request, login_email_form.get_user())\n\n # clean up our test cookie\n if request.session.test_cookie_worked():\n request.session.delete_test_cookie()\n return redirect('evaluation:index')\n\n # if not logged in by now, render form\n if not request.user.is_authenticated:\n # set test cookie to verify whether they work in the next step\n request.session.set_test_cookie()\n\n template_data = dict(\n new_key_form=new_key_form,\n login_email_form=login_email_form,\n openid_active=settings.ACTIVATE_OPEN_ID_LOGIN,\n )\n return render(request, \"index.html\", template_data)\n\n # check for redirect variable\n redirect_to = request.GET.get(\"next\", None)\n if redirect_to is not None:\n return redirect(redirect_to)\n\n return redirect_user_to_start_page(request.user)\n\n\n@no_login_required\ndef login_key_authentication(request, key):\n user = auth.authenticate(request, key=key)\n\n if user and not user.is_active:\n messages.error(request, _(\"Inactive users are not allowed to login.\"))\n return redirect('evaluation:index')\n\n # If we already have an authenticated user don't try to login a new user. Show an error message if another user\n # tries to login with a URL in this situation.\n if request.user.is_authenticated:\n if user != request.user:\n messages.error(request, _(\"Another user is currently logged in. Please logout first and then use the login URL again.\"))\n return redirect('evaluation:index')\n\n if user and user.login_key_valid_until >= date.today():\n if request.method != \"POST\":\n template_data = {\n 'username': user.full_name\n }\n return render(request, \"external_user_confirm_login.html\", template_data)\n\n # User is valid. Set request.user and persist user in the session by logging the user in.\n request.user = user\n auth.login(request, user)\n messages.success(request, _(\"Logged in as %s.\") % user.full_name)\n # Invalidate the login key, but keep it stored so we can later identify the user that is trying to login and send a new link\n user.login_key_valid_until = date.today() - timedelta(1)\n user.save()\n elif user:\n # A user exists, but the login key is not valid anymore. Send the user a new one.\n user.ensure_valid_login_key()\n EmailTemplate.send_login_url_to_user(user)\n messages.warning(request, _(\"The login URL is not valid anymore. We sent you a new one to your email address.\"))\n else:\n messages.warning(request, _(\"Invalid login URL. 
Please request a new one below.\"))\n\n return redirect('evaluation:index')\n\n\n@no_login_required\ndef faq(request):\n return render(request, \"faq.html\", dict(sections=FaqSection.objects.all()))\n\n\n@no_login_required\ndef legal_notice(request):\n return render(request, \"legal_notice.html\", dict())\n\n\n@require_POST\n@login_required\ndef contact(request):\n message = request.POST.get(\"message\")\n title = request.POST.get(\"title\")\n email = request.user.email or f\"User {request.user.id}\"\n subject = f\"[EvaP] Message from {email}\"\n\n if message:\n mail = EmailMessage(\n subject=subject,\n body=\"{}\\n{}\\n\\n{}\".format(title, request.user.email, message),\n to=[settings.CONTACT_EMAIL])\n try:\n mail.send()\n logger.info('Sent contact email: \\n{}\\n'.format(mail.message()))\n return HttpResponse()\n except Exception:\n logger.exception('An exception occurred when sending the following contact email:\\n{}\\n'.format(mail.message()))\n raise\n\n return HttpResponseBadRequest()\n\n\n@no_login_required\n@require_POST\ndef set_lang(request):\n if request.user.is_authenticated:\n user = request.user\n user.language = request.POST['language']\n user.save()\n\n return set_language(request)\n", "path": "evap/evaluation/views.py"}]} |
gh_patches_debug_1139 | rasdani/github-patches | git_diff | boto__boto-2521 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
glacier: tree_hash returned as bytes by compute_hashes_from_fileobj
When uploading a file to glacier, `compute_hashes_from_fileobj` uses `bytes_to_hex` from `glacier.utils`. `bytes_to_hex`, in turn, uses `binascii.hexlify()`. In Python 3 (I'm running v3.4), this now returns a `bytes` object, not a `str`.
This eventually causes a `TypeError: Type str doesn't support the buffer API` exception in `auth.py`'s `canonical_headers` function: the hash value is used as a request header, is never converted from `bytes` to `str`, and is then operated on as if it were a `str`.
--- END ISSUE ---
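For context, a minimal sketch of the failure mode described above (assuming Python 3; the exact `TypeError` wording differs between 3.x versions, and the header name below is illustrative): `binascii.hexlify` returns `bytes`, and the str-based operations that `canonical_headers` performs on header values then fail when the tree hash is passed through verbatim.

```python
import binascii
import hashlib

# The tree hash comes back as bytes on Python 3, not str.
tree_hash = binascii.hexlify(hashlib.sha256(b"archive chunk").digest())
print(type(tree_hash))  # <class 'bytes'>

# Using it verbatim as a header value and then applying the kind of
# str-based check canonical_headers performs raises a TypeError.
header_value = tree_hash
try:
    '"' in header_value  # mixing a str needle with a bytes value
except TypeError as exc:
    print("TypeError:", exc)

# Decoding the value first avoids the problem entirely.
decoded = header_value.decode("utf-8")
print('"' in decoded)  # False
```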
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `boto/auth.py`
Content:
```
1 # Copyright 2010 Google Inc.
2 # Copyright (c) 2011 Mitch Garnaat http://garnaat.org/
3 # Copyright (c) 2011, Eucalyptus Systems, Inc.
4 #
5 # Permission is hereby granted, free of charge, to any person obtaining a
6 # copy of this software and associated documentation files (the
7 # "Software"), to deal in the Software without restriction, including
8 # without limitation the rights to use, copy, modify, merge, publish, dis-
9 # tribute, sublicense, and/or sell copies of the Software, and to permit
10 # persons to whom the Software is furnished to do so, subject to the fol-
11 # lowing conditions:
12 #
13 # The above copyright notice and this permission notice shall be included
14 # in all copies or substantial portions of the Software.
15 #
16 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
17 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
18 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
19 # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
20 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
21 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
22 # IN THE SOFTWARE.
23
24
25 """
26 Handles authentication required to AWS and GS
27 """
28
29 import base64
30 import boto
31 import boto.auth_handler
32 import boto.exception
33 import boto.plugin
34 import boto.utils
35 import copy
36 import datetime
37 from email.utils import formatdate
38 import hmac
39 import os
40 import sys
41 import time
42 import posixpath
43
44 from boto.compat import urllib, encodebytes
45 from boto.auth_handler import AuthHandler
46 from boto.exception import BotoClientError
47
48 try:
49 from hashlib import sha1 as sha
50 from hashlib import sha256 as sha256
51 except ImportError:
52 import sha
53 sha256 = None
54
55
56 class HmacKeys(object):
57 """Key based Auth handler helper."""
58
59 def __init__(self, host, config, provider):
60 if provider.access_key is None or provider.secret_key is None:
61 raise boto.auth_handler.NotReadyToAuthenticate()
62 self.host = host
63 self.update_provider(provider)
64
65 def update_provider(self, provider):
66 self._provider = provider
67 self._hmac = hmac.new(self._provider.secret_key.encode('utf-8'),
68 digestmod=sha)
69 if sha256:
70 self._hmac_256 = hmac.new(self._provider.secret_key.encode('utf-8'),
71 digestmod=sha256)
72 else:
73 self._hmac_256 = None
74
75 def algorithm(self):
76 if self._hmac_256:
77 return 'HmacSHA256'
78 else:
79 return 'HmacSHA1'
80
81 def _get_hmac(self):
82 if self._hmac_256:
83 digestmod = sha256
84 else:
85 digestmod = sha
86 return hmac.new(self._provider.secret_key.encode('utf-8'),
87 digestmod=digestmod)
88
89 def sign_string(self, string_to_sign):
90 new_hmac = self._get_hmac()
91 new_hmac.update(string_to_sign.encode('utf-8'))
92 return encodebytes(new_hmac.digest()).decode('utf-8').strip()
93
94 def __getstate__(self):
95 pickled_dict = copy.copy(self.__dict__)
96 del pickled_dict['_hmac']
97 del pickled_dict['_hmac_256']
98 return pickled_dict
99
100 def __setstate__(self, dct):
101 self.__dict__ = dct
102 self.update_provider(self._provider)
103
104
105 class AnonAuthHandler(AuthHandler, HmacKeys):
106 """
107 Implements Anonymous requests.
108 """
109
110 capability = ['anon']
111
112 def __init__(self, host, config, provider):
113 super(AnonAuthHandler, self).__init__(host, config, provider)
114
115 def add_auth(self, http_request, **kwargs):
116 pass
117
118
119 class HmacAuthV1Handler(AuthHandler, HmacKeys):
120 """ Implements the HMAC request signing used by S3 and GS."""
121
122 capability = ['hmac-v1', 's3']
123
124 def __init__(self, host, config, provider):
125 AuthHandler.__init__(self, host, config, provider)
126 HmacKeys.__init__(self, host, config, provider)
127 self._hmac_256 = None
128
129 def update_provider(self, provider):
130 super(HmacAuthV1Handler, self).update_provider(provider)
131 self._hmac_256 = None
132
133 def add_auth(self, http_request, **kwargs):
134 headers = http_request.headers
135 method = http_request.method
136 auth_path = http_request.auth_path
137 if 'Date' not in headers:
138 headers['Date'] = formatdate(usegmt=True)
139
140 if self._provider.security_token:
141 key = self._provider.security_token_header
142 headers[key] = self._provider.security_token
143 string_to_sign = boto.utils.canonical_string(method, auth_path,
144 headers, None,
145 self._provider)
146 boto.log.debug('StringToSign:\n%s' % string_to_sign)
147 b64_hmac = self.sign_string(string_to_sign)
148 auth_hdr = self._provider.auth_header
149 auth = ("%s %s:%s" % (auth_hdr, self._provider.access_key, b64_hmac))
150 boto.log.debug('Signature:\n%s' % auth)
151 headers['Authorization'] = auth
152
153
154 class HmacAuthV2Handler(AuthHandler, HmacKeys):
155 """
156 Implements the simplified HMAC authorization used by CloudFront.
157 """
158 capability = ['hmac-v2', 'cloudfront']
159
160 def __init__(self, host, config, provider):
161 AuthHandler.__init__(self, host, config, provider)
162 HmacKeys.__init__(self, host, config, provider)
163 self._hmac_256 = None
164
165 def update_provider(self, provider):
166 super(HmacAuthV2Handler, self).update_provider(provider)
167 self._hmac_256 = None
168
169 def add_auth(self, http_request, **kwargs):
170 headers = http_request.headers
171 if 'Date' not in headers:
172 headers['Date'] = formatdate(usegmt=True)
173 if self._provider.security_token:
174 key = self._provider.security_token_header
175 headers[key] = self._provider.security_token
176
177 b64_hmac = self.sign_string(headers['Date'])
178 auth_hdr = self._provider.auth_header
179 headers['Authorization'] = ("%s %s:%s" %
180 (auth_hdr,
181 self._provider.access_key, b64_hmac))
182
183
184 class HmacAuthV3Handler(AuthHandler, HmacKeys):
185 """Implements the new Version 3 HMAC authorization used by Route53."""
186
187 capability = ['hmac-v3', 'route53', 'ses']
188
189 def __init__(self, host, config, provider):
190 AuthHandler.__init__(self, host, config, provider)
191 HmacKeys.__init__(self, host, config, provider)
192
193 def add_auth(self, http_request, **kwargs):
194 headers = http_request.headers
195 if 'Date' not in headers:
196 headers['Date'] = formatdate(usegmt=True)
197
198 if self._provider.security_token:
199 key = self._provider.security_token_header
200 headers[key] = self._provider.security_token
201
202 b64_hmac = self.sign_string(headers['Date'])
203 s = "AWS3-HTTPS AWSAccessKeyId=%s," % self._provider.access_key
204 s += "Algorithm=%s,Signature=%s" % (self.algorithm(), b64_hmac)
205 headers['X-Amzn-Authorization'] = s
206
207
208 class HmacAuthV3HTTPHandler(AuthHandler, HmacKeys):
209 """
210 Implements the new Version 3 HMAC authorization used by DynamoDB.
211 """
212
213 capability = ['hmac-v3-http']
214
215 def __init__(self, host, config, provider):
216 AuthHandler.__init__(self, host, config, provider)
217 HmacKeys.__init__(self, host, config, provider)
218
219 def headers_to_sign(self, http_request):
220 """
221 Select the headers from the request that need to be included
222 in the StringToSign.
223 """
224 headers_to_sign = {'Host': self.host}
225 for name, value in http_request.headers.items():
226 lname = name.lower()
227 if lname.startswith('x-amz'):
228 headers_to_sign[name] = value
229 return headers_to_sign
230
231 def canonical_headers(self, headers_to_sign):
232 """
233 Return the headers that need to be included in the StringToSign
234 in their canonical form by converting all header keys to lower
235 case, sorting them in alphabetical order and then joining
236 them into a string, separated by newlines.
237 """
238 l = sorted(['%s:%s' % (n.lower().strip(),
239 headers_to_sign[n].strip()) for n in headers_to_sign])
240 return '\n'.join(l)
241
242 def string_to_sign(self, http_request):
243 """
244 Return the canonical StringToSign as well as a dict
245 containing the original version of all headers that
246 were included in the StringToSign.
247 """
248 headers_to_sign = self.headers_to_sign(http_request)
249 canonical_headers = self.canonical_headers(headers_to_sign)
250 string_to_sign = '\n'.join([http_request.method,
251 http_request.auth_path,
252 '',
253 canonical_headers,
254 '',
255 http_request.body])
256 return string_to_sign, headers_to_sign
257
258 def add_auth(self, req, **kwargs):
259 """
260 Add AWS3 authentication to a request.
261
262 :type req: :class`boto.connection.HTTPRequest`
263 :param req: The HTTPRequest object.
264 """
265 # This could be a retry. Make sure the previous
266 # authorization header is removed first.
267 if 'X-Amzn-Authorization' in req.headers:
268 del req.headers['X-Amzn-Authorization']
269 req.headers['X-Amz-Date'] = formatdate(usegmt=True)
270 if self._provider.security_token:
271 req.headers['X-Amz-Security-Token'] = self._provider.security_token
272 string_to_sign, headers_to_sign = self.string_to_sign(req)
273 boto.log.debug('StringToSign:\n%s' % string_to_sign)
274 hash_value = sha256(string_to_sign.encode('utf-8')).digest()
275 b64_hmac = self.sign_string(hash_value)
276 s = "AWS3 AWSAccessKeyId=%s," % self._provider.access_key
277 s += "Algorithm=%s," % self.algorithm()
278 s += "SignedHeaders=%s," % ';'.join(headers_to_sign)
279 s += "Signature=%s" % b64_hmac
280 req.headers['X-Amzn-Authorization'] = s
281
282
283 class HmacAuthV4Handler(AuthHandler, HmacKeys):
284 """
285 Implements the new Version 4 HMAC authorization.
286 """
287
288 capability = ['hmac-v4']
289
290 def __init__(self, host, config, provider,
291 service_name=None, region_name=None):
292 AuthHandler.__init__(self, host, config, provider)
293 HmacKeys.__init__(self, host, config, provider)
294 # You can set the service_name and region_name to override the
295 # values which would otherwise come from the endpoint, e.g.
296 # <service>.<region>.amazonaws.com.
297 self.service_name = service_name
298 self.region_name = region_name
299
300 def _sign(self, key, msg, hex=False):
301 if not isinstance(key, bytes):
302 key = key.encode('utf-8')
303
304 if hex:
305 sig = hmac.new(key, msg.encode('utf-8'), sha256).hexdigest()
306 else:
307 sig = hmac.new(key, msg.encode('utf-8'), sha256).digest()
308 return sig
309
310 def headers_to_sign(self, http_request):
311 """
312 Select the headers from the request that need to be included
313 in the StringToSign.
314 """
315 host_header_value = self.host_header(self.host, http_request)
316 headers_to_sign = {'Host': host_header_value}
317 for name, value in http_request.headers.items():
318 lname = name.lower()
319 if lname.startswith('x-amz'):
320 headers_to_sign[name] = value
321 return headers_to_sign
322
323 def host_header(self, host, http_request):
324 port = http_request.port
325 secure = http_request.protocol == 'https'
326 if ((port == 80 and not secure) or (port == 443 and secure)):
327 return host
328 return '%s:%s' % (host, port)
329
330 def query_string(self, http_request):
331 parameter_names = sorted(http_request.params.keys())
332 pairs = []
333 for pname in parameter_names:
334 pval = boto.utils.get_utf8_value(http_request.params[pname])
335 pairs.append(urllib.parse.quote(pname, safe='') + '=' +
336 urllib.parse.quote(pval, safe='-_~'))
337 return '&'.join(pairs)
338
339 def canonical_query_string(self, http_request):
340 # POST requests pass parameters in through the
341 # http_request.body field.
342 if http_request.method == 'POST':
343 return ""
344 l = []
345 for param in sorted(http_request.params):
346 value = boto.utils.get_utf8_value(http_request.params[param])
347 l.append('%s=%s' % (urllib.parse.quote(param, safe='-_.~'),
348 urllib.parse.quote(value.decode('utf-8'), safe='-_.~')))
349 return '&'.join(l)
350
351 def canonical_headers(self, headers_to_sign):
352 """
353 Return the headers that need to be included in the StringToSign
354 in their canonical form by converting all header keys to lower
355 case, sorting them in alphabetical order and then joining
356 them into a string, separated by newlines.
357 """
358 canonical = []
359
360 for header in headers_to_sign:
361 c_name = header.lower().strip()
362 raw_value = headers_to_sign[header]
363 if '"' in raw_value:
364 c_value = raw_value.strip()
365 else:
366 c_value = ' '.join(raw_value.strip().split())
367 canonical.append('%s:%s' % (c_name, c_value))
368 return '\n'.join(sorted(canonical))
369
370 def signed_headers(self, headers_to_sign):
371 l = ['%s' % n.lower().strip() for n in headers_to_sign]
372 l = sorted(l)
373 return ';'.join(l)
374
375 def canonical_uri(self, http_request):
376 path = http_request.auth_path
377 # Normalize the path
378         # on Windows normpath('/') will be '\\' so we change it back to '/'
379 normalized = posixpath.normpath(path).replace('\\','/')
380 # Then urlencode whatever's left.
381 encoded = urllib.parse.quote(normalized)
382 if len(path) > 1 and path.endswith('/'):
383 encoded += '/'
384 return encoded
385
386 def payload(self, http_request):
387 body = http_request.body
388 # If the body is a file like object, we can use
389 # boto.utils.compute_hash, which will avoid reading
390 # the entire body into memory.
391 if hasattr(body, 'seek') and hasattr(body, 'read'):
392 return boto.utils.compute_hash(body, hash_algorithm=sha256)[0]
393 elif not isinstance(body, bytes):
394 body = body.encode('utf-8')
395 return sha256(body).hexdigest()
396
397 def canonical_request(self, http_request):
398 cr = [http_request.method.upper()]
399 cr.append(self.canonical_uri(http_request))
400 cr.append(self.canonical_query_string(http_request))
401 headers_to_sign = self.headers_to_sign(http_request)
402 cr.append(self.canonical_headers(headers_to_sign) + '\n')
403 cr.append(self.signed_headers(headers_to_sign))
404 cr.append(self.payload(http_request))
405 return '\n'.join(cr)
406
407 def scope(self, http_request):
408 scope = [self._provider.access_key]
409 scope.append(http_request.timestamp)
410 scope.append(http_request.region_name)
411 scope.append(http_request.service_name)
412 scope.append('aws4_request')
413 return '/'.join(scope)
414
415 def split_host_parts(self, host):
416 return host.split('.')
417
418 def determine_region_name(self, host):
419 parts = self.split_host_parts(host)
420 if self.region_name is not None:
421 region_name = self.region_name
422 elif len(parts) > 1:
423 if parts[1] == 'us-gov':
424 region_name = 'us-gov-west-1'
425 else:
426 if len(parts) == 3:
427 region_name = 'us-east-1'
428 else:
429 region_name = parts[1]
430 else:
431 region_name = parts[0]
432
433 return region_name
434
435 def determine_service_name(self, host):
436 parts = self.split_host_parts(host)
437 if self.service_name is not None:
438 service_name = self.service_name
439 else:
440 service_name = parts[0]
441 return service_name
442
443 def credential_scope(self, http_request):
444 scope = []
445 http_request.timestamp = http_request.headers['X-Amz-Date'][0:8]
446 scope.append(http_request.timestamp)
447 # The service_name and region_name either come from:
448 # * The service_name/region_name attrs or (if these values are None)
449 # * parsed from the endpoint <service>.<region>.amazonaws.com.
450 region_name = self.determine_region_name(http_request.host)
451 service_name = self.determine_service_name(http_request.host)
452 http_request.service_name = service_name
453 http_request.region_name = region_name
454
455 scope.append(http_request.region_name)
456 scope.append(http_request.service_name)
457 scope.append('aws4_request')
458 return '/'.join(scope)
459
460 def string_to_sign(self, http_request, canonical_request):
461 """
462 Return the canonical StringToSign as well as a dict
463 containing the original version of all headers that
464 were included in the StringToSign.
465 """
466 sts = ['AWS4-HMAC-SHA256']
467 sts.append(http_request.headers['X-Amz-Date'])
468 sts.append(self.credential_scope(http_request))
469 sts.append(sha256(canonical_request.encode('utf-8')).hexdigest())
470 return '\n'.join(sts)
471
472 def signature(self, http_request, string_to_sign):
473 key = self._provider.secret_key
474 k_date = self._sign(('AWS4' + key).encode('utf-8'),
475 http_request.timestamp)
476 k_region = self._sign(k_date, http_request.region_name)
477 k_service = self._sign(k_region, http_request.service_name)
478 k_signing = self._sign(k_service, 'aws4_request')
479 return self._sign(k_signing, string_to_sign, hex=True)
480
481 def add_auth(self, req, **kwargs):
482 """
483 Add AWS4 authentication to a request.
484
485 :type req: :class`boto.connection.HTTPRequest`
486 :param req: The HTTPRequest object.
487 """
488 # This could be a retry. Make sure the previous
489 # authorization header is removed first.
490 if 'X-Amzn-Authorization' in req.headers:
491 del req.headers['X-Amzn-Authorization']
492 now = datetime.datetime.utcnow()
493 req.headers['X-Amz-Date'] = now.strftime('%Y%m%dT%H%M%SZ')
494 if self._provider.security_token:
495 req.headers['X-Amz-Security-Token'] = self._provider.security_token
496 qs = self.query_string(req)
497 if qs and req.method == 'POST':
498 # Stash request parameters into post body
499 # before we generate the signature.
500 req.body = qs
501 req.headers['Content-Type'] = 'application/x-www-form-urlencoded; charset=UTF-8'
502 req.headers['Content-Length'] = str(len(req.body))
503 else:
504 # Safe to modify req.path here since
505 # the signature will use req.auth_path.
506 req.path = req.path.split('?')[0]
507
508 if qs:
509 # Don't insert the '?' unless there's actually a query string
510 req.path = req.path + '?' + qs
511 canonical_request = self.canonical_request(req)
512 boto.log.debug('CanonicalRequest:\n%s' % canonical_request)
513 string_to_sign = self.string_to_sign(req, canonical_request)
514 boto.log.debug('StringToSign:\n%s' % string_to_sign)
515 signature = self.signature(req, string_to_sign)
516 boto.log.debug('Signature:\n%s' % signature)
517 headers_to_sign = self.headers_to_sign(req)
518 l = ['AWS4-HMAC-SHA256 Credential=%s' % self.scope(req)]
519 l.append('SignedHeaders=%s' % self.signed_headers(headers_to_sign))
520 l.append('Signature=%s' % signature)
521 req.headers['Authorization'] = ','.join(l)
522
523
524 class S3HmacAuthV4Handler(HmacAuthV4Handler, AuthHandler):
525 """
526 Implements a variant of Version 4 HMAC authorization specific to S3.
527 """
528 capability = ['hmac-v4-s3']
529
530 def __init__(self, *args, **kwargs):
531 super(S3HmacAuthV4Handler, self).__init__(*args, **kwargs)
532
533 if self.region_name:
534 self.region_name = self.clean_region_name(self.region_name)
535
536 def clean_region_name(self, region_name):
537 if region_name.startswith('s3-'):
538 return region_name[3:]
539
540 return region_name
541
542 def canonical_uri(self, http_request):
543 # S3 does **NOT** do path normalization that SigV4 typically does.
544 # Urlencode the path, **NOT** ``auth_path`` (because vhosting).
545 path = urllib.parse.urlparse(http_request.path)
546 # Because some quoting may have already been applied, let's back it out.
547 unquoted = urllib.parse.unquote(path.path)
548 # Requote, this time addressing all characters.
549 encoded = urllib.parse.quote(unquoted)
550 return encoded
551
552 def host_header(self, host, http_request):
553 port = http_request.port
554 secure = http_request.protocol == 'https'
555 if ((port == 80 and not secure) or (port == 443 and secure)):
556 return http_request.host
557 return '%s:%s' % (http_request.host, port)
558
559 def headers_to_sign(self, http_request):
560 """
561 Select the headers from the request that need to be included
562 in the StringToSign.
563 """
564 host_header_value = self.host_header(self.host, http_request)
565 headers_to_sign = {'Host': host_header_value}
566 for name, value in http_request.headers.items():
567 lname = name.lower()
568 # Hooray for the only difference! The main SigV4 signer only does
569 # ``Host`` + ``x-amz-*``. But S3 wants pretty much everything
570 # signed, except for authorization itself.
571 if not lname in ['authorization']:
572 headers_to_sign[name] = value
573 return headers_to_sign
574
575 def determine_region_name(self, host):
576 # S3's different format(s) of representing region/service from the
577 # rest of AWS makes this hurt too.
578 #
579 # Possible domain formats:
580 # - s3.amazonaws.com (Classic)
581 # - s3-us-west-2.amazonaws.com (Specific region)
582 # - bukkit.s3.amazonaws.com (Vhosted Classic)
583 # - bukkit.s3-ap-northeast-1.amazonaws.com (Vhosted specific region)
584 # - s3.cn-north-1.amazonaws.com.cn - (Bejing region)
585 # - bukkit.s3.cn-north-1.amazonaws.com.cn - (Vhosted Bejing region)
586 parts = self.split_host_parts(host)
587
588 if self.region_name is not None:
589 region_name = self.region_name
590 else:
591 # Classic URLs - s3-us-west-2.amazonaws.com
592 if len(parts) == 3:
593 region_name = self.clean_region_name(parts[0])
594
595 # Special-case for Classic.
596 if region_name == 's3':
597 region_name = 'us-east-1'
598 else:
599 # Iterate over the parts in reverse order.
600 for offset, part in enumerate(reversed(parts)):
601 part = part.lower()
602
603 # Look for the first thing starting with 's3'.
604 # Until there's a ``.s3`` TLD, we should be OK. :P
605 if part == 's3':
606 # If it's by itself, the region is the previous part.
607 region_name = parts[-offset]
608
609 # Unless it's Vhosted classic
610 if region_name == 'amazonaws':
611 region_name = 'us-east-1'
612
613 break
614 elif part.startswith('s3-'):
615 region_name = self.clean_region_name(part)
616 break
617
618 return region_name
619
620 def determine_service_name(self, host):
621 # Should this signing mechanism ever be used for anything else, this
622 # will fail. Consider utilizing the logic from the parent class should
623 # you find yourself here.
624 return 's3'
625
626 def mangle_path_and_params(self, req):
627 """
628 Returns a copy of the request object with fixed ``auth_path/params``
629 attributes from the original.
630 """
631 modified_req = copy.copy(req)
632
633 # Unlike the most other services, in S3, ``req.params`` isn't the only
634 # source of query string parameters.
635 # Because of the ``query_args``, we may already have a query string
636 # **ON** the ``path/auth_path``.
637 # Rip them apart, so the ``auth_path/params`` can be signed
638 # appropriately.
639 parsed_path = urllib.parse.urlparse(modified_req.auth_path)
640 modified_req.auth_path = parsed_path.path
641
642 if modified_req.params is None:
643 modified_req.params = {}
644
645 raw_qs = parsed_path.query
646 existing_qs = urllib.parse.parse_qs(
647 raw_qs,
648 keep_blank_values=True
649 )
650
651 # ``parse_qs`` will return lists. Don't do that unless there's a real,
652 # live list provided.
653 for key, value in existing_qs.items():
654 if isinstance(value, (list, tuple)):
655 if len(value) == 1:
656 existing_qs[key] = value[0]
657
658 modified_req.params.update(existing_qs)
659 return modified_req
660
661 def payload(self, http_request):
662 if http_request.headers.get('x-amz-content-sha256'):
663 return http_request.headers['x-amz-content-sha256']
664
665 return super(S3HmacAuthV4Handler, self).payload(http_request)
666
667 def add_auth(self, req, **kwargs):
668 if not 'x-amz-content-sha256' in req.headers:
669 if '_sha256' in req.headers:
670 req.headers['x-amz-content-sha256'] = req.headers.pop('_sha256')
671 else:
672 req.headers['x-amz-content-sha256'] = self.payload(req)
673
674 req = self.mangle_path_and_params(req)
675 return super(S3HmacAuthV4Handler, self).add_auth(req, **kwargs)
676
677 def presign(self, req, expires, iso_date=None):
678 """
679 Presign a request using SigV4 query params. Takes in an HTTP request
680 and an expiration time in seconds and returns a URL.
681
682 http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-query-string-auth.html
683 """
684 if iso_date is None:
685 iso_date = datetime.datetime.utcnow().strftime('%Y%m%dT%H%M%SZ')
686
687 region = self.determine_region_name(req.host)
688 service = self.determine_service_name(req.host)
689
690 params = {
691 'X-Amz-Algorithm': 'AWS4-HMAC-SHA256',
692 'X-Amz-Credential': '%s/%s/%s/%s/aws4_request' % (
693 self._provider.access_key,
694 iso_date[:8],
695 region,
696 service
697 ),
698 'X-Amz-Date': iso_date,
699 'X-Amz-Expires': expires,
700 'X-Amz-SignedHeaders': 'host'
701 }
702
703 if self._provider.security_token:
704 params['X-Amz-Security-Token'] = self._provider.security_token
705
706 req.params.update(params)
707
708 cr = self.canonical_request(req)
709
710 # We need to replace the payload SHA with a constant
711 cr = '\n'.join(cr.split('\n')[:-1]) + '\nUNSIGNED-PAYLOAD'
712
713 # Date header is expected for string_to_sign, but unused otherwise
714 req.headers['X-Amz-Date'] = iso_date
715
716 sts = self.string_to_sign(req, cr)
717 signature = self.signature(req, sts)
718
719 # Add signature to params now that we have it
720 req.params['X-Amz-Signature'] = signature
721
722 return 'https://%s%s?%s' % (req.host, req.path,
723 urllib.parse.urlencode(req.params))
724
725
726 class QueryAuthHandler(AuthHandler):
727 """
728 Provides pure query construction (no actual signing).
729
730 Mostly useful for STS' ``assume_role_with_web_identity``.
731
732 Does **NOT** escape query string values!
733 """
734
735 capability = ['pure-query']
736
737 def _escape_value(self, value):
738 # Would normally be ``return urllib.parse.quote(value)``.
739 return value
740
741 def _build_query_string(self, params):
742 keys = list(params.keys())
743 keys.sort(key=lambda x: x.lower())
744 pairs = []
745 for key in keys:
746 val = boto.utils.get_utf8_value(params[key])
747 pairs.append(key + '=' + self._escape_value(val.decode('utf-8')))
748 return '&'.join(pairs)
749
750 def add_auth(self, http_request, **kwargs):
751 headers = http_request.headers
752 params = http_request.params
753 qs = self._build_query_string(
754 http_request.params
755 )
756 boto.log.debug('query_string: %s' % qs)
757 headers['Content-Type'] = 'application/json; charset=UTF-8'
758 http_request.body = ''
759 # if this is a retried request, the qs from the previous try will
760 # already be there, we need to get rid of that and rebuild it
761 http_request.path = http_request.path.split('?')[0]
762 http_request.path = http_request.path + '?' + qs
763
764
765 class QuerySignatureHelper(HmacKeys):
766 """
767 Helper for Query signature based Auth handler.
768
769     Concrete subclasses need to implement the _calc_signature method.
770 """
771
772 def add_auth(self, http_request, **kwargs):
773 headers = http_request.headers
774 params = http_request.params
775 params['AWSAccessKeyId'] = self._provider.access_key
776 params['SignatureVersion'] = self.SignatureVersion
777 params['Timestamp'] = boto.utils.get_ts()
778 qs, signature = self._calc_signature(
779 http_request.params, http_request.method,
780 http_request.auth_path, http_request.host)
781 boto.log.debug('query_string: %s Signature: %s' % (qs, signature))
782 if http_request.method == 'POST':
783 headers['Content-Type'] = 'application/x-www-form-urlencoded; charset=UTF-8'
784 http_request.body = qs + '&Signature=' + urllib.parse.quote_plus(signature)
785 http_request.headers['Content-Length'] = str(len(http_request.body))
786 else:
787 http_request.body = ''
788 # if this is a retried request, the qs from the previous try will
789 # already be there, we need to get rid of that and rebuild it
790 http_request.path = http_request.path.split('?')[0]
791 http_request.path = (http_request.path + '?' + qs +
792 '&Signature=' + urllib.parse.quote_plus(signature))
793
794
795 class QuerySignatureV0AuthHandler(QuerySignatureHelper, AuthHandler):
796 """Provides Signature V0 Signing"""
797
798 SignatureVersion = 0
799 capability = ['sign-v0']
800
801 def _calc_signature(self, params, *args):
802 boto.log.debug('using _calc_signature_0')
803 hmac = self._get_hmac()
804 s = params['Action'] + params['Timestamp']
805 hmac.update(s.encode('utf-8'))
806 keys = params.keys()
807 keys.sort(cmp=lambda x, y: cmp(x.lower(), y.lower()))
808 pairs = []
809 for key in keys:
810 val = boto.utils.get_utf8_value(params[key])
811 pairs.append(key + '=' + urllib.parse.quote(val))
812 qs = '&'.join(pairs)
813 return (qs, base64.b64encode(hmac.digest()))
814
815
816 class QuerySignatureV1AuthHandler(QuerySignatureHelper, AuthHandler):
817 """
818 Provides Query Signature V1 Authentication.
819 """
820
821 SignatureVersion = 1
822 capability = ['sign-v1', 'mturk']
823
824 def __init__(self, *args, **kw):
825 QuerySignatureHelper.__init__(self, *args, **kw)
826 AuthHandler.__init__(self, *args, **kw)
827 self._hmac_256 = None
828
829 def _calc_signature(self, params, *args):
830 boto.log.debug('using _calc_signature_1')
831 hmac = self._get_hmac()
832 keys = params.keys()
833 keys.sort(cmp=lambda x, y: cmp(x.lower(), y.lower()))
834 pairs = []
835 for key in keys:
836 hmac.update(key.encode('utf-8'))
837 val = boto.utils.get_utf8_value(params[key])
838 hmac.update(val)
839 pairs.append(key + '=' + urllib.parse.quote(val))
840 qs = '&'.join(pairs)
841 return (qs, base64.b64encode(hmac.digest()))
842
843
844 class QuerySignatureV2AuthHandler(QuerySignatureHelper, AuthHandler):
845 """Provides Query Signature V2 Authentication."""
846
847 SignatureVersion = 2
848 capability = ['sign-v2', 'ec2', 'ec2', 'emr', 'fps', 'ecs',
849 'sdb', 'iam', 'rds', 'sns', 'sqs', 'cloudformation']
850
851 def _calc_signature(self, params, verb, path, server_name):
852 boto.log.debug('using _calc_signature_2')
853 string_to_sign = '%s\n%s\n%s\n' % (verb, server_name.lower(), path)
854 hmac = self._get_hmac()
855 params['SignatureMethod'] = self.algorithm()
856 if self._provider.security_token:
857 params['SecurityToken'] = self._provider.security_token
858 keys = sorted(params.keys())
859 pairs = []
860 for key in keys:
861 val = boto.utils.get_utf8_value(params[key])
862 pairs.append(urllib.parse.quote(key, safe='') + '=' +
863 urllib.parse.quote(val, safe='-_~'))
864 qs = '&'.join(pairs)
865 boto.log.debug('query string: %s' % qs)
866 string_to_sign += qs
867 boto.log.debug('string_to_sign: %s' % string_to_sign)
868 hmac.update(string_to_sign.encode('utf-8'))
869 b64 = base64.b64encode(hmac.digest())
870 boto.log.debug('len(b64)=%d' % len(b64))
871 boto.log.debug('base64 encoded digest: %s' % b64)
872 return (qs, b64)
873
874
875 class POSTPathQSV2AuthHandler(QuerySignatureV2AuthHandler, AuthHandler):
876 """
877 Query Signature V2 Authentication relocating signed query
878 into the path and allowing POST requests with Content-Types.
879 """
880
881 capability = ['mws']
882
883 def add_auth(self, req, **kwargs):
884 req.params['AWSAccessKeyId'] = self._provider.access_key
885 req.params['SignatureVersion'] = self.SignatureVersion
886 req.params['Timestamp'] = boto.utils.get_ts()
887 qs, signature = self._calc_signature(req.params, req.method,
888 req.auth_path, req.host)
889 boto.log.debug('query_string: %s Signature: %s' % (qs, signature))
890 if req.method == 'POST':
891 req.headers['Content-Length'] = str(len(req.body))
892 req.headers['Content-Type'] = req.headers.get('Content-Type',
893 'text/plain')
894 else:
895 req.body = ''
896 # if this is a retried req, the qs from the previous try will
897 # already be there, we need to get rid of that and rebuild it
898 req.path = req.path.split('?')[0]
899 req.path = (req.path + '?' + qs +
900 '&Signature=' + urllib.parse.quote_plus(signature))
901
902
903 def get_auth_handler(host, config, provider, requested_capability=None):
904 """Finds an AuthHandler that is ready to authenticate.
905
906 Lists through all the registered AuthHandlers to find one that is willing
907 to handle for the requested capabilities, config and provider.
908
909 :type host: string
910 :param host: The name of the host
911
912 :type config:
913 :param config:
914
915 :type provider:
916 :param provider:
917
918 Returns:
919 An implementation of AuthHandler.
920
921 Raises:
922 boto.exception.NoAuthHandlerFound
923 """
924 ready_handlers = []
925 auth_handlers = boto.plugin.get_plugin(AuthHandler, requested_capability)
926 total_handlers = len(auth_handlers)
927 for handler in auth_handlers:
928 try:
929 ready_handlers.append(handler(host, config, provider))
930 except boto.auth_handler.NotReadyToAuthenticate:
931 pass
932
933 if not ready_handlers:
934 checked_handlers = auth_handlers
935 names = [handler.__name__ for handler in checked_handlers]
936 raise boto.exception.NoAuthHandlerFound(
937 'No handler was ready to authenticate. %d handlers were checked.'
938 ' %s '
939 'Check your credentials' % (len(names), str(names)))
940
941 # We select the last ready auth handler that was loaded, to allow users to
942 # customize how auth works in environments where there are shared boto
943 # config files (e.g., /etc/boto.cfg and ~/.boto): The more general,
944 # system-wide shared configs should be loaded first, and the user's
945 # customizations loaded last. That way, for example, the system-wide
946 # config might include a plugin_directory that includes a service account
947 # auth plugin shared by all users of a Google Compute Engine instance
948 # (allowing sharing of non-user data between various services), and the
949 # user could override this with a .boto config that includes user-specific
950 # credentials (for access to user data).
951 return ready_handlers[-1]
952
953
954 def detect_potential_sigv4(func):
955 def _wrapper(self):
956 if os.environ.get('EC2_USE_SIGV4', False):
957 return ['hmac-v4']
958
959 if boto.config.get('ec2', 'use-sigv4', False):
960 return ['hmac-v4']
961
962 if hasattr(self, 'region'):
963 # If you're making changes here, you should also check
964 # ``boto/iam/connection.py``, as several things there are also
965 # endpoint-related.
966 if getattr(self.region, 'endpoint', ''):
967 if '.cn-' in self.region.endpoint:
968 return ['hmac-v4']
969
970 return func(self)
971 return _wrapper
972
973
974 def detect_potential_s3sigv4(func):
975 def _wrapper(self):
976 if os.environ.get('S3_USE_SIGV4', False):
977 return ['hmac-v4-s3']
978
979 if boto.config.get('s3', 'use-sigv4', False):
980 return ['hmac-v4-s3']
981
982 if hasattr(self, 'host'):
983 # If you're making changes here, you should also check
984 # ``boto/iam/connection.py``, as several things there are also
985 # endpoint-related.
986 if '.cn-' in self.host:
987 return ['hmac-v4-s3']
988
989 return func(self)
990 return _wrapper
991
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/boto/auth.py b/boto/auth.py
--- a/boto/auth.py
+++ b/boto/auth.py
@@ -317,6 +317,8 @@
for name, value in http_request.headers.items():
lname = name.lower()
if lname.startswith('x-amz'):
+ if isinstance(value, bytes):
+ value = value.decode('utf-8')
headers_to_sign[name] = value
return headers_to_sign
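The added `isinstance` check normalises header values before they reach `canonical_headers`, so the membership test and join downstream only ever see `str`. A minimal sketch of the same normalisation step (hypothetical header names and values, not taken from a real Glacier request):

```python
# Hypothetical header dict: one bytes value (as hexlify produces on Py3),
# one ordinary str value.
headers = {
    "x-amz-sha256-tree-hash": b"9f2cdeadbeef",
    "x-amz-date": "20140701T000000Z",
}

headers_to_sign = {}
for name, value in headers.items():
    if isinstance(value, bytes):
        value = value.decode("utf-8")  # the check the patch adds
    headers_to_sign[name] = value

# Downstream str-only operations now succeed.
canonical = "\n".join(
    "%s:%s" % (n.lower().strip(), headers_to_sign[n].strip())
    for n in sorted(headers_to_sign)
)
print(canonical)
```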
| {"golden_diff": "diff --git a/boto/auth.py b/boto/auth.py\n--- a/boto/auth.py\n+++ b/boto/auth.py\n@@ -317,6 +317,8 @@\n for name, value in http_request.headers.items():\n lname = name.lower()\n if lname.startswith('x-amz'):\n+ if isinstance(value, bytes):\n+ value = value.decode('utf-8')\n headers_to_sign[name] = value\n return headers_to_sign\n", "issue": "glacier: tree_hash returned as bytes by compute_hashes_from_fileobj \nWhen uploading a file to glacier, `compute_hashes_from_fileobj` uses `bytes_to_hex` from `glacier.utils`. `bytes_to_hex`, in turn, uses `binascii.hexlify()`. In Python 3 (I'm running v3.4), this now returns a `bytes` object, not a `str`.\n\nThis is eventually causing a `TypeError: Type str doesn't support the buffer API` exception in `auth.py`'s `canonical_headers` function since the hash value is used as a request header and is never converted from `bytes` to `str` but is operated on as if it were a `str`.\n\n", "before_files": [{"content": "# Copyright 2010 Google Inc.\n# Copyright (c) 2011 Mitch Garnaat http://garnaat.org/\n# Copyright (c) 2011, Eucalyptus Systems, Inc.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish, dis-\n# tribute, sublicense, and/or sell copies of the Software, and to permit\n# persons to whom the Software is furnished to do so, subject to the fol-\n# lowing conditions:\n#\n# The above copyright notice and this permission notice shall be included\n# in all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\n# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-\n# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT\n# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,\n# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n# IN THE SOFTWARE.\n\n\n\"\"\"\nHandles authentication required to AWS and GS\n\"\"\"\n\nimport base64\nimport boto\nimport boto.auth_handler\nimport boto.exception\nimport boto.plugin\nimport boto.utils\nimport copy\nimport datetime\nfrom email.utils import formatdate\nimport hmac\nimport os\nimport sys\nimport time\nimport posixpath\n\nfrom boto.compat import urllib, encodebytes\nfrom boto.auth_handler import AuthHandler\nfrom boto.exception import BotoClientError\n\ntry:\n from hashlib import sha1 as sha\n from hashlib import sha256 as sha256\nexcept ImportError:\n import sha\n sha256 = None\n\n\nclass HmacKeys(object):\n \"\"\"Key based Auth handler helper.\"\"\"\n\n def __init__(self, host, config, provider):\n if provider.access_key is None or provider.secret_key is None:\n raise boto.auth_handler.NotReadyToAuthenticate()\n self.host = host\n self.update_provider(provider)\n\n def update_provider(self, provider):\n self._provider = provider\n self._hmac = hmac.new(self._provider.secret_key.encode('utf-8'),\n digestmod=sha)\n if sha256:\n self._hmac_256 = hmac.new(self._provider.secret_key.encode('utf-8'),\n digestmod=sha256)\n else:\n self._hmac_256 = None\n\n def algorithm(self):\n if self._hmac_256:\n return 'HmacSHA256'\n else:\n return 'HmacSHA1'\n\n def _get_hmac(self):\n if self._hmac_256:\n digestmod = sha256\n else:\n digestmod = sha\n return hmac.new(self._provider.secret_key.encode('utf-8'),\n digestmod=digestmod)\n\n def sign_string(self, string_to_sign):\n new_hmac = self._get_hmac()\n new_hmac.update(string_to_sign.encode('utf-8'))\n return encodebytes(new_hmac.digest()).decode('utf-8').strip()\n\n def __getstate__(self):\n pickled_dict = copy.copy(self.__dict__)\n del pickled_dict['_hmac']\n del pickled_dict['_hmac_256']\n return pickled_dict\n\n def __setstate__(self, dct):\n self.__dict__ = dct\n self.update_provider(self._provider)\n\n\nclass AnonAuthHandler(AuthHandler, HmacKeys):\n \"\"\"\n Implements Anonymous requests.\n \"\"\"\n\n capability = ['anon']\n\n def __init__(self, host, config, provider):\n super(AnonAuthHandler, self).__init__(host, config, provider)\n\n def add_auth(self, http_request, **kwargs):\n pass\n\n\nclass HmacAuthV1Handler(AuthHandler, HmacKeys):\n \"\"\" Implements the HMAC request signing used by S3 and GS.\"\"\"\n\n capability = ['hmac-v1', 's3']\n\n def __init__(self, host, config, provider):\n AuthHandler.__init__(self, host, config, provider)\n HmacKeys.__init__(self, host, config, provider)\n self._hmac_256 = None\n\n def update_provider(self, provider):\n super(HmacAuthV1Handler, self).update_provider(provider)\n self._hmac_256 = None\n\n def add_auth(self, http_request, **kwargs):\n headers = http_request.headers\n method = http_request.method\n auth_path = http_request.auth_path\n if 'Date' not in headers:\n headers['Date'] = formatdate(usegmt=True)\n\n if self._provider.security_token:\n key = self._provider.security_token_header\n headers[key] = self._provider.security_token\n string_to_sign = boto.utils.canonical_string(method, auth_path,\n headers, None,\n self._provider)\n boto.log.debug('StringToSign:\\n%s' % string_to_sign)\n b64_hmac = self.sign_string(string_to_sign)\n auth_hdr = self._provider.auth_header\n auth = (\"%s %s:%s\" % (auth_hdr, self._provider.access_key, b64_hmac))\n 
boto.log.debug('Signature:\\n%s' % auth)\n headers['Authorization'] = auth\n\n\nclass HmacAuthV2Handler(AuthHandler, HmacKeys):\n \"\"\"\n Implements the simplified HMAC authorization used by CloudFront.\n \"\"\"\n capability = ['hmac-v2', 'cloudfront']\n\n def __init__(self, host, config, provider):\n AuthHandler.__init__(self, host, config, provider)\n HmacKeys.__init__(self, host, config, provider)\n self._hmac_256 = None\n\n def update_provider(self, provider):\n super(HmacAuthV2Handler, self).update_provider(provider)\n self._hmac_256 = None\n\n def add_auth(self, http_request, **kwargs):\n headers = http_request.headers\n if 'Date' not in headers:\n headers['Date'] = formatdate(usegmt=True)\n if self._provider.security_token:\n key = self._provider.security_token_header\n headers[key] = self._provider.security_token\n\n b64_hmac = self.sign_string(headers['Date'])\n auth_hdr = self._provider.auth_header\n headers['Authorization'] = (\"%s %s:%s\" %\n (auth_hdr,\n self._provider.access_key, b64_hmac))\n\n\nclass HmacAuthV3Handler(AuthHandler, HmacKeys):\n \"\"\"Implements the new Version 3 HMAC authorization used by Route53.\"\"\"\n\n capability = ['hmac-v3', 'route53', 'ses']\n\n def __init__(self, host, config, provider):\n AuthHandler.__init__(self, host, config, provider)\n HmacKeys.__init__(self, host, config, provider)\n\n def add_auth(self, http_request, **kwargs):\n headers = http_request.headers\n if 'Date' not in headers:\n headers['Date'] = formatdate(usegmt=True)\n\n if self._provider.security_token:\n key = self._provider.security_token_header\n headers[key] = self._provider.security_token\n\n b64_hmac = self.sign_string(headers['Date'])\n s = \"AWS3-HTTPS AWSAccessKeyId=%s,\" % self._provider.access_key\n s += \"Algorithm=%s,Signature=%s\" % (self.algorithm(), b64_hmac)\n headers['X-Amzn-Authorization'] = s\n\n\nclass HmacAuthV3HTTPHandler(AuthHandler, HmacKeys):\n \"\"\"\n Implements the new Version 3 HMAC authorization used by DynamoDB.\n \"\"\"\n\n capability = ['hmac-v3-http']\n\n def __init__(self, host, config, provider):\n AuthHandler.__init__(self, host, config, provider)\n HmacKeys.__init__(self, host, config, provider)\n\n def headers_to_sign(self, http_request):\n \"\"\"\n Select the headers from the request that need to be included\n in the StringToSign.\n \"\"\"\n headers_to_sign = {'Host': self.host}\n for name, value in http_request.headers.items():\n lname = name.lower()\n if lname.startswith('x-amz'):\n headers_to_sign[name] = value\n return headers_to_sign\n\n def canonical_headers(self, headers_to_sign):\n \"\"\"\n Return the headers that need to be included in the StringToSign\n in their canonical form by converting all header keys to lower\n case, sorting them in alphabetical order and then joining\n them into a string, separated by newlines.\n \"\"\"\n l = sorted(['%s:%s' % (n.lower().strip(),\n headers_to_sign[n].strip()) for n in headers_to_sign])\n return '\\n'.join(l)\n\n def string_to_sign(self, http_request):\n \"\"\"\n Return the canonical StringToSign as well as a dict\n containing the original version of all headers that\n were included in the StringToSign.\n \"\"\"\n headers_to_sign = self.headers_to_sign(http_request)\n canonical_headers = self.canonical_headers(headers_to_sign)\n string_to_sign = '\\n'.join([http_request.method,\n http_request.auth_path,\n '',\n canonical_headers,\n '',\n http_request.body])\n return string_to_sign, headers_to_sign\n\n def add_auth(self, req, **kwargs):\n \"\"\"\n Add AWS3 authentication to a request.\n\n 
:type req: :class`boto.connection.HTTPRequest`\n :param req: The HTTPRequest object.\n \"\"\"\n # This could be a retry. Make sure the previous\n # authorization header is removed first.\n if 'X-Amzn-Authorization' in req.headers:\n del req.headers['X-Amzn-Authorization']\n req.headers['X-Amz-Date'] = formatdate(usegmt=True)\n if self._provider.security_token:\n req.headers['X-Amz-Security-Token'] = self._provider.security_token\n string_to_sign, headers_to_sign = self.string_to_sign(req)\n boto.log.debug('StringToSign:\\n%s' % string_to_sign)\n hash_value = sha256(string_to_sign.encode('utf-8')).digest()\n b64_hmac = self.sign_string(hash_value)\n s = \"AWS3 AWSAccessKeyId=%s,\" % self._provider.access_key\n s += \"Algorithm=%s,\" % self.algorithm()\n s += \"SignedHeaders=%s,\" % ';'.join(headers_to_sign)\n s += \"Signature=%s\" % b64_hmac\n req.headers['X-Amzn-Authorization'] = s\n\n\nclass HmacAuthV4Handler(AuthHandler, HmacKeys):\n \"\"\"\n Implements the new Version 4 HMAC authorization.\n \"\"\"\n\n capability = ['hmac-v4']\n\n def __init__(self, host, config, provider,\n service_name=None, region_name=None):\n AuthHandler.__init__(self, host, config, provider)\n HmacKeys.__init__(self, host, config, provider)\n # You can set the service_name and region_name to override the\n # values which would otherwise come from the endpoint, e.g.\n # <service>.<region>.amazonaws.com.\n self.service_name = service_name\n self.region_name = region_name\n\n def _sign(self, key, msg, hex=False):\n if not isinstance(key, bytes):\n key = key.encode('utf-8')\n\n if hex:\n sig = hmac.new(key, msg.encode('utf-8'), sha256).hexdigest()\n else:\n sig = hmac.new(key, msg.encode('utf-8'), sha256).digest()\n return sig\n\n def headers_to_sign(self, http_request):\n \"\"\"\n Select the headers from the request that need to be included\n in the StringToSign.\n \"\"\"\n host_header_value = self.host_header(self.host, http_request)\n headers_to_sign = {'Host': host_header_value}\n for name, value in http_request.headers.items():\n lname = name.lower()\n if lname.startswith('x-amz'):\n headers_to_sign[name] = value\n return headers_to_sign\n\n def host_header(self, host, http_request):\n port = http_request.port\n secure = http_request.protocol == 'https'\n if ((port == 80 and not secure) or (port == 443 and secure)):\n return host\n return '%s:%s' % (host, port)\n\n def query_string(self, http_request):\n parameter_names = sorted(http_request.params.keys())\n pairs = []\n for pname in parameter_names:\n pval = boto.utils.get_utf8_value(http_request.params[pname])\n pairs.append(urllib.parse.quote(pname, safe='') + '=' +\n urllib.parse.quote(pval, safe='-_~'))\n return '&'.join(pairs)\n\n def canonical_query_string(self, http_request):\n # POST requests pass parameters in through the\n # http_request.body field.\n if http_request.method == 'POST':\n return \"\"\n l = []\n for param in sorted(http_request.params):\n value = boto.utils.get_utf8_value(http_request.params[param])\n l.append('%s=%s' % (urllib.parse.quote(param, safe='-_.~'),\n urllib.parse.quote(value.decode('utf-8'), safe='-_.~')))\n return '&'.join(l)\n\n def canonical_headers(self, headers_to_sign):\n \"\"\"\n Return the headers that need to be included in the StringToSign\n in their canonical form by converting all header keys to lower\n case, sorting them in alphabetical order and then joining\n them into a string, separated by newlines.\n \"\"\"\n canonical = []\n\n for header in headers_to_sign:\n c_name = header.lower().strip()\n raw_value = 
headers_to_sign[header]\n if '\"' in raw_value:\n c_value = raw_value.strip()\n else:\n c_value = ' '.join(raw_value.strip().split())\n canonical.append('%s:%s' % (c_name, c_value))\n return '\\n'.join(sorted(canonical))\n\n def signed_headers(self, headers_to_sign):\n l = ['%s' % n.lower().strip() for n in headers_to_sign]\n l = sorted(l)\n return ';'.join(l)\n\n def canonical_uri(self, http_request):\n path = http_request.auth_path\n # Normalize the path\n # in windows normpath('/') will be '\\\\' so we chane it back to '/'\n normalized = posixpath.normpath(path).replace('\\\\','/')\n # Then urlencode whatever's left.\n encoded = urllib.parse.quote(normalized)\n if len(path) > 1 and path.endswith('/'):\n encoded += '/'\n return encoded\n\n def payload(self, http_request):\n body = http_request.body\n # If the body is a file like object, we can use\n # boto.utils.compute_hash, which will avoid reading\n # the entire body into memory.\n if hasattr(body, 'seek') and hasattr(body, 'read'):\n return boto.utils.compute_hash(body, hash_algorithm=sha256)[0]\n elif not isinstance(body, bytes):\n body = body.encode('utf-8')\n return sha256(body).hexdigest()\n\n def canonical_request(self, http_request):\n cr = [http_request.method.upper()]\n cr.append(self.canonical_uri(http_request))\n cr.append(self.canonical_query_string(http_request))\n headers_to_sign = self.headers_to_sign(http_request)\n cr.append(self.canonical_headers(headers_to_sign) + '\\n')\n cr.append(self.signed_headers(headers_to_sign))\n cr.append(self.payload(http_request))\n return '\\n'.join(cr)\n\n def scope(self, http_request):\n scope = [self._provider.access_key]\n scope.append(http_request.timestamp)\n scope.append(http_request.region_name)\n scope.append(http_request.service_name)\n scope.append('aws4_request')\n return '/'.join(scope)\n\n def split_host_parts(self, host):\n return host.split('.')\n\n def determine_region_name(self, host):\n parts = self.split_host_parts(host)\n if self.region_name is not None:\n region_name = self.region_name\n elif len(parts) > 1:\n if parts[1] == 'us-gov':\n region_name = 'us-gov-west-1'\n else:\n if len(parts) == 3:\n region_name = 'us-east-1'\n else:\n region_name = parts[1]\n else:\n region_name = parts[0]\n\n return region_name\n\n def determine_service_name(self, host):\n parts = self.split_host_parts(host)\n if self.service_name is not None:\n service_name = self.service_name\n else:\n service_name = parts[0]\n return service_name\n\n def credential_scope(self, http_request):\n scope = []\n http_request.timestamp = http_request.headers['X-Amz-Date'][0:8]\n scope.append(http_request.timestamp)\n # The service_name and region_name either come from:\n # * The service_name/region_name attrs or (if these values are None)\n # * parsed from the endpoint <service>.<region>.amazonaws.com.\n region_name = self.determine_region_name(http_request.host)\n service_name = self.determine_service_name(http_request.host)\n http_request.service_name = service_name\n http_request.region_name = region_name\n\n scope.append(http_request.region_name)\n scope.append(http_request.service_name)\n scope.append('aws4_request')\n return '/'.join(scope)\n\n def string_to_sign(self, http_request, canonical_request):\n \"\"\"\n Return the canonical StringToSign as well as a dict\n containing the original version of all headers that\n were included in the StringToSign.\n \"\"\"\n sts = ['AWS4-HMAC-SHA256']\n sts.append(http_request.headers['X-Amz-Date'])\n sts.append(self.credential_scope(http_request))\n 
sts.append(sha256(canonical_request.encode('utf-8')).hexdigest())\n return '\\n'.join(sts)\n\n def signature(self, http_request, string_to_sign):\n key = self._provider.secret_key\n k_date = self._sign(('AWS4' + key).encode('utf-8'),\n http_request.timestamp)\n k_region = self._sign(k_date, http_request.region_name)\n k_service = self._sign(k_region, http_request.service_name)\n k_signing = self._sign(k_service, 'aws4_request')\n return self._sign(k_signing, string_to_sign, hex=True)\n\n def add_auth(self, req, **kwargs):\n \"\"\"\n Add AWS4 authentication to a request.\n\n :type req: :class`boto.connection.HTTPRequest`\n :param req: The HTTPRequest object.\n \"\"\"\n # This could be a retry. Make sure the previous\n # authorization header is removed first.\n if 'X-Amzn-Authorization' in req.headers:\n del req.headers['X-Amzn-Authorization']\n now = datetime.datetime.utcnow()\n req.headers['X-Amz-Date'] = now.strftime('%Y%m%dT%H%M%SZ')\n if self._provider.security_token:\n req.headers['X-Amz-Security-Token'] = self._provider.security_token\n qs = self.query_string(req)\n if qs and req.method == 'POST':\n # Stash request parameters into post body\n # before we generate the signature.\n req.body = qs\n req.headers['Content-Type'] = 'application/x-www-form-urlencoded; charset=UTF-8'\n req.headers['Content-Length'] = str(len(req.body))\n else:\n # Safe to modify req.path here since\n # the signature will use req.auth_path.\n req.path = req.path.split('?')[0]\n\n if qs:\n # Don't insert the '?' unless there's actually a query string\n req.path = req.path + '?' + qs\n canonical_request = self.canonical_request(req)\n boto.log.debug('CanonicalRequest:\\n%s' % canonical_request)\n string_to_sign = self.string_to_sign(req, canonical_request)\n boto.log.debug('StringToSign:\\n%s' % string_to_sign)\n signature = self.signature(req, string_to_sign)\n boto.log.debug('Signature:\\n%s' % signature)\n headers_to_sign = self.headers_to_sign(req)\n l = ['AWS4-HMAC-SHA256 Credential=%s' % self.scope(req)]\n l.append('SignedHeaders=%s' % self.signed_headers(headers_to_sign))\n l.append('Signature=%s' % signature)\n req.headers['Authorization'] = ','.join(l)\n\n\nclass S3HmacAuthV4Handler(HmacAuthV4Handler, AuthHandler):\n \"\"\"\n Implements a variant of Version 4 HMAC authorization specific to S3.\n \"\"\"\n capability = ['hmac-v4-s3']\n\n def __init__(self, *args, **kwargs):\n super(S3HmacAuthV4Handler, self).__init__(*args, **kwargs)\n\n if self.region_name:\n self.region_name = self.clean_region_name(self.region_name)\n\n def clean_region_name(self, region_name):\n if region_name.startswith('s3-'):\n return region_name[3:]\n\n return region_name\n\n def canonical_uri(self, http_request):\n # S3 does **NOT** do path normalization that SigV4 typically does.\n # Urlencode the path, **NOT** ``auth_path`` (because vhosting).\n path = urllib.parse.urlparse(http_request.path)\n # Because some quoting may have already been applied, let's back it out.\n unquoted = urllib.parse.unquote(path.path)\n # Requote, this time addressing all characters.\n encoded = urllib.parse.quote(unquoted)\n return encoded\n\n def host_header(self, host, http_request):\n port = http_request.port\n secure = http_request.protocol == 'https'\n if ((port == 80 and not secure) or (port == 443 and secure)):\n return http_request.host\n return '%s:%s' % (http_request.host, port)\n\n def headers_to_sign(self, http_request):\n \"\"\"\n Select the headers from the request that need to be included\n in the StringToSign.\n \"\"\"\n 
host_header_value = self.host_header(self.host, http_request)\n headers_to_sign = {'Host': host_header_value}\n for name, value in http_request.headers.items():\n lname = name.lower()\n # Hooray for the only difference! The main SigV4 signer only does\n # ``Host`` + ``x-amz-*``. But S3 wants pretty much everything\n # signed, except for authorization itself.\n if not lname in ['authorization']:\n headers_to_sign[name] = value\n return headers_to_sign\n\n def determine_region_name(self, host):\n # S3's different format(s) of representing region/service from the\n # rest of AWS makes this hurt too.\n #\n # Possible domain formats:\n # - s3.amazonaws.com (Classic)\n # - s3-us-west-2.amazonaws.com (Specific region)\n # - bukkit.s3.amazonaws.com (Vhosted Classic)\n # - bukkit.s3-ap-northeast-1.amazonaws.com (Vhosted specific region)\n # - s3.cn-north-1.amazonaws.com.cn - (Bejing region)\n # - bukkit.s3.cn-north-1.amazonaws.com.cn - (Vhosted Bejing region)\n parts = self.split_host_parts(host)\n\n if self.region_name is not None:\n region_name = self.region_name\n else:\n # Classic URLs - s3-us-west-2.amazonaws.com\n if len(parts) == 3:\n region_name = self.clean_region_name(parts[0])\n\n # Special-case for Classic.\n if region_name == 's3':\n region_name = 'us-east-1'\n else:\n # Iterate over the parts in reverse order.\n for offset, part in enumerate(reversed(parts)):\n part = part.lower()\n\n # Look for the first thing starting with 's3'.\n # Until there's a ``.s3`` TLD, we should be OK. :P\n if part == 's3':\n # If it's by itself, the region is the previous part.\n region_name = parts[-offset]\n\n # Unless it's Vhosted classic\n if region_name == 'amazonaws':\n region_name = 'us-east-1'\n\n break\n elif part.startswith('s3-'):\n region_name = self.clean_region_name(part)\n break\n\n return region_name\n\n def determine_service_name(self, host):\n # Should this signing mechanism ever be used for anything else, this\n # will fail. Consider utilizing the logic from the parent class should\n # you find yourself here.\n return 's3'\n\n def mangle_path_and_params(self, req):\n \"\"\"\n Returns a copy of the request object with fixed ``auth_path/params``\n attributes from the original.\n \"\"\"\n modified_req = copy.copy(req)\n\n # Unlike the most other services, in S3, ``req.params`` isn't the only\n # source of query string parameters.\n # Because of the ``query_args``, we may already have a query string\n # **ON** the ``path/auth_path``.\n # Rip them apart, so the ``auth_path/params`` can be signed\n # appropriately.\n parsed_path = urllib.parse.urlparse(modified_req.auth_path)\n modified_req.auth_path = parsed_path.path\n\n if modified_req.params is None:\n modified_req.params = {}\n\n raw_qs = parsed_path.query\n existing_qs = urllib.parse.parse_qs(\n raw_qs,\n keep_blank_values=True\n )\n\n # ``parse_qs`` will return lists. 
Don't do that unless there's a real,\n # live list provided.\n for key, value in existing_qs.items():\n if isinstance(value, (list, tuple)):\n if len(value) == 1:\n existing_qs[key] = value[0]\n\n modified_req.params.update(existing_qs)\n return modified_req\n\n def payload(self, http_request):\n if http_request.headers.get('x-amz-content-sha256'):\n return http_request.headers['x-amz-content-sha256']\n\n return super(S3HmacAuthV4Handler, self).payload(http_request)\n\n def add_auth(self, req, **kwargs):\n if not 'x-amz-content-sha256' in req.headers:\n if '_sha256' in req.headers:\n req.headers['x-amz-content-sha256'] = req.headers.pop('_sha256')\n else:\n req.headers['x-amz-content-sha256'] = self.payload(req)\n\n req = self.mangle_path_and_params(req)\n return super(S3HmacAuthV4Handler, self).add_auth(req, **kwargs)\n\n def presign(self, req, expires, iso_date=None):\n \"\"\"\n Presign a request using SigV4 query params. Takes in an HTTP request\n and an expiration time in seconds and returns a URL.\n\n http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-query-string-auth.html\n \"\"\"\n if iso_date is None:\n iso_date = datetime.datetime.utcnow().strftime('%Y%m%dT%H%M%SZ')\n\n region = self.determine_region_name(req.host)\n service = self.determine_service_name(req.host)\n\n params = {\n 'X-Amz-Algorithm': 'AWS4-HMAC-SHA256',\n 'X-Amz-Credential': '%s/%s/%s/%s/aws4_request' % (\n self._provider.access_key,\n iso_date[:8],\n region,\n service\n ),\n 'X-Amz-Date': iso_date,\n 'X-Amz-Expires': expires,\n 'X-Amz-SignedHeaders': 'host'\n }\n\n if self._provider.security_token:\n params['X-Amz-Security-Token'] = self._provider.security_token\n\n req.params.update(params)\n\n cr = self.canonical_request(req)\n\n # We need to replace the payload SHA with a constant\n cr = '\\n'.join(cr.split('\\n')[:-1]) + '\\nUNSIGNED-PAYLOAD'\n\n # Date header is expected for string_to_sign, but unused otherwise\n req.headers['X-Amz-Date'] = iso_date\n\n sts = self.string_to_sign(req, cr)\n signature = self.signature(req, sts)\n\n # Add signature to params now that we have it\n req.params['X-Amz-Signature'] = signature\n\n return 'https://%s%s?%s' % (req.host, req.path,\n urllib.parse.urlencode(req.params))\n\n\nclass QueryAuthHandler(AuthHandler):\n \"\"\"\n Provides pure query construction (no actual signing).\n\n Mostly useful for STS' ``assume_role_with_web_identity``.\n\n Does **NOT** escape query string values!\n \"\"\"\n\n capability = ['pure-query']\n\n def _escape_value(self, value):\n # Would normally be ``return urllib.parse.quote(value)``.\n return value\n\n def _build_query_string(self, params):\n keys = list(params.keys())\n keys.sort(key=lambda x: x.lower())\n pairs = []\n for key in keys:\n val = boto.utils.get_utf8_value(params[key])\n pairs.append(key + '=' + self._escape_value(val.decode('utf-8')))\n return '&'.join(pairs)\n\n def add_auth(self, http_request, **kwargs):\n headers = http_request.headers\n params = http_request.params\n qs = self._build_query_string(\n http_request.params\n )\n boto.log.debug('query_string: %s' % qs)\n headers['Content-Type'] = 'application/json; charset=UTF-8'\n http_request.body = ''\n # if this is a retried request, the qs from the previous try will\n # already be there, we need to get rid of that and rebuild it\n http_request.path = http_request.path.split('?')[0]\n http_request.path = http_request.path + '?' 
+ qs\n\n\nclass QuerySignatureHelper(HmacKeys):\n \"\"\"\n Helper for Query signature based Auth handler.\n\n Concrete sub class need to implement _calc_sigature method.\n \"\"\"\n\n def add_auth(self, http_request, **kwargs):\n headers = http_request.headers\n params = http_request.params\n params['AWSAccessKeyId'] = self._provider.access_key\n params['SignatureVersion'] = self.SignatureVersion\n params['Timestamp'] = boto.utils.get_ts()\n qs, signature = self._calc_signature(\n http_request.params, http_request.method,\n http_request.auth_path, http_request.host)\n boto.log.debug('query_string: %s Signature: %s' % (qs, signature))\n if http_request.method == 'POST':\n headers['Content-Type'] = 'application/x-www-form-urlencoded; charset=UTF-8'\n http_request.body = qs + '&Signature=' + urllib.parse.quote_plus(signature)\n http_request.headers['Content-Length'] = str(len(http_request.body))\n else:\n http_request.body = ''\n # if this is a retried request, the qs from the previous try will\n # already be there, we need to get rid of that and rebuild it\n http_request.path = http_request.path.split('?')[0]\n http_request.path = (http_request.path + '?' + qs +\n '&Signature=' + urllib.parse.quote_plus(signature))\n\n\nclass QuerySignatureV0AuthHandler(QuerySignatureHelper, AuthHandler):\n \"\"\"Provides Signature V0 Signing\"\"\"\n\n SignatureVersion = 0\n capability = ['sign-v0']\n\n def _calc_signature(self, params, *args):\n boto.log.debug('using _calc_signature_0')\n hmac = self._get_hmac()\n s = params['Action'] + params['Timestamp']\n hmac.update(s.encode('utf-8'))\n keys = params.keys()\n keys.sort(cmp=lambda x, y: cmp(x.lower(), y.lower()))\n pairs = []\n for key in keys:\n val = boto.utils.get_utf8_value(params[key])\n pairs.append(key + '=' + urllib.parse.quote(val))\n qs = '&'.join(pairs)\n return (qs, base64.b64encode(hmac.digest()))\n\n\nclass QuerySignatureV1AuthHandler(QuerySignatureHelper, AuthHandler):\n \"\"\"\n Provides Query Signature V1 Authentication.\n \"\"\"\n\n SignatureVersion = 1\n capability = ['sign-v1', 'mturk']\n\n def __init__(self, *args, **kw):\n QuerySignatureHelper.__init__(self, *args, **kw)\n AuthHandler.__init__(self, *args, **kw)\n self._hmac_256 = None\n\n def _calc_signature(self, params, *args):\n boto.log.debug('using _calc_signature_1')\n hmac = self._get_hmac()\n keys = params.keys()\n keys.sort(cmp=lambda x, y: cmp(x.lower(), y.lower()))\n pairs = []\n for key in keys:\n hmac.update(key.encode('utf-8'))\n val = boto.utils.get_utf8_value(params[key])\n hmac.update(val)\n pairs.append(key + '=' + urllib.parse.quote(val))\n qs = '&'.join(pairs)\n return (qs, base64.b64encode(hmac.digest()))\n\n\nclass QuerySignatureV2AuthHandler(QuerySignatureHelper, AuthHandler):\n \"\"\"Provides Query Signature V2 Authentication.\"\"\"\n\n SignatureVersion = 2\n capability = ['sign-v2', 'ec2', 'ec2', 'emr', 'fps', 'ecs',\n 'sdb', 'iam', 'rds', 'sns', 'sqs', 'cloudformation']\n\n def _calc_signature(self, params, verb, path, server_name):\n boto.log.debug('using _calc_signature_2')\n string_to_sign = '%s\\n%s\\n%s\\n' % (verb, server_name.lower(), path)\n hmac = self._get_hmac()\n params['SignatureMethod'] = self.algorithm()\n if self._provider.security_token:\n params['SecurityToken'] = self._provider.security_token\n keys = sorted(params.keys())\n pairs = []\n for key in keys:\n val = boto.utils.get_utf8_value(params[key])\n pairs.append(urllib.parse.quote(key, safe='') + '=' +\n urllib.parse.quote(val, safe='-_~'))\n qs = '&'.join(pairs)\n 
boto.log.debug('query string: %s' % qs)\n string_to_sign += qs\n boto.log.debug('string_to_sign: %s' % string_to_sign)\n hmac.update(string_to_sign.encode('utf-8'))\n b64 = base64.b64encode(hmac.digest())\n boto.log.debug('len(b64)=%d' % len(b64))\n boto.log.debug('base64 encoded digest: %s' % b64)\n return (qs, b64)\n\n\nclass POSTPathQSV2AuthHandler(QuerySignatureV2AuthHandler, AuthHandler):\n \"\"\"\n Query Signature V2 Authentication relocating signed query\n into the path and allowing POST requests with Content-Types.\n \"\"\"\n\n capability = ['mws']\n\n def add_auth(self, req, **kwargs):\n req.params['AWSAccessKeyId'] = self._provider.access_key\n req.params['SignatureVersion'] = self.SignatureVersion\n req.params['Timestamp'] = boto.utils.get_ts()\n qs, signature = self._calc_signature(req.params, req.method,\n req.auth_path, req.host)\n boto.log.debug('query_string: %s Signature: %s' % (qs, signature))\n if req.method == 'POST':\n req.headers['Content-Length'] = str(len(req.body))\n req.headers['Content-Type'] = req.headers.get('Content-Type',\n 'text/plain')\n else:\n req.body = ''\n # if this is a retried req, the qs from the previous try will\n # already be there, we need to get rid of that and rebuild it\n req.path = req.path.split('?')[0]\n req.path = (req.path + '?' + qs +\n '&Signature=' + urllib.parse.quote_plus(signature))\n\n\ndef get_auth_handler(host, config, provider, requested_capability=None):\n \"\"\"Finds an AuthHandler that is ready to authenticate.\n\n Lists through all the registered AuthHandlers to find one that is willing\n to handle for the requested capabilities, config and provider.\n\n :type host: string\n :param host: The name of the host\n\n :type config:\n :param config:\n\n :type provider:\n :param provider:\n\n Returns:\n An implementation of AuthHandler.\n\n Raises:\n boto.exception.NoAuthHandlerFound\n \"\"\"\n ready_handlers = []\n auth_handlers = boto.plugin.get_plugin(AuthHandler, requested_capability)\n total_handlers = len(auth_handlers)\n for handler in auth_handlers:\n try:\n ready_handlers.append(handler(host, config, provider))\n except boto.auth_handler.NotReadyToAuthenticate:\n pass\n\n if not ready_handlers:\n checked_handlers = auth_handlers\n names = [handler.__name__ for handler in checked_handlers]\n raise boto.exception.NoAuthHandlerFound(\n 'No handler was ready to authenticate. %d handlers were checked.'\n ' %s '\n 'Check your credentials' % (len(names), str(names)))\n\n # We select the last ready auth handler that was loaded, to allow users to\n # customize how auth works in environments where there are shared boto\n # config files (e.g., /etc/boto.cfg and ~/.boto): The more general,\n # system-wide shared configs should be loaded first, and the user's\n # customizations loaded last. 
That way, for example, the system-wide\n # config might include a plugin_directory that includes a service account\n # auth plugin shared by all users of a Google Compute Engine instance\n # (allowing sharing of non-user data between various services), and the\n # user could override this with a .boto config that includes user-specific\n # credentials (for access to user data).\n return ready_handlers[-1]\n\n\ndef detect_potential_sigv4(func):\n def _wrapper(self):\n if os.environ.get('EC2_USE_SIGV4', False):\n return ['hmac-v4']\n\n if boto.config.get('ec2', 'use-sigv4', False):\n return ['hmac-v4']\n\n if hasattr(self, 'region'):\n # If you're making changes here, you should also check\n # ``boto/iam/connection.py``, as several things there are also\n # endpoint-related.\n if getattr(self.region, 'endpoint', ''):\n if '.cn-' in self.region.endpoint:\n return ['hmac-v4']\n\n return func(self)\n return _wrapper\n\n\ndef detect_potential_s3sigv4(func):\n def _wrapper(self):\n if os.environ.get('S3_USE_SIGV4', False):\n return ['hmac-v4-s3']\n\n if boto.config.get('s3', 'use-sigv4', False):\n return ['hmac-v4-s3']\n\n if hasattr(self, 'host'):\n # If you're making changes here, you should also check\n # ``boto/iam/connection.py``, as several things there are also\n # endpoint-related.\n if '.cn-' in self.host:\n return ['hmac-v4-s3']\n\n return func(self)\n return _wrapper\n", "path": "boto/auth.py"}], "after_files": [{"content": "# Copyright 2010 Google Inc.\n# Copyright (c) 2011 Mitch Garnaat http://garnaat.org/\n# Copyright (c) 2011, Eucalyptus Systems, Inc.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish, dis-\n# tribute, sublicense, and/or sell copies of the Software, and to permit\n# persons to whom the Software is furnished to do so, subject to the fol-\n# lowing conditions:\n#\n# The above copyright notice and this permission notice shall be included\n# in all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\n# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-\n# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT\n# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,\n# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n# IN THE SOFTWARE.\n\n\n\"\"\"\nHandles authentication required to AWS and GS\n\"\"\"\n\nimport base64\nimport boto\nimport boto.auth_handler\nimport boto.exception\nimport boto.plugin\nimport boto.utils\nimport copy\nimport datetime\nfrom email.utils import formatdate\nimport hmac\nimport os\nimport sys\nimport time\nimport posixpath\n\nfrom boto.compat import urllib, encodebytes\nfrom boto.auth_handler import AuthHandler\nfrom boto.exception import BotoClientError\n\ntry:\n from hashlib import sha1 as sha\n from hashlib import sha256 as sha256\nexcept ImportError:\n import sha\n sha256 = None\n\n\nclass HmacKeys(object):\n \"\"\"Key based Auth handler helper.\"\"\"\n\n def __init__(self, host, config, provider):\n if provider.access_key is None or provider.secret_key is None:\n raise boto.auth_handler.NotReadyToAuthenticate()\n self.host = host\n self.update_provider(provider)\n\n def update_provider(self, provider):\n self._provider = provider\n self._hmac = hmac.new(self._provider.secret_key.encode('utf-8'),\n digestmod=sha)\n if sha256:\n self._hmac_256 = hmac.new(self._provider.secret_key.encode('utf-8'),\n digestmod=sha256)\n else:\n self._hmac_256 = None\n\n def algorithm(self):\n if self._hmac_256:\n return 'HmacSHA256'\n else:\n return 'HmacSHA1'\n\n def _get_hmac(self):\n if self._hmac_256:\n digestmod = sha256\n else:\n digestmod = sha\n return hmac.new(self._provider.secret_key.encode('utf-8'),\n digestmod=digestmod)\n\n def sign_string(self, string_to_sign):\n new_hmac = self._get_hmac()\n new_hmac.update(string_to_sign.encode('utf-8'))\n return encodebytes(new_hmac.digest()).decode('utf-8').strip()\n\n def __getstate__(self):\n pickled_dict = copy.copy(self.__dict__)\n del pickled_dict['_hmac']\n del pickled_dict['_hmac_256']\n return pickled_dict\n\n def __setstate__(self, dct):\n self.__dict__ = dct\n self.update_provider(self._provider)\n\n\nclass AnonAuthHandler(AuthHandler, HmacKeys):\n \"\"\"\n Implements Anonymous requests.\n \"\"\"\n\n capability = ['anon']\n\n def __init__(self, host, config, provider):\n super(AnonAuthHandler, self).__init__(host, config, provider)\n\n def add_auth(self, http_request, **kwargs):\n pass\n\n\nclass HmacAuthV1Handler(AuthHandler, HmacKeys):\n \"\"\" Implements the HMAC request signing used by S3 and GS.\"\"\"\n\n capability = ['hmac-v1', 's3']\n\n def __init__(self, host, config, provider):\n AuthHandler.__init__(self, host, config, provider)\n HmacKeys.__init__(self, host, config, provider)\n self._hmac_256 = None\n\n def update_provider(self, provider):\n super(HmacAuthV1Handler, self).update_provider(provider)\n self._hmac_256 = None\n\n def add_auth(self, http_request, **kwargs):\n headers = http_request.headers\n method = http_request.method\n auth_path = http_request.auth_path\n if 'Date' not in headers:\n headers['Date'] = formatdate(usegmt=True)\n\n if self._provider.security_token:\n key = self._provider.security_token_header\n headers[key] = self._provider.security_token\n string_to_sign = boto.utils.canonical_string(method, auth_path,\n headers, None,\n self._provider)\n boto.log.debug('StringToSign:\\n%s' % string_to_sign)\n b64_hmac = self.sign_string(string_to_sign)\n auth_hdr = self._provider.auth_header\n auth = (\"%s %s:%s\" % (auth_hdr, self._provider.access_key, b64_hmac))\n 
boto.log.debug('Signature:\\n%s' % auth)\n headers['Authorization'] = auth\n\n\nclass HmacAuthV2Handler(AuthHandler, HmacKeys):\n \"\"\"\n Implements the simplified HMAC authorization used by CloudFront.\n \"\"\"\n capability = ['hmac-v2', 'cloudfront']\n\n def __init__(self, host, config, provider):\n AuthHandler.__init__(self, host, config, provider)\n HmacKeys.__init__(self, host, config, provider)\n self._hmac_256 = None\n\n def update_provider(self, provider):\n super(HmacAuthV2Handler, self).update_provider(provider)\n self._hmac_256 = None\n\n def add_auth(self, http_request, **kwargs):\n headers = http_request.headers\n if 'Date' not in headers:\n headers['Date'] = formatdate(usegmt=True)\n if self._provider.security_token:\n key = self._provider.security_token_header\n headers[key] = self._provider.security_token\n\n b64_hmac = self.sign_string(headers['Date'])\n auth_hdr = self._provider.auth_header\n headers['Authorization'] = (\"%s %s:%s\" %\n (auth_hdr,\n self._provider.access_key, b64_hmac))\n\n\nclass HmacAuthV3Handler(AuthHandler, HmacKeys):\n \"\"\"Implements the new Version 3 HMAC authorization used by Route53.\"\"\"\n\n capability = ['hmac-v3', 'route53', 'ses']\n\n def __init__(self, host, config, provider):\n AuthHandler.__init__(self, host, config, provider)\n HmacKeys.__init__(self, host, config, provider)\n\n def add_auth(self, http_request, **kwargs):\n headers = http_request.headers\n if 'Date' not in headers:\n headers['Date'] = formatdate(usegmt=True)\n\n if self._provider.security_token:\n key = self._provider.security_token_header\n headers[key] = self._provider.security_token\n\n b64_hmac = self.sign_string(headers['Date'])\n s = \"AWS3-HTTPS AWSAccessKeyId=%s,\" % self._provider.access_key\n s += \"Algorithm=%s,Signature=%s\" % (self.algorithm(), b64_hmac)\n headers['X-Amzn-Authorization'] = s\n\n\nclass HmacAuthV3HTTPHandler(AuthHandler, HmacKeys):\n \"\"\"\n Implements the new Version 3 HMAC authorization used by DynamoDB.\n \"\"\"\n\n capability = ['hmac-v3-http']\n\n def __init__(self, host, config, provider):\n AuthHandler.__init__(self, host, config, provider)\n HmacKeys.__init__(self, host, config, provider)\n\n def headers_to_sign(self, http_request):\n \"\"\"\n Select the headers from the request that need to be included\n in the StringToSign.\n \"\"\"\n headers_to_sign = {'Host': self.host}\n for name, value in http_request.headers.items():\n lname = name.lower()\n if lname.startswith('x-amz'):\n headers_to_sign[name] = value\n return headers_to_sign\n\n def canonical_headers(self, headers_to_sign):\n \"\"\"\n Return the headers that need to be included in the StringToSign\n in their canonical form by converting all header keys to lower\n case, sorting them in alphabetical order and then joining\n them into a string, separated by newlines.\n \"\"\"\n l = sorted(['%s:%s' % (n.lower().strip(),\n headers_to_sign[n].strip()) for n in headers_to_sign])\n return '\\n'.join(l)\n\n def string_to_sign(self, http_request):\n \"\"\"\n Return the canonical StringToSign as well as a dict\n containing the original version of all headers that\n were included in the StringToSign.\n \"\"\"\n headers_to_sign = self.headers_to_sign(http_request)\n canonical_headers = self.canonical_headers(headers_to_sign)\n string_to_sign = '\\n'.join([http_request.method,\n http_request.auth_path,\n '',\n canonical_headers,\n '',\n http_request.body])\n return string_to_sign, headers_to_sign\n\n def add_auth(self, req, **kwargs):\n \"\"\"\n Add AWS3 authentication to a request.\n\n 
:type req: :class`boto.connection.HTTPRequest`\n :param req: The HTTPRequest object.\n \"\"\"\n # This could be a retry. Make sure the previous\n # authorization header is removed first.\n if 'X-Amzn-Authorization' in req.headers:\n del req.headers['X-Amzn-Authorization']\n req.headers['X-Amz-Date'] = formatdate(usegmt=True)\n if self._provider.security_token:\n req.headers['X-Amz-Security-Token'] = self._provider.security_token\n string_to_sign, headers_to_sign = self.string_to_sign(req)\n boto.log.debug('StringToSign:\\n%s' % string_to_sign)\n hash_value = sha256(string_to_sign.encode('utf-8')).digest()\n b64_hmac = self.sign_string(hash_value)\n s = \"AWS3 AWSAccessKeyId=%s,\" % self._provider.access_key\n s += \"Algorithm=%s,\" % self.algorithm()\n s += \"SignedHeaders=%s,\" % ';'.join(headers_to_sign)\n s += \"Signature=%s\" % b64_hmac\n req.headers['X-Amzn-Authorization'] = s\n\n\nclass HmacAuthV4Handler(AuthHandler, HmacKeys):\n \"\"\"\n Implements the new Version 4 HMAC authorization.\n \"\"\"\n\n capability = ['hmac-v4']\n\n def __init__(self, host, config, provider,\n service_name=None, region_name=None):\n AuthHandler.__init__(self, host, config, provider)\n HmacKeys.__init__(self, host, config, provider)\n # You can set the service_name and region_name to override the\n # values which would otherwise come from the endpoint, e.g.\n # <service>.<region>.amazonaws.com.\n self.service_name = service_name\n self.region_name = region_name\n\n def _sign(self, key, msg, hex=False):\n if not isinstance(key, bytes):\n key = key.encode('utf-8')\n\n if hex:\n sig = hmac.new(key, msg.encode('utf-8'), sha256).hexdigest()\n else:\n sig = hmac.new(key, msg.encode('utf-8'), sha256).digest()\n return sig\n\n def headers_to_sign(self, http_request):\n \"\"\"\n Select the headers from the request that need to be included\n in the StringToSign.\n \"\"\"\n host_header_value = self.host_header(self.host, http_request)\n headers_to_sign = {'Host': host_header_value}\n for name, value in http_request.headers.items():\n lname = name.lower()\n if lname.startswith('x-amz'):\n if isinstance(value, bytes):\n value = value.decode('utf-8')\n headers_to_sign[name] = value\n return headers_to_sign\n\n def host_header(self, host, http_request):\n port = http_request.port\n secure = http_request.protocol == 'https'\n if ((port == 80 and not secure) or (port == 443 and secure)):\n return host\n return '%s:%s' % (host, port)\n\n def query_string(self, http_request):\n parameter_names = sorted(http_request.params.keys())\n pairs = []\n for pname in parameter_names:\n pval = boto.utils.get_utf8_value(http_request.params[pname])\n pairs.append(urllib.parse.quote(pname, safe='') + '=' +\n urllib.parse.quote(pval, safe='-_~'))\n return '&'.join(pairs)\n\n def canonical_query_string(self, http_request):\n # POST requests pass parameters in through the\n # http_request.body field.\n if http_request.method == 'POST':\n return \"\"\n l = []\n for param in sorted(http_request.params):\n value = boto.utils.get_utf8_value(http_request.params[param])\n l.append('%s=%s' % (urllib.parse.quote(param, safe='-_.~'),\n urllib.parse.quote(value.decode('utf-8'), safe='-_.~')))\n return '&'.join(l)\n\n def canonical_headers(self, headers_to_sign):\n \"\"\"\n Return the headers that need to be included in the StringToSign\n in their canonical form by converting all header keys to lower\n case, sorting them in alphabetical order and then joining\n them into a string, separated by newlines.\n \"\"\"\n canonical = []\n\n for header in 
headers_to_sign:\n c_name = header.lower().strip()\n raw_value = headers_to_sign[header]\n if '\"' in raw_value:\n c_value = raw_value.strip()\n else:\n c_value = ' '.join(raw_value.strip().split())\n canonical.append('%s:%s' % (c_name, c_value))\n return '\\n'.join(sorted(canonical))\n\n def signed_headers(self, headers_to_sign):\n l = ['%s' % n.lower().strip() for n in headers_to_sign]\n l = sorted(l)\n return ';'.join(l)\n\n def canonical_uri(self, http_request):\n path = http_request.auth_path\n # Normalize the path\n # in windows normpath('/') will be '\\\\' so we chane it back to '/'\n normalized = posixpath.normpath(path).replace('\\\\','/')\n # Then urlencode whatever's left.\n encoded = urllib.parse.quote(normalized)\n if len(path) > 1 and path.endswith('/'):\n encoded += '/'\n return encoded\n\n def payload(self, http_request):\n body = http_request.body\n # If the body is a file like object, we can use\n # boto.utils.compute_hash, which will avoid reading\n # the entire body into memory.\n if hasattr(body, 'seek') and hasattr(body, 'read'):\n return boto.utils.compute_hash(body, hash_algorithm=sha256)[0]\n elif not isinstance(body, bytes):\n body = body.encode('utf-8')\n return sha256(body).hexdigest()\n\n def canonical_request(self, http_request):\n cr = [http_request.method.upper()]\n cr.append(self.canonical_uri(http_request))\n cr.append(self.canonical_query_string(http_request))\n headers_to_sign = self.headers_to_sign(http_request)\n cr.append(self.canonical_headers(headers_to_sign) + '\\n')\n cr.append(self.signed_headers(headers_to_sign))\n cr.append(self.payload(http_request))\n return '\\n'.join(cr)\n\n def scope(self, http_request):\n scope = [self._provider.access_key]\n scope.append(http_request.timestamp)\n scope.append(http_request.region_name)\n scope.append(http_request.service_name)\n scope.append('aws4_request')\n return '/'.join(scope)\n\n def split_host_parts(self, host):\n return host.split('.')\n\n def determine_region_name(self, host):\n parts = self.split_host_parts(host)\n if self.region_name is not None:\n region_name = self.region_name\n elif len(parts) > 1:\n if parts[1] == 'us-gov':\n region_name = 'us-gov-west-1'\n else:\n if len(parts) == 3:\n region_name = 'us-east-1'\n else:\n region_name = parts[1]\n else:\n region_name = parts[0]\n\n return region_name\n\n def determine_service_name(self, host):\n parts = self.split_host_parts(host)\n if self.service_name is not None:\n service_name = self.service_name\n else:\n service_name = parts[0]\n return service_name\n\n def credential_scope(self, http_request):\n scope = []\n http_request.timestamp = http_request.headers['X-Amz-Date'][0:8]\n scope.append(http_request.timestamp)\n # The service_name and region_name either come from:\n # * The service_name/region_name attrs or (if these values are None)\n # * parsed from the endpoint <service>.<region>.amazonaws.com.\n region_name = self.determine_region_name(http_request.host)\n service_name = self.determine_service_name(http_request.host)\n http_request.service_name = service_name\n http_request.region_name = region_name\n\n scope.append(http_request.region_name)\n scope.append(http_request.service_name)\n scope.append('aws4_request')\n return '/'.join(scope)\n\n def string_to_sign(self, http_request, canonical_request):\n \"\"\"\n Return the canonical StringToSign as well as a dict\n containing the original version of all headers that\n were included in the StringToSign.\n \"\"\"\n sts = ['AWS4-HMAC-SHA256']\n 
sts.append(http_request.headers['X-Amz-Date'])\n sts.append(self.credential_scope(http_request))\n sts.append(sha256(canonical_request.encode('utf-8')).hexdigest())\n return '\\n'.join(sts)\n\n def signature(self, http_request, string_to_sign):\n key = self._provider.secret_key\n k_date = self._sign(('AWS4' + key).encode('utf-8'),\n http_request.timestamp)\n k_region = self._sign(k_date, http_request.region_name)\n k_service = self._sign(k_region, http_request.service_name)\n k_signing = self._sign(k_service, 'aws4_request')\n return self._sign(k_signing, string_to_sign, hex=True)\n\n def add_auth(self, req, **kwargs):\n \"\"\"\n Add AWS4 authentication to a request.\n\n :type req: :class`boto.connection.HTTPRequest`\n :param req: The HTTPRequest object.\n \"\"\"\n # This could be a retry. Make sure the previous\n # authorization header is removed first.\n if 'X-Amzn-Authorization' in req.headers:\n del req.headers['X-Amzn-Authorization']\n now = datetime.datetime.utcnow()\n req.headers['X-Amz-Date'] = now.strftime('%Y%m%dT%H%M%SZ')\n if self._provider.security_token:\n req.headers['X-Amz-Security-Token'] = self._provider.security_token\n qs = self.query_string(req)\n if qs and req.method == 'POST':\n # Stash request parameters into post body\n # before we generate the signature.\n req.body = qs\n req.headers['Content-Type'] = 'application/x-www-form-urlencoded; charset=UTF-8'\n req.headers['Content-Length'] = str(len(req.body))\n else:\n # Safe to modify req.path here since\n # the signature will use req.auth_path.\n req.path = req.path.split('?')[0]\n\n if qs:\n # Don't insert the '?' unless there's actually a query string\n req.path = req.path + '?' + qs\n canonical_request = self.canonical_request(req)\n boto.log.debug('CanonicalRequest:\\n%s' % canonical_request)\n string_to_sign = self.string_to_sign(req, canonical_request)\n boto.log.debug('StringToSign:\\n%s' % string_to_sign)\n signature = self.signature(req, string_to_sign)\n boto.log.debug('Signature:\\n%s' % signature)\n headers_to_sign = self.headers_to_sign(req)\n l = ['AWS4-HMAC-SHA256 Credential=%s' % self.scope(req)]\n l.append('SignedHeaders=%s' % self.signed_headers(headers_to_sign))\n l.append('Signature=%s' % signature)\n req.headers['Authorization'] = ','.join(l)\n\n\nclass S3HmacAuthV4Handler(HmacAuthV4Handler, AuthHandler):\n \"\"\"\n Implements a variant of Version 4 HMAC authorization specific to S3.\n \"\"\"\n capability = ['hmac-v4-s3']\n\n def __init__(self, *args, **kwargs):\n super(S3HmacAuthV4Handler, self).__init__(*args, **kwargs)\n\n if self.region_name:\n self.region_name = self.clean_region_name(self.region_name)\n\n def clean_region_name(self, region_name):\n if region_name.startswith('s3-'):\n return region_name[3:]\n\n return region_name\n\n def canonical_uri(self, http_request):\n # S3 does **NOT** do path normalization that SigV4 typically does.\n # Urlencode the path, **NOT** ``auth_path`` (because vhosting).\n path = urllib.parse.urlparse(http_request.path)\n # Because some quoting may have already been applied, let's back it out.\n unquoted = urllib.parse.unquote(path.path)\n # Requote, this time addressing all characters.\n encoded = urllib.parse.quote(unquoted)\n return encoded\n\n def host_header(self, host, http_request):\n port = http_request.port\n secure = http_request.protocol == 'https'\n if ((port == 80 and not secure) or (port == 443 and secure)):\n return http_request.host\n return '%s:%s' % (http_request.host, port)\n\n def headers_to_sign(self, http_request):\n \"\"\"\n Select the 
headers from the request that need to be included\n in the StringToSign.\n \"\"\"\n host_header_value = self.host_header(self.host, http_request)\n headers_to_sign = {'Host': host_header_value}\n for name, value in http_request.headers.items():\n lname = name.lower()\n # Hooray for the only difference! The main SigV4 signer only does\n # ``Host`` + ``x-amz-*``. But S3 wants pretty much everything\n # signed, except for authorization itself.\n if not lname in ['authorization']:\n headers_to_sign[name] = value\n return headers_to_sign\n\n def determine_region_name(self, host):\n # S3's different format(s) of representing region/service from the\n # rest of AWS makes this hurt too.\n #\n # Possible domain formats:\n # - s3.amazonaws.com (Classic)\n # - s3-us-west-2.amazonaws.com (Specific region)\n # - bukkit.s3.amazonaws.com (Vhosted Classic)\n # - bukkit.s3-ap-northeast-1.amazonaws.com (Vhosted specific region)\n # - s3.cn-north-1.amazonaws.com.cn - (Bejing region)\n # - bukkit.s3.cn-north-1.amazonaws.com.cn - (Vhosted Bejing region)\n parts = self.split_host_parts(host)\n\n if self.region_name is not None:\n region_name = self.region_name\n else:\n # Classic URLs - s3-us-west-2.amazonaws.com\n if len(parts) == 3:\n region_name = self.clean_region_name(parts[0])\n\n # Special-case for Classic.\n if region_name == 's3':\n region_name = 'us-east-1'\n else:\n # Iterate over the parts in reverse order.\n for offset, part in enumerate(reversed(parts)):\n part = part.lower()\n\n # Look for the first thing starting with 's3'.\n # Until there's a ``.s3`` TLD, we should be OK. :P\n if part == 's3':\n # If it's by itself, the region is the previous part.\n region_name = parts[-offset]\n\n # Unless it's Vhosted classic\n if region_name == 'amazonaws':\n region_name = 'us-east-1'\n\n break\n elif part.startswith('s3-'):\n region_name = self.clean_region_name(part)\n break\n\n return region_name\n\n def determine_service_name(self, host):\n # Should this signing mechanism ever be used for anything else, this\n # will fail. Consider utilizing the logic from the parent class should\n # you find yourself here.\n return 's3'\n\n def mangle_path_and_params(self, req):\n \"\"\"\n Returns a copy of the request object with fixed ``auth_path/params``\n attributes from the original.\n \"\"\"\n modified_req = copy.copy(req)\n\n # Unlike the most other services, in S3, ``req.params`` isn't the only\n # source of query string parameters.\n # Because of the ``query_args``, we may already have a query string\n # **ON** the ``path/auth_path``.\n # Rip them apart, so the ``auth_path/params`` can be signed\n # appropriately.\n parsed_path = urllib.parse.urlparse(modified_req.auth_path)\n modified_req.auth_path = parsed_path.path\n\n if modified_req.params is None:\n modified_req.params = {}\n\n raw_qs = parsed_path.query\n existing_qs = urllib.parse.parse_qs(\n raw_qs,\n keep_blank_values=True\n )\n\n # ``parse_qs`` will return lists. 
Don't do that unless there's a real,\n # live list provided.\n for key, value in existing_qs.items():\n if isinstance(value, (list, tuple)):\n if len(value) == 1:\n existing_qs[key] = value[0]\n\n modified_req.params.update(existing_qs)\n return modified_req\n\n def payload(self, http_request):\n if http_request.headers.get('x-amz-content-sha256'):\n return http_request.headers['x-amz-content-sha256']\n\n return super(S3HmacAuthV4Handler, self).payload(http_request)\n\n def add_auth(self, req, **kwargs):\n if not 'x-amz-content-sha256' in req.headers:\n if '_sha256' in req.headers:\n req.headers['x-amz-content-sha256'] = req.headers.pop('_sha256')\n else:\n req.headers['x-amz-content-sha256'] = self.payload(req)\n\n req = self.mangle_path_and_params(req)\n return super(S3HmacAuthV4Handler, self).add_auth(req, **kwargs)\n\n def presign(self, req, expires, iso_date=None):\n \"\"\"\n Presign a request using SigV4 query params. Takes in an HTTP request\n and an expiration time in seconds and returns a URL.\n\n http://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-query-string-auth.html\n \"\"\"\n if iso_date is None:\n iso_date = datetime.datetime.utcnow().strftime('%Y%m%dT%H%M%SZ')\n\n region = self.determine_region_name(req.host)\n service = self.determine_service_name(req.host)\n\n params = {\n 'X-Amz-Algorithm': 'AWS4-HMAC-SHA256',\n 'X-Amz-Credential': '%s/%s/%s/%s/aws4_request' % (\n self._provider.access_key,\n iso_date[:8],\n region,\n service\n ),\n 'X-Amz-Date': iso_date,\n 'X-Amz-Expires': expires,\n 'X-Amz-SignedHeaders': 'host'\n }\n\n if self._provider.security_token:\n params['X-Amz-Security-Token'] = self._provider.security_token\n\n req.params.update(params)\n\n cr = self.canonical_request(req)\n\n # We need to replace the payload SHA with a constant\n cr = '\\n'.join(cr.split('\\n')[:-1]) + '\\nUNSIGNED-PAYLOAD'\n\n # Date header is expected for string_to_sign, but unused otherwise\n req.headers['X-Amz-Date'] = iso_date\n\n sts = self.string_to_sign(req, cr)\n signature = self.signature(req, sts)\n\n # Add signature to params now that we have it\n req.params['X-Amz-Signature'] = signature\n\n return 'https://%s%s?%s' % (req.host, req.path,\n urllib.parse.urlencode(req.params))\n\n\nclass QueryAuthHandler(AuthHandler):\n \"\"\"\n Provides pure query construction (no actual signing).\n\n Mostly useful for STS' ``assume_role_with_web_identity``.\n\n Does **NOT** escape query string values!\n \"\"\"\n\n capability = ['pure-query']\n\n def _escape_value(self, value):\n # Would normally be ``return urllib.parse.quote(value)``.\n return value\n\n def _build_query_string(self, params):\n keys = list(params.keys())\n keys.sort(key=lambda x: x.lower())\n pairs = []\n for key in keys:\n val = boto.utils.get_utf8_value(params[key])\n pairs.append(key + '=' + self._escape_value(val.decode('utf-8')))\n return '&'.join(pairs)\n\n def add_auth(self, http_request, **kwargs):\n headers = http_request.headers\n params = http_request.params\n qs = self._build_query_string(\n http_request.params\n )\n boto.log.debug('query_string: %s' % qs)\n headers['Content-Type'] = 'application/json; charset=UTF-8'\n http_request.body = ''\n # if this is a retried request, the qs from the previous try will\n # already be there, we need to get rid of that and rebuild it\n http_request.path = http_request.path.split('?')[0]\n http_request.path = http_request.path + '?' 
+ qs\n\n\nclass QuerySignatureHelper(HmacKeys):\n \"\"\"\n Helper for Query signature based Auth handler.\n\n Concrete sub class need to implement _calc_sigature method.\n \"\"\"\n\n def add_auth(self, http_request, **kwargs):\n headers = http_request.headers\n params = http_request.params\n params['AWSAccessKeyId'] = self._provider.access_key\n params['SignatureVersion'] = self.SignatureVersion\n params['Timestamp'] = boto.utils.get_ts()\n qs, signature = self._calc_signature(\n http_request.params, http_request.method,\n http_request.auth_path, http_request.host)\n boto.log.debug('query_string: %s Signature: %s' % (qs, signature))\n if http_request.method == 'POST':\n headers['Content-Type'] = 'application/x-www-form-urlencoded; charset=UTF-8'\n http_request.body = qs + '&Signature=' + urllib.parse.quote_plus(signature)\n http_request.headers['Content-Length'] = str(len(http_request.body))\n else:\n http_request.body = ''\n # if this is a retried request, the qs from the previous try will\n # already be there, we need to get rid of that and rebuild it\n http_request.path = http_request.path.split('?')[0]\n http_request.path = (http_request.path + '?' + qs +\n '&Signature=' + urllib.parse.quote_plus(signature))\n\n\nclass QuerySignatureV0AuthHandler(QuerySignatureHelper, AuthHandler):\n \"\"\"Provides Signature V0 Signing\"\"\"\n\n SignatureVersion = 0\n capability = ['sign-v0']\n\n def _calc_signature(self, params, *args):\n boto.log.debug('using _calc_signature_0')\n hmac = self._get_hmac()\n s = params['Action'] + params['Timestamp']\n hmac.update(s.encode('utf-8'))\n keys = params.keys()\n keys.sort(cmp=lambda x, y: cmp(x.lower(), y.lower()))\n pairs = []\n for key in keys:\n val = boto.utils.get_utf8_value(params[key])\n pairs.append(key + '=' + urllib.parse.quote(val))\n qs = '&'.join(pairs)\n return (qs, base64.b64encode(hmac.digest()))\n\n\nclass QuerySignatureV1AuthHandler(QuerySignatureHelper, AuthHandler):\n \"\"\"\n Provides Query Signature V1 Authentication.\n \"\"\"\n\n SignatureVersion = 1\n capability = ['sign-v1', 'mturk']\n\n def __init__(self, *args, **kw):\n QuerySignatureHelper.__init__(self, *args, **kw)\n AuthHandler.__init__(self, *args, **kw)\n self._hmac_256 = None\n\n def _calc_signature(self, params, *args):\n boto.log.debug('using _calc_signature_1')\n hmac = self._get_hmac()\n keys = params.keys()\n keys.sort(cmp=lambda x, y: cmp(x.lower(), y.lower()))\n pairs = []\n for key in keys:\n hmac.update(key.encode('utf-8'))\n val = boto.utils.get_utf8_value(params[key])\n hmac.update(val)\n pairs.append(key + '=' + urllib.parse.quote(val))\n qs = '&'.join(pairs)\n return (qs, base64.b64encode(hmac.digest()))\n\n\nclass QuerySignatureV2AuthHandler(QuerySignatureHelper, AuthHandler):\n \"\"\"Provides Query Signature V2 Authentication.\"\"\"\n\n SignatureVersion = 2\n capability = ['sign-v2', 'ec2', 'ec2', 'emr', 'fps', 'ecs',\n 'sdb', 'iam', 'rds', 'sns', 'sqs', 'cloudformation']\n\n def _calc_signature(self, params, verb, path, server_name):\n boto.log.debug('using _calc_signature_2')\n string_to_sign = '%s\\n%s\\n%s\\n' % (verb, server_name.lower(), path)\n hmac = self._get_hmac()\n params['SignatureMethod'] = self.algorithm()\n if self._provider.security_token:\n params['SecurityToken'] = self._provider.security_token\n keys = sorted(params.keys())\n pairs = []\n for key in keys:\n val = boto.utils.get_utf8_value(params[key])\n pairs.append(urllib.parse.quote(key, safe='') + '=' +\n urllib.parse.quote(val, safe='-_~'))\n qs = '&'.join(pairs)\n 
boto.log.debug('query string: %s' % qs)\n string_to_sign += qs\n boto.log.debug('string_to_sign: %s' % string_to_sign)\n hmac.update(string_to_sign.encode('utf-8'))\n b64 = base64.b64encode(hmac.digest())\n boto.log.debug('len(b64)=%d' % len(b64))\n boto.log.debug('base64 encoded digest: %s' % b64)\n return (qs, b64)\n\n\nclass POSTPathQSV2AuthHandler(QuerySignatureV2AuthHandler, AuthHandler):\n \"\"\"\n Query Signature V2 Authentication relocating signed query\n into the path and allowing POST requests with Content-Types.\n \"\"\"\n\n capability = ['mws']\n\n def add_auth(self, req, **kwargs):\n req.params['AWSAccessKeyId'] = self._provider.access_key\n req.params['SignatureVersion'] = self.SignatureVersion\n req.params['Timestamp'] = boto.utils.get_ts()\n qs, signature = self._calc_signature(req.params, req.method,\n req.auth_path, req.host)\n boto.log.debug('query_string: %s Signature: %s' % (qs, signature))\n if req.method == 'POST':\n req.headers['Content-Length'] = str(len(req.body))\n req.headers['Content-Type'] = req.headers.get('Content-Type',\n 'text/plain')\n else:\n req.body = ''\n # if this is a retried req, the qs from the previous try will\n # already be there, we need to get rid of that and rebuild it\n req.path = req.path.split('?')[0]\n req.path = (req.path + '?' + qs +\n '&Signature=' + urllib.parse.quote_plus(signature))\n\n\ndef get_auth_handler(host, config, provider, requested_capability=None):\n \"\"\"Finds an AuthHandler that is ready to authenticate.\n\n Lists through all the registered AuthHandlers to find one that is willing\n to handle for the requested capabilities, config and provider.\n\n :type host: string\n :param host: The name of the host\n\n :type config:\n :param config:\n\n :type provider:\n :param provider:\n\n Returns:\n An implementation of AuthHandler.\n\n Raises:\n boto.exception.NoAuthHandlerFound\n \"\"\"\n ready_handlers = []\n auth_handlers = boto.plugin.get_plugin(AuthHandler, requested_capability)\n total_handlers = len(auth_handlers)\n for handler in auth_handlers:\n try:\n ready_handlers.append(handler(host, config, provider))\n except boto.auth_handler.NotReadyToAuthenticate:\n pass\n\n if not ready_handlers:\n checked_handlers = auth_handlers\n names = [handler.__name__ for handler in checked_handlers]\n raise boto.exception.NoAuthHandlerFound(\n 'No handler was ready to authenticate. %d handlers were checked.'\n ' %s '\n 'Check your credentials' % (len(names), str(names)))\n\n # We select the last ready auth handler that was loaded, to allow users to\n # customize how auth works in environments where there are shared boto\n # config files (e.g., /etc/boto.cfg and ~/.boto): The more general,\n # system-wide shared configs should be loaded first, and the user's\n # customizations loaded last. 
That way, for example, the system-wide\n # config might include a plugin_directory that includes a service account\n # auth plugin shared by all users of a Google Compute Engine instance\n # (allowing sharing of non-user data between various services), and the\n # user could override this with a .boto config that includes user-specific\n # credentials (for access to user data).\n return ready_handlers[-1]\n\n\ndef detect_potential_sigv4(func):\n def _wrapper(self):\n if os.environ.get('EC2_USE_SIGV4', False):\n return ['hmac-v4']\n\n if boto.config.get('ec2', 'use-sigv4', False):\n return ['hmac-v4']\n\n if hasattr(self, 'region'):\n # If you're making changes here, you should also check\n # ``boto/iam/connection.py``, as several things there are also\n # endpoint-related.\n if getattr(self.region, 'endpoint', ''):\n if '.cn-' in self.region.endpoint:\n return ['hmac-v4']\n\n return func(self)\n return _wrapper\n\n\ndef detect_potential_s3sigv4(func):\n def _wrapper(self):\n if os.environ.get('S3_USE_SIGV4', False):\n return ['hmac-v4-s3']\n\n if boto.config.get('s3', 'use-sigv4', False):\n return ['hmac-v4-s3']\n\n if hasattr(self, 'host'):\n # If you're making changes here, you should also check\n # ``boto/iam/connection.py``, as several things there are also\n # endpoint-related.\n if '.cn-' in self.host:\n return ['hmac-v4-s3']\n\n return func(self)\n return _wrapper\n", "path": "boto/auth.py"}]} |
gh_patches_debug_1140 | rasdani/github-patches | git_diff | litestar-org__litestar-1000 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug: Viewing default schema then swagger schema results in error
**Describe the bug**
Viewing the standard `/schema` route and then viewing `/schema/swagger` results in an empty page. The browser console on the swagger route reports a JSON parsing error: `JSON.parse()` is being called on an empty value.
I believe the problem is the caching check located [here](https://github.com/starlite-api/starlite/blob/55eea965b2ac9e56aca77797512ab878b0b7499b/starlite/openapi/controller.py#L306). It should be checking `self._dumped_modified_schema`; changing it to that attribute seems to fix the issue.
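To make the suspected root cause concrete, the sketch below reduces the caching pattern to its essentials. It is a hypothetical illustration, not the actual Starlite source (the relevant method sits past the excerpt reproduced further down); only the two cache attribute names are taken from the controller, and the premise is that the root `/schema` page fills `_dumped_schema` first, after which a guard on the wrong attribute skips filling `_dumped_modified_schema`.
```python
# Hypothetical reduction of the caching described above. The attribute names match
# the controller; the methods are simplified stand-ins for the render paths.
class CachingSketch:
    _dumped_schema: str = ""            # cache for the unmodified schema (redoc / JSON)
    _dumped_modified_schema: str = ""   # swagger-ui needs a separately modified copy

    def redoc_payload(self, schema_json: str) -> str:
        if not self._dumped_schema:
            self._dumped_schema = schema_json
        return self._dumped_schema

    def swagger_payload(self, modified_schema_json: str) -> str:
        # Reported bug: the guard reads `if not self._dumped_schema:`, which is
        # already non-empty once /schema has been rendered, so the modified cache
        # never gets filled and the swagger page embeds an empty string.
        if not self._dumped_modified_schema:  # the check suggested in the report
            self._dumped_modified_schema = modified_schema_json
        return self._dumped_modified_schema
```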
**To Reproduce**
Run the hello world example from the documentation:
```python
from typing import Dict

from starlite import Starlite, get


@get("/")
def hello_world() -> Dict[str, str]:
    """Handler function that returns a greeting dictionary."""
    return {"hello": "world"}


app = Starlite(route_handlers=[hello_world])
```
Then visit `/schema` and the page should be fine. Then visit `/schema/swagger` and it should be an empty page.
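For reference, a programmatic version of that walk-through might look like the sketch below, assuming `starlite.testing.TestClient` is available in the installed version. Both requests still return 200, because the breakage happens client-side when swagger-ui calls `JSON.parse()` on the empty embedded schema, so the second response body is what needs inspecting.
```python
# Hedged reproduction sketch: hit the two endpoints in the order that triggers the report.
from typing import Dict

from starlite import Starlite, get
from starlite.testing import TestClient


@get("/")
def hello_world() -> Dict[str, str]:
    return {"hello": "world"}


app = Starlite(route_handlers=[hello_world])

with TestClient(app=app) as client:
    first = client.get("/schema")            # root docs page renders fine
    second = client.get("/schema/swagger")   # HTML is returned, but its embedded schema is empty
    print(first.status_code, second.status_code)  # expected: 200 200
    print(len(second.text))                  # inspect the HTML body for the missing schema payload
```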
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `starlite/openapi/controller.py`
Content:
```
1 from functools import cached_property
2 from typing import TYPE_CHECKING, Callable, Dict, Literal, cast
3
4 from yaml import dump as dump_yaml
5
6 from starlite.connection import Request
7 from starlite.controller import Controller
8 from starlite.enums import MediaType, OpenAPIMediaType
9 from starlite.exceptions import ImproperlyConfiguredException
10 from starlite.handlers import get
11 from starlite.response import Response
12 from starlite.status_codes import HTTP_404_NOT_FOUND
13 from starlite.utils.serialization import encode_json
14
15 if TYPE_CHECKING:
16
17 from pydantic_openapi_schema.v3_1_0.open_api import OpenAPI
18
19 MSG_OPENAPI_NOT_INITIALIZED = "Starlite has not been instantiated with OpenAPIConfig"
20
21
22 class OpenAPISchemaResponse(Response):
23 """Response class for OpenAPI Schemas."""
24
25 def render(self, content: "OpenAPI") -> bytes:
26 """Handle rendering of schema into the correct format - either YAML or JSON.
27
28 Args:
29 content: The [OpenAPI][pydantic_openapi_schema.v3_1_0.open_api.OpenAPI] instance to render.
30
31 Returns:
32 Rendered bytes.
33 """
34 content_dict = content.dict(by_alias=True, exclude_none=True)
35 if self.media_type == OpenAPIMediaType.OPENAPI_YAML:
36 return cast("bytes", dump_yaml(content_dict, default_flow_style=False).encode("utf-8"))
37 return encode_json(content_dict)
38
39
40 class OpenAPIController(Controller):
41 """Controller for OpenAPI endpoints."""
42
43 path: str = "/schema"
44 """Base path for the OpenAPI documentation endpoints."""
45 style: str = "body { margin: 0; padding: 0 }"
46 """Base styling of the html body."""
47 redoc_version: str = "next"
48 """Redoc version to download from the CDN."""
49 swagger_ui_version: str = "4.15.5"
50 """SwaggerUI version to download from the CDN."""
51 stoplight_elements_version: str = "7.7.5"
52 """StopLight Elements version to download from the CDN."""
53 favicon_url: str = ""
54 """URL to download a favicon from."""
55 redoc_google_fonts: bool = True
56 """Download google fonts via CDN.
57
58 Should be set to `False` when not using a CDN.
59 """
60 redoc_js_url: str = f"https://cdn.jsdelivr.net/npm/redoc@{redoc_version}/bundles/redoc.standalone.js"
61 """Download url for the Redoc JS bundle."""
62 swagger_css_url: str = f"https://cdn.jsdelivr.net/npm/swagger-ui-dist@{swagger_ui_version}/swagger-ui.css"
63 """Download url for the Swagger UI CSS bundle."""
64 swagger_ui_bundle_js_url: str = (
65 f"https://cdn.jsdelivr.net/npm/swagger-ui-dist@{swagger_ui_version}/swagger-ui-bundle.js"
66 )
67 """Download url for the Swagger UI JS bundle."""
68 swagger_ui_standalone_preset_js_url: str = (
69 f"https://cdn.jsdelivr.net/npm/swagger-ui-dist@{swagger_ui_version}/swagger-ui-standalone-preset.js"
70 )
71 """Download url for the Swagger Standalone Preset JS bundle."""
72 stoplight_elements_css_url: str = (
73 f"https://unpkg.com/@stoplight/elements@{stoplight_elements_version}/styles.min.css"
74 )
75 """Download url for the Stoplight Elements CSS bundle."""
76 stoplight_elements_js_url: str = (
77 f"https://unpkg.com/@stoplight/elements@{stoplight_elements_version}/web-components.min.js"
78 )
79 """Download url for the Stoplight Elements JS bundle."""
80
81 # internal
82 _dumped_schema: str = ""
83 # until swagger-ui supports v3.1.* of OpenAPI officially, we need to modify the schema for it and keep it
84 # separate from the redoc version of the schema, which is unmodified.
85 _dumped_modified_schema: str = ""
86
87 @staticmethod
88 def get_schema_from_request(request: Request) -> "OpenAPI":
89 """Return the OpenAPI pydantic model from the request instance.
90
91 Args:
92 request: A [Starlite][starlite.connection.Request] instance.
93
94 Returns:
95 An [OpenAPI][pydantic_openapi_schema.v3_1_0.open_api.OpenAPI] instance.
96
97 Raises:
98 ImproperlyConfiguredException: If the application `openapi_config` attribute is `None`.
99 """
100 if not request.app.openapi_schema: # pragma: no cover
101 raise ImproperlyConfiguredException(MSG_OPENAPI_NOT_INITIALIZED)
102 return request.app.openapi_schema
103
104 def should_serve_endpoint(self, request: "Request") -> bool:
105 """Verify that the requested path is within the enabled endpoints in the openapi_config.
106
107 Args:
108 request: To be tested if endpoint enabled.
109
110 Returns:
111 A boolean.
112
113 Raises:
114 ImproperlyConfiguredException: If the application `openapi_config` attribute is `None`.
115 """
116 if not request.app.openapi_config: # pragma: no cover
117 raise ImproperlyConfiguredException("Starlite has not been instantiated with an OpenAPIConfig")
118
119 asgi_root_path = set(filter(None, request.scope.get("root_path", "").split("/")))
120 full_request_path = set(filter(None, request.url.path.split("/")))
121 request_path = full_request_path.difference(asgi_root_path)
122 root_path = set(filter(None, self.path.split("/")))
123
124 config = request.app.openapi_config
125
126 if request_path == root_path and config.root_schema_site in config.enabled_endpoints:
127 return True
128
129 if request_path & config.enabled_endpoints:
130 return True
131
132 return False
133
134 @property
135 def favicon(self) -> str:
136 """Return favicon `<link>` tag, if applicable.
137
138 Returns:
139 A `<link>` tag if self.favicon_url is not empty, otherwise returns a placeholder meta tag.
140 """
141 return f"<link rel='icon' type='image/x-icon' href='{self.favicon_url}'>" if self.favicon_url else "<meta/>"
142
143 @cached_property
144 def render_methods_map(self) -> Dict[Literal["redoc", "swagger", "elements"], Callable[[Request], str]]:
145 """Map render method names to render methods.
146
147 Returns:
148 A mapping of string keys to render methods.
149 """
150 return {
151 "redoc": self.render_redoc,
152 "swagger": self.render_swagger_ui,
153 "elements": self.render_stoplight_elements,
154 }
155
156 @get(
157 path="/openapi.yaml",
158 media_type=OpenAPIMediaType.OPENAPI_YAML,
159 include_in_schema=False,
160 )
161 def retrieve_schema_yaml(self, request: Request) -> Response:
162 """Return the OpenAPI schema as YAML with an 'application/vnd.oai.openapi' Content-Type header.
163
164 Args:
165 request:
166 A [Request][starlite.connection.Request] instance.
167
168 Returns:
169 A Response instance with the YAML object rendered into a string.
170 """
171 if not request.app.openapi_config: # pragma: no cover
172 raise ImproperlyConfiguredException(MSG_OPENAPI_NOT_INITIALIZED)
173
174 if self.should_serve_endpoint(request):
175 return OpenAPISchemaResponse(
176 content=self.get_schema_from_request(request), media_type=OpenAPIMediaType.OPENAPI_YAML
177 )
178 return Response(content={}, status_code=HTTP_404_NOT_FOUND)
179
180 @get(path="/openapi.json", media_type=OpenAPIMediaType.OPENAPI_JSON, include_in_schema=False)
181 def retrieve_schema_json(self, request: Request) -> Response:
182 """Return the OpenAPI schema as JSON with an 'application/vnd.oai.openapi+json' Content-Type header.
183
184 Args:
185 request:
186 A [Request][starlite.connection.Request] instance.
187
188 Returns:
189 A Response instance with the JSON object rendered into a string.
190 """
191 if not request.app.openapi_config: # pragma: no cover
192 raise ImproperlyConfiguredException(MSG_OPENAPI_NOT_INITIALIZED)
193
194 if self.should_serve_endpoint(request):
195 return OpenAPISchemaResponse(
196 content=self.get_schema_from_request(request), media_type=OpenAPIMediaType.OPENAPI_JSON
197 )
198 return Response(content={}, status_code=HTTP_404_NOT_FOUND)
199
200 @get(path="/", media_type=MediaType.HTML, include_in_schema=False)
201 def root(self, request: Request) -> Response:
202 """Render a static documentation site.
203
204 The site to be rendered is based on the `root_schema_site` value set in the
205 application's [OpenAPIConfig][starlite.config.openapi.OpenAPIConfig].
206 Defaults to `redoc`.
207
208 Args:
209 request:
210 A [Request][starlite.connection.Request] instance.
211
212 Returns:
213 A response with the rendered site defined in root_schema_site.
214
215 Raises:
216 ImproperlyConfiguredException: If the application `openapi_config` attribute is `None`.
217 """
218 config = request.app.openapi_config
219 if not config: # pragma: no cover
220 raise ImproperlyConfiguredException(MSG_OPENAPI_NOT_INITIALIZED)
221
222 render_method = self.render_methods_map[config.root_schema_site]
223
224 if self.should_serve_endpoint(request):
225 return Response(content=render_method(request), media_type=MediaType.HTML)
226
227 return Response(
228 content=self.render_404_page(),
229 status_code=HTTP_404_NOT_FOUND,
230 media_type=MediaType.HTML,
231 )
232
233 @get(path="/swagger", media_type=MediaType.HTML, include_in_schema=False)
234 def swagger_ui(self, request: Request) -> Response:
235 """Route handler responsible for rendering Swagger-UI.
236
237 Args:
238 request:
239 A [Request][starlite.connection.Request] instance.
240
241 Returns:
242 response: With a rendered swagger documentation site
243 """
244 if not request.app.openapi_config: # pragma: no cover
245 raise ImproperlyConfiguredException(MSG_OPENAPI_NOT_INITIALIZED)
246
247 if self.should_serve_endpoint(request):
248 return Response(content=self.render_swagger_ui(request), media_type=MediaType.HTML)
249 return Response(
250 content=self.render_404_page(),
251 status_code=HTTP_404_NOT_FOUND,
252 media_type=MediaType.HTML,
253 )
254
255 @get(path="/elements", media_type=MediaType.HTML, include_in_schema=False)
256 def stoplight_elements(self, request: Request) -> Response:
257 """Route handler responsible for rendering StopLight Elements.
258
259 Args:
260 request:
261 A [Request][starlite.connection.Request] instance.
262
263 Returns:
264 A response with a rendered stoplight elements documentation site
265 """
266 if not request.app.openapi_config: # pragma: no cover
267 raise ImproperlyConfiguredException(MSG_OPENAPI_NOT_INITIALIZED)
268
269 if self.should_serve_endpoint(request):
270 return Response(content=self.render_stoplight_elements(request), media_type=MediaType.HTML)
271 return Response(content=self.render_404_page(), status_code=HTTP_404_NOT_FOUND, media_type=MediaType.HTML)
272
273 @get(path="/redoc", media_type=MediaType.HTML, include_in_schema=False)
274 def redoc(self, request: Request) -> Response: # pragma: no cover
275 """Route handler responsible for rendering Redoc.
276
277 Args:
278 request:
279 A [Request][starlite.connection.Request] instance.
280
281 Returns:
282 A response with a rendered redoc documentation site
283 """
284 if not request.app.openapi_config: # pragma: no cover
285 raise ImproperlyConfiguredException(MSG_OPENAPI_NOT_INITIALIZED)
286
287 if self.should_serve_endpoint(request):
288 return Response(content=self.render_redoc(request), media_type=MediaType.HTML)
289 return Response(content=self.render_404_page(), status_code=HTTP_404_NOT_FOUND, media_type=MediaType.HTML)
290
291 def render_swagger_ui(self, request: Request) -> str:
292 """Render an HTML page for Swagger-UI.
293
294 Notes:
295 - override this method to customize the template.
296
297 Args:
298 request:
299 A [Request][starlite.connection.Request] instance.
300
301 Returns:
302 A rendered html string.
303 """
304 schema = self.get_schema_from_request(request)
305 # Note: Fix for Swagger rejection OpenAPI >=3.1
306 if not self._dumped_schema:
307 schema_copy = schema.copy()
308 schema_copy.openapi = "3.0.3"
309
310 self._dumped_modified_schema = encode_json(schema_copy.json(by_alias=True, exclude_none=True)).decode(
311 "utf-8"
312 )
313
314 head = f"""
315 <head>
316 <title>{schema.info.title}</title>
317 {self.favicon}
318 <meta charset="utf-8"/>
319 <meta name="viewport" content="width=device-width, initial-scale=1">
320 <link href="{self.swagger_css_url}" rel="stylesheet">
321 <script src="{self.swagger_ui_bundle_js_url}" crossorigin></script>
322 <script src="{self.swagger_ui_standalone_preset_js_url}" crossorigin></script>
323 <style>{self.style}</style>
324 </head>
325 """
326
327 body = f"""
328 <body>
329 <div id='swagger-container'/>
330 <script type="text/javascript">
331 const ui = SwaggerUIBundle({{
332 spec: JSON.parse({self._dumped_modified_schema}),
333 dom_id: '#swagger-container',
334 deepLinking: true,
335 showExtensions: true,
336 showCommonExtensions: true,
337 presets: [
338 SwaggerUIBundle.presets.apis,
339 SwaggerUIBundle.SwaggerUIStandalonePreset
340 ],
341 }})
342 </script>
343 </body>
344 """
345
346 return f"""
347 <!DOCTYPE html>
348 <html>
349 {head}
350 {body}
351 </html>
352 """
353
354 def render_stoplight_elements(self, request: Request) -> str:
355 """Render an HTML page for StopLight Elements.
356
357 Notes:
358 - override this method to customize the template.
359
360 Args:
361 request:
362 A [Request][starlite.connection.Request] instance.
363
364 Returns:
365 A rendered html string.
366 """
367 schema = self.get_schema_from_request(request)
368 head = f"""
369 <head>
370 <title>{schema.info.title}</title>
371 {self.favicon}
372 <meta charset="utf-8"/>
373 <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
374 <link rel="stylesheet" href="{self.stoplight_elements_css_url}">
375 <script src="{self.stoplight_elements_js_url}" crossorigin></script>
376 <style>{self.style}</style>
377 </head>
378 """
379
380 body = f"""
381 <body>
382 <elements-api
383 apiDescriptionUrl="{self.path}/openapi.json"
384 router="hash"
385 layout="sidebar"
386 />
387 </body>
388 """
389
390 return f"""
391 <!DOCTYPE html>
392 <html>
393 {head}
394 {body}
395 </html>
396 """
397
398 def render_redoc(self, request: Request) -> str: # pragma: no cover
399 """Render an HTML page for Redoc.
400
401 Notes:
402 - override this method to customize the template.
403
404 Args:
405 request:
406 A [Request][starlite.connection.Request] instance.
407
408 Returns:
409 A rendered html string.
410 """
411 schema = self.get_schema_from_request(request)
412
413 if not self._dumped_schema:
414 self._dumped_schema = encode_json(schema.json(by_alias=True, exclude_none=True)).decode("utf-8")
415
416 head = f"""
417 <head>
418 <title>{schema.info.title}</title>
419 {self.favicon}
420 <meta charset="utf-8"/>
421 <meta name="viewport" content="width=device-width, initial-scale=1">
422 """
423
424 if self.redoc_google_fonts:
425 head += """
426 <link href="https://fonts.googleapis.com/css?family=Montserrat:300,400,700|Roboto:300,400,700" rel="stylesheet">
427 """
428
429 head += f"""
430 <script src="{self.redoc_js_url}" crossorigin></script>
431 <style>
432 {self.style}
433 </style>
434 </head>
435 """
436
437 body = f"""
438 <body>
439 <div id='redoc-container'/>
440 <script type="text/javascript">
441 Redoc.init(
442 JSON.parse({self._dumped_schema}),
443 undefined,
444 document.getElementById('redoc-container')
445 )
446 </script>
447 </body>
448 """
449
450 return f"""
451 <!DOCTYPE html>
452 <html>
453 {head}
454 {body}
455 </html>
456 """
457
458 def render_404_page(self) -> str:
459 """Render an HTML 404 page.
460
461 Returns:
462 A rendered html string.
463 """
464
465 return f"""
466 <!DOCTYPE html>
467 <html>
468 <head>
469 <title>404 Not found</title>
470 {self.favicon}
471 <meta charset="utf-8"/>
472 <meta name="viewport" content="width=device-width, initial-scale=1">
473 <style>
474 {self.style}
475 </style>
476 </head>
477 <body>
478 <h1>Error 404</h1>
479 </body>
480 </html>
481 """
482
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/starlite/openapi/controller.py b/starlite/openapi/controller.py
--- a/starlite/openapi/controller.py
+++ b/starlite/openapi/controller.py
@@ -303,7 +303,7 @@
"""
schema = self.get_schema_from_request(request)
# Note: Fix for Swagger rejection OpenAPI >=3.1
- if not self._dumped_schema:
+ if not self._dumped_modified_schema:
schema_copy = schema.copy()
schema_copy.openapi = "3.0.3"
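For context, here is a minimal standalone sketch (a hypothetical `SchemaCache` class, not the actual Starlite implementation) of why the one-line guard change above matters when `/schema/redoc` is rendered before `/schema/swagger`:

```python
class SchemaCache:
    """Hypothetical illustration of the two schema caches -- not Starlite code."""

    _dumped_schema = ""            # filled when the Redoc page renders
    _dumped_modified_schema = ""   # filled when the Swagger page renders

    def render_redoc(self) -> str:
        if not self._dumped_schema:
            self._dumped_schema = '{"openapi": "3.1.0"}'
        return self._dumped_schema

    def render_swagger_buggy(self) -> str:
        # Pre-patch guard: it checks the Redoc cache, so once render_redoc()
        # has run this branch is skipped and the empty string below is what
        # ends up inside JSON.parse(...) on the Swagger page.
        if not self._dumped_schema:
            self._dumped_modified_schema = '{"openapi": "3.0.3"}'
        return self._dumped_modified_schema


cache = SchemaCache()
cache.render_redoc()
assert cache.render_swagger_buggy() == ""  # empty schema -> blank Swagger page
```

With the patched guard (`if not self._dumped_modified_schema:`), the Swagger-specific dump is built regardless of which documentation page was rendered first.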
| {"golden_diff": "diff --git a/starlite/openapi/controller.py b/starlite/openapi/controller.py\n--- a/starlite/openapi/controller.py\n+++ b/starlite/openapi/controller.py\n@@ -303,7 +303,7 @@\n \"\"\"\n schema = self.get_schema_from_request(request)\n # Note: Fix for Swagger rejection OpenAPI >=3.1\n- if not self._dumped_schema:\n+ if not self._dumped_modified_schema:\n schema_copy = schema.copy()\n schema_copy.openapi = \"3.0.3\"\n", "issue": "Bug: Viewing default schema then swagger schema results in error\n**Describe the bug**\r\nViewing the standard `/schema` route and then viewing `/schema/swagger` results in an empty page. The console log on the swagger route mentions parsing invalid JSON (Trying to parse nothing - `JSON.parse()`)\r\n\r\nI believe the problem is the caching located [here](https://github.com/starlite-api/starlite/blob/55eea965b2ac9e56aca77797512ab878b0b7499b/starlite/openapi/controller.py#L306). It should be checking for `self._dumped_modified_schema`. Changing to this seems to fix it.\r\n\r\n**To Reproduce**\r\nRun the hello world example from the documentation:\r\n```python\r\n\r\nfrom typing import Dict\r\n\r\nfrom starlite import Starlite, get\r\n\r\n\r\n@get(\"/\")\r\ndef hello_world() -> Dict[str, str]:\r\n \"\"\"Handler function that returns a greeting dictionary.\"\"\"\r\n return {\"hello\": \"world\"}\r\n\r\n\r\napp = Starlite(route_handlers=[hello_world])\r\n```\r\nThen visit `/schema` and the page should be fine. Then visit `/schema/swagger` and it should be an empty page.\r\n\r\n**Additional context**\r\nAdd any other context about the problem here.\r\n\n", "before_files": [{"content": "from functools import cached_property\nfrom typing import TYPE_CHECKING, Callable, Dict, Literal, cast\n\nfrom yaml import dump as dump_yaml\n\nfrom starlite.connection import Request\nfrom starlite.controller import Controller\nfrom starlite.enums import MediaType, OpenAPIMediaType\nfrom starlite.exceptions import ImproperlyConfiguredException\nfrom starlite.handlers import get\nfrom starlite.response import Response\nfrom starlite.status_codes import HTTP_404_NOT_FOUND\nfrom starlite.utils.serialization import encode_json\n\nif TYPE_CHECKING:\n\n from pydantic_openapi_schema.v3_1_0.open_api import OpenAPI\n\nMSG_OPENAPI_NOT_INITIALIZED = \"Starlite has not been instantiated with OpenAPIConfig\"\n\n\nclass OpenAPISchemaResponse(Response):\n \"\"\"Response class for OpenAPI Schemas.\"\"\"\n\n def render(self, content: \"OpenAPI\") -> bytes:\n \"\"\"Handle rendering of schema into the correct format - either YAML or JSON.\n\n Args:\n content: The [OpenAPI][pydantic_openapi_schema.v3_1_0.open_api.OpenAPI] instance to render.\n\n Returns:\n Rendered bytes.\n \"\"\"\n content_dict = content.dict(by_alias=True, exclude_none=True)\n if self.media_type == OpenAPIMediaType.OPENAPI_YAML:\n return cast(\"bytes\", dump_yaml(content_dict, default_flow_style=False).encode(\"utf-8\"))\n return encode_json(content_dict)\n\n\nclass OpenAPIController(Controller):\n \"\"\"Controller for OpenAPI endpoints.\"\"\"\n\n path: str = \"/schema\"\n \"\"\"Base path for the OpenAPI documentation endpoints.\"\"\"\n style: str = \"body { margin: 0; padding: 0 }\"\n \"\"\"Base styling of the html body.\"\"\"\n redoc_version: str = \"next\"\n \"\"\"Redoc version to download from the CDN.\"\"\"\n swagger_ui_version: str = \"4.15.5\"\n \"\"\"SwaggerUI version to download from the CDN.\"\"\"\n stoplight_elements_version: str = \"7.7.5\"\n \"\"\"StopLight Elements version to download from the CDN.\"\"\"\n 
favicon_url: str = \"\"\n \"\"\"URL to download a favicon from.\"\"\"\n redoc_google_fonts: bool = True\n \"\"\"Download google fonts via CDN.\n\n Should be set to `False` when not using a CDN.\n \"\"\"\n redoc_js_url: str = f\"https://cdn.jsdelivr.net/npm/redoc@{redoc_version}/bundles/redoc.standalone.js\"\n \"\"\"Download url for the Redoc JS bundle.\"\"\"\n swagger_css_url: str = f\"https://cdn.jsdelivr.net/npm/swagger-ui-dist@{swagger_ui_version}/swagger-ui.css\"\n \"\"\"Download url for the Swagger UI CSS bundle.\"\"\"\n swagger_ui_bundle_js_url: str = (\n f\"https://cdn.jsdelivr.net/npm/swagger-ui-dist@{swagger_ui_version}/swagger-ui-bundle.js\"\n )\n \"\"\"Download url for the Swagger UI JS bundle.\"\"\"\n swagger_ui_standalone_preset_js_url: str = (\n f\"https://cdn.jsdelivr.net/npm/swagger-ui-dist@{swagger_ui_version}/swagger-ui-standalone-preset.js\"\n )\n \"\"\"Download url for the Swagger Standalone Preset JS bundle.\"\"\"\n stoplight_elements_css_url: str = (\n f\"https://unpkg.com/@stoplight/elements@{stoplight_elements_version}/styles.min.css\"\n )\n \"\"\"Download url for the Stoplight Elements CSS bundle.\"\"\"\n stoplight_elements_js_url: str = (\n f\"https://unpkg.com/@stoplight/elements@{stoplight_elements_version}/web-components.min.js\"\n )\n \"\"\"Download url for the Stoplight Elements JS bundle.\"\"\"\n\n # internal\n _dumped_schema: str = \"\"\n # until swagger-ui supports v3.1.* of OpenAPI officially, we need to modify the schema for it and keep it\n # separate from the redoc version of the schema, which is unmodified.\n _dumped_modified_schema: str = \"\"\n\n @staticmethod\n def get_schema_from_request(request: Request) -> \"OpenAPI\":\n \"\"\"Return the OpenAPI pydantic model from the request instance.\n\n Args:\n request: A [Starlite][starlite.connection.Request] instance.\n\n Returns:\n An [OpenAPI][pydantic_openapi_schema.v3_1_0.open_api.OpenAPI] instance.\n\n Raises:\n ImproperlyConfiguredException: If the application `openapi_config` attribute is `None`.\n \"\"\"\n if not request.app.openapi_schema: # pragma: no cover\n raise ImproperlyConfiguredException(MSG_OPENAPI_NOT_INITIALIZED)\n return request.app.openapi_schema\n\n def should_serve_endpoint(self, request: \"Request\") -> bool:\n \"\"\"Verify that the requested path is within the enabled endpoints in the openapi_config.\n\n Args:\n request: To be tested if endpoint enabled.\n\n Returns:\n A boolean.\n\n Raises:\n ImproperlyConfiguredException: If the application `openapi_config` attribute is `None`.\n \"\"\"\n if not request.app.openapi_config: # pragma: no cover\n raise ImproperlyConfiguredException(\"Starlite has not been instantiated with an OpenAPIConfig\")\n\n asgi_root_path = set(filter(None, request.scope.get(\"root_path\", \"\").split(\"/\")))\n full_request_path = set(filter(None, request.url.path.split(\"/\")))\n request_path = full_request_path.difference(asgi_root_path)\n root_path = set(filter(None, self.path.split(\"/\")))\n\n config = request.app.openapi_config\n\n if request_path == root_path and config.root_schema_site in config.enabled_endpoints:\n return True\n\n if request_path & config.enabled_endpoints:\n return True\n\n return False\n\n @property\n def favicon(self) -> str:\n \"\"\"Return favicon `<link>` tag, if applicable.\n\n Returns:\n A `<link>` tag if self.favicon_url is not empty, otherwise returns a placeholder meta tag.\n \"\"\"\n return f\"<link rel='icon' type='image/x-icon' href='{self.favicon_url}'>\" if self.favicon_url else \"<meta/>\"\n\n @cached_property\n def 
render_methods_map(self) -> Dict[Literal[\"redoc\", \"swagger\", \"elements\"], Callable[[Request], str]]:\n \"\"\"Map render method names to render methods.\n\n Returns:\n A mapping of string keys to render methods.\n \"\"\"\n return {\n \"redoc\": self.render_redoc,\n \"swagger\": self.render_swagger_ui,\n \"elements\": self.render_stoplight_elements,\n }\n\n @get(\n path=\"/openapi.yaml\",\n media_type=OpenAPIMediaType.OPENAPI_YAML,\n include_in_schema=False,\n )\n def retrieve_schema_yaml(self, request: Request) -> Response:\n \"\"\"Return the OpenAPI schema as YAML with an 'application/vnd.oai.openapi' Content-Type header.\n\n Args:\n request:\n A [Request][starlite.connection.Request] instance.\n\n Returns:\n A Response instance with the YAML object rendered into a string.\n \"\"\"\n if not request.app.openapi_config: # pragma: no cover\n raise ImproperlyConfiguredException(MSG_OPENAPI_NOT_INITIALIZED)\n\n if self.should_serve_endpoint(request):\n return OpenAPISchemaResponse(\n content=self.get_schema_from_request(request), media_type=OpenAPIMediaType.OPENAPI_YAML\n )\n return Response(content={}, status_code=HTTP_404_NOT_FOUND)\n\n @get(path=\"/openapi.json\", media_type=OpenAPIMediaType.OPENAPI_JSON, include_in_schema=False)\n def retrieve_schema_json(self, request: Request) -> Response:\n \"\"\"Return the OpenAPI schema as JSON with an 'application/vnd.oai.openapi+json' Content-Type header.\n\n Args:\n request:\n A [Request][starlite.connection.Request] instance.\n\n Returns:\n A Response instance with the JSON object rendered into a string.\n \"\"\"\n if not request.app.openapi_config: # pragma: no cover\n raise ImproperlyConfiguredException(MSG_OPENAPI_NOT_INITIALIZED)\n\n if self.should_serve_endpoint(request):\n return OpenAPISchemaResponse(\n content=self.get_schema_from_request(request), media_type=OpenAPIMediaType.OPENAPI_JSON\n )\n return Response(content={}, status_code=HTTP_404_NOT_FOUND)\n\n @get(path=\"/\", media_type=MediaType.HTML, include_in_schema=False)\n def root(self, request: Request) -> Response:\n \"\"\"Render a static documentation site.\n\n The site to be rendered is based on the `root_schema_site` value set in the\n application's [OpenAPIConfig][starlite.config.openapi.OpenAPIConfig].\n Defaults to `redoc`.\n\n Args:\n request:\n A [Request][starlite.connection.Request] instance.\n\n Returns:\n A response with the rendered site defined in root_schema_site.\n\n Raises:\n ImproperlyConfiguredException: If the application `openapi_config` attribute is `None`.\n \"\"\"\n config = request.app.openapi_config\n if not config: # pragma: no cover\n raise ImproperlyConfiguredException(MSG_OPENAPI_NOT_INITIALIZED)\n\n render_method = self.render_methods_map[config.root_schema_site]\n\n if self.should_serve_endpoint(request):\n return Response(content=render_method(request), media_type=MediaType.HTML)\n\n return Response(\n content=self.render_404_page(),\n status_code=HTTP_404_NOT_FOUND,\n media_type=MediaType.HTML,\n )\n\n @get(path=\"/swagger\", media_type=MediaType.HTML, include_in_schema=False)\n def swagger_ui(self, request: Request) -> Response:\n \"\"\"Route handler responsible for rendering Swagger-UI.\n\n Args:\n request:\n A [Request][starlite.connection.Request] instance.\n\n Returns:\n response: With a rendered swagger documentation site\n \"\"\"\n if not request.app.openapi_config: # pragma: no cover\n raise ImproperlyConfiguredException(MSG_OPENAPI_NOT_INITIALIZED)\n\n if self.should_serve_endpoint(request):\n return 
Response(content=self.render_swagger_ui(request), media_type=MediaType.HTML)\n return Response(\n content=self.render_404_page(),\n status_code=HTTP_404_NOT_FOUND,\n media_type=MediaType.HTML,\n )\n\n @get(path=\"/elements\", media_type=MediaType.HTML, include_in_schema=False)\n def stoplight_elements(self, request: Request) -> Response:\n \"\"\"Route handler responsible for rendering StopLight Elements.\n\n Args:\n request:\n A [Request][starlite.connection.Request] instance.\n\n Returns:\n A response with a rendered stoplight elements documentation site\n \"\"\"\n if not request.app.openapi_config: # pragma: no cover\n raise ImproperlyConfiguredException(MSG_OPENAPI_NOT_INITIALIZED)\n\n if self.should_serve_endpoint(request):\n return Response(content=self.render_stoplight_elements(request), media_type=MediaType.HTML)\n return Response(content=self.render_404_page(), status_code=HTTP_404_NOT_FOUND, media_type=MediaType.HTML)\n\n @get(path=\"/redoc\", media_type=MediaType.HTML, include_in_schema=False)\n def redoc(self, request: Request) -> Response: # pragma: no cover\n \"\"\"Route handler responsible for rendering Redoc.\n\n Args:\n request:\n A [Request][starlite.connection.Request] instance.\n\n Returns:\n A response with a rendered redoc documentation site\n \"\"\"\n if not request.app.openapi_config: # pragma: no cover\n raise ImproperlyConfiguredException(MSG_OPENAPI_NOT_INITIALIZED)\n\n if self.should_serve_endpoint(request):\n return Response(content=self.render_redoc(request), media_type=MediaType.HTML)\n return Response(content=self.render_404_page(), status_code=HTTP_404_NOT_FOUND, media_type=MediaType.HTML)\n\n def render_swagger_ui(self, request: Request) -> str:\n \"\"\"Render an HTML page for Swagger-UI.\n\n Notes:\n - override this method to customize the template.\n\n Args:\n request:\n A [Request][starlite.connection.Request] instance.\n\n Returns:\n A rendered html string.\n \"\"\"\n schema = self.get_schema_from_request(request)\n # Note: Fix for Swagger rejection OpenAPI >=3.1\n if not self._dumped_schema:\n schema_copy = schema.copy()\n schema_copy.openapi = \"3.0.3\"\n\n self._dumped_modified_schema = encode_json(schema_copy.json(by_alias=True, exclude_none=True)).decode(\n \"utf-8\"\n )\n\n head = f\"\"\"\n <head>\n <title>{schema.info.title}</title>\n {self.favicon}\n <meta charset=\"utf-8\"/>\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1\">\n <link href=\"{self.swagger_css_url}\" rel=\"stylesheet\">\n <script src=\"{self.swagger_ui_bundle_js_url}\" crossorigin></script>\n <script src=\"{self.swagger_ui_standalone_preset_js_url}\" crossorigin></script>\n <style>{self.style}</style>\n </head>\n \"\"\"\n\n body = f\"\"\"\n <body>\n <div id='swagger-container'/>\n <script type=\"text/javascript\">\n const ui = SwaggerUIBundle({{\n spec: JSON.parse({self._dumped_modified_schema}),\n dom_id: '#swagger-container',\n deepLinking: true,\n showExtensions: true,\n showCommonExtensions: true,\n presets: [\n SwaggerUIBundle.presets.apis,\n SwaggerUIBundle.SwaggerUIStandalonePreset\n ],\n }})\n </script>\n </body>\n \"\"\"\n\n return f\"\"\"\n <!DOCTYPE html>\n <html>\n {head}\n {body}\n </html>\n \"\"\"\n\n def render_stoplight_elements(self, request: Request) -> str:\n \"\"\"Render an HTML page for StopLight Elements.\n\n Notes:\n - override this method to customize the template.\n\n Args:\n request:\n A [Request][starlite.connection.Request] instance.\n\n Returns:\n A rendered html string.\n \"\"\"\n schema = 
self.get_schema_from_request(request)\n head = f\"\"\"\n <head>\n <title>{schema.info.title}</title>\n {self.favicon}\n <meta charset=\"utf-8\"/>\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1, shrink-to-fit=no\">\n <link rel=\"stylesheet\" href=\"{self.stoplight_elements_css_url}\">\n <script src=\"{self.stoplight_elements_js_url}\" crossorigin></script>\n <style>{self.style}</style>\n </head>\n \"\"\"\n\n body = f\"\"\"\n <body>\n <elements-api\n apiDescriptionUrl=\"{self.path}/openapi.json\"\n router=\"hash\"\n layout=\"sidebar\"\n />\n </body>\n \"\"\"\n\n return f\"\"\"\n <!DOCTYPE html>\n <html>\n {head}\n {body}\n </html>\n \"\"\"\n\n def render_redoc(self, request: Request) -> str: # pragma: no cover\n \"\"\"Render an HTML page for Redoc.\n\n Notes:\n - override this method to customize the template.\n\n Args:\n request:\n A [Request][starlite.connection.Request] instance.\n\n Returns:\n A rendered html string.\n \"\"\"\n schema = self.get_schema_from_request(request)\n\n if not self._dumped_schema:\n self._dumped_schema = encode_json(schema.json(by_alias=True, exclude_none=True)).decode(\"utf-8\")\n\n head = f\"\"\"\n <head>\n <title>{schema.info.title}</title>\n {self.favicon}\n <meta charset=\"utf-8\"/>\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1\">\n \"\"\"\n\n if self.redoc_google_fonts:\n head += \"\"\"\n <link href=\"https://fonts.googleapis.com/css?family=Montserrat:300,400,700|Roboto:300,400,700\" rel=\"stylesheet\">\n \"\"\"\n\n head += f\"\"\"\n <script src=\"{self.redoc_js_url}\" crossorigin></script>\n <style>\n {self.style}\n </style>\n </head>\n \"\"\"\n\n body = f\"\"\"\n <body>\n <div id='redoc-container'/>\n <script type=\"text/javascript\">\n Redoc.init(\n JSON.parse({self._dumped_schema}),\n undefined,\n document.getElementById('redoc-container')\n )\n </script>\n </body>\n \"\"\"\n\n return f\"\"\"\n <!DOCTYPE html>\n <html>\n {head}\n {body}\n </html>\n \"\"\"\n\n def render_404_page(self) -> str:\n \"\"\"Render an HTML 404 page.\n\n Returns:\n A rendered html string.\n \"\"\"\n\n return f\"\"\"\n <!DOCTYPE html>\n <html>\n <head>\n <title>404 Not found</title>\n {self.favicon}\n <meta charset=\"utf-8\"/>\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1\">\n <style>\n {self.style}\n </style>\n </head>\n <body>\n <h1>Error 404</h1>\n </body>\n </html>\n \"\"\"\n", "path": "starlite/openapi/controller.py"}], "after_files": [{"content": "from functools import cached_property\nfrom typing import TYPE_CHECKING, Callable, Dict, Literal, cast\n\nfrom yaml import dump as dump_yaml\n\nfrom starlite.connection import Request\nfrom starlite.controller import Controller\nfrom starlite.enums import MediaType, OpenAPIMediaType\nfrom starlite.exceptions import ImproperlyConfiguredException\nfrom starlite.handlers import get\nfrom starlite.response import Response\nfrom starlite.status_codes import HTTP_404_NOT_FOUND\nfrom starlite.utils.serialization import encode_json\n\nif TYPE_CHECKING:\n\n from pydantic_openapi_schema.v3_1_0.open_api import OpenAPI\n\nMSG_OPENAPI_NOT_INITIALIZED = \"Starlite has not been instantiated with OpenAPIConfig\"\n\n\nclass OpenAPISchemaResponse(Response):\n \"\"\"Response class for OpenAPI Schemas.\"\"\"\n\n def render(self, content: \"OpenAPI\") -> bytes:\n \"\"\"Handle rendering of schema into the correct format - either YAML or JSON.\n\n Args:\n content: The [OpenAPI][pydantic_openapi_schema.v3_1_0.open_api.OpenAPI] instance to render.\n\n Returns:\n Rendered bytes.\n 
\"\"\"\n content_dict = content.dict(by_alias=True, exclude_none=True)\n if self.media_type == OpenAPIMediaType.OPENAPI_YAML:\n return cast(\"bytes\", dump_yaml(content_dict, default_flow_style=False).encode(\"utf-8\"))\n return encode_json(content_dict)\n\n\nclass OpenAPIController(Controller):\n \"\"\"Controller for OpenAPI endpoints.\"\"\"\n\n path: str = \"/schema\"\n \"\"\"Base path for the OpenAPI documentation endpoints.\"\"\"\n style: str = \"body { margin: 0; padding: 0 }\"\n \"\"\"Base styling of the html body.\"\"\"\n redoc_version: str = \"next\"\n \"\"\"Redoc version to download from the CDN.\"\"\"\n swagger_ui_version: str = \"4.15.5\"\n \"\"\"SwaggerUI version to download from the CDN.\"\"\"\n stoplight_elements_version: str = \"7.7.5\"\n \"\"\"StopLight Elements version to download from the CDN.\"\"\"\n favicon_url: str = \"\"\n \"\"\"URL to download a favicon from.\"\"\"\n redoc_google_fonts: bool = True\n \"\"\"Download google fonts via CDN.\n\n Should be set to `False` when not using a CDN.\n \"\"\"\n redoc_js_url: str = f\"https://cdn.jsdelivr.net/npm/redoc@{redoc_version}/bundles/redoc.standalone.js\"\n \"\"\"Download url for the Redoc JS bundle.\"\"\"\n swagger_css_url: str = f\"https://cdn.jsdelivr.net/npm/swagger-ui-dist@{swagger_ui_version}/swagger-ui.css\"\n \"\"\"Download url for the Swagger UI CSS bundle.\"\"\"\n swagger_ui_bundle_js_url: str = (\n f\"https://cdn.jsdelivr.net/npm/swagger-ui-dist@{swagger_ui_version}/swagger-ui-bundle.js\"\n )\n \"\"\"Download url for the Swagger UI JS bundle.\"\"\"\n swagger_ui_standalone_preset_js_url: str = (\n f\"https://cdn.jsdelivr.net/npm/swagger-ui-dist@{swagger_ui_version}/swagger-ui-standalone-preset.js\"\n )\n \"\"\"Download url for the Swagger Standalone Preset JS bundle.\"\"\"\n stoplight_elements_css_url: str = (\n f\"https://unpkg.com/@stoplight/elements@{stoplight_elements_version}/styles.min.css\"\n )\n \"\"\"Download url for the Stoplight Elements CSS bundle.\"\"\"\n stoplight_elements_js_url: str = (\n f\"https://unpkg.com/@stoplight/elements@{stoplight_elements_version}/web-components.min.js\"\n )\n \"\"\"Download url for the Stoplight Elements JS bundle.\"\"\"\n\n # internal\n _dumped_schema: str = \"\"\n # until swagger-ui supports v3.1.* of OpenAPI officially, we need to modify the schema for it and keep it\n # separate from the redoc version of the schema, which is unmodified.\n _dumped_modified_schema: str = \"\"\n\n @staticmethod\n def get_schema_from_request(request: Request) -> \"OpenAPI\":\n \"\"\"Return the OpenAPI pydantic model from the request instance.\n\n Args:\n request: A [Starlite][starlite.connection.Request] instance.\n\n Returns:\n An [OpenAPI][pydantic_openapi_schema.v3_1_0.open_api.OpenAPI] instance.\n\n Raises:\n ImproperlyConfiguredException: If the application `openapi_config` attribute is `None`.\n \"\"\"\n if not request.app.openapi_schema: # pragma: no cover\n raise ImproperlyConfiguredException(MSG_OPENAPI_NOT_INITIALIZED)\n return request.app.openapi_schema\n\n def should_serve_endpoint(self, request: \"Request\") -> bool:\n \"\"\"Verify that the requested path is within the enabled endpoints in the openapi_config.\n\n Args:\n request: To be tested if endpoint enabled.\n\n Returns:\n A boolean.\n\n Raises:\n ImproperlyConfiguredException: If the application `openapi_config` attribute is `None`.\n \"\"\"\n if not request.app.openapi_config: # pragma: no cover\n raise ImproperlyConfiguredException(\"Starlite has not been instantiated with an OpenAPIConfig\")\n\n asgi_root_path = 
set(filter(None, request.scope.get(\"root_path\", \"\").split(\"/\")))\n full_request_path = set(filter(None, request.url.path.split(\"/\")))\n request_path = full_request_path.difference(asgi_root_path)\n root_path = set(filter(None, self.path.split(\"/\")))\n\n config = request.app.openapi_config\n\n if request_path == root_path and config.root_schema_site in config.enabled_endpoints:\n return True\n\n if request_path & config.enabled_endpoints:\n return True\n\n return False\n\n @property\n def favicon(self) -> str:\n \"\"\"Return favicon `<link>` tag, if applicable.\n\n Returns:\n A `<link>` tag if self.favicon_url is not empty, otherwise returns a placeholder meta tag.\n \"\"\"\n return f\"<link rel='icon' type='image/x-icon' href='{self.favicon_url}'>\" if self.favicon_url else \"<meta/>\"\n\n @cached_property\n def render_methods_map(self) -> Dict[Literal[\"redoc\", \"swagger\", \"elements\"], Callable[[Request], str]]:\n \"\"\"Map render method names to render methods.\n\n Returns:\n A mapping of string keys to render methods.\n \"\"\"\n return {\n \"redoc\": self.render_redoc,\n \"swagger\": self.render_swagger_ui,\n \"elements\": self.render_stoplight_elements,\n }\n\n @get(\n path=\"/openapi.yaml\",\n media_type=OpenAPIMediaType.OPENAPI_YAML,\n include_in_schema=False,\n )\n def retrieve_schema_yaml(self, request: Request) -> Response:\n \"\"\"Return the OpenAPI schema as YAML with an 'application/vnd.oai.openapi' Content-Type header.\n\n Args:\n request:\n A [Request][starlite.connection.Request] instance.\n\n Returns:\n A Response instance with the YAML object rendered into a string.\n \"\"\"\n if not request.app.openapi_config: # pragma: no cover\n raise ImproperlyConfiguredException(MSG_OPENAPI_NOT_INITIALIZED)\n\n if self.should_serve_endpoint(request):\n return OpenAPISchemaResponse(\n content=self.get_schema_from_request(request), media_type=OpenAPIMediaType.OPENAPI_YAML\n )\n return Response(content={}, status_code=HTTP_404_NOT_FOUND)\n\n @get(path=\"/openapi.json\", media_type=OpenAPIMediaType.OPENAPI_JSON, include_in_schema=False)\n def retrieve_schema_json(self, request: Request) -> Response:\n \"\"\"Return the OpenAPI schema as JSON with an 'application/vnd.oai.openapi+json' Content-Type header.\n\n Args:\n request:\n A [Request][starlite.connection.Request] instance.\n\n Returns:\n A Response instance with the JSON object rendered into a string.\n \"\"\"\n if not request.app.openapi_config: # pragma: no cover\n raise ImproperlyConfiguredException(MSG_OPENAPI_NOT_INITIALIZED)\n\n if self.should_serve_endpoint(request):\n return OpenAPISchemaResponse(\n content=self.get_schema_from_request(request), media_type=OpenAPIMediaType.OPENAPI_JSON\n )\n return Response(content={}, status_code=HTTP_404_NOT_FOUND)\n\n @get(path=\"/\", media_type=MediaType.HTML, include_in_schema=False)\n def root(self, request: Request) -> Response:\n \"\"\"Render a static documentation site.\n\n The site to be rendered is based on the `root_schema_site` value set in the\n application's [OpenAPIConfig][starlite.config.openapi.OpenAPIConfig].\n Defaults to `redoc`.\n\n Args:\n request:\n A [Request][starlite.connection.Request] instance.\n\n Returns:\n A response with the rendered site defined in root_schema_site.\n\n Raises:\n ImproperlyConfiguredException: If the application `openapi_config` attribute is `None`.\n \"\"\"\n config = request.app.openapi_config\n if not config: # pragma: no cover\n raise ImproperlyConfiguredException(MSG_OPENAPI_NOT_INITIALIZED)\n\n render_method = 
self.render_methods_map[config.root_schema_site]\n\n if self.should_serve_endpoint(request):\n return Response(content=render_method(request), media_type=MediaType.HTML)\n\n return Response(\n content=self.render_404_page(),\n status_code=HTTP_404_NOT_FOUND,\n media_type=MediaType.HTML,\n )\n\n @get(path=\"/swagger\", media_type=MediaType.HTML, include_in_schema=False)\n def swagger_ui(self, request: Request) -> Response:\n \"\"\"Route handler responsible for rendering Swagger-UI.\n\n Args:\n request:\n A [Request][starlite.connection.Request] instance.\n\n Returns:\n response: With a rendered swagger documentation site\n \"\"\"\n if not request.app.openapi_config: # pragma: no cover\n raise ImproperlyConfiguredException(MSG_OPENAPI_NOT_INITIALIZED)\n\n if self.should_serve_endpoint(request):\n return Response(content=self.render_swagger_ui(request), media_type=MediaType.HTML)\n return Response(\n content=self.render_404_page(),\n status_code=HTTP_404_NOT_FOUND,\n media_type=MediaType.HTML,\n )\n\n @get(path=\"/elements\", media_type=MediaType.HTML, include_in_schema=False)\n def stoplight_elements(self, request: Request) -> Response:\n \"\"\"Route handler responsible for rendering StopLight Elements.\n\n Args:\n request:\n A [Request][starlite.connection.Request] instance.\n\n Returns:\n A response with a rendered stoplight elements documentation site\n \"\"\"\n if not request.app.openapi_config: # pragma: no cover\n raise ImproperlyConfiguredException(MSG_OPENAPI_NOT_INITIALIZED)\n\n if self.should_serve_endpoint(request):\n return Response(content=self.render_stoplight_elements(request), media_type=MediaType.HTML)\n return Response(content=self.render_404_page(), status_code=HTTP_404_NOT_FOUND, media_type=MediaType.HTML)\n\n @get(path=\"/redoc\", media_type=MediaType.HTML, include_in_schema=False)\n def redoc(self, request: Request) -> Response: # pragma: no cover\n \"\"\"Route handler responsible for rendering Redoc.\n\n Args:\n request:\n A [Request][starlite.connection.Request] instance.\n\n Returns:\n A response with a rendered redoc documentation site\n \"\"\"\n if not request.app.openapi_config: # pragma: no cover\n raise ImproperlyConfiguredException(MSG_OPENAPI_NOT_INITIALIZED)\n\n if self.should_serve_endpoint(request):\n return Response(content=self.render_redoc(request), media_type=MediaType.HTML)\n return Response(content=self.render_404_page(), status_code=HTTP_404_NOT_FOUND, media_type=MediaType.HTML)\n\n def render_swagger_ui(self, request: Request) -> str:\n \"\"\"Render an HTML page for Swagger-UI.\n\n Notes:\n - override this method to customize the template.\n\n Args:\n request:\n A [Request][starlite.connection.Request] instance.\n\n Returns:\n A rendered html string.\n \"\"\"\n schema = self.get_schema_from_request(request)\n # Note: Fix for Swagger rejection OpenAPI >=3.1\n if not self._dumped_modified_schema:\n schema_copy = schema.copy()\n schema_copy.openapi = \"3.0.3\"\n\n self._dumped_modified_schema = encode_json(schema_copy.json(by_alias=True, exclude_none=True)).decode(\n \"utf-8\"\n )\n\n head = f\"\"\"\n <head>\n <title>{schema.info.title}</title>\n {self.favicon}\n <meta charset=\"utf-8\"/>\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1\">\n <link href=\"{self.swagger_css_url}\" rel=\"stylesheet\">\n <script src=\"{self.swagger_ui_bundle_js_url}\" crossorigin></script>\n <script src=\"{self.swagger_ui_standalone_preset_js_url}\" crossorigin></script>\n <style>{self.style}</style>\n </head>\n \"\"\"\n\n body = f\"\"\"\n <body>\n 
<div id='swagger-container'/>\n <script type=\"text/javascript\">\n const ui = SwaggerUIBundle({{\n spec: JSON.parse({self._dumped_modified_schema}),\n dom_id: '#swagger-container',\n deepLinking: true,\n showExtensions: true,\n showCommonExtensions: true,\n presets: [\n SwaggerUIBundle.presets.apis,\n SwaggerUIBundle.SwaggerUIStandalonePreset\n ],\n }})\n </script>\n </body>\n \"\"\"\n\n return f\"\"\"\n <!DOCTYPE html>\n <html>\n {head}\n {body}\n </html>\n \"\"\"\n\n def render_stoplight_elements(self, request: Request) -> str:\n \"\"\"Render an HTML page for StopLight Elements.\n\n Notes:\n - override this method to customize the template.\n\n Args:\n request:\n A [Request][starlite.connection.Request] instance.\n\n Returns:\n A rendered html string.\n \"\"\"\n schema = self.get_schema_from_request(request)\n head = f\"\"\"\n <head>\n <title>{schema.info.title}</title>\n {self.favicon}\n <meta charset=\"utf-8\"/>\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1, shrink-to-fit=no\">\n <link rel=\"stylesheet\" href=\"{self.stoplight_elements_css_url}\">\n <script src=\"{self.stoplight_elements_js_url}\" crossorigin></script>\n <style>{self.style}</style>\n </head>\n \"\"\"\n\n body = f\"\"\"\n <body>\n <elements-api\n apiDescriptionUrl=\"{self.path}/openapi.json\"\n router=\"hash\"\n layout=\"sidebar\"\n />\n </body>\n \"\"\"\n\n return f\"\"\"\n <!DOCTYPE html>\n <html>\n {head}\n {body}\n </html>\n \"\"\"\n\n def render_redoc(self, request: Request) -> str: # pragma: no cover\n \"\"\"Render an HTML page for Redoc.\n\n Notes:\n - override this method to customize the template.\n\n Args:\n request:\n A [Request][starlite.connection.Request] instance.\n\n Returns:\n A rendered html string.\n \"\"\"\n schema = self.get_schema_from_request(request)\n\n if not self._dumped_schema:\n self._dumped_schema = encode_json(schema.json(by_alias=True, exclude_none=True)).decode(\"utf-8\")\n\n head = f\"\"\"\n <head>\n <title>{schema.info.title}</title>\n {self.favicon}\n <meta charset=\"utf-8\"/>\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1\">\n \"\"\"\n\n if self.redoc_google_fonts:\n head += \"\"\"\n <link href=\"https://fonts.googleapis.com/css?family=Montserrat:300,400,700|Roboto:300,400,700\" rel=\"stylesheet\">\n \"\"\"\n\n head += f\"\"\"\n <script src=\"{self.redoc_js_url}\" crossorigin></script>\n <style>\n {self.style}\n </style>\n </head>\n \"\"\"\n\n body = f\"\"\"\n <body>\n <div id='redoc-container'/>\n <script type=\"text/javascript\">\n Redoc.init(\n JSON.parse({self._dumped_schema}),\n undefined,\n document.getElementById('redoc-container')\n )\n </script>\n </body>\n \"\"\"\n\n return f\"\"\"\n <!DOCTYPE html>\n <html>\n {head}\n {body}\n </html>\n \"\"\"\n\n def render_404_page(self) -> str:\n \"\"\"Render an HTML 404 page.\n\n Returns:\n A rendered html string.\n \"\"\"\n\n return f\"\"\"\n <!DOCTYPE html>\n <html>\n <head>\n <title>404 Not found</title>\n {self.favicon}\n <meta charset=\"utf-8\"/>\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1\">\n <style>\n {self.style}\n </style>\n </head>\n <body>\n <h1>Error 404</h1>\n </body>\n </html>\n \"\"\"\n", "path": "starlite/openapi/controller.py"}]} |
gh_patches_debug_1141 | rasdani/github-patches | git_diff | aio-libs__aiohttp-5780 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix missing references in docs
**:wave: THIS IS A GOOD FIRST ISSUE**
🐞 **Describe the bug**
I noticed that the docs have some broken RST references to internal and external intersphinx objects. We should fix those and enable `nitpicky = True` by default.
_P.S. It is okay to send small PRs to fix references gradually_
💡 **To Reproduce**
Uncomment `nitpicky = True` in `docs/conf.py` and run `make doc`.
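For reference, a minimal sketch of the Sphinx settings this involves — example values only, not the repository's actual `docs/conf.py`:

```python
# docs/conf.py (sketch): nitpicky mode turns every unresolved cross-reference
# into a build warning like the ones listed below, instead of ignoring it.
nitpicky = True

# Targets that genuinely cannot resolve can be allowlisted one by one;
# the entry below is a placeholder, not a real aiohttp target.
nitpick_ignore = [
    ("py:class", "some_module.SomePrivateType"),
]
```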
💡 **Expected behavior**
No errors.
📋 **Logs/tracebacks**
```console
/home/wk/src/github/aio-libs/aiohttp/docs/abc.rst:22: WARNING: py:class reference target not found: AbstractRouter
/home/wk/src/github/aio-libs/aiohttp/docs/abc.rst:22: WARNING: py:meth reference target not found: AbstractRouter.resolve
/home/wk/src/github/aio-libs/aiohttp/docs/abc.rst:22: WARNING: py:class reference target not found: AbstractMatchInfo
/home/wk/src/github/aio-libs/aiohttp/docs/abc.rst:26: WARNING: py:meth reference target not found: AbstractMatchInfo.handler
/home/wk/src/github/aio-libs/aiohttp/docs/abc.rst:26: WARNING: py:attr reference target not found: AbstractMatchInfo.http_exception
/home/wk/src/github/aio-libs/aiohttp/docs/abc.rst:30: WARNING: py:attr reference target not found: AbstractMatchInfo.http_exception
/home/wk/src/github/aio-libs/aiohttp/docs/abc.rst:30: WARNING: py:meth reference target not found: AbstractMatchInfo.handler
/home/wk/src/github/aio-libs/aiohttp/docs/abc.rst:30: WARNING: py:attr reference target not found: AbstractMatchInfo.http_exception
/home/wk/src/github/aio-libs/aiohttp/docs/abc.rst:52: WARNING: py:class reference target not found: AbstractMatchInfo
/home/wk/src/github/aio-libs/aiohttp/docs/abc.rst:57: WARNING: py:meth reference target not found: AbstractRouter.resolve
/home/wk/src/github/aio-libs/aiohttp/docs/abc.rst:105: WARNING: py:attr reference target not found: ClientSession.cookie_jar
/home/wk/src/github/aio-libs/aiohttp/docs/abc.rst:169: WARNING: py:class reference target not found: RequestHandler
/home/wk/src/github/aio-libs/aiohttp/docs/client_advanced.rst:190: WARNING: py:class reference target not found: aiohttp.SimpleCookie
/home/wk/src/github/aio-libs/aiohttp/docs/client_advanced.rst:296: WARNING: py:class reference target not found: SimpleNamespace
/home/wk/src/github/aio-libs/aiohttp/docs/client_advanced.rst:312: WARNING: py:class reference target not found: SimpleNampespace
/home/wk/src/github/aio-libs/aiohttp/docs/client_advanced.rst:425: WARNING: py:class reference target not found: aiohttp.NamedPipeConnector
/home/wk/src/github/aio-libs/aiohttp/docs/client_advanced.rst:588: WARNING: py:class reference target not found: aiohttp.connector.BaseConnector
/home/wk/src/github/aio-libs/aiohttp/docs/client_quickstart.rst:102: WARNING: py:class reference target not found: MultiDict
/home/wk/src/github/aio-libs/aiohttp/docs/client_quickstart.rst:320: WARNING: py:class reference target not found: aiohttp.streams.StreamReader
/home/wk/src/github/aio-libs/aiohttp/docs/client_quickstart.rst:330: WARNING: py:mod reference target not found: aiohttp
/home/wk/src/github/aio-libs/aiohttp/docs/client_quickstart.rst:369: WARNING: py:mod reference target not found: aiohttp
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:91: WARNING: py:class reference target not found: aiohttp.istr
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:98: WARNING: py:class reference target not found: AbstractCookieJar
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:779: WARNING: py:class reference target not found: callable
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:213: WARNING: py:class reference target not found: aiohttp.AbstractCookieJar
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:264: WARNING: py:class reference target not found: aiohttp.istr
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:296: WARNING: py:class reference target not found: callable
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:353: WARNING: py:class reference target not found: aiohttp.MultiDict
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:353: WARNING: py:class reference target not found: aiohttp.MultiDictProxy
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:389: WARNING: py:class reference target not found: aiohttp.istr
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:: WARNING: py:class reference target not found: abc.Mapping
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:516: WARNING: py:meth reference target not found: aiohttp.ClientSession.request
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:534: WARNING: py:meth reference target not found: aiohttp.ClientSession.request
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:541: WARNING: py:meth reference target not found: aiohttp.ClientSession.request
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:554: WARNING: py:meth reference target not found: aiohttp.ClientSession.request
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:561: WARNING: py:meth reference target not found: aiohttp.ClientSession.request
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:574: WARNING: py:meth reference target not found: aiohttp.ClientSession.request
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:589: WARNING: py:meth reference target not found: aiohttp.ClientSession.request
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:607: WARNING: py:meth reference target not found: aiohttp.ClientSession.request
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:626: WARNING: py:meth reference target not found: aiohttp.ClientSession.request
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:632: WARNING: py:meth reference target not found: aiohttp.ClientSession.request
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:: WARNING: py:class reference target not found: aiohttp.protocol.HttpVersion
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:991: WARNING: py:class reference target not found: aiohttp.ClientRequest
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1137: WARNING: py:class reference target not found: aiohttp.abc.AbstractResolver
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1084: WARNING: py:const reference target not found: socket.AF_INET
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1084: WARNING: py:const reference target not found: socket.AF_INET6
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1088: WARNING: py:const reference target not found: socket.AF_INET
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1088: WARNING: py:const reference target not found: socket.AF_INET6
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1113: WARNING: py:const reference target not found: socket.AF_INET
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1113: WARNING: py:const reference target not found: socket.AF_INET6
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1216: WARNING: py:meth reference target not found: ClientSession.request
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1232: WARNING: py:class reference target not found: HttpVersion
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1441: WARNING: py:class reference target not found: callable
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1444: WARNING: py:class reference target not found: ClientRequest
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1467: WARNING: py:func reference target not found: aiohttp.ws_connect
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1483: WARNING: py:meth reference target not found: start
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1580: WARNING: py:class reference target not found: callable
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1641: WARNING: py:class reference target not found: callable
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1713: WARNING: py:class reference target not found: ClientRequest
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1747: WARNING: py:meth reference target not found: ClientSession.request
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1798: WARNING: py:class reference target not found: aiohttp.AbstractCookieJar
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1813: WARNING: py:class reference target not found: aiohttp.abc.AbstractCookieJar
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1920: WARNING: py:class reference target not found: Payload
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1955: WARNING: py:class reference target not found: bytesarray
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1962: WARNING: py:class reference target not found: bytesarray
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:2105: WARNING: py:meth reference target not found: aiohttp.ClientSession.request
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:2166: WARNING: py:exc reference target not found: ServerDisconnectionError
/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:2195: WARNING: py:exc reference target not found: ClientHttpProxyError
/home/wk/src/github/aio-libs/aiohttp/docs/faq.rst:288: WARNING: py:meth reference target not found: aiohttp.web.Response.write_eof
/home/wk/src/github/aio-libs/aiohttp/docs/logging.rst:41: WARNING: py:obj reference target not found: logging.DEBUG
/home/wk/src/github/aio-libs/aiohttp/docs/logging.rst:128: WARNING: py:class reference target not found: aiohttp.abc.AbstractAsyncAccessLogger
/home/wk/src/github/aio-libs/aiohttp/docs/multipart.rst:154: WARNING: py:class reference target not found: Payload
/home/wk/src/github/aio-libs/aiohttp/docs/multipart.rst:154: WARNING: py:meth reference target not found: Payload.set_content_disposition
/home/wk/src/github/aio-libs/aiohttp/docs/multipart.rst:177: WARNING: py:meth reference target not found: aiohttp.ClientSession.request
/home/wk/src/github/aio-libs/aiohttp/docs/multipart.rst:218: WARNING: py:class reference target not found: cgi.FieldStorage
/home/wk/src/github/aio-libs/aiohttp/docs/multipart_reference.rst:10: WARNING: py:class reference target not found: MultipartBodyReader
/home/wk/src/github/aio-libs/aiohttp/docs/multipart_reference.rst:138: WARNING: py:class reference target not found: aiohttp.client.ClientResponse
/home/wk/src/github/aio-libs/aiohttp/docs/new_router.rst:47: WARNING: py:class reference target not found: aiohttp.web.Route
/home/wk/src/github/aio-libs/aiohttp/docs/new_router.rst:68: WARNING: py:class reference target not found: aiohttp.web.BaseResource
/home/wk/src/github/aio-libs/aiohttp/docs/new_router.rst:68: WARNING: py:class reference target not found: aiohttp.web.Route
/home/wk/src/github/aio-libs/aiohttp/docs/new_router.rst:83: WARNING: py:class reference target not found: aiohttp.web.ResourceAdapter
/home/wk/src/github/aio-libs/aiohttp/docs/streams.rst:9: WARNING: py:attr reference target not found: aiohttp.web.Request.content
/home/wk/src/github/aio-libs/aiohttp/docs/streams.rst:18: WARNING: py:attr reference target not found: aiohttp.web.Request.content
/home/wk/src/github/aio-libs/aiohttp/docs/testing.rst:43: WARNING: py:mod reference target not found: aiohttp.test_tools
/home/wk/src/github/aio-libs/aiohttp/docs/testing.rst:58: WARNING: py:class reference target not found: aiohttp.web.WebServer
/home/wk/src/github/aio-libs/aiohttp/docs/testing.rst:341: WARNING: py:meth reference target not found: get_app
/home/wk/src/github/aio-libs/aiohttp/docs/testing.rst:562: WARNING: py:class reference target not found: list of pairs
/home/wk/src/github/aio-libs/aiohttp/docs/testing.rst:562: WARNING: py:class reference target not found: aiohttp.protocol.HttpVersion
/home/wk/src/github/aio-libs/aiohttp/docs/testing.rst:562: WARNING: py:class reference target not found: aiohttp.StreamWriter
/home/wk/src/github/aio-libs/aiohttp/docs/testing.rst:562: WARNING: py:class reference target not found: asyncio.transports.Transport
/home/wk/src/github/aio-libs/aiohttp/docs/testing.rst:625: WARNING: py:meth reference target not found: aiohttp.test_utils.TestServer.start_server
/home/wk/src/github/aio-libs/aiohttp/docs/testing.rst:625: WARNING: py:meth reference target not found: aiohttp.test_utils.TestServer.close
/home/wk/src/github/aio-libs/aiohttp/docs/testing.rst:663: WARNING: py:class reference target not found: aiohttp.web.WebServer
/home/wk/src/github/aio-libs/aiohttp/docs/testing.rst:667: WARNING: py:class reference target not found: asyncio.AbstractServer
/home/wk/src/github/aio-libs/aiohttp/docs/testing.rst:775: WARNING: py:attr reference target not found: self.server.app
/home/wk/src/github/aio-libs/aiohttp/docs/testing.rst:803: WARNING: py:meth reference target not found: aiohttp.ClientSession.request
/home/wk/src/github/aio-libs/aiohttp/docs/web_advanced.rst:437: WARNING: py:class reference target not found: RequestHandler
/home/wk/src/github/aio-libs/aiohttp/docs/web_advanced.rst:751: WARNING: py:meth reference target not found: UrlDispatcher.register_resource
/home/wk/src/github/aio-libs/aiohttp/docs/web_advanced.rst:851: WARNING: py:data reference target not found: zmq.SUB
/home/wk/src/github/aio-libs/aiohttp/docs/web_advanced.rst:851: WARNING: py:obj reference target not found: application['websockets']
/home/wk/src/github/aio-libs/aiohttp/docs/web_advanced.rst:896: WARNING: py:func reference target not found: listen_to_redis
/home/wk/src/github/aio-libs/aiohttp/docs/web_advanced.rst:971: WARNING: py:func reference target not found: aiohttp_debugtoolbar.setup
/home/wk/src/github/aio-libs/aiohttp/docs/web_exceptions.rst:481: WARNING: py:class reference target not found: URL
/home/wk/src/github/aio-libs/aiohttp/docs/web_exceptions.rst:488: WARNING: py:class reference target not found: URL
/home/wk/src/github/aio-libs/aiohttp/docs/web_lowlevel.rst:20: WARNING: py:meth reference target not found: asyncio.AbstractEventLoop.create_server
/home/wk/src/github/aio-libs/aiohttp/docs/web_quickstart.rst:139: WARNING: py:attr reference target not found: Request.method
/home/wk/src/github/aio-libs/aiohttp/docs/web_quickstart.rst:358: WARNING: py:meth reference target not found: aiohttp.web.UrlDispather.add_routes
/home/wk/src/github/aio-libs/aiohttp/docs/web_quickstart.rst:401: WARNING: py:meth reference target not found: aiohttp.web.UrlDispather.add_routes
/home/wk/src/github/aio-libs/aiohttp/docs/web_quickstart.rst:470: WARNING: py:attr reference target not found: Request.query
/home/wk/src/github/aio-libs/aiohttp/docs/web_quickstart.rst:473: WARNING: py:meth reference target not found: Request.post
/home/wk/src/github/aio-libs/aiohttp/docs/web_quickstart.rst:473: WARNING: py:meth reference target not found: Request.multipart
/home/wk/src/github/aio-libs/aiohttp/docs/web_quickstart.rst:476: WARNING: py:meth reference target not found: Request.post
/home/wk/src/github/aio-libs/aiohttp/docs/web_quickstart.rst:476: WARNING: py:meth reference target not found: Request.multipart
/home/wk/src/github/aio-libs/aiohttp/docs/web_quickstart.rst:554: WARNING: py:meth reference target not found: Request.post
/home/wk/src/github/aio-libs/aiohttp/docs/web_quickstart.rst:554: WARNING: py:meth reference target not found: Request.multipart
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:32: WARNING: py:class reference target not found: aiohttp.protocol.HttpVersion
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:121: WARNING: py:func reference target not found: socket.gtfqdn
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:323: WARNING: py:class reference target not found: ETag
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:326: WARNING: py:class reference target not found: ETag
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:333: WARNING: py:class reference target not found: ETag
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:336: WARNING: py:class reference target not found: ETag
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:387: WARNING: py:meth reference target not found: Request.read
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:399: WARNING: py:meth reference target not found: Request.text
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:405: WARNING: py:class reference target not found: web.HTTPBadRequest
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:421: WARNING: py:class reference target not found: callable
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:418: WARNING: py:meth reference target not found: Request.json
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:424: WARNING: py:class reference target not found: aiohttp.multipart.MultipartReader
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:462: WARNING: py:meth reference target not found: Request.post
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:473: WARNING: py:meth reference target not found: Request.release
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:502: WARNING: py:class reference target not found: aiohttp.abc.AbstractMatchInfo
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:634: WARNING: py:attr reference target not found: Request.keep_alive
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:801: WARNING: py:class reference target not found: ETag
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:898: WARNING: py:class reference target not found: aiohttp.payload.StringPayload
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:912: WARNING: py:attr reference target not found: StreamResponse.body
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:930: WARNING: py:attr reference target not found: body
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1053: WARNING: py:meth reference target not found: start
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1153: WARNING: py:class reference target not found: callable
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1250: WARNING: py:class reference target not found: callable
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1325: WARNING: py:meth reference target not found: Application.copy
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1476: WARNING: py:class reference target not found: MatchedSubAppResource
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1529: WARNING: py:class reference target not found: AbstractRouter
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1529: WARNING: py:meth reference target not found: AbstractRouter.resolve
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1542: WARNING: py:meth reference target not found: asyncio.AbstreactEventLoop.create_server
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1570: WARNING: py:class reference target not found: AbstractRouter
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1574: WARNING: py:meth reference target not found: router
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1639: WARNING: py:class reference target not found: callable
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1639: WARNING: py:class reference target not found: coroutine
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1638: WARNING: py:class reference target not found: PlainRoute
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1638: WARNING: py:class reference target not found: DynamicRoute
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1762: WARNING: py:class reference target not found: coroutine
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1754: WARNING: py:meth reference target not found: StaticRoute.url
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1754: WARNING: py:meth reference target not found: StaticRoute.url_for
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1761: WARNING: py:class reference target not found: StaticRoute
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1765: WARNING: py:class reference target not found: AbstractMatchInfo
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1768: WARNING: py:class reference target not found: AbstractMatchInfo
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1771: WARNING: py:attr reference target not found: AbstractMatchInfo.http_exception
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1773: WARNING: py:attr reference target not found: AbstractMatchInfo.handler
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1782: WARNING: py:attr reference target not found: Request.raw_path
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1813: WARNING: py:class reference target not found: BaseResource
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1847: WARNING: py:class reference target not found: AbstractMatchInfo
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1847: WARNING: py:attr reference target not found: AbstractMatchInfo.http_exception
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1847: WARNING: py:attr reference target not found: AbstractMatchInfo.handler
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1952: WARNING: py:class reference target not found: callable
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1952: WARNING: py:class reference target not found: coroutine
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:2334: WARNING: py:class reference target not found: abc.collections.Sequence
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:2418: WARNING: py:class reference target not found: AbstractMatchInfo
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:2424: WARNING: py:class reference target not found: AbstractMatchInfo
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:2437: WARNING: py:class reference target not found: Route
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:2445: WARNING: py:class reference target not found: AbstractView
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:2522: WARNING: py:meth reference target not found: socket.getsockname
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:2562: WARNING: py:data reference target not found: aiohttp.log.server_logger
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:2564: WARNING: py:data reference target not found: aiohttp.log.access_logger
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:2566: WARNING: py:data reference target not found: aiohttp.helpers.AccessLogger
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:2566: WARNING: py:class reference target not found: aiohttp.abc.AbstractAccessLogger
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:2569: WARNING: py:attr reference target not found: helpers.AccessLogger.LOG_FORMAT
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:2668: WARNING: py:meth reference target not found: socket.listen
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:2703: WARNING: py:meth reference target not found: socket.listen
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:2729: WARNING: py:class reference target not found: socket.socket
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:2739: WARNING: py:meth reference target not found: socket.listen
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:2750: WARNING: py:class reference target not found: collections.namedtuple
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:2750: WARNING: py:meth reference target not found: Request.POST
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:2883: WARNING: py:class reference target not found: socket
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:2848: WARNING: py:data reference target not found: aiohttp.helpers.AccessLogger
/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:2848: WARNING: py:class reference target not found: aiohttp.abc.AbstractAccessLogger
/home/wk/src/github/aio-libs/aiohttp/docs/websocket_utilities.rst:42: WARNING: py:attr reference target not found: WSCloseCode.unsupported_data
/home/wk/src/github/aio-libs/aiohttp/docs/websocket_utilities.rst:42: WARNING: py:attr reference target not found: WSCloseCode.message_too_big
/home/wk/src/github/aio-libs/aiohttp/docs/whats_new_1_1.rst:23: WARNING: py:class reference target not found: web.Request.url
/home/wk/src/github/aio-libs/aiohttp/docs/whats_new_1_1.rst:23: WARNING: py:class reference target not found: web.Request.rel_url
/home/wk/src/github/aio-libs/aiohttp/docs/whats_new_1_1.rst:35: WARNING: py:class reference target not found: aiohttp.web.Request.url_for(name, **kwargs)
```
📋 **Your version of the Python**
<!-- Attach your version of the Python. -->
N/A
📋 **Your version of the aiohttp/yarl/multidict distributions**
<!-- Attach your version of the distributions in the code blocks below. -->
N/A
📋 **Additional context**
<!-- Add any other context about the problem here, in the next line. -->
All the existing references can be found at https://webknjaz.github.io/intersphinx-untangled/
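As a practical aid (not part of the original report), the warning log pasted above can be mined for candidate `nitpick_ignore` entries instead of transcribing them by hand. The sketch below assumes the build output has been saved to a file named `sphinx-warnings.log`; that file name, and the idea of pre-filtering this way, are assumptions of this note rather than anything the reporter describes.

```python
# Sketch: extract candidate ("role", "target") pairs for nitpick_ignore
# from a pasted Sphinx warning log such as the one quoted above.
import re
from pathlib import Path

# Matches lines like:
#   .../docs/abc.rst:22: WARNING: py:class reference target not found: AbstractRouter
pattern = re.compile(
    r"WARNING: (?P<role>[\w:]+) reference target not found: (?P<target>.+)$"
)

pairs = set()
for line in Path("sphinx-warnings.log").read_text().splitlines():
    match = pattern.search(line)
    if match:
        pairs.add((match.group("role"), match.group("target").strip()))

# Print in a form that can be pasted into conf.py and then triaged:
# entries that point at real API should be fixed in the docs instead of ignored.
for role, target in sorted(pairs):
    print(f'    ("{role}", "{target}"),  # TODO: document or fix this reference')
```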
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `docs/conf.py`
Content:
```
1 #!/usr/bin/env python3
2 #
3 # aiohttp documentation build configuration file, created by
4 # sphinx-quickstart on Wed Mar 5 12:35:35 2014.
5 #
6 # This file is execfile()d with the current directory set to its
7 # containing dir.
8 #
9 # Note that not all possible configuration values are present in this
10 # autogenerated file.
11 #
12 # All configuration values have a default; values that are commented out
13 # serve to show the default.
14
15 import io
16 import os
17 import re
18
19 _docs_path = os.path.dirname(__file__)
20 _version_path = os.path.abspath(
21 os.path.join(_docs_path, "..", "aiohttp", "__init__.py")
22 )
23 with open(_version_path, encoding="latin1") as fp:
24 try:
25 _version_info = re.search(
26 r'^__version__ = "'
27 r"(?P<major>\d+)"
28 r"\.(?P<minor>\d+)"
29 r"\.(?P<patch>\d+)"
30 r'(?P<tag>.*)?"$',
31 fp.read(),
32 re.M,
33 ).groupdict()
34 except IndexError:
35 raise RuntimeError("Unable to determine version.")
36
37
38 # -- General configuration ------------------------------------------------
39
40 # If your documentation needs a minimal Sphinx version, state it here.
41 # needs_sphinx = '1.0'
42
43 # Add any Sphinx extension module names here, as strings. They can be
44 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
45 # ones.
46 extensions = [
47 "sphinx.ext.viewcode",
48 "sphinx.ext.intersphinx",
49 "sphinxcontrib.asyncio",
50 "sphinxcontrib.blockdiag",
51 ]
52
53
54 try:
55 import sphinxcontrib.spelling # noqa
56
57 extensions.append("sphinxcontrib.spelling")
58 except ImportError:
59 pass
60
61
62 intersphinx_mapping = {
63 "python": ("http://docs.python.org/3", None),
64 "multidict": ("https://multidict.readthedocs.io/en/stable/", None),
65 "yarl": ("https://yarl.readthedocs.io/en/stable/", None),
66 "aiohttpjinja2": ("https://aiohttp-jinja2.readthedocs.io/en/stable/", None),
67 "aiohttpremotes": ("https://aiohttp-remotes.readthedocs.io/en/stable/", None),
68 "aiohttpsession": ("https://aiohttp-session.readthedocs.io/en/stable/", None),
69 "aiohttpdemos": ("https://aiohttp-demos.readthedocs.io/en/latest/", None),
70 }
71
72 # Add any paths that contain templates here, relative to this directory.
73 templates_path = ["_templates"]
74
75 # The suffix of source filenames.
76 source_suffix = ".rst"
77
78 # The encoding of source files.
79 # source_encoding = 'utf-8-sig'
80
81 # The master toctree document.
82 master_doc = "index"
83
84 # General information about the project.
85 project = "aiohttp"
86 copyright = "2013-2020, aiohttp maintainers"
87
88 # The version info for the project you're documenting, acts as replacement for
89 # |version| and |release|, also used in various other places throughout the
90 # built documents.
91 #
92 # The short X.Y version.
93 version = "{major}.{minor}".format(**_version_info)
94 # The full version, including alpha/beta/rc tags.
95 release = "{major}.{minor}.{patch}{tag}".format(**_version_info)
96
97 # The language for content autogenerated by Sphinx. Refer to documentation
98 # for a list of supported languages.
99 # language = None
100
101 # There are two options for replacing |today|: either, you set today to some
102 # non-false value, then it is used:
103 # today = ''
104 # Else, today_fmt is used as the format for a strftime call.
105 # today_fmt = '%B %d, %Y'
106
107 # List of patterns, relative to source directory, that match files and
108 # directories to ignore when looking for source files.
109 exclude_patterns = ["_build"]
110
111 # The reST default role (used for this markup: `text`) to use for all
112 # documents.
113 # default_role = None
114
115 # If true, '()' will be appended to :func: etc. cross-reference text.
116 # add_function_parentheses = True
117
118 # If true, the current module name will be prepended to all description
119 # unit titles (such as .. function::).
120 # add_module_names = True
121
122 # If true, sectionauthor and moduleauthor directives will be shown in the
123 # output. They are ignored by default.
124 # show_authors = False
125
126 # The name of the Pygments (syntax highlighting) style to use.
127 # pygments_style = 'sphinx'
128
129 # The default language to highlight source code in.
130 highlight_language = "python3"
131
132 # A list of ignored prefixes for module index sorting.
133 # modindex_common_prefix = []
134
135 # If true, keep warnings as "system message" paragraphs in the built documents.
136 # keep_warnings = False
137
138
139 # -- Options for HTML output ----------------------------------------------
140
141 # The theme to use for HTML and HTML Help pages. See the documentation for
142 # a list of builtin themes.
143 html_theme = "aiohttp_theme"
144
145 # Theme options are theme-specific and customize the look and feel of a theme
146 # further. For a list of options available for each theme, see the
147 # documentation.
148 html_theme_options = {
149 "description": "Async HTTP client/server for asyncio and Python",
150 "canonical_url": "http://docs.aiohttp.org/en/stable/",
151 "github_user": "aio-libs",
152 "github_repo": "aiohttp",
153 "github_button": True,
154 "github_type": "star",
155 "github_banner": True,
156 "badges": [
157 {
158 "image": "https://github.com/aio-libs/aiohttp/workflows/CI/badge.svg",
159 "target": "https://github.com/aio-libs/aiohttp/actions?query=workflow%3ACI",
160 "height": "20",
161 "alt": "Azure Pipelines CI status",
162 },
163 {
164 "image": "https://codecov.io/github/aio-libs/aiohttp/coverage.svg?branch=master",
165 "target": "https://codecov.io/github/aio-libs/aiohttp",
166 "height": "20",
167 "alt": "Code coverage status",
168 },
169 {
170 "image": "https://badge.fury.io/py/aiohttp.svg",
171 "target": "https://badge.fury.io/py/aiohttp",
172 "height": "20",
173 "alt": "Latest PyPI package version",
174 },
175 {
176 "image": "https://img.shields.io/discourse/status?server=https%3A%2F%2Faio-libs.discourse.group",
177 "target": "https://aio-libs.discourse.group",
178 "height": "20",
179 "alt": "Discourse status",
180 },
181 {
182 "image": "https://badges.gitter.im/Join%20Chat.svg",
183 "target": "https://gitter.im/aio-libs/Lobby",
184 "height": "20",
185 "alt": "Chat on Gitter",
186 },
187 ],
188 }
189
190 html_css_files = [
191 "css/logo-adjustments.css",
192 ]
193
194 # Add any paths that contain custom themes here, relative to this directory.
195 # html_theme_path = [alabaster.get_path()]
196
197 # The name for this set of Sphinx documents. If None, it defaults to
198 # "<project> v<release> documentation".
199 # html_title = None
200
201 # A shorter title for the navigation bar. Default is the same as html_title.
202 # html_short_title = None
203
204 # The name of an image file (relative to this directory) to place at the top
205 # of the sidebar.
206 html_logo = "aiohttp-plain.svg"
207
208 # The name of an image file (within the static path) to use as favicon of the
209 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
210 # pixels large.
211 html_favicon = "favicon.ico"
212
213 # Add any paths that contain custom static files (such as style sheets) here,
214 # relative to this directory. They are copied after the builtin static files,
215 # so a file named "default.css" will overwrite the builtin "default.css".
216 html_static_path = ["_static"]
217
218 # Add any extra paths that contain custom files (such as robots.txt or
219 # .htaccess) here, relative to this directory. These files are copied
220 # directly to the root of the documentation.
221 # html_extra_path = []
222
223 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
224 # using the given strftime format.
225 # html_last_updated_fmt = '%b %d, %Y'
226
227 # If true, SmartyPants will be used to convert quotes and dashes to
228 # typographically correct entities.
229 # html_use_smartypants = True
230
231 # Custom sidebar templates, maps document names to template names.
232 html_sidebars = {
233 "**": [
234 "about.html",
235 "navigation.html",
236 "searchbox.html",
237 ]
238 }
239
240 # Additional templates that should be rendered to pages, maps page names to
241 # template names.
242 # html_additional_pages = {}
243
244 # If false, no module index is generated.
245 # html_domain_indices = True
246
247 # If false, no index is generated.
248 # html_use_index = True
249
250 # If true, the index is split into individual pages for each letter.
251 # html_split_index = False
252
253 # If true, links to the reST sources are added to the pages.
254 # html_show_sourcelink = True
255
256 # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
257 # html_show_sphinx = True
258
259 # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
260 # html_show_copyright = True
261
262 # If true, an OpenSearch description file will be output, and all pages will
263 # contain a <link> tag referring to it. The value of this option must be the
264 # base URL from which the finished HTML is served.
265 # html_use_opensearch = ''
266
267 # This is the file name suffix for HTML files (e.g. ".xhtml").
268 # html_file_suffix = None
269
270 # Output file base name for HTML help builder.
271 htmlhelp_basename = "aiohttpdoc"
272
273
274 # -- Options for LaTeX output ---------------------------------------------
275
276 latex_elements = {
277 # The paper size ('letterpaper' or 'a4paper').
278 # 'papersize': 'letterpaper',
279 # The font size ('10pt', '11pt' or '12pt').
280 # 'pointsize': '10pt',
281 # Additional stuff for the LaTeX preamble.
282 # 'preamble': '',
283 }
284
285 # Grouping the document tree into LaTeX files. List of tuples
286 # (source start file, target name, title,
287 # author, documentclass [howto, manual, or own class]).
288 latex_documents = [
289 ("index", "aiohttp.tex", "aiohttp Documentation", "aiohttp contributors", "manual"),
290 ]
291
292 # The name of an image file (relative to this directory) to place at the top of
293 # the title page.
294 # latex_logo = None
295
296 # For "manual" documents, if this is true, then toplevel headings are parts,
297 # not chapters.
298 # latex_use_parts = False
299
300 # If true, show page references after internal links.
301 # latex_show_pagerefs = False
302
303 # If true, show URL addresses after external links.
304 # latex_show_urls = False
305
306 # Documents to append as an appendix to all manuals.
307 # latex_appendices = []
308
309 # If false, no module index is generated.
310 # latex_domain_indices = True
311
312
313 # -- Options for manual page output ---------------------------------------
314
315 # One entry per manual page. List of tuples
316 # (source start file, name, description, authors, manual section).
317 man_pages = [("index", "aiohttp", "aiohttp Documentation", ["aiohttp"], 1)]
318
319 # If true, show URL addresses after external links.
320 # man_show_urls = False
321
322
323 # -- Options for Texinfo output -------------------------------------------
324
325 # Grouping the document tree into Texinfo files. List of tuples
326 # (source start file, target name, title, author,
327 # dir menu entry, description, category)
328 texinfo_documents = [
329 (
330 "index",
331 "aiohttp",
332 "aiohttp Documentation",
333 "Aiohttp contributors",
334 "aiohttp",
335 "One line description of project.",
336 "Miscellaneous",
337 ),
338 ]
339
340 # Documents to append as an appendix to all manuals.
341 # texinfo_appendices = []
342
343 # If false, no module index is generated.
344 # texinfo_domain_indices = True
345
346 # How to display URL addresses: 'footnote', 'no', or 'inline'.
347 # texinfo_show_urls = 'footnote'
348
349 # If true, do not generate a @detailmenu in the "Top" node's menu.
350 # texinfo_no_detailmenu = False
351
352
353 # -------------------------------------------------------------------------
354 # nitpicky = True
355 nitpick_ignore = [
356 ("py:mod", "aiohttp"), # undocumented, no `.. currentmodule:: aiohttp` in docs
357 ("py:class", "aiohttp.SimpleCookie"), # undocumented
358 ("py:class", "aiohttp.web.RequestHandler"), # undocumented
359 ("py:class", "aiohttp.NamedPipeConnector"), # undocumented
360 ("py:meth", "aiohttp.ClientSession.request"), # undocumented
361 ("py:class", "aiohttp.protocol.HttpVersion"), # undocumented
362 ("py:class", "aiohttp.ClientRequest"), # undocumented
363 ("py:class", "aiohttp.payload.Payload"), # undocumented
364 ("py:class", "aiohttp.abc.AbstractResolver"), # undocumented
365 ("py:func", "aiohttp.ws_connect"), # undocumented
366 ("py:meth", "start"), # undocumented
367 ("py:exc", "aiohttp.ServerDisconnectionError"), # undocumented
368 ("py:exc", "aiohttp.ClientHttpProxyError"), # undocumented
369 ("py:class", "asyncio.AbstractServer"), # undocumented
370 ("py:mod", "aiohttp.test_tools"), # undocumented
371 ("py:class", "list of pairs"), # undocumented
372 ("py:class", "aiohttp.protocol.HttpVersion"), # undocumented
373 ("py:meth", "aiohttp.ClientSession.request"), # undocumented
374 ("py:class", "aiohttp.StreamWriter"), # undocumented
375 ("py:attr", "aiohttp.StreamResponse.body"), # undocumented
376 ("py:class", "aiohttp.payload.StringPayload"), # undocumented
377 ("py:meth", "aiohttp.web.Application.copy"), # undocumented
378 ("py:meth", "asyncio.AbstractEventLoop.create_server"), # undocumented
379 ("py:data", "aiohttp.log.server_logger"), # undocumented
380 ("py:data", "aiohttp.log.access_logger"), # undocumented
381 ("py:data", "aiohttp.helpers.AccessLogger"), # undocumented
382 ("py:attr", "helpers.AccessLogger.LOG_FORMAT"), # undocumented
383 ("py:meth", "aiohttp.web.AbstractRoute.url"), # undocumented
384 ("py:class", "aiohttp.web.MatchedSubAppResource"), # undocumented
385 ("py:attr", "body"), # undocumented
386 ("py:class", "socket.socket"), # undocumented
387 ("py:obj", "logging.DEBUG"), # undocumented
388 ("py:class", "aiohttp.abc.AbstractAsyncAccessLogger"), # undocumented
389 ("py:meth", "aiohttp.web.Response.write_eof"), # undocumented
390 ("py:meth", "aiohttp.payload.Payload.set_content_disposition"), # undocumented
391 ("py:class", "cgi.FieldStorage"), # undocumented
392 ("py:meth", "aiohttp.web.UrlDispatcher.register_resource"), # undocumented
393 ("py:func", "aiohttp_debugtoolbar.setup"), # undocumented
394 ("py:exc", "HTTPInternalServerError"), # undocumented
395 ("py:exc", "HTTPForbidden"), # undocumented
396 ("py:exc", "HTTPExpectationFailed"), # undocumented
397 ("py:class", "HTTPFound"), # undocumented
398 ("py:class", "HTTPMultipleChoices"), # undocumented
399 ("py:class", "HTTPMovedPermanently"), # undocumented
400 ("py:class", "HTTPSeeOther"), # undocumented
401 ("py:class", "HTTPUseProxy"), # undocumented
402 ("py:class", "HTTPTemporaryRedirect"), # undocumented
403 ("py:class", "HTTPMethodNotAllowed"), # undocumented
404 ("py:class", "FileResponse"), # undocumented
405 ("py:exc", "HTTPNotFound"), # undocumented
406 ("py:exc", "HTTPMethodNotAllowed"), # undocumented
407 ("py:class", "HTTPMethodNotAllowed"), # undocumented
408 ]
409
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -351,7 +351,7 @@
# -------------------------------------------------------------------------
-# nitpicky = True
+nitpicky = True
nitpick_ignore = [
("py:mod", "aiohttp"), # undocumented, no `.. currentmodule:: aiohttp` in docs
("py:class", "aiohttp.SimpleCookie"), # undocumented
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -351,7 +351,7 @@\n \n \n # -------------------------------------------------------------------------\n-# nitpicky = True\n+nitpicky = True\n nitpick_ignore = [\n (\"py:mod\", \"aiohttp\"), # undocumented, no `.. currentmodule:: aiohttp` in docs\n (\"py:class\", \"aiohttp.SimpleCookie\"), # undocumented\n", "issue": "Fix missing references in docs\n**:wave: THIS IS A GOOD FIRST ISSUE**\r\n\r\n\ud83d\udc1e **Describe the bug**\r\n<!-- A clear and concise description of what the bug is, on the next line. -->\r\nI noticed that the docs have some RST references to internal and external intersphinx objects. We should fix those and enable `nitpicky = True` by default.\r\n\r\n_P.S. It is okay to send small PRs to fix references gradually_\r\n\r\n\r\n\ud83d\udca1 **To Reproduce**\r\n\r\nUncomment `nitpicky = True` in `docs/conf.py` and run `make doc`.\r\n\r\n\r\n\ud83d\udca1 **Expected behavior**\r\n<!-- A clear and concise description of what you expected to happen. -->\r\nNo errors.\r\n\r\n\r\n\ud83d\udccb **Logs/tracebacks**\r\n<!-- If applicable, add logs/tracebacks to help explain your problem. -->\r\n```console\r\n/home/wk/src/github/aio-libs/aiohttp/docs/abc.rst:22: WARNING: py:class reference target not found: AbstractRouter\r\n/home/wk/src/github/aio-libs/aiohttp/docs/abc.rst:22: WARNING: py:meth reference target not found: AbstractRouter.resolve\r\n/home/wk/src/github/aio-libs/aiohttp/docs/abc.rst:22: WARNING: py:class reference target not found: AbstractMatchInfo\r\n/home/wk/src/github/aio-libs/aiohttp/docs/abc.rst:26: WARNING: py:meth reference target not found: AbstractMatchInfo.handler\r\n/home/wk/src/github/aio-libs/aiohttp/docs/abc.rst:26: WARNING: py:attr reference target not found: AbstractMatchInfo.http_exception\r\n/home/wk/src/github/aio-libs/aiohttp/docs/abc.rst:30: WARNING: py:attr reference target not found: AbstractMatchInfo.http_exception\r\n/home/wk/src/github/aio-libs/aiohttp/docs/abc.rst:30: WARNING: py:meth reference target not found: AbstractMatchInfo.handler\r\n/home/wk/src/github/aio-libs/aiohttp/docs/abc.rst:30: WARNING: py:attr reference target not found: AbstractMatchInfo.http_exception\r\n/home/wk/src/github/aio-libs/aiohttp/docs/abc.rst:52: WARNING: py:class reference target not found: AbstractMatchInfo\r\n/home/wk/src/github/aio-libs/aiohttp/docs/abc.rst:57: WARNING: py:meth reference target not found: AbstractRouter.resolve\r\n/home/wk/src/github/aio-libs/aiohttp/docs/abc.rst:105: WARNING: py:attr reference target not found: ClientSession.cookie_jar\r\n/home/wk/src/github/aio-libs/aiohttp/docs/abc.rst:169: WARNING: py:class reference target not found: RequestHandler\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_advanced.rst:190: WARNING: py:class reference target not found: aiohttp.SimpleCookie\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_advanced.rst:296: WARNING: py:class reference target not found: SimpleNamespace\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_advanced.rst:312: WARNING: py:class reference target not found: SimpleNampespace\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_advanced.rst:425: WARNING: py:class reference target not found: aiohttp.NamedPipeConnector\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_advanced.rst:588: WARNING: py:class reference target not found: aiohttp.connector.BaseConnector\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_quickstart.rst:102: WARNING: py:class reference target not 
found: MultiDict\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_quickstart.rst:320: WARNING: py:class reference target not found: aiohttp.streams.StreamReader\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_quickstart.rst:330: WARNING: py:mod reference target not found: aiohttp\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_quickstart.rst:369: WARNING: py:mod reference target not found: aiohttp\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:91: WARNING: py:class reference target not found: aiohttp.istr\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:98: WARNING: py:class reference target not found: AbstractCookieJar\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:779: WARNING: py:class reference target not found: callable\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:213: WARNING: py:class reference target not found: aiohttp.AbstractCookieJar\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:264: WARNING: py:class reference target not found: aiohttp.istr\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:296: WARNING: py:class reference target not found: callable\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:353: WARNING: py:class reference target not found: aiohttp.MultiDict\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:353: WARNING: py:class reference target not found: aiohttp.MultiDictProxy\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:389: WARNING: py:class reference target not found: aiohttp.istr\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:: WARNING: py:class reference target not found: abc.Mapping\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:516: WARNING: py:meth reference target not found: aiohttp.ClientSession.request\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:534: WARNING: py:meth reference target not found: aiohttp.ClientSession.request\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:541: WARNING: py:meth reference target not found: aiohttp.ClientSession.request\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:554: WARNING: py:meth reference target not found: aiohttp.ClientSession.request\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:561: WARNING: py:meth reference target not found: aiohttp.ClientSession.request\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:574: WARNING: py:meth reference target not found: aiohttp.ClientSession.request\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:589: WARNING: py:meth reference target not found: aiohttp.ClientSession.request\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:607: WARNING: py:meth reference target not found: aiohttp.ClientSession.request\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:626: WARNING: py:meth reference target not found: aiohttp.ClientSession.request\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:632: WARNING: py:meth reference target not found: aiohttp.ClientSession.request\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:: WARNING: py:class reference target not found: aiohttp.protocol.HttpVersion\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:991: WARNING: py:class reference target not found: aiohttp.ClientRequest\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1137: WARNING: 
py:class reference target not found: aiohttp.abc.AbstractResolver\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1084: WARNING: py:const reference target not found: socket.AF_INET\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1084: WARNING: py:const reference target not found: socket.AF_INET6\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1088: WARNING: py:const reference target not found: socket.AF_INET\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1088: WARNING: py:const reference target not found: socket.AF_INET6\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1113: WARNING: py:const reference target not found: socket.AF_INET\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1113: WARNING: py:const reference target not found: socket.AF_INET6\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1216: WARNING: py:meth reference target not found: ClientSession.request\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1232: WARNING: py:class reference target not found: HttpVersion\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1441: WARNING: py:class reference target not found: callable\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1444: WARNING: py:class reference target not found: ClientRequest\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1467: WARNING: py:func reference target not found: aiohttp.ws_connect\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1483: WARNING: py:meth reference target not found: start\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1580: WARNING: py:class reference target not found: callable\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1641: WARNING: py:class reference target not found: callable\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1713: WARNING: py:class reference target not found: ClientRequest\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1747: WARNING: py:meth reference target not found: ClientSession.request\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1798: WARNING: py:class reference target not found: aiohttp.AbstractCookieJar\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1813: WARNING: py:class reference target not found: aiohttp.abc.AbstractCookieJar\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1920: WARNING: py:class reference target not found: Payload\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1955: WARNING: py:class reference target not found: bytesarray\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:1962: WARNING: py:class reference target not found: bytesarray\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:2105: WARNING: py:meth reference target not found: aiohttp.ClientSession.request\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:2166: WARNING: py:exc reference target not found: ServerDisconnectionError\r\n/home/wk/src/github/aio-libs/aiohttp/docs/client_reference.rst:2195: WARNING: py:exc reference target not found: ClientHttpProxyError\r\n/home/wk/src/github/aio-libs/aiohttp/docs/faq.rst:288: WARNING: py:meth reference target not found: aiohttp.web.Response.write_eof\r\n/home/wk/src/github/aio-libs/aiohttp/docs/logging.rst:41: WARNING: py:obj reference target not found: 
logging.DEBUG\r\n/home/wk/src/github/aio-libs/aiohttp/docs/logging.rst:128: WARNING: py:class reference target not found: aiohttp.abc.AbstractAsyncAccessLogger\r\n/home/wk/src/github/aio-libs/aiohttp/docs/multipart.rst:154: WARNING: py:class reference target not found: Payload\r\n/home/wk/src/github/aio-libs/aiohttp/docs/multipart.rst:154: WARNING: py:meth reference target not found: Payload.set_content_disposition\r\n/home/wk/src/github/aio-libs/aiohttp/docs/multipart.rst:177: WARNING: py:meth reference target not found: aiohttp.ClientSession.request\r\n/home/wk/src/github/aio-libs/aiohttp/docs/multipart.rst:218: WARNING: py:class reference target not found: cgi.FieldStorage\r\n/home/wk/src/github/aio-libs/aiohttp/docs/multipart_reference.rst:10: WARNING: py:class reference target not found: MultipartBodyReader\r\n/home/wk/src/github/aio-libs/aiohttp/docs/multipart_reference.rst:138: WARNING: py:class reference target not found: aiohttp.client.ClientResponse\r\n/home/wk/src/github/aio-libs/aiohttp/docs/new_router.rst:47: WARNING: py:class reference target not found: aiohttp.web.Route\r\n/home/wk/src/github/aio-libs/aiohttp/docs/new_router.rst:68: WARNING: py:class reference target not found: aiohttp.web.BaseResource\r\n/home/wk/src/github/aio-libs/aiohttp/docs/new_router.rst:68: WARNING: py:class reference target not found: aiohttp.web.Route\r\n/home/wk/src/github/aio-libs/aiohttp/docs/new_router.rst:83: WARNING: py:class reference target not found: aiohttp.web.ResourceAdapter\r\n/home/wk/src/github/aio-libs/aiohttp/docs/streams.rst:9: WARNING: py:attr reference target not found: aiohttp.web.Request.content\r\n/home/wk/src/github/aio-libs/aiohttp/docs/streams.rst:18: WARNING: py:attr reference target not found: aiohttp.web.Request.content\r\n/home/wk/src/github/aio-libs/aiohttp/docs/testing.rst:43: WARNING: py:mod reference target not found: aiohttp.test_tools\r\n/home/wk/src/github/aio-libs/aiohttp/docs/testing.rst:58: WARNING: py:class reference target not found: aiohttp.web.WebServer\r\n/home/wk/src/github/aio-libs/aiohttp/docs/testing.rst:341: WARNING: py:meth reference target not found: get_app\r\n/home/wk/src/github/aio-libs/aiohttp/docs/testing.rst:562: WARNING: py:class reference target not found: list of pairs\r\n/home/wk/src/github/aio-libs/aiohttp/docs/testing.rst:562: WARNING: py:class reference target not found: aiohttp.protocol.HttpVersion\r\n/home/wk/src/github/aio-libs/aiohttp/docs/testing.rst:562: WARNING: py:class reference target not found: aiohttp.StreamWriter\r\n/home/wk/src/github/aio-libs/aiohttp/docs/testing.rst:562: WARNING: py:class reference target not found: asyncio.transports.Transport\r\n/home/wk/src/github/aio-libs/aiohttp/docs/testing.rst:625: WARNING: py:meth reference target not found: aiohttp.test_utils.TestServer.start_server\r\n/home/wk/src/github/aio-libs/aiohttp/docs/testing.rst:625: WARNING: py:meth reference target not found: aiohttp.test_utils.TestServer.close\r\n/home/wk/src/github/aio-libs/aiohttp/docs/testing.rst:663: WARNING: py:class reference target not found: aiohttp.web.WebServer\r\n/home/wk/src/github/aio-libs/aiohttp/docs/testing.rst:667: WARNING: py:class reference target not found: asyncio.AbstractServer\r\n/home/wk/src/github/aio-libs/aiohttp/docs/testing.rst:775: WARNING: py:attr reference target not found: self.server.app\r\n/home/wk/src/github/aio-libs/aiohttp/docs/testing.rst:803: WARNING: py:meth reference target not found: aiohttp.ClientSession.request\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_advanced.rst:437: WARNING: 
py:class reference target not found: RequestHandler\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_advanced.rst:751: WARNING: py:meth reference target not found: UrlDispatcher.register_resource\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_advanced.rst:851: WARNING: py:data reference target not found: zmq.SUB\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_advanced.rst:851: WARNING: py:obj reference target not found: application['websockets']\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_advanced.rst:896: WARNING: py:func reference target not found: listen_to_redis\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_advanced.rst:971: WARNING: py:func reference target not found: aiohttp_debugtoolbar.setup\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_exceptions.rst:481: WARNING: py:class reference target not found: URL\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_exceptions.rst:488: WARNING: py:class reference target not found: URL\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_lowlevel.rst:20: WARNING: py:meth reference target not found: asyncio.AbstractEventLoop.create_server\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_quickstart.rst:139: WARNING: py:attr reference target not found: Request.method\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_quickstart.rst:358: WARNING: py:meth reference target not found: aiohttp.web.UrlDispather.add_routes\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_quickstart.rst:401: WARNING: py:meth reference target not found: aiohttp.web.UrlDispather.add_routes\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_quickstart.rst:470: WARNING: py:attr reference target not found: Request.query\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_quickstart.rst:473: WARNING: py:meth reference target not found: Request.post\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_quickstart.rst:473: WARNING: py:meth reference target not found: Request.multipart\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_quickstart.rst:476: WARNING: py:meth reference target not found: Request.post\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_quickstart.rst:476: WARNING: py:meth reference target not found: Request.multipart\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_quickstart.rst:554: WARNING: py:meth reference target not found: Request.post\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_quickstart.rst:554: WARNING: py:meth reference target not found: Request.multipart\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:32: WARNING: py:class reference target not found: aiohttp.protocol.HttpVersion\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:121: WARNING: py:func reference target not found: socket.gtfqdn\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:323: WARNING: py:class reference target not found: ETag\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:326: WARNING: py:class reference target not found: ETag\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:333: WARNING: py:class reference target not found: ETag\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:336: WARNING: py:class reference target not found: ETag\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:387: WARNING: py:meth reference target not found: Request.read\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:399: WARNING: py:meth reference target not found: Request.text\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:405: WARNING: py:class reference target not found: 
web.HTTPBadRequest\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:421: WARNING: py:class reference target not found: callable\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:418: WARNING: py:meth reference target not found: Request.json\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:424: WARNING: py:class reference target not found: aiohttp.multipart.MultipartReader\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:462: WARNING: py:meth reference target not found: Request.post\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:473: WARNING: py:meth reference target not found: Request.release\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:502: WARNING: py:class reference target not found: aiohttp.abc.AbstractMatchInfo\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:634: WARNING: py:attr reference target not found: Request.keep_alive\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:801: WARNING: py:class reference target not found: ETag\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:898: WARNING: py:class reference target not found: aiohttp.payload.StringPayload\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:912: WARNING: py:attr reference target not found: StreamResponse.body\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:930: WARNING: py:attr reference target not found: body\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1053: WARNING: py:meth reference target not found: start\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1153: WARNING: py:class reference target not found: callable\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1250: WARNING: py:class reference target not found: callable\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1325: WARNING: py:meth reference target not found: Application.copy\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1476: WARNING: py:class reference target not found: MatchedSubAppResource\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1529: WARNING: py:class reference target not found: AbstractRouter\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1529: WARNING: py:meth reference target not found: AbstractRouter.resolve\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1542: WARNING: py:meth reference target not found: asyncio.AbstreactEventLoop.create_server\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1570: WARNING: py:class reference target not found: AbstractRouter\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1574: WARNING: py:meth reference target not found: router\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1639: WARNING: py:class reference target not found: callable\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1639: WARNING: py:class reference target not found: coroutine\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1638: WARNING: py:class reference target not found: PlainRoute\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1638: WARNING: py:class reference target not found: DynamicRoute\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1762: WARNING: py:class reference target not found: coroutine\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1754: WARNING: py:meth reference target not found: 
StaticRoute.url\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1754: WARNING: py:meth reference target not found: StaticRoute.url_for\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1761: WARNING: py:class reference target not found: StaticRoute\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1765: WARNING: py:class reference target not found: AbstractMatchInfo\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1768: WARNING: py:class reference target not found: AbstractMatchInfo\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1771: WARNING: py:attr reference target not found: AbstractMatchInfo.http_exception\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1773: WARNING: py:attr reference target not found: AbstractMatchInfo.handler\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1782: WARNING: py:attr reference target not found: Request.raw_path\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1813: WARNING: py:class reference target not found: BaseResource\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1847: WARNING: py:class reference target not found: AbstractMatchInfo\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1847: WARNING: py:attr reference target not found: AbstractMatchInfo.http_exception\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1847: WARNING: py:attr reference target not found: AbstractMatchInfo.handler\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1952: WARNING: py:class reference target not found: callable\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:1952: WARNING: py:class reference target not found: coroutine\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:2334: WARNING: py:class reference target not found: abc.collections.Sequence\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:2418: WARNING: py:class reference target not found: AbstractMatchInfo\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:2424: WARNING: py:class reference target not found: AbstractMatchInfo\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:2437: WARNING: py:class reference target not found: Route\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:2445: WARNING: py:class reference target not found: AbstractView\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:2522: WARNING: py:meth reference target not found: socket.getsockname\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:2562: WARNING: py:data reference target not found: aiohttp.log.server_logger\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:2564: WARNING: py:data reference target not found: aiohttp.log.access_logger\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:2566: WARNING: py:data reference target not found: aiohttp.helpers.AccessLogger\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:2566: WARNING: py:class reference target not found: aiohttp.abc.AbstractAccessLogger\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:2569: WARNING: py:attr reference target not found: helpers.AccessLogger.LOG_FORMAT\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:2668: WARNING: py:meth reference target not found: socket.listen\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:2703: WARNING: py:meth reference target not found: 
socket.listen\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:2729: WARNING: py:class reference target not found: socket.socket\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:2739: WARNING: py:meth reference target not found: socket.listen\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:2750: WARNING: py:class reference target not found: collections.namedtuple\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:2750: WARNING: py:meth reference target not found: Request.POST\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:2883: WARNING: py:class reference target not found: socket\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:2848: WARNING: py:data reference target not found: aiohttp.helpers.AccessLogger\r\n/home/wk/src/github/aio-libs/aiohttp/docs/web_reference.rst:2848: WARNING: py:class reference target not found: aiohttp.abc.AbstractAccessLogger\r\n/home/wk/src/github/aio-libs/aiohttp/docs/websocket_utilities.rst:42: WARNING: py:attr reference target not found: WSCloseCode.unsupported_data\r\n/home/wk/src/github/aio-libs/aiohttp/docs/websocket_utilities.rst:42: WARNING: py:attr reference target not found: WSCloseCode.message_too_big\r\n/home/wk/src/github/aio-libs/aiohttp/docs/whats_new_1_1.rst:23: WARNING: py:class reference target not found: web.Request.url\r\n/home/wk/src/github/aio-libs/aiohttp/docs/whats_new_1_1.rst:23: WARNING: py:class reference target not found: web.Request.rel_url\r\n/home/wk/src/github/aio-libs/aiohttp/docs/whats_new_1_1.rst:35: WARNING: py:class reference target not found: aiohttp.web.Request.url_for(name, **kwargs)\r\n```\r\n\r\n\ud83d\udccb **Your version of the Python**\r\n<!-- Attach your version of the Python. -->\r\nN/A\r\n\r\n\ud83d\udccb **Your version of the aiohttp/yarl/multidict distributions**\r\n<!-- Attach your version of the distributions in the code blocks below. -->\r\nN/A\r\n\r\n\ud83d\udccb **Additional context**\r\n<!-- Add any other context about the problem here, in the next line. -->\r\nAll the existing references can be found at https://webknjaz.github.io/intersphinx-untangled/\n", "before_files": [{"content": "#!/usr/bin/env python3\n#\n# aiohttp documentation build configuration file, created by\n# sphinx-quickstart on Wed Mar 5 12:35:35 2014.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport io\nimport os\nimport re\n\n_docs_path = os.path.dirname(__file__)\n_version_path = os.path.abspath(\n os.path.join(_docs_path, \"..\", \"aiohttp\", \"__init__.py\")\n)\nwith open(_version_path, encoding=\"latin1\") as fp:\n try:\n _version_info = re.search(\n r'^__version__ = \"'\n r\"(?P<major>\\d+)\"\n r\"\\.(?P<minor>\\d+)\"\n r\"\\.(?P<patch>\\d+)\"\n r'(?P<tag>.*)?\"$',\n fp.read(),\n re.M,\n ).groupdict()\n except IndexError:\n raise RuntimeError(\"Unable to determine version.\")\n\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.viewcode\",\n \"sphinx.ext.intersphinx\",\n \"sphinxcontrib.asyncio\",\n \"sphinxcontrib.blockdiag\",\n]\n\n\ntry:\n import sphinxcontrib.spelling # noqa\n\n extensions.append(\"sphinxcontrib.spelling\")\nexcept ImportError:\n pass\n\n\nintersphinx_mapping = {\n \"python\": (\"http://docs.python.org/3\", None),\n \"multidict\": (\"https://multidict.readthedocs.io/en/stable/\", None),\n \"yarl\": (\"https://yarl.readthedocs.io/en/stable/\", None),\n \"aiohttpjinja2\": (\"https://aiohttp-jinja2.readthedocs.io/en/stable/\", None),\n \"aiohttpremotes\": (\"https://aiohttp-remotes.readthedocs.io/en/stable/\", None),\n \"aiohttpsession\": (\"https://aiohttp-session.readthedocs.io/en/stable/\", None),\n \"aiohttpdemos\": (\"https://aiohttp-demos.readthedocs.io/en/latest/\", None),\n}\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix of source filenames.\nsource_suffix = \".rst\"\n\n# The encoding of source files.\n# source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = \"index\"\n\n# General information about the project.\nproject = \"aiohttp\"\ncopyright = \"2013-2020, aiohttp maintainers\"\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = \"{major}.{minor}\".format(**_version_info)\n# The full version, including alpha/beta/rc tags.\nrelease = \"{major}.{minor}.{patch}{tag}\".format(**_version_info)\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n# language = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n# today = ''\n# Else, today_fmt is used as the format for a strftime call.\n# today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = [\"_build\"]\n\n# The reST default role (used for this markup: `text`) to use for all\n# documents.\n# default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n# add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n# add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n# show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\n# pygments_style = 'sphinx'\n\n# The default language to highlight source code in.\nhighlight_language = \"python3\"\n\n# A list of ignored prefixes for module index sorting.\n# modindex_common_prefix = []\n\n# If true, keep warnings as \"system message\" paragraphs in the built documents.\n# keep_warnings = False\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\nhtml_theme = \"aiohttp_theme\"\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. 
For a list of options available for each theme, see the\n# documentation.\nhtml_theme_options = {\n \"description\": \"Async HTTP client/server for asyncio and Python\",\n \"canonical_url\": \"http://docs.aiohttp.org/en/stable/\",\n \"github_user\": \"aio-libs\",\n \"github_repo\": \"aiohttp\",\n \"github_button\": True,\n \"github_type\": \"star\",\n \"github_banner\": True,\n \"badges\": [\n {\n \"image\": \"https://github.com/aio-libs/aiohttp/workflows/CI/badge.svg\",\n \"target\": \"https://github.com/aio-libs/aiohttp/actions?query=workflow%3ACI\",\n \"height\": \"20\",\n \"alt\": \"Azure Pipelines CI status\",\n },\n {\n \"image\": \"https://codecov.io/github/aio-libs/aiohttp/coverage.svg?branch=master\",\n \"target\": \"https://codecov.io/github/aio-libs/aiohttp\",\n \"height\": \"20\",\n \"alt\": \"Code coverage status\",\n },\n {\n \"image\": \"https://badge.fury.io/py/aiohttp.svg\",\n \"target\": \"https://badge.fury.io/py/aiohttp\",\n \"height\": \"20\",\n \"alt\": \"Latest PyPI package version\",\n },\n {\n \"image\": \"https://img.shields.io/discourse/status?server=https%3A%2F%2Faio-libs.discourse.group\",\n \"target\": \"https://aio-libs.discourse.group\",\n \"height\": \"20\",\n \"alt\": \"Discourse status\",\n },\n {\n \"image\": \"https://badges.gitter.im/Join%20Chat.svg\",\n \"target\": \"https://gitter.im/aio-libs/Lobby\",\n \"height\": \"20\",\n \"alt\": \"Chat on Gitter\",\n },\n ],\n}\n\nhtml_css_files = [\n \"css/logo-adjustments.css\",\n]\n\n# Add any paths that contain custom themes here, relative to this directory.\n# html_theme_path = [alabaster.get_path()]\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n# html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n# html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\nhtml_logo = \"aiohttp-plain.svg\"\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\nhtml_favicon = \"favicon.ico\"\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\n# Add any extra paths that contain custom files (such as robots.txt or\n# .htaccess) here, relative to this directory. 
These files are copied\n# directly to the root of the documentation.\n# html_extra_path = []\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n# html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n# html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\nhtml_sidebars = {\n \"**\": [\n \"about.html\",\n \"navigation.html\",\n \"searchbox.html\",\n ]\n}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n# html_additional_pages = {}\n\n# If false, no module index is generated.\n# html_domain_indices = True\n\n# If false, no index is generated.\n# html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n# html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n# html_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n# html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n# html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n# html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n# html_file_suffix = None\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = \"aiohttpdoc\"\n\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n # 'papersize': 'letterpaper',\n # The font size ('10pt', '11pt' or '12pt').\n # 'pointsize': '10pt',\n # Additional stuff for the LaTeX preamble.\n # 'preamble': '',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (\"index\", \"aiohttp.tex\", \"aiohttp Documentation\", \"aiohttp contributors\", \"manual\"),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n# latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n# latex_use_parts = False\n\n# If true, show page references after internal links.\n# latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n# latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n# latex_appendices = []\n\n# If false, no module index is generated.\n# latex_domain_indices = True\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(\"index\", \"aiohttp\", \"aiohttp Documentation\", [\"aiohttp\"], 1)]\n\n# If true, show URL addresses after external links.\n# man_show_urls = False\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. 
List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (\n \"index\",\n \"aiohttp\",\n \"aiohttp Documentation\",\n \"Aiohttp contributors\",\n \"aiohttp\",\n \"One line description of project.\",\n \"Miscellaneous\",\n ),\n]\n\n# Documents to append as an appendix to all manuals.\n# texinfo_appendices = []\n\n# If false, no module index is generated.\n# texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n# texinfo_show_urls = 'footnote'\n\n# If true, do not generate a @detailmenu in the \"Top\" node's menu.\n# texinfo_no_detailmenu = False\n\n\n# -------------------------------------------------------------------------\n# nitpicky = True\nnitpick_ignore = [\n (\"py:mod\", \"aiohttp\"), # undocumented, no `.. currentmodule:: aiohttp` in docs\n (\"py:class\", \"aiohttp.SimpleCookie\"), # undocumented\n (\"py:class\", \"aiohttp.web.RequestHandler\"), # undocumented\n (\"py:class\", \"aiohttp.NamedPipeConnector\"), # undocumented\n (\"py:meth\", \"aiohttp.ClientSession.request\"), # undocumented\n (\"py:class\", \"aiohttp.protocol.HttpVersion\"), # undocumented\n (\"py:class\", \"aiohttp.ClientRequest\"), # undocumented\n (\"py:class\", \"aiohttp.payload.Payload\"), # undocumented\n (\"py:class\", \"aiohttp.abc.AbstractResolver\"), # undocumented\n (\"py:func\", \"aiohttp.ws_connect\"), # undocumented\n (\"py:meth\", \"start\"), # undocumented\n (\"py:exc\", \"aiohttp.ServerDisconnectionError\"), # undocumented\n (\"py:exc\", \"aiohttp.ClientHttpProxyError\"), # undocumented\n (\"py:class\", \"asyncio.AbstractServer\"), # undocumented\n (\"py:mod\", \"aiohttp.test_tools\"), # undocumented\n (\"py:class\", \"list of pairs\"), # undocumented\n (\"py:class\", \"aiohttp.protocol.HttpVersion\"), # undocumented\n (\"py:meth\", \"aiohttp.ClientSession.request\"), # undocumented\n (\"py:class\", \"aiohttp.StreamWriter\"), # undocumented\n (\"py:attr\", \"aiohttp.StreamResponse.body\"), # undocumented\n (\"py:class\", \"aiohttp.payload.StringPayload\"), # undocumented\n (\"py:meth\", \"aiohttp.web.Application.copy\"), # undocumented\n (\"py:meth\", \"asyncio.AbstractEventLoop.create_server\"), # undocumented\n (\"py:data\", \"aiohttp.log.server_logger\"), # undocumented\n (\"py:data\", \"aiohttp.log.access_logger\"), # undocumented\n (\"py:data\", \"aiohttp.helpers.AccessLogger\"), # undocumented\n (\"py:attr\", \"helpers.AccessLogger.LOG_FORMAT\"), # undocumented\n (\"py:meth\", \"aiohttp.web.AbstractRoute.url\"), # undocumented\n (\"py:class\", \"aiohttp.web.MatchedSubAppResource\"), # undocumented\n (\"py:attr\", \"body\"), # undocumented\n (\"py:class\", \"socket.socket\"), # undocumented\n (\"py:obj\", \"logging.DEBUG\"), # undocumented\n (\"py:class\", \"aiohttp.abc.AbstractAsyncAccessLogger\"), # undocumented\n (\"py:meth\", \"aiohttp.web.Response.write_eof\"), # undocumented\n (\"py:meth\", \"aiohttp.payload.Payload.set_content_disposition\"), # undocumented\n (\"py:class\", \"cgi.FieldStorage\"), # undocumented\n (\"py:meth\", \"aiohttp.web.UrlDispatcher.register_resource\"), # undocumented\n (\"py:func\", \"aiohttp_debugtoolbar.setup\"), # undocumented\n (\"py:exc\", \"HTTPInternalServerError\"), # undocumented\n (\"py:exc\", \"HTTPForbidden\"), # undocumented\n (\"py:exc\", \"HTTPExpectationFailed\"), # undocumented\n (\"py:class\", \"HTTPFound\"), # undocumented\n (\"py:class\", \"HTTPMultipleChoices\"), # undocumented\n (\"py:class\", \"HTTPMovedPermanently\"), # 
undocumented\n (\"py:class\", \"HTTPSeeOther\"), # undocumented\n (\"py:class\", \"HTTPUseProxy\"), # undocumented\n (\"py:class\", \"HTTPTemporaryRedirect\"), # undocumented\n (\"py:class\", \"HTTPMethodNotAllowed\"), # undocumented\n (\"py:class\", \"FileResponse\"), # undocumented\n (\"py:exc\", \"HTTPNotFound\"), # undocumented\n (\"py:exc\", \"HTTPMethodNotAllowed\"), # undocumented\n (\"py:class\", \"HTTPMethodNotAllowed\"), # undocumented\n]\n", "path": "docs/conf.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n#\n# aiohttp documentation build configuration file, created by\n# sphinx-quickstart on Wed Mar 5 12:35:35 2014.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport io\nimport os\nimport re\n\n_docs_path = os.path.dirname(__file__)\n_version_path = os.path.abspath(\n os.path.join(_docs_path, \"..\", \"aiohttp\", \"__init__.py\")\n)\nwith open(_version_path, encoding=\"latin1\") as fp:\n try:\n _version_info = re.search(\n r'^__version__ = \"'\n r\"(?P<major>\\d+)\"\n r\"\\.(?P<minor>\\d+)\"\n r\"\\.(?P<patch>\\d+)\"\n r'(?P<tag>.*)?\"$',\n fp.read(),\n re.M,\n ).groupdict()\n except IndexError:\n raise RuntimeError(\"Unable to determine version.\")\n\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.viewcode\",\n \"sphinx.ext.intersphinx\",\n \"sphinxcontrib.asyncio\",\n \"sphinxcontrib.blockdiag\",\n]\n\n\ntry:\n import sphinxcontrib.spelling # noqa\n\n extensions.append(\"sphinxcontrib.spelling\")\nexcept ImportError:\n pass\n\n\nintersphinx_mapping = {\n \"python\": (\"http://docs.python.org/3\", None),\n \"multidict\": (\"https://multidict.readthedocs.io/en/stable/\", None),\n \"yarl\": (\"https://yarl.readthedocs.io/en/stable/\", None),\n \"aiohttpjinja2\": (\"https://aiohttp-jinja2.readthedocs.io/en/stable/\", None),\n \"aiohttpremotes\": (\"https://aiohttp-remotes.readthedocs.io/en/stable/\", None),\n \"aiohttpsession\": (\"https://aiohttp-session.readthedocs.io/en/stable/\", None),\n \"aiohttpdemos\": (\"https://aiohttp-demos.readthedocs.io/en/latest/\", None),\n}\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix of source filenames.\nsource_suffix = \".rst\"\n\n# The encoding of source files.\n# source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = \"index\"\n\n# General information about the project.\nproject = \"aiohttp\"\ncopyright = \"2013-2020, aiohttp maintainers\"\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = \"{major}.{minor}\".format(**_version_info)\n# The full version, including alpha/beta/rc tags.\nrelease = \"{major}.{minor}.{patch}{tag}\".format(**_version_info)\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n# language = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n# today = ''\n# Else, today_fmt is used as the format for a strftime call.\n# today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = [\"_build\"]\n\n# The reST default role (used for this markup: `text`) to use for all\n# documents.\n# default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n# add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n# add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n# show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\n# pygments_style = 'sphinx'\n\n# The default language to highlight source code in.\nhighlight_language = \"python3\"\n\n# A list of ignored prefixes for module index sorting.\n# modindex_common_prefix = []\n\n# If true, keep warnings as \"system message\" paragraphs in the built documents.\n# keep_warnings = False\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\nhtml_theme = \"aiohttp_theme\"\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\nhtml_theme_options = {\n \"description\": \"Async HTTP client/server for asyncio and Python\",\n \"canonical_url\": \"http://docs.aiohttp.org/en/stable/\",\n \"github_user\": \"aio-libs\",\n \"github_repo\": \"aiohttp\",\n \"github_button\": True,\n \"github_type\": \"star\",\n \"github_banner\": True,\n \"badges\": [\n {\n \"image\": \"https://github.com/aio-libs/aiohttp/workflows/CI/badge.svg\",\n \"target\": \"https://github.com/aio-libs/aiohttp/actions?query=workflow%3ACI\",\n \"height\": \"20\",\n \"alt\": \"Azure Pipelines CI status\",\n },\n {\n \"image\": \"https://codecov.io/github/aio-libs/aiohttp/coverage.svg?branch=master\",\n \"target\": \"https://codecov.io/github/aio-libs/aiohttp\",\n \"height\": \"20\",\n \"alt\": \"Code coverage status\",\n },\n {\n \"image\": \"https://badge.fury.io/py/aiohttp.svg\",\n \"target\": \"https://badge.fury.io/py/aiohttp\",\n \"height\": \"20\",\n \"alt\": \"Latest PyPI package version\",\n },\n {\n \"image\": \"https://img.shields.io/discourse/status?server=https%3A%2F%2Faio-libs.discourse.group\",\n \"target\": \"https://aio-libs.discourse.group\",\n \"height\": \"20\",\n \"alt\": \"Discourse status\",\n },\n {\n \"image\": \"https://badges.gitter.im/Join%20Chat.svg\",\n \"target\": \"https://gitter.im/aio-libs/Lobby\",\n \"height\": \"20\",\n \"alt\": \"Chat on Gitter\",\n },\n ],\n}\n\nhtml_css_files = [\n \"css/logo-adjustments.css\",\n]\n\n# Add any paths that contain custom themes here, relative to this directory.\n# html_theme_path = [alabaster.get_path()]\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n# html_title = None\n\n# A shorter title for the navigation bar. 
Default is the same as html_title.\n# html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\nhtml_logo = \"aiohttp-plain.svg\"\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\nhtml_favicon = \"favicon.ico\"\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\n# Add any extra paths that contain custom files (such as robots.txt or\n# .htaccess) here, relative to this directory. These files are copied\n# directly to the root of the documentation.\n# html_extra_path = []\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n# html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n# html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\nhtml_sidebars = {\n \"**\": [\n \"about.html\",\n \"navigation.html\",\n \"searchbox.html\",\n ]\n}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n# html_additional_pages = {}\n\n# If false, no module index is generated.\n# html_domain_indices = True\n\n# If false, no index is generated.\n# html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n# html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n# html_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n# html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n# html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n# html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n# html_file_suffix = None\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = \"aiohttpdoc\"\n\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n # 'papersize': 'letterpaper',\n # The font size ('10pt', '11pt' or '12pt').\n # 'pointsize': '10pt',\n # Additional stuff for the LaTeX preamble.\n # 'preamble': '',\n}\n\n# Grouping the document tree into LaTeX files. 
List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (\"index\", \"aiohttp.tex\", \"aiohttp Documentation\", \"aiohttp contributors\", \"manual\"),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n# latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n# latex_use_parts = False\n\n# If true, show page references after internal links.\n# latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n# latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n# latex_appendices = []\n\n# If false, no module index is generated.\n# latex_domain_indices = True\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(\"index\", \"aiohttp\", \"aiohttp Documentation\", [\"aiohttp\"], 1)]\n\n# If true, show URL addresses after external links.\n# man_show_urls = False\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (\n \"index\",\n \"aiohttp\",\n \"aiohttp Documentation\",\n \"Aiohttp contributors\",\n \"aiohttp\",\n \"One line description of project.\",\n \"Miscellaneous\",\n ),\n]\n\n# Documents to append as an appendix to all manuals.\n# texinfo_appendices = []\n\n# If false, no module index is generated.\n# texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n# texinfo_show_urls = 'footnote'\n\n# If true, do not generate a @detailmenu in the \"Top\" node's menu.\n# texinfo_no_detailmenu = False\n\n\n# -------------------------------------------------------------------------\nnitpicky = True\nnitpick_ignore = [\n (\"py:mod\", \"aiohttp\"), # undocumented, no `.. 
currentmodule:: aiohttp` in docs\n (\"py:class\", \"aiohttp.SimpleCookie\"), # undocumented\n (\"py:class\", \"aiohttp.web.RequestHandler\"), # undocumented\n (\"py:class\", \"aiohttp.NamedPipeConnector\"), # undocumented\n (\"py:meth\", \"aiohttp.ClientSession.request\"), # undocumented\n (\"py:class\", \"aiohttp.protocol.HttpVersion\"), # undocumented\n (\"py:class\", \"aiohttp.ClientRequest\"), # undocumented\n (\"py:class\", \"aiohttp.payload.Payload\"), # undocumented\n (\"py:class\", \"aiohttp.abc.AbstractResolver\"), # undocumented\n (\"py:func\", \"aiohttp.ws_connect\"), # undocumented\n (\"py:meth\", \"start\"), # undocumented\n (\"py:exc\", \"aiohttp.ServerDisconnectionError\"), # undocumented\n (\"py:exc\", \"aiohttp.ClientHttpProxyError\"), # undocumented\n (\"py:class\", \"asyncio.AbstractServer\"), # undocumented\n (\"py:mod\", \"aiohttp.test_tools\"), # undocumented\n (\"py:class\", \"list of pairs\"), # undocumented\n (\"py:class\", \"aiohttp.protocol.HttpVersion\"), # undocumented\n (\"py:meth\", \"aiohttp.ClientSession.request\"), # undocumented\n (\"py:class\", \"aiohttp.StreamWriter\"), # undocumented\n (\"py:attr\", \"aiohttp.StreamResponse.body\"), # undocumented\n (\"py:class\", \"aiohttp.payload.StringPayload\"), # undocumented\n (\"py:meth\", \"aiohttp.web.Application.copy\"), # undocumented\n (\"py:meth\", \"asyncio.AbstractEventLoop.create_server\"), # undocumented\n (\"py:data\", \"aiohttp.log.server_logger\"), # undocumented\n (\"py:data\", \"aiohttp.log.access_logger\"), # undocumented\n (\"py:data\", \"aiohttp.helpers.AccessLogger\"), # undocumented\n (\"py:attr\", \"helpers.AccessLogger.LOG_FORMAT\"), # undocumented\n (\"py:meth\", \"aiohttp.web.AbstractRoute.url\"), # undocumented\n (\"py:class\", \"aiohttp.web.MatchedSubAppResource\"), # undocumented\n (\"py:attr\", \"body\"), # undocumented\n (\"py:class\", \"socket.socket\"), # undocumented\n (\"py:obj\", \"logging.DEBUG\"), # undocumented\n (\"py:class\", \"aiohttp.abc.AbstractAsyncAccessLogger\"), # undocumented\n (\"py:meth\", \"aiohttp.web.Response.write_eof\"), # undocumented\n (\"py:meth\", \"aiohttp.payload.Payload.set_content_disposition\"), # undocumented\n (\"py:class\", \"cgi.FieldStorage\"), # undocumented\n (\"py:meth\", \"aiohttp.web.UrlDispatcher.register_resource\"), # undocumented\n (\"py:func\", \"aiohttp_debugtoolbar.setup\"), # undocumented\n (\"py:exc\", \"HTTPInternalServerError\"), # undocumented\n (\"py:exc\", \"HTTPForbidden\"), # undocumented\n (\"py:exc\", \"HTTPExpectationFailed\"), # undocumented\n (\"py:class\", \"HTTPFound\"), # undocumented\n (\"py:class\", \"HTTPMultipleChoices\"), # undocumented\n (\"py:class\", \"HTTPMovedPermanently\"), # undocumented\n (\"py:class\", \"HTTPSeeOther\"), # undocumented\n (\"py:class\", \"HTTPUseProxy\"), # undocumented\n (\"py:class\", \"HTTPTemporaryRedirect\"), # undocumented\n (\"py:class\", \"HTTPMethodNotAllowed\"), # undocumented\n (\"py:class\", \"FileResponse\"), # undocumented\n (\"py:exc\", \"HTTPNotFound\"), # undocumented\n (\"py:exc\", \"HTTPMethodNotAllowed\"), # undocumented\n (\"py:class\", \"HTTPMethodNotAllowed\"), # undocumented\n]\n", "path": "docs/conf.py"}]} |
gh_patches_debug_1142 | rasdani/github-patches | git_diff | Qiskit__qiskit-7972 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
qasm3 exporter wrong placement of `input` declaration
### Environment
- **Qiskit Terra version**: 0.20.0
- **Python version**: 3.8.13
- **Operating system**: macOS Monterey 12.3.1
### What is happening?
When exporting parametrized `QuantumCircuit`s with custom gates to a QASM3 string, the exporter wrongly places the circuit's `input` declaration after the `gate` declarations, which does not conform to the OpenQASM 3.0 grammar - IO declarations should appear in the [header](https://github.com/Qiskit/openqasm/blob/c39eac9f5c87b80df9e8eaeced96cbf4b477e5d2/source/grammar/qasm3.g4#L12).
### How can we reproduce the issue?
Run the following:
```python
from qiskit import QuantumCircuit, qasm3
from qiskit.circuit import Parameter
theta = Parameter("theta")
inner_qc = QuantumCircuit(1)
inner_qc.rz(theta, 0)
qc = QuantumCircuit(1)
qc.append(inner_qc.to_gate(), range(1))
print(qasm3.dumps(qc))
```
The resulting OpenQASM 3.0 code is then:
```qasm
OPENQASM 3;
include "stdgates.inc";
gate circuit_21(theta) _gate_q_0 {
rz(theta) _gate_q_0;
}
input float[64] theta;
qubit[1] _all_qubits;
let q = _all_qubits[0:0];
circuit_21(theta) q[0];
```
### What should happen?
The expected output according to the grammar is:
```qasm
OPENQASM 3;
include "stdgates.inc";
input float[64] theta;
gate circuit_21(theta) _gate_q_0 {
rz(theta) _gate_q_0;
}
qubit[1] _all_qubits;
let q = _all_qubits[0:0];
circuit_21(theta) q[0];
```
### Any suggestions?
_No response_
--- END ISSUE ---
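
As a quick sanity check of the report, here is a minimal sketch (not taken from the issue or the repository, and assuming a Qiskit installation that provides `qiskit.qasm3.dumps`): it re-runs the reproduction above and asserts that the `input` declaration precedes the first `gate` definition, so it fails on the misordered output shown in the issue and passes once the declaration is emitted in the header.

```python
# Sketch only: verify that the exported program declares `input` before any `gate`.
from qiskit import QuantumCircuit, qasm3
from qiskit.circuit import Parameter

theta = Parameter("theta")
inner_qc = QuantumCircuit(1)
inner_qc.rz(theta, 0)

qc = QuantumCircuit(1)
qc.append(inner_qc.to_gate(), range(1))

lines = qasm3.dumps(qc).splitlines()
first_input = next(i for i, line in enumerate(lines) if line.startswith("input "))
first_gate = next(i for i, line in enumerate(lines) if line.startswith("gate "))

# Per the OpenQASM 3.0 grammar, IO declarations belong in the header,
# so the `input` line must appear before the first `gate` definition.
assert first_input < first_gate, "input declaration must precede gate definitions"
```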
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `qiskit/qasm3/exporter.py`
Content:
```
1 # This code is part of Qiskit.
2 #
3 # (C) Copyright IBM 2021.
4 #
5 # This code is licensed under the Apache License, Version 2.0. You may
6 # obtain a copy of this license in the LICENSE.txt file in the root directory
7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
8 #
9 # Any modifications or derivative works of this code must retain this
10 # copyright notice, and modified files need to carry a notice indicating
11 # that they have been altered from the originals.
12
13 """QASM3 Exporter"""
14
15 import collections
16 import io
17 import itertools
18 import numbers
19 from os.path import dirname, join, abspath
20 from typing import Iterable, List, Sequence, Union
21
22 from qiskit.circuit import (
23 Barrier,
24 Clbit,
25 Gate,
26 Instruction,
27 Measure,
28 Parameter,
29 ParameterExpression,
30 QuantumCircuit,
31 QuantumRegister,
32 Qubit,
33 Reset,
34 Delay,
35 )
36 from qiskit.circuit.bit import Bit
37 from qiskit.circuit.controlflow import (
38 IfElseOp,
39 ForLoopOp,
40 WhileLoopOp,
41 ControlFlowOp,
42 BreakLoopOp,
43 ContinueLoopOp,
44 )
45 from qiskit.circuit.library import standard_gates
46 from qiskit.circuit.register import Register
47 from qiskit.circuit.tools import pi_check
48
49 from . import ast
50 from .exceptions import QASM3ExporterError
51 from .printer import BasicPrinter
52
53
54 # Reserved keywords that gates and variables cannot be named. It is possible that some of these
55 # _could_ be accepted as variable names by OpenQASM 3 parsers, but it's safer for us to just be very
56 # conservative.
57 _RESERVED_KEYWORDS = frozenset(
58 {
59 "OPENQASM",
60 "U",
61 "angle",
62 "array",
63 "barrier",
64 "bit",
65 "bool",
66 "box",
67 "break",
68 "cal",
69 "complex",
70 "const",
71 "continue",
72 "creg",
73 "ctrl",
74 "def",
75 "defcal",
76 "defcalgrammar",
77 "delay",
78 "duration",
79 "durationof",
80 "else",
81 "end",
82 "extern",
83 "float",
84 "for",
85 "gate",
86 "gphase",
87 "if",
88 "in",
89 "include",
90 "input",
91 "int",
92 "inv",
93 "let",
94 "measure",
95 "mutable",
96 "negctrl",
97 "output",
98 "pow",
99 "qreg",
100 "qubit",
101 "reset",
102 "return",
103 "sizeof",
104 "stretch",
105 "uint",
106 "while",
107 }
108 )
109
110
111 class Exporter:
112     """QASM3 exporter main class."""
113
114 def __init__(
115 self,
116 includes: Sequence[str] = ("stdgates.inc",),
117 basis_gates: Sequence[str] = ("U",),
118 disable_constants: bool = False,
119 alias_classical_registers: bool = False,
120 indent: str = " ",
121 ):
122 """
123 Args:
124 includes: the filenames that should be emitted as includes. These files will be parsed
125 for gates, and any objects dumped from this exporter will use those definitions
126 where possible.
127 basis_gates: the basic defined gate set of the backend.
128 disable_constants: if ``True``, always emit floating-point constants for numeric
129 parameter values. If ``False`` (the default), then values close to multiples of
130 QASM 3 constants (``pi``, ``euler``, and ``tau``) will be emitted in terms of those
131 constants instead, potentially improving accuracy in the output.
132 alias_classical_registers: If ``True``, then classical bit and classical register
133 declarations will look similar to quantum declarations, where the whole set of bits
134 will be declared in a flat array, and the registers will just be aliases to
135 collections of these bits. This is inefficient for running OpenQASM 3 programs,
136 however, and may not be well supported on backends. Instead, the default behaviour
137 of ``False`` means that individual classical registers will gain their own
138 ``bit[size] register;`` declarations, and loose :obj:`.Clbit`\\ s will go onto their
139 own declaration. In this form, each :obj:`.Clbit` must be in either zero or one
140 :obj:`.ClassicalRegister`\\ s.
141 indent: the indentation string to use for each level within an indented block. Can be
142 set to the empty string to disable indentation.
143 """
144 self.basis_gates = basis_gates
145 self.disable_constants = disable_constants
146 self.alias_classical_registers = alias_classical_registers
147 self.includes = list(includes)
148 self.indent = indent
149
150 def dumps(self, circuit):
151 """Convert the circuit to QASM 3, returning the result as a string."""
152 with io.StringIO() as stream:
153 self.dump(circuit, stream)
154 return stream.getvalue()
155
156 def dump(self, circuit, stream):
157 """Convert the circuit to QASM 3, dumping the result to a file or text stream."""
158 builder = QASM3Builder(
159 circuit,
160 includeslist=self.includes,
161 basis_gates=self.basis_gates,
162 disable_constants=self.disable_constants,
163 alias_classical_registers=self.alias_classical_registers,
164 )
165 BasicPrinter(stream, indent=self.indent).visit(builder.build_program())
166
167
168 class GlobalNamespace:
169 """Global namespace dict-like."""
170
171 qiskit_gates = {
172 "p": standard_gates.PhaseGate,
173 "x": standard_gates.XGate,
174 "y": standard_gates.YGate,
175 "z": standard_gates.ZGate,
176 "h": standard_gates.HGate,
177 "s": standard_gates.SGate,
178 "sdg": standard_gates.SdgGate,
179 "t": standard_gates.TGate,
180 "tdg": standard_gates.TdgGate,
181 "sx": standard_gates.SXGate,
182 "rx": standard_gates.RXGate,
183 "ry": standard_gates.RYGate,
184 "rz": standard_gates.RZGate,
185 "cx": standard_gates.CXGate,
186 "cy": standard_gates.CYGate,
187 "cz": standard_gates.CZGate,
188 "cp": standard_gates.CPhaseGate,
189 "crx": standard_gates.CRXGate,
190 "cry": standard_gates.CRYGate,
191 "crz": standard_gates.CRZGate,
192 "ch": standard_gates.CHGate,
193 "swap": standard_gates.SwapGate,
194 "ccx": standard_gates.CCXGate,
195 "cswap": standard_gates.CSwapGate,
196 "cu": standard_gates.CUGate,
197 "CX": standard_gates.CXGate,
198 "phase": standard_gates.PhaseGate,
199 "cphase": standard_gates.CPhaseGate,
200 "id": standard_gates.IGate,
201 "u1": standard_gates.U1Gate,
202 "u2": standard_gates.U2Gate,
203 "u3": standard_gates.U3Gate,
204 }
205 include_paths = [abspath(join(dirname(__file__), "..", "qasm", "libs"))]
206
207 def __init__(self, includelist, basis_gates=()):
208 self._data = {gate: None for gate in basis_gates}
209
210 for includefile in includelist:
211 if includefile == "stdgates.inc":
212 self._data.update(self.qiskit_gates)
213 else:
214 # TODO What do if an inc file is not standard?
215 # Should it be parsed?
216 pass
217
218 def __setitem__(self, name_str, instruction):
219 self._data[name_str] = type(instruction)
220 self._data[id(instruction)] = name_str
221
222 def __getitem__(self, key):
223 if isinstance(key, Instruction):
224 try:
225 # Registered gates.
226 return self._data[id(key)]
227 except KeyError:
228 pass
229 # Built-in gates.
230 if key.name not in self._data:
231 raise KeyError(key)
232 return key.name
233 return self._data[key]
234
235 def __iter__(self):
236 return iter(self._data)
237
238 def __contains__(self, instruction):
239 if isinstance(instruction, standard_gates.UGate):
240 return True
241 if id(instruction) in self._data:
242 return True
243 if type(instruction) in [Gate, Instruction]: # user-defined instructions/gate
244 return self._data.get(instruction.name, None) == instruction
245 if instruction.name in self._data:
246 if self._data.get(instruction.name) is None: # it is a basis gate:
247 return True
248 if isinstance(instruction, self._data.get(instruction.name)):
249 return True
250 return False
251
252 def register(self, instruction):
253 """Register an instruction in the namespace"""
254 # The second part of the condition is a nasty hack to ensure that gates that come with at
255         # least one parameter always have their id in the name. This is a workaround for a bug, where
256 # gates with parameters do not contain the information required to build the gate definition
257 # in symbolic form (unless the parameters are all symbolic). The exporter currently
258 # (2021-12-01) builds gate declarations with parameters in the signature, but then ignores
259 # those parameters during the body, and just uses the concrete values from the first
260 # instance of the gate it sees, such as:
261 # gate rzx(_gate_p_0) _gate_q_0, _gate_q_1 {
262 # h _gate_q_1;
263 # cx _gate_q_0, _gate_q_1;
264 # rz(0.2) _gate_q_1; // <- note the concrete value.
265 # cx _gate_q_0, _gate_q_1;
266 # h _gate_q_1;
267 # }
268 # This then means that multiple calls to the same gate with different parameters will be
269 # incorrect. By forcing all gates to be defined including their id, we generate a QASM3
270 # program that does what was intended, even though the output QASM3 is silly. See gh-7335.
271 if instruction.name in self._data or (
272 isinstance(instruction, Gate)
273 and not all(isinstance(param, Parameter) for param in instruction.params)
274 ):
275 key = f"{instruction.name}_{id(instruction)}"
276 else:
277 key = instruction.name
278 self[key] = instruction
279
280
281 # A _Scope is the structure used in the builder to store the contexts and re-mappings of bits from
282 # the top-level scope where the bits were actually defined. In the class, 'circuit' is an instance
283 # of QuantumCircuit that defines this level, and 'bit_map' is a mapping of 'Bit: Bit', where the
284 # keys are bits in the circuit in this scope, and the values are the Bit in the top-level scope in
285 # this context that this bit actually represents. 'symbol_map' is a bidirectional mapping of
286 # '<Terra object>: Identifier' and 'str: <Terra object>', where the string in the second map is the
287 # name of the identifier. This is a cheap hack around actually implementing a proper symbol table.
288 _Scope = collections.namedtuple("_Scope", ("circuit", "bit_map", "symbol_map"))
289
290
291 class QASM3Builder:
292 """QASM3 builder constructs an AST from a QuantumCircuit."""
293
294 builtins = (Barrier, Measure, Reset, Delay, BreakLoopOp, ContinueLoopOp)
295 gate_parameter_prefix = "_gate_p"
296 gate_qubit_prefix = "_gate_q"
297
298 def __init__(
299 self,
300 quantumcircuit,
301 includeslist,
302 basis_gates,
303 disable_constants,
304 alias_classical_registers,
305 ):
306 # This is a stack of stacks; the outer stack is a list of "outer" look-up contexts, and the
307 # inner stack is for scopes within these. A "outer" look-up context in this sense means
308 # the main program body or a gate/subroutine definition, whereas the scopes are for things
309 # like the body of a ``for`` loop construct.
310 self._circuit_ctx = []
311 self.push_context(quantumcircuit)
312 self.includeslist = includeslist
313 self._gate_to_declare = {}
314 self._subroutine_to_declare = {}
315 self._opaque_to_declare = {}
316 self._flat_reg = False
317 self._physical_qubit = False
318 self._loose_clbit_index_lookup = {}
319 # An arbitrary counter to help with generation of unique ids for symbol names when there are
320 # clashes (though we generally prefer to keep user names if possible).
321 self._counter = itertools.count()
322 self.disable_constants = disable_constants
323 self.alias_classical_registers = alias_classical_registers
324 self.global_namespace = GlobalNamespace(includeslist, basis_gates)
325
326 def _register_gate(self, gate):
327 self.global_namespace.register(gate)
328 self._gate_to_declare[id(gate)] = gate
329
330 def _register_subroutine(self, instruction):
331 self.global_namespace.register(instruction)
332 self._subroutine_to_declare[id(instruction)] = instruction
333
334 def _register_opaque(self, instruction):
335 if instruction not in self.global_namespace:
336 self.global_namespace.register(instruction)
337 self._opaque_to_declare[id(instruction)] = instruction
338
339 def _register_variable(self, variable, name=None) -> ast.Identifier:
340 """Register a variable in the symbol table for the current scope, returning the name that
341 should be used to refer to the variable. The same name will be returned by subsequent calls
342 to :meth:`_lookup_variable` within the same scope.
343
344 If ``name`` is given explicitly, it must not already be defined in the scope.
345 """
346 # Note that the registration only checks for the existence of a variable that was declared
347 # in the current scope, not just one that's available. This is a rough implementation of
348 # the shadowing proposal currently being drafted for OpenQASM 3, though we expect it to be
349 # expanded and modified in the future (2022-03-07).
350 table = self.current_scope().symbol_map
351 if name is not None:
352 if name in _RESERVED_KEYWORDS:
353 raise QASM3ExporterError(f"cannot reserve the keyword '{name}' as a variable name")
354 if name in table:
355 raise QASM3ExporterError(
356 f"tried to reserve '{name}', but it is already used by '{table[name]}'"
357 )
358 else:
359 name = variable.name
360 while name in table or name in _RESERVED_KEYWORDS:
361 name = f"{variable.name}__generated{next(self._counter)}"
362 identifier = ast.Identifier(name)
363 table[identifier.string] = variable
364 table[variable] = identifier
365 return identifier
366
367 def _reserve_variable_name(self, name: ast.Identifier) -> ast.Identifier:
368 """Reserve a variable name in the current scope, raising a :class:`.QASM3ExporterError` if
369 the name is already in use.
370
371 This is useful for autogenerated names that the exporter itself reserves when dealing with
372 objects that have no standard Terra object backing them, such as the declaration of all
373 circuit qubits, so cannot be placed into the symbol table by the normal means.
374
375 Returns the same identifier, for convenience in chaining."""
376 table = self.current_scope().symbol_map
377 if name.string in table:
378 variable = table[name.string]
379 raise QASM3ExporterError(
380 f"tried to reserve '{name.string}', but it is already used by '{variable}'"
381 )
382 table[name.string] = "<internal object>"
383 return name
384
385 def _lookup_variable(self, variable) -> ast.Identifier:
386 """Lookup a Terra object within the current context, and return the name that should be used
387 to represent it in OpenQASM 3 programmes."""
388 for scope in reversed(self.current_context()):
389 if variable in scope.symbol_map:
390 return scope.symbol_map[variable]
391 raise KeyError(f"'{variable}' is not defined in the current context")
392
393 def build_header(self):
394 """Builds a Header"""
395 version = ast.Version("3")
396 includes = self.build_includes()
397 return ast.Header(version, includes)
398
399 def build_program(self):
400 """Builds a Program"""
401 self.hoist_declarations(self.global_scope(assert_=True).circuit.data)
402 return ast.Program(self.build_header(), self.build_global_statements())
403
404 def hoist_declarations(self, instructions):
405 """Walks the definitions in gates/instructions to make a list of gates to declare."""
406 for instruction in instructions:
407 if isinstance(instruction[0], ControlFlowOp):
408 for block in instruction[0].blocks:
409 self.hoist_declarations(block.data)
410 continue
411 if instruction[0] in self.global_namespace or isinstance(instruction[0], self.builtins):
412 continue
413
414 if instruction[0].definition is None:
415 self._register_opaque(instruction[0])
416 else:
417 self.hoist_declarations(instruction[0].definition.data)
418 if isinstance(instruction[0], Gate):
419 self._register_gate(instruction[0])
420 else:
421 self._register_subroutine(instruction[0])
422
423 def global_scope(self, assert_=False):
424 """Return the global circuit scope that is used as the basis of the full program. If
425 ``assert_=True``, then this raises :obj:`.QASM3ExporterError` if the current context is not
426 the global one."""
427 if assert_ and len(self._circuit_ctx) != 1 and len(self._circuit_ctx[0]) != 1:
428 # Defensive code to help catch logic errors.
429 raise QASM3ExporterError( # pragma: no cover
430 f"Not currently in the global context. Current contexts are: {self._circuit_ctx}"
431 )
432 return self._circuit_ctx[0][0]
433
434 def current_outermost_scope(self):
435 """Return the outermost scope for this context. If building the main program, then this is
436 the :obj:`.QuantumCircuit` instance that the full program is being built from. If building
437 a gate or subroutine definition, this is the body that defines the gate or subroutine."""
438 return self._circuit_ctx[-1][0]
439
440 def current_scope(self):
441 """Return the current circuit scope."""
442 return self._circuit_ctx[-1][-1]
443
444 def current_context(self):
445 """Return the current context (list of scopes)."""
446 return self._circuit_ctx[-1]
447
448 def push_scope(self, circuit: QuantumCircuit, qubits: Iterable[Qubit], clbits: Iterable[Clbit]):
449 """Push a new scope (like a ``for`` or ``while`` loop body) onto the current context
450 stack."""
451 current_map = self.current_scope().bit_map
452 qubits = tuple(current_map[qubit] for qubit in qubits)
453 clbits = tuple(current_map[clbit] for clbit in clbits)
454 if circuit.num_qubits != len(qubits):
455 raise QASM3ExporterError( # pragma: no cover
456 f"Tried to push a scope whose circuit needs {circuit.num_qubits} qubits, but only"
457 f" provided {len(qubits)} qubits to create the mapping."
458 )
459 if circuit.num_clbits != len(clbits):
460 raise QASM3ExporterError( # pragma: no cover
461 f"Tried to push a scope whose circuit needs {circuit.num_clbits} clbits, but only"
462 f" provided {len(clbits)} clbits to create the mapping."
463 )
464 mapping = dict(itertools.chain(zip(circuit.qubits, qubits), zip(circuit.clbits, clbits)))
465 self._circuit_ctx[-1].append(_Scope(circuit, mapping, {}))
466
467 def pop_scope(self) -> _Scope:
468 """Pop the current scope (like a ``for`` or ``while`` loop body) off the current context
469 stack."""
470 if len(self._circuit_ctx[-1]) <= 1:
471 raise QASM3ExporterError( # pragma: no cover
472 "Tried to pop a scope from the current context, but there are no current scopes."
473 )
474 return self._circuit_ctx[-1].pop()
475
476 def push_context(self, outer_context: QuantumCircuit):
477 """Push a new context (like for a ``gate`` or ``def`` body) onto the stack."""
478 mapping = {bit: bit for bit in itertools.chain(outer_context.qubits, outer_context.clbits)}
479 self._circuit_ctx.append([_Scope(outer_context, mapping, {})])
480
481 def pop_context(self):
482         """Pop the current context (like for a ``gate`` or ``def`` body) off the stack."""
483 if len(self._circuit_ctx) == 1:
484 raise QASM3ExporterError( # pragma: no cover
485 "Tried to pop the current context, but that is the global context."
486 )
487 if len(self._circuit_ctx[-1]) != 1:
488 raise QASM3ExporterError( # pragma: no cover
489 "Tried to pop the current context while there are still"
490 f" {len(self._circuit_ctx[-1]) - 1} unclosed scopes."
491 )
492 self._circuit_ctx.pop()
493
494 def build_includes(self):
495 """Builds a list of included files."""
496 return [ast.Include(filename) for filename in self.includeslist]
497
498 def build_global_statements(self) -> List[ast.Statement]:
499 """
500 globalStatement
501 : subroutineDefinition
502 | kernelDeclaration
503 | quantumGateDefinition
504 | calibration
505 | quantumDeclarationStatement # build_quantumdeclaration
506 | pragma
507 ;
508
509 statement
510 : expressionStatement
511 | assignmentStatement
512 | classicalDeclarationStatement
513 | branchingStatement
514 | loopStatement
515 | endStatement
516 | aliasStatement
517 | quantumStatement # build_quantuminstruction
518 ;
519 """
520 definitions = self.build_definitions()
521 inputs, outputs, variables = self.build_variable_declarations()
522 bit_declarations = self.build_classical_declarations()
523 context = self.global_scope(assert_=True).circuit
524 if getattr(context, "_layout", None) is not None:
525 self._physical_qubit = True
526 quantum_declarations = []
527 else:
528 quantum_declarations = self.build_quantum_declarations()
529 quantum_instructions = self.build_quantum_instructions(context.data)
530 self._physical_qubit = False
531
532 return [
533 statement
534 for source in (
535 definitions,
536 inputs,
537 outputs,
538 variables,
539 bit_declarations,
540 quantum_declarations,
541 quantum_instructions,
542 )
543 for statement in source
544 ]
545
546 def build_definitions(self):
547 """Builds all the definition."""
548 ret = []
549 for instruction in self._opaque_to_declare.values():
550 ret.append(self.build_definition(instruction, self.build_opaque_definition))
551 for instruction in self._subroutine_to_declare.values():
552 ret.append(self.build_definition(instruction, self.build_subroutine_definition))
553 for instruction in self._gate_to_declare.values():
554 ret.append(self.build_definition(instruction, self.build_gate_definition))
555 return ret
556
557 def build_definition(self, instruction, builder):
558 """Using a given definition builder, builds that definition."""
559 try:
560 return instruction._define_qasm3()
561 except AttributeError:
562 pass
563 self._flat_reg = True
564 definition = builder(instruction)
565 self._flat_reg = False
566 return definition
567
568 def build_opaque_definition(self, instruction):
569 """Builds an Opaque gate definition as a CalibrationDefinition"""
570 # We can't do anything sensible with this yet, so it's better to loudly say that.
571 raise QASM3ExporterError(
572 "Exporting opaque instructions with pulse-level calibrations is not yet supported by"
573 " the OpenQASM 3 exporter. Received this instruction, which appears opaque:"
574 f"\n{instruction}"
575 )
576
577 def build_subroutine_definition(self, instruction):
578 """Builds a SubroutineDefinition"""
579 if instruction.definition.parameters:
580 # We don't yet have the type system to store the parameter types in a symbol table, and
581 # we currently don't have the correct logic in place to handle parameters correctly in
582 # the definition.
583 raise QASM3ExporterError(
584 "Exporting subroutines with parameters is not yet supported by the OpenQASM 3"
585 " exporter. Received this instruction, which appears parameterized:"
586 f"\n{instruction}"
587 )
588 name = self.global_namespace[instruction]
589 self.push_context(instruction.definition)
590 quantum_arguments = [
591 ast.QuantumArgument(
592 self._reserve_variable_name(ast.Identifier(f"{self.gate_qubit_prefix}_{n_qubit}"))
593 )
594 for n_qubit in range(len(instruction.definition.qubits))
595 ]
596 subroutine_body = ast.SubroutineBlock(
597 self.build_quantum_instructions(instruction.definition.data),
598 )
599 self.pop_context()
600 return ast.SubroutineDefinition(ast.Identifier(name), subroutine_body, quantum_arguments)
601
602 def build_gate_definition(self, gate):
603 """Builds a QuantumGateDefinition"""
604 self.push_context(gate.definition)
605 signature = self.build_gate_signature(gate)
606 body = ast.QuantumBlock(self.build_quantum_instructions(gate.definition.data))
607 self.pop_context()
608 return ast.QuantumGateDefinition(signature, body)
609
610 def build_gate_signature(self, gate):
611 """Builds a QuantumGateSignature"""
612 name = self.global_namespace[gate]
613 params = []
614 definition = gate.definition
615 # Dummy parameters
616 for num in range(len(gate.params) - len(definition.parameters)):
617 param_name = f"{self.gate_parameter_prefix}_{num}"
618 params.append(self._reserve_variable_name(ast.Identifier(param_name)))
619 params += [self._register_variable(param) for param in definition.parameters]
620 quantum_arguments = [
621 self._reserve_variable_name(ast.Identifier(f"{self.gate_qubit_prefix}_{n_qubit}"))
622 for n_qubit in range(len(definition.qubits))
623 ]
624 return ast.QuantumGateSignature(ast.Identifier(name), quantum_arguments, params or None)
625
626 def build_variable_declarations(self):
627 """Builds lists of the input, output and standard variables used in this program."""
628 inputs, outputs, variables = [], [], []
629 global_scope = self.global_scope(assert_=True).circuit
630 for parameter in global_scope.parameters:
631 parameter_name = self._register_variable(parameter)
632 declaration = _infer_variable_declaration(global_scope, parameter, parameter_name)
633 if declaration is None:
634 continue
635 if isinstance(declaration, ast.IODeclaration):
636 if declaration.modifier is ast.IOModifier.INPUT:
637 inputs.append(declaration)
638 else:
639 outputs.append(declaration)
640 else:
641 variables.append(declaration)
642 return inputs, outputs, variables
643
644 @property
645 def base_classical_register_name(self):
646 """The base register name"""
647 name = "_all_clbits" if self.alias_classical_registers else "_loose_clbits"
648 if name in self.global_namespace._data:
649 raise NotImplementedError # TODO choose a different name if there is a name collision
650 return name
651
652 @property
653 def base_quantum_register_name(self):
654 """The base register name"""
655 name = "_all_qubits"
656 if name in self.global_namespace._data:
657 raise NotImplementedError # TODO choose a different name if there is a name collision
658 return name
659
660 def build_classical_declarations(self):
661 """Return a list of AST nodes declaring all the classical bits and registers.
662
663 The behaviour of this function depends on the setting ``alias_classical_registers``. If this
664 is ``True``, then the output will be in the same form as the output of
665 :meth:`.build_classical_declarations`, with the registers being aliases. If ``False``, it
666 will instead return a :obj:`.ast.ClassicalDeclaration` for each classical register, and one
667 for the loose :obj:`.Clbit` instances, and will raise :obj:`QASM3ExporterError` if any
668 registers overlap.
669
670 This function populates the lookup table ``self._loose_clbit_index_lookup``.
671 """
672 circuit = self.current_scope().circuit
673 if self.alias_classical_registers:
674 self._loose_clbit_index_lookup = {
675 bit: index for index, bit in enumerate(circuit.clbits)
676 }
677 flat_declaration = self.build_clbit_declaration(
678 len(circuit.clbits),
679 self._reserve_variable_name(ast.Identifier(self.base_classical_register_name)),
680 )
681 return [flat_declaration] + self.build_aliases(circuit.cregs)
682 loose_register_size = 0
683 for index, bit in enumerate(circuit.clbits):
684 found_bit = circuit.find_bit(bit)
685 if len(found_bit.registers) > 1:
686 raise QASM3ExporterError(
687 f"Clbit {index} is in multiple registers, but 'alias_classical_registers' is"
688 f" False. Registers and indices: {found_bit.registers}."
689 )
690 if not found_bit.registers:
691 self._loose_clbit_index_lookup[bit] = loose_register_size
692 loose_register_size += 1
693 if loose_register_size > 0:
694 loose = [
695 self.build_clbit_declaration(
696 loose_register_size,
697 self._reserve_variable_name(ast.Identifier(self.base_classical_register_name)),
698 )
699 ]
700 else:
701 loose = []
702 return loose + [
703 self.build_clbit_declaration(len(register), self._register_variable(register))
704 for register in circuit.cregs
705 ]
706
707 def build_clbit_declaration(
708 self, n_clbits: int, name: ast.Identifier
709 ) -> ast.ClassicalDeclaration:
710 """Return a declaration of the :obj:`.Clbit`\\ s as a ``bit[n]``."""
711 return ast.ClassicalDeclaration(ast.BitArrayType(n_clbits), name)
712
713 def build_quantum_declarations(self):
714 """Return a list of AST nodes declaring all the qubits in the current scope, and all the
715 alias declarations for these qubits."""
716 return [self.build_qubit_declarations()] + self.build_aliases(
717 self.current_scope().circuit.qregs
718 )
719
720 def build_qubit_declarations(self):
721 """Return a declaration of all the :obj:`.Qubit`\\ s in the current scope."""
722 # Base register
723 return ast.QuantumDeclaration(
724 self._reserve_variable_name(ast.Identifier(self.base_quantum_register_name)),
725 ast.Designator(self.build_integer(self.current_scope().circuit.num_qubits)),
726 )
727
728 def build_aliases(self, registers: Iterable[Register]) -> List[ast.AliasStatement]:
729 """Return a list of alias declarations for the given registers. The registers can be either
730 classical or quantum."""
731 out = []
732 for register in registers:
733 elements = []
734 # Greedily consolidate runs of bits into ranges. We don't bother trying to handle
735 # steps; there's no need in generated code. Even single bits are referenced as ranges
736 # because the concatenation in an alias statement can only concatenate arraylike values.
737 start_index, prev_index = None, None
738 register_identifier = (
739 ast.Identifier(self.base_quantum_register_name)
740 if isinstance(register, QuantumRegister)
741 else ast.Identifier(self.base_classical_register_name)
742 )
743 for bit in register:
744 cur_index = self.find_bit(bit).index
745 if start_index is None:
746 start_index = cur_index
747 elif cur_index != prev_index + 1:
748 elements.append(
749 ast.SubscriptedIdentifier(
750 register_identifier,
751 ast.Range(
752 start=self.build_integer(start_index),
753 end=self.build_integer(prev_index),
754 ),
755 )
756 )
757 start_index = prev_index = cur_index
758 prev_index = cur_index
759 # After the loop, if there were any bits at all, there's always one unemitted range.
760 if len(register) != 0:
761 elements.append(
762 ast.SubscriptedIdentifier(
763 register_identifier,
764 ast.Range(
765 start=self.build_integer(start_index),
766 end=self.build_integer(prev_index),
767 ),
768 )
769 )
770 out.append(ast.AliasStatement(self._register_variable(register), elements))
771 return out
772
773 def build_quantum_instructions(self, instructions):
774 """Builds a list of call statements"""
775 ret = []
776 for instruction in instructions:
777 if isinstance(instruction[0], Gate):
778 if instruction[0].condition:
779 eqcondition = self.build_eqcondition(instruction[0].condition)
780 instruction_without_condition = instruction[0].copy()
781 instruction_without_condition.condition = None
782 true_body = self.build_program_block(
783 [(instruction_without_condition, instruction[1], instruction[2])]
784 )
785 ret.append(ast.BranchingStatement(eqcondition, true_body))
786 else:
787 ret.append(self.build_gate_call(instruction))
788 elif isinstance(instruction[0], Barrier):
789 operands = [self.build_single_bit_reference(operand) for operand in instruction[1]]
790 ret.append(ast.QuantumBarrier(operands))
791 elif isinstance(instruction[0], Measure):
792 measurement = ast.QuantumMeasurement(
793 [self.build_single_bit_reference(operand) for operand in instruction[1]]
794 )
795 qubit = self.build_single_bit_reference(instruction[2][0])
796 ret.append(ast.QuantumMeasurementAssignment(qubit, measurement))
797 elif isinstance(instruction[0], Reset):
798 for operand in instruction[1]:
799 ret.append(ast.QuantumReset(self.build_single_bit_reference(operand)))
800 elif isinstance(instruction[0], Delay):
801 ret.append(self.build_delay(*instruction))
802 elif isinstance(instruction[0], ForLoopOp):
803 ret.append(self.build_for_loop(*instruction))
804 elif isinstance(instruction[0], WhileLoopOp):
805 ret.append(self.build_while_loop(*instruction))
806 elif isinstance(instruction[0], IfElseOp):
807 ret.append(self.build_if_statement(*instruction))
808 elif isinstance(instruction[0], BreakLoopOp):
809 ret.append(ast.BreakStatement())
810 elif isinstance(instruction[0], ContinueLoopOp):
811 ret.append(ast.ContinueStatement())
812 else:
813 ret.append(self.build_subroutine_call(instruction))
814 return ret
815
816 def build_if_statement(
817 self, instruction: IfElseOp, qubits: Iterable[Qubit], clbits: Iterable[Clbit]
818 ) -> ast.BranchingStatement:
819 """Build an :obj:`.IfElseOp` into a :obj:`.ast.BranchingStatement`."""
820 condition = self.build_eqcondition(instruction.condition)
821
822 true_circuit = instruction.blocks[0]
823 self.push_scope(true_circuit, qubits, clbits)
824 true_body = self.build_program_block(true_circuit.data)
825 self.pop_scope()
826 if len(instruction.blocks) == 1:
827 return ast.BranchingStatement(condition, true_body, None)
828
829 false_circuit = instruction.blocks[1]
830 self.push_scope(false_circuit, qubits, clbits)
831 false_body = self.build_program_block(false_circuit.data)
832 self.pop_scope()
833 return ast.BranchingStatement(condition, true_body, false_body)
834
835 def build_while_loop(
836 self, instruction: WhileLoopOp, qubits: Iterable[Qubit], clbits: Iterable[Clbit]
837 ) -> ast.WhileLoopStatement:
838 """Build a :obj:`.WhileLoopOp` into a :obj:`.ast.WhileLoopStatement`."""
839 condition = self.build_eqcondition(instruction.condition)
840 loop_circuit = instruction.blocks[0]
841 self.push_scope(loop_circuit, qubits, clbits)
842 loop_body = self.build_program_block(loop_circuit.data)
843 self.pop_scope()
844 return ast.WhileLoopStatement(condition, loop_body)
845
846 def build_for_loop(
847 self, instruction: ForLoopOp, qubits: Iterable[Qubit], clbits: Iterable[Clbit]
848 ) -> ast.ForLoopStatement:
849 """Build a :obj:`.ForLoopOp` into a :obj:`.ast.ForLoopStatement`."""
850 indexset, loop_parameter, loop_circuit = instruction.params
851 self.push_scope(loop_circuit, qubits, clbits)
852 if loop_parameter is None:
853 # The loop parameter is implicitly declared by the ``for`` loop (see also
854 # _infer_parameter_declaration), so it doesn't matter that we haven't declared this.
855 loop_parameter_ast = self._reserve_variable_name(ast.Identifier("_"))
856 else:
857 loop_parameter_ast = self._register_variable(loop_parameter)
858 if isinstance(indexset, range):
859 # QASM 3 uses inclusive ranges on both ends, unlike Python.
860 indexset_ast = ast.Range(
861 start=self.build_integer(indexset.start),
862 end=self.build_integer(indexset.stop - 1),
863 step=self.build_integer(indexset.step) if indexset.step != 1 else None,
864 )
865 else:
866 try:
867 indexset_ast = ast.IndexSet([self.build_integer(value) for value in indexset])
868 except QASM3ExporterError:
869 raise QASM3ExporterError(
870 "The values in QASM 3 'for' loops must all be integers, but received"
871 f" '{indexset}'."
872 ) from None
873 body_ast = self.build_program_block(loop_circuit)
874 self.pop_scope()
875 return ast.ForLoopStatement(indexset_ast, loop_parameter_ast, body_ast)
876
877 def build_delay(
878 self, instruction: Delay, qubits: Iterable[Qubit], clbits: Iterable[Clbit]
879 ) -> ast.QuantumDelay:
880 """Build a built-in delay statement."""
881 clbits = tuple(clbits)
882 if clbits:
883 raise QASM3ExporterError(
884 f"Found a delay instruction acting on classical bits: {instruction} on {clbits}"
885 )
886 if instruction.unit == "ps":
887 duration = ast.DurationLiteral(1000 * instruction.duration, ast.DurationUnit.NANOSECOND)
888 else:
889 unit_map = {
890 "ns": ast.DurationUnit.NANOSECOND,
891 "us": ast.DurationUnit.MICROSECOND,
892 "ms": ast.DurationUnit.MILLISECOND,
893 "s": ast.DurationUnit.SECOND,
894 "dt": ast.DurationUnit.SAMPLE,
895 }
896 duration = ast.DurationLiteral(instruction.duration, unit_map[instruction.unit])
897 return ast.QuantumDelay(
898 duration, [self.build_single_bit_reference(qubit) for qubit in qubits]
899 )
900
901 def build_integer(self, value) -> ast.Integer:
902 """Build an integer literal, raising a :obj:`.QASM3ExporterError` if the input is not
903 actually an
904 integer."""
905 if not isinstance(value, numbers.Integral):
906 # This is meant to be purely defensive, in case a non-integer slips into the logic
907 # somewhere, but no valid Terra object should trigger this.
908 raise QASM3ExporterError(f"'{value}' is not an integer") # pragma: no cover
909 return ast.Integer(int(value))
910
911 def build_program_block(self, instructions):
912 """Builds a ProgramBlock"""
913 return ast.ProgramBlock(self.build_quantum_instructions(instructions))
914
915 def build_eqcondition(self, condition):
916 """Classical Conditional condition from a instruction.condition"""
917 if isinstance(condition[0], Clbit):
918 condition_on = self.build_single_bit_reference(condition[0])
919 else:
920 condition_on = self._lookup_variable(condition[0])
921 return ast.ComparisonExpression(
922 condition_on, ast.EqualsOperator(), self.build_integer(condition[1])
923 )
924
925 def _rebind_scoped_parameters(self, expression):
926 """If the input is a :class:`.ParameterExpression`, rebind any internal
927 :class:`.Parameter`\\ s so that their names match their names in the scope. Other inputs
928 are returned unchanged."""
929 # This is a little hacky, but the entirety of the Expression handling is essentially
930 # missing, pending a new system in Terra to replace it (2022-03-07).
931 if not isinstance(expression, ParameterExpression):
932 return expression
933 return expression.subs(
934 {
935 param: Parameter(self._lookup_variable(param).string)
936 for param in expression.parameters
937 }
938 )
939
940 def build_gate_call(self, instruction):
941 """Builds a QuantumGateCall"""
942 if isinstance(instruction[0], standard_gates.UGate):
943 gate_name = ast.Identifier("U")
944 else:
945 gate_name = ast.Identifier(self.global_namespace[instruction[0]])
946 qubits = [self.build_single_bit_reference(qubit) for qubit in instruction[1]]
947 if self.disable_constants:
948 parameters = [
949 ast.Expression(self._rebind_scoped_parameters(param))
950 for param in instruction[0].params
951 ]
952 else:
953 parameters = [
954 ast.Expression(pi_check(self._rebind_scoped_parameters(param), output="qasm"))
955 for param in instruction[0].params
956 ]
957
958 return ast.QuantumGateCall(gate_name, qubits, parameters=parameters)
959
960 def build_subroutine_call(self, instruction):
961 """Builds a SubroutineCall"""
962 identifier = ast.Identifier(self.global_namespace[instruction[0]])
963 expressions = [ast.Expression(param) for param in instruction[0].params]
964 # TODO: qubits should go inside the brackets of subroutine calls, but neither Terra nor the
965 # AST here really support the calls, so there's no sensible way of writing it yet.
966 bits = [self.build_single_bit_reference(bit) for bit in instruction[1]]
967 return ast.SubroutineCall(identifier, bits, expressions)
968
969 def build_single_bit_reference(self, bit: Bit) -> ast.Identifier:
970 """Get an identifier node that refers to one particular bit."""
971 found_bit = self.find_bit(bit)
972 if self._physical_qubit and isinstance(bit, Qubit):
973 return ast.PhysicalQubitIdentifier(ast.Identifier(str(found_bit.index)))
974 if self._flat_reg:
975 return ast.Identifier(f"{self.gate_qubit_prefix}_{found_bit.index}")
976 if found_bit.registers:
977 # We preferentially return a reference via a register in the hope that this is what the
978 # user is used to seeing as well.
979 register, index = found_bit.registers[0]
980 return ast.SubscriptedIdentifier(
981 self._lookup_variable(register), self.build_integer(index)
982 )
983 # Otherwise reference via the list of all qubits, or the list of loose clbits.
984 if isinstance(bit, Qubit):
985 return ast.SubscriptedIdentifier(
986 ast.Identifier(self.base_quantum_register_name), self.build_integer(found_bit.index)
987 )
988 return ast.SubscriptedIdentifier(
989 ast.Identifier(self.base_classical_register_name),
990 self.build_integer(self._loose_clbit_index_lookup[bit]),
991 )
992
993 def find_bit(self, bit: Bit):
994 """Look up the bit using :meth:`.QuantumCircuit.find_bit` in the current outermost scope."""
995 # This is a hacky work-around for now. Really this should be a proper symbol-table lookup,
996 # but with us expecting to put in a whole new AST for Terra 0.20, this should be sufficient
997 # for the use-cases we support. (Jake, 2021-11-22.)
998 if len(self.current_context()) > 1:
999 ancestor_bit = self.current_scope().bit_map[bit]
1000 return self.current_outermost_scope().circuit.find_bit(ancestor_bit)
1001 return self.current_scope().circuit.find_bit(bit)
1002
1003
1004 def _infer_variable_declaration(
1005 circuit: QuantumCircuit, parameter: Parameter, parameter_name: ast.Identifier
1006 ) -> Union[ast.ClassicalDeclaration, None]:
1007 """Attempt to infer what type a parameter should be declared as to work with a circuit.
1008
1009 This is very simplistic; it assumes all parameters are real numbers that need to be input to the
1010 program, unless one is used as a loop variable, in which case it shouldn't be declared at all,
1011 because the ``for`` loop declares it implicitly (per the Qiskit/QSS reading of the OpenQASM
1012 spec at Qiskit/openqasm@8ee55ec).
1013
1014 .. note::
1015
1016 This is a hack around not having a proper type system implemented in Terra, and really this
1017 whole function should be removed in favour of proper symbol-table building and lookups.
1018 This function is purely to try and hack the parameters for ``for`` loops into the exporter
1019 for now.
1020
1021 Args:
1022 circuit: The global-scope circuit, which is the base of the exported program.
1023 parameter: The parameter to infer the type of.
1024 parameter_name: The name of the parameter to use in the declaration.
1025
1026 Returns:
1027 A suitable :obj:`.ast.ClassicalDeclaration` node, or, if the parameter should *not* be
1028 declared, then ``None``.
1029 """
1030
1031 def is_loop_variable(circuit, parameter):
1032 """Recurse into the instructions a parameter is used in, checking at every level if it is
1033 used as the loop variable of a ``for`` loop."""
1034 # This private access is hacky, and shouldn't need to happen; the type of a parameter
1035 # _should_ be an intrinsic part of the parameter, or somewhere publicly accessible, but
1036 # Terra doesn't have those concepts yet. We can only try and guess at the type by looking
1037 # at all the places it's used in the circuit.
1038 for instruction, index in circuit._parameter_table[parameter]:
1039 if isinstance(instruction, ForLoopOp):
1040 # The parameters of ForLoopOp are (indexset, loop_parameter, body).
1041 if index == 1:
1042 return True
1043 if isinstance(instruction, ControlFlowOp):
1044 if is_loop_variable(instruction.params[index], parameter):
1045 return True
1046 return False
1047
1048 if is_loop_variable(circuit, parameter):
1049 return None
1050 # Arbitrary choice of double-precision float for all other parameters, but it's what we actually
1051 # expect people to be binding to their Parameters right now.
1052 return ast.IODeclaration(ast.IOModifier.INPUT, ast.FloatType.DOUBLE, parameter_name)
1053
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch in the `git diff` format, fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/qiskit/qasm3/exporter.py b/qiskit/qasm3/exporter.py
--- a/qiskit/qasm3/exporter.py
+++ b/qiskit/qasm3/exporter.py
@@ -532,9 +532,9 @@
return [
statement
for source in (
- definitions,
inputs,
outputs,
+ definitions,
variables,
bit_declarations,
quantum_declarations,
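
For context, the hunk above only reorders the statement sources that `build_global_statements` concatenates, so the `input`/`output` declarations are emitted before the hoisted `gate` definitions, which is where the OpenQASM 3 grammar expects IO declarations to appear. A quick way to check the new ordering is the reproduction from the issue itself; this is a sketch only, and the exact emitted text may differ between Terra versions:

```python
# Reproduction taken from the issue: a parametrized custom gate forces the exporter
# to emit both a `gate` definition and an `input float[64] theta;` declaration,
# which makes the statement ordering in the printed OpenQASM 3 easy to inspect.
from qiskit import QuantumCircuit, qasm3
from qiskit.circuit import Parameter

theta = Parameter("theta")
inner_qc = QuantumCircuit(1)
inner_qc.rz(theta, 0)

qc = QuantumCircuit(1)
qc.append(inner_qc.to_gate(), range(1))

# After the reordering, the `input` declaration should precede the gate block.
print(qasm3.dumps(qc))
```

With the patch applied, the `input float[64] theta;` line is expected to appear directly after the `include` statement and before the `gate` definition, matching the expected output quoted in the issue.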
| {"golden_diff": "diff --git a/qiskit/qasm3/exporter.py b/qiskit/qasm3/exporter.py\n--- a/qiskit/qasm3/exporter.py\n+++ b/qiskit/qasm3/exporter.py\n@@ -532,9 +532,9 @@\n return [\n statement\n for source in (\n- definitions,\n inputs,\n outputs,\n+ definitions,\n variables,\n bit_declarations,\n quantum_declarations,\n", "issue": "qasm3 exporter wrong placement of `input` declaration\n### Environment\r\n\r\n- **Qiskit Terra version**: 0.20.0\r\n- **Python version**: 3.8.13\r\n- **Operating system**: macOS Monterey 12.3.1\r\n\r\n\r\n### What is happening?\r\n\r\nWhen exporting parametrized `QuantumCircuit`s with custom gates to a QASM3 string, the exporter wrongly places the circuit's `input` declaration after the `gate` declarations, which is not according to the OpenQASM3.0 grammar - IO declarations should appear in the [header](https://github.com/Qiskit/openqasm/blob/c39eac9f5c87b80df9e8eaeced96cbf4b477e5d2/source/grammar/qasm3.g4#L12).\r\n\r\n### How can we reproduce the issue?\r\n\r\nRun the following:\r\n\r\n```python\r\nfrom qiskit import QuantumCircuit, qasm3\r\nfrom qiskit.circuit import Parameter\r\n\r\ntheta = Parameter(\"theta\")\r\ninner_qc = QuantumCircuit(1)\r\ninner_qc.rz(theta, 0)\r\nqc = QuantumCircuit(1)\r\nqc.append(inner_qc.to_gate(), range(1))\r\nprint(qasm3.dumps(qc))\r\n```\r\nThe resulting OpenQASM3.0 code is then:\r\n```qasm\r\nOPENQASM 3;\r\ninclude \"stdgates.inc\";\r\ngate circuit_21(theta) _gate_q_0 {\r\n rz(theta) _gate_q_0;\r\n}\r\ninput float[64] theta;\r\nqubit[1] _all_qubits;\r\nlet q = _all_qubits[0:0];\r\ncircuit_21(theta) q[0];\r\n```\r\n\r\n### What should happen?\r\n\r\nThe expected output according to the grammar is:\r\n```qasm\r\nOPENQASM 3;\r\ninclude \"stdgates.inc\";\r\ninput float[64] theta;\r\ngate circuit_21(theta) _gate_q_0 {\r\n rz(theta) _gate_q_0;\r\n}\r\nqubit[1] _all_qubits;\r\nlet q = _all_qubits[0:0];\r\ncircuit_21(theta) q[0];\r\n```\r\n\r\n### Any suggestions?\r\n\r\n_No response_\n", "before_files": [{"content": "# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2021.\n#\n# This code is licensed under the Apache License, Version 2.0. You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\n\"\"\"QASM3 Exporter\"\"\"\n\nimport collections\nimport io\nimport itertools\nimport numbers\nfrom os.path import dirname, join, abspath\nfrom typing import Iterable, List, Sequence, Union\n\nfrom qiskit.circuit import (\n Barrier,\n Clbit,\n Gate,\n Instruction,\n Measure,\n Parameter,\n ParameterExpression,\n QuantumCircuit,\n QuantumRegister,\n Qubit,\n Reset,\n Delay,\n)\nfrom qiskit.circuit.bit import Bit\nfrom qiskit.circuit.controlflow import (\n IfElseOp,\n ForLoopOp,\n WhileLoopOp,\n ControlFlowOp,\n BreakLoopOp,\n ContinueLoopOp,\n)\nfrom qiskit.circuit.library import standard_gates\nfrom qiskit.circuit.register import Register\nfrom qiskit.circuit.tools import pi_check\n\nfrom . import ast\nfrom .exceptions import QASM3ExporterError\nfrom .printer import BasicPrinter\n\n\n# Reserved keywords that gates and variables cannot be named. 
It is possible that some of these\n# _could_ be accepted as variable names by OpenQASM 3 parsers, but it's safer for us to just be very\n# conservative.\n_RESERVED_KEYWORDS = frozenset(\n {\n \"OPENQASM\",\n \"U\",\n \"angle\",\n \"array\",\n \"barrier\",\n \"bit\",\n \"bool\",\n \"box\",\n \"break\",\n \"cal\",\n \"complex\",\n \"const\",\n \"continue\",\n \"creg\",\n \"ctrl\",\n \"def\",\n \"defcal\",\n \"defcalgrammar\",\n \"delay\",\n \"duration\",\n \"durationof\",\n \"else\",\n \"end\",\n \"extern\",\n \"float\",\n \"for\",\n \"gate\",\n \"gphase\",\n \"if\",\n \"in\",\n \"include\",\n \"input\",\n \"int\",\n \"inv\",\n \"let\",\n \"measure\",\n \"mutable\",\n \"negctrl\",\n \"output\",\n \"pow\",\n \"qreg\",\n \"qubit\",\n \"reset\",\n \"return\",\n \"sizeof\",\n \"stretch\",\n \"uint\",\n \"while\",\n }\n)\n\n\nclass Exporter:\n \"\"\"QASM3 expoter main class.\"\"\"\n\n def __init__(\n self,\n includes: Sequence[str] = (\"stdgates.inc\",),\n basis_gates: Sequence[str] = (\"U\",),\n disable_constants: bool = False,\n alias_classical_registers: bool = False,\n indent: str = \" \",\n ):\n \"\"\"\n Args:\n includes: the filenames that should be emitted as includes. These files will be parsed\n for gates, and any objects dumped from this exporter will use those definitions\n where possible.\n basis_gates: the basic defined gate set of the backend.\n disable_constants: if ``True``, always emit floating-point constants for numeric\n parameter values. If ``False`` (the default), then values close to multiples of\n QASM 3 constants (``pi``, ``euler``, and ``tau``) will be emitted in terms of those\n constants instead, potentially improving accuracy in the output.\n alias_classical_registers: If ``True``, then classical bit and classical register\n declarations will look similar to quantum declarations, where the whole set of bits\n will be declared in a flat array, and the registers will just be aliases to\n collections of these bits. This is inefficient for running OpenQASM 3 programs,\n however, and may not be well supported on backends. Instead, the default behaviour\n of ``False`` means that individual classical registers will gain their own\n ``bit[size] register;`` declarations, and loose :obj:`.Clbit`\\\\ s will go onto their\n own declaration. In this form, each :obj:`.Clbit` must be in either zero or one\n :obj:`.ClassicalRegister`\\\\ s.\n indent: the indentation string to use for each level within an indented block. 
Can be\n set to the empty string to disable indentation.\n \"\"\"\n self.basis_gates = basis_gates\n self.disable_constants = disable_constants\n self.alias_classical_registers = alias_classical_registers\n self.includes = list(includes)\n self.indent = indent\n\n def dumps(self, circuit):\n \"\"\"Convert the circuit to QASM 3, returning the result as a string.\"\"\"\n with io.StringIO() as stream:\n self.dump(circuit, stream)\n return stream.getvalue()\n\n def dump(self, circuit, stream):\n \"\"\"Convert the circuit to QASM 3, dumping the result to a file or text stream.\"\"\"\n builder = QASM3Builder(\n circuit,\n includeslist=self.includes,\n basis_gates=self.basis_gates,\n disable_constants=self.disable_constants,\n alias_classical_registers=self.alias_classical_registers,\n )\n BasicPrinter(stream, indent=self.indent).visit(builder.build_program())\n\n\nclass GlobalNamespace:\n \"\"\"Global namespace dict-like.\"\"\"\n\n qiskit_gates = {\n \"p\": standard_gates.PhaseGate,\n \"x\": standard_gates.XGate,\n \"y\": standard_gates.YGate,\n \"z\": standard_gates.ZGate,\n \"h\": standard_gates.HGate,\n \"s\": standard_gates.SGate,\n \"sdg\": standard_gates.SdgGate,\n \"t\": standard_gates.TGate,\n \"tdg\": standard_gates.TdgGate,\n \"sx\": standard_gates.SXGate,\n \"rx\": standard_gates.RXGate,\n \"ry\": standard_gates.RYGate,\n \"rz\": standard_gates.RZGate,\n \"cx\": standard_gates.CXGate,\n \"cy\": standard_gates.CYGate,\n \"cz\": standard_gates.CZGate,\n \"cp\": standard_gates.CPhaseGate,\n \"crx\": standard_gates.CRXGate,\n \"cry\": standard_gates.CRYGate,\n \"crz\": standard_gates.CRZGate,\n \"ch\": standard_gates.CHGate,\n \"swap\": standard_gates.SwapGate,\n \"ccx\": standard_gates.CCXGate,\n \"cswap\": standard_gates.CSwapGate,\n \"cu\": standard_gates.CUGate,\n \"CX\": standard_gates.CXGate,\n \"phase\": standard_gates.PhaseGate,\n \"cphase\": standard_gates.CPhaseGate,\n \"id\": standard_gates.IGate,\n \"u1\": standard_gates.U1Gate,\n \"u2\": standard_gates.U2Gate,\n \"u3\": standard_gates.U3Gate,\n }\n include_paths = [abspath(join(dirname(__file__), \"..\", \"qasm\", \"libs\"))]\n\n def __init__(self, includelist, basis_gates=()):\n self._data = {gate: None for gate in basis_gates}\n\n for includefile in includelist:\n if includefile == \"stdgates.inc\":\n self._data.update(self.qiskit_gates)\n else:\n # TODO What do if an inc file is not standard?\n # Should it be parsed?\n pass\n\n def __setitem__(self, name_str, instruction):\n self._data[name_str] = type(instruction)\n self._data[id(instruction)] = name_str\n\n def __getitem__(self, key):\n if isinstance(key, Instruction):\n try:\n # Registered gates.\n return self._data[id(key)]\n except KeyError:\n pass\n # Built-in gates.\n if key.name not in self._data:\n raise KeyError(key)\n return key.name\n return self._data[key]\n\n def __iter__(self):\n return iter(self._data)\n\n def __contains__(self, instruction):\n if isinstance(instruction, standard_gates.UGate):\n return True\n if id(instruction) in self._data:\n return True\n if type(instruction) in [Gate, Instruction]: # user-defined instructions/gate\n return self._data.get(instruction.name, None) == instruction\n if instruction.name in self._data:\n if self._data.get(instruction.name) is None: # it is a basis gate:\n return True\n if isinstance(instruction, self._data.get(instruction.name)):\n return True\n return False\n\n def register(self, instruction):\n \"\"\"Register an instruction in the namespace\"\"\"\n # The second part of the condition is a nasty hack to ensure 
that gates that come with at\n # least one parameter always have their id in the name. This is a workaround a bug, where\n # gates with parameters do not contain the information required to build the gate definition\n # in symbolic form (unless the parameters are all symbolic). The exporter currently\n # (2021-12-01) builds gate declarations with parameters in the signature, but then ignores\n # those parameters during the body, and just uses the concrete values from the first\n # instance of the gate it sees, such as:\n # gate rzx(_gate_p_0) _gate_q_0, _gate_q_1 {\n # h _gate_q_1;\n # cx _gate_q_0, _gate_q_1;\n # rz(0.2) _gate_q_1; // <- note the concrete value.\n # cx _gate_q_0, _gate_q_1;\n # h _gate_q_1;\n # }\n # This then means that multiple calls to the same gate with different parameters will be\n # incorrect. By forcing all gates to be defined including their id, we generate a QASM3\n # program that does what was intended, even though the output QASM3 is silly. See gh-7335.\n if instruction.name in self._data or (\n isinstance(instruction, Gate)\n and not all(isinstance(param, Parameter) for param in instruction.params)\n ):\n key = f\"{instruction.name}_{id(instruction)}\"\n else:\n key = instruction.name\n self[key] = instruction\n\n\n# A _Scope is the structure used in the builder to store the contexts and re-mappings of bits from\n# the top-level scope where the bits were actually defined. In the class, 'circuit' is an instance\n# of QuantumCircuit that defines this level, and 'bit_map' is a mapping of 'Bit: Bit', where the\n# keys are bits in the circuit in this scope, and the values are the Bit in the top-level scope in\n# this context that this bit actually represents. 'symbol_map' is a bidirectional mapping of\n# '<Terra object>: Identifier' and 'str: <Terra object>', where the string in the second map is the\n# name of the identifier. This is a cheap hack around actually implementing a proper symbol table.\n_Scope = collections.namedtuple(\"_Scope\", (\"circuit\", \"bit_map\", \"symbol_map\"))\n\n\nclass QASM3Builder:\n \"\"\"QASM3 builder constructs an AST from a QuantumCircuit.\"\"\"\n\n builtins = (Barrier, Measure, Reset, Delay, BreakLoopOp, ContinueLoopOp)\n gate_parameter_prefix = \"_gate_p\"\n gate_qubit_prefix = \"_gate_q\"\n\n def __init__(\n self,\n quantumcircuit,\n includeslist,\n basis_gates,\n disable_constants,\n alias_classical_registers,\n ):\n # This is a stack of stacks; the outer stack is a list of \"outer\" look-up contexts, and the\n # inner stack is for scopes within these. 
A \"outer\" look-up context in this sense means\n # the main program body or a gate/subroutine definition, whereas the scopes are for things\n # like the body of a ``for`` loop construct.\n self._circuit_ctx = []\n self.push_context(quantumcircuit)\n self.includeslist = includeslist\n self._gate_to_declare = {}\n self._subroutine_to_declare = {}\n self._opaque_to_declare = {}\n self._flat_reg = False\n self._physical_qubit = False\n self._loose_clbit_index_lookup = {}\n # An arbitrary counter to help with generation of unique ids for symbol names when there are\n # clashes (though we generally prefer to keep user names if possible).\n self._counter = itertools.count()\n self.disable_constants = disable_constants\n self.alias_classical_registers = alias_classical_registers\n self.global_namespace = GlobalNamespace(includeslist, basis_gates)\n\n def _register_gate(self, gate):\n self.global_namespace.register(gate)\n self._gate_to_declare[id(gate)] = gate\n\n def _register_subroutine(self, instruction):\n self.global_namespace.register(instruction)\n self._subroutine_to_declare[id(instruction)] = instruction\n\n def _register_opaque(self, instruction):\n if instruction not in self.global_namespace:\n self.global_namespace.register(instruction)\n self._opaque_to_declare[id(instruction)] = instruction\n\n def _register_variable(self, variable, name=None) -> ast.Identifier:\n \"\"\"Register a variable in the symbol table for the current scope, returning the name that\n should be used to refer to the variable. The same name will be returned by subsequent calls\n to :meth:`_lookup_variable` within the same scope.\n\n If ``name`` is given explicitly, it must not already be defined in the scope.\n \"\"\"\n # Note that the registration only checks for the existence of a variable that was declared\n # in the current scope, not just one that's available. 
This is a rough implementation of\n # the shadowing proposal currently being drafted for OpenQASM 3, though we expect it to be\n # expanded and modified in the future (2022-03-07).\n table = self.current_scope().symbol_map\n if name is not None:\n if name in _RESERVED_KEYWORDS:\n raise QASM3ExporterError(f\"cannot reserve the keyword '{name}' as a variable name\")\n if name in table:\n raise QASM3ExporterError(\n f\"tried to reserve '{name}', but it is already used by '{table[name]}'\"\n )\n else:\n name = variable.name\n while name in table or name in _RESERVED_KEYWORDS:\n name = f\"{variable.name}__generated{next(self._counter)}\"\n identifier = ast.Identifier(name)\n table[identifier.string] = variable\n table[variable] = identifier\n return identifier\n\n def _reserve_variable_name(self, name: ast.Identifier) -> ast.Identifier:\n \"\"\"Reserve a variable name in the current scope, raising a :class:`.QASM3ExporterError` if\n the name is already in use.\n\n This is useful for autogenerated names that the exporter itself reserves when dealing with\n objects that have no standard Terra object backing them, such as the declaration of all\n circuit qubits, so cannot be placed into the symbol table by the normal means.\n\n Returns the same identifier, for convenience in chaining.\"\"\"\n table = self.current_scope().symbol_map\n if name.string in table:\n variable = table[name.string]\n raise QASM3ExporterError(\n f\"tried to reserve '{name.string}', but it is already used by '{variable}'\"\n )\n table[name.string] = \"<internal object>\"\n return name\n\n def _lookup_variable(self, variable) -> ast.Identifier:\n \"\"\"Lookup a Terra object within the current context, and return the name that should be used\n to represent it in OpenQASM 3 programmes.\"\"\"\n for scope in reversed(self.current_context()):\n if variable in scope.symbol_map:\n return scope.symbol_map[variable]\n raise KeyError(f\"'{variable}' is not defined in the current context\")\n\n def build_header(self):\n \"\"\"Builds a Header\"\"\"\n version = ast.Version(\"3\")\n includes = self.build_includes()\n return ast.Header(version, includes)\n\n def build_program(self):\n \"\"\"Builds a Program\"\"\"\n self.hoist_declarations(self.global_scope(assert_=True).circuit.data)\n return ast.Program(self.build_header(), self.build_global_statements())\n\n def hoist_declarations(self, instructions):\n \"\"\"Walks the definitions in gates/instructions to make a list of gates to declare.\"\"\"\n for instruction in instructions:\n if isinstance(instruction[0], ControlFlowOp):\n for block in instruction[0].blocks:\n self.hoist_declarations(block.data)\n continue\n if instruction[0] in self.global_namespace or isinstance(instruction[0], self.builtins):\n continue\n\n if instruction[0].definition is None:\n self._register_opaque(instruction[0])\n else:\n self.hoist_declarations(instruction[0].definition.data)\n if isinstance(instruction[0], Gate):\n self._register_gate(instruction[0])\n else:\n self._register_subroutine(instruction[0])\n\n def global_scope(self, assert_=False):\n \"\"\"Return the global circuit scope that is used as the basis of the full program. If\n ``assert_=True``, then this raises :obj:`.QASM3ExporterError` if the current context is not\n the global one.\"\"\"\n if assert_ and len(self._circuit_ctx) != 1 and len(self._circuit_ctx[0]) != 1:\n # Defensive code to help catch logic errors.\n raise QASM3ExporterError( # pragma: no cover\n f\"Not currently in the global context. 
Current contexts are: {self._circuit_ctx}\"\n )\n return self._circuit_ctx[0][0]\n\n def current_outermost_scope(self):\n \"\"\"Return the outermost scope for this context. If building the main program, then this is\n the :obj:`.QuantumCircuit` instance that the full program is being built from. If building\n a gate or subroutine definition, this is the body that defines the gate or subroutine.\"\"\"\n return self._circuit_ctx[-1][0]\n\n def current_scope(self):\n \"\"\"Return the current circuit scope.\"\"\"\n return self._circuit_ctx[-1][-1]\n\n def current_context(self):\n \"\"\"Return the current context (list of scopes).\"\"\"\n return self._circuit_ctx[-1]\n\n def push_scope(self, circuit: QuantumCircuit, qubits: Iterable[Qubit], clbits: Iterable[Clbit]):\n \"\"\"Push a new scope (like a ``for`` or ``while`` loop body) onto the current context\n stack.\"\"\"\n current_map = self.current_scope().bit_map\n qubits = tuple(current_map[qubit] for qubit in qubits)\n clbits = tuple(current_map[clbit] for clbit in clbits)\n if circuit.num_qubits != len(qubits):\n raise QASM3ExporterError( # pragma: no cover\n f\"Tried to push a scope whose circuit needs {circuit.num_qubits} qubits, but only\"\n f\" provided {len(qubits)} qubits to create the mapping.\"\n )\n if circuit.num_clbits != len(clbits):\n raise QASM3ExporterError( # pragma: no cover\n f\"Tried to push a scope whose circuit needs {circuit.num_clbits} clbits, but only\"\n f\" provided {len(clbits)} clbits to create the mapping.\"\n )\n mapping = dict(itertools.chain(zip(circuit.qubits, qubits), zip(circuit.clbits, clbits)))\n self._circuit_ctx[-1].append(_Scope(circuit, mapping, {}))\n\n def pop_scope(self) -> _Scope:\n \"\"\"Pop the current scope (like a ``for`` or ``while`` loop body) off the current context\n stack.\"\"\"\n if len(self._circuit_ctx[-1]) <= 1:\n raise QASM3ExporterError( # pragma: no cover\n \"Tried to pop a scope from the current context, but there are no current scopes.\"\n )\n return self._circuit_ctx[-1].pop()\n\n def push_context(self, outer_context: QuantumCircuit):\n \"\"\"Push a new context (like for a ``gate`` or ``def`` body) onto the stack.\"\"\"\n mapping = {bit: bit for bit in itertools.chain(outer_context.qubits, outer_context.clbits)}\n self._circuit_ctx.append([_Scope(outer_context, mapping, {})])\n\n def pop_context(self):\n \"\"\"Pop the current context (like for a ``gate`` or ``def`` body) onto the stack.\"\"\"\n if len(self._circuit_ctx) == 1:\n raise QASM3ExporterError( # pragma: no cover\n \"Tried to pop the current context, but that is the global context.\"\n )\n if len(self._circuit_ctx[-1]) != 1:\n raise QASM3ExporterError( # pragma: no cover\n \"Tried to pop the current context while there are still\"\n f\" {len(self._circuit_ctx[-1]) - 1} unclosed scopes.\"\n )\n self._circuit_ctx.pop()\n\n def build_includes(self):\n \"\"\"Builds a list of included files.\"\"\"\n return [ast.Include(filename) for filename in self.includeslist]\n\n def build_global_statements(self) -> List[ast.Statement]:\n \"\"\"\n globalStatement\n : subroutineDefinition\n | kernelDeclaration\n | quantumGateDefinition\n | calibration\n | quantumDeclarationStatement # build_quantumdeclaration\n | pragma\n ;\n\n statement\n : expressionStatement\n | assignmentStatement\n | classicalDeclarationStatement\n | branchingStatement\n | loopStatement\n | endStatement\n | aliasStatement\n | quantumStatement # build_quantuminstruction\n ;\n \"\"\"\n definitions = self.build_definitions()\n inputs, outputs, variables = 
self.build_variable_declarations()\n bit_declarations = self.build_classical_declarations()\n context = self.global_scope(assert_=True).circuit\n if getattr(context, \"_layout\", None) is not None:\n self._physical_qubit = True\n quantum_declarations = []\n else:\n quantum_declarations = self.build_quantum_declarations()\n quantum_instructions = self.build_quantum_instructions(context.data)\n self._physical_qubit = False\n\n return [\n statement\n for source in (\n definitions,\n inputs,\n outputs,\n variables,\n bit_declarations,\n quantum_declarations,\n quantum_instructions,\n )\n for statement in source\n ]\n\n def build_definitions(self):\n \"\"\"Builds all the definition.\"\"\"\n ret = []\n for instruction in self._opaque_to_declare.values():\n ret.append(self.build_definition(instruction, self.build_opaque_definition))\n for instruction in self._subroutine_to_declare.values():\n ret.append(self.build_definition(instruction, self.build_subroutine_definition))\n for instruction in self._gate_to_declare.values():\n ret.append(self.build_definition(instruction, self.build_gate_definition))\n return ret\n\n def build_definition(self, instruction, builder):\n \"\"\"Using a given definition builder, builds that definition.\"\"\"\n try:\n return instruction._define_qasm3()\n except AttributeError:\n pass\n self._flat_reg = True\n definition = builder(instruction)\n self._flat_reg = False\n return definition\n\n def build_opaque_definition(self, instruction):\n \"\"\"Builds an Opaque gate definition as a CalibrationDefinition\"\"\"\n # We can't do anything sensible with this yet, so it's better to loudly say that.\n raise QASM3ExporterError(\n \"Exporting opaque instructions with pulse-level calibrations is not yet supported by\"\n \" the OpenQASM 3 exporter. Received this instruction, which appears opaque:\"\n f\"\\n{instruction}\"\n )\n\n def build_subroutine_definition(self, instruction):\n \"\"\"Builds a SubroutineDefinition\"\"\"\n if instruction.definition.parameters:\n # We don't yet have the type system to store the parameter types in a symbol table, and\n # we currently don't have the correct logic in place to handle parameters correctly in\n # the definition.\n raise QASM3ExporterError(\n \"Exporting subroutines with parameters is not yet supported by the OpenQASM 3\"\n \" exporter. 
Received this instruction, which appears parameterized:\"\n f\"\\n{instruction}\"\n )\n name = self.global_namespace[instruction]\n self.push_context(instruction.definition)\n quantum_arguments = [\n ast.QuantumArgument(\n self._reserve_variable_name(ast.Identifier(f\"{self.gate_qubit_prefix}_{n_qubit}\"))\n )\n for n_qubit in range(len(instruction.definition.qubits))\n ]\n subroutine_body = ast.SubroutineBlock(\n self.build_quantum_instructions(instruction.definition.data),\n )\n self.pop_context()\n return ast.SubroutineDefinition(ast.Identifier(name), subroutine_body, quantum_arguments)\n\n def build_gate_definition(self, gate):\n \"\"\"Builds a QuantumGateDefinition\"\"\"\n self.push_context(gate.definition)\n signature = self.build_gate_signature(gate)\n body = ast.QuantumBlock(self.build_quantum_instructions(gate.definition.data))\n self.pop_context()\n return ast.QuantumGateDefinition(signature, body)\n\n def build_gate_signature(self, gate):\n \"\"\"Builds a QuantumGateSignature\"\"\"\n name = self.global_namespace[gate]\n params = []\n definition = gate.definition\n # Dummy parameters\n for num in range(len(gate.params) - len(definition.parameters)):\n param_name = f\"{self.gate_parameter_prefix}_{num}\"\n params.append(self._reserve_variable_name(ast.Identifier(param_name)))\n params += [self._register_variable(param) for param in definition.parameters]\n quantum_arguments = [\n self._reserve_variable_name(ast.Identifier(f\"{self.gate_qubit_prefix}_{n_qubit}\"))\n for n_qubit in range(len(definition.qubits))\n ]\n return ast.QuantumGateSignature(ast.Identifier(name), quantum_arguments, params or None)\n\n def build_variable_declarations(self):\n \"\"\"Builds lists of the input, output and standard variables used in this program.\"\"\"\n inputs, outputs, variables = [], [], []\n global_scope = self.global_scope(assert_=True).circuit\n for parameter in global_scope.parameters:\n parameter_name = self._register_variable(parameter)\n declaration = _infer_variable_declaration(global_scope, parameter, parameter_name)\n if declaration is None:\n continue\n if isinstance(declaration, ast.IODeclaration):\n if declaration.modifier is ast.IOModifier.INPUT:\n inputs.append(declaration)\n else:\n outputs.append(declaration)\n else:\n variables.append(declaration)\n return inputs, outputs, variables\n\n @property\n def base_classical_register_name(self):\n \"\"\"The base register name\"\"\"\n name = \"_all_clbits\" if self.alias_classical_registers else \"_loose_clbits\"\n if name in self.global_namespace._data:\n raise NotImplementedError # TODO choose a different name if there is a name collision\n return name\n\n @property\n def base_quantum_register_name(self):\n \"\"\"The base register name\"\"\"\n name = \"_all_qubits\"\n if name in self.global_namespace._data:\n raise NotImplementedError # TODO choose a different name if there is a name collision\n return name\n\n def build_classical_declarations(self):\n \"\"\"Return a list of AST nodes declaring all the classical bits and registers.\n\n The behaviour of this function depends on the setting ``alias_classical_registers``. If this\n is ``True``, then the output will be in the same form as the output of\n :meth:`.build_classical_declarations`, with the registers being aliases. 
If ``False``, it\n will instead return a :obj:`.ast.ClassicalDeclaration` for each classical register, and one\n for the loose :obj:`.Clbit` instances, and will raise :obj:`QASM3ExporterError` if any\n registers overlap.\n\n This function populates the lookup table ``self._loose_clbit_index_lookup``.\n \"\"\"\n circuit = self.current_scope().circuit\n if self.alias_classical_registers:\n self._loose_clbit_index_lookup = {\n bit: index for index, bit in enumerate(circuit.clbits)\n }\n flat_declaration = self.build_clbit_declaration(\n len(circuit.clbits),\n self._reserve_variable_name(ast.Identifier(self.base_classical_register_name)),\n )\n return [flat_declaration] + self.build_aliases(circuit.cregs)\n loose_register_size = 0\n for index, bit in enumerate(circuit.clbits):\n found_bit = circuit.find_bit(bit)\n if len(found_bit.registers) > 1:\n raise QASM3ExporterError(\n f\"Clbit {index} is in multiple registers, but 'alias_classical_registers' is\"\n f\" False. Registers and indices: {found_bit.registers}.\"\n )\n if not found_bit.registers:\n self._loose_clbit_index_lookup[bit] = loose_register_size\n loose_register_size += 1\n if loose_register_size > 0:\n loose = [\n self.build_clbit_declaration(\n loose_register_size,\n self._reserve_variable_name(ast.Identifier(self.base_classical_register_name)),\n )\n ]\n else:\n loose = []\n return loose + [\n self.build_clbit_declaration(len(register), self._register_variable(register))\n for register in circuit.cregs\n ]\n\n def build_clbit_declaration(\n self, n_clbits: int, name: ast.Identifier\n ) -> ast.ClassicalDeclaration:\n \"\"\"Return a declaration of the :obj:`.Clbit`\\\\ s as a ``bit[n]``.\"\"\"\n return ast.ClassicalDeclaration(ast.BitArrayType(n_clbits), name)\n\n def build_quantum_declarations(self):\n \"\"\"Return a list of AST nodes declaring all the qubits in the current scope, and all the\n alias declarations for these qubits.\"\"\"\n return [self.build_qubit_declarations()] + self.build_aliases(\n self.current_scope().circuit.qregs\n )\n\n def build_qubit_declarations(self):\n \"\"\"Return a declaration of all the :obj:`.Qubit`\\\\ s in the current scope.\"\"\"\n # Base register\n return ast.QuantumDeclaration(\n self._reserve_variable_name(ast.Identifier(self.base_quantum_register_name)),\n ast.Designator(self.build_integer(self.current_scope().circuit.num_qubits)),\n )\n\n def build_aliases(self, registers: Iterable[Register]) -> List[ast.AliasStatement]:\n \"\"\"Return a list of alias declarations for the given registers. The registers can be either\n classical or quantum.\"\"\"\n out = []\n for register in registers:\n elements = []\n # Greedily consolidate runs of bits into ranges. We don't bother trying to handle\n # steps; there's no need in generated code. 
Even single bits are referenced as ranges\n # because the concatenation in an alias statement can only concatenate arraylike values.\n start_index, prev_index = None, None\n register_identifier = (\n ast.Identifier(self.base_quantum_register_name)\n if isinstance(register, QuantumRegister)\n else ast.Identifier(self.base_classical_register_name)\n )\n for bit in register:\n cur_index = self.find_bit(bit).index\n if start_index is None:\n start_index = cur_index\n elif cur_index != prev_index + 1:\n elements.append(\n ast.SubscriptedIdentifier(\n register_identifier,\n ast.Range(\n start=self.build_integer(start_index),\n end=self.build_integer(prev_index),\n ),\n )\n )\n start_index = prev_index = cur_index\n prev_index = cur_index\n # After the loop, if there were any bits at all, there's always one unemitted range.\n if len(register) != 0:\n elements.append(\n ast.SubscriptedIdentifier(\n register_identifier,\n ast.Range(\n start=self.build_integer(start_index),\n end=self.build_integer(prev_index),\n ),\n )\n )\n out.append(ast.AliasStatement(self._register_variable(register), elements))\n return out\n\n def build_quantum_instructions(self, instructions):\n \"\"\"Builds a list of call statements\"\"\"\n ret = []\n for instruction in instructions:\n if isinstance(instruction[0], Gate):\n if instruction[0].condition:\n eqcondition = self.build_eqcondition(instruction[0].condition)\n instruction_without_condition = instruction[0].copy()\n instruction_without_condition.condition = None\n true_body = self.build_program_block(\n [(instruction_without_condition, instruction[1], instruction[2])]\n )\n ret.append(ast.BranchingStatement(eqcondition, true_body))\n else:\n ret.append(self.build_gate_call(instruction))\n elif isinstance(instruction[0], Barrier):\n operands = [self.build_single_bit_reference(operand) for operand in instruction[1]]\n ret.append(ast.QuantumBarrier(operands))\n elif isinstance(instruction[0], Measure):\n measurement = ast.QuantumMeasurement(\n [self.build_single_bit_reference(operand) for operand in instruction[1]]\n )\n qubit = self.build_single_bit_reference(instruction[2][0])\n ret.append(ast.QuantumMeasurementAssignment(qubit, measurement))\n elif isinstance(instruction[0], Reset):\n for operand in instruction[1]:\n ret.append(ast.QuantumReset(self.build_single_bit_reference(operand)))\n elif isinstance(instruction[0], Delay):\n ret.append(self.build_delay(*instruction))\n elif isinstance(instruction[0], ForLoopOp):\n ret.append(self.build_for_loop(*instruction))\n elif isinstance(instruction[0], WhileLoopOp):\n ret.append(self.build_while_loop(*instruction))\n elif isinstance(instruction[0], IfElseOp):\n ret.append(self.build_if_statement(*instruction))\n elif isinstance(instruction[0], BreakLoopOp):\n ret.append(ast.BreakStatement())\n elif isinstance(instruction[0], ContinueLoopOp):\n ret.append(ast.ContinueStatement())\n else:\n ret.append(self.build_subroutine_call(instruction))\n return ret\n\n def build_if_statement(\n self, instruction: IfElseOp, qubits: Iterable[Qubit], clbits: Iterable[Clbit]\n ) -> ast.BranchingStatement:\n \"\"\"Build an :obj:`.IfElseOp` into a :obj:`.ast.BranchingStatement`.\"\"\"\n condition = self.build_eqcondition(instruction.condition)\n\n true_circuit = instruction.blocks[0]\n self.push_scope(true_circuit, qubits, clbits)\n true_body = self.build_program_block(true_circuit.data)\n self.pop_scope()\n if len(instruction.blocks) == 1:\n return ast.BranchingStatement(condition, true_body, None)\n\n false_circuit = 
instruction.blocks[1]\n self.push_scope(false_circuit, qubits, clbits)\n false_body = self.build_program_block(false_circuit.data)\n self.pop_scope()\n return ast.BranchingStatement(condition, true_body, false_body)\n\n def build_while_loop(\n self, instruction: WhileLoopOp, qubits: Iterable[Qubit], clbits: Iterable[Clbit]\n ) -> ast.WhileLoopStatement:\n \"\"\"Build a :obj:`.WhileLoopOp` into a :obj:`.ast.WhileLoopStatement`.\"\"\"\n condition = self.build_eqcondition(instruction.condition)\n loop_circuit = instruction.blocks[0]\n self.push_scope(loop_circuit, qubits, clbits)\n loop_body = self.build_program_block(loop_circuit.data)\n self.pop_scope()\n return ast.WhileLoopStatement(condition, loop_body)\n\n def build_for_loop(\n self, instruction: ForLoopOp, qubits: Iterable[Qubit], clbits: Iterable[Clbit]\n ) -> ast.ForLoopStatement:\n \"\"\"Build a :obj:`.ForLoopOp` into a :obj:`.ast.ForLoopStatement`.\"\"\"\n indexset, loop_parameter, loop_circuit = instruction.params\n self.push_scope(loop_circuit, qubits, clbits)\n if loop_parameter is None:\n # The loop parameter is implicitly declared by the ``for`` loop (see also\n # _infer_parameter_declaration), so it doesn't matter that we haven't declared this.\n loop_parameter_ast = self._reserve_variable_name(ast.Identifier(\"_\"))\n else:\n loop_parameter_ast = self._register_variable(loop_parameter)\n if isinstance(indexset, range):\n # QASM 3 uses inclusive ranges on both ends, unlike Python.\n indexset_ast = ast.Range(\n start=self.build_integer(indexset.start),\n end=self.build_integer(indexset.stop - 1),\n step=self.build_integer(indexset.step) if indexset.step != 1 else None,\n )\n else:\n try:\n indexset_ast = ast.IndexSet([self.build_integer(value) for value in indexset])\n except QASM3ExporterError:\n raise QASM3ExporterError(\n \"The values in QASM 3 'for' loops must all be integers, but received\"\n f\" '{indexset}'.\"\n ) from None\n body_ast = self.build_program_block(loop_circuit)\n self.pop_scope()\n return ast.ForLoopStatement(indexset_ast, loop_parameter_ast, body_ast)\n\n def build_delay(\n self, instruction: Delay, qubits: Iterable[Qubit], clbits: Iterable[Clbit]\n ) -> ast.QuantumDelay:\n \"\"\"Build a built-in delay statement.\"\"\"\n clbits = tuple(clbits)\n if clbits:\n raise QASM3ExporterError(\n f\"Found a delay instruction acting on classical bits: {instruction} on {clbits}\"\n )\n if instruction.unit == \"ps\":\n duration = ast.DurationLiteral(1000 * instruction.duration, ast.DurationUnit.NANOSECOND)\n else:\n unit_map = {\n \"ns\": ast.DurationUnit.NANOSECOND,\n \"us\": ast.DurationUnit.MICROSECOND,\n \"ms\": ast.DurationUnit.MILLISECOND,\n \"s\": ast.DurationUnit.SECOND,\n \"dt\": ast.DurationUnit.SAMPLE,\n }\n duration = ast.DurationLiteral(instruction.duration, unit_map[instruction.unit])\n return ast.QuantumDelay(\n duration, [self.build_single_bit_reference(qubit) for qubit in qubits]\n )\n\n def build_integer(self, value) -> ast.Integer:\n \"\"\"Build an integer literal, raising a :obj:`.QASM3ExporterError` if the input is not\n actually an\n integer.\"\"\"\n if not isinstance(value, numbers.Integral):\n # This is meant to be purely defensive, in case a non-integer slips into the logic\n # somewhere, but no valid Terra object should trigger this.\n raise QASM3ExporterError(f\"'{value}' is not an integer\") # pragma: no cover\n return ast.Integer(int(value))\n\n def build_program_block(self, instructions):\n \"\"\"Builds a ProgramBlock\"\"\"\n return 
ast.ProgramBlock(self.build_quantum_instructions(instructions))\n\n def build_eqcondition(self, condition):\n \"\"\"Classical Conditional condition from a instruction.condition\"\"\"\n if isinstance(condition[0], Clbit):\n condition_on = self.build_single_bit_reference(condition[0])\n else:\n condition_on = self._lookup_variable(condition[0])\n return ast.ComparisonExpression(\n condition_on, ast.EqualsOperator(), self.build_integer(condition[1])\n )\n\n def _rebind_scoped_parameters(self, expression):\n \"\"\"If the input is a :class:`.ParameterExpression`, rebind any internal\n :class:`.Parameter`\\\\ s so that their names match their names in the scope. Other inputs\n are returned unchanged.\"\"\"\n # This is a little hacky, but the entirety of the Expression handling is essentially\n # missing, pending a new system in Terra to replace it (2022-03-07).\n if not isinstance(expression, ParameterExpression):\n return expression\n return expression.subs(\n {\n param: Parameter(self._lookup_variable(param).string)\n for param in expression.parameters\n }\n )\n\n def build_gate_call(self, instruction):\n \"\"\"Builds a QuantumGateCall\"\"\"\n if isinstance(instruction[0], standard_gates.UGate):\n gate_name = ast.Identifier(\"U\")\n else:\n gate_name = ast.Identifier(self.global_namespace[instruction[0]])\n qubits = [self.build_single_bit_reference(qubit) for qubit in instruction[1]]\n if self.disable_constants:\n parameters = [\n ast.Expression(self._rebind_scoped_parameters(param))\n for param in instruction[0].params\n ]\n else:\n parameters = [\n ast.Expression(pi_check(self._rebind_scoped_parameters(param), output=\"qasm\"))\n for param in instruction[0].params\n ]\n\n return ast.QuantumGateCall(gate_name, qubits, parameters=parameters)\n\n def build_subroutine_call(self, instruction):\n \"\"\"Builds a SubroutineCall\"\"\"\n identifier = ast.Identifier(self.global_namespace[instruction[0]])\n expressions = [ast.Expression(param) for param in instruction[0].params]\n # TODO: qubits should go inside the brackets of subroutine calls, but neither Terra nor the\n # AST here really support the calls, so there's no sensible way of writing it yet.\n bits = [self.build_single_bit_reference(bit) for bit in instruction[1]]\n return ast.SubroutineCall(identifier, bits, expressions)\n\n def build_single_bit_reference(self, bit: Bit) -> ast.Identifier:\n \"\"\"Get an identifier node that refers to one particular bit.\"\"\"\n found_bit = self.find_bit(bit)\n if self._physical_qubit and isinstance(bit, Qubit):\n return ast.PhysicalQubitIdentifier(ast.Identifier(str(found_bit.index)))\n if self._flat_reg:\n return ast.Identifier(f\"{self.gate_qubit_prefix}_{found_bit.index}\")\n if found_bit.registers:\n # We preferentially return a reference via a register in the hope that this is what the\n # user is used to seeing as well.\n register, index = found_bit.registers[0]\n return ast.SubscriptedIdentifier(\n self._lookup_variable(register), self.build_integer(index)\n )\n # Otherwise reference via the list of all qubits, or the list of loose clbits.\n if isinstance(bit, Qubit):\n return ast.SubscriptedIdentifier(\n ast.Identifier(self.base_quantum_register_name), self.build_integer(found_bit.index)\n )\n return ast.SubscriptedIdentifier(\n ast.Identifier(self.base_classical_register_name),\n self.build_integer(self._loose_clbit_index_lookup[bit]),\n )\n\n def find_bit(self, bit: Bit):\n \"\"\"Look up the bit using :meth:`.QuantumCircuit.find_bit` in the current outermost scope.\"\"\"\n # This is a hacky 
work-around for now. Really this should be a proper symbol-table lookup,\n # but with us expecting to put in a whole new AST for Terra 0.20, this should be sufficient\n # for the use-cases we support. (Jake, 2021-11-22.)\n if len(self.current_context()) > 1:\n ancestor_bit = self.current_scope().bit_map[bit]\n return self.current_outermost_scope().circuit.find_bit(ancestor_bit)\n return self.current_scope().circuit.find_bit(bit)\n\n\ndef _infer_variable_declaration(\n circuit: QuantumCircuit, parameter: Parameter, parameter_name: ast.Identifier\n) -> Union[ast.ClassicalDeclaration, None]:\n \"\"\"Attempt to infer what type a parameter should be declared as to work with a circuit.\n\n This is very simplistic; it assumes all parameters are real numbers that need to be input to the\n program, unless one is used as a loop variable, in which case it shouldn't be declared at all,\n because the ``for`` loop declares it implicitly (per the Qiskit/QSS reading of the OpenQASM\n spec at Qiskit/openqasm@8ee55ec).\n\n .. note::\n\n This is a hack around not having a proper type system implemented in Terra, and really this\n whole function should be removed in favour of proper symbol-table building and lookups.\n This function is purely to try and hack the parameters for ``for`` loops into the exporter\n for now.\n\n Args:\n circuit: The global-scope circuit, which is the base of the exported program.\n parameter: The parameter to infer the type of.\n parameter_name: The name of the parameter to use in the declaration.\n\n Returns:\n A suitable :obj:`.ast.ClassicalDeclaration` node, or, if the parameter should *not* be\n declared, then ``None``.\n \"\"\"\n\n def is_loop_variable(circuit, parameter):\n \"\"\"Recurse into the instructions a parameter is used in, checking at every level if it is\n used as the loop variable of a ``for`` loop.\"\"\"\n # This private access is hacky, and shouldn't need to happen; the type of a parameter\n # _should_ be an intrinsic part of the parameter, or somewhere publicly accessible, but\n # Terra doesn't have those concepts yet. We can only try and guess at the type by looking\n # at all the places it's used in the circuit.\n for instruction, index in circuit._parameter_table[parameter]:\n if isinstance(instruction, ForLoopOp):\n # The parameters of ForLoopOp are (indexset, loop_parameter, body).\n if index == 1:\n return True\n if isinstance(instruction, ControlFlowOp):\n if is_loop_variable(instruction.params[index], parameter):\n return True\n return False\n\n if is_loop_variable(circuit, parameter):\n return None\n # Arbitrary choice of double-precision float for all other parameters, but it's what we actually\n # expect people to be binding to their Parameters right now.\n return ast.IODeclaration(ast.IOModifier.INPUT, ast.FloatType.DOUBLE, parameter_name)\n", "path": "qiskit/qasm3/exporter.py"}], "after_files": [{"content": "# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2021.\n#\n# This code is licensed under the Apache License, Version 2.0. 
You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\n\"\"\"QASM3 Exporter\"\"\"\n\nimport collections\nimport io\nimport itertools\nimport numbers\nfrom os.path import dirname, join, abspath\nfrom typing import Iterable, List, Sequence, Union\n\nfrom qiskit.circuit import (\n Barrier,\n Clbit,\n Gate,\n Instruction,\n Measure,\n Parameter,\n ParameterExpression,\n QuantumCircuit,\n QuantumRegister,\n Qubit,\n Reset,\n Delay,\n)\nfrom qiskit.circuit.bit import Bit\nfrom qiskit.circuit.controlflow import (\n IfElseOp,\n ForLoopOp,\n WhileLoopOp,\n ControlFlowOp,\n BreakLoopOp,\n ContinueLoopOp,\n)\nfrom qiskit.circuit.library import standard_gates\nfrom qiskit.circuit.register import Register\nfrom qiskit.circuit.tools import pi_check\n\nfrom . import ast\nfrom .exceptions import QASM3ExporterError\nfrom .printer import BasicPrinter\n\n\n# Reserved keywords that gates and variables cannot be named. It is possible that some of these\n# _could_ be accepted as variable names by OpenQASM 3 parsers, but it's safer for us to just be very\n# conservative.\n_RESERVED_KEYWORDS = frozenset(\n {\n \"OPENQASM\",\n \"U\",\n \"angle\",\n \"array\",\n \"barrier\",\n \"bit\",\n \"bool\",\n \"box\",\n \"break\",\n \"cal\",\n \"complex\",\n \"const\",\n \"continue\",\n \"creg\",\n \"ctrl\",\n \"def\",\n \"defcal\",\n \"defcalgrammar\",\n \"delay\",\n \"duration\",\n \"durationof\",\n \"else\",\n \"end\",\n \"extern\",\n \"float\",\n \"for\",\n \"gate\",\n \"gphase\",\n \"if\",\n \"in\",\n \"include\",\n \"input\",\n \"int\",\n \"inv\",\n \"let\",\n \"measure\",\n \"mutable\",\n \"negctrl\",\n \"output\",\n \"pow\",\n \"qreg\",\n \"qubit\",\n \"reset\",\n \"return\",\n \"sizeof\",\n \"stretch\",\n \"uint\",\n \"while\",\n }\n)\n\n\nclass Exporter:\n \"\"\"QASM3 expoter main class.\"\"\"\n\n def __init__(\n self,\n includes: Sequence[str] = (\"stdgates.inc\",),\n basis_gates: Sequence[str] = (\"U\",),\n disable_constants: bool = False,\n alias_classical_registers: bool = False,\n indent: str = \" \",\n ):\n \"\"\"\n Args:\n includes: the filenames that should be emitted as includes. These files will be parsed\n for gates, and any objects dumped from this exporter will use those definitions\n where possible.\n basis_gates: the basic defined gate set of the backend.\n disable_constants: if ``True``, always emit floating-point constants for numeric\n parameter values. If ``False`` (the default), then values close to multiples of\n QASM 3 constants (``pi``, ``euler``, and ``tau``) will be emitted in terms of those\n constants instead, potentially improving accuracy in the output.\n alias_classical_registers: If ``True``, then classical bit and classical register\n declarations will look similar to quantum declarations, where the whole set of bits\n will be declared in a flat array, and the registers will just be aliases to\n collections of these bits. This is inefficient for running OpenQASM 3 programs,\n however, and may not be well supported on backends. Instead, the default behaviour\n of ``False`` means that individual classical registers will gain their own\n ``bit[size] register;`` declarations, and loose :obj:`.Clbit`\\\\ s will go onto their\n own declaration. 
In this form, each :obj:`.Clbit` must be in either zero or one\n :obj:`.ClassicalRegister`\\\\ s.\n indent: the indentation string to use for each level within an indented block. Can be\n set to the empty string to disable indentation.\n \"\"\"\n self.basis_gates = basis_gates\n self.disable_constants = disable_constants\n self.alias_classical_registers = alias_classical_registers\n self.includes = list(includes)\n self.indent = indent\n\n def dumps(self, circuit):\n \"\"\"Convert the circuit to QASM 3, returning the result as a string.\"\"\"\n with io.StringIO() as stream:\n self.dump(circuit, stream)\n return stream.getvalue()\n\n def dump(self, circuit, stream):\n \"\"\"Convert the circuit to QASM 3, dumping the result to a file or text stream.\"\"\"\n builder = QASM3Builder(\n circuit,\n includeslist=self.includes,\n basis_gates=self.basis_gates,\n disable_constants=self.disable_constants,\n alias_classical_registers=self.alias_classical_registers,\n )\n BasicPrinter(stream, indent=self.indent).visit(builder.build_program())\n\n\nclass GlobalNamespace:\n \"\"\"Global namespace dict-like.\"\"\"\n\n qiskit_gates = {\n \"p\": standard_gates.PhaseGate,\n \"x\": standard_gates.XGate,\n \"y\": standard_gates.YGate,\n \"z\": standard_gates.ZGate,\n \"h\": standard_gates.HGate,\n \"s\": standard_gates.SGate,\n \"sdg\": standard_gates.SdgGate,\n \"t\": standard_gates.TGate,\n \"tdg\": standard_gates.TdgGate,\n \"sx\": standard_gates.SXGate,\n \"rx\": standard_gates.RXGate,\n \"ry\": standard_gates.RYGate,\n \"rz\": standard_gates.RZGate,\n \"cx\": standard_gates.CXGate,\n \"cy\": standard_gates.CYGate,\n \"cz\": standard_gates.CZGate,\n \"cp\": standard_gates.CPhaseGate,\n \"crx\": standard_gates.CRXGate,\n \"cry\": standard_gates.CRYGate,\n \"crz\": standard_gates.CRZGate,\n \"ch\": standard_gates.CHGate,\n \"swap\": standard_gates.SwapGate,\n \"ccx\": standard_gates.CCXGate,\n \"cswap\": standard_gates.CSwapGate,\n \"cu\": standard_gates.CUGate,\n \"CX\": standard_gates.CXGate,\n \"phase\": standard_gates.PhaseGate,\n \"cphase\": standard_gates.CPhaseGate,\n \"id\": standard_gates.IGate,\n \"u1\": standard_gates.U1Gate,\n \"u2\": standard_gates.U2Gate,\n \"u3\": standard_gates.U3Gate,\n }\n include_paths = [abspath(join(dirname(__file__), \"..\", \"qasm\", \"libs\"))]\n\n def __init__(self, includelist, basis_gates=()):\n self._data = {gate: None for gate in basis_gates}\n\n for includefile in includelist:\n if includefile == \"stdgates.inc\":\n self._data.update(self.qiskit_gates)\n else:\n # TODO What do if an inc file is not standard?\n # Should it be parsed?\n pass\n\n def __setitem__(self, name_str, instruction):\n self._data[name_str] = type(instruction)\n self._data[id(instruction)] = name_str\n\n def __getitem__(self, key):\n if isinstance(key, Instruction):\n try:\n # Registered gates.\n return self._data[id(key)]\n except KeyError:\n pass\n # Built-in gates.\n if key.name not in self._data:\n raise KeyError(key)\n return key.name\n return self._data[key]\n\n def __iter__(self):\n return iter(self._data)\n\n def __contains__(self, instruction):\n if isinstance(instruction, standard_gates.UGate):\n return True\n if id(instruction) in self._data:\n return True\n if type(instruction) in [Gate, Instruction]: # user-defined instructions/gate\n return self._data.get(instruction.name, None) == instruction\n if instruction.name in self._data:\n if self._data.get(instruction.name) is None: # it is a basis gate:\n return True\n if isinstance(instruction, self._data.get(instruction.name)):\n 
return True\n return False\n\n def register(self, instruction):\n \"\"\"Register an instruction in the namespace\"\"\"\n # The second part of the condition is a nasty hack to ensure that gates that come with at\n # least one parameter always have their id in the name. This is a workaround a bug, where\n # gates with parameters do not contain the information required to build the gate definition\n # in symbolic form (unless the parameters are all symbolic). The exporter currently\n # (2021-12-01) builds gate declarations with parameters in the signature, but then ignores\n # those parameters during the body, and just uses the concrete values from the first\n # instance of the gate it sees, such as:\n # gate rzx(_gate_p_0) _gate_q_0, _gate_q_1 {\n # h _gate_q_1;\n # cx _gate_q_0, _gate_q_1;\n # rz(0.2) _gate_q_1; // <- note the concrete value.\n # cx _gate_q_0, _gate_q_1;\n # h _gate_q_1;\n # }\n # This then means that multiple calls to the same gate with different parameters will be\n # incorrect. By forcing all gates to be defined including their id, we generate a QASM3\n # program that does what was intended, even though the output QASM3 is silly. See gh-7335.\n if instruction.name in self._data or (\n isinstance(instruction, Gate)\n and not all(isinstance(param, Parameter) for param in instruction.params)\n ):\n key = f\"{instruction.name}_{id(instruction)}\"\n else:\n key = instruction.name\n self[key] = instruction\n\n\n# A _Scope is the structure used in the builder to store the contexts and re-mappings of bits from\n# the top-level scope where the bits were actually defined. In the class, 'circuit' is an instance\n# of QuantumCircuit that defines this level, and 'bit_map' is a mapping of 'Bit: Bit', where the\n# keys are bits in the circuit in this scope, and the values are the Bit in the top-level scope in\n# this context that this bit actually represents. 'symbol_map' is a bidirectional mapping of\n# '<Terra object>: Identifier' and 'str: <Terra object>', where the string in the second map is the\n# name of the identifier. This is a cheap hack around actually implementing a proper symbol table.\n_Scope = collections.namedtuple(\"_Scope\", (\"circuit\", \"bit_map\", \"symbol_map\"))\n\n\nclass QASM3Builder:\n \"\"\"QASM3 builder constructs an AST from a QuantumCircuit.\"\"\"\n\n builtins = (Barrier, Measure, Reset, Delay, BreakLoopOp, ContinueLoopOp)\n gate_parameter_prefix = \"_gate_p\"\n gate_qubit_prefix = \"_gate_q\"\n\n def __init__(\n self,\n quantumcircuit,\n includeslist,\n basis_gates,\n disable_constants,\n alias_classical_registers,\n ):\n # This is a stack of stacks; the outer stack is a list of \"outer\" look-up contexts, and the\n # inner stack is for scopes within these. 
A \"outer\" look-up context in this sense means\n # the main program body or a gate/subroutine definition, whereas the scopes are for things\n # like the body of a ``for`` loop construct.\n self._circuit_ctx = []\n self.push_context(quantumcircuit)\n self.includeslist = includeslist\n self._gate_to_declare = {}\n self._subroutine_to_declare = {}\n self._opaque_to_declare = {}\n self._flat_reg = False\n self._physical_qubit = False\n self._loose_clbit_index_lookup = {}\n # An arbitrary counter to help with generation of unique ids for symbol names when there are\n # clashes (though we generally prefer to keep user names if possible).\n self._counter = itertools.count()\n self.disable_constants = disable_constants\n self.alias_classical_registers = alias_classical_registers\n self.global_namespace = GlobalNamespace(includeslist, basis_gates)\n\n def _register_gate(self, gate):\n self.global_namespace.register(gate)\n self._gate_to_declare[id(gate)] = gate\n\n def _register_subroutine(self, instruction):\n self.global_namespace.register(instruction)\n self._subroutine_to_declare[id(instruction)] = instruction\n\n def _register_opaque(self, instruction):\n if instruction not in self.global_namespace:\n self.global_namespace.register(instruction)\n self._opaque_to_declare[id(instruction)] = instruction\n\n def _register_variable(self, variable, name=None) -> ast.Identifier:\n \"\"\"Register a variable in the symbol table for the current scope, returning the name that\n should be used to refer to the variable. The same name will be returned by subsequent calls\n to :meth:`_lookup_variable` within the same scope.\n\n If ``name`` is given explicitly, it must not already be defined in the scope.\n \"\"\"\n # Note that the registration only checks for the existence of a variable that was declared\n # in the current scope, not just one that's available. 
This is a rough implementation of\n # the shadowing proposal currently being drafted for OpenQASM 3, though we expect it to be\n # expanded and modified in the future (2022-03-07).\n table = self.current_scope().symbol_map\n if name is not None:\n if name in _RESERVED_KEYWORDS:\n raise QASM3ExporterError(f\"cannot reserve the keyword '{name}' as a variable name\")\n if name in table:\n raise QASM3ExporterError(\n f\"tried to reserve '{name}', but it is already used by '{table[name]}'\"\n )\n else:\n name = variable.name\n while name in table or name in _RESERVED_KEYWORDS:\n name = f\"{variable.name}__generated{next(self._counter)}\"\n identifier = ast.Identifier(name)\n table[identifier.string] = variable\n table[variable] = identifier\n return identifier\n\n def _reserve_variable_name(self, name: ast.Identifier) -> ast.Identifier:\n \"\"\"Reserve a variable name in the current scope, raising a :class:`.QASM3ExporterError` if\n the name is already in use.\n\n This is useful for autogenerated names that the exporter itself reserves when dealing with\n objects that have no standard Terra object backing them, such as the declaration of all\n circuit qubits, so cannot be placed into the symbol table by the normal means.\n\n Returns the same identifier, for convenience in chaining.\"\"\"\n table = self.current_scope().symbol_map\n if name.string in table:\n variable = table[name.string]\n raise QASM3ExporterError(\n f\"tried to reserve '{name.string}', but it is already used by '{variable}'\"\n )\n table[name.string] = \"<internal object>\"\n return name\n\n def _lookup_variable(self, variable) -> ast.Identifier:\n \"\"\"Lookup a Terra object within the current context, and return the name that should be used\n to represent it in OpenQASM 3 programmes.\"\"\"\n for scope in reversed(self.current_context()):\n if variable in scope.symbol_map:\n return scope.symbol_map[variable]\n raise KeyError(f\"'{variable}' is not defined in the current context\")\n\n def build_header(self):\n \"\"\"Builds a Header\"\"\"\n version = ast.Version(\"3\")\n includes = self.build_includes()\n return ast.Header(version, includes)\n\n def build_program(self):\n \"\"\"Builds a Program\"\"\"\n self.hoist_declarations(self.global_scope(assert_=True).circuit.data)\n return ast.Program(self.build_header(), self.build_global_statements())\n\n def hoist_declarations(self, instructions):\n \"\"\"Walks the definitions in gates/instructions to make a list of gates to declare.\"\"\"\n for instruction in instructions:\n if isinstance(instruction[0], ControlFlowOp):\n for block in instruction[0].blocks:\n self.hoist_declarations(block.data)\n continue\n if instruction[0] in self.global_namespace or isinstance(instruction[0], self.builtins):\n continue\n\n if instruction[0].definition is None:\n self._register_opaque(instruction[0])\n else:\n self.hoist_declarations(instruction[0].definition.data)\n if isinstance(instruction[0], Gate):\n self._register_gate(instruction[0])\n else:\n self._register_subroutine(instruction[0])\n\n def global_scope(self, assert_=False):\n \"\"\"Return the global circuit scope that is used as the basis of the full program. If\n ``assert_=True``, then this raises :obj:`.QASM3ExporterError` if the current context is not\n the global one.\"\"\"\n if assert_ and len(self._circuit_ctx) != 1 and len(self._circuit_ctx[0]) != 1:\n # Defensive code to help catch logic errors.\n raise QASM3ExporterError( # pragma: no cover\n f\"Not currently in the global context. 
Current contexts are: {self._circuit_ctx}\"\n )\n return self._circuit_ctx[0][0]\n\n def current_outermost_scope(self):\n \"\"\"Return the outermost scope for this context. If building the main program, then this is\n the :obj:`.QuantumCircuit` instance that the full program is being built from. If building\n a gate or subroutine definition, this is the body that defines the gate or subroutine.\"\"\"\n return self._circuit_ctx[-1][0]\n\n def current_scope(self):\n \"\"\"Return the current circuit scope.\"\"\"\n return self._circuit_ctx[-1][-1]\n\n def current_context(self):\n \"\"\"Return the current context (list of scopes).\"\"\"\n return self._circuit_ctx[-1]\n\n def push_scope(self, circuit: QuantumCircuit, qubits: Iterable[Qubit], clbits: Iterable[Clbit]):\n \"\"\"Push a new scope (like a ``for`` or ``while`` loop body) onto the current context\n stack.\"\"\"\n current_map = self.current_scope().bit_map\n qubits = tuple(current_map[qubit] for qubit in qubits)\n clbits = tuple(current_map[clbit] for clbit in clbits)\n if circuit.num_qubits != len(qubits):\n raise QASM3ExporterError( # pragma: no cover\n f\"Tried to push a scope whose circuit needs {circuit.num_qubits} qubits, but only\"\n f\" provided {len(qubits)} qubits to create the mapping.\"\n )\n if circuit.num_clbits != len(clbits):\n raise QASM3ExporterError( # pragma: no cover\n f\"Tried to push a scope whose circuit needs {circuit.num_clbits} clbits, but only\"\n f\" provided {len(clbits)} clbits to create the mapping.\"\n )\n mapping = dict(itertools.chain(zip(circuit.qubits, qubits), zip(circuit.clbits, clbits)))\n self._circuit_ctx[-1].append(_Scope(circuit, mapping, {}))\n\n def pop_scope(self) -> _Scope:\n \"\"\"Pop the current scope (like a ``for`` or ``while`` loop body) off the current context\n stack.\"\"\"\n if len(self._circuit_ctx[-1]) <= 1:\n raise QASM3ExporterError( # pragma: no cover\n \"Tried to pop a scope from the current context, but there are no current scopes.\"\n )\n return self._circuit_ctx[-1].pop()\n\n def push_context(self, outer_context: QuantumCircuit):\n \"\"\"Push a new context (like for a ``gate`` or ``def`` body) onto the stack.\"\"\"\n mapping = {bit: bit for bit in itertools.chain(outer_context.qubits, outer_context.clbits)}\n self._circuit_ctx.append([_Scope(outer_context, mapping, {})])\n\n def pop_context(self):\n \"\"\"Pop the current context (like for a ``gate`` or ``def`` body) onto the stack.\"\"\"\n if len(self._circuit_ctx) == 1:\n raise QASM3ExporterError( # pragma: no cover\n \"Tried to pop the current context, but that is the global context.\"\n )\n if len(self._circuit_ctx[-1]) != 1:\n raise QASM3ExporterError( # pragma: no cover\n \"Tried to pop the current context while there are still\"\n f\" {len(self._circuit_ctx[-1]) - 1} unclosed scopes.\"\n )\n self._circuit_ctx.pop()\n\n def build_includes(self):\n \"\"\"Builds a list of included files.\"\"\"\n return [ast.Include(filename) for filename in self.includeslist]\n\n def build_global_statements(self) -> List[ast.Statement]:\n \"\"\"\n globalStatement\n : subroutineDefinition\n | kernelDeclaration\n | quantumGateDefinition\n | calibration\n | quantumDeclarationStatement # build_quantumdeclaration\n | pragma\n ;\n\n statement\n : expressionStatement\n | assignmentStatement\n | classicalDeclarationStatement\n | branchingStatement\n | loopStatement\n | endStatement\n | aliasStatement\n | quantumStatement # build_quantuminstruction\n ;\n \"\"\"\n definitions = self.build_definitions()\n inputs, outputs, variables = 
self.build_variable_declarations()\n bit_declarations = self.build_classical_declarations()\n context = self.global_scope(assert_=True).circuit\n if getattr(context, \"_layout\", None) is not None:\n self._physical_qubit = True\n quantum_declarations = []\n else:\n quantum_declarations = self.build_quantum_declarations()\n quantum_instructions = self.build_quantum_instructions(context.data)\n self._physical_qubit = False\n\n return [\n statement\n for source in (\n inputs,\n outputs,\n definitions,\n variables,\n bit_declarations,\n quantum_declarations,\n quantum_instructions,\n )\n for statement in source\n ]\n\n def build_definitions(self):\n \"\"\"Builds all the definition.\"\"\"\n ret = []\n for instruction in self._opaque_to_declare.values():\n ret.append(self.build_definition(instruction, self.build_opaque_definition))\n for instruction in self._subroutine_to_declare.values():\n ret.append(self.build_definition(instruction, self.build_subroutine_definition))\n for instruction in self._gate_to_declare.values():\n ret.append(self.build_definition(instruction, self.build_gate_definition))\n return ret\n\n def build_definition(self, instruction, builder):\n \"\"\"Using a given definition builder, builds that definition.\"\"\"\n try:\n return instruction._define_qasm3()\n except AttributeError:\n pass\n self._flat_reg = True\n definition = builder(instruction)\n self._flat_reg = False\n return definition\n\n def build_opaque_definition(self, instruction):\n \"\"\"Builds an Opaque gate definition as a CalibrationDefinition\"\"\"\n # We can't do anything sensible with this yet, so it's better to loudly say that.\n raise QASM3ExporterError(\n \"Exporting opaque instructions with pulse-level calibrations is not yet supported by\"\n \" the OpenQASM 3 exporter. Received this instruction, which appears opaque:\"\n f\"\\n{instruction}\"\n )\n\n def build_subroutine_definition(self, instruction):\n \"\"\"Builds a SubroutineDefinition\"\"\"\n if instruction.definition.parameters:\n # We don't yet have the type system to store the parameter types in a symbol table, and\n # we currently don't have the correct logic in place to handle parameters correctly in\n # the definition.\n raise QASM3ExporterError(\n \"Exporting subroutines with parameters is not yet supported by the OpenQASM 3\"\n \" exporter. 
Received this instruction, which appears parameterized:\"\n f\"\\n{instruction}\"\n )\n name = self.global_namespace[instruction]\n self.push_context(instruction.definition)\n quantum_arguments = [\n ast.QuantumArgument(\n self._reserve_variable_name(ast.Identifier(f\"{self.gate_qubit_prefix}_{n_qubit}\"))\n )\n for n_qubit in range(len(instruction.definition.qubits))\n ]\n subroutine_body = ast.SubroutineBlock(\n self.build_quantum_instructions(instruction.definition.data),\n )\n self.pop_context()\n return ast.SubroutineDefinition(ast.Identifier(name), subroutine_body, quantum_arguments)\n\n def build_gate_definition(self, gate):\n \"\"\"Builds a QuantumGateDefinition\"\"\"\n self.push_context(gate.definition)\n signature = self.build_gate_signature(gate)\n body = ast.QuantumBlock(self.build_quantum_instructions(gate.definition.data))\n self.pop_context()\n return ast.QuantumGateDefinition(signature, body)\n\n def build_gate_signature(self, gate):\n \"\"\"Builds a QuantumGateSignature\"\"\"\n name = self.global_namespace[gate]\n params = []\n definition = gate.definition\n # Dummy parameters\n for num in range(len(gate.params) - len(definition.parameters)):\n param_name = f\"{self.gate_parameter_prefix}_{num}\"\n params.append(self._reserve_variable_name(ast.Identifier(param_name)))\n params += [self._register_variable(param) for param in definition.parameters]\n quantum_arguments = [\n self._reserve_variable_name(ast.Identifier(f\"{self.gate_qubit_prefix}_{n_qubit}\"))\n for n_qubit in range(len(definition.qubits))\n ]\n return ast.QuantumGateSignature(ast.Identifier(name), quantum_arguments, params or None)\n\n def build_variable_declarations(self):\n \"\"\"Builds lists of the input, output and standard variables used in this program.\"\"\"\n inputs, outputs, variables = [], [], []\n global_scope = self.global_scope(assert_=True).circuit\n for parameter in global_scope.parameters:\n parameter_name = self._register_variable(parameter)\n declaration = _infer_variable_declaration(global_scope, parameter, parameter_name)\n if declaration is None:\n continue\n if isinstance(declaration, ast.IODeclaration):\n if declaration.modifier is ast.IOModifier.INPUT:\n inputs.append(declaration)\n else:\n outputs.append(declaration)\n else:\n variables.append(declaration)\n return inputs, outputs, variables\n\n @property\n def base_classical_register_name(self):\n \"\"\"The base register name\"\"\"\n name = \"_all_clbits\" if self.alias_classical_registers else \"_loose_clbits\"\n if name in self.global_namespace._data:\n raise NotImplementedError # TODO choose a different name if there is a name collision\n return name\n\n @property\n def base_quantum_register_name(self):\n \"\"\"The base register name\"\"\"\n name = \"_all_qubits\"\n if name in self.global_namespace._data:\n raise NotImplementedError # TODO choose a different name if there is a name collision\n return name\n\n def build_classical_declarations(self):\n \"\"\"Return a list of AST nodes declaring all the classical bits and registers.\n\n The behaviour of this function depends on the setting ``alias_classical_registers``. If this\n is ``True``, then the output will be in the same form as the output of\n :meth:`.build_classical_declarations`, with the registers being aliases. 
If ``False``, it\n will instead return a :obj:`.ast.ClassicalDeclaration` for each classical register, and one\n for the loose :obj:`.Clbit` instances, and will raise :obj:`QASM3ExporterError` if any\n registers overlap.\n\n This function populates the lookup table ``self._loose_clbit_index_lookup``.\n \"\"\"\n circuit = self.current_scope().circuit\n if self.alias_classical_registers:\n self._loose_clbit_index_lookup = {\n bit: index for index, bit in enumerate(circuit.clbits)\n }\n flat_declaration = self.build_clbit_declaration(\n len(circuit.clbits),\n self._reserve_variable_name(ast.Identifier(self.base_classical_register_name)),\n )\n return [flat_declaration] + self.build_aliases(circuit.cregs)\n loose_register_size = 0\n for index, bit in enumerate(circuit.clbits):\n found_bit = circuit.find_bit(bit)\n if len(found_bit.registers) > 1:\n raise QASM3ExporterError(\n f\"Clbit {index} is in multiple registers, but 'alias_classical_registers' is\"\n f\" False. Registers and indices: {found_bit.registers}.\"\n )\n if not found_bit.registers:\n self._loose_clbit_index_lookup[bit] = loose_register_size\n loose_register_size += 1\n if loose_register_size > 0:\n loose = [\n self.build_clbit_declaration(\n loose_register_size,\n self._reserve_variable_name(ast.Identifier(self.base_classical_register_name)),\n )\n ]\n else:\n loose = []\n return loose + [\n self.build_clbit_declaration(len(register), self._register_variable(register))\n for register in circuit.cregs\n ]\n\n def build_clbit_declaration(\n self, n_clbits: int, name: ast.Identifier\n ) -> ast.ClassicalDeclaration:\n \"\"\"Return a declaration of the :obj:`.Clbit`\\\\ s as a ``bit[n]``.\"\"\"\n return ast.ClassicalDeclaration(ast.BitArrayType(n_clbits), name)\n\n def build_quantum_declarations(self):\n \"\"\"Return a list of AST nodes declaring all the qubits in the current scope, and all the\n alias declarations for these qubits.\"\"\"\n return [self.build_qubit_declarations()] + self.build_aliases(\n self.current_scope().circuit.qregs\n )\n\n def build_qubit_declarations(self):\n \"\"\"Return a declaration of all the :obj:`.Qubit`\\\\ s in the current scope.\"\"\"\n # Base register\n return ast.QuantumDeclaration(\n self._reserve_variable_name(ast.Identifier(self.base_quantum_register_name)),\n ast.Designator(self.build_integer(self.current_scope().circuit.num_qubits)),\n )\n\n def build_aliases(self, registers: Iterable[Register]) -> List[ast.AliasStatement]:\n \"\"\"Return a list of alias declarations for the given registers. The registers can be either\n classical or quantum.\"\"\"\n out = []\n for register in registers:\n elements = []\n # Greedily consolidate runs of bits into ranges. We don't bother trying to handle\n # steps; there's no need in generated code. 
Even single bits are referenced as ranges\n # because the concatenation in an alias statement can only concatenate arraylike values.\n start_index, prev_index = None, None\n register_identifier = (\n ast.Identifier(self.base_quantum_register_name)\n if isinstance(register, QuantumRegister)\n else ast.Identifier(self.base_classical_register_name)\n )\n for bit in register:\n cur_index = self.find_bit(bit).index\n if start_index is None:\n start_index = cur_index\n elif cur_index != prev_index + 1:\n elements.append(\n ast.SubscriptedIdentifier(\n register_identifier,\n ast.Range(\n start=self.build_integer(start_index),\n end=self.build_integer(prev_index),\n ),\n )\n )\n start_index = prev_index = cur_index\n prev_index = cur_index\n # After the loop, if there were any bits at all, there's always one unemitted range.\n if len(register) != 0:\n elements.append(\n ast.SubscriptedIdentifier(\n register_identifier,\n ast.Range(\n start=self.build_integer(start_index),\n end=self.build_integer(prev_index),\n ),\n )\n )\n out.append(ast.AliasStatement(self._register_variable(register), elements))\n return out\n\n def build_quantum_instructions(self, instructions):\n \"\"\"Builds a list of call statements\"\"\"\n ret = []\n for instruction in instructions:\n if isinstance(instruction[0], Gate):\n if instruction[0].condition:\n eqcondition = self.build_eqcondition(instruction[0].condition)\n instruction_without_condition = instruction[0].copy()\n instruction_without_condition.condition = None\n true_body = self.build_program_block(\n [(instruction_without_condition, instruction[1], instruction[2])]\n )\n ret.append(ast.BranchingStatement(eqcondition, true_body))\n else:\n ret.append(self.build_gate_call(instruction))\n elif isinstance(instruction[0], Barrier):\n operands = [self.build_single_bit_reference(operand) for operand in instruction[1]]\n ret.append(ast.QuantumBarrier(operands))\n elif isinstance(instruction[0], Measure):\n measurement = ast.QuantumMeasurement(\n [self.build_single_bit_reference(operand) for operand in instruction[1]]\n )\n qubit = self.build_single_bit_reference(instruction[2][0])\n ret.append(ast.QuantumMeasurementAssignment(qubit, measurement))\n elif isinstance(instruction[0], Reset):\n for operand in instruction[1]:\n ret.append(ast.QuantumReset(self.build_single_bit_reference(operand)))\n elif isinstance(instruction[0], Delay):\n ret.append(self.build_delay(*instruction))\n elif isinstance(instruction[0], ForLoopOp):\n ret.append(self.build_for_loop(*instruction))\n elif isinstance(instruction[0], WhileLoopOp):\n ret.append(self.build_while_loop(*instruction))\n elif isinstance(instruction[0], IfElseOp):\n ret.append(self.build_if_statement(*instruction))\n elif isinstance(instruction[0], BreakLoopOp):\n ret.append(ast.BreakStatement())\n elif isinstance(instruction[0], ContinueLoopOp):\n ret.append(ast.ContinueStatement())\n else:\n ret.append(self.build_subroutine_call(instruction))\n return ret\n\n def build_if_statement(\n self, instruction: IfElseOp, qubits: Iterable[Qubit], clbits: Iterable[Clbit]\n ) -> ast.BranchingStatement:\n \"\"\"Build an :obj:`.IfElseOp` into a :obj:`.ast.BranchingStatement`.\"\"\"\n condition = self.build_eqcondition(instruction.condition)\n\n true_circuit = instruction.blocks[0]\n self.push_scope(true_circuit, qubits, clbits)\n true_body = self.build_program_block(true_circuit.data)\n self.pop_scope()\n if len(instruction.blocks) == 1:\n return ast.BranchingStatement(condition, true_body, None)\n\n false_circuit = 
instruction.blocks[1]\n self.push_scope(false_circuit, qubits, clbits)\n false_body = self.build_program_block(false_circuit.data)\n self.pop_scope()\n return ast.BranchingStatement(condition, true_body, false_body)\n\n def build_while_loop(\n self, instruction: WhileLoopOp, qubits: Iterable[Qubit], clbits: Iterable[Clbit]\n ) -> ast.WhileLoopStatement:\n \"\"\"Build a :obj:`.WhileLoopOp` into a :obj:`.ast.WhileLoopStatement`.\"\"\"\n condition = self.build_eqcondition(instruction.condition)\n loop_circuit = instruction.blocks[0]\n self.push_scope(loop_circuit, qubits, clbits)\n loop_body = self.build_program_block(loop_circuit.data)\n self.pop_scope()\n return ast.WhileLoopStatement(condition, loop_body)\n\n def build_for_loop(\n self, instruction: ForLoopOp, qubits: Iterable[Qubit], clbits: Iterable[Clbit]\n ) -> ast.ForLoopStatement:\n \"\"\"Build a :obj:`.ForLoopOp` into a :obj:`.ast.ForLoopStatement`.\"\"\"\n indexset, loop_parameter, loop_circuit = instruction.params\n self.push_scope(loop_circuit, qubits, clbits)\n if loop_parameter is None:\n # The loop parameter is implicitly declared by the ``for`` loop (see also\n # _infer_parameter_declaration), so it doesn't matter that we haven't declared this.\n loop_parameter_ast = self._reserve_variable_name(ast.Identifier(\"_\"))\n else:\n loop_parameter_ast = self._register_variable(loop_parameter)\n if isinstance(indexset, range):\n # QASM 3 uses inclusive ranges on both ends, unlike Python.\n indexset_ast = ast.Range(\n start=self.build_integer(indexset.start),\n end=self.build_integer(indexset.stop - 1),\n step=self.build_integer(indexset.step) if indexset.step != 1 else None,\n )\n else:\n try:\n indexset_ast = ast.IndexSet([self.build_integer(value) for value in indexset])\n except QASM3ExporterError:\n raise QASM3ExporterError(\n \"The values in QASM 3 'for' loops must all be integers, but received\"\n f\" '{indexset}'.\"\n ) from None\n body_ast = self.build_program_block(loop_circuit)\n self.pop_scope()\n return ast.ForLoopStatement(indexset_ast, loop_parameter_ast, body_ast)\n\n def build_delay(\n self, instruction: Delay, qubits: Iterable[Qubit], clbits: Iterable[Clbit]\n ) -> ast.QuantumDelay:\n \"\"\"Build a built-in delay statement.\"\"\"\n clbits = tuple(clbits)\n if clbits:\n raise QASM3ExporterError(\n f\"Found a delay instruction acting on classical bits: {instruction} on {clbits}\"\n )\n if instruction.unit == \"ps\":\n duration = ast.DurationLiteral(1000 * instruction.duration, ast.DurationUnit.NANOSECOND)\n else:\n unit_map = {\n \"ns\": ast.DurationUnit.NANOSECOND,\n \"us\": ast.DurationUnit.MICROSECOND,\n \"ms\": ast.DurationUnit.MILLISECOND,\n \"s\": ast.DurationUnit.SECOND,\n \"dt\": ast.DurationUnit.SAMPLE,\n }\n duration = ast.DurationLiteral(instruction.duration, unit_map[instruction.unit])\n return ast.QuantumDelay(\n duration, [self.build_single_bit_reference(qubit) for qubit in qubits]\n )\n\n def build_integer(self, value) -> ast.Integer:\n \"\"\"Build an integer literal, raising a :obj:`.QASM3ExporterError` if the input is not\n actually an\n integer.\"\"\"\n if not isinstance(value, numbers.Integral):\n # This is meant to be purely defensive, in case a non-integer slips into the logic\n # somewhere, but no valid Terra object should trigger this.\n raise QASM3ExporterError(f\"'{value}' is not an integer\") # pragma: no cover\n return ast.Integer(int(value))\n\n def build_program_block(self, instructions):\n \"\"\"Builds a ProgramBlock\"\"\"\n return 
ast.ProgramBlock(self.build_quantum_instructions(instructions))\n\n def build_eqcondition(self, condition):\n \"\"\"Classical Conditional condition from a instruction.condition\"\"\"\n if isinstance(condition[0], Clbit):\n condition_on = self.build_single_bit_reference(condition[0])\n else:\n condition_on = self._lookup_variable(condition[0])\n return ast.ComparisonExpression(\n condition_on, ast.EqualsOperator(), self.build_integer(condition[1])\n )\n\n def _rebind_scoped_parameters(self, expression):\n \"\"\"If the input is a :class:`.ParameterExpression`, rebind any internal\n :class:`.Parameter`\\\\ s so that their names match their names in the scope. Other inputs\n are returned unchanged.\"\"\"\n # This is a little hacky, but the entirety of the Expression handling is essentially\n # missing, pending a new system in Terra to replace it (2022-03-07).\n if not isinstance(expression, ParameterExpression):\n return expression\n return expression.subs(\n {\n param: Parameter(self._lookup_variable(param).string)\n for param in expression.parameters\n }\n )\n\n def build_gate_call(self, instruction):\n \"\"\"Builds a QuantumGateCall\"\"\"\n if isinstance(instruction[0], standard_gates.UGate):\n gate_name = ast.Identifier(\"U\")\n else:\n gate_name = ast.Identifier(self.global_namespace[instruction[0]])\n qubits = [self.build_single_bit_reference(qubit) for qubit in instruction[1]]\n if self.disable_constants:\n parameters = [\n ast.Expression(self._rebind_scoped_parameters(param))\n for param in instruction[0].params\n ]\n else:\n parameters = [\n ast.Expression(pi_check(self._rebind_scoped_parameters(param), output=\"qasm\"))\n for param in instruction[0].params\n ]\n\n return ast.QuantumGateCall(gate_name, qubits, parameters=parameters)\n\n def build_subroutine_call(self, instruction):\n \"\"\"Builds a SubroutineCall\"\"\"\n identifier = ast.Identifier(self.global_namespace[instruction[0]])\n expressions = [ast.Expression(param) for param in instruction[0].params]\n # TODO: qubits should go inside the brackets of subroutine calls, but neither Terra nor the\n # AST here really support the calls, so there's no sensible way of writing it yet.\n bits = [self.build_single_bit_reference(bit) for bit in instruction[1]]\n return ast.SubroutineCall(identifier, bits, expressions)\n\n def build_single_bit_reference(self, bit: Bit) -> ast.Identifier:\n \"\"\"Get an identifier node that refers to one particular bit.\"\"\"\n found_bit = self.find_bit(bit)\n if self._physical_qubit and isinstance(bit, Qubit):\n return ast.PhysicalQubitIdentifier(ast.Identifier(str(found_bit.index)))\n if self._flat_reg:\n return ast.Identifier(f\"{self.gate_qubit_prefix}_{found_bit.index}\")\n if found_bit.registers:\n # We preferentially return a reference via a register in the hope that this is what the\n # user is used to seeing as well.\n register, index = found_bit.registers[0]\n return ast.SubscriptedIdentifier(\n self._lookup_variable(register), self.build_integer(index)\n )\n # Otherwise reference via the list of all qubits, or the list of loose clbits.\n if isinstance(bit, Qubit):\n return ast.SubscriptedIdentifier(\n ast.Identifier(self.base_quantum_register_name), self.build_integer(found_bit.index)\n )\n return ast.SubscriptedIdentifier(\n ast.Identifier(self.base_classical_register_name),\n self.build_integer(self._loose_clbit_index_lookup[bit]),\n )\n\n def find_bit(self, bit: Bit):\n \"\"\"Look up the bit using :meth:`.QuantumCircuit.find_bit` in the current outermost scope.\"\"\"\n # This is a hacky 
work-around for now. Really this should be a proper symbol-table lookup,\n # but with us expecting to put in a whole new AST for Terra 0.20, this should be sufficient\n # for the use-cases we support. (Jake, 2021-11-22.)\n if len(self.current_context()) > 1:\n ancestor_bit = self.current_scope().bit_map[bit]\n return self.current_outermost_scope().circuit.find_bit(ancestor_bit)\n return self.current_scope().circuit.find_bit(bit)\n\n\ndef _infer_variable_declaration(\n circuit: QuantumCircuit, parameter: Parameter, parameter_name: ast.Identifier\n) -> Union[ast.ClassicalDeclaration, None]:\n \"\"\"Attempt to infer what type a parameter should be declared as to work with a circuit.\n\n This is very simplistic; it assumes all parameters are real numbers that need to be input to the\n program, unless one is used as a loop variable, in which case it shouldn't be declared at all,\n because the ``for`` loop declares it implicitly (per the Qiskit/QSS reading of the OpenQASM\n spec at Qiskit/openqasm@8ee55ec).\n\n .. note::\n\n This is a hack around not having a proper type system implemented in Terra, and really this\n whole function should be removed in favour of proper symbol-table building and lookups.\n This function is purely to try and hack the parameters for ``for`` loops into the exporter\n for now.\n\n Args:\n circuit: The global-scope circuit, which is the base of the exported program.\n parameter: The parameter to infer the type of.\n parameter_name: The name of the parameter to use in the declaration.\n\n Returns:\n A suitable :obj:`.ast.ClassicalDeclaration` node, or, if the parameter should *not* be\n declared, then ``None``.\n \"\"\"\n\n def is_loop_variable(circuit, parameter):\n \"\"\"Recurse into the instructions a parameter is used in, checking at every level if it is\n used as the loop variable of a ``for`` loop.\"\"\"\n # This private access is hacky, and shouldn't need to happen; the type of a parameter\n # _should_ be an intrinsic part of the parameter, or somewhere publicly accessible, but\n # Terra doesn't have those concepts yet. We can only try and guess at the type by looking\n # at all the places it's used in the circuit.\n for instruction, index in circuit._parameter_table[parameter]:\n if isinstance(instruction, ForLoopOp):\n # The parameters of ForLoopOp are (indexset, loop_parameter, body).\n if index == 1:\n return True\n if isinstance(instruction, ControlFlowOp):\n if is_loop_variable(instruction.params[index], parameter):\n return True\n return False\n\n if is_loop_variable(circuit, parameter):\n return None\n # Arbitrary choice of double-precision float for all other parameters, but it's what we actually\n # expect people to be binding to their Parameters right now.\n return ast.IODeclaration(ast.IOModifier.INPUT, ast.FloatType.DOUBLE, parameter_name)\n", "path": "qiskit/qasm3/exporter.py"}]} |
gh_patches_debug_1143 | rasdani/github-patches | git_diff | celery__celery-6103 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Document and code are inconsistent about task_reject_on_worker_lost config
<!--
Please fill this template entirely and do not erase parts of it.
We reserve the right to close without a response
bug reports which are incomplete.
-->
# Checklist
<!--
To check an item on the list replace [ ] with [x].
-->
- [x] I have checked the [issues list](https://github.com/celery/celery/issues?utf8=%E2%9C%93&q=is%3Aissue+label%3A%22Category%3A+Documentation%22+)
for similar or identical bug reports.
- [x] I have checked the [pull requests list](https://github.com/celery/celery/pulls?q=is%3Apr+label%3A%22Category%3A+Documentation%22)
for existing proposed fixes.
- [x] I have checked the [commit log](https://github.com/celery/celery/commits/master)
to find out if the bug was already fixed in the master branch.
- [x] I have included all related issues and possible duplicate issues in this issue
(If there are none, check this box anyway).
## Related Issues and Possible Duplicates
<!--
Please make sure to search and mention any related issues
or possible duplicates to this issue as requested by the checklist above.
This may or may not include issues in other repositories that the Celery project
maintains or other repositories that are dependencies of Celery.
If you don't know how to mention issues, please refer to Github's documentation
on the subject: https://help.github.com/en/articles/autolinked-references-and-urls#issues-and-pull-requests
-->
#### Related Issues
- None
#### Possible Duplicates
- None
# Description
<!--
Please describe what's missing or incorrect about our documentation.
Include links and/or screenshots which will aid us to resolve the issue.
-->
In the latest version of the documentation about [task_reject_on_worker_lost](http://docs.celeryproject.org/en/latest/userguide/configuration.html?highlight=task_reject_on_worker_lost), it says `Enabling this can cause message loops`
But actually, enabling this will not cause message loops, tasks only execute twice.Tasks that have been redelivered will not be redelivered again, [source code](https://github.com/celery/celery/blob/master/celery/worker/request.py#L518)
# Suggestions
<!-- Please provide us suggestions for how to fix the documentation -->
If it is a documentation error, it is best to remove the warning from the document.
If the document is ok, it is need to modify the code.
I can help modify the document or code.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `celery/worker/request.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """Task request.
3
4 This module defines the :class:`Request` class, that specifies
5 how tasks are executed.
6 """
7 from __future__ import absolute_import, unicode_literals
8
9 import logging
10 import sys
11 from datetime import datetime
12 from time import time
13 from weakref import ref
14
15 from billiard.common import TERM_SIGNAME
16 from kombu.utils.encoding import safe_repr, safe_str
17 from kombu.utils.objects import cached_property
18
19 from celery import signals
20 from celery.app.task import Context
21 from celery.app.trace import trace_task, trace_task_ret
22 from celery.exceptions import (Ignore, InvalidTaskError, Reject, Retry,
23 TaskRevokedError, Terminated,
24 TimeLimitExceeded, WorkerLostError)
25 from celery.five import monotonic, python_2_unicode_compatible, string
26 from celery.platforms import signals as _signals
27 from celery.utils.functional import maybe, noop
28 from celery.utils.log import get_logger
29 from celery.utils.nodenames import gethostname
30 from celery.utils.serialization import get_pickled_exception
31 from celery.utils.time import maybe_iso8601, maybe_make_aware, timezone
32
33 from . import state
34
35 __all__ = ('Request',)
36
37 # pylint: disable=redefined-outer-name
38 # We cache globals and attribute lookups, so disable this warning.
39
40 IS_PYPY = hasattr(sys, 'pypy_version_info')
41
42 logger = get_logger(__name__)
43 debug, info, warn, error = (logger.debug, logger.info,
44 logger.warning, logger.error)
45 _does_info = False
46 _does_debug = False
47
48
49 def __optimize__():
50 # this is also called by celery.app.trace.setup_worker_optimizations
51 global _does_debug
52 global _does_info
53 _does_debug = logger.isEnabledFor(logging.DEBUG)
54 _does_info = logger.isEnabledFor(logging.INFO)
55
56
57 __optimize__() # noqa: E305
58
59 # Localize
60 tz_or_local = timezone.tz_or_local
61 send_revoked = signals.task_revoked.send
62
63 task_accepted = state.task_accepted
64 task_ready = state.task_ready
65 revoked_tasks = state.revoked
66
67
68 @python_2_unicode_compatible
69 class Request(object):
70 """A request for task execution."""
71
72 acknowledged = False
73 time_start = None
74 worker_pid = None
75 time_limits = (None, None)
76 _already_revoked = False
77 _terminate_on_ack = None
78 _apply_result = None
79 _tzlocal = None
80
81 if not IS_PYPY: # pragma: no cover
82 __slots__ = (
83 '_app', '_type', 'name', 'id', '_root_id', '_parent_id',
84 '_on_ack', '_body', '_hostname', '_eventer', '_connection_errors',
85 '_task', '_eta', '_expires', '_request_dict', '_on_reject', '_utc',
86 '_content_type', '_content_encoding', '_argsrepr', '_kwargsrepr',
87 '_args', '_kwargs', '_decoded', '__payload',
88 '__weakref__', '__dict__',
89 )
90
91 def __init__(self, message, on_ack=noop,
92 hostname=None, eventer=None, app=None,
93 connection_errors=None, request_dict=None,
94 task=None, on_reject=noop, body=None,
95 headers=None, decoded=False, utc=True,
96 maybe_make_aware=maybe_make_aware,
97 maybe_iso8601=maybe_iso8601, **opts):
98 self._message = message
99 self._request_dict = message.headers if headers is None else headers
100 self._body = message.body if body is None else body
101 self._app = app
102 self._utc = utc
103 self._decoded = decoded
104 if decoded:
105 self._content_type = self._content_encoding = None
106 else:
107 self._content_type, self._content_encoding = (
108 message.content_type, message.content_encoding,
109 )
110 self.__payload = self._body if self._decoded else message.payload
111 self.id = self._request_dict['id']
112 self._type = self.name = self._request_dict['task']
113 if 'shadow' in self._request_dict:
114 self.name = self._request_dict['shadow'] or self.name
115 self._root_id = self._request_dict.get('root_id')
116 self._parent_id = self._request_dict.get('parent_id')
117 timelimit = self._request_dict.get('timelimit', None)
118 if timelimit:
119 self.time_limits = timelimit
120 self._argsrepr = self._request_dict.get('argsrepr', '')
121 self._kwargsrepr = self._request_dict.get('kwargsrepr', '')
122 self._on_ack = on_ack
123 self._on_reject = on_reject
124 self._hostname = hostname or gethostname()
125 self._eventer = eventer
126 self._connection_errors = connection_errors or ()
127 self._task = task or self._app.tasks[self._type]
128
129 # timezone means the message is timezone-aware, and the only timezone
130 # supported at this point is UTC.
131 eta = self._request_dict.get('eta')
132 if eta is not None:
133 try:
134 eta = maybe_iso8601(eta)
135 except (AttributeError, ValueError, TypeError) as exc:
136 raise InvalidTaskError(
137 'invalid ETA value {0!r}: {1}'.format(eta, exc))
138 self._eta = maybe_make_aware(eta, self.tzlocal)
139 else:
140 self._eta = None
141
142 expires = self._request_dict.get('expires')
143 if expires is not None:
144 try:
145 expires = maybe_iso8601(expires)
146 except (AttributeError, ValueError, TypeError) as exc:
147 raise InvalidTaskError(
148 'invalid expires value {0!r}: {1}'.format(expires, exc))
149 self._expires = maybe_make_aware(expires, self.tzlocal)
150 else:
151 self._expires = None
152
153 delivery_info = message.delivery_info or {}
154 properties = message.properties or {}
155 self._delivery_info = {
156 'exchange': delivery_info.get('exchange'),
157 'routing_key': delivery_info.get('routing_key'),
158 'priority': properties.get('priority'),
159 'redelivered': delivery_info.get('redelivered'),
160 }
161 self._request_dict.update({
162 'reply_to': properties.get('reply_to'),
163 'correlation_id': properties.get('correlation_id'),
164 'hostname': self._hostname,
165 'delivery_info': self._delivery_info
166 })
167 # this is a reference pass to avoid memory usage burst
168 self._request_dict['args'], self._request_dict['kwargs'], _ = self.__payload
169 self._args = self._request_dict['args']
170 self._kwargs = self._request_dict['kwargs']
171
172 @property
173 def delivery_info(self):
174 return self._delivery_info
175
176 @property
177 def message(self):
178 return self._message
179
180 @property
181 def request_dict(self):
182 return self._request_dict
183
184 @property
185 def body(self):
186 return self._body
187
188 @property
189 def app(self):
190 return self._app
191
192 @property
193 def utc(self):
194 return self._utc
195
196 @property
197 def content_type(self):
198 return self._content_type
199
200 @property
201 def content_encoding(self):
202 return self._content_encoding
203
204 @property
205 def type(self):
206 return self._type
207
208 @property
209 def root_id(self):
210 return self._root_id
211
212 @property
213 def parent_id(self):
214 return self._parent_id
215
216 @property
217 def argsrepr(self):
218 return self._argsrepr
219
220 @property
221 def args(self):
222 return self._args
223
224 @property
225 def kwargs(self):
226 return self._kwargs
227
228 @property
229 def kwargsrepr(self):
230 return self._kwargsrepr
231
232 @property
233 def on_ack(self):
234 return self._on_ack
235
236 @property
237 def on_reject(self):
238 return self._on_reject
239
240 @on_reject.setter
241 def on_reject(self, value):
242 self._on_reject = value
243
244 @property
245 def hostname(self):
246 return self._hostname
247
248 @property
249 def eventer(self):
250 return self._eventer
251
252 @eventer.setter
253 def eventer(self, eventer):
254 self._eventer = eventer
255
256 @property
257 def connection_errors(self):
258 return self._connection_errors
259
260 @property
261 def task(self):
262 return self._task
263
264 @property
265 def eta(self):
266 return self._eta
267
268 @property
269 def expires(self):
270 return self._expires
271
272 @expires.setter
273 def expires(self, value):
274 self._expires = value
275
276 @property
277 def tzlocal(self):
278 if self._tzlocal is None:
279 self._tzlocal = self._app.conf.timezone
280 return self._tzlocal
281
282 @property
283 def store_errors(self):
284 return (not self.task.ignore_result or
285 self.task.store_errors_even_if_ignored)
286
287 @property
288 def task_id(self):
289 # XXX compat
290 return self.id
291
292 @task_id.setter # noqa
293 def task_id(self, value):
294 self.id = value
295
296 @property
297 def task_name(self):
298 # XXX compat
299 return self.name
300
301 @task_name.setter # noqa
302 def task_name(self, value):
303 self.name = value
304
305 @property
306 def reply_to(self):
307 # used by rpc backend when failures reported by parent process
308 return self._request_dict['reply_to']
309
310 @property
311 def correlation_id(self):
312 # used similarly to reply_to
313 return self._request_dict['correlation_id']
314
315 def execute_using_pool(self, pool, **kwargs):
316 """Used by the worker to send this task to the pool.
317
318 Arguments:
319 pool (~celery.concurrency.base.TaskPool): The execution pool
320 used to execute this request.
321
322 Raises:
323 celery.exceptions.TaskRevokedError: if the task was revoked.
324 """
325 task_id = self.id
326 task = self._task
327 if self.revoked():
328 raise TaskRevokedError(task_id)
329
330 time_limit, soft_time_limit = self.time_limits
331 result = pool.apply_async(
332 trace_task_ret,
333 args=(self._type, task_id, self._request_dict, self._body,
334 self._content_type, self._content_encoding),
335 accept_callback=self.on_accepted,
336 timeout_callback=self.on_timeout,
337 callback=self.on_success,
338 error_callback=self.on_failure,
339 soft_timeout=soft_time_limit or task.soft_time_limit,
340 timeout=time_limit or task.time_limit,
341 correlation_id=task_id,
342 )
343 # cannot create weakref to None
344 self._apply_result = maybe(ref, result)
345 return result
346
347 def execute(self, loglevel=None, logfile=None):
348 """Execute the task in a :func:`~celery.app.trace.trace_task`.
349
350 Arguments:
351 loglevel (int): The loglevel used by the task.
352 logfile (str): The logfile used by the task.
353 """
354 if self.revoked():
355 return
356
357 # acknowledge task as being processed.
358 if not self.task.acks_late:
359 self.acknowledge()
360
361 _, _, embed = self._payload
362 request = self._request_dict
363 # pylint: disable=unpacking-non-sequence
364 # payload is a property, so pylint doesn't think it's a tuple.
365 request.update({
366 'loglevel': loglevel,
367 'logfile': logfile,
368 'is_eager': False,
369 }, **embed or {})
370 retval = trace_task(self.task, self.id, self._args, self._kwargs, request,
371 hostname=self._hostname, loader=self._app.loader,
372 app=self._app)[0]
373 self.acknowledge()
374 return retval
375
376 def maybe_expire(self):
377 """If expired, mark the task as revoked."""
378 if self._expires:
379 now = datetime.now(self._expires.tzinfo)
380 if now > self._expires:
381 revoked_tasks.add(self.id)
382 return True
383
384 def terminate(self, pool, signal=None):
385 signal = _signals.signum(signal or TERM_SIGNAME)
386 if self.time_start:
387 pool.terminate_job(self.worker_pid, signal)
388 self._announce_revoked('terminated', True, signal, False)
389 else:
390 self._terminate_on_ack = pool, signal
391 if self._apply_result is not None:
392 obj = self._apply_result() # is a weakref
393 if obj is not None:
394 obj.terminate(signal)
395
396 def _announce_revoked(self, reason, terminated, signum, expired):
397 task_ready(self)
398 self.send_event('task-revoked',
399 terminated=terminated, signum=signum, expired=expired)
400 self.task.backend.mark_as_revoked(
401 self.id, reason, request=self._context,
402 store_result=self.store_errors,
403 )
404 self.acknowledge()
405 self._already_revoked = True
406 send_revoked(self.task, request=self._context,
407 terminated=terminated, signum=signum, expired=expired)
408
409 def revoked(self):
410 """If revoked, skip task and mark state."""
411 expired = False
412 if self._already_revoked:
413 return True
414 if self._expires:
415 expired = self.maybe_expire()
416 if self.id in revoked_tasks:
417 info('Discarding revoked task: %s[%s]', self.name, self.id)
418 self._announce_revoked(
419 'expired' if expired else 'revoked', False, None, expired,
420 )
421 return True
422 return False
423
424 def send_event(self, type, **fields):
425 if self._eventer and self._eventer.enabled and self.task.send_events:
426 self._eventer.send(type, uuid=self.id, **fields)
427
428 def on_accepted(self, pid, time_accepted):
429 """Handler called when task is accepted by worker pool."""
430 self.worker_pid = pid
431 # Convert monotonic time_accepted to absolute time
432 self.time_start = time() - (monotonic() - time_accepted)
433 task_accepted(self)
434 if not self.task.acks_late:
435 self.acknowledge()
436 self.send_event('task-started')
437 if _does_debug:
438 debug('Task accepted: %s[%s] pid:%r', self.name, self.id, pid)
439 if self._terminate_on_ack is not None:
440 self.terminate(*self._terminate_on_ack)
441
442 def on_timeout(self, soft, timeout):
443 """Handler called if the task times out."""
444 if soft:
445 warn('Soft time limit (%ss) exceeded for %s[%s]',
446 timeout, self.name, self.id)
447 else:
448 task_ready(self)
449 error('Hard time limit (%ss) exceeded for %s[%s]',
450 timeout, self.name, self.id)
451 exc = TimeLimitExceeded(timeout)
452
453 self.task.backend.mark_as_failure(
454 self.id, exc, request=self._context,
455 store_result=self.store_errors,
456 )
457
458 if self.task.acks_late and self.task.acks_on_failure_or_timeout:
459 self.acknowledge()
460
461 def on_success(self, failed__retval__runtime, **kwargs):
462 """Handler called if the task was successfully processed."""
463 failed, retval, runtime = failed__retval__runtime
464 if failed:
465 if isinstance(retval.exception, (SystemExit, KeyboardInterrupt)):
466 raise retval.exception
467 return self.on_failure(retval, return_ok=True)
468 task_ready(self)
469
470 if self.task.acks_late:
471 self.acknowledge()
472
473 self.send_event('task-succeeded', result=retval, runtime=runtime)
474
475 def on_retry(self, exc_info):
476 """Handler called if the task should be retried."""
477 if self.task.acks_late:
478 self.acknowledge()
479
480 self.send_event('task-retried',
481 exception=safe_repr(exc_info.exception.exc),
482 traceback=safe_str(exc_info.traceback))
483
484 def on_failure(self, exc_info, send_failed_event=True, return_ok=False):
485 """Handler called if the task raised an exception."""
486 task_ready(self)
487 if isinstance(exc_info.exception, MemoryError):
488 raise MemoryError('Process got: %s' % (exc_info.exception,))
489 elif isinstance(exc_info.exception, Reject):
490 return self.reject(requeue=exc_info.exception.requeue)
491 elif isinstance(exc_info.exception, Ignore):
492 return self.acknowledge()
493
494 exc = exc_info.exception
495
496 if isinstance(exc, Retry):
497 return self.on_retry(exc_info)
498
499 # (acks_late) acknowledge after result stored.
500 requeue = False
501 if self.task.acks_late:
502 reject = (
503 self.task.reject_on_worker_lost and
504 isinstance(exc, WorkerLostError)
505 )
506 ack = self.task.acks_on_failure_or_timeout
507 if reject:
508 requeue = not self.delivery_info.get('redelivered')
509 self.reject(requeue=requeue)
510 send_failed_event = False
511 elif ack:
512 self.acknowledge()
513 else:
514 # supporting the behaviour where a task failed and
515 # need to be removed from prefetched local queue
516 self.reject(requeue=False)
517
518 # These are special cases where the process would not have had time
519 # to write the result.
520 if isinstance(exc, Terminated):
521 self._announce_revoked(
522 'terminated', True, string(exc), False)
523 send_failed_event = False # already sent revoked event
524 elif not requeue and (isinstance(exc, WorkerLostError) or not return_ok):
525 # only mark as failure if task has not been requeued
526 self.task.backend.mark_as_failure(
527 self.id, exc, request=self._context,
528 store_result=self.store_errors,
529 )
530
531 if send_failed_event:
532 self.send_event(
533 'task-failed',
534 exception=safe_repr(get_pickled_exception(exc_info.exception)),
535 traceback=exc_info.traceback,
536 )
537
538 if not return_ok:
539 error('Task handler raised error: %r', exc,
540 exc_info=exc_info.exc_info)
541
542 def acknowledge(self):
543 """Acknowledge task."""
544 if not self.acknowledged:
545 self._on_ack(logger, self._connection_errors)
546 self.acknowledged = True
547
548 def reject(self, requeue=False):
549 if not self.acknowledged:
550 self._on_reject(logger, self._connection_errors, requeue)
551 self.acknowledged = True
552 self.send_event('task-rejected', requeue=requeue)
553
554 def info(self, safe=False):
555 return {
556 'id': self.id,
557 'name': self.name,
558 'args': self._args,
559 'kwargs': self._kwargs,
560 'type': self._type,
561 'hostname': self._hostname,
562 'time_start': self.time_start,
563 'acknowledged': self.acknowledged,
564 'delivery_info': self.delivery_info,
565 'worker_pid': self.worker_pid,
566 }
567
568 def humaninfo(self):
569 return '{0.name}[{0.id}]'.format(self)
570
571 def __str__(self):
572 """``str(self)``."""
573 return ' '.join([
574 self.humaninfo(),
575 ' ETA:[{0}]'.format(self._eta) if self._eta else '',
576 ' expires:[{0}]'.format(self._expires) if self._expires else '',
577 ])
578
579 def __repr__(self):
580 """``repr(self)``."""
581 return '<{0}: {1} {2} {3}>'.format(
582 type(self).__name__, self.humaninfo(),
583 self._argsrepr, self._kwargsrepr,
584 )
585
586 @cached_property
587 def _payload(self):
588 return self.__payload
589
590 @cached_property
591 def chord(self):
592 # used by backend.mark_as_failure when failure is reported
593 # by parent process
594 # pylint: disable=unpacking-non-sequence
595 # payload is a property, so pylint doesn't think it's a tuple.
596 _, _, embed = self._payload
597 return embed.get('chord')
598
599 @cached_property
600 def errbacks(self):
601 # used by backend.mark_as_failure when failure is reported
602 # by parent process
603 # pylint: disable=unpacking-non-sequence
604 # payload is a property, so pylint doesn't think it's a tuple.
605 _, _, embed = self._payload
606 return embed.get('errbacks')
607
608 @cached_property
609 def group(self):
610 # used by backend.on_chord_part_return when failures reported
611 # by parent process
612 return self._request_dict.get('group')
613
614 @cached_property
615 def _context(self):
616 """Context (:class:`~celery.app.task.Context`) of this task."""
617 request = self._request_dict
618 # pylint: disable=unpacking-non-sequence
619 # payload is a property, so pylint doesn't think it's a tuple.
620 _, _, embed = self._payload
621 request.update(**embed or {})
622 return Context(request)
623
624
625 def create_request_cls(base, task, pool, hostname, eventer,
626 ref=ref, revoked_tasks=revoked_tasks,
627 task_ready=task_ready, trace=trace_task_ret):
628 default_time_limit = task.time_limit
629 default_soft_time_limit = task.soft_time_limit
630 apply_async = pool.apply_async
631 acks_late = task.acks_late
632 events = eventer and eventer.enabled
633
634 class Request(base):
635
636 def execute_using_pool(self, pool, **kwargs):
637 task_id = self.task_id
638 if (self.expires or task_id in revoked_tasks) and self.revoked():
639 raise TaskRevokedError(task_id)
640
641 time_limit, soft_time_limit = self.time_limits
642 result = apply_async(
643 trace,
644 args=(self.type, task_id, self.request_dict, self.body,
645 self.content_type, self.content_encoding),
646 accept_callback=self.on_accepted,
647 timeout_callback=self.on_timeout,
648 callback=self.on_success,
649 error_callback=self.on_failure,
650 soft_timeout=soft_time_limit or default_soft_time_limit,
651 timeout=time_limit or default_time_limit,
652 correlation_id=task_id,
653 )
654 # cannot create weakref to None
655 # pylint: disable=attribute-defined-outside-init
656 self._apply_result = maybe(ref, result)
657 return result
658
659 def on_success(self, failed__retval__runtime, **kwargs):
660 failed, retval, runtime = failed__retval__runtime
661 if failed:
662 if isinstance(retval.exception, (
663 SystemExit, KeyboardInterrupt)):
664 raise retval.exception
665 return self.on_failure(retval, return_ok=True)
666 task_ready(self)
667
668 if acks_late:
669 self.acknowledge()
670
671 if events:
672 self.send_event(
673 'task-succeeded', result=retval, runtime=runtime,
674 )
675
676 return Request
677
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/celery/worker/request.py b/celery/worker/request.py
--- a/celery/worker/request.py
+++ b/celery/worker/request.py
@@ -505,7 +505,7 @@
)
ack = self.task.acks_on_failure_or_timeout
if reject:
- requeue = not self.delivery_info.get('redelivered')
+ requeue = True
self.reject(requeue=requeue)
send_failed_event = False
elif ack:
| {"golden_diff": "diff --git a/celery/worker/request.py b/celery/worker/request.py\n--- a/celery/worker/request.py\n+++ b/celery/worker/request.py\n@@ -505,7 +505,7 @@\n )\n ack = self.task.acks_on_failure_or_timeout\n if reject:\n- requeue = not self.delivery_info.get('redelivered')\n+ requeue = True\n self.reject(requeue=requeue)\n send_failed_event = False\n elif ack:\n", "issue": "Document and code are inconsistent about task_reject_on_worker_lost config\n<!--\r\nPlease fill this template entirely and do not erase parts of it.\r\nWe reserve the right to close without a response\r\nbug reports which are incomplete.\r\n-->\r\n# Checklist\r\n<!--\r\nTo check an item on the list replace [ ] with [x].\r\n-->\r\n\r\n- [x] I have checked the [issues list](https://github.com/celery/celery/issues?utf8=%E2%9C%93&q=is%3Aissue+label%3A%22Category%3A+Documentation%22+)\r\n for similar or identical bug reports.\r\n- [x] I have checked the [pull requests list](https://github.com/celery/celery/pulls?q=is%3Apr+label%3A%22Category%3A+Documentation%22)\r\n for existing proposed fixes.\r\n- [x] I have checked the [commit log](https://github.com/celery/celery/commits/master)\r\n to find out if the bug was already fixed in the master branch.\r\n- [x] I have included all related issues and possible duplicate issues in this issue\r\n (If there are none, check this box anyway).\r\n\r\n## Related Issues and Possible Duplicates\r\n<!--\r\nPlease make sure to search and mention any related issues\r\nor possible duplicates to this issue as requested by the checklist above.\r\n\r\nThis may or may not include issues in other repositories that the Celery project\r\nmaintains or other repositories that are dependencies of Celery.\r\n\r\nIf you don't know how to mention issues, please refer to Github's documentation\r\non the subject: https://help.github.com/en/articles/autolinked-references-and-urls#issues-and-pull-requests\r\n-->\r\n\r\n#### Related Issues\r\n\r\n- None\r\n\r\n#### Possible Duplicates\r\n\r\n- None\r\n\r\n# Description\r\n<!--\r\nPlease describe what's missing or incorrect about our documentation.\r\nInclude links and/or screenshots which will aid us to resolve the issue.\r\n-->\r\nIn the latest version of the documentation about [task_reject_on_worker_lost](http://docs.celeryproject.org/en/latest/userguide/configuration.html?highlight=task_reject_on_worker_lost), it says `Enabling this can cause message loops`\r\n\r\nBut actually, enabling this will not cause message loops, tasks only execute twice.Tasks that have been redelivered will not be redelivered again, [source code](https://github.com/celery/celery/blob/master/celery/worker/request.py#L518)\r\n\r\n\r\n# Suggestions\r\n<!-- Please provide us suggestions for how to fix the documentation -->\r\nIf it is a documentation error, it is best to remove the warning from the document.\r\nIf the document is ok, it is need to modify the code.\r\nI can help modify the document or code.\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Task request.\n\nThis module defines the :class:`Request` class, that specifies\nhow tasks are executed.\n\"\"\"\nfrom __future__ import absolute_import, unicode_literals\n\nimport logging\nimport sys\nfrom datetime import datetime\nfrom time import time\nfrom weakref import ref\n\nfrom billiard.common import TERM_SIGNAME\nfrom kombu.utils.encoding import safe_repr, safe_str\nfrom kombu.utils.objects import cached_property\n\nfrom celery import signals\nfrom celery.app.task import Context\nfrom 
celery.app.trace import trace_task, trace_task_ret\nfrom celery.exceptions import (Ignore, InvalidTaskError, Reject, Retry,\n TaskRevokedError, Terminated,\n TimeLimitExceeded, WorkerLostError)\nfrom celery.five import monotonic, python_2_unicode_compatible, string\nfrom celery.platforms import signals as _signals\nfrom celery.utils.functional import maybe, noop\nfrom celery.utils.log import get_logger\nfrom celery.utils.nodenames import gethostname\nfrom celery.utils.serialization import get_pickled_exception\nfrom celery.utils.time import maybe_iso8601, maybe_make_aware, timezone\n\nfrom . import state\n\n__all__ = ('Request',)\n\n# pylint: disable=redefined-outer-name\n# We cache globals and attribute lookups, so disable this warning.\n\nIS_PYPY = hasattr(sys, 'pypy_version_info')\n\nlogger = get_logger(__name__)\ndebug, info, warn, error = (logger.debug, logger.info,\n logger.warning, logger.error)\n_does_info = False\n_does_debug = False\n\n\ndef __optimize__():\n # this is also called by celery.app.trace.setup_worker_optimizations\n global _does_debug\n global _does_info\n _does_debug = logger.isEnabledFor(logging.DEBUG)\n _does_info = logger.isEnabledFor(logging.INFO)\n\n\n__optimize__() # noqa: E305\n\n# Localize\ntz_or_local = timezone.tz_or_local\nsend_revoked = signals.task_revoked.send\n\ntask_accepted = state.task_accepted\ntask_ready = state.task_ready\nrevoked_tasks = state.revoked\n\n\n@python_2_unicode_compatible\nclass Request(object):\n \"\"\"A request for task execution.\"\"\"\n\n acknowledged = False\n time_start = None\n worker_pid = None\n time_limits = (None, None)\n _already_revoked = False\n _terminate_on_ack = None\n _apply_result = None\n _tzlocal = None\n\n if not IS_PYPY: # pragma: no cover\n __slots__ = (\n '_app', '_type', 'name', 'id', '_root_id', '_parent_id',\n '_on_ack', '_body', '_hostname', '_eventer', '_connection_errors',\n '_task', '_eta', '_expires', '_request_dict', '_on_reject', '_utc',\n '_content_type', '_content_encoding', '_argsrepr', '_kwargsrepr',\n '_args', '_kwargs', '_decoded', '__payload',\n '__weakref__', '__dict__',\n )\n\n def __init__(self, message, on_ack=noop,\n hostname=None, eventer=None, app=None,\n connection_errors=None, request_dict=None,\n task=None, on_reject=noop, body=None,\n headers=None, decoded=False, utc=True,\n maybe_make_aware=maybe_make_aware,\n maybe_iso8601=maybe_iso8601, **opts):\n self._message = message\n self._request_dict = message.headers if headers is None else headers\n self._body = message.body if body is None else body\n self._app = app\n self._utc = utc\n self._decoded = decoded\n if decoded:\n self._content_type = self._content_encoding = None\n else:\n self._content_type, self._content_encoding = (\n message.content_type, message.content_encoding,\n )\n self.__payload = self._body if self._decoded else message.payload\n self.id = self._request_dict['id']\n self._type = self.name = self._request_dict['task']\n if 'shadow' in self._request_dict:\n self.name = self._request_dict['shadow'] or self.name\n self._root_id = self._request_dict.get('root_id')\n self._parent_id = self._request_dict.get('parent_id')\n timelimit = self._request_dict.get('timelimit', None)\n if timelimit:\n self.time_limits = timelimit\n self._argsrepr = self._request_dict.get('argsrepr', '')\n self._kwargsrepr = self._request_dict.get('kwargsrepr', '')\n self._on_ack = on_ack\n self._on_reject = on_reject\n self._hostname = hostname or gethostname()\n self._eventer = eventer\n self._connection_errors = connection_errors or ()\n 
self._task = task or self._app.tasks[self._type]\n\n # timezone means the message is timezone-aware, and the only timezone\n # supported at this point is UTC.\n eta = self._request_dict.get('eta')\n if eta is not None:\n try:\n eta = maybe_iso8601(eta)\n except (AttributeError, ValueError, TypeError) as exc:\n raise InvalidTaskError(\n 'invalid ETA value {0!r}: {1}'.format(eta, exc))\n self._eta = maybe_make_aware(eta, self.tzlocal)\n else:\n self._eta = None\n\n expires = self._request_dict.get('expires')\n if expires is not None:\n try:\n expires = maybe_iso8601(expires)\n except (AttributeError, ValueError, TypeError) as exc:\n raise InvalidTaskError(\n 'invalid expires value {0!r}: {1}'.format(expires, exc))\n self._expires = maybe_make_aware(expires, self.tzlocal)\n else:\n self._expires = None\n\n delivery_info = message.delivery_info or {}\n properties = message.properties or {}\n self._delivery_info = {\n 'exchange': delivery_info.get('exchange'),\n 'routing_key': delivery_info.get('routing_key'),\n 'priority': properties.get('priority'),\n 'redelivered': delivery_info.get('redelivered'),\n }\n self._request_dict.update({\n 'reply_to': properties.get('reply_to'),\n 'correlation_id': properties.get('correlation_id'),\n 'hostname': self._hostname,\n 'delivery_info': self._delivery_info\n })\n # this is a reference pass to avoid memory usage burst\n self._request_dict['args'], self._request_dict['kwargs'], _ = self.__payload\n self._args = self._request_dict['args']\n self._kwargs = self._request_dict['kwargs']\n\n @property\n def delivery_info(self):\n return self._delivery_info\n\n @property\n def message(self):\n return self._message\n\n @property\n def request_dict(self):\n return self._request_dict\n\n @property\n def body(self):\n return self._body\n\n @property\n def app(self):\n return self._app\n\n @property\n def utc(self):\n return self._utc\n\n @property\n def content_type(self):\n return self._content_type\n\n @property\n def content_encoding(self):\n return self._content_encoding\n\n @property\n def type(self):\n return self._type\n\n @property\n def root_id(self):\n return self._root_id\n\n @property\n def parent_id(self):\n return self._parent_id\n\n @property\n def argsrepr(self):\n return self._argsrepr\n\n @property\n def args(self):\n return self._args\n\n @property\n def kwargs(self):\n return self._kwargs\n\n @property\n def kwargsrepr(self):\n return self._kwargsrepr\n\n @property\n def on_ack(self):\n return self._on_ack\n\n @property\n def on_reject(self):\n return self._on_reject\n\n @on_reject.setter\n def on_reject(self, value):\n self._on_reject = value\n\n @property\n def hostname(self):\n return self._hostname\n\n @property\n def eventer(self):\n return self._eventer\n\n @eventer.setter\n def eventer(self, eventer):\n self._eventer = eventer\n\n @property\n def connection_errors(self):\n return self._connection_errors\n\n @property\n def task(self):\n return self._task\n\n @property\n def eta(self):\n return self._eta\n\n @property\n def expires(self):\n return self._expires\n\n @expires.setter\n def expires(self, value):\n self._expires = value\n\n @property\n def tzlocal(self):\n if self._tzlocal is None:\n self._tzlocal = self._app.conf.timezone\n return self._tzlocal\n\n @property\n def store_errors(self):\n return (not self.task.ignore_result or\n self.task.store_errors_even_if_ignored)\n\n @property\n def task_id(self):\n # XXX compat\n return self.id\n\n @task_id.setter # noqa\n def task_id(self, value):\n self.id = value\n\n @property\n def 
task_name(self):\n # XXX compat\n return self.name\n\n @task_name.setter # noqa\n def task_name(self, value):\n self.name = value\n\n @property\n def reply_to(self):\n # used by rpc backend when failures reported by parent process\n return self._request_dict['reply_to']\n\n @property\n def correlation_id(self):\n # used similarly to reply_to\n return self._request_dict['correlation_id']\n\n def execute_using_pool(self, pool, **kwargs):\n \"\"\"Used by the worker to send this task to the pool.\n\n Arguments:\n pool (~celery.concurrency.base.TaskPool): The execution pool\n used to execute this request.\n\n Raises:\n celery.exceptions.TaskRevokedError: if the task was revoked.\n \"\"\"\n task_id = self.id\n task = self._task\n if self.revoked():\n raise TaskRevokedError(task_id)\n\n time_limit, soft_time_limit = self.time_limits\n result = pool.apply_async(\n trace_task_ret,\n args=(self._type, task_id, self._request_dict, self._body,\n self._content_type, self._content_encoding),\n accept_callback=self.on_accepted,\n timeout_callback=self.on_timeout,\n callback=self.on_success,\n error_callback=self.on_failure,\n soft_timeout=soft_time_limit or task.soft_time_limit,\n timeout=time_limit or task.time_limit,\n correlation_id=task_id,\n )\n # cannot create weakref to None\n self._apply_result = maybe(ref, result)\n return result\n\n def execute(self, loglevel=None, logfile=None):\n \"\"\"Execute the task in a :func:`~celery.app.trace.trace_task`.\n\n Arguments:\n loglevel (int): The loglevel used by the task.\n logfile (str): The logfile used by the task.\n \"\"\"\n if self.revoked():\n return\n\n # acknowledge task as being processed.\n if not self.task.acks_late:\n self.acknowledge()\n\n _, _, embed = self._payload\n request = self._request_dict\n # pylint: disable=unpacking-non-sequence\n # payload is a property, so pylint doesn't think it's a tuple.\n request.update({\n 'loglevel': loglevel,\n 'logfile': logfile,\n 'is_eager': False,\n }, **embed or {})\n retval = trace_task(self.task, self.id, self._args, self._kwargs, request,\n hostname=self._hostname, loader=self._app.loader,\n app=self._app)[0]\n self.acknowledge()\n return retval\n\n def maybe_expire(self):\n \"\"\"If expired, mark the task as revoked.\"\"\"\n if self._expires:\n now = datetime.now(self._expires.tzinfo)\n if now > self._expires:\n revoked_tasks.add(self.id)\n return True\n\n def terminate(self, pool, signal=None):\n signal = _signals.signum(signal or TERM_SIGNAME)\n if self.time_start:\n pool.terminate_job(self.worker_pid, signal)\n self._announce_revoked('terminated', True, signal, False)\n else:\n self._terminate_on_ack = pool, signal\n if self._apply_result is not None:\n obj = self._apply_result() # is a weakref\n if obj is not None:\n obj.terminate(signal)\n\n def _announce_revoked(self, reason, terminated, signum, expired):\n task_ready(self)\n self.send_event('task-revoked',\n terminated=terminated, signum=signum, expired=expired)\n self.task.backend.mark_as_revoked(\n self.id, reason, request=self._context,\n store_result=self.store_errors,\n )\n self.acknowledge()\n self._already_revoked = True\n send_revoked(self.task, request=self._context,\n terminated=terminated, signum=signum, expired=expired)\n\n def revoked(self):\n \"\"\"If revoked, skip task and mark state.\"\"\"\n expired = False\n if self._already_revoked:\n return True\n if self._expires:\n expired = self.maybe_expire()\n if self.id in revoked_tasks:\n info('Discarding revoked task: %s[%s]', self.name, self.id)\n self._announce_revoked(\n 
'expired' if expired else 'revoked', False, None, expired,\n )\n return True\n return False\n\n def send_event(self, type, **fields):\n if self._eventer and self._eventer.enabled and self.task.send_events:\n self._eventer.send(type, uuid=self.id, **fields)\n\n def on_accepted(self, pid, time_accepted):\n \"\"\"Handler called when task is accepted by worker pool.\"\"\"\n self.worker_pid = pid\n # Convert monotonic time_accepted to absolute time\n self.time_start = time() - (monotonic() - time_accepted)\n task_accepted(self)\n if not self.task.acks_late:\n self.acknowledge()\n self.send_event('task-started')\n if _does_debug:\n debug('Task accepted: %s[%s] pid:%r', self.name, self.id, pid)\n if self._terminate_on_ack is not None:\n self.terminate(*self._terminate_on_ack)\n\n def on_timeout(self, soft, timeout):\n \"\"\"Handler called if the task times out.\"\"\"\n if soft:\n warn('Soft time limit (%ss) exceeded for %s[%s]',\n timeout, self.name, self.id)\n else:\n task_ready(self)\n error('Hard time limit (%ss) exceeded for %s[%s]',\n timeout, self.name, self.id)\n exc = TimeLimitExceeded(timeout)\n\n self.task.backend.mark_as_failure(\n self.id, exc, request=self._context,\n store_result=self.store_errors,\n )\n\n if self.task.acks_late and self.task.acks_on_failure_or_timeout:\n self.acknowledge()\n\n def on_success(self, failed__retval__runtime, **kwargs):\n \"\"\"Handler called if the task was successfully processed.\"\"\"\n failed, retval, runtime = failed__retval__runtime\n if failed:\n if isinstance(retval.exception, (SystemExit, KeyboardInterrupt)):\n raise retval.exception\n return self.on_failure(retval, return_ok=True)\n task_ready(self)\n\n if self.task.acks_late:\n self.acknowledge()\n\n self.send_event('task-succeeded', result=retval, runtime=runtime)\n\n def on_retry(self, exc_info):\n \"\"\"Handler called if the task should be retried.\"\"\"\n if self.task.acks_late:\n self.acknowledge()\n\n self.send_event('task-retried',\n exception=safe_repr(exc_info.exception.exc),\n traceback=safe_str(exc_info.traceback))\n\n def on_failure(self, exc_info, send_failed_event=True, return_ok=False):\n \"\"\"Handler called if the task raised an exception.\"\"\"\n task_ready(self)\n if isinstance(exc_info.exception, MemoryError):\n raise MemoryError('Process got: %s' % (exc_info.exception,))\n elif isinstance(exc_info.exception, Reject):\n return self.reject(requeue=exc_info.exception.requeue)\n elif isinstance(exc_info.exception, Ignore):\n return self.acknowledge()\n\n exc = exc_info.exception\n\n if isinstance(exc, Retry):\n return self.on_retry(exc_info)\n\n # (acks_late) acknowledge after result stored.\n requeue = False\n if self.task.acks_late:\n reject = (\n self.task.reject_on_worker_lost and\n isinstance(exc, WorkerLostError)\n )\n ack = self.task.acks_on_failure_or_timeout\n if reject:\n requeue = not self.delivery_info.get('redelivered')\n self.reject(requeue=requeue)\n send_failed_event = False\n elif ack:\n self.acknowledge()\n else:\n # supporting the behaviour where a task failed and\n # need to be removed from prefetched local queue\n self.reject(requeue=False)\n\n # These are special cases where the process would not have had time\n # to write the result.\n if isinstance(exc, Terminated):\n self._announce_revoked(\n 'terminated', True, string(exc), False)\n send_failed_event = False # already sent revoked event\n elif not requeue and (isinstance(exc, WorkerLostError) or not return_ok):\n # only mark as failure if task has not been requeued\n 
self.task.backend.mark_as_failure(\n self.id, exc, request=self._context,\n store_result=self.store_errors,\n )\n\n if send_failed_event:\n self.send_event(\n 'task-failed',\n exception=safe_repr(get_pickled_exception(exc_info.exception)),\n traceback=exc_info.traceback,\n )\n\n if not return_ok:\n error('Task handler raised error: %r', exc,\n exc_info=exc_info.exc_info)\n\n def acknowledge(self):\n \"\"\"Acknowledge task.\"\"\"\n if not self.acknowledged:\n self._on_ack(logger, self._connection_errors)\n self.acknowledged = True\n\n def reject(self, requeue=False):\n if not self.acknowledged:\n self._on_reject(logger, self._connection_errors, requeue)\n self.acknowledged = True\n self.send_event('task-rejected', requeue=requeue)\n\n def info(self, safe=False):\n return {\n 'id': self.id,\n 'name': self.name,\n 'args': self._args,\n 'kwargs': self._kwargs,\n 'type': self._type,\n 'hostname': self._hostname,\n 'time_start': self.time_start,\n 'acknowledged': self.acknowledged,\n 'delivery_info': self.delivery_info,\n 'worker_pid': self.worker_pid,\n }\n\n def humaninfo(self):\n return '{0.name}[{0.id}]'.format(self)\n\n def __str__(self):\n \"\"\"``str(self)``.\"\"\"\n return ' '.join([\n self.humaninfo(),\n ' ETA:[{0}]'.format(self._eta) if self._eta else '',\n ' expires:[{0}]'.format(self._expires) if self._expires else '',\n ])\n\n def __repr__(self):\n \"\"\"``repr(self)``.\"\"\"\n return '<{0}: {1} {2} {3}>'.format(\n type(self).__name__, self.humaninfo(),\n self._argsrepr, self._kwargsrepr,\n )\n\n @cached_property\n def _payload(self):\n return self.__payload\n\n @cached_property\n def chord(self):\n # used by backend.mark_as_failure when failure is reported\n # by parent process\n # pylint: disable=unpacking-non-sequence\n # payload is a property, so pylint doesn't think it's a tuple.\n _, _, embed = self._payload\n return embed.get('chord')\n\n @cached_property\n def errbacks(self):\n # used by backend.mark_as_failure when failure is reported\n # by parent process\n # pylint: disable=unpacking-non-sequence\n # payload is a property, so pylint doesn't think it's a tuple.\n _, _, embed = self._payload\n return embed.get('errbacks')\n\n @cached_property\n def group(self):\n # used by backend.on_chord_part_return when failures reported\n # by parent process\n return self._request_dict.get('group')\n\n @cached_property\n def _context(self):\n \"\"\"Context (:class:`~celery.app.task.Context`) of this task.\"\"\"\n request = self._request_dict\n # pylint: disable=unpacking-non-sequence\n # payload is a property, so pylint doesn't think it's a tuple.\n _, _, embed = self._payload\n request.update(**embed or {})\n return Context(request)\n\n\ndef create_request_cls(base, task, pool, hostname, eventer,\n ref=ref, revoked_tasks=revoked_tasks,\n task_ready=task_ready, trace=trace_task_ret):\n default_time_limit = task.time_limit\n default_soft_time_limit = task.soft_time_limit\n apply_async = pool.apply_async\n acks_late = task.acks_late\n events = eventer and eventer.enabled\n\n class Request(base):\n\n def execute_using_pool(self, pool, **kwargs):\n task_id = self.task_id\n if (self.expires or task_id in revoked_tasks) and self.revoked():\n raise TaskRevokedError(task_id)\n\n time_limit, soft_time_limit = self.time_limits\n result = apply_async(\n trace,\n args=(self.type, task_id, self.request_dict, self.body,\n self.content_type, self.content_encoding),\n accept_callback=self.on_accepted,\n timeout_callback=self.on_timeout,\n callback=self.on_success,\n error_callback=self.on_failure,\n 
soft_timeout=soft_time_limit or default_soft_time_limit,\n timeout=time_limit or default_time_limit,\n correlation_id=task_id,\n )\n # cannot create weakref to None\n # pylint: disable=attribute-defined-outside-init\n self._apply_result = maybe(ref, result)\n return result\n\n def on_success(self, failed__retval__runtime, **kwargs):\n failed, retval, runtime = failed__retval__runtime\n if failed:\n if isinstance(retval.exception, (\n SystemExit, KeyboardInterrupt)):\n raise retval.exception\n return self.on_failure(retval, return_ok=True)\n task_ready(self)\n\n if acks_late:\n self.acknowledge()\n\n if events:\n self.send_event(\n 'task-succeeded', result=retval, runtime=runtime,\n )\n\n return Request\n", "path": "celery/worker/request.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Task request.\n\nThis module defines the :class:`Request` class, that specifies\nhow tasks are executed.\n\"\"\"\nfrom __future__ import absolute_import, unicode_literals\n\nimport logging\nimport sys\nfrom datetime import datetime\nfrom time import time\nfrom weakref import ref\n\nfrom billiard.common import TERM_SIGNAME\nfrom kombu.utils.encoding import safe_repr, safe_str\nfrom kombu.utils.objects import cached_property\n\nfrom celery import signals\nfrom celery.app.task import Context\nfrom celery.app.trace import trace_task, trace_task_ret\nfrom celery.exceptions import (Ignore, InvalidTaskError, Reject, Retry,\n TaskRevokedError, Terminated,\n TimeLimitExceeded, WorkerLostError)\nfrom celery.five import monotonic, python_2_unicode_compatible, string\nfrom celery.platforms import signals as _signals\nfrom celery.utils.functional import maybe, noop\nfrom celery.utils.log import get_logger\nfrom celery.utils.nodenames import gethostname\nfrom celery.utils.serialization import get_pickled_exception\nfrom celery.utils.time import maybe_iso8601, maybe_make_aware, timezone\n\nfrom . 
import state\n\n__all__ = ('Request',)\n\n# pylint: disable=redefined-outer-name\n# We cache globals and attribute lookups, so disable this warning.\n\nIS_PYPY = hasattr(sys, 'pypy_version_info')\n\nlogger = get_logger(__name__)\ndebug, info, warn, error = (logger.debug, logger.info,\n logger.warning, logger.error)\n_does_info = False\n_does_debug = False\n\n\ndef __optimize__():\n # this is also called by celery.app.trace.setup_worker_optimizations\n global _does_debug\n global _does_info\n _does_debug = logger.isEnabledFor(logging.DEBUG)\n _does_info = logger.isEnabledFor(logging.INFO)\n\n\n__optimize__() # noqa: E305\n\n# Localize\ntz_or_local = timezone.tz_or_local\nsend_revoked = signals.task_revoked.send\n\ntask_accepted = state.task_accepted\ntask_ready = state.task_ready\nrevoked_tasks = state.revoked\n\n\n@python_2_unicode_compatible\nclass Request(object):\n \"\"\"A request for task execution.\"\"\"\n\n acknowledged = False\n time_start = None\n worker_pid = None\n time_limits = (None, None)\n _already_revoked = False\n _terminate_on_ack = None\n _apply_result = None\n _tzlocal = None\n\n if not IS_PYPY: # pragma: no cover\n __slots__ = (\n '_app', '_type', 'name', 'id', '_root_id', '_parent_id',\n '_on_ack', '_body', '_hostname', '_eventer', '_connection_errors',\n '_task', '_eta', '_expires', '_request_dict', '_on_reject', '_utc',\n '_content_type', '_content_encoding', '_argsrepr', '_kwargsrepr',\n '_args', '_kwargs', '_decoded', '__payload',\n '__weakref__', '__dict__',\n )\n\n def __init__(self, message, on_ack=noop,\n hostname=None, eventer=None, app=None,\n connection_errors=None, request_dict=None,\n task=None, on_reject=noop, body=None,\n headers=None, decoded=False, utc=True,\n maybe_make_aware=maybe_make_aware,\n maybe_iso8601=maybe_iso8601, **opts):\n self._message = message\n self._request_dict = message.headers if headers is None else headers\n self._body = message.body if body is None else body\n self._app = app\n self._utc = utc\n self._decoded = decoded\n if decoded:\n self._content_type = self._content_encoding = None\n else:\n self._content_type, self._content_encoding = (\n message.content_type, message.content_encoding,\n )\n self.__payload = self._body if self._decoded else message.payload\n self.id = self._request_dict['id']\n self._type = self.name = self._request_dict['task']\n if 'shadow' in self._request_dict:\n self.name = self._request_dict['shadow'] or self.name\n self._root_id = self._request_dict.get('root_id')\n self._parent_id = self._request_dict.get('parent_id')\n timelimit = self._request_dict.get('timelimit', None)\n if timelimit:\n self.time_limits = timelimit\n self._argsrepr = self._request_dict.get('argsrepr', '')\n self._kwargsrepr = self._request_dict.get('kwargsrepr', '')\n self._on_ack = on_ack\n self._on_reject = on_reject\n self._hostname = hostname or gethostname()\n self._eventer = eventer\n self._connection_errors = connection_errors or ()\n self._task = task or self._app.tasks[self._type]\n\n # timezone means the message is timezone-aware, and the only timezone\n # supported at this point is UTC.\n eta = self._request_dict.get('eta')\n if eta is not None:\n try:\n eta = maybe_iso8601(eta)\n except (AttributeError, ValueError, TypeError) as exc:\n raise InvalidTaskError(\n 'invalid ETA value {0!r}: {1}'.format(eta, exc))\n self._eta = maybe_make_aware(eta, self.tzlocal)\n else:\n self._eta = None\n\n expires = self._request_dict.get('expires')\n if expires is not None:\n try:\n expires = maybe_iso8601(expires)\n except 
(AttributeError, ValueError, TypeError) as exc:\n raise InvalidTaskError(\n 'invalid expires value {0!r}: {1}'.format(expires, exc))\n self._expires = maybe_make_aware(expires, self.tzlocal)\n else:\n self._expires = None\n\n delivery_info = message.delivery_info or {}\n properties = message.properties or {}\n self._delivery_info = {\n 'exchange': delivery_info.get('exchange'),\n 'routing_key': delivery_info.get('routing_key'),\n 'priority': properties.get('priority'),\n 'redelivered': delivery_info.get('redelivered'),\n }\n self._request_dict.update({\n 'reply_to': properties.get('reply_to'),\n 'correlation_id': properties.get('correlation_id'),\n 'hostname': self._hostname,\n 'delivery_info': self._delivery_info\n })\n # this is a reference pass to avoid memory usage burst\n self._request_dict['args'], self._request_dict['kwargs'], _ = self.__payload\n self._args = self._request_dict['args']\n self._kwargs = self._request_dict['kwargs']\n\n @property\n def delivery_info(self):\n return self._delivery_info\n\n @property\n def message(self):\n return self._message\n\n @property\n def request_dict(self):\n return self._request_dict\n\n @property\n def body(self):\n return self._body\n\n @property\n def app(self):\n return self._app\n\n @property\n def utc(self):\n return self._utc\n\n @property\n def content_type(self):\n return self._content_type\n\n @property\n def content_encoding(self):\n return self._content_encoding\n\n @property\n def type(self):\n return self._type\n\n @property\n def root_id(self):\n return self._root_id\n\n @property\n def parent_id(self):\n return self._parent_id\n\n @property\n def argsrepr(self):\n return self._argsrepr\n\n @property\n def args(self):\n return self._args\n\n @property\n def kwargs(self):\n return self._kwargs\n\n @property\n def kwargsrepr(self):\n return self._kwargsrepr\n\n @property\n def on_ack(self):\n return self._on_ack\n\n @property\n def on_reject(self):\n return self._on_reject\n\n @on_reject.setter\n def on_reject(self, value):\n self._on_reject = value\n\n @property\n def hostname(self):\n return self._hostname\n\n @property\n def eventer(self):\n return self._eventer\n\n @eventer.setter\n def eventer(self, eventer):\n self._eventer = eventer\n\n @property\n def connection_errors(self):\n return self._connection_errors\n\n @property\n def task(self):\n return self._task\n\n @property\n def eta(self):\n return self._eta\n\n @property\n def expires(self):\n return self._expires\n\n @expires.setter\n def expires(self, value):\n self._expires = value\n\n @property\n def tzlocal(self):\n if self._tzlocal is None:\n self._tzlocal = self._app.conf.timezone\n return self._tzlocal\n\n @property\n def store_errors(self):\n return (not self.task.ignore_result or\n self.task.store_errors_even_if_ignored)\n\n @property\n def task_id(self):\n # XXX compat\n return self.id\n\n @task_id.setter # noqa\n def task_id(self, value):\n self.id = value\n\n @property\n def task_name(self):\n # XXX compat\n return self.name\n\n @task_name.setter # noqa\n def task_name(self, value):\n self.name = value\n\n @property\n def reply_to(self):\n # used by rpc backend when failures reported by parent process\n return self._request_dict['reply_to']\n\n @property\n def correlation_id(self):\n # used similarly to reply_to\n return self._request_dict['correlation_id']\n\n def execute_using_pool(self, pool, **kwargs):\n \"\"\"Used by the worker to send this task to the pool.\n\n Arguments:\n pool (~celery.concurrency.base.TaskPool): The execution pool\n used to execute 
this request.\n\n Raises:\n celery.exceptions.TaskRevokedError: if the task was revoked.\n \"\"\"\n task_id = self.id\n task = self._task\n if self.revoked():\n raise TaskRevokedError(task_id)\n\n time_limit, soft_time_limit = self.time_limits\n result = pool.apply_async(\n trace_task_ret,\n args=(self._type, task_id, self._request_dict, self._body,\n self._content_type, self._content_encoding),\n accept_callback=self.on_accepted,\n timeout_callback=self.on_timeout,\n callback=self.on_success,\n error_callback=self.on_failure,\n soft_timeout=soft_time_limit or task.soft_time_limit,\n timeout=time_limit or task.time_limit,\n correlation_id=task_id,\n )\n # cannot create weakref to None\n self._apply_result = maybe(ref, result)\n return result\n\n def execute(self, loglevel=None, logfile=None):\n \"\"\"Execute the task in a :func:`~celery.app.trace.trace_task`.\n\n Arguments:\n loglevel (int): The loglevel used by the task.\n logfile (str): The logfile used by the task.\n \"\"\"\n if self.revoked():\n return\n\n # acknowledge task as being processed.\n if not self.task.acks_late:\n self.acknowledge()\n\n _, _, embed = self._payload\n request = self._request_dict\n # pylint: disable=unpacking-non-sequence\n # payload is a property, so pylint doesn't think it's a tuple.\n request.update({\n 'loglevel': loglevel,\n 'logfile': logfile,\n 'is_eager': False,\n }, **embed or {})\n retval = trace_task(self.task, self.id, self._args, self._kwargs, request,\n hostname=self._hostname, loader=self._app.loader,\n app=self._app)[0]\n self.acknowledge()\n return retval\n\n def maybe_expire(self):\n \"\"\"If expired, mark the task as revoked.\"\"\"\n if self._expires:\n now = datetime.now(self._expires.tzinfo)\n if now > self._expires:\n revoked_tasks.add(self.id)\n return True\n\n def terminate(self, pool, signal=None):\n signal = _signals.signum(signal or TERM_SIGNAME)\n if self.time_start:\n pool.terminate_job(self.worker_pid, signal)\n self._announce_revoked('terminated', True, signal, False)\n else:\n self._terminate_on_ack = pool, signal\n if self._apply_result is not None:\n obj = self._apply_result() # is a weakref\n if obj is not None:\n obj.terminate(signal)\n\n def _announce_revoked(self, reason, terminated, signum, expired):\n task_ready(self)\n self.send_event('task-revoked',\n terminated=terminated, signum=signum, expired=expired)\n self.task.backend.mark_as_revoked(\n self.id, reason, request=self._context,\n store_result=self.store_errors,\n )\n self.acknowledge()\n self._already_revoked = True\n send_revoked(self.task, request=self._context,\n terminated=terminated, signum=signum, expired=expired)\n\n def revoked(self):\n \"\"\"If revoked, skip task and mark state.\"\"\"\n expired = False\n if self._already_revoked:\n return True\n if self._expires:\n expired = self.maybe_expire()\n if self.id in revoked_tasks:\n info('Discarding revoked task: %s[%s]', self.name, self.id)\n self._announce_revoked(\n 'expired' if expired else 'revoked', False, None, expired,\n )\n return True\n return False\n\n def send_event(self, type, **fields):\n if self._eventer and self._eventer.enabled and self.task.send_events:\n self._eventer.send(type, uuid=self.id, **fields)\n\n def on_accepted(self, pid, time_accepted):\n \"\"\"Handler called when task is accepted by worker pool.\"\"\"\n self.worker_pid = pid\n # Convert monotonic time_accepted to absolute time\n self.time_start = time() - (monotonic() - time_accepted)\n task_accepted(self)\n if not self.task.acks_late:\n self.acknowledge()\n 
self.send_event('task-started')\n if _does_debug:\n debug('Task accepted: %s[%s] pid:%r', self.name, self.id, pid)\n if self._terminate_on_ack is not None:\n self.terminate(*self._terminate_on_ack)\n\n def on_timeout(self, soft, timeout):\n \"\"\"Handler called if the task times out.\"\"\"\n if soft:\n warn('Soft time limit (%ss) exceeded for %s[%s]',\n timeout, self.name, self.id)\n else:\n task_ready(self)\n error('Hard time limit (%ss) exceeded for %s[%s]',\n timeout, self.name, self.id)\n exc = TimeLimitExceeded(timeout)\n\n self.task.backend.mark_as_failure(\n self.id, exc, request=self._context,\n store_result=self.store_errors,\n )\n\n if self.task.acks_late and self.task.acks_on_failure_or_timeout:\n self.acknowledge()\n\n def on_success(self, failed__retval__runtime, **kwargs):\n \"\"\"Handler called if the task was successfully processed.\"\"\"\n failed, retval, runtime = failed__retval__runtime\n if failed:\n if isinstance(retval.exception, (SystemExit, KeyboardInterrupt)):\n raise retval.exception\n return self.on_failure(retval, return_ok=True)\n task_ready(self)\n\n if self.task.acks_late:\n self.acknowledge()\n\n self.send_event('task-succeeded', result=retval, runtime=runtime)\n\n def on_retry(self, exc_info):\n \"\"\"Handler called if the task should be retried.\"\"\"\n if self.task.acks_late:\n self.acknowledge()\n\n self.send_event('task-retried',\n exception=safe_repr(exc_info.exception.exc),\n traceback=safe_str(exc_info.traceback))\n\n def on_failure(self, exc_info, send_failed_event=True, return_ok=False):\n \"\"\"Handler called if the task raised an exception.\"\"\"\n task_ready(self)\n if isinstance(exc_info.exception, MemoryError):\n raise MemoryError('Process got: %s' % (exc_info.exception,))\n elif isinstance(exc_info.exception, Reject):\n return self.reject(requeue=exc_info.exception.requeue)\n elif isinstance(exc_info.exception, Ignore):\n return self.acknowledge()\n\n exc = exc_info.exception\n\n if isinstance(exc, Retry):\n return self.on_retry(exc_info)\n\n # (acks_late) acknowledge after result stored.\n requeue = False\n if self.task.acks_late:\n reject = (\n self.task.reject_on_worker_lost and\n isinstance(exc, WorkerLostError)\n )\n ack = self.task.acks_on_failure_or_timeout\n if reject:\n requeue = True\n self.reject(requeue=requeue)\n send_failed_event = False\n elif ack:\n self.acknowledge()\n else:\n # supporting the behaviour where a task failed and\n # need to be removed from prefetched local queue\n self.reject(requeue=False)\n\n # These are special cases where the process would not have had time\n # to write the result.\n if isinstance(exc, Terminated):\n self._announce_revoked(\n 'terminated', True, string(exc), False)\n send_failed_event = False # already sent revoked event\n elif not requeue and (isinstance(exc, WorkerLostError) or not return_ok):\n # only mark as failure if task has not been requeued\n self.task.backend.mark_as_failure(\n self.id, exc, request=self._context,\n store_result=self.store_errors,\n )\n\n if send_failed_event:\n self.send_event(\n 'task-failed',\n exception=safe_repr(get_pickled_exception(exc_info.exception)),\n traceback=exc_info.traceback,\n )\n\n if not return_ok:\n error('Task handler raised error: %r', exc,\n exc_info=exc_info.exc_info)\n\n def acknowledge(self):\n \"\"\"Acknowledge task.\"\"\"\n if not self.acknowledged:\n self._on_ack(logger, self._connection_errors)\n self.acknowledged = True\n\n def reject(self, requeue=False):\n if not self.acknowledged:\n self._on_reject(logger, self._connection_errors, 
requeue)\n self.acknowledged = True\n self.send_event('task-rejected', requeue=requeue)\n\n def info(self, safe=False):\n return {\n 'id': self.id,\n 'name': self.name,\n 'args': self._args,\n 'kwargs': self._kwargs,\n 'type': self._type,\n 'hostname': self._hostname,\n 'time_start': self.time_start,\n 'acknowledged': self.acknowledged,\n 'delivery_info': self.delivery_info,\n 'worker_pid': self.worker_pid,\n }\n\n def humaninfo(self):\n return '{0.name}[{0.id}]'.format(self)\n\n def __str__(self):\n \"\"\"``str(self)``.\"\"\"\n return ' '.join([\n self.humaninfo(),\n ' ETA:[{0}]'.format(self._eta) if self._eta else '',\n ' expires:[{0}]'.format(self._expires) if self._expires else '',\n ])\n\n def __repr__(self):\n \"\"\"``repr(self)``.\"\"\"\n return '<{0}: {1} {2} {3}>'.format(\n type(self).__name__, self.humaninfo(),\n self._argsrepr, self._kwargsrepr,\n )\n\n @cached_property\n def _payload(self):\n return self.__payload\n\n @cached_property\n def chord(self):\n # used by backend.mark_as_failure when failure is reported\n # by parent process\n # pylint: disable=unpacking-non-sequence\n # payload is a property, so pylint doesn't think it's a tuple.\n _, _, embed = self._payload\n return embed.get('chord')\n\n @cached_property\n def errbacks(self):\n # used by backend.mark_as_failure when failure is reported\n # by parent process\n # pylint: disable=unpacking-non-sequence\n # payload is a property, so pylint doesn't think it's a tuple.\n _, _, embed = self._payload\n return embed.get('errbacks')\n\n @cached_property\n def group(self):\n # used by backend.on_chord_part_return when failures reported\n # by parent process\n return self._request_dict.get('group')\n\n @cached_property\n def _context(self):\n \"\"\"Context (:class:`~celery.app.task.Context`) of this task.\"\"\"\n request = self._request_dict\n # pylint: disable=unpacking-non-sequence\n # payload is a property, so pylint doesn't think it's a tuple.\n _, _, embed = self._payload\n request.update(**embed or {})\n return Context(request)\n\n\ndef create_request_cls(base, task, pool, hostname, eventer,\n ref=ref, revoked_tasks=revoked_tasks,\n task_ready=task_ready, trace=trace_task_ret):\n default_time_limit = task.time_limit\n default_soft_time_limit = task.soft_time_limit\n apply_async = pool.apply_async\n acks_late = task.acks_late\n events = eventer and eventer.enabled\n\n class Request(base):\n\n def execute_using_pool(self, pool, **kwargs):\n task_id = self.task_id\n if (self.expires or task_id in revoked_tasks) and self.revoked():\n raise TaskRevokedError(task_id)\n\n time_limit, soft_time_limit = self.time_limits\n result = apply_async(\n trace,\n args=(self.type, task_id, self.request_dict, self.body,\n self.content_type, self.content_encoding),\n accept_callback=self.on_accepted,\n timeout_callback=self.on_timeout,\n callback=self.on_success,\n error_callback=self.on_failure,\n soft_timeout=soft_time_limit or default_soft_time_limit,\n timeout=time_limit or default_time_limit,\n correlation_id=task_id,\n )\n # cannot create weakref to None\n # pylint: disable=attribute-defined-outside-init\n self._apply_result = maybe(ref, result)\n return result\n\n def on_success(self, failed__retval__runtime, **kwargs):\n failed, retval, runtime = failed__retval__runtime\n if failed:\n if isinstance(retval.exception, (\n SystemExit, KeyboardInterrupt)):\n raise retval.exception\n return self.on_failure(retval, return_ok=True)\n task_ready(self)\n\n if acks_late:\n self.acknowledge()\n\n if events:\n self.send_event(\n 
'task-succeeded', result=retval, runtime=runtime,\n )\n\n return Request\n", "path": "celery/worker/request.py"}]} |
gh_patches_debug_1144 | rasdani/github-patches | git_diff | readthedocs__readthedocs.org-4754 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Improve unexpected error message
Many users are reporting / filing an issue in our issue tracker when this message is shown to them, which is logical because it's what the message says.
> There was a problem with Read the Docs while building your documentation. Please report this to us with your build id (1234)
However, I think we should improve this message to say something like "if this problem persists, please report..." or something similar. Otherwise, it is sometimes a temporary failure and we get tons of reports.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `readthedocs/doc_builder/exceptions.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 """Exceptions raised when building documentation."""
3
4 from __future__ import division, print_function, unicode_literals
5
6 from django.utils.translation import ugettext_noop
7
8
9 class BuildEnvironmentException(Exception):
10
11 message = None
12 status_code = None
13
14 def __init__(self, message=None, **kwargs):
15 self.status_code = kwargs.pop('status_code', None) or self.status_code or 1
16 message = message or self.get_default_message()
17 super(BuildEnvironmentException, self).__init__(message, **kwargs)
18
19 def get_default_message(self):
20 return self.message
21
22
23 class BuildEnvironmentError(BuildEnvironmentException):
24
25 GENERIC_WITH_BUILD_ID = ugettext_noop(
26 'There was a problem with Read the Docs while building your documentation. '
27 'Please report this to us with your build id ({build_id}).',
28 )
29
30
31 class BuildEnvironmentCreationFailed(BuildEnvironmentError):
32
33 message = ugettext_noop('Build environment creation failed')
34
35
36 class VersionLockedError(BuildEnvironmentError):
37
38 message = ugettext_noop('Version locked, retrying in 5 minutes.')
39 status_code = 423
40
41
42 class ProjectBuildsSkippedError(BuildEnvironmentError):
43
44 message = ugettext_noop('Builds for this project are temporarily disabled')
45
46
47 class YAMLParseError(BuildEnvironmentError):
48
49 GENERIC_WITH_PARSE_EXCEPTION = ugettext_noop(
50 'Problem parsing YAML configuration. {exception}',
51 )
52
53
54 class BuildTimeoutError(BuildEnvironmentError):
55
56 message = ugettext_noop('Build exited due to time out')
57
58
59 class BuildEnvironmentWarning(BuildEnvironmentException):
60 pass
61
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/readthedocs/doc_builder/exceptions.py b/readthedocs/doc_builder/exceptions.py
--- a/readthedocs/doc_builder/exceptions.py
+++ b/readthedocs/doc_builder/exceptions.py
@@ -24,7 +24,9 @@
GENERIC_WITH_BUILD_ID = ugettext_noop(
'There was a problem with Read the Docs while building your documentation. '
- 'Please report this to us with your build id ({build_id}).',
+ 'Please try again later. '
+ 'However, if this problem persists, '
+ 'please report this to us with your build id ({build_id}).',
)
| {"golden_diff": "diff --git a/readthedocs/doc_builder/exceptions.py b/readthedocs/doc_builder/exceptions.py\n--- a/readthedocs/doc_builder/exceptions.py\n+++ b/readthedocs/doc_builder/exceptions.py\n@@ -24,7 +24,9 @@\n \n GENERIC_WITH_BUILD_ID = ugettext_noop(\n 'There was a problem with Read the Docs while building your documentation. '\n- 'Please report this to us with your build id ({build_id}).',\n+ 'Please try again later. '\n+ 'However, if this problem persists, '\n+ 'please report this to us with your build id ({build_id}).',\n )\n", "issue": "Improve unexpected error message\nMany users are reporting / filling an issue in our issue tracker when this message is shown to them, which is logic because it's what the message says.\r\n\r\n> There was a problem with Read the Docs while building your documentation. Please report this to us with your build id (1234)\r\n\r\nAlthough, I think we should improve this message saying something like \"if this problem persists, please report...\" or something similar to that. Otherwise, sometimes it's a temporal failure and we get tons of reports.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Exceptions raised when building documentation.\"\"\"\n\nfrom __future__ import division, print_function, unicode_literals\n\nfrom django.utils.translation import ugettext_noop\n\n\nclass BuildEnvironmentException(Exception):\n\n message = None\n status_code = None\n\n def __init__(self, message=None, **kwargs):\n self.status_code = kwargs.pop('status_code', None) or self.status_code or 1\n message = message or self.get_default_message()\n super(BuildEnvironmentException, self).__init__(message, **kwargs)\n\n def get_default_message(self):\n return self.message\n\n\nclass BuildEnvironmentError(BuildEnvironmentException):\n\n GENERIC_WITH_BUILD_ID = ugettext_noop(\n 'There was a problem with Read the Docs while building your documentation. '\n 'Please report this to us with your build id ({build_id}).',\n )\n\n\nclass BuildEnvironmentCreationFailed(BuildEnvironmentError):\n\n message = ugettext_noop('Build environment creation failed')\n\n\nclass VersionLockedError(BuildEnvironmentError):\n\n message = ugettext_noop('Version locked, retrying in 5 minutes.')\n status_code = 423\n\n\nclass ProjectBuildsSkippedError(BuildEnvironmentError):\n\n message = ugettext_noop('Builds for this project are temporarily disabled')\n\n\nclass YAMLParseError(BuildEnvironmentError):\n\n GENERIC_WITH_PARSE_EXCEPTION = ugettext_noop(\n 'Problem parsing YAML configuration. 
{exception}',\n )\n\n\nclass BuildTimeoutError(BuildEnvironmentError):\n\n message = ugettext_noop('Build exited due to time out')\n\n\nclass BuildEnvironmentWarning(BuildEnvironmentException):\n pass\n", "path": "readthedocs/doc_builder/exceptions.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Exceptions raised when building documentation.\"\"\"\n\nfrom __future__ import division, print_function, unicode_literals\n\nfrom django.utils.translation import ugettext_noop\n\n\nclass BuildEnvironmentException(Exception):\n\n message = None\n status_code = None\n\n def __init__(self, message=None, **kwargs):\n self.status_code = kwargs.pop('status_code', None) or self.status_code or 1\n message = message or self.get_default_message()\n super(BuildEnvironmentException, self).__init__(message, **kwargs)\n\n def get_default_message(self):\n return self.message\n\n\nclass BuildEnvironmentError(BuildEnvironmentException):\n\n GENERIC_WITH_BUILD_ID = ugettext_noop(\n 'There was a problem with Read the Docs while building your documentation. '\n 'Please try again later. '\n 'However, if this problem persists, '\n 'please report this to us with your build id ({build_id}).',\n )\n\n\nclass BuildEnvironmentCreationFailed(BuildEnvironmentError):\n\n message = ugettext_noop('Build environment creation failed')\n\n\nclass VersionLockedError(BuildEnvironmentError):\n\n message = ugettext_noop('Version locked, retrying in 5 minutes.')\n status_code = 423\n\n\nclass ProjectBuildsSkippedError(BuildEnvironmentError):\n\n message = ugettext_noop('Builds for this project are temporarily disabled')\n\n\nclass YAMLParseError(BuildEnvironmentError):\n\n GENERIC_WITH_PARSE_EXCEPTION = ugettext_noop(\n 'Problem parsing YAML configuration. {exception}',\n )\n\n\nclass BuildTimeoutError(BuildEnvironmentError):\n\n message = ugettext_noop('Build exited due to time out')\n\n\nclass BuildEnvironmentWarning(BuildEnvironmentException):\n pass\n", "path": "readthedocs/doc_builder/exceptions.py"}]} |
gh_patches_debug_1145 | rasdani/github-patches | git_diff | pypa__pip-2227 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pip fails to install packages with local versions.
`pip install apache_libcloud-0.16.0+clusterhq.0-py2.py3-none-any.whl` gives
``` python-traceback
Exception:
Traceback (most recent call last):
File "/Users/cougar/dev/tmp-83d539b3e47fa7c3/lib/python2.7/site-packages/pip/basecommand.py", line 210, in main
status = self.run(options, args)
File "/Users/cougar/dev/tmp-83d539b3e47fa7c3/lib/python2.7/site-packages/pip/commands/install.py", line 304, in run
name, None, isolated=options.isolated_mode,
File "/Users/cougar/dev/tmp-83d539b3e47fa7c3/lib/python2.7/site-packages/pip/req/req_install.py", line 179, in from_line
isolated=isolated)
File "/Users/cougar/dev/tmp-83d539b3e47fa7c3/lib/python2.7/site-packages/pip/req/req_install.py", line 53, in __init__
req = pkg_resources.Requirement.parse(req)
File "/Users/cougar/dev/tmp-83d539b3e47fa7c3/lib/python2.7/site-packages/pip/_vendor/pkg_resources.py", line 2842, in parse
reqs = list(parse_requirements(s))
File "/Users/cougar/dev/tmp-83d539b3e47fa7c3/lib/python2.7/site-packages/pip/_vendor/pkg_resources.py", line 2789, in parse_requirements
"version spec")
File "/Users/cougar/dev/tmp-83d539b3e47fa7c3/lib/python2.7/site-packages/pip/_vendor/pkg_resources.py", line 2765, in scan_list
raise ValueError(msg, line, "at", line[p:])
ValueError: ("Expected ',' or end-of-list in", 'apache-libcloud==0.16.0%2Bclusterhq.0', 'at', '%2Bclusterhq.0')
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pip/index.py`
Content:
```
1 """Routines related to PyPI, indexes"""
2 from __future__ import absolute_import
3
4 import logging
5 import cgi
6 import sys
7 import os
8 import re
9 import mimetypes
10 import posixpath
11 import warnings
12
13 from pip._vendor.six.moves.urllib import parse as urllib_parse
14 from pip._vendor.six.moves.urllib import request as urllib_request
15
16 from pip.compat import ipaddress
17 from pip.utils import Inf, cached_property, normalize_name, splitext
18 from pip.utils.deprecation import RemovedInPip7Warning, RemovedInPip8Warning
19 from pip.utils.logging import indent_log
20 from pip.exceptions import (
21 DistributionNotFound, BestVersionAlreadyInstalled, InvalidWheelFilename,
22 UnsupportedWheel,
23 )
24 from pip.download import url_to_path, path_to_url
25 from pip.models import PyPI
26 from pip.wheel import Wheel, wheel_ext
27 from pip.pep425tags import supported_tags, supported_tags_noarch, get_platform
28 from pip.req.req_requirement import InstallationCandidate
29 from pip._vendor import html5lib, requests, pkg_resources, six
30 from pip._vendor.packaging.version import parse as parse_version
31 from pip._vendor.requests.exceptions import SSLError
32
33
34 __all__ = ['PackageFinder']
35
36
37 # Taken from Chrome's list of secure origins (See: http://bit.ly/1qrySKC)
38 SECURE_ORIGINS = [
39 # protocol, hostname, port
40 ("https", "*", "*"),
41 ("*", "localhost", "*"),
42 ("*", "127.0.0.0/8", "*"),
43 ("*", "::1/128", "*"),
44 ("file", "*", None),
45 ]
46
47
48 logger = logging.getLogger(__name__)
49
50
51 class PackageFinder(object):
52 """This finds packages.
53
54 This is meant to match easy_install's technique for looking for
55 packages, by reading pages and looking for appropriate links
56 """
57
58 def __init__(self, find_links, index_urls,
59 use_wheel=True, allow_external=(), allow_unverified=(),
60 allow_all_external=False, allow_all_prereleases=False,
61 trusted_hosts=None, process_dependency_links=False,
62 session=None):
63 if session is None:
64 raise TypeError(
65 "PackageFinder() missing 1 required keyword argument: "
66 "'session'"
67 )
68
69 self.find_links = find_links
70 self.index_urls = index_urls
71 self.dependency_links = []
72
73 # These are boring links that have already been logged somehow:
74 self.logged_links = set()
75
76 self.use_wheel = use_wheel
77
78 # Do we allow (safe and verifiable) externally hosted files?
79 self.allow_external = set(normalize_name(n) for n in allow_external)
80
81 # Which names are allowed to install insecure and unverifiable files?
82 self.allow_unverified = set(
83 normalize_name(n) for n in allow_unverified
84 )
85
86 # Anything that is allowed unverified is also allowed external
87 self.allow_external |= self.allow_unverified
88
89 # Do we allow all (safe and verifiable) externally hosted files?
90 self.allow_all_external = allow_all_external
91
92 # Domains that we won't emit warnings for when not using HTTPS
93 self.secure_origins = [
94 ("*", host, "*")
95 for host in (trusted_hosts if trusted_hosts else [])
96 ]
97
98 # Stores if we ignored any external links so that we can instruct
99 # end users how to install them if no distributions are available
100 self.need_warn_external = False
101
102 # Stores if we ignored any unsafe links so that we can instruct
103 # end users how to install them if no distributions are available
104 self.need_warn_unverified = False
105
106 # Do we want to allow _all_ pre-releases?
107 self.allow_all_prereleases = allow_all_prereleases
108
109 # Do we process dependency links?
110 self.process_dependency_links = process_dependency_links
111
112 # The Session we'll use to make requests
113 self.session = session
114
115 def add_dependency_links(self, links):
116 # # FIXME: this shouldn't be global list this, it should only
117 # # apply to requirements of the package that specifies the
118 # # dependency_links value
119 # # FIXME: also, we should track comes_from (i.e., use Link)
120 if self.process_dependency_links:
121 warnings.warn(
122 "Dependency Links processing has been deprecated and will be "
123 "removed in a future release.",
124 RemovedInPip7Warning,
125 )
126 self.dependency_links.extend(links)
127
128 def _sort_locations(self, locations):
129 """
130 Sort locations into "files" (archives) and "urls", and return
131 a pair of lists (files,urls)
132 """
133 files = []
134 urls = []
135
136 # puts the url for the given file path into the appropriate list
137 def sort_path(path):
138 url = path_to_url(path)
139 if mimetypes.guess_type(url, strict=False)[0] == 'text/html':
140 urls.append(url)
141 else:
142 files.append(url)
143
144 for url in locations:
145
146 is_local_path = os.path.exists(url)
147 is_file_url = url.startswith('file:')
148 is_find_link = url in self.find_links
149
150 if is_local_path or is_file_url:
151 if is_local_path:
152 path = url
153 else:
154 path = url_to_path(url)
155 if is_find_link and os.path.isdir(path):
156 path = os.path.realpath(path)
157 for item in os.listdir(path):
158 sort_path(os.path.join(path, item))
159 elif is_file_url and os.path.isdir(path):
160 urls.append(url)
161 elif os.path.isfile(path):
162 sort_path(path)
163 else:
164 urls.append(url)
165
166 return files, urls
167
168 def _candidate_sort_key(self, candidate):
169 """
170 Function used to generate link sort key for link tuples.
171 The greater the return value, the more preferred it is.
172 If not finding wheels, then sorted by version only.
173 If finding wheels, then the sort order is by version, then:
174 1. existing installs
175 2. wheels ordered via Wheel.support_index_min()
176 3. source archives
177 Note: it was considered to embed this logic into the Link
178 comparison operators, but then different sdist links
179 with the same version, would have to be considered equal
180 """
181 if self.use_wheel:
182 support_num = len(supported_tags)
183 if candidate.location == INSTALLED_VERSION:
184 pri = 1
185 elif candidate.location.ext == wheel_ext:
186 # can raise InvalidWheelFilename
187 wheel = Wheel(candidate.location.filename)
188 if not wheel.supported():
189 raise UnsupportedWheel(
190 "%s is not a supported wheel for this platform. It "
191 "can't be sorted." % wheel.filename
192 )
193 pri = -(wheel.support_index_min())
194 else: # sdist
195 pri = -(support_num)
196 return (candidate.version, pri)
197 else:
198 return candidate.version
199
200 def _sort_versions(self, applicable_versions):
201 """
202 Bring the latest version (and wheels) to the front, but maintain the
203 existing ordering as secondary. See the docstring for `_link_sort_key`
204 for details. This function is isolated for easier unit testing.
205 """
206 return sorted(
207 applicable_versions,
208 key=self._candidate_sort_key,
209 reverse=True
210 )
211
212 def _validate_secure_origin(self, logger, location):
213 # Determine if this url used a secure transport mechanism
214 parsed = urllib_parse.urlparse(str(location))
215 origin = (parsed.scheme, parsed.hostname, parsed.port)
216
217 # Determine if our origin is a secure origin by looking through our
218 # hardcoded list of secure origins, as well as any additional ones
219 # configured on this PackageFinder instance.
220 for secure_origin in (SECURE_ORIGINS + self.secure_origins):
221 # Check to see if the protocol matches
222 if origin[0] != secure_origin[0] and secure_origin[0] != "*":
223 continue
224
225 try:
226 # We need to do this decode dance to ensure that we have a
227 # unicode object, even on Python 2.x.
228 addr = ipaddress.ip_address(
229 origin[1]
230 if (
231 isinstance(origin[1], six.text_type)
232 or origin[1] is None
233 )
234 else origin[1].decode("utf8")
235 )
236 network = ipaddress.ip_network(
237 secure_origin[1]
238 if isinstance(secure_origin[1], six.text_type)
239 else secure_origin[1].decode("utf8")
240 )
241 except ValueError:
242 # We don't have both a valid address or a valid network, so
243 # we'll check this origin against hostnames.
244 if origin[1] != secure_origin[1] and secure_origin[1] != "*":
245 continue
246 else:
247 # We have a valid address and network, so see if the address
248 # is contained within the network.
249 if addr not in network:
250 continue
251
252 # Check to see if the port patches
253 if (origin[2] != secure_origin[2]
254 and secure_origin[2] != "*"
255 and secure_origin[2] is not None):
256 continue
257
258 # If we've gotten here, then this origin matches the current
259 # secure origin and we should break out of the loop and continue
260 # on.
261 break
262 else:
263 # If the loop successfully completed without a break, that means
264 # that the origin we are testing is not a secure origin.
265 logger.warning(
266 "This repository located at %s is not a trusted host, if "
267 "this repository is available via HTTPS it is recommend to "
268 "use HTTPS instead, otherwise you may silence this warning "
269 "with '--trusted-host %s'.",
270 parsed.hostname,
271 parsed.hostname,
272 )
273
274 warnings.warn(
275 "Implicitly allowing locations which are not hosted at a "
276 "secure origin is deprecated and will require the use of "
277 "--trusted-host in the future.",
278 RemovedInPip7Warning,
279 )
280
281 def find_requirement(self, req, upgrade):
282
283 def mkurl_pypi_url(url):
284 loc = posixpath.join(url, url_name)
285 # For maximum compatibility with easy_install, ensure the path
286 # ends in a trailing slash. Although this isn't in the spec
287 # (and PyPI can handle it without the slash) some other index
288 # implementations might break if they relied on easy_install's
289 # behavior.
290 if not loc.endswith('/'):
291 loc = loc + '/'
292 return loc
293
294 url_name = req.url_name
295
296 # Only check main index if index URL is given:
297 main_index_url = None
298 if self.index_urls:
299 # Check that we have the url_name correctly spelled:
300 main_index_url = Link(
301 mkurl_pypi_url(self.index_urls[0]),
302 trusted=True,
303 )
304
305 page = self._get_page(main_index_url, req)
306 if page is None and PyPI.netloc not in str(main_index_url):
307 warnings.warn(
308 "Failed to find %r at %s. It is suggested to upgrade "
309 "your index to support normalized names as the name in "
310 "/simple/{name}." % (req.name, main_index_url),
311 RemovedInPip8Warning,
312 )
313
314 url_name = self._find_url_name(
315 Link(self.index_urls[0], trusted=True),
316 url_name, req
317 ) or req.url_name
318
319 if url_name is not None:
320 locations = [
321 mkurl_pypi_url(url)
322 for url in self.index_urls] + self.find_links
323 else:
324 locations = list(self.find_links)
325
326 file_locations, url_locations = self._sort_locations(locations)
327 _flocations, _ulocations = self._sort_locations(self.dependency_links)
328 file_locations.extend(_flocations)
329
330 # We trust every url that the user has given us whether it was given
331 # via --index-url or --find-links
332 locations = [Link(url, trusted=True) for url in url_locations]
333
334 # We explicitly do not trust links that came from dependency_links
335 locations.extend([Link(url) for url in _ulocations])
336
337 logger.debug('URLs to search for versions for %s:', req)
338 for location in locations:
339 logger.debug('* %s', location)
340 self._validate_secure_origin(logger, location)
341
342 found_versions = []
343 found_versions.extend(
344 self._package_versions(
345 # We trust every directly linked archive in find_links
346 [Link(url, '-f', trusted=True) for url in self.find_links],
347 req.name.lower()
348 )
349 )
350 page_versions = []
351 for page in self._get_pages(locations, req):
352 logger.debug('Analyzing links from page %s', page.url)
353 with indent_log():
354 page_versions.extend(
355 self._package_versions(page.links, req.name.lower())
356 )
357 dependency_versions = list(self._package_versions(
358 [Link(url) for url in self.dependency_links], req.name.lower()))
359 if dependency_versions:
360 logger.debug(
361 'dependency_links found: %s',
362 ', '.join([
363 link.url for p, link, version in dependency_versions
364 ])
365 )
366 file_versions = list(
367 self._package_versions(
368 [Link(url) for url in file_locations],
369 req.name.lower()
370 )
371 )
372 if (not found_versions
373 and not page_versions
374 and not dependency_versions
375 and not file_versions):
376 logger.critical(
377 'Could not find any downloads that satisfy the requirement %s',
378 req,
379 )
380
381 if self.need_warn_external:
382 logger.warning(
383 "Some externally hosted files were ignored as access to "
384 "them may be unreliable (use --allow-external %s to "
385 "allow).",
386 req.name,
387 )
388
389 if self.need_warn_unverified:
390 logger.warning(
391 "Some insecure and unverifiable files were ignored"
392 " (use --allow-unverified %s to allow).",
393 req.name,
394 )
395
396 raise DistributionNotFound(
397 'No distributions at all found for %s' % req
398 )
399 installed_version = []
400 if req.satisfied_by is not None:
401 installed_version = [
402 InstallationCandidate(
403 req.name,
404 req.satisfied_by.version,
405 INSTALLED_VERSION,
406 ),
407 ]
408 if file_versions:
409 file_versions.sort(reverse=True)
410 logger.debug(
411 'Local files found: %s',
412 ', '.join([
413 url_to_path(candidate.location.url)
414 for candidate in file_versions
415 ])
416 )
417
418 # This is an intentional priority ordering
419 all_versions = (
420 file_versions + found_versions + page_versions
421 + dependency_versions
422 )
423
424 # Filter out anything which doesn't match our specifier
425 _versions = set(
426 req.specifier.filter(
427 [x.version for x in all_versions],
428 prereleases=(
429 self.allow_all_prereleases
430 if self.allow_all_prereleases else None
431 ),
432 )
433 )
434 all_versions = [x for x in all_versions if x.version in _versions]
435
436 # Finally add our existing versions to the front of our versions.
437 applicable_versions = installed_version + all_versions
438
439 applicable_versions = self._sort_versions(applicable_versions)
440 existing_applicable = any(
441 i.location is INSTALLED_VERSION
442 for i in applicable_versions
443 )
444
445 if not upgrade and existing_applicable:
446 if applicable_versions[0].location is INSTALLED_VERSION:
447 logger.debug(
448 'Existing installed version (%s) is most up-to-date and '
449 'satisfies requirement',
450 req.satisfied_by.version,
451 )
452 else:
453 logger.debug(
454 'Existing installed version (%s) satisfies requirement '
455 '(most up-to-date version is %s)',
456 req.satisfied_by.version,
457 applicable_versions[0][2],
458 )
459 return None
460
461 if not applicable_versions:
462 logger.critical(
463 'Could not find a version that satisfies the requirement %s '
464 '(from versions: %s)',
465 req,
466 ', '.join(
467 sorted(
468 set(str(i.version) for i in all_versions),
469 key=parse_version,
470 )
471 )
472 )
473
474 if self.need_warn_external:
475 logger.warning(
476 "Some externally hosted files were ignored as access to "
477 "them may be unreliable (use --allow-external to allow)."
478 )
479
480 if self.need_warn_unverified:
481 logger.warning(
482 "Some insecure and unverifiable files were ignored"
483 " (use --allow-unverified %s to allow).",
484 req.name,
485 )
486
487 raise DistributionNotFound(
488 'No distributions matching the version for %s' % req
489 )
490
491 if applicable_versions[0].location is INSTALLED_VERSION:
492 # We have an existing version, and its the best version
493 logger.debug(
494 'Installed version (%s) is most up-to-date (past versions: ',
495 '%s)',
496 req.satisfied_by.version,
497 ', '.join(str(i.version) for i in applicable_versions[1:])
498 or "none",
499 )
500 raise BestVersionAlreadyInstalled
501
502 if len(applicable_versions) > 1:
503 logger.debug(
504 'Using version %s (newest of versions: %s)',
505 applicable_versions[0].version,
506 ', '.join(str(i.version) for i in applicable_versions)
507 )
508
509 selected_version = applicable_versions[0].location
510
511 if (selected_version.verifiable is not None
512 and not selected_version.verifiable):
513 logger.warning(
514 "%s is potentially insecure and unverifiable.", req.name,
515 )
516
517 if selected_version._deprecated_regex:
518 warnings.warn(
519 "%s discovered using a deprecated method of parsing, in the "
520 "future it will no longer be discovered." % req.name,
521 RemovedInPip7Warning,
522 )
523
524 return selected_version
525
526 def _find_url_name(self, index_url, url_name, req):
527 """
528 Finds the true URL name of a package, when the given name isn't quite
529 correct.
530 This is usually used to implement case-insensitivity.
531 """
532 if not index_url.url.endswith('/'):
533 # Vaguely part of the PyPI API... weird but true.
534 # FIXME: bad to modify this?
535 index_url.url += '/'
536 page = self._get_page(index_url, req)
537 if page is None:
538 logger.critical('Cannot fetch index base URL %s', index_url)
539 return
540 norm_name = normalize_name(req.url_name)
541 for link in page.links:
542 base = posixpath.basename(link.path.rstrip('/'))
543 if norm_name == normalize_name(base):
544 logger.debug(
545 'Real name of requirement %s is %s', url_name, base,
546 )
547 return base
548 return None
549
550 def _get_pages(self, locations, req):
551 """
552 Yields (page, page_url) from the given locations, skipping
553 locations that have errors, and adding download/homepage links
554 """
555 all_locations = list(locations)
556 seen = set()
557
558 while all_locations:
559 location = all_locations.pop(0)
560 if location in seen:
561 continue
562 seen.add(location)
563
564 page = self._get_page(location, req)
565 if page is None:
566 continue
567
568 yield page
569
570 for link in page.rel_links():
571 normalized = normalize_name(req.name).lower()
572
573 if (normalized not in self.allow_external
574 and not self.allow_all_external):
575 self.need_warn_external = True
576 logger.debug(
577 "Not searching %s for files because external "
578 "urls are disallowed.",
579 link,
580 )
581 continue
582
583 if (link.trusted is not None
584 and not link.trusted
585 and normalized not in self.allow_unverified):
586 logger.debug(
587 "Not searching %s for urls, it is an "
588 "untrusted link and cannot produce safe or "
589 "verifiable files.",
590 link,
591 )
592 self.need_warn_unverified = True
593 continue
594
595 all_locations.append(link)
596
597 _egg_fragment_re = re.compile(r'#egg=([^&]*)')
598 _egg_info_re = re.compile(r'([a-z0-9_.]+)-([a-z0-9_.!+-]+)', re.I)
599 _py_version_re = re.compile(r'-py([123]\.?[0-9]?)$')
600
601 def _sort_links(self, links):
602 """
603 Returns elements of links in order, non-egg links first, egg links
604 second, while eliminating duplicates
605 """
606 eggs, no_eggs = [], []
607 seen = set()
608 for link in links:
609 if link not in seen:
610 seen.add(link)
611 if link.egg_fragment:
612 eggs.append(link)
613 else:
614 no_eggs.append(link)
615 return no_eggs + eggs
616
617 def _package_versions(self, links, search_name):
618 for link in self._sort_links(links):
619 v = self._link_package_versions(link, search_name)
620 if v is not None:
621 yield v
622
623 def _known_extensions(self):
624 extensions = ('.tar.gz', '.tar.bz2', '.tar', '.tgz', '.zip')
625 if self.use_wheel:
626 return extensions + (wheel_ext,)
627 return extensions
628
629 def _link_package_versions(self, link, search_name):
630 """
631 Return an iterable of triples (pkg_resources_version_key,
632 link, python_version) that can be extracted from the given
633 link.
634
635 Meant to be overridden by subclasses, not called by clients.
636 """
637 platform = get_platform()
638
639 version = None
640 if link.egg_fragment:
641 egg_info = link.egg_fragment
642 else:
643 egg_info, ext = link.splitext()
644 if not ext:
645 if link not in self.logged_links:
646 logger.debug('Skipping link %s; not a file', link)
647 self.logged_links.add(link)
648 return
649 if egg_info.endswith('.tar'):
650 # Special double-extension case:
651 egg_info = egg_info[:-4]
652 ext = '.tar' + ext
653 if ext not in self._known_extensions():
654 if link not in self.logged_links:
655 logger.debug(
656 'Skipping link %s; unknown archive format: %s',
657 link,
658 ext,
659 )
660 self.logged_links.add(link)
661 return
662 if "macosx10" in link.path and ext == '.zip':
663 if link not in self.logged_links:
664 logger.debug('Skipping link %s; macosx10 one', link)
665 self.logged_links.add(link)
666 return
667 if ext == wheel_ext:
668 try:
669 wheel = Wheel(link.filename)
670 except InvalidWheelFilename:
671 logger.debug(
672 'Skipping %s because the wheel filename is invalid',
673 link
674 )
675 return
676 if (pkg_resources.safe_name(wheel.name).lower()
677 != pkg_resources.safe_name(search_name).lower()):
678 logger.debug(
679 'Skipping link %s; wrong project name (not %s)',
680 link,
681 search_name,
682 )
683 return
684 if not wheel.supported():
685 logger.debug(
686 'Skipping %s because it is not compatible with this '
687 'Python',
688 link,
689 )
690 return
691 # This is a dirty hack to prevent installing Binary Wheels from
692 # PyPI unless it is a Windows or Mac Binary Wheel. This is
693 # paired with a change to PyPI disabling uploads for the
694 # same. Once we have a mechanism for enabling support for
695 # binary wheels on linux that deals with the inherent problems
696 # of binary distribution this can be removed.
697 comes_from = getattr(link, "comes_from", None)
698 if (
699 (
700 not platform.startswith('win')
701 and not platform.startswith('macosx')
702 and not platform == 'cli'
703 )
704 and comes_from is not None
705 and urllib_parse.urlparse(
706 comes_from.url
707 ).netloc.endswith(PyPI.netloc)):
708 if not wheel.supported(tags=supported_tags_noarch):
709 logger.debug(
710 "Skipping %s because it is a pypi-hosted binary "
711 "Wheel on an unsupported platform",
712 link,
713 )
714 return
715 version = wheel.version
716
717 if not version:
718 version = self._egg_info_matches(egg_info, search_name, link)
719 if version is None:
720 logger.debug(
721 'Skipping link %s; wrong project name (not %s)',
722 link,
723 search_name,
724 )
725 return
726
727 if (link.internal is not None
728 and not link.internal
729 and not normalize_name(search_name).lower()
730 in self.allow_external
731 and not self.allow_all_external):
732 # We have a link that we are sure is external, so we should skip
733 # it unless we are allowing externals
734 logger.debug("Skipping %s because it is externally hosted.", link)
735 self.need_warn_external = True
736 return
737
738 if (link.verifiable is not None
739 and not link.verifiable
740 and not (normalize_name(search_name).lower()
741 in self.allow_unverified)):
742 # We have a link that we are sure we cannot verify its integrity,
743 # so we should skip it unless we are allowing unsafe installs
744 # for this requirement.
745 logger.debug(
746 "Skipping %s because it is an insecure and unverifiable file.",
747 link,
748 )
749 self.need_warn_unverified = True
750 return
751
752 match = self._py_version_re.search(version)
753 if match:
754 version = version[:match.start()]
755 py_version = match.group(1)
756 if py_version != sys.version[:3]:
757 logger.debug(
758 'Skipping %s because Python version is incorrect', link
759 )
760 return
761 logger.debug('Found link %s, version: %s', link, version)
762
763 return InstallationCandidate(search_name, version, link)
764
765 def _egg_info_matches(self, egg_info, search_name, link):
766 match = self._egg_info_re.search(egg_info)
767 if not match:
768 logger.debug('Could not parse version from link: %s', link)
769 return None
770 name = match.group(0).lower()
771 # To match the "safe" name that pkg_resources creates:
772 name = name.replace('_', '-')
773 # project name and version must be separated by a dash
774 look_for = search_name.lower() + "-"
775 if name.startswith(look_for):
776 return match.group(0)[len(look_for):]
777 else:
778 return None
779
780 def _get_page(self, link, req):
781 return HTMLPage.get_page(link, req, session=self.session)
782
783
784 class HTMLPage(object):
785 """Represents one page, along with its URL"""
786
787 # FIXME: these regexes are horrible hacks:
788 _homepage_re = re.compile(b'<th>\\s*home\\s*page', re.I)
789 _download_re = re.compile(b'<th>\\s*download\\s+url', re.I)
790 _href_re = re.compile(
791 b'href=(?:"([^"]*)"|\'([^\']*)\'|([^>\\s\\n]*))',
792 re.I | re.S
793 )
794
795 def __init__(self, content, url, headers=None, trusted=None):
796 # Determine if we have any encoding information in our headers
797 encoding = None
798 if headers and "Content-Type" in headers:
799 content_type, params = cgi.parse_header(headers["Content-Type"])
800
801 if "charset" in params:
802 encoding = params['charset']
803
804 self.content = content
805 self.parsed = html5lib.parse(
806 self.content,
807 encoding=encoding,
808 namespaceHTMLElements=False,
809 )
810 self.url = url
811 self.headers = headers
812 self.trusted = trusted
813
814 def __str__(self):
815 return self.url
816
817 @classmethod
818 def get_page(cls, link, req, skip_archives=True, session=None):
819 if session is None:
820 raise TypeError(
821 "get_page() missing 1 required keyword argument: 'session'"
822 )
823
824 url = link.url
825 url = url.split('#', 1)[0]
826
827 # Check for VCS schemes that do not support lookup as web pages.
828 from pip.vcs import VcsSupport
829 for scheme in VcsSupport.schemes:
830 if url.lower().startswith(scheme) and url[len(scheme)] in '+:':
831 logger.debug('Cannot look at %s URL %s', scheme, link)
832 return None
833
834 try:
835 if skip_archives:
836 filename = link.filename
837 for bad_ext in ['.tar', '.tar.gz', '.tar.bz2', '.tgz', '.zip']:
838 if filename.endswith(bad_ext):
839 content_type = cls._get_content_type(
840 url, session=session,
841 )
842 if content_type.lower().startswith('text/html'):
843 break
844 else:
845 logger.debug(
846 'Skipping page %s because of Content-Type: %s',
847 link,
848 content_type,
849 )
850 return
851
852 logger.debug('Getting page %s', url)
853
854 # Tack index.html onto file:// URLs that point to directories
855 (scheme, netloc, path, params, query, fragment) = \
856 urllib_parse.urlparse(url)
857 if (scheme == 'file'
858 and os.path.isdir(urllib_request.url2pathname(path))):
859 # add trailing slash if not present so urljoin doesn't trim
860 # final segment
861 if not url.endswith('/'):
862 url += '/'
863 url = urllib_parse.urljoin(url, 'index.html')
864 logger.debug(' file: URL is directory, getting %s', url)
865
866 resp = session.get(
867 url,
868 headers={
869 "Accept": "text/html",
870 "Cache-Control": "max-age=600",
871 },
872 )
873 resp.raise_for_status()
874
875 # The check for archives above only works if the url ends with
876 # something that looks like an archive. However that is not a
877 # requirement of an url. Unless we issue a HEAD request on every
878 # url we cannot know ahead of time for sure if something is HTML
879 # or not. However we can check after we've downloaded it.
880 content_type = resp.headers.get('Content-Type', 'unknown')
881 if not content_type.lower().startswith("text/html"):
882 logger.debug(
883 'Skipping page %s because of Content-Type: %s',
884 link,
885 content_type,
886 )
887 return
888
889 inst = cls(
890 resp.content, resp.url, resp.headers,
891 trusted=link.trusted,
892 )
893 except requests.HTTPError as exc:
894 level = 2 if exc.response.status_code == 404 else 1
895 cls._handle_fail(req, link, exc, url, level=level)
896 except requests.ConnectionError as exc:
897 cls._handle_fail(
898 req, link, "connection error: %s" % exc, url,
899 )
900 except requests.Timeout:
901 cls._handle_fail(req, link, "timed out", url)
902 except SSLError as exc:
903 reason = ("There was a problem confirming the ssl certificate: "
904 "%s" % exc)
905 cls._handle_fail(
906 req, link, reason, url,
907 level=2,
908 meth=logger.info,
909 )
910 else:
911 return inst
912
913 @staticmethod
914 def _handle_fail(req, link, reason, url, level=1, meth=None):
915 if meth is None:
916 meth = logger.debug
917
918 meth("Could not fetch URL %s: %s", link, reason)
919 meth("Will skip URL %s when looking for download links for %s" %
920 (link.url, req))
921
922 @staticmethod
923 def _get_content_type(url, session):
924 """Get the Content-Type of the given url, using a HEAD request"""
925 scheme, netloc, path, query, fragment = urllib_parse.urlsplit(url)
926 if scheme not in ('http', 'https'):
927 # FIXME: some warning or something?
928 # assertion error?
929 return ''
930
931 resp = session.head(url, allow_redirects=True)
932 resp.raise_for_status()
933
934 return resp.headers.get("Content-Type", "")
935
936 @cached_property
937 def api_version(self):
938 metas = [
939 x for x in self.parsed.findall(".//meta")
940 if x.get("name", "").lower() == "api-version"
941 ]
942 if metas:
943 try:
944 return int(metas[0].get("value", None))
945 except (TypeError, ValueError):
946 pass
947
948 return None
949
950 @cached_property
951 def base_url(self):
952 bases = [
953 x for x in self.parsed.findall(".//base")
954 if x.get("href") is not None
955 ]
956 if bases and bases[0].get("href"):
957 return bases[0].get("href")
958 else:
959 return self.url
960
961 @property
962 def links(self):
963 """Yields all links in the page"""
964 for anchor in self.parsed.findall(".//a"):
965 if anchor.get("href"):
966 href = anchor.get("href")
967 url = self.clean_link(
968 urllib_parse.urljoin(self.base_url, href)
969 )
970
971 # Determine if this link is internal. If that distinction
972 # doesn't make sense in this context, then we don't make
973 # any distinction.
974 internal = None
975 if self.api_version and self.api_version >= 2:
976 # Only api_versions >= 2 have a distinction between
977 # external and internal links
978 internal = bool(
979 anchor.get("rel")
980 and "internal" in anchor.get("rel").split()
981 )
982
983 yield Link(url, self, internal=internal)
984
985 def rel_links(self):
986 for url in self.explicit_rel_links():
987 yield url
988 for url in self.scraped_rel_links():
989 yield url
990
991 def explicit_rel_links(self, rels=('homepage', 'download')):
992 """Yields all links with the given relations"""
993 rels = set(rels)
994
995 for anchor in self.parsed.findall(".//a"):
996 if anchor.get("rel") and anchor.get("href"):
997 found_rels = set(anchor.get("rel").split())
998 # Determine the intersection between what rels were found and
999 # what rels were being looked for
1000 if found_rels & rels:
1001 href = anchor.get("href")
1002 url = self.clean_link(
1003 urllib_parse.urljoin(self.base_url, href)
1004 )
1005 yield Link(url, self, trusted=False)
1006
1007 def scraped_rel_links(self):
1008 # Can we get rid of this horrible horrible method?
1009 for regex in (self._homepage_re, self._download_re):
1010 match = regex.search(self.content)
1011 if not match:
1012 continue
1013 href_match = self._href_re.search(self.content, pos=match.end())
1014 if not href_match:
1015 continue
1016 url = (
1017 href_match.group(1)
1018 or href_match.group(2)
1019 or href_match.group(3)
1020 )
1021 if not url:
1022 continue
1023 try:
1024 url = url.decode("ascii")
1025 except UnicodeDecodeError:
1026 continue
1027 url = self.clean_link(urllib_parse.urljoin(self.base_url, url))
1028 yield Link(url, self, trusted=False, _deprecated_regex=True)
1029
1030 _clean_re = re.compile(r'[^a-z0-9$&+,/:;=?@.#%_\\|-]', re.I)
1031
1032 def clean_link(self, url):
1033 """Makes sure a link is fully encoded. That is, if a ' ' shows up in
1034 the link, it will be rewritten to %20 (while not over-quoting
1035 % or other characters)."""
1036 return self._clean_re.sub(
1037 lambda match: '%%%2x' % ord(match.group(0)), url)
1038
1039
1040 class Link(object):
1041
1042 def __init__(self, url, comes_from=None, internal=None, trusted=None,
1043 _deprecated_regex=False):
1044
1045 # url can be a UNC windows share
1046 if url != Inf and url.startswith('\\\\'):
1047 url = path_to_url(url)
1048
1049 self.url = url
1050 self.comes_from = comes_from
1051 self.internal = internal
1052 self.trusted = trusted
1053 self._deprecated_regex = _deprecated_regex
1054
1055 def __str__(self):
1056 if self.comes_from:
1057 return '%s (from %s)' % (self.url, self.comes_from)
1058 else:
1059 return str(self.url)
1060
1061 def __repr__(self):
1062 return '<Link %s>' % self
1063
1064 def __eq__(self, other):
1065 if not isinstance(other, Link):
1066 return NotImplemented
1067 return self.url == other.url
1068
1069 def __ne__(self, other):
1070 if not isinstance(other, Link):
1071 return NotImplemented
1072 return self.url != other.url
1073
1074 def __lt__(self, other):
1075 if not isinstance(other, Link):
1076 return NotImplemented
1077 return self.url < other.url
1078
1079 def __le__(self, other):
1080 if not isinstance(other, Link):
1081 return NotImplemented
1082 return self.url <= other.url
1083
1084 def __gt__(self, other):
1085 if not isinstance(other, Link):
1086 return NotImplemented
1087 return self.url > other.url
1088
1089 def __ge__(self, other):
1090 if not isinstance(other, Link):
1091 return NotImplemented
1092 return self.url >= other.url
1093
1094 def __hash__(self):
1095 return hash(self.url)
1096
1097 @property
1098 def filename(self):
1099 _, netloc, path, _, _ = urllib_parse.urlsplit(self.url)
1100 name = posixpath.basename(path.rstrip('/')) or netloc
1101 assert name, ('URL %r produced no filename' % self.url)
1102 return name
1103
1104 @property
1105 def scheme(self):
1106 return urllib_parse.urlsplit(self.url)[0]
1107
1108 @property
1109 def netloc(self):
1110 return urllib_parse.urlsplit(self.url)[1]
1111
1112 @property
1113 def path(self):
1114 return urllib_parse.urlsplit(self.url)[2]
1115
1116 def splitext(self):
1117 return splitext(posixpath.basename(self.path.rstrip('/')))
1118
1119 @property
1120 def ext(self):
1121 return self.splitext()[1]
1122
1123 @property
1124 def url_without_fragment(self):
1125 scheme, netloc, path, query, fragment = urllib_parse.urlsplit(self.url)
1126 return urllib_parse.urlunsplit((scheme, netloc, path, query, None))
1127
1128 _egg_fragment_re = re.compile(r'#egg=([^&]*)')
1129
1130 @property
1131 def egg_fragment(self):
1132 match = self._egg_fragment_re.search(self.url)
1133 if not match:
1134 return None
1135 return match.group(1)
1136
1137 _hash_re = re.compile(
1138 r'(sha1|sha224|sha384|sha256|sha512|md5)=([a-f0-9]+)'
1139 )
1140
1141 @property
1142 def hash(self):
1143 match = self._hash_re.search(self.url)
1144 if match:
1145 return match.group(2)
1146 return None
1147
1148 @property
1149 def hash_name(self):
1150 match = self._hash_re.search(self.url)
1151 if match:
1152 return match.group(1)
1153 return None
1154
1155 @property
1156 def show_url(self):
1157 return posixpath.basename(self.url.split('#', 1)[0].split('?', 1)[0])
1158
1159 @property
1160 def verifiable(self):
1161 """
1162 Returns True if this link can be verified after download, False if it
1163 cannot, and None if we cannot determine.
1164 """
1165 trusted = self.trusted or getattr(self.comes_from, "trusted", None)
1166 if trusted is not None and trusted:
1167 # This link came from a trusted source. It *may* be verifiable but
1168 # first we need to see if this page is operating under the new
1169 # API version.
1170 try:
1171 api_version = getattr(self.comes_from, "api_version", None)
1172 api_version = int(api_version)
1173 except (ValueError, TypeError):
1174 api_version = None
1175
1176 if api_version is None or api_version <= 1:
1177 # This link is either trusted, or it came from a trusted,
1178 # however it is not operating under the API version 2 so
1179 # we can't make any claims about if it's safe or not
1180 return
1181
1182 if self.hash:
1183 # This link came from a trusted source and it has a hash, so we
1184 # can consider it safe.
1185 return True
1186 else:
1187 # This link came from a trusted source, using the new API
1188 # version, and it does not have a hash. It is NOT verifiable
1189 return False
1190 elif trusted is not None:
1191 # This link came from an untrusted source and we cannot trust it
1192 return False
1193
1194
1195 # An object to represent the "link" for the installed version of a requirement.
1196 # Using Inf as the url makes it sort higher.
1197 INSTALLED_VERSION = Link(Inf)
1198
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch in the `git diff` format, fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pip/index.py b/pip/index.py
--- a/pip/index.py
+++ b/pip/index.py
@@ -1098,6 +1098,7 @@
def filename(self):
_, netloc, path, _, _ = urllib_parse.urlsplit(self.url)
name = posixpath.basename(path.rstrip('/')) or netloc
+ name = urllib_parse.unquote(name)
assert name, ('URL %r produced no filename' % self.url)
return name
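The reported crash appears to come from the local wheel path being converted to a `file:` URL (which percent-encodes the `+`), after which `Link.filename` returned the still-quoted basename, so the requirement string built from it contained `%2B`. The one-line fix decodes at the source, so downstream consumers of the filename see a plain `+` again. A standalone sketch of the patched property, using Python 3's `urllib.parse` in place of pip's vendored imports and a made-up URL:
```python
import posixpath
from urllib.parse import urlsplit, unquote

class Link:
    """Minimal stand-in for pip.index.Link, showing only the patched filename property."""

    def __init__(self, url):
        self.url = url

    @property
    def filename(self):
        _, netloc, path, _, _ = urlsplit(self.url)
        name = posixpath.basename(path.rstrip('/')) or netloc
        name = unquote(name)  # the added line: turns %2B back into '+'
        assert name, 'URL %r produced no filename' % self.url
        return name

link = Link('https://example.invalid/apache_libcloud-0.16.0%2Bclusterhq.0-py2.py3-none-any.whl')
print(link.filename)  # apache_libcloud-0.16.0+clusterhq.0-py2.py3-none-any.whl
```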
| {"golden_diff": "diff --git a/pip/index.py b/pip/index.py\n--- a/pip/index.py\n+++ b/pip/index.py\n@@ -1098,6 +1098,7 @@\n def filename(self):\n _, netloc, path, _, _ = urllib_parse.urlsplit(self.url)\n name = posixpath.basename(path.rstrip('/')) or netloc\n+ name = urllib_parse.unquote(name)\n assert name, ('URL %r produced no filename' % self.url)\n return name\n", "issue": "pip fails to install packages with local versions.\n`pip install apache_libcloud-0.16.0+clusterhq.0-py2.py3-none-any.whl` gives\n\n``` python-traceback\nException:\nTraceback (most recent call last):\n File \"/Users/cougar/dev/tmp-83d539b3e47fa7c3/lib/python2.7/site-packages/pip/basecommand.py\", line 210, in main\n status = self.run(options, args)\n File \"/Users/cougar/dev/tmp-83d539b3e47fa7c3/lib/python2.7/site-packages/pip/commands/install.py\", line 304, in run\n name, None, isolated=options.isolated_mode,\n File \"/Users/cougar/dev/tmp-83d539b3e47fa7c3/lib/python2.7/site-packages/pip/req/req_install.py\", line 179, in from_line\n isolated=isolated)\n File \"/Users/cougar/dev/tmp-83d539b3e47fa7c3/lib/python2.7/site-packages/pip/req/req_install.py\", line 53, in __init__\n req = pkg_resources.Requirement.parse(req)\n File \"/Users/cougar/dev/tmp-83d539b3e47fa7c3/lib/python2.7/site-packages/pip/_vendor/pkg_resources.py\", line 2842, in parse\n reqs = list(parse_requirements(s))\n File \"/Users/cougar/dev/tmp-83d539b3e47fa7c3/lib/python2.7/site-packages/pip/_vendor/pkg_resources.py\", line 2789, in parse_requirements\n \"version spec\")\n File \"/Users/cougar/dev/tmp-83d539b3e47fa7c3/lib/python2.7/site-packages/pip/_vendor/pkg_resources.py\", line 2765, in scan_list\n raise ValueError(msg, line, \"at\", line[p:])\nValueError: (\"Expected ',' or end-of-list in\", 'apache-libcloud==0.16.0%2Bclusterhq.0', 'at', '%2Bclusterhq.0')\n```\n\n", "before_files": [{"content": "\"\"\"Routines related to PyPI, indexes\"\"\"\nfrom __future__ import absolute_import\n\nimport logging\nimport cgi\nimport sys\nimport os\nimport re\nimport mimetypes\nimport posixpath\nimport warnings\n\nfrom pip._vendor.six.moves.urllib import parse as urllib_parse\nfrom pip._vendor.six.moves.urllib import request as urllib_request\n\nfrom pip.compat import ipaddress\nfrom pip.utils import Inf, cached_property, normalize_name, splitext\nfrom pip.utils.deprecation import RemovedInPip7Warning, RemovedInPip8Warning\nfrom pip.utils.logging import indent_log\nfrom pip.exceptions import (\n DistributionNotFound, BestVersionAlreadyInstalled, InvalidWheelFilename,\n UnsupportedWheel,\n)\nfrom pip.download import url_to_path, path_to_url\nfrom pip.models import PyPI\nfrom pip.wheel import Wheel, wheel_ext\nfrom pip.pep425tags import supported_tags, supported_tags_noarch, get_platform\nfrom pip.req.req_requirement import InstallationCandidate\nfrom pip._vendor import html5lib, requests, pkg_resources, six\nfrom pip._vendor.packaging.version import parse as parse_version\nfrom pip._vendor.requests.exceptions import SSLError\n\n\n__all__ = ['PackageFinder']\n\n\n# Taken from Chrome's list of secure origins (See: http://bit.ly/1qrySKC)\nSECURE_ORIGINS = [\n # protocol, hostname, port\n (\"https\", \"*\", \"*\"),\n (\"*\", \"localhost\", \"*\"),\n (\"*\", \"127.0.0.0/8\", \"*\"),\n (\"*\", \"::1/128\", \"*\"),\n (\"file\", \"*\", None),\n]\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass PackageFinder(object):\n \"\"\"This finds packages.\n\n This is meant to match easy_install's technique for looking for\n packages, by reading pages and looking for 
appropriate links\n \"\"\"\n\n def __init__(self, find_links, index_urls,\n use_wheel=True, allow_external=(), allow_unverified=(),\n allow_all_external=False, allow_all_prereleases=False,\n trusted_hosts=None, process_dependency_links=False,\n session=None):\n if session is None:\n raise TypeError(\n \"PackageFinder() missing 1 required keyword argument: \"\n \"'session'\"\n )\n\n self.find_links = find_links\n self.index_urls = index_urls\n self.dependency_links = []\n\n # These are boring links that have already been logged somehow:\n self.logged_links = set()\n\n self.use_wheel = use_wheel\n\n # Do we allow (safe and verifiable) externally hosted files?\n self.allow_external = set(normalize_name(n) for n in allow_external)\n\n # Which names are allowed to install insecure and unverifiable files?\n self.allow_unverified = set(\n normalize_name(n) for n in allow_unverified\n )\n\n # Anything that is allowed unverified is also allowed external\n self.allow_external |= self.allow_unverified\n\n # Do we allow all (safe and verifiable) externally hosted files?\n self.allow_all_external = allow_all_external\n\n # Domains that we won't emit warnings for when not using HTTPS\n self.secure_origins = [\n (\"*\", host, \"*\")\n for host in (trusted_hosts if trusted_hosts else [])\n ]\n\n # Stores if we ignored any external links so that we can instruct\n # end users how to install them if no distributions are available\n self.need_warn_external = False\n\n # Stores if we ignored any unsafe links so that we can instruct\n # end users how to install them if no distributions are available\n self.need_warn_unverified = False\n\n # Do we want to allow _all_ pre-releases?\n self.allow_all_prereleases = allow_all_prereleases\n\n # Do we process dependency links?\n self.process_dependency_links = process_dependency_links\n\n # The Session we'll use to make requests\n self.session = session\n\n def add_dependency_links(self, links):\n # # FIXME: this shouldn't be global list this, it should only\n # # apply to requirements of the package that specifies the\n # # dependency_links value\n # # FIXME: also, we should track comes_from (i.e., use Link)\n if self.process_dependency_links:\n warnings.warn(\n \"Dependency Links processing has been deprecated and will be \"\n \"removed in a future release.\",\n RemovedInPip7Warning,\n )\n self.dependency_links.extend(links)\n\n def _sort_locations(self, locations):\n \"\"\"\n Sort locations into \"files\" (archives) and \"urls\", and return\n a pair of lists (files,urls)\n \"\"\"\n files = []\n urls = []\n\n # puts the url for the given file path into the appropriate list\n def sort_path(path):\n url = path_to_url(path)\n if mimetypes.guess_type(url, strict=False)[0] == 'text/html':\n urls.append(url)\n else:\n files.append(url)\n\n for url in locations:\n\n is_local_path = os.path.exists(url)\n is_file_url = url.startswith('file:')\n is_find_link = url in self.find_links\n\n if is_local_path or is_file_url:\n if is_local_path:\n path = url\n else:\n path = url_to_path(url)\n if is_find_link and os.path.isdir(path):\n path = os.path.realpath(path)\n for item in os.listdir(path):\n sort_path(os.path.join(path, item))\n elif is_file_url and os.path.isdir(path):\n urls.append(url)\n elif os.path.isfile(path):\n sort_path(path)\n else:\n urls.append(url)\n\n return files, urls\n\n def _candidate_sort_key(self, candidate):\n \"\"\"\n Function used to generate link sort key for link tuples.\n The greater the return value, the more preferred it is.\n If not finding wheels, 
then sorted by version only.\n If finding wheels, then the sort order is by version, then:\n 1. existing installs\n 2. wheels ordered via Wheel.support_index_min()\n 3. source archives\n Note: it was considered to embed this logic into the Link\n comparison operators, but then different sdist links\n with the same version, would have to be considered equal\n \"\"\"\n if self.use_wheel:\n support_num = len(supported_tags)\n if candidate.location == INSTALLED_VERSION:\n pri = 1\n elif candidate.location.ext == wheel_ext:\n # can raise InvalidWheelFilename\n wheel = Wheel(candidate.location.filename)\n if not wheel.supported():\n raise UnsupportedWheel(\n \"%s is not a supported wheel for this platform. It \"\n \"can't be sorted.\" % wheel.filename\n )\n pri = -(wheel.support_index_min())\n else: # sdist\n pri = -(support_num)\n return (candidate.version, pri)\n else:\n return candidate.version\n\n def _sort_versions(self, applicable_versions):\n \"\"\"\n Bring the latest version (and wheels) to the front, but maintain the\n existing ordering as secondary. See the docstring for `_link_sort_key`\n for details. This function is isolated for easier unit testing.\n \"\"\"\n return sorted(\n applicable_versions,\n key=self._candidate_sort_key,\n reverse=True\n )\n\n def _validate_secure_origin(self, logger, location):\n # Determine if this url used a secure transport mechanism\n parsed = urllib_parse.urlparse(str(location))\n origin = (parsed.scheme, parsed.hostname, parsed.port)\n\n # Determine if our origin is a secure origin by looking through our\n # hardcoded list of secure origins, as well as any additional ones\n # configured on this PackageFinder instance.\n for secure_origin in (SECURE_ORIGINS + self.secure_origins):\n # Check to see if the protocol matches\n if origin[0] != secure_origin[0] and secure_origin[0] != \"*\":\n continue\n\n try:\n # We need to do this decode dance to ensure that we have a\n # unicode object, even on Python 2.x.\n addr = ipaddress.ip_address(\n origin[1]\n if (\n isinstance(origin[1], six.text_type)\n or origin[1] is None\n )\n else origin[1].decode(\"utf8\")\n )\n network = ipaddress.ip_network(\n secure_origin[1]\n if isinstance(secure_origin[1], six.text_type)\n else secure_origin[1].decode(\"utf8\")\n )\n except ValueError:\n # We don't have both a valid address or a valid network, so\n # we'll check this origin against hostnames.\n if origin[1] != secure_origin[1] and secure_origin[1] != \"*\":\n continue\n else:\n # We have a valid address and network, so see if the address\n # is contained within the network.\n if addr not in network:\n continue\n\n # Check to see if the port patches\n if (origin[2] != secure_origin[2]\n and secure_origin[2] != \"*\"\n and secure_origin[2] is not None):\n continue\n\n # If we've gotten here, then this origin matches the current\n # secure origin and we should break out of the loop and continue\n # on.\n break\n else:\n # If the loop successfully completed without a break, that means\n # that the origin we are testing is not a secure origin.\n logger.warning(\n \"This repository located at %s is not a trusted host, if \"\n \"this repository is available via HTTPS it is recommend to \"\n \"use HTTPS instead, otherwise you may silence this warning \"\n \"with '--trusted-host %s'.\",\n parsed.hostname,\n parsed.hostname,\n )\n\n warnings.warn(\n \"Implicitly allowing locations which are not hosted at a \"\n \"secure origin is deprecated and will require the use of \"\n \"--trusted-host in the future.\",\n 
RemovedInPip7Warning,\n )\n\n def find_requirement(self, req, upgrade):\n\n def mkurl_pypi_url(url):\n loc = posixpath.join(url, url_name)\n # For maximum compatibility with easy_install, ensure the path\n # ends in a trailing slash. Although this isn't in the spec\n # (and PyPI can handle it without the slash) some other index\n # implementations might break if they relied on easy_install's\n # behavior.\n if not loc.endswith('/'):\n loc = loc + '/'\n return loc\n\n url_name = req.url_name\n\n # Only check main index if index URL is given:\n main_index_url = None\n if self.index_urls:\n # Check that we have the url_name correctly spelled:\n main_index_url = Link(\n mkurl_pypi_url(self.index_urls[0]),\n trusted=True,\n )\n\n page = self._get_page(main_index_url, req)\n if page is None and PyPI.netloc not in str(main_index_url):\n warnings.warn(\n \"Failed to find %r at %s. It is suggested to upgrade \"\n \"your index to support normalized names as the name in \"\n \"/simple/{name}.\" % (req.name, main_index_url),\n RemovedInPip8Warning,\n )\n\n url_name = self._find_url_name(\n Link(self.index_urls[0], trusted=True),\n url_name, req\n ) or req.url_name\n\n if url_name is not None:\n locations = [\n mkurl_pypi_url(url)\n for url in self.index_urls] + self.find_links\n else:\n locations = list(self.find_links)\n\n file_locations, url_locations = self._sort_locations(locations)\n _flocations, _ulocations = self._sort_locations(self.dependency_links)\n file_locations.extend(_flocations)\n\n # We trust every url that the user has given us whether it was given\n # via --index-url or --find-links\n locations = [Link(url, trusted=True) for url in url_locations]\n\n # We explicitly do not trust links that came from dependency_links\n locations.extend([Link(url) for url in _ulocations])\n\n logger.debug('URLs to search for versions for %s:', req)\n for location in locations:\n logger.debug('* %s', location)\n self._validate_secure_origin(logger, location)\n\n found_versions = []\n found_versions.extend(\n self._package_versions(\n # We trust every directly linked archive in find_links\n [Link(url, '-f', trusted=True) for url in self.find_links],\n req.name.lower()\n )\n )\n page_versions = []\n for page in self._get_pages(locations, req):\n logger.debug('Analyzing links from page %s', page.url)\n with indent_log():\n page_versions.extend(\n self._package_versions(page.links, req.name.lower())\n )\n dependency_versions = list(self._package_versions(\n [Link(url) for url in self.dependency_links], req.name.lower()))\n if dependency_versions:\n logger.debug(\n 'dependency_links found: %s',\n ', '.join([\n link.url for p, link, version in dependency_versions\n ])\n )\n file_versions = list(\n self._package_versions(\n [Link(url) for url in file_locations],\n req.name.lower()\n )\n )\n if (not found_versions\n and not page_versions\n and not dependency_versions\n and not file_versions):\n logger.critical(\n 'Could not find any downloads that satisfy the requirement %s',\n req,\n )\n\n if self.need_warn_external:\n logger.warning(\n \"Some externally hosted files were ignored as access to \"\n \"them may be unreliable (use --allow-external %s to \"\n \"allow).\",\n req.name,\n )\n\n if self.need_warn_unverified:\n logger.warning(\n \"Some insecure and unverifiable files were ignored\"\n \" (use --allow-unverified %s to allow).\",\n req.name,\n )\n\n raise DistributionNotFound(\n 'No distributions at all found for %s' % req\n )\n installed_version = []\n if req.satisfied_by is not None:\n installed_version 
= [\n InstallationCandidate(\n req.name,\n req.satisfied_by.version,\n INSTALLED_VERSION,\n ),\n ]\n if file_versions:\n file_versions.sort(reverse=True)\n logger.debug(\n 'Local files found: %s',\n ', '.join([\n url_to_path(candidate.location.url)\n for candidate in file_versions\n ])\n )\n\n # This is an intentional priority ordering\n all_versions = (\n file_versions + found_versions + page_versions\n + dependency_versions\n )\n\n # Filter out anything which doesn't match our specifier\n _versions = set(\n req.specifier.filter(\n [x.version for x in all_versions],\n prereleases=(\n self.allow_all_prereleases\n if self.allow_all_prereleases else None\n ),\n )\n )\n all_versions = [x for x in all_versions if x.version in _versions]\n\n # Finally add our existing versions to the front of our versions.\n applicable_versions = installed_version + all_versions\n\n applicable_versions = self._sort_versions(applicable_versions)\n existing_applicable = any(\n i.location is INSTALLED_VERSION\n for i in applicable_versions\n )\n\n if not upgrade and existing_applicable:\n if applicable_versions[0].location is INSTALLED_VERSION:\n logger.debug(\n 'Existing installed version (%s) is most up-to-date and '\n 'satisfies requirement',\n req.satisfied_by.version,\n )\n else:\n logger.debug(\n 'Existing installed version (%s) satisfies requirement '\n '(most up-to-date version is %s)',\n req.satisfied_by.version,\n applicable_versions[0][2],\n )\n return None\n\n if not applicable_versions:\n logger.critical(\n 'Could not find a version that satisfies the requirement %s '\n '(from versions: %s)',\n req,\n ', '.join(\n sorted(\n set(str(i.version) for i in all_versions),\n key=parse_version,\n )\n )\n )\n\n if self.need_warn_external:\n logger.warning(\n \"Some externally hosted files were ignored as access to \"\n \"them may be unreliable (use --allow-external to allow).\"\n )\n\n if self.need_warn_unverified:\n logger.warning(\n \"Some insecure and unverifiable files were ignored\"\n \" (use --allow-unverified %s to allow).\",\n req.name,\n )\n\n raise DistributionNotFound(\n 'No distributions matching the version for %s' % req\n )\n\n if applicable_versions[0].location is INSTALLED_VERSION:\n # We have an existing version, and its the best version\n logger.debug(\n 'Installed version (%s) is most up-to-date (past versions: ',\n '%s)',\n req.satisfied_by.version,\n ', '.join(str(i.version) for i in applicable_versions[1:])\n or \"none\",\n )\n raise BestVersionAlreadyInstalled\n\n if len(applicable_versions) > 1:\n logger.debug(\n 'Using version %s (newest of versions: %s)',\n applicable_versions[0].version,\n ', '.join(str(i.version) for i in applicable_versions)\n )\n\n selected_version = applicable_versions[0].location\n\n if (selected_version.verifiable is not None\n and not selected_version.verifiable):\n logger.warning(\n \"%s is potentially insecure and unverifiable.\", req.name,\n )\n\n if selected_version._deprecated_regex:\n warnings.warn(\n \"%s discovered using a deprecated method of parsing, in the \"\n \"future it will no longer be discovered.\" % req.name,\n RemovedInPip7Warning,\n )\n\n return selected_version\n\n def _find_url_name(self, index_url, url_name, req):\n \"\"\"\n Finds the true URL name of a package, when the given name isn't quite\n correct.\n This is usually used to implement case-insensitivity.\n \"\"\"\n if not index_url.url.endswith('/'):\n # Vaguely part of the PyPI API... 
weird but true.\n # FIXME: bad to modify this?\n index_url.url += '/'\n page = self._get_page(index_url, req)\n if page is None:\n logger.critical('Cannot fetch index base URL %s', index_url)\n return\n norm_name = normalize_name(req.url_name)\n for link in page.links:\n base = posixpath.basename(link.path.rstrip('/'))\n if norm_name == normalize_name(base):\n logger.debug(\n 'Real name of requirement %s is %s', url_name, base,\n )\n return base\n return None\n\n def _get_pages(self, locations, req):\n \"\"\"\n Yields (page, page_url) from the given locations, skipping\n locations that have errors, and adding download/homepage links\n \"\"\"\n all_locations = list(locations)\n seen = set()\n\n while all_locations:\n location = all_locations.pop(0)\n if location in seen:\n continue\n seen.add(location)\n\n page = self._get_page(location, req)\n if page is None:\n continue\n\n yield page\n\n for link in page.rel_links():\n normalized = normalize_name(req.name).lower()\n\n if (normalized not in self.allow_external\n and not self.allow_all_external):\n self.need_warn_external = True\n logger.debug(\n \"Not searching %s for files because external \"\n \"urls are disallowed.\",\n link,\n )\n continue\n\n if (link.trusted is not None\n and not link.trusted\n and normalized not in self.allow_unverified):\n logger.debug(\n \"Not searching %s for urls, it is an \"\n \"untrusted link and cannot produce safe or \"\n \"verifiable files.\",\n link,\n )\n self.need_warn_unverified = True\n continue\n\n all_locations.append(link)\n\n _egg_fragment_re = re.compile(r'#egg=([^&]*)')\n _egg_info_re = re.compile(r'([a-z0-9_.]+)-([a-z0-9_.!+-]+)', re.I)\n _py_version_re = re.compile(r'-py([123]\\.?[0-9]?)$')\n\n def _sort_links(self, links):\n \"\"\"\n Returns elements of links in order, non-egg links first, egg links\n second, while eliminating duplicates\n \"\"\"\n eggs, no_eggs = [], []\n seen = set()\n for link in links:\n if link not in seen:\n seen.add(link)\n if link.egg_fragment:\n eggs.append(link)\n else:\n no_eggs.append(link)\n return no_eggs + eggs\n\n def _package_versions(self, links, search_name):\n for link in self._sort_links(links):\n v = self._link_package_versions(link, search_name)\n if v is not None:\n yield v\n\n def _known_extensions(self):\n extensions = ('.tar.gz', '.tar.bz2', '.tar', '.tgz', '.zip')\n if self.use_wheel:\n return extensions + (wheel_ext,)\n return extensions\n\n def _link_package_versions(self, link, search_name):\n \"\"\"\n Return an iterable of triples (pkg_resources_version_key,\n link, python_version) that can be extracted from the given\n link.\n\n Meant to be overridden by subclasses, not called by clients.\n \"\"\"\n platform = get_platform()\n\n version = None\n if link.egg_fragment:\n egg_info = link.egg_fragment\n else:\n egg_info, ext = link.splitext()\n if not ext:\n if link not in self.logged_links:\n logger.debug('Skipping link %s; not a file', link)\n self.logged_links.add(link)\n return\n if egg_info.endswith('.tar'):\n # Special double-extension case:\n egg_info = egg_info[:-4]\n ext = '.tar' + ext\n if ext not in self._known_extensions():\n if link not in self.logged_links:\n logger.debug(\n 'Skipping link %s; unknown archive format: %s',\n link,\n ext,\n )\n self.logged_links.add(link)\n return\n if \"macosx10\" in link.path and ext == '.zip':\n if link not in self.logged_links:\n logger.debug('Skipping link %s; macosx10 one', link)\n self.logged_links.add(link)\n return\n if ext == wheel_ext:\n try:\n wheel = Wheel(link.filename)\n except 
InvalidWheelFilename:\n logger.debug(\n 'Skipping %s because the wheel filename is invalid',\n link\n )\n return\n if (pkg_resources.safe_name(wheel.name).lower()\n != pkg_resources.safe_name(search_name).lower()):\n logger.debug(\n 'Skipping link %s; wrong project name (not %s)',\n link,\n search_name,\n )\n return\n if not wheel.supported():\n logger.debug(\n 'Skipping %s because it is not compatible with this '\n 'Python',\n link,\n )\n return\n # This is a dirty hack to prevent installing Binary Wheels from\n # PyPI unless it is a Windows or Mac Binary Wheel. This is\n # paired with a change to PyPI disabling uploads for the\n # same. Once we have a mechanism for enabling support for\n # binary wheels on linux that deals with the inherent problems\n # of binary distribution this can be removed.\n comes_from = getattr(link, \"comes_from\", None)\n if (\n (\n not platform.startswith('win')\n and not platform.startswith('macosx')\n and not platform == 'cli'\n )\n and comes_from is not None\n and urllib_parse.urlparse(\n comes_from.url\n ).netloc.endswith(PyPI.netloc)):\n if not wheel.supported(tags=supported_tags_noarch):\n logger.debug(\n \"Skipping %s because it is a pypi-hosted binary \"\n \"Wheel on an unsupported platform\",\n link,\n )\n return\n version = wheel.version\n\n if not version:\n version = self._egg_info_matches(egg_info, search_name, link)\n if version is None:\n logger.debug(\n 'Skipping link %s; wrong project name (not %s)',\n link,\n search_name,\n )\n return\n\n if (link.internal is not None\n and not link.internal\n and not normalize_name(search_name).lower()\n in self.allow_external\n and not self.allow_all_external):\n # We have a link that we are sure is external, so we should skip\n # it unless we are allowing externals\n logger.debug(\"Skipping %s because it is externally hosted.\", link)\n self.need_warn_external = True\n return\n\n if (link.verifiable is not None\n and not link.verifiable\n and not (normalize_name(search_name).lower()\n in self.allow_unverified)):\n # We have a link that we are sure we cannot verify its integrity,\n # so we should skip it unless we are allowing unsafe installs\n # for this requirement.\n logger.debug(\n \"Skipping %s because it is an insecure and unverifiable file.\",\n link,\n )\n self.need_warn_unverified = True\n return\n\n match = self._py_version_re.search(version)\n if match:\n version = version[:match.start()]\n py_version = match.group(1)\n if py_version != sys.version[:3]:\n logger.debug(\n 'Skipping %s because Python version is incorrect', link\n )\n return\n logger.debug('Found link %s, version: %s', link, version)\n\n return InstallationCandidate(search_name, version, link)\n\n def _egg_info_matches(self, egg_info, search_name, link):\n match = self._egg_info_re.search(egg_info)\n if not match:\n logger.debug('Could not parse version from link: %s', link)\n return None\n name = match.group(0).lower()\n # To match the \"safe\" name that pkg_resources creates:\n name = name.replace('_', '-')\n # project name and version must be separated by a dash\n look_for = search_name.lower() + \"-\"\n if name.startswith(look_for):\n return match.group(0)[len(look_for):]\n else:\n return None\n\n def _get_page(self, link, req):\n return HTMLPage.get_page(link, req, session=self.session)\n\n\nclass HTMLPage(object):\n \"\"\"Represents one page, along with its URL\"\"\"\n\n # FIXME: these regexes are horrible hacks:\n _homepage_re = re.compile(b'<th>\\\\s*home\\\\s*page', re.I)\n _download_re = 
re.compile(b'<th>\\\\s*download\\\\s+url', re.I)\n _href_re = re.compile(\n b'href=(?:\"([^\"]*)\"|\\'([^\\']*)\\'|([^>\\\\s\\\\n]*))',\n re.I | re.S\n )\n\n def __init__(self, content, url, headers=None, trusted=None):\n # Determine if we have any encoding information in our headers\n encoding = None\n if headers and \"Content-Type\" in headers:\n content_type, params = cgi.parse_header(headers[\"Content-Type\"])\n\n if \"charset\" in params:\n encoding = params['charset']\n\n self.content = content\n self.parsed = html5lib.parse(\n self.content,\n encoding=encoding,\n namespaceHTMLElements=False,\n )\n self.url = url\n self.headers = headers\n self.trusted = trusted\n\n def __str__(self):\n return self.url\n\n @classmethod\n def get_page(cls, link, req, skip_archives=True, session=None):\n if session is None:\n raise TypeError(\n \"get_page() missing 1 required keyword argument: 'session'\"\n )\n\n url = link.url\n url = url.split('#', 1)[0]\n\n # Check for VCS schemes that do not support lookup as web pages.\n from pip.vcs import VcsSupport\n for scheme in VcsSupport.schemes:\n if url.lower().startswith(scheme) and url[len(scheme)] in '+:':\n logger.debug('Cannot look at %s URL %s', scheme, link)\n return None\n\n try:\n if skip_archives:\n filename = link.filename\n for bad_ext in ['.tar', '.tar.gz', '.tar.bz2', '.tgz', '.zip']:\n if filename.endswith(bad_ext):\n content_type = cls._get_content_type(\n url, session=session,\n )\n if content_type.lower().startswith('text/html'):\n break\n else:\n logger.debug(\n 'Skipping page %s because of Content-Type: %s',\n link,\n content_type,\n )\n return\n\n logger.debug('Getting page %s', url)\n\n # Tack index.html onto file:// URLs that point to directories\n (scheme, netloc, path, params, query, fragment) = \\\n urllib_parse.urlparse(url)\n if (scheme == 'file'\n and os.path.isdir(urllib_request.url2pathname(path))):\n # add trailing slash if not present so urljoin doesn't trim\n # final segment\n if not url.endswith('/'):\n url += '/'\n url = urllib_parse.urljoin(url, 'index.html')\n logger.debug(' file: URL is directory, getting %s', url)\n\n resp = session.get(\n url,\n headers={\n \"Accept\": \"text/html\",\n \"Cache-Control\": \"max-age=600\",\n },\n )\n resp.raise_for_status()\n\n # The check for archives above only works if the url ends with\n # something that looks like an archive. However that is not a\n # requirement of an url. Unless we issue a HEAD request on every\n # url we cannot know ahead of time for sure if something is HTML\n # or not. 
However we can check after we've downloaded it.\n content_type = resp.headers.get('Content-Type', 'unknown')\n if not content_type.lower().startswith(\"text/html\"):\n logger.debug(\n 'Skipping page %s because of Content-Type: %s',\n link,\n content_type,\n )\n return\n\n inst = cls(\n resp.content, resp.url, resp.headers,\n trusted=link.trusted,\n )\n except requests.HTTPError as exc:\n level = 2 if exc.response.status_code == 404 else 1\n cls._handle_fail(req, link, exc, url, level=level)\n except requests.ConnectionError as exc:\n cls._handle_fail(\n req, link, \"connection error: %s\" % exc, url,\n )\n except requests.Timeout:\n cls._handle_fail(req, link, \"timed out\", url)\n except SSLError as exc:\n reason = (\"There was a problem confirming the ssl certificate: \"\n \"%s\" % exc)\n cls._handle_fail(\n req, link, reason, url,\n level=2,\n meth=logger.info,\n )\n else:\n return inst\n\n @staticmethod\n def _handle_fail(req, link, reason, url, level=1, meth=None):\n if meth is None:\n meth = logger.debug\n\n meth(\"Could not fetch URL %s: %s\", link, reason)\n meth(\"Will skip URL %s when looking for download links for %s\" %\n (link.url, req))\n\n @staticmethod\n def _get_content_type(url, session):\n \"\"\"Get the Content-Type of the given url, using a HEAD request\"\"\"\n scheme, netloc, path, query, fragment = urllib_parse.urlsplit(url)\n if scheme not in ('http', 'https'):\n # FIXME: some warning or something?\n # assertion error?\n return ''\n\n resp = session.head(url, allow_redirects=True)\n resp.raise_for_status()\n\n return resp.headers.get(\"Content-Type\", \"\")\n\n @cached_property\n def api_version(self):\n metas = [\n x for x in self.parsed.findall(\".//meta\")\n if x.get(\"name\", \"\").lower() == \"api-version\"\n ]\n if metas:\n try:\n return int(metas[0].get(\"value\", None))\n except (TypeError, ValueError):\n pass\n\n return None\n\n @cached_property\n def base_url(self):\n bases = [\n x for x in self.parsed.findall(\".//base\")\n if x.get(\"href\") is not None\n ]\n if bases and bases[0].get(\"href\"):\n return bases[0].get(\"href\")\n else:\n return self.url\n\n @property\n def links(self):\n \"\"\"Yields all links in the page\"\"\"\n for anchor in self.parsed.findall(\".//a\"):\n if anchor.get(\"href\"):\n href = anchor.get(\"href\")\n url = self.clean_link(\n urllib_parse.urljoin(self.base_url, href)\n )\n\n # Determine if this link is internal. 
If that distinction\n # doesn't make sense in this context, then we don't make\n # any distinction.\n internal = None\n if self.api_version and self.api_version >= 2:\n # Only api_versions >= 2 have a distinction between\n # external and internal links\n internal = bool(\n anchor.get(\"rel\")\n and \"internal\" in anchor.get(\"rel\").split()\n )\n\n yield Link(url, self, internal=internal)\n\n def rel_links(self):\n for url in self.explicit_rel_links():\n yield url\n for url in self.scraped_rel_links():\n yield url\n\n def explicit_rel_links(self, rels=('homepage', 'download')):\n \"\"\"Yields all links with the given relations\"\"\"\n rels = set(rels)\n\n for anchor in self.parsed.findall(\".//a\"):\n if anchor.get(\"rel\") and anchor.get(\"href\"):\n found_rels = set(anchor.get(\"rel\").split())\n # Determine the intersection between what rels were found and\n # what rels were being looked for\n if found_rels & rels:\n href = anchor.get(\"href\")\n url = self.clean_link(\n urllib_parse.urljoin(self.base_url, href)\n )\n yield Link(url, self, trusted=False)\n\n def scraped_rel_links(self):\n # Can we get rid of this horrible horrible method?\n for regex in (self._homepage_re, self._download_re):\n match = regex.search(self.content)\n if not match:\n continue\n href_match = self._href_re.search(self.content, pos=match.end())\n if not href_match:\n continue\n url = (\n href_match.group(1)\n or href_match.group(2)\n or href_match.group(3)\n )\n if not url:\n continue\n try:\n url = url.decode(\"ascii\")\n except UnicodeDecodeError:\n continue\n url = self.clean_link(urllib_parse.urljoin(self.base_url, url))\n yield Link(url, self, trusted=False, _deprecated_regex=True)\n\n _clean_re = re.compile(r'[^a-z0-9$&+,/:;=?@.#%_\\\\|-]', re.I)\n\n def clean_link(self, url):\n \"\"\"Makes sure a link is fully encoded. 
That is, if a ' ' shows up in\n the link, it will be rewritten to %20 (while not over-quoting\n % or other characters).\"\"\"\n return self._clean_re.sub(\n lambda match: '%%%2x' % ord(match.group(0)), url)\n\n\nclass Link(object):\n\n def __init__(self, url, comes_from=None, internal=None, trusted=None,\n _deprecated_regex=False):\n\n # url can be a UNC windows share\n if url != Inf and url.startswith('\\\\\\\\'):\n url = path_to_url(url)\n\n self.url = url\n self.comes_from = comes_from\n self.internal = internal\n self.trusted = trusted\n self._deprecated_regex = _deprecated_regex\n\n def __str__(self):\n if self.comes_from:\n return '%s (from %s)' % (self.url, self.comes_from)\n else:\n return str(self.url)\n\n def __repr__(self):\n return '<Link %s>' % self\n\n def __eq__(self, other):\n if not isinstance(other, Link):\n return NotImplemented\n return self.url == other.url\n\n def __ne__(self, other):\n if not isinstance(other, Link):\n return NotImplemented\n return self.url != other.url\n\n def __lt__(self, other):\n if not isinstance(other, Link):\n return NotImplemented\n return self.url < other.url\n\n def __le__(self, other):\n if not isinstance(other, Link):\n return NotImplemented\n return self.url <= other.url\n\n def __gt__(self, other):\n if not isinstance(other, Link):\n return NotImplemented\n return self.url > other.url\n\n def __ge__(self, other):\n if not isinstance(other, Link):\n return NotImplemented\n return self.url >= other.url\n\n def __hash__(self):\n return hash(self.url)\n\n @property\n def filename(self):\n _, netloc, path, _, _ = urllib_parse.urlsplit(self.url)\n name = posixpath.basename(path.rstrip('/')) or netloc\n assert name, ('URL %r produced no filename' % self.url)\n return name\n\n @property\n def scheme(self):\n return urllib_parse.urlsplit(self.url)[0]\n\n @property\n def netloc(self):\n return urllib_parse.urlsplit(self.url)[1]\n\n @property\n def path(self):\n return urllib_parse.urlsplit(self.url)[2]\n\n def splitext(self):\n return splitext(posixpath.basename(self.path.rstrip('/')))\n\n @property\n def ext(self):\n return self.splitext()[1]\n\n @property\n def url_without_fragment(self):\n scheme, netloc, path, query, fragment = urllib_parse.urlsplit(self.url)\n return urllib_parse.urlunsplit((scheme, netloc, path, query, None))\n\n _egg_fragment_re = re.compile(r'#egg=([^&]*)')\n\n @property\n def egg_fragment(self):\n match = self._egg_fragment_re.search(self.url)\n if not match:\n return None\n return match.group(1)\n\n _hash_re = re.compile(\n r'(sha1|sha224|sha384|sha256|sha512|md5)=([a-f0-9]+)'\n )\n\n @property\n def hash(self):\n match = self._hash_re.search(self.url)\n if match:\n return match.group(2)\n return None\n\n @property\n def hash_name(self):\n match = self._hash_re.search(self.url)\n if match:\n return match.group(1)\n return None\n\n @property\n def show_url(self):\n return posixpath.basename(self.url.split('#', 1)[0].split('?', 1)[0])\n\n @property\n def verifiable(self):\n \"\"\"\n Returns True if this link can be verified after download, False if it\n cannot, and None if we cannot determine.\n \"\"\"\n trusted = self.trusted or getattr(self.comes_from, \"trusted\", None)\n if trusted is not None and trusted:\n # This link came from a trusted source. 
It *may* be verifiable but\n # first we need to see if this page is operating under the new\n # API version.\n try:\n api_version = getattr(self.comes_from, \"api_version\", None)\n api_version = int(api_version)\n except (ValueError, TypeError):\n api_version = None\n\n if api_version is None or api_version <= 1:\n # This link is either trusted, or it came from a trusted,\n # however it is not operating under the API version 2 so\n # we can't make any claims about if it's safe or not\n return\n\n if self.hash:\n # This link came from a trusted source and it has a hash, so we\n # can consider it safe.\n return True\n else:\n # This link came from a trusted source, using the new API\n # version, and it does not have a hash. It is NOT verifiable\n return False\n elif trusted is not None:\n # This link came from an untrusted source and we cannot trust it\n return False\n\n\n# An object to represent the \"link\" for the installed version of a requirement.\n# Using Inf as the url makes it sort higher.\nINSTALLED_VERSION = Link(Inf)\n", "path": "pip/index.py"}], "after_files": [{"content": "\"\"\"Routines related to PyPI, indexes\"\"\"\nfrom __future__ import absolute_import\n\nimport logging\nimport cgi\nimport sys\nimport os\nimport re\nimport mimetypes\nimport posixpath\nimport warnings\n\nfrom pip._vendor.six.moves.urllib import parse as urllib_parse\nfrom pip._vendor.six.moves.urllib import request as urllib_request\n\nfrom pip.compat import ipaddress\nfrom pip.utils import Inf, cached_property, normalize_name, splitext\nfrom pip.utils.deprecation import RemovedInPip7Warning, RemovedInPip8Warning\nfrom pip.utils.logging import indent_log\nfrom pip.exceptions import (\n DistributionNotFound, BestVersionAlreadyInstalled, InvalidWheelFilename,\n UnsupportedWheel,\n)\nfrom pip.download import url_to_path, path_to_url\nfrom pip.models import PyPI\nfrom pip.wheel import Wheel, wheel_ext\nfrom pip.pep425tags import supported_tags, supported_tags_noarch, get_platform\nfrom pip.req.req_requirement import InstallationCandidate\nfrom pip._vendor import html5lib, requests, pkg_resources, six\nfrom pip._vendor.packaging.version import parse as parse_version\nfrom pip._vendor.requests.exceptions import SSLError\n\n\n__all__ = ['PackageFinder']\n\n\n# Taken from Chrome's list of secure origins (See: http://bit.ly/1qrySKC)\nSECURE_ORIGINS = [\n # protocol, hostname, port\n (\"https\", \"*\", \"*\"),\n (\"*\", \"localhost\", \"*\"),\n (\"*\", \"127.0.0.0/8\", \"*\"),\n (\"*\", \"::1/128\", \"*\"),\n (\"file\", \"*\", None),\n]\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass PackageFinder(object):\n \"\"\"This finds packages.\n\n This is meant to match easy_install's technique for looking for\n packages, by reading pages and looking for appropriate links\n \"\"\"\n\n def __init__(self, find_links, index_urls,\n use_wheel=True, allow_external=(), allow_unverified=(),\n allow_all_external=False, allow_all_prereleases=False,\n trusted_hosts=None, process_dependency_links=False,\n session=None):\n if session is None:\n raise TypeError(\n \"PackageFinder() missing 1 required keyword argument: \"\n \"'session'\"\n )\n\n self.find_links = find_links\n self.index_urls = index_urls\n self.dependency_links = []\n\n # These are boring links that have already been logged somehow:\n self.logged_links = set()\n\n self.use_wheel = use_wheel\n\n # Do we allow (safe and verifiable) externally hosted files?\n self.allow_external = set(normalize_name(n) for n in allow_external)\n\n # Which names are allowed to install 
insecure and unverifiable files?\n self.allow_unverified = set(\n normalize_name(n) for n in allow_unverified\n )\n\n # Anything that is allowed unverified is also allowed external\n self.allow_external |= self.allow_unverified\n\n # Do we allow all (safe and verifiable) externally hosted files?\n self.allow_all_external = allow_all_external\n\n # Domains that we won't emit warnings for when not using HTTPS\n self.secure_origins = [\n (\"*\", host, \"*\")\n for host in (trusted_hosts if trusted_hosts else [])\n ]\n\n # Stores if we ignored any external links so that we can instruct\n # end users how to install them if no distributions are available\n self.need_warn_external = False\n\n # Stores if we ignored any unsafe links so that we can instruct\n # end users how to install them if no distributions are available\n self.need_warn_unverified = False\n\n # Do we want to allow _all_ pre-releases?\n self.allow_all_prereleases = allow_all_prereleases\n\n # Do we process dependency links?\n self.process_dependency_links = process_dependency_links\n\n # The Session we'll use to make requests\n self.session = session\n\n def add_dependency_links(self, links):\n # # FIXME: this shouldn't be global list this, it should only\n # # apply to requirements of the package that specifies the\n # # dependency_links value\n # # FIXME: also, we should track comes_from (i.e., use Link)\n if self.process_dependency_links:\n warnings.warn(\n \"Dependency Links processing has been deprecated and will be \"\n \"removed in a future release.\",\n RemovedInPip7Warning,\n )\n self.dependency_links.extend(links)\n\n def _sort_locations(self, locations):\n \"\"\"\n Sort locations into \"files\" (archives) and \"urls\", and return\n a pair of lists (files,urls)\n \"\"\"\n files = []\n urls = []\n\n # puts the url for the given file path into the appropriate list\n def sort_path(path):\n url = path_to_url(path)\n if mimetypes.guess_type(url, strict=False)[0] == 'text/html':\n urls.append(url)\n else:\n files.append(url)\n\n for url in locations:\n\n is_local_path = os.path.exists(url)\n is_file_url = url.startswith('file:')\n is_find_link = url in self.find_links\n\n if is_local_path or is_file_url:\n if is_local_path:\n path = url\n else:\n path = url_to_path(url)\n if is_find_link and os.path.isdir(path):\n path = os.path.realpath(path)\n for item in os.listdir(path):\n sort_path(os.path.join(path, item))\n elif is_file_url and os.path.isdir(path):\n urls.append(url)\n elif os.path.isfile(path):\n sort_path(path)\n else:\n urls.append(url)\n\n return files, urls\n\n def _candidate_sort_key(self, candidate):\n \"\"\"\n Function used to generate link sort key for link tuples.\n The greater the return value, the more preferred it is.\n If not finding wheels, then sorted by version only.\n If finding wheels, then the sort order is by version, then:\n 1. existing installs\n 2. wheels ordered via Wheel.support_index_min()\n 3. source archives\n Note: it was considered to embed this logic into the Link\n comparison operators, but then different sdist links\n with the same version, would have to be considered equal\n \"\"\"\n if self.use_wheel:\n support_num = len(supported_tags)\n if candidate.location == INSTALLED_VERSION:\n pri = 1\n elif candidate.location.ext == wheel_ext:\n # can raise InvalidWheelFilename\n wheel = Wheel(candidate.location.filename)\n if not wheel.supported():\n raise UnsupportedWheel(\n \"%s is not a supported wheel for this platform. 
It \"\n \"can't be sorted.\" % wheel.filename\n )\n pri = -(wheel.support_index_min())\n else: # sdist\n pri = -(support_num)\n return (candidate.version, pri)\n else:\n return candidate.version\n\n def _sort_versions(self, applicable_versions):\n \"\"\"\n Bring the latest version (and wheels) to the front, but maintain the\n existing ordering as secondary. See the docstring for `_link_sort_key`\n for details. This function is isolated for easier unit testing.\n \"\"\"\n return sorted(\n applicable_versions,\n key=self._candidate_sort_key,\n reverse=True\n )\n\n def _validate_secure_origin(self, logger, location):\n # Determine if this url used a secure transport mechanism\n parsed = urllib_parse.urlparse(str(location))\n origin = (parsed.scheme, parsed.hostname, parsed.port)\n\n # Determine if our origin is a secure origin by looking through our\n # hardcoded list of secure origins, as well as any additional ones\n # configured on this PackageFinder instance.\n for secure_origin in (SECURE_ORIGINS + self.secure_origins):\n # Check to see if the protocol matches\n if origin[0] != secure_origin[0] and secure_origin[0] != \"*\":\n continue\n\n try:\n # We need to do this decode dance to ensure that we have a\n # unicode object, even on Python 2.x.\n addr = ipaddress.ip_address(\n origin[1]\n if (\n isinstance(origin[1], six.text_type)\n or origin[1] is None\n )\n else origin[1].decode(\"utf8\")\n )\n network = ipaddress.ip_network(\n secure_origin[1]\n if isinstance(secure_origin[1], six.text_type)\n else secure_origin[1].decode(\"utf8\")\n )\n except ValueError:\n # We don't have both a valid address or a valid network, so\n # we'll check this origin against hostnames.\n if origin[1] != secure_origin[1] and secure_origin[1] != \"*\":\n continue\n else:\n # We have a valid address and network, so see if the address\n # is contained within the network.\n if addr not in network:\n continue\n\n # Check to see if the port patches\n if (origin[2] != secure_origin[2]\n and secure_origin[2] != \"*\"\n and secure_origin[2] is not None):\n continue\n\n # If we've gotten here, then this origin matches the current\n # secure origin and we should break out of the loop and continue\n # on.\n break\n else:\n # If the loop successfully completed without a break, that means\n # that the origin we are testing is not a secure origin.\n logger.warning(\n \"This repository located at %s is not a trusted host, if \"\n \"this repository is available via HTTPS it is recommend to \"\n \"use HTTPS instead, otherwise you may silence this warning \"\n \"with '--trusted-host %s'.\",\n parsed.hostname,\n parsed.hostname,\n )\n\n warnings.warn(\n \"Implicitly allowing locations which are not hosted at a \"\n \"secure origin is deprecated and will require the use of \"\n \"--trusted-host in the future.\",\n RemovedInPip7Warning,\n )\n\n def find_requirement(self, req, upgrade):\n\n def mkurl_pypi_url(url):\n loc = posixpath.join(url, url_name)\n # For maximum compatibility with easy_install, ensure the path\n # ends in a trailing slash. 
Although this isn't in the spec\n # (and PyPI can handle it without the slash) some other index\n # implementations might break if they relied on easy_install's\n # behavior.\n if not loc.endswith('/'):\n loc = loc + '/'\n return loc\n\n url_name = req.url_name\n\n # Only check main index if index URL is given:\n main_index_url = None\n if self.index_urls:\n # Check that we have the url_name correctly spelled:\n main_index_url = Link(\n mkurl_pypi_url(self.index_urls[0]),\n trusted=True,\n )\n\n page = self._get_page(main_index_url, req)\n if page is None and PyPI.netloc not in str(main_index_url):\n warnings.warn(\n \"Failed to find %r at %s. It is suggested to upgrade \"\n \"your index to support normalized names as the name in \"\n \"/simple/{name}.\" % (req.name, main_index_url),\n RemovedInPip8Warning,\n )\n\n url_name = self._find_url_name(\n Link(self.index_urls[0], trusted=True),\n url_name, req\n ) or req.url_name\n\n if url_name is not None:\n locations = [\n mkurl_pypi_url(url)\n for url in self.index_urls] + self.find_links\n else:\n locations = list(self.find_links)\n\n file_locations, url_locations = self._sort_locations(locations)\n _flocations, _ulocations = self._sort_locations(self.dependency_links)\n file_locations.extend(_flocations)\n\n # We trust every url that the user has given us whether it was given\n # via --index-url or --find-links\n locations = [Link(url, trusted=True) for url in url_locations]\n\n # We explicitly do not trust links that came from dependency_links\n locations.extend([Link(url) for url in _ulocations])\n\n logger.debug('URLs to search for versions for %s:', req)\n for location in locations:\n logger.debug('* %s', location)\n self._validate_secure_origin(logger, location)\n\n found_versions = []\n found_versions.extend(\n self._package_versions(\n # We trust every directly linked archive in find_links\n [Link(url, '-f', trusted=True) for url in self.find_links],\n req.name.lower()\n )\n )\n page_versions = []\n for page in self._get_pages(locations, req):\n logger.debug('Analyzing links from page %s', page.url)\n with indent_log():\n page_versions.extend(\n self._package_versions(page.links, req.name.lower())\n )\n dependency_versions = list(self._package_versions(\n [Link(url) for url in self.dependency_links], req.name.lower()))\n if dependency_versions:\n logger.debug(\n 'dependency_links found: %s',\n ', '.join([\n link.url for p, link, version in dependency_versions\n ])\n )\n file_versions = list(\n self._package_versions(\n [Link(url) for url in file_locations],\n req.name.lower()\n )\n )\n if (not found_versions\n and not page_versions\n and not dependency_versions\n and not file_versions):\n logger.critical(\n 'Could not find any downloads that satisfy the requirement %s',\n req,\n )\n\n if self.need_warn_external:\n logger.warning(\n \"Some externally hosted files were ignored as access to \"\n \"them may be unreliable (use --allow-external %s to \"\n \"allow).\",\n req.name,\n )\n\n if self.need_warn_unverified:\n logger.warning(\n \"Some insecure and unverifiable files were ignored\"\n \" (use --allow-unverified %s to allow).\",\n req.name,\n )\n\n raise DistributionNotFound(\n 'No distributions at all found for %s' % req\n )\n installed_version = []\n if req.satisfied_by is not None:\n installed_version = [\n InstallationCandidate(\n req.name,\n req.satisfied_by.version,\n INSTALLED_VERSION,\n ),\n ]\n if file_versions:\n file_versions.sort(reverse=True)\n logger.debug(\n 'Local files found: %s',\n ', '.join([\n 
url_to_path(candidate.location.url)\n for candidate in file_versions\n ])\n )\n\n # This is an intentional priority ordering\n all_versions = (\n file_versions + found_versions + page_versions\n + dependency_versions\n )\n\n # Filter out anything which doesn't match our specifier\n _versions = set(\n req.specifier.filter(\n [x.version for x in all_versions],\n prereleases=(\n self.allow_all_prereleases\n if self.allow_all_prereleases else None\n ),\n )\n )\n all_versions = [x for x in all_versions if x.version in _versions]\n\n # Finally add our existing versions to the front of our versions.\n applicable_versions = installed_version + all_versions\n\n applicable_versions = self._sort_versions(applicable_versions)\n existing_applicable = any(\n i.location is INSTALLED_VERSION\n for i in applicable_versions\n )\n\n if not upgrade and existing_applicable:\n if applicable_versions[0].location is INSTALLED_VERSION:\n logger.debug(\n 'Existing installed version (%s) is most up-to-date and '\n 'satisfies requirement',\n req.satisfied_by.version,\n )\n else:\n logger.debug(\n 'Existing installed version (%s) satisfies requirement '\n '(most up-to-date version is %s)',\n req.satisfied_by.version,\n applicable_versions[0][2],\n )\n return None\n\n if not applicable_versions:\n logger.critical(\n 'Could not find a version that satisfies the requirement %s '\n '(from versions: %s)',\n req,\n ', '.join(\n sorted(\n set(str(i.version) for i in all_versions),\n key=parse_version,\n )\n )\n )\n\n if self.need_warn_external:\n logger.warning(\n \"Some externally hosted files were ignored as access to \"\n \"them may be unreliable (use --allow-external to allow).\"\n )\n\n if self.need_warn_unverified:\n logger.warning(\n \"Some insecure and unverifiable files were ignored\"\n \" (use --allow-unverified %s to allow).\",\n req.name,\n )\n\n raise DistributionNotFound(\n 'No distributions matching the version for %s' % req\n )\n\n if applicable_versions[0].location is INSTALLED_VERSION:\n # We have an existing version, and its the best version\n logger.debug(\n 'Installed version (%s) is most up-to-date (past versions: ',\n '%s)',\n req.satisfied_by.version,\n ', '.join(str(i.version) for i in applicable_versions[1:])\n or \"none\",\n )\n raise BestVersionAlreadyInstalled\n\n if len(applicable_versions) > 1:\n logger.debug(\n 'Using version %s (newest of versions: %s)',\n applicable_versions[0].version,\n ', '.join(str(i.version) for i in applicable_versions)\n )\n\n selected_version = applicable_versions[0].location\n\n if (selected_version.verifiable is not None\n and not selected_version.verifiable):\n logger.warning(\n \"%s is potentially insecure and unverifiable.\", req.name,\n )\n\n if selected_version._deprecated_regex:\n warnings.warn(\n \"%s discovered using a deprecated method of parsing, in the \"\n \"future it will no longer be discovered.\" % req.name,\n RemovedInPip7Warning,\n )\n\n return selected_version\n\n def _find_url_name(self, index_url, url_name, req):\n \"\"\"\n Finds the true URL name of a package, when the given name isn't quite\n correct.\n This is usually used to implement case-insensitivity.\n \"\"\"\n if not index_url.url.endswith('/'):\n # Vaguely part of the PyPI API... 
weird but true.\n # FIXME: bad to modify this?\n index_url.url += '/'\n page = self._get_page(index_url, req)\n if page is None:\n logger.critical('Cannot fetch index base URL %s', index_url)\n return\n norm_name = normalize_name(req.url_name)\n for link in page.links:\n base = posixpath.basename(link.path.rstrip('/'))\n if norm_name == normalize_name(base):\n logger.debug(\n 'Real name of requirement %s is %s', url_name, base,\n )\n return base\n return None\n\n def _get_pages(self, locations, req):\n \"\"\"\n Yields (page, page_url) from the given locations, skipping\n locations that have errors, and adding download/homepage links\n \"\"\"\n all_locations = list(locations)\n seen = set()\n\n while all_locations:\n location = all_locations.pop(0)\n if location in seen:\n continue\n seen.add(location)\n\n page = self._get_page(location, req)\n if page is None:\n continue\n\n yield page\n\n for link in page.rel_links():\n normalized = normalize_name(req.name).lower()\n\n if (normalized not in self.allow_external\n and not self.allow_all_external):\n self.need_warn_external = True\n logger.debug(\n \"Not searching %s for files because external \"\n \"urls are disallowed.\",\n link,\n )\n continue\n\n if (link.trusted is not None\n and not link.trusted\n and normalized not in self.allow_unverified):\n logger.debug(\n \"Not searching %s for urls, it is an \"\n \"untrusted link and cannot produce safe or \"\n \"verifiable files.\",\n link,\n )\n self.need_warn_unverified = True\n continue\n\n all_locations.append(link)\n\n _egg_fragment_re = re.compile(r'#egg=([^&]*)')\n _egg_info_re = re.compile(r'([a-z0-9_.]+)-([a-z0-9_.!+-]+)', re.I)\n _py_version_re = re.compile(r'-py([123]\\.?[0-9]?)$')\n\n def _sort_links(self, links):\n \"\"\"\n Returns elements of links in order, non-egg links first, egg links\n second, while eliminating duplicates\n \"\"\"\n eggs, no_eggs = [], []\n seen = set()\n for link in links:\n if link not in seen:\n seen.add(link)\n if link.egg_fragment:\n eggs.append(link)\n else:\n no_eggs.append(link)\n return no_eggs + eggs\n\n def _package_versions(self, links, search_name):\n for link in self._sort_links(links):\n v = self._link_package_versions(link, search_name)\n if v is not None:\n yield v\n\n def _known_extensions(self):\n extensions = ('.tar.gz', '.tar.bz2', '.tar', '.tgz', '.zip')\n if self.use_wheel:\n return extensions + (wheel_ext,)\n return extensions\n\n def _link_package_versions(self, link, search_name):\n \"\"\"\n Return an iterable of triples (pkg_resources_version_key,\n link, python_version) that can be extracted from the given\n link.\n\n Meant to be overridden by subclasses, not called by clients.\n \"\"\"\n platform = get_platform()\n\n version = None\n if link.egg_fragment:\n egg_info = link.egg_fragment\n else:\n egg_info, ext = link.splitext()\n if not ext:\n if link not in self.logged_links:\n logger.debug('Skipping link %s; not a file', link)\n self.logged_links.add(link)\n return\n if egg_info.endswith('.tar'):\n # Special double-extension case:\n egg_info = egg_info[:-4]\n ext = '.tar' + ext\n if ext not in self._known_extensions():\n if link not in self.logged_links:\n logger.debug(\n 'Skipping link %s; unknown archive format: %s',\n link,\n ext,\n )\n self.logged_links.add(link)\n return\n if \"macosx10\" in link.path and ext == '.zip':\n if link not in self.logged_links:\n logger.debug('Skipping link %s; macosx10 one', link)\n self.logged_links.add(link)\n return\n if ext == wheel_ext:\n try:\n wheel = Wheel(link.filename)\n except 
InvalidWheelFilename:\n logger.debug(\n 'Skipping %s because the wheel filename is invalid',\n link\n )\n return\n if (pkg_resources.safe_name(wheel.name).lower()\n != pkg_resources.safe_name(search_name).lower()):\n logger.debug(\n 'Skipping link %s; wrong project name (not %s)',\n link,\n search_name,\n )\n return\n if not wheel.supported():\n logger.debug(\n 'Skipping %s because it is not compatible with this '\n 'Python',\n link,\n )\n return\n # This is a dirty hack to prevent installing Binary Wheels from\n # PyPI unless it is a Windows or Mac Binary Wheel. This is\n # paired with a change to PyPI disabling uploads for the\n # same. Once we have a mechanism for enabling support for\n # binary wheels on linux that deals with the inherent problems\n # of binary distribution this can be removed.\n comes_from = getattr(link, \"comes_from\", None)\n if (\n (\n not platform.startswith('win')\n and not platform.startswith('macosx')\n and not platform == 'cli'\n )\n and comes_from is not None\n and urllib_parse.urlparse(\n comes_from.url\n ).netloc.endswith(PyPI.netloc)):\n if not wheel.supported(tags=supported_tags_noarch):\n logger.debug(\n \"Skipping %s because it is a pypi-hosted binary \"\n \"Wheel on an unsupported platform\",\n link,\n )\n return\n version = wheel.version\n\n if not version:\n version = self._egg_info_matches(egg_info, search_name, link)\n if version is None:\n logger.debug(\n 'Skipping link %s; wrong project name (not %s)',\n link,\n search_name,\n )\n return\n\n if (link.internal is not None\n and not link.internal\n and not normalize_name(search_name).lower()\n in self.allow_external\n and not self.allow_all_external):\n # We have a link that we are sure is external, so we should skip\n # it unless we are allowing externals\n logger.debug(\"Skipping %s because it is externally hosted.\", link)\n self.need_warn_external = True\n return\n\n if (link.verifiable is not None\n and not link.verifiable\n and not (normalize_name(search_name).lower()\n in self.allow_unverified)):\n # We have a link that we are sure we cannot verify its integrity,\n # so we should skip it unless we are allowing unsafe installs\n # for this requirement.\n logger.debug(\n \"Skipping %s because it is an insecure and unverifiable file.\",\n link,\n )\n self.need_warn_unverified = True\n return\n\n match = self._py_version_re.search(version)\n if match:\n version = version[:match.start()]\n py_version = match.group(1)\n if py_version != sys.version[:3]:\n logger.debug(\n 'Skipping %s because Python version is incorrect', link\n )\n return\n logger.debug('Found link %s, version: %s', link, version)\n\n return InstallationCandidate(search_name, version, link)\n\n def _egg_info_matches(self, egg_info, search_name, link):\n match = self._egg_info_re.search(egg_info)\n if not match:\n logger.debug('Could not parse version from link: %s', link)\n return None\n name = match.group(0).lower()\n # To match the \"safe\" name that pkg_resources creates:\n name = name.replace('_', '-')\n # project name and version must be separated by a dash\n look_for = search_name.lower() + \"-\"\n if name.startswith(look_for):\n return match.group(0)[len(look_for):]\n else:\n return None\n\n def _get_page(self, link, req):\n return HTMLPage.get_page(link, req, session=self.session)\n\n\nclass HTMLPage(object):\n \"\"\"Represents one page, along with its URL\"\"\"\n\n # FIXME: these regexes are horrible hacks:\n _homepage_re = re.compile(b'<th>\\\\s*home\\\\s*page', re.I)\n _download_re = 
re.compile(b'<th>\\\\s*download\\\\s+url', re.I)\n _href_re = re.compile(\n b'href=(?:\"([^\"]*)\"|\\'([^\\']*)\\'|([^>\\\\s\\\\n]*))',\n re.I | re.S\n )\n\n def __init__(self, content, url, headers=None, trusted=None):\n # Determine if we have any encoding information in our headers\n encoding = None\n if headers and \"Content-Type\" in headers:\n content_type, params = cgi.parse_header(headers[\"Content-Type\"])\n\n if \"charset\" in params:\n encoding = params['charset']\n\n self.content = content\n self.parsed = html5lib.parse(\n self.content,\n encoding=encoding,\n namespaceHTMLElements=False,\n )\n self.url = url\n self.headers = headers\n self.trusted = trusted\n\n def __str__(self):\n return self.url\n\n @classmethod\n def get_page(cls, link, req, skip_archives=True, session=None):\n if session is None:\n raise TypeError(\n \"get_page() missing 1 required keyword argument: 'session'\"\n )\n\n url = link.url\n url = url.split('#', 1)[0]\n\n # Check for VCS schemes that do not support lookup as web pages.\n from pip.vcs import VcsSupport\n for scheme in VcsSupport.schemes:\n if url.lower().startswith(scheme) and url[len(scheme)] in '+:':\n logger.debug('Cannot look at %s URL %s', scheme, link)\n return None\n\n try:\n if skip_archives:\n filename = link.filename\n for bad_ext in ['.tar', '.tar.gz', '.tar.bz2', '.tgz', '.zip']:\n if filename.endswith(bad_ext):\n content_type = cls._get_content_type(\n url, session=session,\n )\n if content_type.lower().startswith('text/html'):\n break\n else:\n logger.debug(\n 'Skipping page %s because of Content-Type: %s',\n link,\n content_type,\n )\n return\n\n logger.debug('Getting page %s', url)\n\n # Tack index.html onto file:// URLs that point to directories\n (scheme, netloc, path, params, query, fragment) = \\\n urllib_parse.urlparse(url)\n if (scheme == 'file'\n and os.path.isdir(urllib_request.url2pathname(path))):\n # add trailing slash if not present so urljoin doesn't trim\n # final segment\n if not url.endswith('/'):\n url += '/'\n url = urllib_parse.urljoin(url, 'index.html')\n logger.debug(' file: URL is directory, getting %s', url)\n\n resp = session.get(\n url,\n headers={\n \"Accept\": \"text/html\",\n \"Cache-Control\": \"max-age=600\",\n },\n )\n resp.raise_for_status()\n\n # The check for archives above only works if the url ends with\n # something that looks like an archive. However that is not a\n # requirement of an url. Unless we issue a HEAD request on every\n # url we cannot know ahead of time for sure if something is HTML\n # or not. 
However we can check after we've downloaded it.\n content_type = resp.headers.get('Content-Type', 'unknown')\n if not content_type.lower().startswith(\"text/html\"):\n logger.debug(\n 'Skipping page %s because of Content-Type: %s',\n link,\n content_type,\n )\n return\n\n inst = cls(\n resp.content, resp.url, resp.headers,\n trusted=link.trusted,\n )\n except requests.HTTPError as exc:\n level = 2 if exc.response.status_code == 404 else 1\n cls._handle_fail(req, link, exc, url, level=level)\n except requests.ConnectionError as exc:\n cls._handle_fail(\n req, link, \"connection error: %s\" % exc, url,\n )\n except requests.Timeout:\n cls._handle_fail(req, link, \"timed out\", url)\n except SSLError as exc:\n reason = (\"There was a problem confirming the ssl certificate: \"\n \"%s\" % exc)\n cls._handle_fail(\n req, link, reason, url,\n level=2,\n meth=logger.info,\n )\n else:\n return inst\n\n @staticmethod\n def _handle_fail(req, link, reason, url, level=1, meth=None):\n if meth is None:\n meth = logger.debug\n\n meth(\"Could not fetch URL %s: %s\", link, reason)\n meth(\"Will skip URL %s when looking for download links for %s\" %\n (link.url, req))\n\n @staticmethod\n def _get_content_type(url, session):\n \"\"\"Get the Content-Type of the given url, using a HEAD request\"\"\"\n scheme, netloc, path, query, fragment = urllib_parse.urlsplit(url)\n if scheme not in ('http', 'https'):\n # FIXME: some warning or something?\n # assertion error?\n return ''\n\n resp = session.head(url, allow_redirects=True)\n resp.raise_for_status()\n\n return resp.headers.get(\"Content-Type\", \"\")\n\n @cached_property\n def api_version(self):\n metas = [\n x for x in self.parsed.findall(\".//meta\")\n if x.get(\"name\", \"\").lower() == \"api-version\"\n ]\n if metas:\n try:\n return int(metas[0].get(\"value\", None))\n except (TypeError, ValueError):\n pass\n\n return None\n\n @cached_property\n def base_url(self):\n bases = [\n x for x in self.parsed.findall(\".//base\")\n if x.get(\"href\") is not None\n ]\n if bases and bases[0].get(\"href\"):\n return bases[0].get(\"href\")\n else:\n return self.url\n\n @property\n def links(self):\n \"\"\"Yields all links in the page\"\"\"\n for anchor in self.parsed.findall(\".//a\"):\n if anchor.get(\"href\"):\n href = anchor.get(\"href\")\n url = self.clean_link(\n urllib_parse.urljoin(self.base_url, href)\n )\n\n # Determine if this link is internal. 
If that distinction\n # doesn't make sense in this context, then we don't make\n # any distinction.\n internal = None\n if self.api_version and self.api_version >= 2:\n # Only api_versions >= 2 have a distinction between\n # external and internal links\n internal = bool(\n anchor.get(\"rel\")\n and \"internal\" in anchor.get(\"rel\").split()\n )\n\n yield Link(url, self, internal=internal)\n\n def rel_links(self):\n for url in self.explicit_rel_links():\n yield url\n for url in self.scraped_rel_links():\n yield url\n\n def explicit_rel_links(self, rels=('homepage', 'download')):\n \"\"\"Yields all links with the given relations\"\"\"\n rels = set(rels)\n\n for anchor in self.parsed.findall(\".//a\"):\n if anchor.get(\"rel\") and anchor.get(\"href\"):\n found_rels = set(anchor.get(\"rel\").split())\n # Determine the intersection between what rels were found and\n # what rels were being looked for\n if found_rels & rels:\n href = anchor.get(\"href\")\n url = self.clean_link(\n urllib_parse.urljoin(self.base_url, href)\n )\n yield Link(url, self, trusted=False)\n\n def scraped_rel_links(self):\n # Can we get rid of this horrible horrible method?\n for regex in (self._homepage_re, self._download_re):\n match = regex.search(self.content)\n if not match:\n continue\n href_match = self._href_re.search(self.content, pos=match.end())\n if not href_match:\n continue\n url = (\n href_match.group(1)\n or href_match.group(2)\n or href_match.group(3)\n )\n if not url:\n continue\n try:\n url = url.decode(\"ascii\")\n except UnicodeDecodeError:\n continue\n url = self.clean_link(urllib_parse.urljoin(self.base_url, url))\n yield Link(url, self, trusted=False, _deprecated_regex=True)\n\n _clean_re = re.compile(r'[^a-z0-9$&+,/:;=?@.#%_\\\\|-]', re.I)\n\n def clean_link(self, url):\n \"\"\"Makes sure a link is fully encoded. 
That is, if a ' ' shows up in\n the link, it will be rewritten to %20 (while not over-quoting\n % or other characters).\"\"\"\n return self._clean_re.sub(\n lambda match: '%%%2x' % ord(match.group(0)), url)\n\n\nclass Link(object):\n\n def __init__(self, url, comes_from=None, internal=None, trusted=None,\n _deprecated_regex=False):\n\n # url can be a UNC windows share\n if url != Inf and url.startswith('\\\\\\\\'):\n url = path_to_url(url)\n\n self.url = url\n self.comes_from = comes_from\n self.internal = internal\n self.trusted = trusted\n self._deprecated_regex = _deprecated_regex\n\n def __str__(self):\n if self.comes_from:\n return '%s (from %s)' % (self.url, self.comes_from)\n else:\n return str(self.url)\n\n def __repr__(self):\n return '<Link %s>' % self\n\n def __eq__(self, other):\n if not isinstance(other, Link):\n return NotImplemented\n return self.url == other.url\n\n def __ne__(self, other):\n if not isinstance(other, Link):\n return NotImplemented\n return self.url != other.url\n\n def __lt__(self, other):\n if not isinstance(other, Link):\n return NotImplemented\n return self.url < other.url\n\n def __le__(self, other):\n if not isinstance(other, Link):\n return NotImplemented\n return self.url <= other.url\n\n def __gt__(self, other):\n if not isinstance(other, Link):\n return NotImplemented\n return self.url > other.url\n\n def __ge__(self, other):\n if not isinstance(other, Link):\n return NotImplemented\n return self.url >= other.url\n\n def __hash__(self):\n return hash(self.url)\n\n @property\n def filename(self):\n _, netloc, path, _, _ = urllib_parse.urlsplit(self.url)\n name = posixpath.basename(path.rstrip('/')) or netloc\n name = urllib_parse.unquote(name)\n assert name, ('URL %r produced no filename' % self.url)\n return name\n\n @property\n def scheme(self):\n return urllib_parse.urlsplit(self.url)[0]\n\n @property\n def netloc(self):\n return urllib_parse.urlsplit(self.url)[1]\n\n @property\n def path(self):\n return urllib_parse.urlsplit(self.url)[2]\n\n def splitext(self):\n return splitext(posixpath.basename(self.path.rstrip('/')))\n\n @property\n def ext(self):\n return self.splitext()[1]\n\n @property\n def url_without_fragment(self):\n scheme, netloc, path, query, fragment = urllib_parse.urlsplit(self.url)\n return urllib_parse.urlunsplit((scheme, netloc, path, query, None))\n\n _egg_fragment_re = re.compile(r'#egg=([^&]*)')\n\n @property\n def egg_fragment(self):\n match = self._egg_fragment_re.search(self.url)\n if not match:\n return None\n return match.group(1)\n\n _hash_re = re.compile(\n r'(sha1|sha224|sha384|sha256|sha512|md5)=([a-f0-9]+)'\n )\n\n @property\n def hash(self):\n match = self._hash_re.search(self.url)\n if match:\n return match.group(2)\n return None\n\n @property\n def hash_name(self):\n match = self._hash_re.search(self.url)\n if match:\n return match.group(1)\n return None\n\n @property\n def show_url(self):\n return posixpath.basename(self.url.split('#', 1)[0].split('?', 1)[0])\n\n @property\n def verifiable(self):\n \"\"\"\n Returns True if this link can be verified after download, False if it\n cannot, and None if we cannot determine.\n \"\"\"\n trusted = self.trusted or getattr(self.comes_from, \"trusted\", None)\n if trusted is not None and trusted:\n # This link came from a trusted source. 
It *may* be verifiable but\n # first we need to see if this page is operating under the new\n # API version.\n try:\n api_version = getattr(self.comes_from, \"api_version\", None)\n api_version = int(api_version)\n except (ValueError, TypeError):\n api_version = None\n\n if api_version is None or api_version <= 1:\n # This link is either trusted, or it came from a trusted,\n # however it is not operating under the API version 2 so\n # we can't make any claims about if it's safe or not\n return\n\n if self.hash:\n # This link came from a trusted source and it has a hash, so we\n # can consider it safe.\n return True\n else:\n # This link came from a trusted source, using the new API\n # version, and it does not have a hash. It is NOT verifiable\n return False\n elif trusted is not None:\n # This link came from an untrusted source and we cannot trust it\n return False\n\n\n# An object to represent the \"link\" for the installed version of a requirement.\n# Using Inf as the url makes it sort higher.\nINSTALLED_VERSION = Link(Inf)\n", "path": "pip/index.py"}]} |
gh_patches_debug_1146 | rasdani/github-patches | git_diff | chainer__chainer-7561 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support ellipsis in `Array::At` and `__getitem__`
Depends on #7559 because `py::ellipsis` is supported from v2.3.0.
--- END ISSUE ---
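For context, here is a minimal sketch of the indexing behaviour the issue asks for. It is illustrative only (not part of the original issue) and assumes the public `chainerx` Python API (`chainerx.arange`, `ndarray.reshape`):

```python
import chainerx as chx

a = chx.arange(24).reshape(2, 3, 4)

# Once Ellipsis is supported by Array::At / __getitem__, the Ellipsis form
# should be handled natively instead of raising or falling back:
b1 = a[..., 0]    # Ellipsis expands to the leading full slices
b2 = a[:, :, 0]   # equivalent explicit form
```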
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `chainerx/_fallback_workarounds.py`
Content:
```
1 # This file defines workaround implementation for
2 # NumPy-compatibility functions that fall back to NumPy/CuPy functions
3 # for native/cuda devices respecitvely.
4 # The workaround does not support backprop, and also requires external
5 # libraries mentioned above.
6 # Functions defined in this file should be considered to have high priority for
7 # genuine implementations.
8 import numpy
9
10 import chainerx
11
12
13 try:
14 import cupy
15 except Exception:
16 cupy = None
17
18
19 class _DummyContext:
20 def __enter__(self):
21 pass
22
23 def __exit__(self, type, value, traceback):
24 pass
25
26
27 _dummy_context = _DummyContext()
28
29
30 def _to_numpy(array):
31 assert isinstance(array, chainerx.ndarray)
32 return chainerx.to_numpy(array, copy=False)
33
34
35 def _from_numpy(array):
36 assert isinstance(array, numpy.ndarray)
37 return chainerx.array(array, copy=False)
38
39
40 def _to_cupy(array):
41 assert cupy is not None
42 # Convert to cupy.ndarray on the same device as source array
43 return chainerx._to_cupy(array)
44
45
46 def _from_cupy(array):
47 assert cupy is not None
48 assert isinstance(array, cupy.ndarray)
49 device = chainerx.get_device('cuda', array.device.id)
50 return chainerx._core._fromrawpointer(
51 array.data.mem.ptr,
52 array.shape,
53 array.dtype,
54 array.strides,
55 device,
56 array.data.ptr - array.data.mem.ptr,
57 array)
58
59
60 def _from_chx(array, check_backprop=True):
61 # Converts chainerx.ndarray to numpy/cupy.ndarray.
62 # Objects with other types are kept intact.
63 # Returns a pair: (xp, cupy device or dummy context, numpy/cupy.ndarray).
64 if not isinstance(array, chainerx.ndarray):
65 if (isinstance(array, numpy.ndarray)
66 or (cupy and isinstance(array, cupy.ndarray))):
67 raise TypeError(
68 'ChainerX function fallback using NumPy/CuPy arrays '
69 'is not supported.')
70 # _from_chx is also called for slice and tuple objects
71 # Used to index a chx array
72 return None, _dummy_context, array
73 if check_backprop and array.is_backprop_required():
74 raise RuntimeError(
75 'ChainerX function fallback using NumPy/CuPy is not '
76 'supported for arrays that are connected to a graph.')
77 backend_name = array.device.backend.name
78 if backend_name == 'native':
79 return numpy, _dummy_context, _to_numpy(array)
80 if backend_name == 'cuda':
81 if cupy is None:
82 raise RuntimeError(
83 'ChainerX fallback implementation for cuda backend requires '
84 'cupy to be installed.')
85 array_cupy = _to_cupy(array)
86 return cupy, array_cupy.device, array_cupy
87 raise RuntimeError(
88 'ChainerX fallback implementation only supports native or cuda '
89 'backends.')
90
91
92 def _to_chx(array):
93 # Converts numpy/cupy.ndarray to chainerx.ndarray.
94 # Objects with other types are kept intact.
95 if isinstance(array, numpy.ndarray):
96 return _from_numpy(array)
97 elif cupy is not None and isinstance(array, cupy.ndarray):
98 return _from_cupy(array)
99 return array
100
101
102 def _populate_module_functions():
103
104 def _fix(arr):
105 xp, dev, arr = _from_chx(arr)
106 with dev:
107 ret = xp.fix(arr)
108 ret = xp.asarray(ret)
109 return _to_chx(ret)
110
111 chainerx.fix = _fix
112
113
114 def _populate_ndarray():
115 ndarray = chainerx.ndarray
116
117 # __getitem__ with advanced indexing
118 old_getitem = ndarray.__getitem__
119
120 def __getitem__(arr, key):
121 try:
122 return old_getitem(arr, key)
123 except (IndexError, chainerx.DimensionError):
124 pass
125
126 is_backprop_required = arr.is_backprop_required()
127
128 xp, dev, arr = _from_chx(arr, check_backprop=False)
129 # The elements used for indexing the array might be
130 # also ChainerX arrays. _from_chx ignores
131 # other types and return them as-is
132 if isinstance(key, tuple):
133 key = tuple([_from_chx(k, check_backprop=False)[2] for k in key])
134 else:
135 _, _, key = _from_chx(key, check_backprop=False)
136
137 with dev:
138 ret = arr[key]
139
140 # Doing this check after the fallback __getitem__ because the error
141 # which caused the fallback might not be due to advanced indexing.
142 # In such case the fallback __getitem__ should also raise the error.
143
144 if is_backprop_required:
145 raise RuntimeError(
146 'ChainerX getitem fallback for advanced indexing is not '
147 'supported for arrays that are connected to a graph.')
148
149 return _to_chx(ret)
150
151 # __setitem__ with advanced indexing
152 def __setitem__(self, key, value):
153 if self.is_backprop_required():
154 raise RuntimeError(
155 'ChainerX setitem fallback for advanced indexing is not '
156 'supported for arrays that are connected to a graph.')
157
158 xp, dev, self = _from_chx(self)
159 if isinstance(key, tuple):
160 key = tuple([_from_chx(k)[2] for k in key])
161 else:
162 _, _, key = _from_chx(key)
163 _, _, value = _from_chx(value)
164
165 with dev:
166 self[key] = value
167
168 ndarray.__setitem__ = __setitem__
169 ndarray.__getitem__ = __getitem__
170
171 def tolist(arr):
172 _, dev, arr = _from_chx(arr)
173 with dev:
174 ret = arr.tolist()
175 return ret
176
177 ndarray.tolist = tolist
178
179
180 def populate():
181 _populate_module_functions()
182 _populate_ndarray()
183
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/chainerx/_fallback_workarounds.py b/chainerx/_fallback_workarounds.py
--- a/chainerx/_fallback_workarounds.py
+++ b/chainerx/_fallback_workarounds.py
@@ -118,10 +118,8 @@
     old_getitem = ndarray.__getitem__
 
     def __getitem__(arr, key):
-        try:
+        if not isinstance(key, chainerx.ndarray):
             return old_getitem(arr, key)
-        except (IndexError, chainerx.DimensionError):
-            pass
 
     is_backprop_required = arr.is_backprop_required()
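In effect, the patch replaces exception-driven fallback detection with an explicit type check: keys that are not `chainerx.ndarray` objects (integers, slices, and now `Ellipsis`) are dispatched to the native `__getitem__`, and only `chainerx.ndarray` keys (advanced indexing) take the NumPy/CuPy fallback path. A rough post-patch usage sketch (an illustrative assumption, not taken from the record):

```python
import chainerx as chx

a = chx.arange(24).reshape(2, 3, 4)

a[..., 0]   # not a chainerx.ndarray key: dispatched to the native __getitem__
a[a > 5]    # chainerx.ndarray key (advanced indexing): NumPy/CuPy fallback
```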
| {"golden_diff": "diff --git a/chainerx/_fallback_workarounds.py b/chainerx/_fallback_workarounds.py\n--- a/chainerx/_fallback_workarounds.py\n+++ b/chainerx/_fallback_workarounds.py\n@@ -118,10 +118,8 @@\n old_getitem = ndarray.__getitem__\n \n def __getitem__(arr, key):\n- try:\n+ if not isinstance(key, chainerx.ndarray):\n return old_getitem(arr, key)\n- except (IndexError, chainerx.DimensionError):\n- pass\n \n is_backprop_required = arr.is_backprop_required()\n", "issue": "Support ellipsis in `Array::At` and `__getitem__`\nDepends on #7559 because `py::ellipsis` is supported from v2.3.0.\n", "before_files": [{"content": "# This file defines workaround implementation for\n# NumPy-compatibility functions that fall back to NumPy/CuPy functions\n# for native/cuda devices respecitvely.\n# The workaround does not support backprop, and also requires external\n# libraries mentioned above.\n# Functions defined in this file should be considered to have high priority for\n# genuine implementations.\nimport numpy\n\nimport chainerx\n\n\ntry:\n import cupy\nexcept Exception:\n cupy = None\n\n\nclass _DummyContext:\n def __enter__(self):\n pass\n\n def __exit__(self, type, value, traceback):\n pass\n\n\n_dummy_context = _DummyContext()\n\n\ndef _to_numpy(array):\n assert isinstance(array, chainerx.ndarray)\n return chainerx.to_numpy(array, copy=False)\n\n\ndef _from_numpy(array):\n assert isinstance(array, numpy.ndarray)\n return chainerx.array(array, copy=False)\n\n\ndef _to_cupy(array):\n assert cupy is not None\n # Convert to cupy.ndarray on the same device as source array\n return chainerx._to_cupy(array)\n\n\ndef _from_cupy(array):\n assert cupy is not None\n assert isinstance(array, cupy.ndarray)\n device = chainerx.get_device('cuda', array.device.id)\n return chainerx._core._fromrawpointer(\n array.data.mem.ptr,\n array.shape,\n array.dtype,\n array.strides,\n device,\n array.data.ptr - array.data.mem.ptr,\n array)\n\n\ndef _from_chx(array, check_backprop=True):\n # Converts chainerx.ndarray to numpy/cupy.ndarray.\n # Objects with other types are kept intact.\n # Returns a pair: (xp, cupy device or dummy context, numpy/cupy.ndarray).\n if not isinstance(array, chainerx.ndarray):\n if (isinstance(array, numpy.ndarray)\n or (cupy and isinstance(array, cupy.ndarray))):\n raise TypeError(\n 'ChainerX function fallback using NumPy/CuPy arrays '\n 'is not supported.')\n # _from_chx is also called for slice and tuple objects\n # Used to index a chx array\n return None, _dummy_context, array\n if check_backprop and array.is_backprop_required():\n raise RuntimeError(\n 'ChainerX function fallback using NumPy/CuPy is not '\n 'supported for arrays that are connected to a graph.')\n backend_name = array.device.backend.name\n if backend_name == 'native':\n return numpy, _dummy_context, _to_numpy(array)\n if backend_name == 'cuda':\n if cupy is None:\n raise RuntimeError(\n 'ChainerX fallback implementation for cuda backend requires '\n 'cupy to be installed.')\n array_cupy = _to_cupy(array)\n return cupy, array_cupy.device, array_cupy\n raise RuntimeError(\n 'ChainerX fallback implementation only supports native or cuda '\n 'backends.')\n\n\ndef _to_chx(array):\n # Converts numpy/cupy.ndarray to chainerx.ndarray.\n # Objects with other types are kept intact.\n if isinstance(array, numpy.ndarray):\n return _from_numpy(array)\n elif cupy is not None and isinstance(array, cupy.ndarray):\n return _from_cupy(array)\n return array\n\n\ndef _populate_module_functions():\n\n def _fix(arr):\n xp, dev, arr = 
_from_chx(arr)\n with dev:\n ret = xp.fix(arr)\n ret = xp.asarray(ret)\n return _to_chx(ret)\n\n chainerx.fix = _fix\n\n\ndef _populate_ndarray():\n ndarray = chainerx.ndarray\n\n # __getitem__ with advanced indexing\n old_getitem = ndarray.__getitem__\n\n def __getitem__(arr, key):\n try:\n return old_getitem(arr, key)\n except (IndexError, chainerx.DimensionError):\n pass\n\n is_backprop_required = arr.is_backprop_required()\n\n xp, dev, arr = _from_chx(arr, check_backprop=False)\n # The elements used for indexing the array might be\n # also ChainerX arrays. _from_chx ignores\n # other types and return them as-is\n if isinstance(key, tuple):\n key = tuple([_from_chx(k, check_backprop=False)[2] for k in key])\n else:\n _, _, key = _from_chx(key, check_backprop=False)\n\n with dev:\n ret = arr[key]\n\n # Doing this check after the fallback __getitem__ because the error\n # which caused the fallback might not be due to advanced indexing.\n # In such case the fallback __getitem__ should also raise the error.\n\n if is_backprop_required:\n raise RuntimeError(\n 'ChainerX getitem fallback for advanced indexing is not '\n 'supported for arrays that are connected to a graph.')\n\n return _to_chx(ret)\n\n # __setitem__ with advanced indexing\n def __setitem__(self, key, value):\n if self.is_backprop_required():\n raise RuntimeError(\n 'ChainerX setitem fallback for advanced indexing is not '\n 'supported for arrays that are connected to a graph.')\n\n xp, dev, self = _from_chx(self)\n if isinstance(key, tuple):\n key = tuple([_from_chx(k)[2] for k in key])\n else:\n _, _, key = _from_chx(key)\n _, _, value = _from_chx(value)\n\n with dev:\n self[key] = value\n\n ndarray.__setitem__ = __setitem__\n ndarray.__getitem__ = __getitem__\n\n def tolist(arr):\n _, dev, arr = _from_chx(arr)\n with dev:\n ret = arr.tolist()\n return ret\n\n ndarray.tolist = tolist\n\n\ndef populate():\n _populate_module_functions()\n _populate_ndarray()\n", "path": "chainerx/_fallback_workarounds.py"}], "after_files": [{"content": "# This file defines workaround implementation for\n# NumPy-compatibility functions that fall back to NumPy/CuPy functions\n# for native/cuda devices respecitvely.\n# The workaround does not support backprop, and also requires external\n# libraries mentioned above.\n# Functions defined in this file should be considered to have high priority for\n# genuine implementations.\nimport numpy\n\nimport chainerx\n\n\ntry:\n import cupy\nexcept Exception:\n cupy = None\n\n\nclass _DummyContext:\n def __enter__(self):\n pass\n\n def __exit__(self, type, value, traceback):\n pass\n\n\n_dummy_context = _DummyContext()\n\n\ndef _to_numpy(array):\n assert isinstance(array, chainerx.ndarray)\n return chainerx.to_numpy(array, copy=False)\n\n\ndef _from_numpy(array):\n assert isinstance(array, numpy.ndarray)\n return chainerx.array(array, copy=False)\n\n\ndef _to_cupy(array):\n assert cupy is not None\n # Convert to cupy.ndarray on the same device as source array\n return chainerx._to_cupy(array)\n\n\ndef _from_cupy(array):\n assert cupy is not None\n assert isinstance(array, cupy.ndarray)\n device = chainerx.get_device('cuda', array.device.id)\n return chainerx._core._fromrawpointer(\n array.data.mem.ptr,\n array.shape,\n array.dtype,\n array.strides,\n device,\n array.data.ptr - array.data.mem.ptr,\n array)\n\n\ndef _from_chx(array, check_backprop=True):\n # Converts chainerx.ndarray to numpy/cupy.ndarray.\n # Objects with other types are kept intact.\n # Returns a pair: (xp, cupy device or dummy context, 
numpy/cupy.ndarray).\n if not isinstance(array, chainerx.ndarray):\n if (isinstance(array, numpy.ndarray)\n or (cupy and isinstance(array, cupy.ndarray))):\n raise TypeError(\n 'ChainerX function fallback using NumPy/CuPy arrays '\n 'is not supported.')\n # _from_chx is also called for slice and tuple objects\n # Used to index a chx array\n return None, _dummy_context, array\n if check_backprop and array.is_backprop_required():\n raise RuntimeError(\n 'ChainerX function fallback using NumPy/CuPy is not '\n 'supported for arrays that are connected to a graph.')\n backend_name = array.device.backend.name\n if backend_name == 'native':\n return numpy, _dummy_context, _to_numpy(array)\n if backend_name == 'cuda':\n if cupy is None:\n raise RuntimeError(\n 'ChainerX fallback implementation for cuda backend requires '\n 'cupy to be installed.')\n array_cupy = _to_cupy(array)\n return cupy, array_cupy.device, array_cupy\n raise RuntimeError(\n 'ChainerX fallback implementation only supports native or cuda '\n 'backends.')\n\n\ndef _to_chx(array):\n # Converts numpy/cupy.ndarray to chainerx.ndarray.\n # Objects with other types are kept intact.\n if isinstance(array, numpy.ndarray):\n return _from_numpy(array)\n elif cupy is not None and isinstance(array, cupy.ndarray):\n return _from_cupy(array)\n return array\n\n\ndef _populate_module_functions():\n\n def _fix(arr):\n xp, dev, arr = _from_chx(arr)\n with dev:\n ret = xp.fix(arr)\n ret = xp.asarray(ret)\n return _to_chx(ret)\n\n chainerx.fix = _fix\n\n\ndef _populate_ndarray():\n ndarray = chainerx.ndarray\n\n # __getitem__ with advanced indexing\n old_getitem = ndarray.__getitem__\n\n def __getitem__(arr, key):\n if not isinstance(key, chainerx.ndarray):\n return old_getitem(arr, key)\n\n is_backprop_required = arr.is_backprop_required()\n\n xp, dev, arr = _from_chx(arr, check_backprop=False)\n # The elements used for indexing the array might be\n # also ChainerX arrays. _from_chx ignores\n # other types and return them as-is\n if isinstance(key, tuple):\n key = tuple([_from_chx(k, check_backprop=False)[2] for k in key])\n else:\n _, _, key = _from_chx(key, check_backprop=False)\n\n with dev:\n ret = arr[key]\n\n # Doing this check after the fallback __getitem__ because the error\n # which caused the fallback might not be due to advanced indexing.\n # In such case the fallback __getitem__ should also raise the error.\n\n if is_backprop_required:\n raise RuntimeError(\n 'ChainerX getitem fallback for advanced indexing is not '\n 'supported for arrays that are connected to a graph.')\n\n return _to_chx(ret)\n\n # __setitem__ with advanced indexing\n def __setitem__(self, key, value):\n if self.is_backprop_required():\n raise RuntimeError(\n 'ChainerX setitem fallback for advanced indexing is not '\n 'supported for arrays that are connected to a graph.')\n\n xp, dev, self = _from_chx(self)\n if isinstance(key, tuple):\n key = tuple([_from_chx(k)[2] for k in key])\n else:\n _, _, key = _from_chx(key)\n _, _, value = _from_chx(value)\n\n with dev:\n self[key] = value\n\n ndarray.__setitem__ = __setitem__\n ndarray.__getitem__ = __getitem__\n\n def tolist(arr):\n _, dev, arr = _from_chx(arr)\n with dev:\n ret = arr.tolist()\n return ret\n\n ndarray.tolist = tolist\n\n\ndef populate():\n _populate_module_functions()\n _populate_ndarray()\n", "path": "chainerx/_fallback_workarounds.py"}]} |
gh_patches_debug_1147 | rasdani/github-patches | git_diff | mitmproxy__mitmproxy-1711 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pathoc does not accept `:pa,f` to pause forever at end of message
##### Steps to reproduce the problem:
`pathoc www.example.com 'get:/:pa,f'`
##### What is the expected behavior?
Send request, but pause forever after sending.
##### What went wrong?
I get a stack trace with "a float is required".
```
$ pathoc www.example.com 'get:/:pa,f'
08-09-16 16:59:41: >> 'GET':/:pa,f
Traceback (most recent call last):
File "/usr/local/bin/pathoc", line 11, in <module>
sys.exit(go_pathoc())
File "/usr/local/lib/python2.7/dist-packages/pathod/pathoc_cmdline.py", line 226, in go_pathoc
pathoc.main(args)
File "/usr/local/lib/python2.7/dist-packages/pathod/pathoc.py", line 522, in main
ret = p.request(spec)
File "/usr/local/lib/python2.7/dist-packages/pathod/pathoc.py", line 452, in request
return self.http(r)
File "/usr/local/lib/python2.7/dist-packages/pathod/pathoc.py", line 432, in http
return resp
File "/usr/local/lib/python2.7/dist-packages/pathod/pathoc.py", line 411, in http
req = language.serve(r, self.wfile, self.settings)
File "/usr/local/lib/python2.7/dist-packages/pathod/language/__init__.py", line 105, in serve
disconnect = writer.write_values(fp, vals, actions[:])
File "/usr/local/lib/python2.7/dist-packages/pathod/language/writer.py", line 61, in write_values
time.sleep(a[2])
TypeError: a float is required
```
##### Any other comments? What have you tried so far?
All other combinations of pause flags work as expected:
```
$ pathoc www.example.com 'get:/:p2,5'
08-09-16 17:05:07: >> 'GET':/:p2,5
<< 200 OK: 1270 bytes
$ pathoc www.example.com 'get:/:pr,5'
08-09-16 17:05:21: >> 'GET':/:pr,5
<< 200 OK: 1270 bytes
$ pathoc www.example.com 'get:/:pa,5'
08-09-16 17:05:41: >> 'GET':/:pa,5
<< 200 OK: 1270 bytes
$ pathoc www.example.com 'get:/:p2,f'
^C08-09-16 17:04:46: >> 'GET':/:p2,f
$ pathoc www.example.com 'get:/:pr,f'
^C08-09-16 17:04:55: >> 'GET':/:pr,f
```
---
pathoc version: 0.17
Operating System: Debian Linux 8.5 "Jessie" x64
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pathod/language/writer.py`
Content:
```
1 import time
2 from mitmproxy import exceptions
3
4 BLOCKSIZE = 1024
5 # It's not clear what the upper limit for time.sleep is. It's lower than the
6 # maximum int or float. 1 year should do.
7 FOREVER = 60 * 60 * 24 * 365
8
9
10 def send_chunk(fp, val, blocksize, start, end):
11 """
12 (start, end): Inclusive lower bound, exclusive upper bound.
13 """
14 for i in range(start, end, blocksize):
15 fp.write(
16 val[i:min(i + blocksize, end)]
17 )
18 return end - start
19
20
21 def write_values(fp, vals, actions, sofar=0, blocksize=BLOCKSIZE):
22 """
23 vals: A list of values, which may be strings or Value objects.
24
25 actions: A list of (offset, action, arg) tuples. Action may be "inject",
26 "pause" or "disconnect".
27
28 Both vals and actions are in reverse order, with the first items last.
29
30 Return True if connection should disconnect.
31 """
32 sofar = 0
33 try:
34 while vals:
35 v = vals.pop()
36 offset = 0
37 while actions and actions[-1][0] < (sofar + len(v)):
38 a = actions.pop()
39 offset += send_chunk(
40 fp,
41 v,
42 blocksize,
43 offset,
44 a[0] - sofar - offset
45 )
46 if a[1] == "pause":
47 time.sleep(
48 FOREVER if a[2] == "f" else a[2]
49 )
50 elif a[1] == "disconnect":
51 return True
52 elif a[1] == "inject":
53 send_chunk(fp, a[2], blocksize, 0, len(a[2]))
54 send_chunk(fp, v, blocksize, offset, len(v))
55 sofar += len(v)
56 # Remainders
57 while actions:
58 a = actions.pop()
59 if a[1] == "pause":
60 time.sleep(a[2])
61 elif a[1] == "disconnect":
62 return True
63 elif a[1] == "inject":
64 send_chunk(fp, a[2], blocksize, 0, len(a[2]))
65 except exceptions.TcpDisconnect: # pragma: no cover
66 return True
67
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pathod/language/writer.py b/pathod/language/writer.py
--- a/pathod/language/writer.py
+++ b/pathod/language/writer.py
@@ -57,7 +57,9 @@
while actions:
a = actions.pop()
if a[1] == "pause":
- time.sleep(a[2])
+ time.sleep(
+ FOREVER if a[2] == "f" else a[2]
+ )
elif a[1] == "disconnect":
return True
elif a[1] == "inject":
| {"golden_diff": "diff --git a/pathod/language/writer.py b/pathod/language/writer.py\n--- a/pathod/language/writer.py\n+++ b/pathod/language/writer.py\n@@ -57,7 +57,9 @@\n while actions:\n a = actions.pop()\n if a[1] == \"pause\":\n- time.sleep(a[2])\n+ time.sleep(\n+ FOREVER if a[2] == \"f\" else a[2]\n+ )\n elif a[1] == \"disconnect\":\n return True\n elif a[1] == \"inject\":\n", "issue": "pathoc does not accept `:pa,f` to pause forever at end of message\n##### Steps to reproduce the problem:\n\n`pathoc www.example.com 'get:/:pa,f'`\n##### What is the expected behavior?\n\nSend request, but pause forever after sending.\n##### What went wrong?\n\nI get a stack trace with \"a float is required\".\n\n```\n$ pathoc www.example.com 'get:/:pa,f'\n08-09-16 16:59:41: >> 'GET':/:pa,f\nTraceback (most recent call last):\n File \"/usr/local/bin/pathoc\", line 11, in <module>\n sys.exit(go_pathoc())\n File \"/usr/local/lib/python2.7/dist-packages/pathod/pathoc_cmdline.py\", line 226, in go_pathoc\n pathoc.main(args)\n File \"/usr/local/lib/python2.7/dist-packages/pathod/pathoc.py\", line 522, in main\n ret = p.request(spec)\n File \"/usr/local/lib/python2.7/dist-packages/pathod/pathoc.py\", line 452, in request\n return self.http(r)\n File \"/usr/local/lib/python2.7/dist-packages/pathod/pathoc.py\", line 432, in http\n return resp\n File \"/usr/local/lib/python2.7/dist-packages/pathod/pathoc.py\", line 411, in http\n req = language.serve(r, self.wfile, self.settings)\n File \"/usr/local/lib/python2.7/dist-packages/pathod/language/__init__.py\", line 105, in serve\n disconnect = writer.write_values(fp, vals, actions[:])\n File \"/usr/local/lib/python2.7/dist-packages/pathod/language/writer.py\", line 61, in write_values\n time.sleep(a[2])\nTypeError: a float is required\n```\n##### Any other comments? What have you tried so far?\n\nAll other combinations of pause flags work as expected:\n\n```\n$ pathoc www.example.com 'get:/:p2,5'\n08-09-16 17:05:07: >> 'GET':/:p2,5\n<< 200 OK: 1270 bytes\n$ pathoc www.example.com 'get:/:pr,5'\n08-09-16 17:05:21: >> 'GET':/:pr,5\n<< 200 OK: 1270 bytes\n$ pathoc www.example.com 'get:/:pa,5'\n08-09-16 17:05:41: >> 'GET':/:pa,5\n<< 200 OK: 1270 bytes\n$ pathoc www.example.com 'get:/:p2,f'\n^C08-09-16 17:04:46: >> 'GET':/:p2,f\n$ pathoc www.example.com 'get:/:pr,f'\n^C08-09-16 17:04:55: >> 'GET':/:pr,f\n```\n\n---\n\npathoc version: 0.17\nOperating System: Debian Linux 8.5 \"Jessie\" x64\n\n", "before_files": [{"content": "import time\nfrom mitmproxy import exceptions\n\nBLOCKSIZE = 1024\n# It's not clear what the upper limit for time.sleep is. It's lower than the\n# maximum int or float. 1 year should do.\nFOREVER = 60 * 60 * 24 * 365\n\n\ndef send_chunk(fp, val, blocksize, start, end):\n \"\"\"\n (start, end): Inclusive lower bound, exclusive upper bound.\n \"\"\"\n for i in range(start, end, blocksize):\n fp.write(\n val[i:min(i + blocksize, end)]\n )\n return end - start\n\n\ndef write_values(fp, vals, actions, sofar=0, blocksize=BLOCKSIZE):\n \"\"\"\n vals: A list of values, which may be strings or Value objects.\n\n actions: A list of (offset, action, arg) tuples. 
Action may be \"inject\",\n \"pause\" or \"disconnect\".\n\n Both vals and actions are in reverse order, with the first items last.\n\n Return True if connection should disconnect.\n \"\"\"\n sofar = 0\n try:\n while vals:\n v = vals.pop()\n offset = 0\n while actions and actions[-1][0] < (sofar + len(v)):\n a = actions.pop()\n offset += send_chunk(\n fp,\n v,\n blocksize,\n offset,\n a[0] - sofar - offset\n )\n if a[1] == \"pause\":\n time.sleep(\n FOREVER if a[2] == \"f\" else a[2]\n )\n elif a[1] == \"disconnect\":\n return True\n elif a[1] == \"inject\":\n send_chunk(fp, a[2], blocksize, 0, len(a[2]))\n send_chunk(fp, v, blocksize, offset, len(v))\n sofar += len(v)\n # Remainders\n while actions:\n a = actions.pop()\n if a[1] == \"pause\":\n time.sleep(a[2])\n elif a[1] == \"disconnect\":\n return True\n elif a[1] == \"inject\":\n send_chunk(fp, a[2], blocksize, 0, len(a[2]))\n except exceptions.TcpDisconnect: # pragma: no cover\n return True\n", "path": "pathod/language/writer.py"}], "after_files": [{"content": "import time\nfrom mitmproxy import exceptions\n\nBLOCKSIZE = 1024\n# It's not clear what the upper limit for time.sleep is. It's lower than the\n# maximum int or float. 1 year should do.\nFOREVER = 60 * 60 * 24 * 365\n\n\ndef send_chunk(fp, val, blocksize, start, end):\n \"\"\"\n (start, end): Inclusive lower bound, exclusive upper bound.\n \"\"\"\n for i in range(start, end, blocksize):\n fp.write(\n val[i:min(i + blocksize, end)]\n )\n return end - start\n\n\ndef write_values(fp, vals, actions, sofar=0, blocksize=BLOCKSIZE):\n \"\"\"\n vals: A list of values, which may be strings or Value objects.\n\n actions: A list of (offset, action, arg) tuples. Action may be \"inject\",\n \"pause\" or \"disconnect\".\n\n Both vals and actions are in reverse order, with the first items last.\n\n Return True if connection should disconnect.\n \"\"\"\n sofar = 0\n try:\n while vals:\n v = vals.pop()\n offset = 0\n while actions and actions[-1][0] < (sofar + len(v)):\n a = actions.pop()\n offset += send_chunk(\n fp,\n v,\n blocksize,\n offset,\n a[0] - sofar - offset\n )\n if a[1] == \"pause\":\n time.sleep(\n FOREVER if a[2] == \"f\" else a[2]\n )\n elif a[1] == \"disconnect\":\n return True\n elif a[1] == \"inject\":\n send_chunk(fp, a[2], blocksize, 0, len(a[2]))\n send_chunk(fp, v, blocksize, offset, len(v))\n sofar += len(v)\n # Remainders\n while actions:\n a = actions.pop()\n if a[1] == \"pause\":\n time.sleep(\n FOREVER if a[2] == \"f\" else a[2]\n )\n elif a[1] == \"disconnect\":\n return True\n elif a[1] == \"inject\":\n send_chunk(fp, a[2], blocksize, 0, len(a[2]))\n except exceptions.TcpDisconnect: # pragma: no cover\n return True\n", "path": "pathod/language/writer.py"}]} |
gh_patches_debug_1148 | rasdani/github-patches | git_diff | pypa__pipenv-3431 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
--skip-lock throws validation error
### Issue description
`--skip-lock` flag throws a validation error
### Expected result
That I get the same successful behaviour as in 2018.11.26 .
### Actual result
Throws a validation error due to the `--skip-lock` flag.
```
Installing dependencies from Pipfile…
Traceback (most recent call last):
File "/usr/local/Cellar/pipenv/2018.11.26/libexec/bin/pipenv", line 11, in <module>
load_entry_point('pipenv==2018.11.26', 'console_scripts', 'pipenv')()
File "/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv/vendor/click/core.py", line 764, in __call__
return self.main(*args, **kwargs)
File "/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv/vendor/click/core.py", line 717, in main
rv = self.invoke(ctx)
File "/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv/vendor/click/core.py", line 1137, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv/vendor/click/core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv/vendor/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv/vendor/click/decorators.py", line 64, in new_func
return ctx.invoke(f, obj, *args, **kwargs)
File "/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv/vendor/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv/vendor/click/decorators.py", line 17, in new_func
return f(get_current_context(), *args, **kwargs)
File "/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv/cli/command.py", line 254, in install
editable_packages=state.installstate.editables,
File "/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv/core.py", line 1874, in do_install
keep_outdated=keep_outdated
File "/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv/core.py", line 1253, in do_init
pypi_mirror=pypi_mirror,
File "/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv/core.py", line 795, in do_install_dependencies
lockfile = project.get_or_create_lockfile(from_pipfile=True)
File "/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv/project.py", line 756, in get_or_create_lockfile
path=self.lockfile_location, data=lockfile_dict, meta_from_project=False
File "/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv/vendor/requirementslib/models/lockfile.py", line 209, in from_data
lockfile = plette.lockfiles.Lockfile(data)
File "/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv/vendor/plette/models/base.py", line 37, in __init__
self.validate(data)
File "/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv/vendor/plette/lockfiles.py", line 80, in validate
klass.validate(data[key])
File "/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv/vendor/plette/models/sections.py", line 70, in validate
klass.validate(data[key])
File "/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv/vendor/plette/models/base.py", line 132, in validate
cls.item_class.validate(d)
File "/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv/vendor/plette/models/base.py", line 67, in validate
return validate(cls, data)
File "/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv/vendor/requirementslib/models/pipfile.py", line 59, in validate
raise plette.models.base.ValidationError(data, v)
plette.models.base.ValidationError: {'url': 'https://pypi.python.org/simple', 'verify_ssl': True}
```
### Steps to replicate
I have a simple Pipfile like
```
[[source]]
url = "https://pypi.python.org/simple"
verify_ssl = true
[requires]
python_version = "3.7"
[dev-packages]
"boto3" = "*"
"flake8" = "*"
[packages]
requests = "*"
```
and run `pipenv install --skip-lock --dev`.
-------------------------------------------------------------------------------
<details><summary>$ pipenv --support</summary>
Pipenv version: `'2018.11.26'`
Pipenv location: `'/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv'`
Python location: `'/usr/local/Cellar/pipenv/2018.11.26/libexec/bin/python3.7'`
Python installations found:
- `3.7.2`: `/usr/local/bin/python3`
- `3.7.2`: `/usr/local/bin/python3.7m`
- `3.7.1`: `/Users/andrei/.pyenv/versions/3.7.1/bin/python3`
- `3.7.1`: `/Users/andrei/.pyenv/versions/3.7.1/bin/python3.7m`
- `3.7.0`: `/Users/andrei/.pyenv/versions/3.7.0/bin/python3`
- `3.7.0`: `/Users/andrei/.pyenv/versions/3.7.0/bin/python3.7m`
- `2.7.15`: `/usr/local/bin/python`
- `2.7.15`: `/usr/local/bin/pythonw`
- `2.7.10`: `/usr/bin/python`
- `2.7.10`: `/usr/bin/pythonw`
- `2.7.10`: `/usr/bin/python2.7`
PEP 508 Information:
```
{'implementation_name': 'cpython',
'implementation_version': '3.7.2',
'os_name': 'posix',
'platform_machine': 'x86_64',
'platform_python_implementation': 'CPython',
'platform_release': '18.2.0',
'platform_system': 'Darwin',
'platform_version': 'Darwin Kernel Version 18.2.0: Fri Dec 14 18:43:36 PST '
'2018; root:xnu-4903.240.10~4/RELEASE_X86_64',
'python_full_version': '3.7.2',
'python_version': '3.7',
'sys_platform': 'darwin'}
```
System environment variables:
- `PATH`
- `GIT_PS1_SHOWDIRTYSTATE`
- `MANPATH`
- `rvm_use_flag`
- `LESS_TERMCAP_mb`
- `rvm_bin_path`
- `TERM_PROGRAM`
- `LESS_TERMCAP_md`
- `rvm_quiet_flag`
- `GEM_HOME`
- `LESS_TERMCAP_me`
- `rvm_gemstone_url`
- `TERM`
- `SHELL`
- `CLICOLOR`
- `HISTSIZE`
- `rvm_docs_type`
- `PIPENV_VENV_IN_PROJECT`
- `ITERM_SHELL_INTEGRATION_INSTALLED`
- `IRBRC`
- `TMPDIR`
- `Apple_PubSub_Socket_Render`
- `AUTOJUMP_KEEP_SYMLINKS`
- `TERM_PROGRAM_VERSION`
- `TRAVIS_API_TOKEN`
- `GIT_PS1_STATESEPARATOR`
- `rvm_hook`
- `MY_RUBY_HOME`
- `LESS_TERMCAP_ue`
- `AUTOJUMP_IGNORE_CASE`
- `TIME_STYLE`
- `TERM_SESSION_ID`
- `GIT_PS1_SHOWCOLORHINTS`
- `FLAGS_GETOPT_CMD`
- `LC_ALL`
- `GIT_EDITOR`
- `GIT_TERMINAL_PROMPT`
- `NVM_DIR`
- `HISTFILESIZE`
- `USER`
- `rvm_gemstone_package_file`
- `PROMPT_ANDREI_ONLINE`
- `_system_type`
- `HOMEBREW_NO_ANALYTICS`
- `rvm_path`
- `ENV`
- `SSH_AUTH_SOCK`
- `HOMEBREW_NO_AUTO_UPDATE`
- `__CF_USER_TEXT_ENCODING`
- `rvm_proxy`
- `rvm_ruby_file`
- `PAGER`
- `ERL_LIBS`
- `LC_TYPE`
- `LSCOLORS`
- `LESS_TERMCAP_us`
- `rvm_silent_flag`
- `rvm_prefix`
- `rvm_ruby_make`
- `_`
- `WORDCHARS`
- `PWD`
- `HOMEBREW_GITHUB_API_TOKEN`
- `EDITOR`
- `rvm_sdk`
- `LANG`
- `BRCD_RANONCE`
- `ITERM_PROFILE`
- `_system_arch`
- `XPC_FLAGS`
- `_system_version`
- `GIT_MERGE_AUTOEDIT`
- `GIT_PS1_HIDE_IF_PWD_IGNORED`
- `GIT_PS1_SHOWUNTRACKEDFILES`
- `HISTIGNORE`
- `HISTCONTROL`
- `XPC_SERVICE_NAME`
- `rvm_version`
- `rvm_script_name`
- `rvm_pretty_print_flag`
- `PYENV_SHELL`
- `T_AWS_IAM_INC_SH_DIR`
- `SHLVL`
- `HOME`
- `COLORFGBG`
- `rvm_ruby_mode`
- `LC_TERMINAL_VERSION`
- `LS_OPTIONS`
- `GIT_PS1_SHOWSTASHSTATE`
- `BASH_ENV`
- `ITERM_SESSION_ID`
- `LESS`
- `LOGNAME`
- `rvm_alias_expanded`
- `GIT_PS1_SHOWUPSTREAM`
- `VISUAL`
- `GEM_PATH`
- `LESS_TERMCAP_so`
- `LC_CTYPE`
- `PROMPT_ANDREI_BATTERY`
- `LESSOPEN`
- `GOPATH`
- `rvm_nightly_flag`
- `BROWSER`
- `rvm_ruby_make_install`
- `PROMPT_EOL_MARK`
- `rvm_niceness`
- `LC_TERMINAL`
- `rvm_ruby_bits`
- `rvm_bin_flag`
- `rvm_only_path_flag`
- `RUBY_VERSION`
- `SQLITE_EXEMPT_PATH_FROM_VNODE_GUARDS`
- `_system_name`
- `HISTFILE`
- `LESS_TERMCAP_se`
- `COLORTERM`
- `PIP_DISABLE_PIP_VERSION_CHECK`
- `PYTHONDONTWRITEBYTECODE`
- `PIP_SHIMS_BASE_MODULE`
- `PIP_PYTHON_PATH`
- `PYTHONFINDER_IGNORE_UNSUPPORTED`
Pipenv–specific environment variables:
- `PIPENV_VENV_IN_PROJECT`: `1`
Debug–specific environment variables:
- `PATH`: `/usr/local/Cellar/pipenv/2018.11.26/libexec/tools:/Users/andrei/.pyenv/shims:/Users/andrei/.rvm/gems/ruby-2.0.0-p648/bin:/Users/andrei/.rvm/gems/ruby-2.0.0-p648@global/bin:/Users/andrei/.rvm/rubies/ruby-2.0.0-p648/bin:/usr/local/bin:/usr/local/sbin:/Users/andrei/bin:/usr/local/MacGPG2/bin:/usr/local/opt/go/libexec/bin:/Users/andrei/.yarn/bin:/usr/local/opt/perl/bin:/usr/local/opt/unzip/bin:/usr/local/opt/curl/bin:/usr/local/opt/make/libexec/gnubin:/usr/local/opt/gzip/libexec/gnubin:/usr/local/opt/grep/libexec/gnubin:/usr/local/opt/gnu-which/libexec/gnubin:/usr/local/opt/gnu-time/libexec/gnubin:/usr/local/opt/gnu-tar/libexec/gnubin:/usr/local/opt/gnu-sed/libexec/gnubin:/usr/local/opt/findutils/libexec/gnubin:/usr/local/opt/coreutils/libexec/gnubin:/usr/texbin:/usr/bin:/bin:/usr/sbin:/sbin:./node_modules/.bin:/Users/andrei/node_modules/.bin:/Users/andrei/bin/git:/Users/andrei/bin/ansiweather.git:/Users/andrei/bin/git-extra.git:/Users/andrei/bin/git-fiddle.git:/Users/andrei/bin/git-guilt.git:/Users/andrei/bin/git-number.git:/Users/andrei/bin/git-ssdiff.git:/Users/andrei/bin/git-timeofday.git:/Users/andrei/bin/qc.git:/Users/andrei/bin/showlinenum.git:/Users/andrei/bin/skel-complete.git:/Users/andrei/bin/qc.git/qc:/Users/andrei/bin/git-guilt.git/bin:/usr/local/opt/git/share/git-core/contrib/workdir:/Users/andrei/.rvm/bin:/Users/andrei/.rvm/bin:/Users/andrei/git/firecloud/support-firecloud/bin`
- `SHELL`: `/usr/local/bin/zsh`
- `EDITOR`: `/usr/local/bin/emacs`
- `LANG`: `en_US.UTF-8`
- `PWD`: `/Users/andrei/git/firecloud/atex-platform/apex/functions/python`
---------------------------
Contents of `Pipfile` ('/Users/andrei/git/firecloud/atex-platform/apex/functions/python/Pipfile'):
```toml
[[source]]
url = "https://pypi.python.org/simple"
verify_ssl = true
[requires]
python_version = "3.7"
[dev-packages]
"boto3" = "*"
"flake8" = "*"
[packages]
requests = "*"
```
</details>
--- END ISSUE ---
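As a side illustration, the dict in the final ValidationError is a Pipfile source that reached the lockfile metadata with only url and verify_ssl, while the lockfile schema also expects a name. A self-contained sketch of that shape mismatch — the required-key set here is an assumption for demonstration, not taken from plette:
```python
REQUIRED_SOURCE_KEYS = {"url", "verify_ssl", "name"}  # assumed for illustration

def check_source(source):
    missing = REQUIRED_SOURCE_KEYS - source.keys()
    if missing:
        raise ValueError(f"invalid source {source}: missing {sorted(missing)}")
    return source

failing = {"url": "https://pypi.python.org/simple", "verify_ssl": True}
try:
    check_source(failing)
except ValueError as exc:
    print(exc)

check_source({**failing, "name": "pypi"})  # passes once a name is filled in
```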
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pipenv/project.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 import base64
3 import fnmatch
4 import glob
5 import hashlib
6 import io
7 import json
8 import operator
9 import os
10 import re
11 import sys
12
13 import six
14 import toml
15 import tomlkit
16 import vistir
17
18 from first import first
19
20 import pipfile
21 import pipfile.api
22
23 from cached_property import cached_property
24
25 from .cmdparse import Script
26 from .environment import Environment
27 from .environments import (
28 PIPENV_DEFAULT_PYTHON_VERSION, PIPENV_IGNORE_VIRTUALENVS, PIPENV_MAX_DEPTH,
29 PIPENV_PIPFILE, PIPENV_PYTHON, PIPENV_TEST_INDEX, PIPENV_VENV_IN_PROJECT,
30 is_in_virtualenv
31 )
32 from .utils import (
33 cleanup_toml, convert_toml_outline_tables, find_requirements,
34 get_canonical_names, get_url_name, get_workon_home, is_editable,
35 is_installable_file, is_star, is_valid_url, is_virtual_environment,
36 looks_like_dir, normalize_drive, pep423_name, proper_case, python_version,
37 safe_expandvars
38 )
39
40
41 def _normalized(p):
42 if p is None:
43 return None
44 loc = vistir.compat.Path(p)
45 if not loc.is_absolute():
46 try:
47 loc = loc.resolve()
48 except OSError:
49 loc = loc.absolute()
50 # Recase the path properly on Windows. From https://stackoverflow.com/a/35229734/5043728
51 if os.name == 'nt':
52 matches = glob.glob(re.sub(r'([^:/\\])(?=[/\\]|$)', r'[\1]', str(loc)))
53 path_str = matches and matches[0] or str(loc)
54 else:
55 path_str = str(loc)
56 return normalize_drive(path_str)
57
58
59 DEFAULT_NEWLINES = u"\n"
60
61
62 class _LockFileEncoder(json.JSONEncoder):
63 """A specilized JSON encoder to convert loaded TOML data into a lock file.
64
65 This adds a few characteristics to the encoder:
66
67 * The JSON is always prettified with indents and spaces.
68 * TOMLKit's container elements are seamlessly encodable.
69 * The output is always UTF-8-encoded text, never binary, even on Python 2.
70 """
71
72 def __init__(self):
73 super(_LockFileEncoder, self).__init__(
74 indent=4, separators=(",", ": "), sort_keys=True
75 )
76
77 def default(self, obj):
78 if isinstance(obj, vistir.compat.Path):
79 obj = obj.as_posix()
80 return super(_LockFileEncoder, self).default(obj)
81
82 def encode(self, obj):
83 content = super(_LockFileEncoder, self).encode(obj)
84 if not isinstance(content, six.text_type):
85 content = content.decode("utf-8")
86 return content
87
88
89 def preferred_newlines(f):
90 if isinstance(f.newlines, six.text_type):
91 return f.newlines
92 return DEFAULT_NEWLINES
93
94
95 if PIPENV_PIPFILE:
96 if not os.path.isfile(PIPENV_PIPFILE):
97 raise RuntimeError("Given PIPENV_PIPFILE is not found!")
98
99 else:
100 PIPENV_PIPFILE = _normalized(PIPENV_PIPFILE)
101 # (path, file contents) => TOMLFile
102 # keeps track of pipfiles that we've seen so we do not need to re-parse 'em
103 _pipfile_cache = {}
104
105
106 if PIPENV_TEST_INDEX:
107 DEFAULT_SOURCE = {
108 u"url": PIPENV_TEST_INDEX,
109 u"verify_ssl": True,
110 u"name": u"custom",
111 }
112 else:
113 DEFAULT_SOURCE = {
114 u"url": u"https://pypi.org/simple",
115 u"verify_ssl": True,
116 u"name": u"pypi",
117 }
118
119 pipfile.api.DEFAULT_SOURCE = DEFAULT_SOURCE
120
121
122 class SourceNotFound(KeyError):
123 pass
124
125
126 class Project(object):
127 """docstring for Project"""
128
129 _lockfile_encoder = _LockFileEncoder()
130
131 def __init__(self, which=None, python_version=None, chdir=True):
132 super(Project, self).__init__()
133 self._name = None
134 self._virtualenv_location = None
135 self._download_location = None
136 self._proper_names_db_path = None
137 self._pipfile_location = None
138 self._pipfile_newlines = DEFAULT_NEWLINES
139 self._lockfile_newlines = DEFAULT_NEWLINES
140 self._requirements_location = None
141 self._original_dir = os.path.abspath(os.curdir)
142 self._environment = None
143 self._which = which
144 self._build_system = {
145 "requires": ["setuptools", "wheel"]
146 }
147 self.python_version = python_version
148 # Hack to skip this during pipenv run, or -r.
149 if ("run" not in sys.argv) and chdir:
150 try:
151 os.chdir(self.project_directory)
152 except (TypeError, AttributeError):
153 pass
154
155 def path_to(self, p):
156 """Returns the absolute path to a given relative path."""
157 if os.path.isabs(p):
158 return p
159
160 return os.sep.join([self._original_dir, p])
161
162 def _build_package_list(self, package_section):
163 """Returns a list of packages for pip-tools to consume."""
164 from pipenv.vendor.requirementslib.utils import is_vcs
165 ps = {}
166 # TODO: Separate the logic for showing packages from the filters for supplying pip-tools
167 for k, v in self.parsed_pipfile.get(package_section, {}).items():
168 # Skip editable VCS deps.
169 if hasattr(v, "keys"):
170 # When a vcs url is gven without editable it only appears as a key
171 # Eliminate any vcs, path, or url entries which are not editable
172 # Since pip-tools can't do deep resolution on them, even setuptools-installable ones
173 if (
174 is_vcs(v)
175 or is_vcs(k)
176 or (is_installable_file(k) or is_installable_file(v))
177 or any(
178 (
179 prefix in v
180 and (os.path.isfile(v[prefix]) or is_valid_url(v[prefix]))
181 )
182 for prefix in ["path", "file"]
183 )
184 ):
185 # If they are editable, do resolve them
186 if "editable" not in v:
187 # allow wheels to be passed through
188 if not (
189 hasattr(v, "keys")
190 and v.get("path", v.get("file", "")).endswith(".whl")
191 ):
192 continue
193 ps.update({k: v})
194
195 else:
196 ps.update({k: v})
197 else:
198 ps.update({k: v})
199 else:
200 # Since these entries have no attributes we know they are not editable
201 # So we can safely exclude things that need to be editable in order to be resolved
202 # First exclude anything that is a vcs entry either in the key or value
203 if not (
204 any(is_vcs(i) for i in [k, v])
205 or
206 # Then exclude any installable files that are not directories
207 # Because pip-tools can resolve setup.py for example
208 any(is_installable_file(i) for i in [k, v])
209 or
210 # Then exclude any URLs because they need to be editable also
211 # Things that are excluded can only be 'shallow resolved'
212 any(is_valid_url(i) for i in [k, v])
213 ):
214 ps.update({k: v})
215 return ps
216
217 @property
218 def name(self):
219 if self._name is None:
220 self._name = self.pipfile_location.split(os.sep)[-2]
221 return self._name
222
223 @property
224 def pipfile_exists(self):
225 return bool(self.pipfile_location)
226
227 @property
228 def required_python_version(self):
229 if self.pipfile_exists:
230 required = self.parsed_pipfile.get("requires", {}).get(
231 "python_full_version"
232 )
233 if not required:
234 required = self.parsed_pipfile.get("requires", {}).get("python_version")
235 if required != "*":
236 return required
237
238 @property
239 def project_directory(self):
240 if self.pipfile_location is not None:
241 return os.path.abspath(os.path.join(self.pipfile_location, os.pardir))
242
243 else:
244 return None
245
246 @property
247 def requirements_exists(self):
248 return bool(self.requirements_location)
249
250 def is_venv_in_project(self):
251 return PIPENV_VENV_IN_PROJECT or (
252 self.project_directory
253 and os.path.isdir(os.path.join(self.project_directory, ".venv"))
254 )
255
256 @property
257 def virtualenv_exists(self):
258 # TODO: Decouple project from existence of Pipfile.
259 if self.pipfile_exists and os.path.exists(self.virtualenv_location):
260 if os.name == "nt":
261 extra = ["Scripts", "activate.bat"]
262 else:
263 extra = ["bin", "activate"]
264 return os.path.isfile(os.sep.join([self.virtualenv_location] + extra))
265
266 return False
267
268 def get_location_for_virtualenv(self):
269 # If there's no project yet, set location based on config.
270 if not self.project_directory:
271 if self.is_venv_in_project():
272 return os.path.abspath(".venv")
273 return str(get_workon_home().joinpath(self.virtualenv_name))
274
275 dot_venv = os.path.join(self.project_directory, ".venv")
276
277 # If there's no .venv in project root, set location based on config.
278 if not os.path.exists(dot_venv):
279 if self.is_venv_in_project():
280 return dot_venv
281 return str(get_workon_home().joinpath(self.virtualenv_name))
282
283 # If .venv in project root is a directory, use it.
284 if os.path.isdir(dot_venv):
285 return dot_venv
286
287 # Now we assume .venv in project root is a file. Use its content.
288 with io.open(dot_venv) as f:
289 name = f.read().strip()
290
291 # If content looks like a path, use it as a relative path.
292 # Otherwise use directory named after content in WORKON_HOME.
293 if looks_like_dir(name):
294 path = vistir.compat.Path(self.project_directory, name)
295 return path.absolute().as_posix()
296 return str(get_workon_home().joinpath(name))
297
298 @property
299 def working_set(self):
300 from .utils import load_path
301 sys_path = load_path(self.which("python"))
302 import pkg_resources
303 return pkg_resources.WorkingSet(sys_path)
304
305 @property
306 def installed_packages(self):
307 return self.environment.get_installed_packages()
308
309 @property
310 def installed_package_names(self):
311 return get_canonical_names([pkg.key for pkg in self.installed_packages])
312
313 @property
314 def lockfile_package_names(self):
315 dev_keys = get_canonical_names(self.lockfile_content["develop"].keys())
316 default_keys = get_canonical_names(self.lockfile_content["default"].keys())
317 return {
318 "dev": dev_keys,
319 "default": default_keys,
320 "combined": dev_keys | default_keys
321 }
322
323 @property
324 def pipfile_package_names(self):
325 dev_keys = get_canonical_names(self.dev_packages.keys())
326 default_keys = get_canonical_names(self.packages.keys())
327 return {
328 "dev": dev_keys,
329 "default": default_keys,
330 "combined": dev_keys | default_keys
331 }
332
333 @property
334 def environment(self):
335 if not self._environment:
336 prefix = self.virtualenv_location
337 is_venv = is_in_virtualenv()
338 sources = self.sources if self.sources else [DEFAULT_SOURCE,]
339 self._environment = Environment(
340 prefix=prefix, is_venv=is_venv, sources=sources, pipfile=self.parsed_pipfile,
341 project=self
342 )
343 self._environment.add_dist("pipenv")
344 return self._environment
345
346 def get_outdated_packages(self):
347 return self.environment.get_outdated_packages(pre=self.pipfile.get("pre", False))
348
349 @classmethod
350 def _sanitize(cls, name):
351 # Replace dangerous characters into '_'. The length of the sanitized
352 # project name is limited as 42 because of the limit of linux kernel
353 #
354 # 42 = 127 - len('/home//.local/share/virtualenvs//bin/python2') - 32 - len('-HASHHASH')
355 #
356 # 127 : BINPRM_BUF_SIZE - 1
357 # 32 : Maximum length of username
358 #
359 # References:
360 # https://www.gnu.org/software/bash/manual/html_node/Double-Quotes.html
361 # http://www.tldp.org/LDP/abs/html/special-chars.html#FIELDREF
362 # https://github.com/torvalds/linux/blob/2bfe01ef/include/uapi/linux/binfmts.h#L18
363 return re.sub(r'[ $`!*@"\\\r\n\t]', "_", name)[0:42]
364
365 def _get_virtualenv_hash(self, name):
366 """Get the name of the virtualenv adjusted for windows if needed
367
368 Returns (name, encoded_hash)
369 """
370
371 def get_name(name, location):
372 name = self._sanitize(name)
373 hash = hashlib.sha256(location.encode()).digest()[:6]
374 encoded_hash = base64.urlsafe_b64encode(hash).decode()
375 return name, encoded_hash[:8]
376
377 clean_name, encoded_hash = get_name(name, self.pipfile_location)
378 venv_name = "{0}-{1}".format(clean_name, encoded_hash)
379
380 # This should work most of the time for
381 # Case-sensitive filesystems,
382 # In-project venv
383 # "Proper" path casing (on non-case-sensitive filesystems).
384 if (
385 not fnmatch.fnmatch("A", "a")
386 or self.is_venv_in_project()
387 or get_workon_home().joinpath(venv_name).exists()
388 ):
389 return clean_name, encoded_hash
390
391 # Check for different capitalization of the same project.
392 for path in get_workon_home().iterdir():
393 if not is_virtual_environment(path):
394 continue
395 try:
396 env_name, hash_ = path.name.rsplit("-", 1)
397 except ValueError:
398 continue
399 if len(hash_) != 8 or env_name.lower() != name.lower():
400 continue
401 return get_name(env_name, self.pipfile_location.replace(name, env_name))
402
403 # Use the default if no matching env exists.
404 return clean_name, encoded_hash
405
406 @property
407 def virtualenv_name(self):
408 sanitized, encoded_hash = self._get_virtualenv_hash(self.name)
409 suffix = "-{0}".format(PIPENV_PYTHON) if PIPENV_PYTHON else ""
410 # If the pipfile was located at '/home/user/MY_PROJECT/Pipfile',
411 # the name of its virtualenv will be 'my-project-wyUfYPqE'
412 return sanitized + "-" + encoded_hash + suffix
413
414 @property
415 def virtualenv_location(self):
416 # if VIRTUAL_ENV is set, use that.
417 virtualenv_env = os.getenv("VIRTUAL_ENV")
418 if ("PIPENV_ACTIVE" not in os.environ and
419 not PIPENV_IGNORE_VIRTUALENVS and virtualenv_env):
420 return virtualenv_env
421
422 if not self._virtualenv_location: # Use cached version, if available.
423 assert self.project_directory, "project not created"
424 self._virtualenv_location = self.get_location_for_virtualenv()
425 return self._virtualenv_location
426
427 @property
428 def virtualenv_src_location(self):
429 if self.virtualenv_location:
430 loc = os.sep.join([self.virtualenv_location, "src"])
431 else:
432 loc = os.sep.join([self.project_directory, "src"])
433 vistir.path.mkdir_p(loc)
434 return loc
435
436 @property
437 def download_location(self):
438 if self._download_location is None:
439 loc = os.sep.join([self.virtualenv_location, "downloads"])
440 self._download_location = loc
441 # Create the directory, if it doesn't exist.
442 vistir.path.mkdir_p(self._download_location)
443 return self._download_location
444
445 @property
446 def proper_names_db_path(self):
447 if self._proper_names_db_path is None:
448 self._proper_names_db_path = vistir.compat.Path(
449 self.virtualenv_location, "pipenv-proper-names.txt"
450 )
451 self._proper_names_db_path.touch() # Ensure the file exists.
452 return self._proper_names_db_path
453
454 @property
455 def proper_names(self):
456 with self.proper_names_db_path.open() as f:
457 return f.read().splitlines()
458
459 def register_proper_name(self, name):
460 """Registers a proper name to the database."""
461 with self.proper_names_db_path.open("a") as f:
462 f.write(u"{0}\n".format(name))
463
464 @property
465 def pipfile_location(self):
466 if PIPENV_PIPFILE:
467 return PIPENV_PIPFILE
468
469 if self._pipfile_location is None:
470 try:
471 loc = pipfile.Pipfile.find(max_depth=PIPENV_MAX_DEPTH)
472 except RuntimeError:
473 loc = None
474 self._pipfile_location = _normalized(loc)
475 return self._pipfile_location
476
477 @property
478 def requirements_location(self):
479 if self._requirements_location is None:
480 try:
481 loc = find_requirements(max_depth=PIPENV_MAX_DEPTH)
482 except RuntimeError:
483 loc = None
484 self._requirements_location = loc
485 return self._requirements_location
486
487 @property
488 def parsed_pipfile(self):
489 """Parse Pipfile into a TOMLFile and cache it
490
491 (call clear_pipfile_cache() afterwards if mutating)"""
492 contents = self.read_pipfile()
493 # use full contents to get around str/bytes 2/3 issues
494 cache_key = (self.pipfile_location, contents)
495 if cache_key not in _pipfile_cache:
496 parsed = self._parse_pipfile(contents)
497 _pipfile_cache[cache_key] = parsed
498 return _pipfile_cache[cache_key]
499
500 def read_pipfile(self):
501 # Open the pipfile, read it into memory.
502 with io.open(self.pipfile_location) as f:
503 contents = f.read()
504 self._pipfile_newlines = preferred_newlines(f)
505
506 return contents
507
508 def clear_pipfile_cache(self):
509 """Clear pipfile cache (e.g., so we can mutate parsed pipfile)"""
510 _pipfile_cache.clear()
511
512 def _parse_pipfile(self, contents):
513 try:
514 return tomlkit.parse(contents)
515 except Exception:
516 # We lose comments here, but it's for the best.)
517 # Fallback to toml parser, for large files.
518 return toml.loads(contents)
519
520 def _read_pyproject(self):
521 pyproject = self.path_to("pyproject.toml")
522 if os.path.exists(pyproject):
523 self._pyproject = toml.load(pyproject)
524 build_system = self._pyproject.get("build-system", None)
525 if not os.path.exists(self.path_to("setup.py")):
526 if not build_system or not build_system.get("requires"):
527 build_system = {
528 "requires": ["setuptools>=38.2.5", "wheel"],
529 "build-backend": "setuptools.build_meta",
530 }
531 self._build_system = build_system
532
533 @property
534 def build_requires(self):
535 return self._build_system.get("requires", [])
536
537 @property
538 def build_backend(self):
539 return self._build_system.get("build-backend", None)
540
541 @property
542 def settings(self):
543 """A dictionary of the settings added to the Pipfile."""
544 return self.parsed_pipfile.get("pipenv", {})
545
546 def has_script(self, name):
547 try:
548 return name in self.parsed_pipfile["scripts"]
549 except KeyError:
550 return False
551
552 def build_script(self, name, extra_args=None):
553 try:
554 script = Script.parse(self.parsed_pipfile["scripts"][name])
555 except KeyError:
556 script = Script(name)
557 if extra_args:
558 script.extend(extra_args)
559 return script
560
561 def update_settings(self, d):
562 settings = self.settings
563 changed = False
564 for new in d:
565 if new not in settings:
566 settings[new] = d[new]
567 changed = True
568 if changed:
569 p = self.parsed_pipfile
570 p["pipenv"] = settings
571 # Write the changes to disk.
572 self.write_toml(p)
573
574 @property
575 def _lockfile(self):
576 """Pipfile.lock divided by PyPI and external dependencies."""
577 pfile = pipfile.load(self.pipfile_location, inject_env=False)
578 lockfile = json.loads(pfile.lock())
579 for section in ("default", "develop"):
580 lock_section = lockfile.get(section, {})
581 for key in list(lock_section.keys()):
582 norm_key = pep423_name(key)
583 lockfile[section][norm_key] = lock_section.pop(key)
584 return lockfile
585
586 @property
587 def _pipfile(self):
588 from .vendor.requirementslib.models.pipfile import Pipfile as ReqLibPipfile
589 pf = ReqLibPipfile.load(self.pipfile_location)
590 return pf
591
592 @property
593 def lockfile_location(self):
594 return "{0}.lock".format(self.pipfile_location)
595
596 @property
597 def lockfile_exists(self):
598 return os.path.isfile(self.lockfile_location)
599
600 @property
601 def lockfile_content(self):
602 return self.load_lockfile()
603
604 def _get_editable_packages(self, dev=False):
605 section = "dev-packages" if dev else "packages"
606 # section = "{0}-editable".format(section)
607 packages = {
608 k: v
609 # for k, v in self._pipfile[section].items()
610 for k, v in self.parsed_pipfile.get(section, {}).items()
611 if is_editable(k) or is_editable(v)
612 }
613 return packages
614
615 def _get_vcs_packages(self, dev=False):
616 from pipenv.vendor.requirementslib.utils import is_vcs
617 section = "dev-packages" if dev else "packages"
618 # section = "{0}-vcs".format(section)
619 packages = {
620 k: v
621 # for k, v in self._pipfile[section].items()
622 for k, v in self.parsed_pipfile.get(section, {}).items()
623 if is_vcs(v) or is_vcs(k)
624 }
625 return packages or {}
626
627 @property
628 def editable_packages(self):
629 return self._get_editable_packages(dev=False)
630
631 @property
632 def editable_dev_packages(self):
633 return self._get_editable_packages(dev=True)
634
635 @property
636 def vcs_packages(self):
637 """Returns a list of VCS packages, for not pip-tools to consume."""
638 return self._get_vcs_packages(dev=False)
639
640 @property
641 def vcs_dev_packages(self):
642 """Returns a list of VCS packages, for not pip-tools to consume."""
643 return self._get_vcs_packages(dev=True)
644
645 @property
646 def all_packages(self):
647 """Returns a list of all packages."""
648 p = dict(self.parsed_pipfile.get("dev-packages", {}))
649 p.update(self.parsed_pipfile.get("packages", {}))
650 return p
651
652 @property
653 def packages(self):
654 """Returns a list of packages, for pip-tools to consume."""
655 return self._build_package_list("packages")
656
657 @property
658 def dev_packages(self):
659 """Returns a list of dev-packages, for pip-tools to consume."""
660 return self._build_package_list("dev-packages")
661
662 def touch_pipfile(self):
663 """Simply touches the Pipfile, for later use."""
664 with open("Pipfile", "a"):
665 os.utime("Pipfile", None)
666
667 @property
668 def pipfile_is_empty(self):
669 if not self.pipfile_exists:
670 return True
671
672 if not len(self.read_pipfile()):
673 return True
674
675 return False
676
677 def create_pipfile(self, python=None):
678 """Creates the Pipfile, filled with juicy defaults."""
679 from .vendor.pip_shims.shims import (
680 ConfigOptionParser, make_option_group, index_group
681 )
682
683 name = self.name if self.name is not None else "Pipfile"
684 config_parser = ConfigOptionParser(name=self.name)
685 config_parser.add_option_group(make_option_group(index_group, config_parser))
686 install = config_parser.option_groups[0]
687 indexes = (
688 " ".join(install.get_option("--extra-index-url").default)
689 .lstrip("\n")
690 .split("\n")
691 )
692 sources = [DEFAULT_SOURCE,]
693 for i, index in enumerate(indexes):
694 if not index:
695 continue
696
697 source_name = "pip_index_{}".format(i)
698 verify_ssl = index.startswith("https")
699 sources.append(
700 {u"url": index, u"verify_ssl": verify_ssl, u"name": source_name}
701 )
702
703 data = {
704 u"source": sources,
705 # Default packages.
706 u"packages": {},
707 u"dev-packages": {},
708 }
709 # Default requires.
710 required_python = python
711 if not python:
712 if self.virtualenv_location:
713 required_python = self.which("python", self.virtualenv_location)
714 else:
715 required_python = self.which("python")
716 version = python_version(required_python) or PIPENV_DEFAULT_PYTHON_VERSION
717 if version and len(version) >= 3:
718 data[u"requires"] = {"python_version": version[: len("2.7")]}
719 self.write_toml(data)
720
721 @classmethod
722 def populate_source(cls, source):
723 """Derive missing values of source from the existing fields."""
724 # Only URL pararemter is mandatory, let the KeyError be thrown.
725 if "name" not in source:
726 source["name"] = get_url_name(source["url"])
727 if "verify_ssl" not in source:
728 source["verify_ssl"] = "https://" in source["url"]
729 if not isinstance(source["verify_ssl"], bool):
730 source["verify_ssl"] = source["verify_ssl"].lower() == "true"
731 return source
732
733 def get_or_create_lockfile(self, from_pipfile=False):
734 from pipenv.vendor.requirementslib.models.lockfile import Lockfile as Req_Lockfile
735 lockfile = None
736 if from_pipfile and self.pipfile_exists:
737 lockfile_dict = {
738 "default": self._lockfile["default"].copy(),
739 "develop": self._lockfile["develop"].copy()
740 }
741 lockfile_dict.update({"_meta": self.get_lockfile_meta()})
742 lockfile = Req_Lockfile.from_data(
743 path=self.lockfile_location, data=lockfile_dict, meta_from_project=False
744 )
745 elif self.lockfile_exists:
746 try:
747 lockfile = Req_Lockfile.load(self.lockfile_location)
748 except OSError:
749 lockfile = Req_Lockfile.from_data(self.lockfile_location, self.lockfile_content)
750 else:
751 lockfile = Req_Lockfile.from_data(path=self.lockfile_location, data=self._lockfile, meta_from_project=False)
752 if lockfile._lockfile is not None:
753 return lockfile
754 if self.lockfile_exists and self.lockfile_content:
755 lockfile_dict = self.lockfile_content.copy()
756 sources = lockfile_dict.get("_meta", {}).get("sources", [])
757 if not sources:
758 sources = self.pipfile_sources
759 elif not isinstance(sources, list):
760 sources = [sources,]
761 lockfile_dict["_meta"]["sources"] = [
762 self.populate_source(s) for s in sources
763 ]
764 _created_lockfile = Req_Lockfile.from_data(
765 path=self.lockfile_location, data=lockfile_dict, meta_from_project=False
766 )
767 lockfile._lockfile = lockfile.projectfile.model = _created_lockfile
768 return lockfile
769 else:
770 return self.get_or_create_lockfile(from_pipfile=True)
771
772 def get_lockfile_meta(self):
773 from .vendor.plette.lockfiles import PIPFILE_SPEC_CURRENT
774 if self.lockfile_exists:
775 sources = self.lockfile_content.get("_meta", {}).get("sources", [])
776 else:
777 sources = [dict(source) for source in self.parsed_pipfile["source"]]
778 if not isinstance(sources, list):
779 sources = [sources,]
780 return {
781 "hash": {"sha256": self.calculate_pipfile_hash()},
782 "pipfile-spec": PIPFILE_SPEC_CURRENT,
783 "sources": sources,
784 "requires": self.parsed_pipfile.get("requires", {})
785 }
786
787 def write_toml(self, data, path=None):
788 """Writes the given data structure out as TOML."""
789 if path is None:
790 path = self.pipfile_location
791 data = convert_toml_outline_tables(data)
792 try:
793 formatted_data = tomlkit.dumps(data).rstrip()
794 except Exception:
795 document = tomlkit.document()
796 for section in ("packages", "dev-packages"):
797 document[section] = tomlkit.container.Table()
798 # Convert things to inline tables — fancy :)
799 for package in data.get(section, {}):
800 if hasattr(data[section][package], "keys"):
801 table = tomlkit.inline_table()
802 table.update(data[section][package])
803 document[section][package] = table
804 else:
805 document[section][package] = tomlkit.string(data[section][package])
806 formatted_data = tomlkit.dumps(document).rstrip()
807
808 if (
809 vistir.compat.Path(path).absolute()
810 == vistir.compat.Path(self.pipfile_location).absolute()
811 ):
812 newlines = self._pipfile_newlines
813 else:
814 newlines = DEFAULT_NEWLINES
815 formatted_data = cleanup_toml(formatted_data)
816 with io.open(path, "w", newline=newlines) as f:
817 f.write(formatted_data)
818 # pipfile is mutated!
819 self.clear_pipfile_cache()
820
821 def write_lockfile(self, content):
822 """Write out the lockfile.
823 """
824 s = self._lockfile_encoder.encode(content)
825 open_kwargs = {"newline": self._lockfile_newlines, "encoding": "utf-8"}
826 with vistir.contextmanagers.atomic_open_for_write(
827 self.lockfile_location, **open_kwargs
828 ) as f:
829 f.write(s)
830 # Write newline at end of document. GH-319.
831 # Only need '\n' here; the file object handles the rest.
832 if not s.endswith(u"\n"):
833 f.write(u"\n")
834
835 @property
836 def pipfile_sources(self):
837 if "source" not in self.parsed_pipfile:
838 return [DEFAULT_SOURCE]
839 # We need to make copies of the source info so we don't
840 # accidentally modify the cache. See #2100 where values are
841 # written after the os.path.expandvars() call.
842 return [
843 {k: safe_expandvars(v) for k, v in source.items()}
844 for source in self.parsed_pipfile["source"]
845 ]
846
847 @property
848 def sources(self):
849 if self.lockfile_exists and hasattr(self.lockfile_content, "keys"):
850 meta_ = self.lockfile_content.get("_meta", {})
851 sources_ = meta_.get("sources")
852 if sources_:
853 return sources_
854
855 else:
856 return self.pipfile_sources
857
858 def find_source(self, source):
859 """given a source, find it.
860
861 source can be a url or an index name.
862 """
863 if not is_valid_url(source):
864 try:
865 source = self.get_source(name=source)
866 except SourceNotFound:
867 source = self.get_source(url=source)
868 else:
869 source = self.get_source(url=source)
870 return source
871
872 def get_source(self, name=None, url=None):
873 def find_source(sources, name=None, url=None):
874 source = None
875 if name:
876 source = [s for s in sources if s.get("name") == name]
877 elif url:
878 source = [s for s in sources if url.startswith(s.get("url"))]
879 if source:
880 return first(source)
881
882 found_source = find_source(self.sources, name=name, url=url)
883 if found_source:
884 return found_source
885 found_source = find_source(self.pipfile_sources, name=name, url=url)
886 if found_source:
887 return found_source
888 raise SourceNotFound(name or url)
889
890 def get_package_name_in_pipfile(self, package_name, dev=False):
891 """Get the equivalent package name in pipfile"""
892 key = "dev-packages" if dev else "packages"
893 section = self.parsed_pipfile.get(key, {})
894 package_name = pep423_name(package_name)
895 for name in section.keys():
896 if pep423_name(name) == package_name:
897 return name
898 return None
899
900 def remove_package_from_pipfile(self, package_name, dev=False):
901 # Read and append Pipfile.
902 name = self.get_package_name_in_pipfile(package_name, dev)
903 key = "dev-packages" if dev else "packages"
904 p = self.parsed_pipfile
905 if name:
906 del p[key][name]
907 self.write_toml(p)
908
909 def remove_packages_from_pipfile(self, packages):
910 parsed = self.parsed_pipfile
911 packages = set([pep423_name(pkg) for pkg in packages])
912 for section in ("dev-packages", "packages"):
913 pipfile_section = parsed.get(section, {})
914 pipfile_packages = set([
915 pep423_name(pkg_name) for pkg_name in pipfile_section.keys()
916 ])
917 to_remove = packages & pipfile_packages
918 # The normal toml parser can't handle deleting packages with preceding newlines
919 is_dev = section == "dev-packages"
920 for pkg in to_remove:
921 pkg_name = self.get_package_name_in_pipfile(pkg, dev=is_dev)
922 del parsed[section][pkg_name]
923 self.write_toml(parsed)
924
925 def add_package_to_pipfile(self, package, dev=False):
926 from .vendor.requirementslib import Requirement
927
928 # Read and append Pipfile.
929 p = self.parsed_pipfile
930 # Don't re-capitalize file URLs or VCSs.
931 if not isinstance(package, Requirement):
932 package = Requirement.from_line(package.strip())
933 _, converted = package.pipfile_entry
934 key = "dev-packages" if dev else "packages"
935 # Set empty group if it doesn't exist yet.
936 if key not in p:
937 p[key] = {}
938 name = self.get_package_name_in_pipfile(package.name, dev)
939 if name and is_star(converted):
940 # Skip for wildcard version
941 return
942 # Add the package to the group.
943 p[key][name or pep423_name(package.name)] = converted
944 # Write Pipfile.
945 self.write_toml(p)
946
947 def src_name_from_url(self, index_url):
948 name, _, tld_guess = six.moves.urllib.parse.urlsplit(index_url).netloc.rpartition(
949 "."
950 )
951 src_name = name.replace(".", "")
952 try:
953 self.get_source(name=src_name)
954 except SourceNotFound:
955 name = src_name
956 else:
957 from random import randint
958 name = "{0}-{1}".format(src_name, randint(1, 1000))
959 return name
960
961 def add_index_to_pipfile(self, index, verify_ssl=True):
962 """Adds a given index to the Pipfile."""
963 # Read and append Pipfile.
964 p = self.parsed_pipfile
965 try:
966 self.get_source(url=index)
967 except SourceNotFound:
968 source = {"url": index, "verify_ssl": verify_ssl}
969 else:
970 return
971 source["name"] = self.src_name_from_url(index)
972 # Add the package to the group.
973 if "source" not in p:
974 p["source"] = [source]
975 else:
976 p["source"].append(source)
977 # Write Pipfile.
978 self.write_toml(p)
979
980 def recase_pipfile(self):
981 if self.ensure_proper_casing():
982 self.write_toml(self.parsed_pipfile)
983
984 def load_lockfile(self, expand_env_vars=True):
985 with io.open(self.lockfile_location, encoding="utf-8") as lock:
986 j = json.load(lock)
987 self._lockfile_newlines = preferred_newlines(lock)
988 # lockfile is just a string
989 if not j or not hasattr(j, "keys"):
990 return j
991
992 if expand_env_vars:
993 # Expand environment variables in Pipfile.lock at runtime.
994 for i, source in enumerate(j["_meta"]["sources"][:]):
995 j["_meta"]["sources"][i]["url"] = os.path.expandvars(
996 j["_meta"]["sources"][i]["url"]
997 )
998
999 return j
1000
1001 def get_lockfile_hash(self):
1002 if not os.path.exists(self.lockfile_location):
1003 return
1004
1005 try:
1006 lockfile = self.load_lockfile(expand_env_vars=False)
1007 except ValueError:
1008 # Lockfile corrupted
1009 return ""
1010 if "_meta" in lockfile and hasattr(lockfile, "keys"):
1011 return lockfile["_meta"].get("hash", {}).get("sha256")
1012 # Lockfile exists but has no hash at all
1013 return ""
1014
1015 def calculate_pipfile_hash(self):
1016 # Update the lockfile if it is out-of-date.
1017 p = pipfile.load(self.pipfile_location, inject_env=False)
1018 return p.hash
1019
1020 def ensure_proper_casing(self):
1021 """Ensures proper casing of Pipfile packages"""
1022 pfile = self.parsed_pipfile
1023 casing_changed = self.proper_case_section(pfile.get("packages", {}))
1024 casing_changed |= self.proper_case_section(pfile.get("dev-packages", {}))
1025 return casing_changed
1026
1027 def proper_case_section(self, section):
1028 """Verify proper casing is retrieved, when available, for each
1029 dependency in the section.
1030 """
1031 # Casing for section.
1032 changed_values = False
1033 unknown_names = [k for k in section.keys() if k not in set(self.proper_names)]
1034 # Replace each package with proper casing.
1035 for dep in unknown_names:
1036 try:
1037 # Get new casing for package name.
1038 new_casing = proper_case(dep)
1039 except IOError:
1040 # Unable to normalize package name.
1041 continue
1042
1043 if new_casing != dep:
1044 changed_values = True
1045 self.register_proper_name(new_casing)
1046 # Replace old value with new value.
1047 old_value = section[dep]
1048 section[new_casing] = old_value
1049 del section[dep]
1050 # Return whether or not values have been changed.
1051 return changed_values
1052
1053 @cached_property
1054 def finders(self):
1055 from .vendor.pythonfinder import Finder
1056 scripts_dirname = "Scripts" if os.name == "nt" else "bin"
1057 scripts_dir = os.path.join(self.virtualenv_location, scripts_dirname)
1058 finders = [
1059 Finder(path=scripts_dir, global_search=gs, system=False)
1060 for gs in (False, True)
1061 ]
1062 return finders
1063
1064 @property
1065 def finder(self):
1066 return next(iter(self.finders), None)
1067
1068 def which(self, search, as_path=True):
1069 find = operator.methodcaller("which", search)
1070 result = next(iter(filter(None, (find(finder) for finder in self.finders))), None)
1071 if not result:
1072 result = self._which(search)
1073 else:
1074 if as_path:
1075 result = str(result.path)
1076 return result
1077
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pipenv/project.py b/pipenv/project.py
--- a/pipenv/project.py
+++ b/pipenv/project.py
@@ -780,7 +780,7 @@
return {
"hash": {"sha256": self.calculate_pipfile_hash()},
"pipfile-spec": PIPFILE_SPEC_CURRENT,
- "sources": sources,
+ "sources": [self.populate_source(s) for s in sources],
"requires": self.parsed_pipfile.get("requires", {})
}
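
Note on the patch above: `get_lockfile_meta()` now runs each Pipfile source through `Project.populate_source()` before writing it into the lockfile's `_meta` section. The issue's Pipfile declares a source with only `url` and `verify_ssl`, and the traceback ends in `plette.models.base.ValidationError: {'url': 'https://pypi.python.org/simple', 'verify_ssl': True}` — presumably because the `_meta` source entry is missing the `name` field the lockfile schema expects. The sketch below is a minimal, self-contained approximation of that normalization (the real helper lives in `pipenv/project.py` and uses the vendored `get_url_name`; the standalone `populate_source` and the `urlsplit`-based name derivation here are illustrative assumptions, not the shipped implementation):

```python
# Sketch: normalize Pipfile sources the way the patched get_lockfile_meta()
# does, so a source that only carries "url" and "verify_ssl" gains a "name"
# key before plette validates the lockfile's _meta["sources"].
from urllib.parse import urlsplit


def populate_source(source):
    """Derive missing values of a [[source]] entry from its URL (approximation)."""
    if "name" not in source:
        # pipenv uses get_url_name(); deriving a name from the hostname is close enough here
        source["name"] = urlsplit(source["url"]).hostname.replace(".", "")
    if "verify_ssl" not in source:
        source["verify_ssl"] = "https://" in source["url"]
    if not isinstance(source["verify_ssl"], bool):
        source["verify_ssl"] = source["verify_ssl"].lower() == "true"
    return source


# The source exactly as it appears in the issue's failing Pipfile.
sources = [{"url": "https://pypi.python.org/simple", "verify_ssl": True}]
meta_sources = [populate_source(dict(s)) for s in sources]
# meta_sources[0] now also carries a "name" key, which is why mapping
# populate_source over the sources lets the --skip-lock path validate.
print(meta_sources[0])
```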
| {"golden_diff": "diff --git a/pipenv/project.py b/pipenv/project.py\n--- a/pipenv/project.py\n+++ b/pipenv/project.py\n@@ -780,7 +780,7 @@\n return {\n \"hash\": {\"sha256\": self.calculate_pipfile_hash()},\n \"pipfile-spec\": PIPFILE_SPEC_CURRENT,\n- \"sources\": sources,\n+ \"sources\": [self.populate_source(s) for s in sources],\n \"requires\": self.parsed_pipfile.get(\"requires\", {})\n }\n", "issue": "--skip-lock throws validation error\n### Issue description\r\n\r\n`--skip-lock` flag throws a validation error\r\n\r\n### Expected result\r\n\r\nThat I get the same successful behaviour as in 2018.11.26 .\r\n\r\n### Actual result\r\n\r\nThrows a validation error due to the `--skip-lock` flag.\r\n\r\n```\r\nInstalling dependencies from Pipfile\u2026\r\nTraceback (most recent call last):\r\n File \"/usr/local/Cellar/pipenv/2018.11.26/libexec/bin/pipenv\", line 11, in <module>\r\n load_entry_point('pipenv==2018.11.26', 'console_scripts', 'pipenv')()\r\n File \"/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv/vendor/click/core.py\", line 764, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv/vendor/click/core.py\", line 717, in main\r\n rv = self.invoke(ctx)\r\n File \"/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv/vendor/click/core.py\", line 1137, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv/vendor/click/core.py\", line 956, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv/vendor/click/core.py\", line 555, in invoke\r\n return callback(*args, **kwargs)\r\n File \"/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv/vendor/click/decorators.py\", line 64, in new_func\r\n return ctx.invoke(f, obj, *args, **kwargs)\r\n File \"/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv/vendor/click/core.py\", line 555, in invoke\r\n return callback(*args, **kwargs)\r\n File \"/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv/vendor/click/decorators.py\", line 17, in new_func\r\n return f(get_current_context(), *args, **kwargs)\r\n File \"/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv/cli/command.py\", line 254, in install\r\n editable_packages=state.installstate.editables,\r\n File \"/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv/core.py\", line 1874, in do_install\r\n keep_outdated=keep_outdated\r\n File \"/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv/core.py\", line 1253, in do_init\r\n pypi_mirror=pypi_mirror,\r\n File \"/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv/core.py\", line 795, in do_install_dependencies\r\n lockfile = project.get_or_create_lockfile(from_pipfile=True)\r\n File \"/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv/project.py\", line 756, in get_or_create_lockfile\r\n path=self.lockfile_location, data=lockfile_dict, meta_from_project=False\r\n File \"/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv/vendor/requirementslib/models/lockfile.py\", line 209, in from_data\r\n lockfile = plette.lockfiles.Lockfile(data)\r\n File 
\"/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv/vendor/plette/models/base.py\", line 37, in __init__\r\n self.validate(data)\r\n File \"/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv/vendor/plette/lockfiles.py\", line 80, in validate\r\n klass.validate(data[key])\r\n File \"/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv/vendor/plette/models/sections.py\", line 70, in validate\r\n klass.validate(data[key])\r\n File \"/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv/vendor/plette/models/base.py\", line 132, in validate\r\n cls.item_class.validate(d)\r\n File \"/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv/vendor/plette/models/base.py\", line 67, in validate\r\n return validate(cls, data)\r\n File \"/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv/vendor/requirementslib/models/pipfile.py\", line 59, in validate\r\n raise plette.models.base.ValidationError(data, v)\r\nplette.models.base.ValidationError: {'url': 'https://pypi.python.org/simple', 'verify_ssl': True}\r\n```\r\n\r\n### Steps to replicate\r\n\r\nI have a simple Pipfile like\r\n\r\n```\r\n[[source]]\r\nurl = \"https://pypi.python.org/simple\"\r\nverify_ssl = true\r\n\r\n[requires]\r\npython_version = \"3.7\"\r\n\r\n[dev-packages]\r\n\"boto3\" = \"*\"\r\n\"flake8\" = \"*\"\r\n\r\n[packages]\r\nrequests = \"*\"\r\n```\r\n\r\nand run `pipenv install --skip-lock --dev`.\r\n\r\n-------------------------------------------------------------------------------\r\n\r\n<details><summary>$ pipenv --support</summary>\r\n\r\nPipenv version: `'2018.11.26'`\r\n\r\nPipenv location: `'/usr/local/Cellar/pipenv/2018.11.26/libexec/lib/python3.7/site-packages/pipenv'`\r\n\r\nPython location: `'/usr/local/Cellar/pipenv/2018.11.26/libexec/bin/python3.7'`\r\n\r\nPython installations found:\r\n\r\n - `3.7.2`: `/usr/local/bin/python3`\r\n - `3.7.2`: `/usr/local/bin/python3.7m`\r\n - `3.7.1`: `/Users/andrei/.pyenv/versions/3.7.1/bin/python3`\r\n - `3.7.1`: `/Users/andrei/.pyenv/versions/3.7.1/bin/python3.7m`\r\n - `3.7.0`: `/Users/andrei/.pyenv/versions/3.7.0/bin/python3`\r\n - `3.7.0`: `/Users/andrei/.pyenv/versions/3.7.0/bin/python3.7m`\r\n - `2.7.15`: `/usr/local/bin/python`\r\n - `2.7.15`: `/usr/local/bin/pythonw`\r\n - `2.7.10`: `/usr/bin/python`\r\n - `2.7.10`: `/usr/bin/pythonw`\r\n - `2.7.10`: `/usr/bin/python2.7`\r\n\r\nPEP 508 Information:\r\n\r\n```\r\n{'implementation_name': 'cpython',\r\n 'implementation_version': '3.7.2',\r\n 'os_name': 'posix',\r\n 'platform_machine': 'x86_64',\r\n 'platform_python_implementation': 'CPython',\r\n 'platform_release': '18.2.0',\r\n 'platform_system': 'Darwin',\r\n 'platform_version': 'Darwin Kernel Version 18.2.0: Fri Dec 14 18:43:36 PST '\r\n '2018; root:xnu-4903.240.10~4/RELEASE_X86_64',\r\n 'python_full_version': '3.7.2',\r\n 'python_version': '3.7',\r\n 'sys_platform': 'darwin'}\r\n```\r\n\r\nSystem environment variables:\r\n\r\n - `PATH`\r\n - `GIT_PS1_SHOWDIRTYSTATE`\r\n - `MANPATH`\r\n - `rvm_use_flag`\r\n - `LESS_TERMCAP_mb`\r\n - `rvm_bin_path`\r\n - `TERM_PROGRAM`\r\n - `LESS_TERMCAP_md`\r\n - `rvm_quiet_flag`\r\n - `GEM_HOME`\r\n - `LESS_TERMCAP_me`\r\n - `rvm_gemstone_url`\r\n - `TERM`\r\n - `SHELL`\r\n - `CLICOLOR`\r\n - `HISTSIZE`\r\n - `rvm_docs_type`\r\n - `PIPENV_VENV_IN_PROJECT`\r\n - `ITERM_SHELL_INTEGRATION_INSTALLED`\r\n - `IRBRC`\r\n - `TMPDIR`\r\n - `Apple_PubSub_Socket_Render`\r\n - 
`AUTOJUMP_KEEP_SYMLINKS`\r\n - `TERM_PROGRAM_VERSION`\r\n - `TRAVIS_API_TOKEN`\r\n - `GIT_PS1_STATESEPARATOR`\r\n - `rvm_hook`\r\n - `MY_RUBY_HOME`\r\n - `LESS_TERMCAP_ue`\r\n - `AUTOJUMP_IGNORE_CASE`\r\n - `TIME_STYLE`\r\n - `TERM_SESSION_ID`\r\n - `GIT_PS1_SHOWCOLORHINTS`\r\n - `FLAGS_GETOPT_CMD`\r\n - `LC_ALL`\r\n - `GIT_EDITOR`\r\n - `GIT_TERMINAL_PROMPT`\r\n - `NVM_DIR`\r\n - `HISTFILESIZE`\r\n - `USER`\r\n - `rvm_gemstone_package_file`\r\n - `PROMPT_ANDREI_ONLINE`\r\n - `_system_type`\r\n - `HOMEBREW_NO_ANALYTICS`\r\n - `rvm_path`\r\n - `ENV`\r\n - `SSH_AUTH_SOCK`\r\n - `HOMEBREW_NO_AUTO_UPDATE`\r\n - `__CF_USER_TEXT_ENCODING`\r\n - `rvm_proxy`\r\n - `rvm_ruby_file`\r\n - `PAGER`\r\n - `ERL_LIBS`\r\n - `LC_TYPE`\r\n - `LSCOLORS`\r\n - `LESS_TERMCAP_us`\r\n - `rvm_silent_flag`\r\n - `rvm_prefix`\r\n - `rvm_ruby_make`\r\n - `_`\r\n - `WORDCHARS`\r\n - `PWD`\r\n - `HOMEBREW_GITHUB_API_TOKEN`\r\n - `EDITOR`\r\n - `rvm_sdk`\r\n - `LANG`\r\n - `BRCD_RANONCE`\r\n - `ITERM_PROFILE`\r\n - `_system_arch`\r\n - `XPC_FLAGS`\r\n - `_system_version`\r\n - `GIT_MERGE_AUTOEDIT`\r\n - `GIT_PS1_HIDE_IF_PWD_IGNORED`\r\n - `GIT_PS1_SHOWUNTRACKEDFILES`\r\n - `HISTIGNORE`\r\n - `HISTCONTROL`\r\n - `XPC_SERVICE_NAME`\r\n - `rvm_version`\r\n - `rvm_script_name`\r\n - `rvm_pretty_print_flag`\r\n - `PYENV_SHELL`\r\n - `T_AWS_IAM_INC_SH_DIR`\r\n - `SHLVL`\r\n - `HOME`\r\n - `COLORFGBG`\r\n - `rvm_ruby_mode`\r\n - `LC_TERMINAL_VERSION`\r\n - `LS_OPTIONS`\r\n - `GIT_PS1_SHOWSTASHSTATE`\r\n - `BASH_ENV`\r\n - `ITERM_SESSION_ID`\r\n - `LESS`\r\n - `LOGNAME`\r\n - `rvm_alias_expanded`\r\n - `GIT_PS1_SHOWUPSTREAM`\r\n - `VISUAL`\r\n - `GEM_PATH`\r\n - `LESS_TERMCAP_so`\r\n - `LC_CTYPE`\r\n - `PROMPT_ANDREI_BATTERY`\r\n - `LESSOPEN`\r\n - `GOPATH`\r\n - `rvm_nightly_flag`\r\n - `BROWSER`\r\n - `rvm_ruby_make_install`\r\n - `PROMPT_EOL_MARK`\r\n - `rvm_niceness`\r\n - `LC_TERMINAL`\r\n - `rvm_ruby_bits`\r\n - `rvm_bin_flag`\r\n - `rvm_only_path_flag`\r\n - `RUBY_VERSION`\r\n - `SQLITE_EXEMPT_PATH_FROM_VNODE_GUARDS`\r\n - `_system_name`\r\n - `HISTFILE`\r\n - `LESS_TERMCAP_se`\r\n - `COLORTERM`\r\n - `PIP_DISABLE_PIP_VERSION_CHECK`\r\n - `PYTHONDONTWRITEBYTECODE`\r\n - `PIP_SHIMS_BASE_MODULE`\r\n - `PIP_PYTHON_PATH`\r\n - `PYTHONFINDER_IGNORE_UNSUPPORTED`\r\n\r\nPipenv\u2013specific environment variables:\r\n\r\n - `PIPENV_VENV_IN_PROJECT`: `1`\r\n\r\nDebug\u2013specific environment variables:\r\n\r\n - `PATH`: 
`/usr/local/Cellar/pipenv/2018.11.26/libexec/tools:/Users/andrei/.pyenv/shims:/Users/andrei/.rvm/gems/ruby-2.0.0-p648/bin:/Users/andrei/.rvm/gems/ruby-2.0.0-p648@global/bin:/Users/andrei/.rvm/rubies/ruby-2.0.0-p648/bin:/usr/local/bin:/usr/local/sbin:/Users/andrei/bin:/usr/local/MacGPG2/bin:/usr/local/opt/go/libexec/bin:/Users/andrei/.yarn/bin:/usr/local/opt/perl/bin:/usr/local/opt/unzip/bin:/usr/local/opt/curl/bin:/usr/local/opt/make/libexec/gnubin:/usr/local/opt/gzip/libexec/gnubin:/usr/local/opt/grep/libexec/gnubin:/usr/local/opt/gnu-which/libexec/gnubin:/usr/local/opt/gnu-time/libexec/gnubin:/usr/local/opt/gnu-tar/libexec/gnubin:/usr/local/opt/gnu-sed/libexec/gnubin:/usr/local/opt/findutils/libexec/gnubin:/usr/local/opt/coreutils/libexec/gnubin:/usr/texbin:/usr/bin:/bin:/usr/sbin:/sbin:./node_modules/.bin:/Users/andrei/node_modules/.bin:/Users/andrei/bin/git:/Users/andrei/bin/ansiweather.git:/Users/andrei/bin/git-extra.git:/Users/andrei/bin/git-fiddle.git:/Users/andrei/bin/git-guilt.git:/Users/andrei/bin/git-number.git:/Users/andrei/bin/git-ssdiff.git:/Users/andrei/bin/git-timeofday.git:/Users/andrei/bin/qc.git:/Users/andrei/bin/showlinenum.git:/Users/andrei/bin/skel-complete.git:/Users/andrei/bin/qc.git/qc:/Users/andrei/bin/git-guilt.git/bin:/usr/local/opt/git/share/git-core/contrib/workdir:/Users/andrei/.rvm/bin:/Users/andrei/.rvm/bin:/Users/andrei/git/firecloud/support-firecloud/bin`\r\n - `SHELL`: `/usr/local/bin/zsh`\r\n - `EDITOR`: `/usr/local/bin/emacs`\r\n - `LANG`: `en_US.UTF-8`\r\n - `PWD`: `/Users/andrei/git/firecloud/atex-platform/apex/functions/python`\r\n\r\n\r\n---------------------------\r\n\r\nContents of `Pipfile` ('/Users/andrei/git/firecloud/atex-platform/apex/functions/python/Pipfile'):\r\n\r\n```toml\r\n[[source]]\r\nurl = \"https://pypi.python.org/simple\"\r\nverify_ssl = true\r\n\r\n[requires]\r\npython_version = \"3.7\"\r\n\r\n[dev-packages]\r\n\"boto3\" = \"*\"\r\n\"flake8\" = \"*\"\r\n\r\n[packages]\r\nrequests = \"*\"\r\n\r\n```\r\n\r\n</details>\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport base64\nimport fnmatch\nimport glob\nimport hashlib\nimport io\nimport json\nimport operator\nimport os\nimport re\nimport sys\n\nimport six\nimport toml\nimport tomlkit\nimport vistir\n\nfrom first import first\n\nimport pipfile\nimport pipfile.api\n\nfrom cached_property import cached_property\n\nfrom .cmdparse import Script\nfrom .environment import Environment\nfrom .environments import (\n PIPENV_DEFAULT_PYTHON_VERSION, PIPENV_IGNORE_VIRTUALENVS, PIPENV_MAX_DEPTH,\n PIPENV_PIPFILE, PIPENV_PYTHON, PIPENV_TEST_INDEX, PIPENV_VENV_IN_PROJECT,\n is_in_virtualenv\n)\nfrom .utils import (\n cleanup_toml, convert_toml_outline_tables, find_requirements,\n get_canonical_names, get_url_name, get_workon_home, is_editable,\n is_installable_file, is_star, is_valid_url, is_virtual_environment,\n looks_like_dir, normalize_drive, pep423_name, proper_case, python_version,\n safe_expandvars\n)\n\n\ndef _normalized(p):\n if p is None:\n return None\n loc = vistir.compat.Path(p)\n if not loc.is_absolute():\n try:\n loc = loc.resolve()\n except OSError:\n loc = loc.absolute()\n # Recase the path properly on Windows. 
From https://stackoverflow.com/a/35229734/5043728\n if os.name == 'nt':\n matches = glob.glob(re.sub(r'([^:/\\\\])(?=[/\\\\]|$)', r'[\\1]', str(loc)))\n path_str = matches and matches[0] or str(loc)\n else:\n path_str = str(loc)\n return normalize_drive(path_str)\n\n\nDEFAULT_NEWLINES = u\"\\n\"\n\n\nclass _LockFileEncoder(json.JSONEncoder):\n \"\"\"A specilized JSON encoder to convert loaded TOML data into a lock file.\n\n This adds a few characteristics to the encoder:\n\n * The JSON is always prettified with indents and spaces.\n * TOMLKit's container elements are seamlessly encodable.\n * The output is always UTF-8-encoded text, never binary, even on Python 2.\n \"\"\"\n\n def __init__(self):\n super(_LockFileEncoder, self).__init__(\n indent=4, separators=(\",\", \": \"), sort_keys=True\n )\n\n def default(self, obj):\n if isinstance(obj, vistir.compat.Path):\n obj = obj.as_posix()\n return super(_LockFileEncoder, self).default(obj)\n\n def encode(self, obj):\n content = super(_LockFileEncoder, self).encode(obj)\n if not isinstance(content, six.text_type):\n content = content.decode(\"utf-8\")\n return content\n\n\ndef preferred_newlines(f):\n if isinstance(f.newlines, six.text_type):\n return f.newlines\n return DEFAULT_NEWLINES\n\n\nif PIPENV_PIPFILE:\n if not os.path.isfile(PIPENV_PIPFILE):\n raise RuntimeError(\"Given PIPENV_PIPFILE is not found!\")\n\n else:\n PIPENV_PIPFILE = _normalized(PIPENV_PIPFILE)\n# (path, file contents) => TOMLFile\n# keeps track of pipfiles that we've seen so we do not need to re-parse 'em\n_pipfile_cache = {}\n\n\nif PIPENV_TEST_INDEX:\n DEFAULT_SOURCE = {\n u\"url\": PIPENV_TEST_INDEX,\n u\"verify_ssl\": True,\n u\"name\": u\"custom\",\n }\nelse:\n DEFAULT_SOURCE = {\n u\"url\": u\"https://pypi.org/simple\",\n u\"verify_ssl\": True,\n u\"name\": u\"pypi\",\n }\n\npipfile.api.DEFAULT_SOURCE = DEFAULT_SOURCE\n\n\nclass SourceNotFound(KeyError):\n pass\n\n\nclass Project(object):\n \"\"\"docstring for Project\"\"\"\n\n _lockfile_encoder = _LockFileEncoder()\n\n def __init__(self, which=None, python_version=None, chdir=True):\n super(Project, self).__init__()\n self._name = None\n self._virtualenv_location = None\n self._download_location = None\n self._proper_names_db_path = None\n self._pipfile_location = None\n self._pipfile_newlines = DEFAULT_NEWLINES\n self._lockfile_newlines = DEFAULT_NEWLINES\n self._requirements_location = None\n self._original_dir = os.path.abspath(os.curdir)\n self._environment = None\n self._which = which\n self._build_system = {\n \"requires\": [\"setuptools\", \"wheel\"]\n }\n self.python_version = python_version\n # Hack to skip this during pipenv run, or -r.\n if (\"run\" not in sys.argv) and chdir:\n try:\n os.chdir(self.project_directory)\n except (TypeError, AttributeError):\n pass\n\n def path_to(self, p):\n \"\"\"Returns the absolute path to a given relative path.\"\"\"\n if os.path.isabs(p):\n return p\n\n return os.sep.join([self._original_dir, p])\n\n def _build_package_list(self, package_section):\n \"\"\"Returns a list of packages for pip-tools to consume.\"\"\"\n from pipenv.vendor.requirementslib.utils import is_vcs\n ps = {}\n # TODO: Separate the logic for showing packages from the filters for supplying pip-tools\n for k, v in self.parsed_pipfile.get(package_section, {}).items():\n # Skip editable VCS deps.\n if hasattr(v, \"keys\"):\n # When a vcs url is gven without editable it only appears as a key\n # Eliminate any vcs, path, or url entries which are not editable\n # Since pip-tools can't do deep 
resolution on them, even setuptools-installable ones\n if (\n is_vcs(v)\n or is_vcs(k)\n or (is_installable_file(k) or is_installable_file(v))\n or any(\n (\n prefix in v\n and (os.path.isfile(v[prefix]) or is_valid_url(v[prefix]))\n )\n for prefix in [\"path\", \"file\"]\n )\n ):\n # If they are editable, do resolve them\n if \"editable\" not in v:\n # allow wheels to be passed through\n if not (\n hasattr(v, \"keys\")\n and v.get(\"path\", v.get(\"file\", \"\")).endswith(\".whl\")\n ):\n continue\n ps.update({k: v})\n\n else:\n ps.update({k: v})\n else:\n ps.update({k: v})\n else:\n # Since these entries have no attributes we know they are not editable\n # So we can safely exclude things that need to be editable in order to be resolved\n # First exclude anything that is a vcs entry either in the key or value\n if not (\n any(is_vcs(i) for i in [k, v])\n or\n # Then exclude any installable files that are not directories\n # Because pip-tools can resolve setup.py for example\n any(is_installable_file(i) for i in [k, v])\n or\n # Then exclude any URLs because they need to be editable also\n # Things that are excluded can only be 'shallow resolved'\n any(is_valid_url(i) for i in [k, v])\n ):\n ps.update({k: v})\n return ps\n\n @property\n def name(self):\n if self._name is None:\n self._name = self.pipfile_location.split(os.sep)[-2]\n return self._name\n\n @property\n def pipfile_exists(self):\n return bool(self.pipfile_location)\n\n @property\n def required_python_version(self):\n if self.pipfile_exists:\n required = self.parsed_pipfile.get(\"requires\", {}).get(\n \"python_full_version\"\n )\n if not required:\n required = self.parsed_pipfile.get(\"requires\", {}).get(\"python_version\")\n if required != \"*\":\n return required\n\n @property\n def project_directory(self):\n if self.pipfile_location is not None:\n return os.path.abspath(os.path.join(self.pipfile_location, os.pardir))\n\n else:\n return None\n\n @property\n def requirements_exists(self):\n return bool(self.requirements_location)\n\n def is_venv_in_project(self):\n return PIPENV_VENV_IN_PROJECT or (\n self.project_directory\n and os.path.isdir(os.path.join(self.project_directory, \".venv\"))\n )\n\n @property\n def virtualenv_exists(self):\n # TODO: Decouple project from existence of Pipfile.\n if self.pipfile_exists and os.path.exists(self.virtualenv_location):\n if os.name == \"nt\":\n extra = [\"Scripts\", \"activate.bat\"]\n else:\n extra = [\"bin\", \"activate\"]\n return os.path.isfile(os.sep.join([self.virtualenv_location] + extra))\n\n return False\n\n def get_location_for_virtualenv(self):\n # If there's no project yet, set location based on config.\n if not self.project_directory:\n if self.is_venv_in_project():\n return os.path.abspath(\".venv\")\n return str(get_workon_home().joinpath(self.virtualenv_name))\n\n dot_venv = os.path.join(self.project_directory, \".venv\")\n\n # If there's no .venv in project root, set location based on config.\n if not os.path.exists(dot_venv):\n if self.is_venv_in_project():\n return dot_venv\n return str(get_workon_home().joinpath(self.virtualenv_name))\n\n # If .venv in project root is a directory, use it.\n if os.path.isdir(dot_venv):\n return dot_venv\n\n # Now we assume .venv in project root is a file. 
Use its content.\n with io.open(dot_venv) as f:\n name = f.read().strip()\n\n # If content looks like a path, use it as a relative path.\n # Otherwise use directory named after content in WORKON_HOME.\n if looks_like_dir(name):\n path = vistir.compat.Path(self.project_directory, name)\n return path.absolute().as_posix()\n return str(get_workon_home().joinpath(name))\n\n @property\n def working_set(self):\n from .utils import load_path\n sys_path = load_path(self.which(\"python\"))\n import pkg_resources\n return pkg_resources.WorkingSet(sys_path)\n\n @property\n def installed_packages(self):\n return self.environment.get_installed_packages()\n\n @property\n def installed_package_names(self):\n return get_canonical_names([pkg.key for pkg in self.installed_packages])\n\n @property\n def lockfile_package_names(self):\n dev_keys = get_canonical_names(self.lockfile_content[\"develop\"].keys())\n default_keys = get_canonical_names(self.lockfile_content[\"default\"].keys())\n return {\n \"dev\": dev_keys,\n \"default\": default_keys,\n \"combined\": dev_keys | default_keys\n }\n\n @property\n def pipfile_package_names(self):\n dev_keys = get_canonical_names(self.dev_packages.keys())\n default_keys = get_canonical_names(self.packages.keys())\n return {\n \"dev\": dev_keys,\n \"default\": default_keys,\n \"combined\": dev_keys | default_keys\n }\n\n @property\n def environment(self):\n if not self._environment:\n prefix = self.virtualenv_location\n is_venv = is_in_virtualenv()\n sources = self.sources if self.sources else [DEFAULT_SOURCE,]\n self._environment = Environment(\n prefix=prefix, is_venv=is_venv, sources=sources, pipfile=self.parsed_pipfile,\n project=self\n )\n self._environment.add_dist(\"pipenv\")\n return self._environment\n\n def get_outdated_packages(self):\n return self.environment.get_outdated_packages(pre=self.pipfile.get(\"pre\", False))\n\n @classmethod\n def _sanitize(cls, name):\n # Replace dangerous characters into '_'. 
The length of the sanitized\n # project name is limited as 42 because of the limit of linux kernel\n #\n # 42 = 127 - len('/home//.local/share/virtualenvs//bin/python2') - 32 - len('-HASHHASH')\n #\n # 127 : BINPRM_BUF_SIZE - 1\n # 32 : Maximum length of username\n #\n # References:\n # https://www.gnu.org/software/bash/manual/html_node/Double-Quotes.html\n # http://www.tldp.org/LDP/abs/html/special-chars.html#FIELDREF\n # https://github.com/torvalds/linux/blob/2bfe01ef/include/uapi/linux/binfmts.h#L18\n return re.sub(r'[ $`!*@\"\\\\\\r\\n\\t]', \"_\", name)[0:42]\n\n def _get_virtualenv_hash(self, name):\n \"\"\"Get the name of the virtualenv adjusted for windows if needed\n\n Returns (name, encoded_hash)\n \"\"\"\n\n def get_name(name, location):\n name = self._sanitize(name)\n hash = hashlib.sha256(location.encode()).digest()[:6]\n encoded_hash = base64.urlsafe_b64encode(hash).decode()\n return name, encoded_hash[:8]\n\n clean_name, encoded_hash = get_name(name, self.pipfile_location)\n venv_name = \"{0}-{1}\".format(clean_name, encoded_hash)\n\n # This should work most of the time for\n # Case-sensitive filesystems,\n # In-project venv\n # \"Proper\" path casing (on non-case-sensitive filesystems).\n if (\n not fnmatch.fnmatch(\"A\", \"a\")\n or self.is_venv_in_project()\n or get_workon_home().joinpath(venv_name).exists()\n ):\n return clean_name, encoded_hash\n\n # Check for different capitalization of the same project.\n for path in get_workon_home().iterdir():\n if not is_virtual_environment(path):\n continue\n try:\n env_name, hash_ = path.name.rsplit(\"-\", 1)\n except ValueError:\n continue\n if len(hash_) != 8 or env_name.lower() != name.lower():\n continue\n return get_name(env_name, self.pipfile_location.replace(name, env_name))\n\n # Use the default if no matching env exists.\n return clean_name, encoded_hash\n\n @property\n def virtualenv_name(self):\n sanitized, encoded_hash = self._get_virtualenv_hash(self.name)\n suffix = \"-{0}\".format(PIPENV_PYTHON) if PIPENV_PYTHON else \"\"\n # If the pipfile was located at '/home/user/MY_PROJECT/Pipfile',\n # the name of its virtualenv will be 'my-project-wyUfYPqE'\n return sanitized + \"-\" + encoded_hash + suffix\n\n @property\n def virtualenv_location(self):\n # if VIRTUAL_ENV is set, use that.\n virtualenv_env = os.getenv(\"VIRTUAL_ENV\")\n if (\"PIPENV_ACTIVE\" not in os.environ and\n not PIPENV_IGNORE_VIRTUALENVS and virtualenv_env):\n return virtualenv_env\n\n if not self._virtualenv_location: # Use cached version, if available.\n assert self.project_directory, \"project not created\"\n self._virtualenv_location = self.get_location_for_virtualenv()\n return self._virtualenv_location\n\n @property\n def virtualenv_src_location(self):\n if self.virtualenv_location:\n loc = os.sep.join([self.virtualenv_location, \"src\"])\n else:\n loc = os.sep.join([self.project_directory, \"src\"])\n vistir.path.mkdir_p(loc)\n return loc\n\n @property\n def download_location(self):\n if self._download_location is None:\n loc = os.sep.join([self.virtualenv_location, \"downloads\"])\n self._download_location = loc\n # Create the directory, if it doesn't exist.\n vistir.path.mkdir_p(self._download_location)\n return self._download_location\n\n @property\n def proper_names_db_path(self):\n if self._proper_names_db_path is None:\n self._proper_names_db_path = vistir.compat.Path(\n self.virtualenv_location, \"pipenv-proper-names.txt\"\n )\n self._proper_names_db_path.touch() # Ensure the file exists.\n return self._proper_names_db_path\n\n 
@property\n def proper_names(self):\n with self.proper_names_db_path.open() as f:\n return f.read().splitlines()\n\n def register_proper_name(self, name):\n \"\"\"Registers a proper name to the database.\"\"\"\n with self.proper_names_db_path.open(\"a\") as f:\n f.write(u\"{0}\\n\".format(name))\n\n @property\n def pipfile_location(self):\n if PIPENV_PIPFILE:\n return PIPENV_PIPFILE\n\n if self._pipfile_location is None:\n try:\n loc = pipfile.Pipfile.find(max_depth=PIPENV_MAX_DEPTH)\n except RuntimeError:\n loc = None\n self._pipfile_location = _normalized(loc)\n return self._pipfile_location\n\n @property\n def requirements_location(self):\n if self._requirements_location is None:\n try:\n loc = find_requirements(max_depth=PIPENV_MAX_DEPTH)\n except RuntimeError:\n loc = None\n self._requirements_location = loc\n return self._requirements_location\n\n @property\n def parsed_pipfile(self):\n \"\"\"Parse Pipfile into a TOMLFile and cache it\n\n (call clear_pipfile_cache() afterwards if mutating)\"\"\"\n contents = self.read_pipfile()\n # use full contents to get around str/bytes 2/3 issues\n cache_key = (self.pipfile_location, contents)\n if cache_key not in _pipfile_cache:\n parsed = self._parse_pipfile(contents)\n _pipfile_cache[cache_key] = parsed\n return _pipfile_cache[cache_key]\n\n def read_pipfile(self):\n # Open the pipfile, read it into memory.\n with io.open(self.pipfile_location) as f:\n contents = f.read()\n self._pipfile_newlines = preferred_newlines(f)\n\n return contents\n\n def clear_pipfile_cache(self):\n \"\"\"Clear pipfile cache (e.g., so we can mutate parsed pipfile)\"\"\"\n _pipfile_cache.clear()\n\n def _parse_pipfile(self, contents):\n try:\n return tomlkit.parse(contents)\n except Exception:\n # We lose comments here, but it's for the best.)\n # Fallback to toml parser, for large files.\n return toml.loads(contents)\n\n def _read_pyproject(self):\n pyproject = self.path_to(\"pyproject.toml\")\n if os.path.exists(pyproject):\n self._pyproject = toml.load(pyproject)\n build_system = self._pyproject.get(\"build-system\", None)\n if not os.path.exists(self.path_to(\"setup.py\")):\n if not build_system or not build_system.get(\"requires\"):\n build_system = {\n \"requires\": [\"setuptools>=38.2.5\", \"wheel\"],\n \"build-backend\": \"setuptools.build_meta\",\n }\n self._build_system = build_system\n\n @property\n def build_requires(self):\n return self._build_system.get(\"requires\", [])\n\n @property\n def build_backend(self):\n return self._build_system.get(\"build-backend\", None)\n\n @property\n def settings(self):\n \"\"\"A dictionary of the settings added to the Pipfile.\"\"\"\n return self.parsed_pipfile.get(\"pipenv\", {})\n\n def has_script(self, name):\n try:\n return name in self.parsed_pipfile[\"scripts\"]\n except KeyError:\n return False\n\n def build_script(self, name, extra_args=None):\n try:\n script = Script.parse(self.parsed_pipfile[\"scripts\"][name])\n except KeyError:\n script = Script(name)\n if extra_args:\n script.extend(extra_args)\n return script\n\n def update_settings(self, d):\n settings = self.settings\n changed = False\n for new in d:\n if new not in settings:\n settings[new] = d[new]\n changed = True\n if changed:\n p = self.parsed_pipfile\n p[\"pipenv\"] = settings\n # Write the changes to disk.\n self.write_toml(p)\n\n @property\n def _lockfile(self):\n \"\"\"Pipfile.lock divided by PyPI and external dependencies.\"\"\"\n pfile = pipfile.load(self.pipfile_location, inject_env=False)\n lockfile = json.loads(pfile.lock())\n for section 
in (\"default\", \"develop\"):\n lock_section = lockfile.get(section, {})\n for key in list(lock_section.keys()):\n norm_key = pep423_name(key)\n lockfile[section][norm_key] = lock_section.pop(key)\n return lockfile\n\n @property\n def _pipfile(self):\n from .vendor.requirementslib.models.pipfile import Pipfile as ReqLibPipfile\n pf = ReqLibPipfile.load(self.pipfile_location)\n return pf\n\n @property\n def lockfile_location(self):\n return \"{0}.lock\".format(self.pipfile_location)\n\n @property\n def lockfile_exists(self):\n return os.path.isfile(self.lockfile_location)\n\n @property\n def lockfile_content(self):\n return self.load_lockfile()\n\n def _get_editable_packages(self, dev=False):\n section = \"dev-packages\" if dev else \"packages\"\n # section = \"{0}-editable\".format(section)\n packages = {\n k: v\n # for k, v in self._pipfile[section].items()\n for k, v in self.parsed_pipfile.get(section, {}).items()\n if is_editable(k) or is_editable(v)\n }\n return packages\n\n def _get_vcs_packages(self, dev=False):\n from pipenv.vendor.requirementslib.utils import is_vcs\n section = \"dev-packages\" if dev else \"packages\"\n # section = \"{0}-vcs\".format(section)\n packages = {\n k: v\n # for k, v in self._pipfile[section].items()\n for k, v in self.parsed_pipfile.get(section, {}).items()\n if is_vcs(v) or is_vcs(k)\n }\n return packages or {}\n\n @property\n def editable_packages(self):\n return self._get_editable_packages(dev=False)\n\n @property\n def editable_dev_packages(self):\n return self._get_editable_packages(dev=True)\n\n @property\n def vcs_packages(self):\n \"\"\"Returns a list of VCS packages, for not pip-tools to consume.\"\"\"\n return self._get_vcs_packages(dev=False)\n\n @property\n def vcs_dev_packages(self):\n \"\"\"Returns a list of VCS packages, for not pip-tools to consume.\"\"\"\n return self._get_vcs_packages(dev=True)\n\n @property\n def all_packages(self):\n \"\"\"Returns a list of all packages.\"\"\"\n p = dict(self.parsed_pipfile.get(\"dev-packages\", {}))\n p.update(self.parsed_pipfile.get(\"packages\", {}))\n return p\n\n @property\n def packages(self):\n \"\"\"Returns a list of packages, for pip-tools to consume.\"\"\"\n return self._build_package_list(\"packages\")\n\n @property\n def dev_packages(self):\n \"\"\"Returns a list of dev-packages, for pip-tools to consume.\"\"\"\n return self._build_package_list(\"dev-packages\")\n\n def touch_pipfile(self):\n \"\"\"Simply touches the Pipfile, for later use.\"\"\"\n with open(\"Pipfile\", \"a\"):\n os.utime(\"Pipfile\", None)\n\n @property\n def pipfile_is_empty(self):\n if not self.pipfile_exists:\n return True\n\n if not len(self.read_pipfile()):\n return True\n\n return False\n\n def create_pipfile(self, python=None):\n \"\"\"Creates the Pipfile, filled with juicy defaults.\"\"\"\n from .vendor.pip_shims.shims import (\n ConfigOptionParser, make_option_group, index_group\n )\n\n name = self.name if self.name is not None else \"Pipfile\"\n config_parser = ConfigOptionParser(name=self.name)\n config_parser.add_option_group(make_option_group(index_group, config_parser))\n install = config_parser.option_groups[0]\n indexes = (\n \" \".join(install.get_option(\"--extra-index-url\").default)\n .lstrip(\"\\n\")\n .split(\"\\n\")\n )\n sources = [DEFAULT_SOURCE,]\n for i, index in enumerate(indexes):\n if not index:\n continue\n\n source_name = \"pip_index_{}\".format(i)\n verify_ssl = index.startswith(\"https\")\n sources.append(\n {u\"url\": index, u\"verify_ssl\": verify_ssl, u\"name\": source_name}\n )\n\n 
data = {\n u\"source\": sources,\n # Default packages.\n u\"packages\": {},\n u\"dev-packages\": {},\n }\n # Default requires.\n required_python = python\n if not python:\n if self.virtualenv_location:\n required_python = self.which(\"python\", self.virtualenv_location)\n else:\n required_python = self.which(\"python\")\n version = python_version(required_python) or PIPENV_DEFAULT_PYTHON_VERSION\n if version and len(version) >= 3:\n data[u\"requires\"] = {\"python_version\": version[: len(\"2.7\")]}\n self.write_toml(data)\n\n @classmethod\n def populate_source(cls, source):\n \"\"\"Derive missing values of source from the existing fields.\"\"\"\n # Only URL pararemter is mandatory, let the KeyError be thrown.\n if \"name\" not in source:\n source[\"name\"] = get_url_name(source[\"url\"])\n if \"verify_ssl\" not in source:\n source[\"verify_ssl\"] = \"https://\" in source[\"url\"]\n if not isinstance(source[\"verify_ssl\"], bool):\n source[\"verify_ssl\"] = source[\"verify_ssl\"].lower() == \"true\"\n return source\n\n def get_or_create_lockfile(self, from_pipfile=False):\n from pipenv.vendor.requirementslib.models.lockfile import Lockfile as Req_Lockfile\n lockfile = None\n if from_pipfile and self.pipfile_exists:\n lockfile_dict = {\n \"default\": self._lockfile[\"default\"].copy(),\n \"develop\": self._lockfile[\"develop\"].copy()\n }\n lockfile_dict.update({\"_meta\": self.get_lockfile_meta()})\n lockfile = Req_Lockfile.from_data(\n path=self.lockfile_location, data=lockfile_dict, meta_from_project=False\n )\n elif self.lockfile_exists:\n try:\n lockfile = Req_Lockfile.load(self.lockfile_location)\n except OSError:\n lockfile = Req_Lockfile.from_data(self.lockfile_location, self.lockfile_content)\n else:\n lockfile = Req_Lockfile.from_data(path=self.lockfile_location, data=self._lockfile, meta_from_project=False)\n if lockfile._lockfile is not None:\n return lockfile\n if self.lockfile_exists and self.lockfile_content:\n lockfile_dict = self.lockfile_content.copy()\n sources = lockfile_dict.get(\"_meta\", {}).get(\"sources\", [])\n if not sources:\n sources = self.pipfile_sources\n elif not isinstance(sources, list):\n sources = [sources,]\n lockfile_dict[\"_meta\"][\"sources\"] = [\n self.populate_source(s) for s in sources\n ]\n _created_lockfile = Req_Lockfile.from_data(\n path=self.lockfile_location, data=lockfile_dict, meta_from_project=False\n )\n lockfile._lockfile = lockfile.projectfile.model = _created_lockfile\n return lockfile\n else:\n return self.get_or_create_lockfile(from_pipfile=True)\n\n def get_lockfile_meta(self):\n from .vendor.plette.lockfiles import PIPFILE_SPEC_CURRENT\n if self.lockfile_exists:\n sources = self.lockfile_content.get(\"_meta\", {}).get(\"sources\", [])\n else:\n sources = [dict(source) for source in self.parsed_pipfile[\"source\"]]\n if not isinstance(sources, list):\n sources = [sources,]\n return {\n \"hash\": {\"sha256\": self.calculate_pipfile_hash()},\n \"pipfile-spec\": PIPFILE_SPEC_CURRENT,\n \"sources\": sources,\n \"requires\": self.parsed_pipfile.get(\"requires\", {})\n }\n\n def write_toml(self, data, path=None):\n \"\"\"Writes the given data structure out as TOML.\"\"\"\n if path is None:\n path = self.pipfile_location\n data = convert_toml_outline_tables(data)\n try:\n formatted_data = tomlkit.dumps(data).rstrip()\n except Exception:\n document = tomlkit.document()\n for section in (\"packages\", \"dev-packages\"):\n document[section] = tomlkit.container.Table()\n # Convert things to inline tables \u2014 fancy :)\n for package in 
data.get(section, {}):\n if hasattr(data[section][package], \"keys\"):\n table = tomlkit.inline_table()\n table.update(data[section][package])\n document[section][package] = table\n else:\n document[section][package] = tomlkit.string(data[section][package])\n formatted_data = tomlkit.dumps(document).rstrip()\n\n if (\n vistir.compat.Path(path).absolute()\n == vistir.compat.Path(self.pipfile_location).absolute()\n ):\n newlines = self._pipfile_newlines\n else:\n newlines = DEFAULT_NEWLINES\n formatted_data = cleanup_toml(formatted_data)\n with io.open(path, \"w\", newline=newlines) as f:\n f.write(formatted_data)\n # pipfile is mutated!\n self.clear_pipfile_cache()\n\n def write_lockfile(self, content):\n \"\"\"Write out the lockfile.\n \"\"\"\n s = self._lockfile_encoder.encode(content)\n open_kwargs = {\"newline\": self._lockfile_newlines, \"encoding\": \"utf-8\"}\n with vistir.contextmanagers.atomic_open_for_write(\n self.lockfile_location, **open_kwargs\n ) as f:\n f.write(s)\n # Write newline at end of document. GH-319.\n # Only need '\\n' here; the file object handles the rest.\n if not s.endswith(u\"\\n\"):\n f.write(u\"\\n\")\n\n @property\n def pipfile_sources(self):\n if \"source\" not in self.parsed_pipfile:\n return [DEFAULT_SOURCE]\n # We need to make copies of the source info so we don't\n # accidentally modify the cache. See #2100 where values are\n # written after the os.path.expandvars() call.\n return [\n {k: safe_expandvars(v) for k, v in source.items()}\n for source in self.parsed_pipfile[\"source\"]\n ]\n\n @property\n def sources(self):\n if self.lockfile_exists and hasattr(self.lockfile_content, \"keys\"):\n meta_ = self.lockfile_content.get(\"_meta\", {})\n sources_ = meta_.get(\"sources\")\n if sources_:\n return sources_\n\n else:\n return self.pipfile_sources\n\n def find_source(self, source):\n \"\"\"given a source, find it.\n\n source can be a url or an index name.\n \"\"\"\n if not is_valid_url(source):\n try:\n source = self.get_source(name=source)\n except SourceNotFound:\n source = self.get_source(url=source)\n else:\n source = self.get_source(url=source)\n return source\n\n def get_source(self, name=None, url=None):\n def find_source(sources, name=None, url=None):\n source = None\n if name:\n source = [s for s in sources if s.get(\"name\") == name]\n elif url:\n source = [s for s in sources if url.startswith(s.get(\"url\"))]\n if source:\n return first(source)\n\n found_source = find_source(self.sources, name=name, url=url)\n if found_source:\n return found_source\n found_source = find_source(self.pipfile_sources, name=name, url=url)\n if found_source:\n return found_source\n raise SourceNotFound(name or url)\n\n def get_package_name_in_pipfile(self, package_name, dev=False):\n \"\"\"Get the equivalent package name in pipfile\"\"\"\n key = \"dev-packages\" if dev else \"packages\"\n section = self.parsed_pipfile.get(key, {})\n package_name = pep423_name(package_name)\n for name in section.keys():\n if pep423_name(name) == package_name:\n return name\n return None\n\n def remove_package_from_pipfile(self, package_name, dev=False):\n # Read and append Pipfile.\n name = self.get_package_name_in_pipfile(package_name, dev)\n key = \"dev-packages\" if dev else \"packages\"\n p = self.parsed_pipfile\n if name:\n del p[key][name]\n self.write_toml(p)\n\n def remove_packages_from_pipfile(self, packages):\n parsed = self.parsed_pipfile\n packages = set([pep423_name(pkg) for pkg in packages])\n for section in (\"dev-packages\", \"packages\"):\n pipfile_section = 
parsed.get(section, {})\n pipfile_packages = set([\n pep423_name(pkg_name) for pkg_name in pipfile_section.keys()\n ])\n to_remove = packages & pipfile_packages\n # The normal toml parser can't handle deleting packages with preceding newlines\n is_dev = section == \"dev-packages\"\n for pkg in to_remove:\n pkg_name = self.get_package_name_in_pipfile(pkg, dev=is_dev)\n del parsed[section][pkg_name]\n self.write_toml(parsed)\n\n def add_package_to_pipfile(self, package, dev=False):\n from .vendor.requirementslib import Requirement\n\n # Read and append Pipfile.\n p = self.parsed_pipfile\n # Don't re-capitalize file URLs or VCSs.\n if not isinstance(package, Requirement):\n package = Requirement.from_line(package.strip())\n _, converted = package.pipfile_entry\n key = \"dev-packages\" if dev else \"packages\"\n # Set empty group if it doesn't exist yet.\n if key not in p:\n p[key] = {}\n name = self.get_package_name_in_pipfile(package.name, dev)\n if name and is_star(converted):\n # Skip for wildcard version\n return\n # Add the package to the group.\n p[key][name or pep423_name(package.name)] = converted\n # Write Pipfile.\n self.write_toml(p)\n\n def src_name_from_url(self, index_url):\n name, _, tld_guess = six.moves.urllib.parse.urlsplit(index_url).netloc.rpartition(\n \".\"\n )\n src_name = name.replace(\".\", \"\")\n try:\n self.get_source(name=src_name)\n except SourceNotFound:\n name = src_name\n else:\n from random import randint\n name = \"{0}-{1}\".format(src_name, randint(1, 1000))\n return name\n\n def add_index_to_pipfile(self, index, verify_ssl=True):\n \"\"\"Adds a given index to the Pipfile.\"\"\"\n # Read and append Pipfile.\n p = self.parsed_pipfile\n try:\n self.get_source(url=index)\n except SourceNotFound:\n source = {\"url\": index, \"verify_ssl\": verify_ssl}\n else:\n return\n source[\"name\"] = self.src_name_from_url(index)\n # Add the package to the group.\n if \"source\" not in p:\n p[\"source\"] = [source]\n else:\n p[\"source\"].append(source)\n # Write Pipfile.\n self.write_toml(p)\n\n def recase_pipfile(self):\n if self.ensure_proper_casing():\n self.write_toml(self.parsed_pipfile)\n\n def load_lockfile(self, expand_env_vars=True):\n with io.open(self.lockfile_location, encoding=\"utf-8\") as lock:\n j = json.load(lock)\n self._lockfile_newlines = preferred_newlines(lock)\n # lockfile is just a string\n if not j or not hasattr(j, \"keys\"):\n return j\n\n if expand_env_vars:\n # Expand environment variables in Pipfile.lock at runtime.\n for i, source in enumerate(j[\"_meta\"][\"sources\"][:]):\n j[\"_meta\"][\"sources\"][i][\"url\"] = os.path.expandvars(\n j[\"_meta\"][\"sources\"][i][\"url\"]\n )\n\n return j\n\n def get_lockfile_hash(self):\n if not os.path.exists(self.lockfile_location):\n return\n\n try:\n lockfile = self.load_lockfile(expand_env_vars=False)\n except ValueError:\n # Lockfile corrupted\n return \"\"\n if \"_meta\" in lockfile and hasattr(lockfile, \"keys\"):\n return lockfile[\"_meta\"].get(\"hash\", {}).get(\"sha256\")\n # Lockfile exists but has no hash at all\n return \"\"\n\n def calculate_pipfile_hash(self):\n # Update the lockfile if it is out-of-date.\n p = pipfile.load(self.pipfile_location, inject_env=False)\n return p.hash\n\n def ensure_proper_casing(self):\n \"\"\"Ensures proper casing of Pipfile packages\"\"\"\n pfile = self.parsed_pipfile\n casing_changed = self.proper_case_section(pfile.get(\"packages\", {}))\n casing_changed |= self.proper_case_section(pfile.get(\"dev-packages\", {}))\n return casing_changed\n\n def 
proper_case_section(self, section):\n \"\"\"Verify proper casing is retrieved, when available, for each\n dependency in the section.\n \"\"\"\n # Casing for section.\n changed_values = False\n unknown_names = [k for k in section.keys() if k not in set(self.proper_names)]\n # Replace each package with proper casing.\n for dep in unknown_names:\n try:\n # Get new casing for package name.\n new_casing = proper_case(dep)\n except IOError:\n # Unable to normalize package name.\n continue\n\n if new_casing != dep:\n changed_values = True\n self.register_proper_name(new_casing)\n # Replace old value with new value.\n old_value = section[dep]\n section[new_casing] = old_value\n del section[dep]\n # Return whether or not values have been changed.\n return changed_values\n\n @cached_property\n def finders(self):\n from .vendor.pythonfinder import Finder\n scripts_dirname = \"Scripts\" if os.name == \"nt\" else \"bin\"\n scripts_dir = os.path.join(self.virtualenv_location, scripts_dirname)\n finders = [\n Finder(path=scripts_dir, global_search=gs, system=False)\n for gs in (False, True)\n ]\n return finders\n\n @property\n def finder(self):\n return next(iter(self.finders), None)\n\n def which(self, search, as_path=True):\n find = operator.methodcaller(\"which\", search)\n result = next(iter(filter(None, (find(finder) for finder in self.finders))), None)\n if not result:\n result = self._which(search)\n else:\n if as_path:\n result = str(result.path)\n return result\n", "path": "pipenv/project.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nimport base64\nimport fnmatch\nimport glob\nimport hashlib\nimport io\nimport json\nimport operator\nimport os\nimport re\nimport sys\n\nimport six\nimport toml\nimport tomlkit\nimport vistir\n\nfrom first import first\n\nimport pipfile\nimport pipfile.api\n\nfrom cached_property import cached_property\n\nfrom .cmdparse import Script\nfrom .environment import Environment\nfrom .environments import (\n PIPENV_DEFAULT_PYTHON_VERSION, PIPENV_IGNORE_VIRTUALENVS, PIPENV_MAX_DEPTH,\n PIPENV_PIPFILE, PIPENV_PYTHON, PIPENV_TEST_INDEX, PIPENV_VENV_IN_PROJECT,\n is_in_virtualenv\n)\nfrom .utils import (\n cleanup_toml, convert_toml_outline_tables, find_requirements,\n get_canonical_names, get_url_name, get_workon_home, is_editable,\n is_installable_file, is_star, is_valid_url, is_virtual_environment,\n looks_like_dir, normalize_drive, pep423_name, proper_case, python_version,\n safe_expandvars\n)\n\n\ndef _normalized(p):\n if p is None:\n return None\n loc = vistir.compat.Path(p)\n if not loc.is_absolute():\n try:\n loc = loc.resolve()\n except OSError:\n loc = loc.absolute()\n # Recase the path properly on Windows. 
From https://stackoverflow.com/a/35229734/5043728\n if os.name == 'nt':\n matches = glob.glob(re.sub(r'([^:/\\\\])(?=[/\\\\]|$)', r'[\\1]', str(loc)))\n path_str = matches and matches[0] or str(loc)\n else:\n path_str = str(loc)\n return normalize_drive(path_str)\n\n\nDEFAULT_NEWLINES = u\"\\n\"\n\n\nclass _LockFileEncoder(json.JSONEncoder):\n \"\"\"A specilized JSON encoder to convert loaded TOML data into a lock file.\n\n This adds a few characteristics to the encoder:\n\n * The JSON is always prettified with indents and spaces.\n * TOMLKit's container elements are seamlessly encodable.\n * The output is always UTF-8-encoded text, never binary, even on Python 2.\n \"\"\"\n\n def __init__(self):\n super(_LockFileEncoder, self).__init__(\n indent=4, separators=(\",\", \": \"), sort_keys=True\n )\n\n def default(self, obj):\n if isinstance(obj, vistir.compat.Path):\n obj = obj.as_posix()\n return super(_LockFileEncoder, self).default(obj)\n\n def encode(self, obj):\n content = super(_LockFileEncoder, self).encode(obj)\n if not isinstance(content, six.text_type):\n content = content.decode(\"utf-8\")\n return content\n\n\ndef preferred_newlines(f):\n if isinstance(f.newlines, six.text_type):\n return f.newlines\n return DEFAULT_NEWLINES\n\n\nif PIPENV_PIPFILE:\n if not os.path.isfile(PIPENV_PIPFILE):\n raise RuntimeError(\"Given PIPENV_PIPFILE is not found!\")\n\n else:\n PIPENV_PIPFILE = _normalized(PIPENV_PIPFILE)\n# (path, file contents) => TOMLFile\n# keeps track of pipfiles that we've seen so we do not need to re-parse 'em\n_pipfile_cache = {}\n\n\nif PIPENV_TEST_INDEX:\n DEFAULT_SOURCE = {\n u\"url\": PIPENV_TEST_INDEX,\n u\"verify_ssl\": True,\n u\"name\": u\"custom\",\n }\nelse:\n DEFAULT_SOURCE = {\n u\"url\": u\"https://pypi.org/simple\",\n u\"verify_ssl\": True,\n u\"name\": u\"pypi\",\n }\n\npipfile.api.DEFAULT_SOURCE = DEFAULT_SOURCE\n\n\nclass SourceNotFound(KeyError):\n pass\n\n\nclass Project(object):\n \"\"\"docstring for Project\"\"\"\n\n _lockfile_encoder = _LockFileEncoder()\n\n def __init__(self, which=None, python_version=None, chdir=True):\n super(Project, self).__init__()\n self._name = None\n self._virtualenv_location = None\n self._download_location = None\n self._proper_names_db_path = None\n self._pipfile_location = None\n self._pipfile_newlines = DEFAULT_NEWLINES\n self._lockfile_newlines = DEFAULT_NEWLINES\n self._requirements_location = None\n self._original_dir = os.path.abspath(os.curdir)\n self._environment = None\n self._which = which\n self._build_system = {\n \"requires\": [\"setuptools\", \"wheel\"]\n }\n self.python_version = python_version\n # Hack to skip this during pipenv run, or -r.\n if (\"run\" not in sys.argv) and chdir:\n try:\n os.chdir(self.project_directory)\n except (TypeError, AttributeError):\n pass\n\n def path_to(self, p):\n \"\"\"Returns the absolute path to a given relative path.\"\"\"\n if os.path.isabs(p):\n return p\n\n return os.sep.join([self._original_dir, p])\n\n def _build_package_list(self, package_section):\n \"\"\"Returns a list of packages for pip-tools to consume.\"\"\"\n from pipenv.vendor.requirementslib.utils import is_vcs\n ps = {}\n # TODO: Separate the logic for showing packages from the filters for supplying pip-tools\n for k, v in self.parsed_pipfile.get(package_section, {}).items():\n # Skip editable VCS deps.\n if hasattr(v, \"keys\"):\n # When a vcs url is gven without editable it only appears as a key\n # Eliminate any vcs, path, or url entries which are not editable\n # Since pip-tools can't do deep 
resolution on them, even setuptools-installable ones\n if (\n is_vcs(v)\n or is_vcs(k)\n or (is_installable_file(k) or is_installable_file(v))\n or any(\n (\n prefix in v\n and (os.path.isfile(v[prefix]) or is_valid_url(v[prefix]))\n )\n for prefix in [\"path\", \"file\"]\n )\n ):\n # If they are editable, do resolve them\n if \"editable\" not in v:\n # allow wheels to be passed through\n if not (\n hasattr(v, \"keys\")\n and v.get(\"path\", v.get(\"file\", \"\")).endswith(\".whl\")\n ):\n continue\n ps.update({k: v})\n\n else:\n ps.update({k: v})\n else:\n ps.update({k: v})\n else:\n # Since these entries have no attributes we know they are not editable\n # So we can safely exclude things that need to be editable in order to be resolved\n # First exclude anything that is a vcs entry either in the key or value\n if not (\n any(is_vcs(i) for i in [k, v])\n or\n # Then exclude any installable files that are not directories\n # Because pip-tools can resolve setup.py for example\n any(is_installable_file(i) for i in [k, v])\n or\n # Then exclude any URLs because they need to be editable also\n # Things that are excluded can only be 'shallow resolved'\n any(is_valid_url(i) for i in [k, v])\n ):\n ps.update({k: v})\n return ps\n\n @property\n def name(self):\n if self._name is None:\n self._name = self.pipfile_location.split(os.sep)[-2]\n return self._name\n\n @property\n def pipfile_exists(self):\n return bool(self.pipfile_location)\n\n @property\n def required_python_version(self):\n if self.pipfile_exists:\n required = self.parsed_pipfile.get(\"requires\", {}).get(\n \"python_full_version\"\n )\n if not required:\n required = self.parsed_pipfile.get(\"requires\", {}).get(\"python_version\")\n if required != \"*\":\n return required\n\n @property\n def project_directory(self):\n if self.pipfile_location is not None:\n return os.path.abspath(os.path.join(self.pipfile_location, os.pardir))\n\n else:\n return None\n\n @property\n def requirements_exists(self):\n return bool(self.requirements_location)\n\n def is_venv_in_project(self):\n return PIPENV_VENV_IN_PROJECT or (\n self.project_directory\n and os.path.isdir(os.path.join(self.project_directory, \".venv\"))\n )\n\n @property\n def virtualenv_exists(self):\n # TODO: Decouple project from existence of Pipfile.\n if self.pipfile_exists and os.path.exists(self.virtualenv_location):\n if os.name == \"nt\":\n extra = [\"Scripts\", \"activate.bat\"]\n else:\n extra = [\"bin\", \"activate\"]\n return os.path.isfile(os.sep.join([self.virtualenv_location] + extra))\n\n return False\n\n def get_location_for_virtualenv(self):\n # If there's no project yet, set location based on config.\n if not self.project_directory:\n if self.is_venv_in_project():\n return os.path.abspath(\".venv\")\n return str(get_workon_home().joinpath(self.virtualenv_name))\n\n dot_venv = os.path.join(self.project_directory, \".venv\")\n\n # If there's no .venv in project root, set location based on config.\n if not os.path.exists(dot_venv):\n if self.is_venv_in_project():\n return dot_venv\n return str(get_workon_home().joinpath(self.virtualenv_name))\n\n # If .venv in project root is a directory, use it.\n if os.path.isdir(dot_venv):\n return dot_venv\n\n # Now we assume .venv in project root is a file. 
Use its content.\n with io.open(dot_venv) as f:\n name = f.read().strip()\n\n # If content looks like a path, use it as a relative path.\n # Otherwise use directory named after content in WORKON_HOME.\n if looks_like_dir(name):\n path = vistir.compat.Path(self.project_directory, name)\n return path.absolute().as_posix()\n return str(get_workon_home().joinpath(name))\n\n @property\n def working_set(self):\n from .utils import load_path\n sys_path = load_path(self.which(\"python\"))\n import pkg_resources\n return pkg_resources.WorkingSet(sys_path)\n\n @property\n def installed_packages(self):\n return self.environment.get_installed_packages()\n\n @property\n def installed_package_names(self):\n return get_canonical_names([pkg.key for pkg in self.installed_packages])\n\n @property\n def lockfile_package_names(self):\n dev_keys = get_canonical_names(self.lockfile_content[\"develop\"].keys())\n default_keys = get_canonical_names(self.lockfile_content[\"default\"].keys())\n return {\n \"dev\": dev_keys,\n \"default\": default_keys,\n \"combined\": dev_keys | default_keys\n }\n\n @property\n def pipfile_package_names(self):\n dev_keys = get_canonical_names(self.dev_packages.keys())\n default_keys = get_canonical_names(self.packages.keys())\n return {\n \"dev\": dev_keys,\n \"default\": default_keys,\n \"combined\": dev_keys | default_keys\n }\n\n @property\n def environment(self):\n if not self._environment:\n prefix = self.virtualenv_location\n is_venv = is_in_virtualenv()\n sources = self.sources if self.sources else [DEFAULT_SOURCE,]\n self._environment = Environment(\n prefix=prefix, is_venv=is_venv, sources=sources, pipfile=self.parsed_pipfile,\n project=self\n )\n self._environment.add_dist(\"pipenv\")\n return self._environment\n\n def get_outdated_packages(self):\n return self.environment.get_outdated_packages(pre=self.pipfile.get(\"pre\", False))\n\n @classmethod\n def _sanitize(cls, name):\n # Replace dangerous characters into '_'. 
The length of the sanitized\n # project name is limited as 42 because of the limit of linux kernel\n #\n # 42 = 127 - len('/home//.local/share/virtualenvs//bin/python2') - 32 - len('-HASHHASH')\n #\n # 127 : BINPRM_BUF_SIZE - 1\n # 32 : Maximum length of username\n #\n # References:\n # https://www.gnu.org/software/bash/manual/html_node/Double-Quotes.html\n # http://www.tldp.org/LDP/abs/html/special-chars.html#FIELDREF\n # https://github.com/torvalds/linux/blob/2bfe01ef/include/uapi/linux/binfmts.h#L18\n return re.sub(r'[ $`!*@\"\\\\\\r\\n\\t]', \"_\", name)[0:42]\n\n def _get_virtualenv_hash(self, name):\n \"\"\"Get the name of the virtualenv adjusted for windows if needed\n\n Returns (name, encoded_hash)\n \"\"\"\n\n def get_name(name, location):\n name = self._sanitize(name)\n hash = hashlib.sha256(location.encode()).digest()[:6]\n encoded_hash = base64.urlsafe_b64encode(hash).decode()\n return name, encoded_hash[:8]\n\n clean_name, encoded_hash = get_name(name, self.pipfile_location)\n venv_name = \"{0}-{1}\".format(clean_name, encoded_hash)\n\n # This should work most of the time for\n # Case-sensitive filesystems,\n # In-project venv\n # \"Proper\" path casing (on non-case-sensitive filesystems).\n if (\n not fnmatch.fnmatch(\"A\", \"a\")\n or self.is_venv_in_project()\n or get_workon_home().joinpath(venv_name).exists()\n ):\n return clean_name, encoded_hash\n\n # Check for different capitalization of the same project.\n for path in get_workon_home().iterdir():\n if not is_virtual_environment(path):\n continue\n try:\n env_name, hash_ = path.name.rsplit(\"-\", 1)\n except ValueError:\n continue\n if len(hash_) != 8 or env_name.lower() != name.lower():\n continue\n return get_name(env_name, self.pipfile_location.replace(name, env_name))\n\n # Use the default if no matching env exists.\n return clean_name, encoded_hash\n\n @property\n def virtualenv_name(self):\n sanitized, encoded_hash = self._get_virtualenv_hash(self.name)\n suffix = \"-{0}\".format(PIPENV_PYTHON) if PIPENV_PYTHON else \"\"\n # If the pipfile was located at '/home/user/MY_PROJECT/Pipfile',\n # the name of its virtualenv will be 'my-project-wyUfYPqE'\n return sanitized + \"-\" + encoded_hash + suffix\n\n @property\n def virtualenv_location(self):\n # if VIRTUAL_ENV is set, use that.\n virtualenv_env = os.getenv(\"VIRTUAL_ENV\")\n if (\"PIPENV_ACTIVE\" not in os.environ and\n not PIPENV_IGNORE_VIRTUALENVS and virtualenv_env):\n return virtualenv_env\n\n if not self._virtualenv_location: # Use cached version, if available.\n assert self.project_directory, \"project not created\"\n self._virtualenv_location = self.get_location_for_virtualenv()\n return self._virtualenv_location\n\n @property\n def virtualenv_src_location(self):\n if self.virtualenv_location:\n loc = os.sep.join([self.virtualenv_location, \"src\"])\n else:\n loc = os.sep.join([self.project_directory, \"src\"])\n vistir.path.mkdir_p(loc)\n return loc\n\n @property\n def download_location(self):\n if self._download_location is None:\n loc = os.sep.join([self.virtualenv_location, \"downloads\"])\n self._download_location = loc\n # Create the directory, if it doesn't exist.\n vistir.path.mkdir_p(self._download_location)\n return self._download_location\n\n @property\n def proper_names_db_path(self):\n if self._proper_names_db_path is None:\n self._proper_names_db_path = vistir.compat.Path(\n self.virtualenv_location, \"pipenv-proper-names.txt\"\n )\n self._proper_names_db_path.touch() # Ensure the file exists.\n return self._proper_names_db_path\n\n 
@property\n def proper_names(self):\n with self.proper_names_db_path.open() as f:\n return f.read().splitlines()\n\n def register_proper_name(self, name):\n \"\"\"Registers a proper name to the database.\"\"\"\n with self.proper_names_db_path.open(\"a\") as f:\n f.write(u\"{0}\\n\".format(name))\n\n @property\n def pipfile_location(self):\n if PIPENV_PIPFILE:\n return PIPENV_PIPFILE\n\n if self._pipfile_location is None:\n try:\n loc = pipfile.Pipfile.find(max_depth=PIPENV_MAX_DEPTH)\n except RuntimeError:\n loc = None\n self._pipfile_location = _normalized(loc)\n return self._pipfile_location\n\n @property\n def requirements_location(self):\n if self._requirements_location is None:\n try:\n loc = find_requirements(max_depth=PIPENV_MAX_DEPTH)\n except RuntimeError:\n loc = None\n self._requirements_location = loc\n return self._requirements_location\n\n @property\n def parsed_pipfile(self):\n \"\"\"Parse Pipfile into a TOMLFile and cache it\n\n (call clear_pipfile_cache() afterwards if mutating)\"\"\"\n contents = self.read_pipfile()\n # use full contents to get around str/bytes 2/3 issues\n cache_key = (self.pipfile_location, contents)\n if cache_key not in _pipfile_cache:\n parsed = self._parse_pipfile(contents)\n _pipfile_cache[cache_key] = parsed\n return _pipfile_cache[cache_key]\n\n def read_pipfile(self):\n # Open the pipfile, read it into memory.\n with io.open(self.pipfile_location) as f:\n contents = f.read()\n self._pipfile_newlines = preferred_newlines(f)\n\n return contents\n\n def clear_pipfile_cache(self):\n \"\"\"Clear pipfile cache (e.g., so we can mutate parsed pipfile)\"\"\"\n _pipfile_cache.clear()\n\n def _parse_pipfile(self, contents):\n try:\n return tomlkit.parse(contents)\n except Exception:\n # We lose comments here, but it's for the best.)\n # Fallback to toml parser, for large files.\n return toml.loads(contents)\n\n def _read_pyproject(self):\n pyproject = self.path_to(\"pyproject.toml\")\n if os.path.exists(pyproject):\n self._pyproject = toml.load(pyproject)\n build_system = self._pyproject.get(\"build-system\", None)\n if not os.path.exists(self.path_to(\"setup.py\")):\n if not build_system or not build_system.get(\"requires\"):\n build_system = {\n \"requires\": [\"setuptools>=38.2.5\", \"wheel\"],\n \"build-backend\": \"setuptools.build_meta\",\n }\n self._build_system = build_system\n\n @property\n def build_requires(self):\n return self._build_system.get(\"requires\", [])\n\n @property\n def build_backend(self):\n return self._build_system.get(\"build-backend\", None)\n\n @property\n def settings(self):\n \"\"\"A dictionary of the settings added to the Pipfile.\"\"\"\n return self.parsed_pipfile.get(\"pipenv\", {})\n\n def has_script(self, name):\n try:\n return name in self.parsed_pipfile[\"scripts\"]\n except KeyError:\n return False\n\n def build_script(self, name, extra_args=None):\n try:\n script = Script.parse(self.parsed_pipfile[\"scripts\"][name])\n except KeyError:\n script = Script(name)\n if extra_args:\n script.extend(extra_args)\n return script\n\n def update_settings(self, d):\n settings = self.settings\n changed = False\n for new in d:\n if new not in settings:\n settings[new] = d[new]\n changed = True\n if changed:\n p = self.parsed_pipfile\n p[\"pipenv\"] = settings\n # Write the changes to disk.\n self.write_toml(p)\n\n @property\n def _lockfile(self):\n \"\"\"Pipfile.lock divided by PyPI and external dependencies.\"\"\"\n pfile = pipfile.load(self.pipfile_location, inject_env=False)\n lockfile = json.loads(pfile.lock())\n for section 
in (\"default\", \"develop\"):\n lock_section = lockfile.get(section, {})\n for key in list(lock_section.keys()):\n norm_key = pep423_name(key)\n lockfile[section][norm_key] = lock_section.pop(key)\n return lockfile\n\n @property\n def _pipfile(self):\n from .vendor.requirementslib.models.pipfile import Pipfile as ReqLibPipfile\n pf = ReqLibPipfile.load(self.pipfile_location)\n return pf\n\n @property\n def lockfile_location(self):\n return \"{0}.lock\".format(self.pipfile_location)\n\n @property\n def lockfile_exists(self):\n return os.path.isfile(self.lockfile_location)\n\n @property\n def lockfile_content(self):\n return self.load_lockfile()\n\n def _get_editable_packages(self, dev=False):\n section = \"dev-packages\" if dev else \"packages\"\n # section = \"{0}-editable\".format(section)\n packages = {\n k: v\n # for k, v in self._pipfile[section].items()\n for k, v in self.parsed_pipfile.get(section, {}).items()\n if is_editable(k) or is_editable(v)\n }\n return packages\n\n def _get_vcs_packages(self, dev=False):\n from pipenv.vendor.requirementslib.utils import is_vcs\n section = \"dev-packages\" if dev else \"packages\"\n # section = \"{0}-vcs\".format(section)\n packages = {\n k: v\n # for k, v in self._pipfile[section].items()\n for k, v in self.parsed_pipfile.get(section, {}).items()\n if is_vcs(v) or is_vcs(k)\n }\n return packages or {}\n\n @property\n def editable_packages(self):\n return self._get_editable_packages(dev=False)\n\n @property\n def editable_dev_packages(self):\n return self._get_editable_packages(dev=True)\n\n @property\n def vcs_packages(self):\n \"\"\"Returns a list of VCS packages, for not pip-tools to consume.\"\"\"\n return self._get_vcs_packages(dev=False)\n\n @property\n def vcs_dev_packages(self):\n \"\"\"Returns a list of VCS packages, for not pip-tools to consume.\"\"\"\n return self._get_vcs_packages(dev=True)\n\n @property\n def all_packages(self):\n \"\"\"Returns a list of all packages.\"\"\"\n p = dict(self.parsed_pipfile.get(\"dev-packages\", {}))\n p.update(self.parsed_pipfile.get(\"packages\", {}))\n return p\n\n @property\n def packages(self):\n \"\"\"Returns a list of packages, for pip-tools to consume.\"\"\"\n return self._build_package_list(\"packages\")\n\n @property\n def dev_packages(self):\n \"\"\"Returns a list of dev-packages, for pip-tools to consume.\"\"\"\n return self._build_package_list(\"dev-packages\")\n\n def touch_pipfile(self):\n \"\"\"Simply touches the Pipfile, for later use.\"\"\"\n with open(\"Pipfile\", \"a\"):\n os.utime(\"Pipfile\", None)\n\n @property\n def pipfile_is_empty(self):\n if not self.pipfile_exists:\n return True\n\n if not len(self.read_pipfile()):\n return True\n\n return False\n\n def create_pipfile(self, python=None):\n \"\"\"Creates the Pipfile, filled with juicy defaults.\"\"\"\n from .vendor.pip_shims.shims import (\n ConfigOptionParser, make_option_group, index_group\n )\n\n name = self.name if self.name is not None else \"Pipfile\"\n config_parser = ConfigOptionParser(name=self.name)\n config_parser.add_option_group(make_option_group(index_group, config_parser))\n install = config_parser.option_groups[0]\n indexes = (\n \" \".join(install.get_option(\"--extra-index-url\").default)\n .lstrip(\"\\n\")\n .split(\"\\n\")\n )\n sources = [DEFAULT_SOURCE,]\n for i, index in enumerate(indexes):\n if not index:\n continue\n\n source_name = \"pip_index_{}\".format(i)\n verify_ssl = index.startswith(\"https\")\n sources.append(\n {u\"url\": index, u\"verify_ssl\": verify_ssl, u\"name\": source_name}\n )\n\n 
data = {\n u\"source\": sources,\n # Default packages.\n u\"packages\": {},\n u\"dev-packages\": {},\n }\n # Default requires.\n required_python = python\n if not python:\n if self.virtualenv_location:\n required_python = self.which(\"python\", self.virtualenv_location)\n else:\n required_python = self.which(\"python\")\n version = python_version(required_python) or PIPENV_DEFAULT_PYTHON_VERSION\n if version and len(version) >= 3:\n data[u\"requires\"] = {\"python_version\": version[: len(\"2.7\")]}\n self.write_toml(data)\n\n @classmethod\n def populate_source(cls, source):\n \"\"\"Derive missing values of source from the existing fields.\"\"\"\n # Only URL pararemter is mandatory, let the KeyError be thrown.\n if \"name\" not in source:\n source[\"name\"] = get_url_name(source[\"url\"])\n if \"verify_ssl\" not in source:\n source[\"verify_ssl\"] = \"https://\" in source[\"url\"]\n if not isinstance(source[\"verify_ssl\"], bool):\n source[\"verify_ssl\"] = source[\"verify_ssl\"].lower() == \"true\"\n return source\n\n def get_or_create_lockfile(self, from_pipfile=False):\n from pipenv.vendor.requirementslib.models.lockfile import Lockfile as Req_Lockfile\n lockfile = None\n if from_pipfile and self.pipfile_exists:\n lockfile_dict = {\n \"default\": self._lockfile[\"default\"].copy(),\n \"develop\": self._lockfile[\"develop\"].copy()\n }\n lockfile_dict.update({\"_meta\": self.get_lockfile_meta()})\n lockfile = Req_Lockfile.from_data(\n path=self.lockfile_location, data=lockfile_dict, meta_from_project=False\n )\n elif self.lockfile_exists:\n try:\n lockfile = Req_Lockfile.load(self.lockfile_location)\n except OSError:\n lockfile = Req_Lockfile.from_data(self.lockfile_location, self.lockfile_content)\n else:\n lockfile = Req_Lockfile.from_data(path=self.lockfile_location, data=self._lockfile, meta_from_project=False)\n if lockfile._lockfile is not None:\n return lockfile\n if self.lockfile_exists and self.lockfile_content:\n lockfile_dict = self.lockfile_content.copy()\n sources = lockfile_dict.get(\"_meta\", {}).get(\"sources\", [])\n if not sources:\n sources = self.pipfile_sources\n elif not isinstance(sources, list):\n sources = [sources,]\n lockfile_dict[\"_meta\"][\"sources\"] = [\n self.populate_source(s) for s in sources\n ]\n _created_lockfile = Req_Lockfile.from_data(\n path=self.lockfile_location, data=lockfile_dict, meta_from_project=False\n )\n lockfile._lockfile = lockfile.projectfile.model = _created_lockfile\n return lockfile\n else:\n return self.get_or_create_lockfile(from_pipfile=True)\n\n def get_lockfile_meta(self):\n from .vendor.plette.lockfiles import PIPFILE_SPEC_CURRENT\n if self.lockfile_exists:\n sources = self.lockfile_content.get(\"_meta\", {}).get(\"sources\", [])\n else:\n sources = [dict(source) for source in self.parsed_pipfile[\"source\"]]\n if not isinstance(sources, list):\n sources = [sources,]\n return {\n \"hash\": {\"sha256\": self.calculate_pipfile_hash()},\n \"pipfile-spec\": PIPFILE_SPEC_CURRENT,\n \"sources\": [self.populate_source(s) for s in sources],\n \"requires\": self.parsed_pipfile.get(\"requires\", {})\n }\n\n def write_toml(self, data, path=None):\n \"\"\"Writes the given data structure out as TOML.\"\"\"\n if path is None:\n path = self.pipfile_location\n data = convert_toml_outline_tables(data)\n try:\n formatted_data = tomlkit.dumps(data).rstrip()\n except Exception:\n document = tomlkit.document()\n for section in (\"packages\", \"dev-packages\"):\n document[section] = tomlkit.container.Table()\n # Convert things to inline tables 
\u2014 fancy :)\n for package in data.get(section, {}):\n if hasattr(data[section][package], \"keys\"):\n table = tomlkit.inline_table()\n table.update(data[section][package])\n document[section][package] = table\n else:\n document[section][package] = tomlkit.string(data[section][package])\n formatted_data = tomlkit.dumps(document).rstrip()\n\n if (\n vistir.compat.Path(path).absolute()\n == vistir.compat.Path(self.pipfile_location).absolute()\n ):\n newlines = self._pipfile_newlines\n else:\n newlines = DEFAULT_NEWLINES\n formatted_data = cleanup_toml(formatted_data)\n with io.open(path, \"w\", newline=newlines) as f:\n f.write(formatted_data)\n # pipfile is mutated!\n self.clear_pipfile_cache()\n\n def write_lockfile(self, content):\n \"\"\"Write out the lockfile.\n \"\"\"\n s = self._lockfile_encoder.encode(content)\n open_kwargs = {\"newline\": self._lockfile_newlines, \"encoding\": \"utf-8\"}\n with vistir.contextmanagers.atomic_open_for_write(\n self.lockfile_location, **open_kwargs\n ) as f:\n f.write(s)\n # Write newline at end of document. GH-319.\n # Only need '\\n' here; the file object handles the rest.\n if not s.endswith(u\"\\n\"):\n f.write(u\"\\n\")\n\n @property\n def pipfile_sources(self):\n if \"source\" not in self.parsed_pipfile:\n return [DEFAULT_SOURCE]\n # We need to make copies of the source info so we don't\n # accidentally modify the cache. See #2100 where values are\n # written after the os.path.expandvars() call.\n return [\n {k: safe_expandvars(v) for k, v in source.items()}\n for source in self.parsed_pipfile[\"source\"]\n ]\n\n @property\n def sources(self):\n if self.lockfile_exists and hasattr(self.lockfile_content, \"keys\"):\n meta_ = self.lockfile_content.get(\"_meta\", {})\n sources_ = meta_.get(\"sources\")\n if sources_:\n return sources_\n\n else:\n return self.pipfile_sources\n\n def find_source(self, source):\n \"\"\"given a source, find it.\n\n source can be a url or an index name.\n \"\"\"\n if not is_valid_url(source):\n try:\n source = self.get_source(name=source)\n except SourceNotFound:\n source = self.get_source(url=source)\n else:\n source = self.get_source(url=source)\n return source\n\n def get_source(self, name=None, url=None):\n def find_source(sources, name=None, url=None):\n source = None\n if name:\n source = [s for s in sources if s.get(\"name\") == name]\n elif url:\n source = [s for s in sources if url.startswith(s.get(\"url\"))]\n if source:\n return first(source)\n\n found_source = find_source(self.sources, name=name, url=url)\n if found_source:\n return found_source\n found_source = find_source(self.pipfile_sources, name=name, url=url)\n if found_source:\n return found_source\n raise SourceNotFound(name or url)\n\n def get_package_name_in_pipfile(self, package_name, dev=False):\n \"\"\"Get the equivalent package name in pipfile\"\"\"\n key = \"dev-packages\" if dev else \"packages\"\n section = self.parsed_pipfile.get(key, {})\n package_name = pep423_name(package_name)\n for name in section.keys():\n if pep423_name(name) == package_name:\n return name\n return None\n\n def remove_package_from_pipfile(self, package_name, dev=False):\n # Read and append Pipfile.\n name = self.get_package_name_in_pipfile(package_name, dev)\n key = \"dev-packages\" if dev else \"packages\"\n p = self.parsed_pipfile\n if name:\n del p[key][name]\n self.write_toml(p)\n\n def remove_packages_from_pipfile(self, packages):\n parsed = self.parsed_pipfile\n packages = set([pep423_name(pkg) for pkg in packages])\n for section in (\"dev-packages\", 
\"packages\"):\n pipfile_section = parsed.get(section, {})\n pipfile_packages = set([\n pep423_name(pkg_name) for pkg_name in pipfile_section.keys()\n ])\n to_remove = packages & pipfile_packages\n # The normal toml parser can't handle deleting packages with preceding newlines\n is_dev = section == \"dev-packages\"\n for pkg in to_remove:\n pkg_name = self.get_package_name_in_pipfile(pkg, dev=is_dev)\n del parsed[section][pkg_name]\n self.write_toml(parsed)\n\n def add_package_to_pipfile(self, package, dev=False):\n from .vendor.requirementslib import Requirement\n\n # Read and append Pipfile.\n p = self.parsed_pipfile\n # Don't re-capitalize file URLs or VCSs.\n if not isinstance(package, Requirement):\n package = Requirement.from_line(package.strip())\n _, converted = package.pipfile_entry\n key = \"dev-packages\" if dev else \"packages\"\n # Set empty group if it doesn't exist yet.\n if key not in p:\n p[key] = {}\n name = self.get_package_name_in_pipfile(package.name, dev)\n if name and is_star(converted):\n # Skip for wildcard version\n return\n # Add the package to the group.\n p[key][name or pep423_name(package.name)] = converted\n # Write Pipfile.\n self.write_toml(p)\n\n def src_name_from_url(self, index_url):\n name, _, tld_guess = six.moves.urllib.parse.urlsplit(index_url).netloc.rpartition(\n \".\"\n )\n src_name = name.replace(\".\", \"\")\n try:\n self.get_source(name=src_name)\n except SourceNotFound:\n name = src_name\n else:\n from random import randint\n name = \"{0}-{1}\".format(src_name, randint(1, 1000))\n return name\n\n def add_index_to_pipfile(self, index, verify_ssl=True):\n \"\"\"Adds a given index to the Pipfile.\"\"\"\n # Read and append Pipfile.\n p = self.parsed_pipfile\n try:\n self.get_source(url=index)\n except SourceNotFound:\n source = {\"url\": index, \"verify_ssl\": verify_ssl}\n else:\n return\n source[\"name\"] = self.src_name_from_url(index)\n # Add the package to the group.\n if \"source\" not in p:\n p[\"source\"] = [source]\n else:\n p[\"source\"].append(source)\n # Write Pipfile.\n self.write_toml(p)\n\n def recase_pipfile(self):\n if self.ensure_proper_casing():\n self.write_toml(self.parsed_pipfile)\n\n def load_lockfile(self, expand_env_vars=True):\n with io.open(self.lockfile_location, encoding=\"utf-8\") as lock:\n j = json.load(lock)\n self._lockfile_newlines = preferred_newlines(lock)\n # lockfile is just a string\n if not j or not hasattr(j, \"keys\"):\n return j\n\n if expand_env_vars:\n # Expand environment variables in Pipfile.lock at runtime.\n for i, source in enumerate(j[\"_meta\"][\"sources\"][:]):\n j[\"_meta\"][\"sources\"][i][\"url\"] = os.path.expandvars(\n j[\"_meta\"][\"sources\"][i][\"url\"]\n )\n\n return j\n\n def get_lockfile_hash(self):\n if not os.path.exists(self.lockfile_location):\n return\n\n try:\n lockfile = self.load_lockfile(expand_env_vars=False)\n except ValueError:\n # Lockfile corrupted\n return \"\"\n if \"_meta\" in lockfile and hasattr(lockfile, \"keys\"):\n return lockfile[\"_meta\"].get(\"hash\", {}).get(\"sha256\")\n # Lockfile exists but has no hash at all\n return \"\"\n\n def calculate_pipfile_hash(self):\n # Update the lockfile if it is out-of-date.\n p = pipfile.load(self.pipfile_location, inject_env=False)\n return p.hash\n\n def ensure_proper_casing(self):\n \"\"\"Ensures proper casing of Pipfile packages\"\"\"\n pfile = self.parsed_pipfile\n casing_changed = self.proper_case_section(pfile.get(\"packages\", {}))\n casing_changed |= self.proper_case_section(pfile.get(\"dev-packages\", {}))\n 
return casing_changed\n\n def proper_case_section(self, section):\n \"\"\"Verify proper casing is retrieved, when available, for each\n dependency in the section.\n \"\"\"\n # Casing for section.\n changed_values = False\n unknown_names = [k for k in section.keys() if k not in set(self.proper_names)]\n # Replace each package with proper casing.\n for dep in unknown_names:\n try:\n # Get new casing for package name.\n new_casing = proper_case(dep)\n except IOError:\n # Unable to normalize package name.\n continue\n\n if new_casing != dep:\n changed_values = True\n self.register_proper_name(new_casing)\n # Replace old value with new value.\n old_value = section[dep]\n section[new_casing] = old_value\n del section[dep]\n # Return whether or not values have been changed.\n return changed_values\n\n @cached_property\n def finders(self):\n from .vendor.pythonfinder import Finder\n scripts_dirname = \"Scripts\" if os.name == \"nt\" else \"bin\"\n scripts_dir = os.path.join(self.virtualenv_location, scripts_dirname)\n finders = [\n Finder(path=scripts_dir, global_search=gs, system=False)\n for gs in (False, True)\n ]\n return finders\n\n @property\n def finder(self):\n return next(iter(self.finders), None)\n\n def which(self, search, as_path=True):\n find = operator.methodcaller(\"which\", search)\n result = next(iter(filter(None, (find(finder) for finder in self.finders))), None)\n if not result:\n result = self._which(search)\n else:\n if as_path:\n result = str(result.path)\n return result\n", "path": "pipenv/project.py"}]} |
gh_patches_debug_1149 | rasdani/github-patches | git_diff | hi-primus__optimus-872 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Json file exploration/profiling
Unstructured data such as JSON cannot be explored like regular tabular data. I have been exploring the use of tree depth and node counts to highlight for the user which nodes could contain important data.
Some work in progress is here: https://github.com/ironmussa/Optimus/blob/develop-3.0/optimus/engines/pandas/io/json.py
--- END ISSUE ---
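For illustration only, here is a minimal, self-contained sketch of the depth-and-count profiling idea described in the issue above; the function name, row layout, and sorting heuristic are hypothetical and are not part of the Optimus API.
```python
# Hypothetical sketch of "tree depth and count" profiling for nested JSON.
# Nothing here is Optimus API; names and the sorting heuristic are illustrative.
import json

def profile_json(node, path=(), rows=None):
    """Collect (path, depth, count, dtype) for every dict/list node."""
    if rows is None:
        rows = []
    if isinstance(node, dict):
        rows.append(("/".join(path) or "<root>", len(path), len(node), "dict"))
        for key, value in node.items():
            profile_json(value, path + (key,), rows)
    elif isinstance(node, list):
        rows.append(("/".join(path) or "<root>", len(path), len(node), "list"))
        for item in node:
            profile_json(item, path, rows)
    return rows

if __name__ == "__main__":
    data = json.loads('{"users": [{"name": "a", "tags": ["x", "y"]}], "meta": {}}')
    # Shallow nodes with many children are likely where the interesting data lives.
    for row in sorted(profile_json(data), key=lambda r: (-r[2], r[1])):
        print(row)
```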
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `optimus/engines/pandas/io/json.py`
Content:
```
1 import glob
2
3 import pandas as pd
4 import ujson
5 from glom import glom
6
7 from optimus.infer import is_dict, is_list, is_str, is_int
8
9 META = "_meta"
10 PROPERTIES = "_properties"
11 ITEMS = "_items"
12
13 COL_DEPTH = "depth"
14
15
16 class JSON:
17 def __init__(self):
18 self.data = None
19
20 def load(self, path):
21 """
22 Load a file in JSON format
23 :param path:
24 :return:
25 """
26 all_json = glob.glob(path, recursive=True)
27 # pd.read_json("data/corona.json")
28 with open(all_json[0]) as f:
29 self.data = ujson.load(f)
30
31 def schema(self):
32 """
33 Return a JSON with the count, dtype and nested structure
34 :return:
35 """
36
37 def _schema(_data, _keys):
38 if isinstance(_data, dict):
39 for x, y in _data.items():
40 if is_dict(y):
41 _keys[x] = {META: {"count": len(y), "dtype": type(y)}}
42 if len(y) > 0:
43 _keys[x][PROPERTIES] = {}
44 _schema(y, _keys[x][PROPERTIES])
45 elif is_list(y):
46 _keys[x] = {META: {"count": len(y), "dtype": type(y)}}
47 if len(y) > 0:
48 _keys[x] = {ITEMS: {PROPERTIES: {}, META: {"count": len(y), "dtype": type(y)}}}
49 _schema(y, _keys[x][ITEMS][PROPERTIES])
50 elif is_str(y):
51 _keys[x] = {META: {"count": len(y), "dtype": type(y)}}
52 _schema(y, _keys[x])
53 elif is_int(y):
54 _keys[x] = {META: {"dtype": type(y)}}
55 _schema(y, _keys[x])
56
57 elif is_list(_data):
58 for x in _data:
59 _schema(x, _keys)
60
61 keys = {}
62 _schema(self.data, keys)
63 return keys
64
65 def freq(self, n=100):
66 """
67 Calculate the count on every dict or list in the json
68 :param n:
69 :return:
70 """
71
72 def _profile(keys, parent, result=None):
73 for key, values in keys.items():
74 if values.get(PROPERTIES):
75 _meta = values.get(META)
76 _properties = values.get(PROPERTIES)
77 elif values.get(ITEMS):
78 _meta = values.get(ITEMS).get(META)
79 _properties = values.get(ITEMS).get(PROPERTIES)
80
81 if values.get(PROPERTIES) or values.get(ITEMS):
82 result.append([key, _meta["count"], _meta["dtype"], parent, len(parent)])
83 _profile(_properties, parent + [key], result=result)
84
85 data = []
86 _profile(self.schema(), [], data)
87 df = pd.DataFrame(data, columns=['key', 'count', 'dtype', 'path', COL_DEPTH])
88 df = df.sort_values(by=["count", COL_DEPTH], ascending=[False, True]).head(n).to_dict(orient='row')
89 return df
90
91 def flatten(self, path):
92 """
93 Flatten a JSON from a json path
94 :param path:
95 :return:
96 """
97
98 def _flatten_json(_values):
99 out = {}
100
101 def flatten(x, name=''):
102 if type(x) is dict:
103 for a in x:
104 flatten(x[a], name + a + '_')
105 elif type(x) is list:
106 # i = 0
107 for a in x:
108 # flatten(a, name + str(i) + '_')
109 flatten(a, name + '_')
110 # i += 1
111 else:
112 out[name[:-1]] = x
113
114 flatten(_values)
115 return out
116
117 result = []
118 value = glom(self.data, path, skip_exc=KeyError)
119 if is_list(value):
120 for i in value:
121 result.append((_flatten_json(i)))
122 elif is_dict(value):
123 for i, j in value.items():
124 a = {"col": i}
125 a.update(_flatten_json(j))
126 result.append(a)
127 return result
128
129 def to_pandas(self, path):
130 result = self.flatten(path)
131 return pd.DataFrame(data=result)
132
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/optimus/engines/pandas/io/json.py b/optimus/engines/pandas/io/json.py
--- a/optimus/engines/pandas/io/json.py
+++ b/optimus/engines/pandas/io/json.py
@@ -121,7 +121,7 @@
result.append((_flatten_json(i)))
elif is_dict(value):
for i, j in value.items():
- a = {"col": i}
+ a = {path: i}
a.update(_flatten_json(j))
result.append(a)
return result
| {"golden_diff": "diff --git a/optimus/engines/pandas/io/json.py b/optimus/engines/pandas/io/json.py\n--- a/optimus/engines/pandas/io/json.py\n+++ b/optimus/engines/pandas/io/json.py\n@@ -121,7 +121,7 @@\n result.append((_flatten_json(i)))\n elif is_dict(value):\n for i, j in value.items():\n- a = {\"col\": i}\n+ a = {path: i}\n a.update(_flatten_json(j))\n result.append(a)\n return result\n", "issue": "Json file exploration/profiling\nUnstructured data as JSON can not be explored as regular tabular data. I have been exploring using tree depth and count to highlight the user in which nodes could have important data.\r\n\r\nSome work in progress, here. https://github.com/ironmussa/Optimus/blob/develop-3.0/optimus/engines/pandas/io/json.py\n", "before_files": [{"content": "import glob\n\nimport pandas as pd\nimport ujson\nfrom glom import glom\n\nfrom optimus.infer import is_dict, is_list, is_str, is_int\n\nMETA = \"_meta\"\nPROPERTIES = \"_properties\"\nITEMS = \"_items\"\n\nCOL_DEPTH = \"depth\"\n\n\nclass JSON:\n def __init__(self):\n self.data = None\n\n def load(self, path):\n \"\"\"\n Load a file in JSON format\n :param path:\n :return:\n \"\"\"\n all_json = glob.glob(path, recursive=True)\n # pd.read_json(\"data/corona.json\")\n with open(all_json[0]) as f:\n self.data = ujson.load(f)\n\n def schema(self):\n \"\"\"\n Return a JSON with the count, dtype and nested structure\n :return:\n \"\"\"\n\n def _schema(_data, _keys):\n if isinstance(_data, dict):\n for x, y in _data.items():\n if is_dict(y):\n _keys[x] = {META: {\"count\": len(y), \"dtype\": type(y)}}\n if len(y) > 0:\n _keys[x][PROPERTIES] = {}\n _schema(y, _keys[x][PROPERTIES])\n elif is_list(y):\n _keys[x] = {META: {\"count\": len(y), \"dtype\": type(y)}}\n if len(y) > 0:\n _keys[x] = {ITEMS: {PROPERTIES: {}, META: {\"count\": len(y), \"dtype\": type(y)}}}\n _schema(y, _keys[x][ITEMS][PROPERTIES])\n elif is_str(y):\n _keys[x] = {META: {\"count\": len(y), \"dtype\": type(y)}}\n _schema(y, _keys[x])\n elif is_int(y):\n _keys[x] = {META: {\"dtype\": type(y)}}\n _schema(y, _keys[x])\n\n elif is_list(_data):\n for x in _data:\n _schema(x, _keys)\n\n keys = {}\n _schema(self.data, keys)\n return keys\n\n def freq(self, n=100):\n \"\"\"\n Calculate the count on every dict or list in the json\n :param n:\n :return:\n \"\"\"\n\n def _profile(keys, parent, result=None):\n for key, values in keys.items():\n if values.get(PROPERTIES):\n _meta = values.get(META)\n _properties = values.get(PROPERTIES)\n elif values.get(ITEMS):\n _meta = values.get(ITEMS).get(META)\n _properties = values.get(ITEMS).get(PROPERTIES)\n\n if values.get(PROPERTIES) or values.get(ITEMS):\n result.append([key, _meta[\"count\"], _meta[\"dtype\"], parent, len(parent)])\n _profile(_properties, parent + [key], result=result)\n\n data = []\n _profile(self.schema(), [], data)\n df = pd.DataFrame(data, columns=['key', 'count', 'dtype', 'path', COL_DEPTH])\n df = df.sort_values(by=[\"count\", COL_DEPTH], ascending=[False, True]).head(n).to_dict(orient='row')\n return df\n\n def flatten(self, path):\n \"\"\"\n Flatten a JSON from a json path\n :param path:\n :return:\n \"\"\"\n\n def _flatten_json(_values):\n out = {}\n\n def flatten(x, name=''):\n if type(x) is dict:\n for a in x:\n flatten(x[a], name + a + '_')\n elif type(x) is list:\n # i = 0\n for a in x:\n # flatten(a, name + str(i) + '_')\n flatten(a, name + '_')\n # i += 1\n else:\n out[name[:-1]] = x\n\n flatten(_values)\n return out\n\n result = []\n value = glom(self.data, path, skip_exc=KeyError)\n if 
is_list(value):\n for i in value:\n result.append((_flatten_json(i)))\n elif is_dict(value):\n for i, j in value.items():\n a = {\"col\": i}\n a.update(_flatten_json(j))\n result.append(a)\n return result\n\n def to_pandas(self, path):\n result = self.flatten(path)\n return pd.DataFrame(data=result)\n", "path": "optimus/engines/pandas/io/json.py"}], "after_files": [{"content": "import glob\n\nimport pandas as pd\nimport ujson\nfrom glom import glom\n\nfrom optimus.infer import is_dict, is_list, is_str, is_int\n\nMETA = \"_meta\"\nPROPERTIES = \"_properties\"\nITEMS = \"_items\"\n\nCOL_DEPTH = \"depth\"\n\n\nclass JSON:\n def __init__(self):\n self.data = None\n\n def load(self, path):\n \"\"\"\n Load a file in JSON format\n :param path:\n :return:\n \"\"\"\n all_json = glob.glob(path, recursive=True)\n # pd.read_json(\"data/corona.json\")\n with open(all_json[0]) as f:\n self.data = ujson.load(f)\n\n def schema(self):\n \"\"\"\n Return a JSON with the count, dtype and nested structure\n :return:\n \"\"\"\n\n def _schema(_data, _keys):\n if isinstance(_data, dict):\n for x, y in _data.items():\n if is_dict(y):\n _keys[x] = {META: {\"count\": len(y), \"dtype\": type(y)}}\n if len(y) > 0:\n _keys[x][PROPERTIES] = {}\n _schema(y, _keys[x][PROPERTIES])\n elif is_list(y):\n _keys[x] = {META: {\"count\": len(y), \"dtype\": type(y)}}\n if len(y) > 0:\n _keys[x] = {ITEMS: {PROPERTIES: {}, META: {\"count\": len(y), \"dtype\": type(y)}}}\n _schema(y, _keys[x][ITEMS][PROPERTIES])\n elif is_str(y):\n _keys[x] = {META: {\"count\": len(y), \"dtype\": type(y)}}\n _schema(y, _keys[x])\n elif is_int(y):\n _keys[x] = {META: {\"dtype\": type(y)}}\n _schema(y, _keys[x])\n\n elif is_list(_data):\n for x in _data:\n _schema(x, _keys)\n\n keys = {}\n _schema(self.data, keys)\n return keys\n\n def freq(self, n=100):\n \"\"\"\n Calculate the count on every dict or list in the json\n :param n:\n :return:\n \"\"\"\n\n def _profile(keys, parent, result=None):\n for key, values in keys.items():\n if values.get(PROPERTIES):\n _meta = values.get(META)\n _properties = values.get(PROPERTIES)\n elif values.get(ITEMS):\n _meta = values.get(ITEMS).get(META)\n _properties = values.get(ITEMS).get(PROPERTIES)\n\n if values.get(PROPERTIES) or values.get(ITEMS):\n result.append([key, _meta[\"count\"], _meta[\"dtype\"], parent, len(parent)])\n _profile(_properties, parent + [key], result=result)\n\n data = []\n _profile(self.schema(), [], data)\n df = pd.DataFrame(data, columns=['key', 'count', 'dtype', 'path', COL_DEPTH])\n df = df.sort_values(by=[\"count\", COL_DEPTH], ascending=[False, True]).head(n).to_dict(orient='row')\n return df\n\n def flatten(self, path):\n \"\"\"\n Flatten a JSON from a json path\n :param path:\n :return:\n \"\"\"\n\n def _flatten_json(_values):\n out = {}\n\n def flatten(x, name=''):\n if type(x) is dict:\n for a in x:\n flatten(x[a], name + a + '_')\n elif type(x) is list:\n # i = 0\n for a in x:\n # flatten(a, name + str(i) + '_')\n flatten(a, name + '_')\n # i += 1\n else:\n out[name[:-1]] = x\n\n flatten(_values)\n return out\n\n result = []\n value = glom(self.data, path, skip_exc=KeyError)\n if is_list(value):\n for i in value:\n result.append((_flatten_json(i)))\n elif is_dict(value):\n for i, j in value.items():\n a = {path: i}\n a.update(_flatten_json(j))\n result.append(a)\n return result\n\n def to_pandas(self, path):\n result = self.flatten(path)\n return pd.DataFrame(data=result)\n", "path": "optimus/engines/pandas/io/json.py"}]} |
gh_patches_debug_1150 | rasdani/github-patches | git_diff | keras-team__keras-11960 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Suggesting keras.utils.*_utils packages should not be part of the official API
In general, all `keras.utils.*_utils.*` functions and classes that are documented on keras.io are available directly in `keras.utils` and documented as such. However there are a few discrepancies:
* `keras.utils.vis_utils.model_to_dot` is not available in `keras.utils`.
* `keras.utils.np_utils.to_categorical` sometimes appears in the documentation, instead of `keras.utils.to_categorical`.
* `keras.utils.io_utils.HDF5Matrix` sometimes appears in the documentation, instead of `keras.utils.HDF5Matrix`.
This introduces some confusion as to what is part of the official Keras API or not: in particular, are `keras.utils.*_utils` packages part of the Keras API or not? Possibly as a result of this confusion, tf.keras is not consistent with keras-team/keras, as it has no `tf.keras.utils.*_utils` packages, and is missing `model_to_dot` altogether. Arguably this is a tf.keras issue, but the fact that only three utility functions are placed in `keras.utils.*_utils` packages is surprising IMHO.
I will propose a PR to fix this by:
* Adding `model_to_dot` to `keras.utils`
* Fixing the documentation to remove all references to `keras.utils.*_utils` packages.
--- END ISSUE ---
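As a quick illustration of the discrepancy described above (assuming the Keras 2.x utils layout shown in the file content later in this record), these are the imports in question; the commented line is the behaviour the proposal would add.
```python
# Illustration only; assumes the keras 2.x utils layout shown in the file below.
# These names are re-exported at the top level today:
from keras.utils import plot_model, to_categorical, HDF5Matrix

# model_to_dot is currently only importable from the submodule:
from keras.utils.vis_utils import model_to_dot

# The proposed change would make this equivalent import work as well:
# from keras.utils import model_to_dot
```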
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `keras/utils/__init__.py`
Content:
```
1 from __future__ import absolute_import
2 from . import np_utils
3 from . import generic_utils
4 from . import data_utils
5 from . import io_utils
6 from . import conv_utils
7
8 # Globally-importable utils.
9 from .io_utils import HDF5Matrix
10 from .io_utils import H5Dict
11 from .data_utils import get_file
12 from .data_utils import Sequence
13 from .data_utils import GeneratorEnqueuer
14 from .data_utils import OrderedEnqueuer
15 from .generic_utils import CustomObjectScope
16 from .generic_utils import custom_object_scope
17 from .generic_utils import get_custom_objects
18 from .generic_utils import serialize_keras_object
19 from .generic_utils import deserialize_keras_object
20 from .generic_utils import Progbar
21 from .layer_utils import convert_all_kernels_in_model
22 from .layer_utils import get_source_inputs
23 from .layer_utils import print_summary
24 from .vis_utils import plot_model
25 from .np_utils import to_categorical
26 from .np_utils import normalize
27 from .multi_gpu_utils import multi_gpu_model
28
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/keras/utils/__init__.py b/keras/utils/__init__.py
--- a/keras/utils/__init__.py
+++ b/keras/utils/__init__.py
@@ -21,6 +21,7 @@
from .layer_utils import convert_all_kernels_in_model
from .layer_utils import get_source_inputs
from .layer_utils import print_summary
+from .vis_utils import model_to_dot
from .vis_utils import plot_model
from .np_utils import to_categorical
from .np_utils import normalize
| {"golden_diff": "diff --git a/keras/utils/__init__.py b/keras/utils/__init__.py\n--- a/keras/utils/__init__.py\n+++ b/keras/utils/__init__.py\n@@ -21,6 +21,7 @@\n from .layer_utils import convert_all_kernels_in_model\n from .layer_utils import get_source_inputs\n from .layer_utils import print_summary\n+from .vis_utils import model_to_dot\n from .vis_utils import plot_model\n from .np_utils import to_categorical\n from .np_utils import normalize\n", "issue": "Suggesting keras.utils.*_utils packages should not be part of the official API\nIn general, all `keras.utils.*_utils.*` functions and classes that are documented on keras.io are available directly in `keras.utils` and documented as such. However there are a few discrepancies:\r\n* `keras.utils.vis_utils.model_to_dot` is not available in `keras.utils`.\r\n* `keras.utils.np_utils.to_categorical` sometimes appears in the documentation, instead of `keras.utils.to_categorical`.\r\n* `keras.utils.io_utils.HDF5Matrix` sometimes appears in the documentation, instead of `keras.utils.HDF5Matrix`.\r\n\r\nThis introduces some confusion as to what is part of the official Keras API or not: in particular, are `keras.utils.*_utils` packages part of the Keras API or not? Possibly as a result of this confusion, tf.keras is not consistent with keras-team/keras, as it has no `tf.keras.utils.*_utils` packages, and is missing `model_to_dot` altogether. Arguably this is a tf.keras issue, but the fact that only three utility functions are placed in `keras.utils.*_utils` packages is surprising IMHO.\r\n\r\nI will propose a PR to fix this by:\r\n* Adding `model_to_dot` to `keras.utils`\r\n* Fixing the documentation to remove all references to `keras.utils.*_utils` packages.\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom . import np_utils\nfrom . import generic_utils\nfrom . import data_utils\nfrom . import io_utils\nfrom . import conv_utils\n\n# Globally-importable utils.\nfrom .io_utils import HDF5Matrix\nfrom .io_utils import H5Dict\nfrom .data_utils import get_file\nfrom .data_utils import Sequence\nfrom .data_utils import GeneratorEnqueuer\nfrom .data_utils import OrderedEnqueuer\nfrom .generic_utils import CustomObjectScope\nfrom .generic_utils import custom_object_scope\nfrom .generic_utils import get_custom_objects\nfrom .generic_utils import serialize_keras_object\nfrom .generic_utils import deserialize_keras_object\nfrom .generic_utils import Progbar\nfrom .layer_utils import convert_all_kernels_in_model\nfrom .layer_utils import get_source_inputs\nfrom .layer_utils import print_summary\nfrom .vis_utils import plot_model\nfrom .np_utils import to_categorical\nfrom .np_utils import normalize\nfrom .multi_gpu_utils import multi_gpu_model\n", "path": "keras/utils/__init__.py"}], "after_files": [{"content": "from __future__ import absolute_import\nfrom . import np_utils\nfrom . import generic_utils\nfrom . import data_utils\nfrom . import io_utils\nfrom . 
import conv_utils\n\n# Globally-importable utils.\nfrom .io_utils import HDF5Matrix\nfrom .io_utils import H5Dict\nfrom .data_utils import get_file\nfrom .data_utils import Sequence\nfrom .data_utils import GeneratorEnqueuer\nfrom .data_utils import OrderedEnqueuer\nfrom .generic_utils import CustomObjectScope\nfrom .generic_utils import custom_object_scope\nfrom .generic_utils import get_custom_objects\nfrom .generic_utils import serialize_keras_object\nfrom .generic_utils import deserialize_keras_object\nfrom .generic_utils import Progbar\nfrom .layer_utils import convert_all_kernels_in_model\nfrom .layer_utils import get_source_inputs\nfrom .layer_utils import print_summary\nfrom .vis_utils import model_to_dot\nfrom .vis_utils import plot_model\nfrom .np_utils import to_categorical\nfrom .np_utils import normalize\nfrom .multi_gpu_utils import multi_gpu_model\n", "path": "keras/utils/__init__.py"}]} |
gh_patches_debug_1151 | rasdani/github-patches | git_diff | pyjanitor-devs__pyjanitor-1191 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[INF/CI] Add `--cov-append` for `pytest`
<!-- Thank you for your PR!
BEFORE YOU CONTINUE! Please add the appropriate three-letter abbreviation to your title.
The abbreviations can be:
- [DOC]: Documentation fixes.
- [ENH]: Code contributions and new features.
- [TST]: Test-related contributions.
- [INF]: Infrastructure-related contributions.
Also, do not forget to tag the relevant issue here as well.
Finally, as commits come in, don't forget to regularly rebase!
-->
# PR Description
Please describe the changes proposed in the pull request:
> Another reason code coverage failed is that pytest doesn't add `--cov-append` option.
`--cov-append` can get a sum coverage. I'll add this option in the next PR.
First let us merge `codecov.yml` into `tests.yml`. Keep the same test logic for the dev branch or a PR.
_Originally posted by @Zeroto521 in https://github.com/pyjanitor-devs/pyjanitor/issues/1185#issuecomment-1296479926_
<!-- Doing so provides maintainers with context on what the PR is, and can help us more effectively review your PR. -->
<!-- Please also identify below which issue that has been raised that you are going to close. -->
<!-- As you go down the PR template, please feel free to delete sections that are irrelevant. -->
# PR Checklist
<!-- This checklist exists for newcomers who are not yet familiar with our requirements. If you are experienced with
the project, please feel free to delete this section. -->
Please ensure that you have done the following:
1. [x] PR in from a fork off your branch. Do not PR from `<your_username>`:`dev`, but rather from `<your_username>`:`<feature-branch_name>`.
<!-- Doing this helps us keep the commit history much cleaner than it would otherwise be. -->
2. [x] If you're not on the contributors list, add yourself to `AUTHORS.md`.
<!-- We'd like to acknowledge your contributions! -->
3. [x] Add a line to `CHANGELOG.md` under the latest version header (i.e. the one that is "on deck") describing the contribution.
- Do use some discretion here; if there are multiple PRs that are related, keep them in a single line.
# Automatic checks
There will be automatic checks run on the PR. These include:
- Building a preview of the docs on Netlify
- Automatically linting the code
- Making sure the code is documented
- Making sure that all tests are passed
- Making sure that code coverage doesn't go down.
# Relevant Reviewers
<!-- Finally, please tag relevant maintainers to review. -->
Please tag maintainers to review.
- @ericmjl
--- END ISSUE ---
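To make the quoted `--cov-append` point concrete, here is a hypothetical sketch of two pytest runs whose coverage data should be summed rather than overwritten; the paths and package name are placeholders, not pyjanitor's actual test layout.
```python
# Hypothetical illustration of --cov-append; paths and package name are placeholders.
# Requires pytest and pytest-cov to be installed.
import pytest

# First invocation writes fresh coverage data (.coverage).
pytest.main(["tests/unit", "--cov=janitor", "--cov-report=xml"])

# Second invocation appends to the existing data instead of replacing it,
# so the final report reflects both runs combined.
pytest.main(["tests/functions", "--cov=janitor", "--cov-append", "--cov-report=xml"])
```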
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `janitor/accessors/__init__.py`
Content:
```
1 """Miscellaneous mathematical operators.
2
3 Lazy loading used here to speed up imports.
4 """
5
6 import warnings
7 from typing import Tuple
8
9
10 import lazy_loader as lazy
11
12 scipy_special = lazy.load("scipy.special")
13 ss = lazy.load("scipy.stats")
14 pf = lazy.load("pandas_flavor")
15 pd = lazy.load("pandas")
16 np = lazy.load("numpy")
17 pdtypes = lazy.load("pandas.api.types")
18
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/janitor/accessors/__init__.py b/janitor/accessors/__init__.py
--- a/janitor/accessors/__init__.py
+++ b/janitor/accessors/__init__.py
@@ -1,17 +1,3 @@
-"""Miscellaneous mathematical operators.
+"""Miscellaneous mathematical operators."""
-Lazy loading used here to speed up imports.
-"""
-
-import warnings
-from typing import Tuple
-
-
-import lazy_loader as lazy
-
-scipy_special = lazy.load("scipy.special")
-ss = lazy.load("scipy.stats")
-pf = lazy.load("pandas_flavor")
-pd = lazy.load("pandas")
-np = lazy.load("numpy")
-pdtypes = lazy.load("pandas.api.types")
+from janitor.accessors.data_description import DataDescription # noqa: F401
| {"golden_diff": "diff --git a/janitor/accessors/__init__.py b/janitor/accessors/__init__.py\n--- a/janitor/accessors/__init__.py\n+++ b/janitor/accessors/__init__.py\n@@ -1,17 +1,3 @@\n-\"\"\"Miscellaneous mathematical operators.\n+\"\"\"Miscellaneous mathematical operators.\"\"\"\n \n-Lazy loading used here to speed up imports.\n-\"\"\"\n-\n-import warnings\n-from typing import Tuple\n-\n-\n-import lazy_loader as lazy\n-\n-scipy_special = lazy.load(\"scipy.special\")\n-ss = lazy.load(\"scipy.stats\")\n-pf = lazy.load(\"pandas_flavor\")\n-pd = lazy.load(\"pandas\")\n-np = lazy.load(\"numpy\")\n-pdtypes = lazy.load(\"pandas.api.types\")\n+from janitor.accessors.data_description import DataDescription # noqa: F401\n", "issue": "[INF/CI] Add `--cov-append` for `pytest`\n<!-- Thank you for your PR!\r\n\r\nBEFORE YOU CONTINUE! Please add the appropriate three-letter abbreviation to your title.\r\n\r\nThe abbreviations can be:\r\n- [DOC]: Documentation fixes.\r\n- [ENH]: Code contributions and new features.\r\n- [TST]: Test-related contributions.\r\n- [INF]: Infrastructure-related contributions.\r\n\r\nAlso, do not forget to tag the relevant issue here as well.\r\n\r\nFinally, as commits come in, don't forget to regularly rebase!\r\n-->\r\n\r\n# PR Description\r\n\r\nPlease describe the changes proposed in the pull request:\r\n\r\n> Another reason code coverage failed is that pytest doesn't add `--cov-append` option.\r\n`--cov-append` can get a sum coverage. I'll add this option in the next PR.\r\nFirst let us merge `codecov.yml` into `tests.yml`. Keep the same test logic for the dev branch or a PR.\r\n\r\n_Originally posted by @Zeroto521 in https://github.com/pyjanitor-devs/pyjanitor/issues/1185#issuecomment-1296479926_\r\n\r\n<!-- Doing so provides maintainers with context on what the PR is, and can help us more effectively review your PR. -->\r\n\r\n<!-- Please also identify below which issue that has been raised that you are going to close. -->\r\n\r\n<!-- As you go down the PR template, please feel free to delete sections that are irrelevant. -->\r\n\r\n# PR Checklist\r\n\r\n<!-- This checklist exists for newcomers who are not yet familiar with our requirements. If you are experienced with\r\nthe project, please feel free to delete this section. -->\r\n\r\nPlease ensure that you have done the following:\r\n\r\n1. [x] PR in from a fork off your branch. Do not PR from `<your_username>`:`dev`, but rather from `<your_username>`:`<feature-branch_name>`.\r\n<!-- Doing this helps us keep the commit history much cleaner than it would otherwise be. -->\r\n2. [x] If you're not on the contributors list, add yourself to `AUTHORS.md`.\r\n<!-- We'd like to acknowledge your contributions! -->\r\n3. [x] Add a line to `CHANGELOG.md` under the latest version header (i.e. the one that is \"on deck\") describing the contribution.\r\n - Do use some discretion here; if there are multiple PRs that are related, keep them in a single line.\r\n\r\n# Automatic checks\r\n\r\nThere will be automatic checks run on the PR. These include:\r\n\r\n- Building a preview of the docs on Netlify\r\n- Automatically linting the code\r\n- Making sure the code is documented\r\n- Making sure that all tests are passed\r\n- Making sure that code coverage doesn't go down.\r\n\r\n# Relevant Reviewers\r\n\r\n<!-- Finally, please tag relevant maintainers to review. 
-->\r\n\r\nPlease tag maintainers to review.\r\n\r\n- @ericmjl\r\n\n", "before_files": [{"content": "\"\"\"Miscellaneous mathematical operators.\n\nLazy loading used here to speed up imports.\n\"\"\"\n\nimport warnings\nfrom typing import Tuple\n\n\nimport lazy_loader as lazy\n\nscipy_special = lazy.load(\"scipy.special\")\nss = lazy.load(\"scipy.stats\")\npf = lazy.load(\"pandas_flavor\")\npd = lazy.load(\"pandas\")\nnp = lazy.load(\"numpy\")\npdtypes = lazy.load(\"pandas.api.types\")\n", "path": "janitor/accessors/__init__.py"}], "after_files": [{"content": "\"\"\"Miscellaneous mathematical operators.\"\"\"\n\nfrom janitor.accessors.data_description import DataDescription # noqa: F401\n", "path": "janitor/accessors/__init__.py"}]} |
gh_patches_debug_1152 | rasdani/github-patches | git_diff | benoitc__gunicorn-960 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Reloading sometimes gives a TypeError: 'NoneType' object is not callable
I'm running a custom app (subclass of `BaseApplication`) with reload set to True. In some situations, changing a file causes the following traceback:
```
Exception in thread Thread-1 (most likely raised during interpreter shutdown):
Traceback (most recent call last):
File "/usr/maxm/lib/python2.7/threading.py", line 551, in __bootstrap_inner
File "/usr/maxm/lib/python2.7/site-packages/gunicorn/reloader.py", line 52, in run
File "/usr/maxm/lib/python2.7/site-packages/gunicorn/workers/base.py", line 87, in changed
<type 'exceptions.TypeError'>: 'NoneType' object is not callable
```
It's intermittent; I can sometimes reproduce it several times in a row by touching the same file, and then it stops happening. It certainly doesn't seem to interfere with the reloading behavior.
Line 87 is only `raise SystemExit()`. But line 86 is `os.kill(self.pid, signal.SIGQUIT)`, so I think what's happening is that the interpreter has started to tear down the environment and `SystemExit` has become `None`. (See also [this](http://article.gmane.org/gmane.comp.python.general/387087/) mailing list post.)
Anything I can do about this?
--- END ISSUE ---
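For context, a minimal sketch of the reload callback in question is below. It is simplified from the `init_process` code shown later in this record (the closure-factory signature is my own simplification), and the comment reflects the issue author's explanation of the shutdown race rather than a verified analysis.
```python
# Simplified sketch of the reload callback from init_process (not a drop-in patch).
import os
import signal

def make_changed_callback(pid, log):
    def changed(fname):
        log.info("Worker reloading: %s modified", fname)
        os.kill(pid, signal.SIGQUIT)
        # During interpreter teardown the name SystemExit can already have been
        # cleared to None, so `raise SystemExit()` from the reloader thread
        # becomes `raise None()` -> TypeError: 'NoneType' object is not callable.
        # Relying on the SIGQUIT handler alone (which calls sys.exit(0)) avoids
        # that race, which is what the patch at the end of this record does.
    return changed
```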
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `gunicorn/workers/base.py`
Content:
```
1 # -*- coding: utf-8 -
2 #
3 # This file is part of gunicorn released under the MIT license.
4 # See the NOTICE for more information.
5
6 from datetime import datetime
7 import os
8 import signal
9 import sys
10 from random import randint
11
12
13 from gunicorn import util
14 from gunicorn.workers.workertmp import WorkerTmp
15 from gunicorn.reloader import Reloader
16 from gunicorn.http.errors import (
17 InvalidHeader, InvalidHeaderName, InvalidRequestLine, InvalidRequestMethod,
18 InvalidHTTPVersion, LimitRequestLine, LimitRequestHeaders,
19 )
20 from gunicorn.http.errors import InvalidProxyLine, ForbiddenProxyRequest
21 from gunicorn.http.wsgi import default_environ, Response
22 from gunicorn.six import MAXSIZE
23
24
25 class Worker(object):
26
27 SIGNALS = [getattr(signal, "SIG%s" % x)
28 for x in "ABRT HUP QUIT INT TERM USR1 USR2 WINCH CHLD".split()]
29
30 PIPE = []
31
32 def __init__(self, age, ppid, sockets, app, timeout, cfg, log):
33 """\
34 This is called pre-fork so it shouldn't do anything to the
35 current process. If there's a need to make process wide
36 changes you'll want to do that in ``self.init_process()``.
37 """
38 self.age = age
39 self.ppid = ppid
40 self.sockets = sockets
41 self.app = app
42 self.timeout = timeout
43 self.cfg = cfg
44 self.booted = False
45 self.aborted = False
46
47 self.nr = 0
48 jitter = randint(0, cfg.max_requests_jitter)
49 self.max_requests = cfg.max_requests + jitter or MAXSIZE
50 self.alive = True
51 self.log = log
52 self.tmp = WorkerTmp(cfg)
53
54 def __str__(self):
55 return "<Worker %s>" % self.pid
56
57 @property
58 def pid(self):
59 return os.getpid()
60
61 def notify(self):
62 """\
63 Your worker subclass must arrange to have this method called
64 once every ``self.timeout`` seconds. If you fail in accomplishing
65 this task, the master process will murder your workers.
66 """
67 self.tmp.notify()
68
69 def run(self):
70 """\
71 This is the mainloop of a worker process. You should override
72 this method in a subclass to provide the intended behaviour
73 for your particular evil schemes.
74 """
75 raise NotImplementedError()
76
77 def init_process(self):
78 """\
79 If you override this method in a subclass, the last statement
80 in the function should be to call this method with
81 super(MyWorkerClass, self).init_process() so that the ``run()``
82 loop is initiated.
83 """
84
85 # start the reloader
86 if self.cfg.reload:
87 def changed(fname):
88 self.log.info("Worker reloading: %s modified", fname)
89 os.kill(self.pid, signal.SIGQUIT)
90 raise SystemExit()
91 Reloader(callback=changed).start()
92
93 # set environment' variables
94 if self.cfg.env:
95 for k, v in self.cfg.env.items():
96 os.environ[k] = v
97
98 util.set_owner_process(self.cfg.uid, self.cfg.gid)
99
100 # Reseed the random number generator
101 util.seed()
102
103 # For waking ourselves up
104 self.PIPE = os.pipe()
105 for p in self.PIPE:
106 util.set_non_blocking(p)
107 util.close_on_exec(p)
108
109 # Prevent fd inheritance
110 [util.close_on_exec(s) for s in self.sockets]
111 util.close_on_exec(self.tmp.fileno())
112
113 self.log.close_on_exec()
114
115 self.init_signals()
116
117 self.wsgi = self.app.wsgi()
118
119 self.cfg.post_worker_init(self)
120
121 # Enter main run loop
122 self.booted = True
123 self.run()
124
125 def init_signals(self):
126 # reset signaling
127 [signal.signal(s, signal.SIG_DFL) for s in self.SIGNALS]
128 # init new signaling
129 signal.signal(signal.SIGQUIT, self.handle_quit)
130 signal.signal(signal.SIGTERM, self.handle_exit)
131 signal.signal(signal.SIGINT, self.handle_quit)
132 signal.signal(signal.SIGWINCH, self.handle_winch)
133 signal.signal(signal.SIGUSR1, self.handle_usr1)
134 signal.signal(signal.SIGABRT, self.handle_abort)
135
136 # Don't let SIGTERM and SIGUSR1 disturb active requests
137 # by interrupting system calls
138 if hasattr(signal, 'siginterrupt'): # python >= 2.6
139 signal.siginterrupt(signal.SIGTERM, False)
140 signal.siginterrupt(signal.SIGUSR1, False)
141
142 def handle_usr1(self, sig, frame):
143 self.log.reopen_files()
144
145 def handle_exit(self, sig, frame):
146 self.alive = False
147
148 def handle_quit(self, sig, frame):
149 self.alive = False
150 # worker_int callback
151 self.cfg.worker_int(self)
152 sys.exit(0)
153
154 def handle_abort(self, sig, frame):
155 self.alive = False
156 self.cfg.worker_abort(self)
157 sys.exit(1)
158
159 def handle_error(self, req, client, addr, exc):
160 request_start = datetime.now()
161 addr = addr or ('', -1) # unix socket case
162 if isinstance(exc, (InvalidRequestLine, InvalidRequestMethod,
163 InvalidHTTPVersion, InvalidHeader, InvalidHeaderName,
164 LimitRequestLine, LimitRequestHeaders,
165 InvalidProxyLine, ForbiddenProxyRequest)):
166
167 status_int = 400
168 reason = "Bad Request"
169
170 if isinstance(exc, InvalidRequestLine):
171 mesg = "Invalid Request Line '%s'" % str(exc)
172 elif isinstance(exc, InvalidRequestMethod):
173 mesg = "Invalid Method '%s'" % str(exc)
174 elif isinstance(exc, InvalidHTTPVersion):
175 mesg = "Invalid HTTP Version '%s'" % str(exc)
176 elif isinstance(exc, (InvalidHeaderName, InvalidHeader,)):
177 mesg = "%s" % str(exc)
178 if not req and hasattr(exc, "req"):
179 req = exc.req # for access log
180 elif isinstance(exc, LimitRequestLine):
181 mesg = "%s" % str(exc)
182 elif isinstance(exc, LimitRequestHeaders):
183 mesg = "Error parsing headers: '%s'" % str(exc)
184 elif isinstance(exc, InvalidProxyLine):
185 mesg = "'%s'" % str(exc)
186 elif isinstance(exc, ForbiddenProxyRequest):
187 reason = "Forbidden"
188 mesg = "Request forbidden"
189 status_int = 403
190
191 msg = "Invalid request from ip={ip}: {error}"
192 self.log.debug(msg.format(ip=addr[0], error=str(exc)))
193 else:
194 self.log.exception("Error handling request")
195
196 status_int = 500
197 reason = "Internal Server Error"
198 mesg = ""
199
200 if req is not None:
201 request_time = datetime.now() - request_start
202 environ = default_environ(req, client, self.cfg)
203 environ['REMOTE_ADDR'] = addr[0]
204 environ['REMOTE_PORT'] = str(addr[1])
205 resp = Response(req, client, self.cfg)
206 resp.status = "%s %s" % (status_int, reason)
207 resp.response_length = len(mesg)
208 self.log.access(resp, req, environ, request_time)
209
210 try:
211 util.write_error(client, status_int, reason, mesg)
212 except:
213 self.log.debug("Failed to send error message.")
214
215 def handle_winch(self, sig, fname):
216 # Ignore SIGWINCH in worker. Fixes a crash on OpenBSD.
217 return
218
```
--- END FILES ---
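The docstrings in `base.py` above describe a small contract that is useful to keep in mind when reading the patch below: a subclass overrides `run()` to provide the main loop and must call `notify()` at least once every `self.timeout` seconds, otherwise the master process assumes the worker is hung and kills it. A minimal sketch of such a subclass follows; the class name, loop body, and one-second sleep are illustrative choices, not part of gunicorn itself.

```python
import time

from gunicorn.workers.base import Worker


class BusyLoopWorker(Worker):
    """Toy worker showing the run()/notify() contract from the docstrings."""

    def run(self):
        # A subclass supplies the main loop; calling notify() at least once
        # per `self.timeout` seconds refreshes the heartbeat so the master
        # does not treat this worker as hung.
        while self.alive:
            self.notify()
            time.sleep(1.0)
```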
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/gunicorn/workers/base.py b/gunicorn/workers/base.py
--- a/gunicorn/workers/base.py
+++ b/gunicorn/workers/base.py
@@ -87,7 +87,6 @@
def changed(fname):
self.log.info("Worker reloading: %s modified", fname)
os.kill(self.pid, signal.SIGQUIT)
- raise SystemExit()
Reloader(callback=changed).start()
# set environment' variables
| {"golden_diff": "diff --git a/gunicorn/workers/base.py b/gunicorn/workers/base.py\n--- a/gunicorn/workers/base.py\n+++ b/gunicorn/workers/base.py\n@@ -87,7 +87,6 @@\n def changed(fname):\n self.log.info(\"Worker reloading: %s modified\", fname)\n os.kill(self.pid, signal.SIGQUIT)\n- raise SystemExit()\n Reloader(callback=changed).start()\n \n # set environment' variables\n", "issue": "Reloading sometimes gives a TypeError: 'NoneType' object is not callable\nI'm running a custom app (subclass of `BaseApplication`) with reload set to True. In some situations, changing a file causes the following traceback:\n\n```\nException in thread Thread-1 (most likely raised during interpreter shutdown):\nTraceback (most recent call last):\n File \"/usr/maxm/lib/python2.7/threading.py\", line 551, in __bootstrap_inner\n File \"/usr/maxm/lib/python2.7/site-packages/gunicorn/reloader.py\", line 52, in run\n File \"/usr/maxm/lib/python2.7/site-packages/gunicorn/workers/base.py\", line 87, in changed\n<type 'exceptions.TypeError'>: 'NoneType' object is not callable\n```\n\nIt's intermittent; I can sometimes reproduce it several times in a row by touching the same file, and then it stops happening. It certainly doesn't seem to interfere with the reloading behavior.\n\nLine 87 is only `raise SystemExit()`. But line 86 is `os.kill(self.pid, signal.SIGQUIT)`, so I think what's happening is that the interpreter has started to tear down the environment and `SystemExit` has become `None`. (See also [this](http://article.gmane.org/gmane.comp.python.general/387087/) mailing list post.) \n\nAnything I can do about this?\n\n", "before_files": [{"content": "# -*- coding: utf-8 -\n#\n# This file is part of gunicorn released under the MIT license.\n# See the NOTICE for more information.\n\nfrom datetime import datetime\nimport os\nimport signal\nimport sys\nfrom random import randint\n\n\nfrom gunicorn import util\nfrom gunicorn.workers.workertmp import WorkerTmp\nfrom gunicorn.reloader import Reloader\nfrom gunicorn.http.errors import (\n InvalidHeader, InvalidHeaderName, InvalidRequestLine, InvalidRequestMethod,\n InvalidHTTPVersion, LimitRequestLine, LimitRequestHeaders,\n)\nfrom gunicorn.http.errors import InvalidProxyLine, ForbiddenProxyRequest\nfrom gunicorn.http.wsgi import default_environ, Response\nfrom gunicorn.six import MAXSIZE\n\n\nclass Worker(object):\n\n SIGNALS = [getattr(signal, \"SIG%s\" % x)\n for x in \"ABRT HUP QUIT INT TERM USR1 USR2 WINCH CHLD\".split()]\n\n PIPE = []\n\n def __init__(self, age, ppid, sockets, app, timeout, cfg, log):\n \"\"\"\\\n This is called pre-fork so it shouldn't do anything to the\n current process. If there's a need to make process wide\n changes you'll want to do that in ``self.init_process()``.\n \"\"\"\n self.age = age\n self.ppid = ppid\n self.sockets = sockets\n self.app = app\n self.timeout = timeout\n self.cfg = cfg\n self.booted = False\n self.aborted = False\n\n self.nr = 0\n jitter = randint(0, cfg.max_requests_jitter)\n self.max_requests = cfg.max_requests + jitter or MAXSIZE\n self.alive = True\n self.log = log\n self.tmp = WorkerTmp(cfg)\n\n def __str__(self):\n return \"<Worker %s>\" % self.pid\n\n @property\n def pid(self):\n return os.getpid()\n\n def notify(self):\n \"\"\"\\\n Your worker subclass must arrange to have this method called\n once every ``self.timeout`` seconds. 
If you fail in accomplishing\n this task, the master process will murder your workers.\n \"\"\"\n self.tmp.notify()\n\n def run(self):\n \"\"\"\\\n This is the mainloop of a worker process. You should override\n this method in a subclass to provide the intended behaviour\n for your particular evil schemes.\n \"\"\"\n raise NotImplementedError()\n\n def init_process(self):\n \"\"\"\\\n If you override this method in a subclass, the last statement\n in the function should be to call this method with\n super(MyWorkerClass, self).init_process() so that the ``run()``\n loop is initiated.\n \"\"\"\n\n # start the reloader\n if self.cfg.reload:\n def changed(fname):\n self.log.info(\"Worker reloading: %s modified\", fname)\n os.kill(self.pid, signal.SIGQUIT)\n raise SystemExit()\n Reloader(callback=changed).start()\n\n # set environment' variables\n if self.cfg.env:\n for k, v in self.cfg.env.items():\n os.environ[k] = v\n\n util.set_owner_process(self.cfg.uid, self.cfg.gid)\n\n # Reseed the random number generator\n util.seed()\n\n # For waking ourselves up\n self.PIPE = os.pipe()\n for p in self.PIPE:\n util.set_non_blocking(p)\n util.close_on_exec(p)\n\n # Prevent fd inheritance\n [util.close_on_exec(s) for s in self.sockets]\n util.close_on_exec(self.tmp.fileno())\n\n self.log.close_on_exec()\n\n self.init_signals()\n\n self.wsgi = self.app.wsgi()\n\n self.cfg.post_worker_init(self)\n\n # Enter main run loop\n self.booted = True\n self.run()\n\n def init_signals(self):\n # reset signaling\n [signal.signal(s, signal.SIG_DFL) for s in self.SIGNALS]\n # init new signaling\n signal.signal(signal.SIGQUIT, self.handle_quit)\n signal.signal(signal.SIGTERM, self.handle_exit)\n signal.signal(signal.SIGINT, self.handle_quit)\n signal.signal(signal.SIGWINCH, self.handle_winch)\n signal.signal(signal.SIGUSR1, self.handle_usr1)\n signal.signal(signal.SIGABRT, self.handle_abort)\n\n # Don't let SIGTERM and SIGUSR1 disturb active requests\n # by interrupting system calls\n if hasattr(signal, 'siginterrupt'): # python >= 2.6\n signal.siginterrupt(signal.SIGTERM, False)\n signal.siginterrupt(signal.SIGUSR1, False)\n\n def handle_usr1(self, sig, frame):\n self.log.reopen_files()\n\n def handle_exit(self, sig, frame):\n self.alive = False\n\n def handle_quit(self, sig, frame):\n self.alive = False\n # worker_int callback\n self.cfg.worker_int(self)\n sys.exit(0)\n\n def handle_abort(self, sig, frame):\n self.alive = False\n self.cfg.worker_abort(self)\n sys.exit(1)\n\n def handle_error(self, req, client, addr, exc):\n request_start = datetime.now()\n addr = addr or ('', -1) # unix socket case\n if isinstance(exc, (InvalidRequestLine, InvalidRequestMethod,\n InvalidHTTPVersion, InvalidHeader, InvalidHeaderName,\n LimitRequestLine, LimitRequestHeaders,\n InvalidProxyLine, ForbiddenProxyRequest)):\n\n status_int = 400\n reason = \"Bad Request\"\n\n if isinstance(exc, InvalidRequestLine):\n mesg = \"Invalid Request Line '%s'\" % str(exc)\n elif isinstance(exc, InvalidRequestMethod):\n mesg = \"Invalid Method '%s'\" % str(exc)\n elif isinstance(exc, InvalidHTTPVersion):\n mesg = \"Invalid HTTP Version '%s'\" % str(exc)\n elif isinstance(exc, (InvalidHeaderName, InvalidHeader,)):\n mesg = \"%s\" % str(exc)\n if not req and hasattr(exc, \"req\"):\n req = exc.req # for access log\n elif isinstance(exc, LimitRequestLine):\n mesg = \"%s\" % str(exc)\n elif isinstance(exc, LimitRequestHeaders):\n mesg = \"Error parsing headers: '%s'\" % str(exc)\n elif isinstance(exc, InvalidProxyLine):\n mesg = \"'%s'\" % str(exc)\n elif 
isinstance(exc, ForbiddenProxyRequest):\n reason = \"Forbidden\"\n mesg = \"Request forbidden\"\n status_int = 403\n\n msg = \"Invalid request from ip={ip}: {error}\"\n self.log.debug(msg.format(ip=addr[0], error=str(exc)))\n else:\n self.log.exception(\"Error handling request\")\n\n status_int = 500\n reason = \"Internal Server Error\"\n mesg = \"\"\n\n if req is not None:\n request_time = datetime.now() - request_start\n environ = default_environ(req, client, self.cfg)\n environ['REMOTE_ADDR'] = addr[0]\n environ['REMOTE_PORT'] = str(addr[1])\n resp = Response(req, client, self.cfg)\n resp.status = \"%s %s\" % (status_int, reason)\n resp.response_length = len(mesg)\n self.log.access(resp, req, environ, request_time)\n\n try:\n util.write_error(client, status_int, reason, mesg)\n except:\n self.log.debug(\"Failed to send error message.\")\n\n def handle_winch(self, sig, fname):\n # Ignore SIGWINCH in worker. Fixes a crash on OpenBSD.\n return\n", "path": "gunicorn/workers/base.py"}], "after_files": [{"content": "# -*- coding: utf-8 -\n#\n# This file is part of gunicorn released under the MIT license.\n# See the NOTICE for more information.\n\nfrom datetime import datetime\nimport os\nimport signal\nimport sys\nfrom random import randint\n\n\nfrom gunicorn import util\nfrom gunicorn.workers.workertmp import WorkerTmp\nfrom gunicorn.reloader import Reloader\nfrom gunicorn.http.errors import (\n InvalidHeader, InvalidHeaderName, InvalidRequestLine, InvalidRequestMethod,\n InvalidHTTPVersion, LimitRequestLine, LimitRequestHeaders,\n)\nfrom gunicorn.http.errors import InvalidProxyLine, ForbiddenProxyRequest\nfrom gunicorn.http.wsgi import default_environ, Response\nfrom gunicorn.six import MAXSIZE\n\n\nclass Worker(object):\n\n SIGNALS = [getattr(signal, \"SIG%s\" % x)\n for x in \"ABRT HUP QUIT INT TERM USR1 USR2 WINCH CHLD\".split()]\n\n PIPE = []\n\n def __init__(self, age, ppid, sockets, app, timeout, cfg, log):\n \"\"\"\\\n This is called pre-fork so it shouldn't do anything to the\n current process. If there's a need to make process wide\n changes you'll want to do that in ``self.init_process()``.\n \"\"\"\n self.age = age\n self.ppid = ppid\n self.sockets = sockets\n self.app = app\n self.timeout = timeout\n self.cfg = cfg\n self.booted = False\n self.aborted = False\n\n self.nr = 0\n jitter = randint(0, cfg.max_requests_jitter)\n self.max_requests = cfg.max_requests + jitter or MAXSIZE\n self.alive = True\n self.log = log\n self.tmp = WorkerTmp(cfg)\n\n def __str__(self):\n return \"<Worker %s>\" % self.pid\n\n @property\n def pid(self):\n return os.getpid()\n\n def notify(self):\n \"\"\"\\\n Your worker subclass must arrange to have this method called\n once every ``self.timeout`` seconds. If you fail in accomplishing\n this task, the master process will murder your workers.\n \"\"\"\n self.tmp.notify()\n\n def run(self):\n \"\"\"\\\n This is the mainloop of a worker process. 
You should override\n this method in a subclass to provide the intended behaviour\n for your particular evil schemes.\n \"\"\"\n raise NotImplementedError()\n\n def init_process(self):\n \"\"\"\\\n If you override this method in a subclass, the last statement\n in the function should be to call this method with\n super(MyWorkerClass, self).init_process() so that the ``run()``\n loop is initiated.\n \"\"\"\n\n # start the reloader\n if self.cfg.reload:\n def changed(fname):\n self.log.info(\"Worker reloading: %s modified\", fname)\n os.kill(self.pid, signal.SIGQUIT)\n Reloader(callback=changed).start()\n\n # set environment' variables\n if self.cfg.env:\n for k, v in self.cfg.env.items():\n os.environ[k] = v\n\n util.set_owner_process(self.cfg.uid, self.cfg.gid)\n\n # Reseed the random number generator\n util.seed()\n\n # For waking ourselves up\n self.PIPE = os.pipe()\n for p in self.PIPE:\n util.set_non_blocking(p)\n util.close_on_exec(p)\n\n # Prevent fd inheritance\n [util.close_on_exec(s) for s in self.sockets]\n util.close_on_exec(self.tmp.fileno())\n\n self.log.close_on_exec()\n\n self.init_signals()\n\n self.wsgi = self.app.wsgi()\n\n self.cfg.post_worker_init(self)\n\n # Enter main run loop\n self.booted = True\n self.run()\n\n def init_signals(self):\n # reset signaling\n [signal.signal(s, signal.SIG_DFL) for s in self.SIGNALS]\n # init new signaling\n signal.signal(signal.SIGQUIT, self.handle_quit)\n signal.signal(signal.SIGTERM, self.handle_exit)\n signal.signal(signal.SIGINT, self.handle_quit)\n signal.signal(signal.SIGWINCH, self.handle_winch)\n signal.signal(signal.SIGUSR1, self.handle_usr1)\n signal.signal(signal.SIGABRT, self.handle_abort)\n\n # Don't let SIGTERM and SIGUSR1 disturb active requests\n # by interrupting system calls\n if hasattr(signal, 'siginterrupt'): # python >= 2.6\n signal.siginterrupt(signal.SIGTERM, False)\n signal.siginterrupt(signal.SIGUSR1, False)\n\n def handle_usr1(self, sig, frame):\n self.log.reopen_files()\n\n def handle_exit(self, sig, frame):\n self.alive = False\n\n def handle_quit(self, sig, frame):\n self.alive = False\n # worker_int callback\n self.cfg.worker_int(self)\n sys.exit(0)\n\n def handle_abort(self, sig, frame):\n self.alive = False\n self.cfg.worker_abort(self)\n sys.exit(1)\n\n def handle_error(self, req, client, addr, exc):\n request_start = datetime.now()\n addr = addr or ('', -1) # unix socket case\n if isinstance(exc, (InvalidRequestLine, InvalidRequestMethod,\n InvalidHTTPVersion, InvalidHeader, InvalidHeaderName,\n LimitRequestLine, LimitRequestHeaders,\n InvalidProxyLine, ForbiddenProxyRequest)):\n\n status_int = 400\n reason = \"Bad Request\"\n\n if isinstance(exc, InvalidRequestLine):\n mesg = \"Invalid Request Line '%s'\" % str(exc)\n elif isinstance(exc, InvalidRequestMethod):\n mesg = \"Invalid Method '%s'\" % str(exc)\n elif isinstance(exc, InvalidHTTPVersion):\n mesg = \"Invalid HTTP Version '%s'\" % str(exc)\n elif isinstance(exc, (InvalidHeaderName, InvalidHeader,)):\n mesg = \"%s\" % str(exc)\n if not req and hasattr(exc, \"req\"):\n req = exc.req # for access log\n elif isinstance(exc, LimitRequestLine):\n mesg = \"%s\" % str(exc)\n elif isinstance(exc, LimitRequestHeaders):\n mesg = \"Error parsing headers: '%s'\" % str(exc)\n elif isinstance(exc, InvalidProxyLine):\n mesg = \"'%s'\" % str(exc)\n elif isinstance(exc, ForbiddenProxyRequest):\n reason = \"Forbidden\"\n mesg = \"Request forbidden\"\n status_int = 403\n\n msg = \"Invalid request from ip={ip}: {error}\"\n self.log.debug(msg.format(ip=addr[0], 
error=str(exc)))\n else:\n self.log.exception(\"Error handling request\")\n\n status_int = 500\n reason = \"Internal Server Error\"\n mesg = \"\"\n\n if req is not None:\n request_time = datetime.now() - request_start\n environ = default_environ(req, client, self.cfg)\n environ['REMOTE_ADDR'] = addr[0]\n environ['REMOTE_PORT'] = str(addr[1])\n resp = Response(req, client, self.cfg)\n resp.status = \"%s %s\" % (status_int, reason)\n resp.response_length = len(mesg)\n self.log.access(resp, req, environ, request_time)\n\n try:\n util.write_error(client, status_int, reason, mesg)\n except:\n self.log.debug(\"Failed to send error message.\")\n\n def handle_winch(self, sig, fname):\n # Ignore SIGWINCH in worker. Fixes a crash on OpenBSD.\n return\n", "path": "gunicorn/workers/base.py"}]} |
gh_patches_debug_1153 | rasdani/github-patches | git_diff | openmc-dev__openmc-1724 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Typo in `RectangularParallelepiped.__pos__` method.
A user recently [reported a problem](https://openmc.discourse.group/t/openmc-model-rectangularparallelepiped-usage/858) on Discourse with the region output when using the positive half-space of a `RectangularParallelepiped` object.
It seems that there was a typo when defining the `RectangularParallelepiped.__pos__` method (`ymax` is used as the positive x bound).
https://github.com/openmc-dev/openmc/blob/88fb7b03491907c7d4cddbdb67cfe289fda813ce/openmc/model/surface_composite.py#L165-L166
I'll submit a PR for this shortly.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `openmc/model/surface_composite.py`
Content:
```
1 from abc import ABC, abstractmethod
2 from copy import copy
3
4 import openmc
5 from openmc.checkvalue import check_greater_than, check_value
6
7
8 class CompositeSurface(ABC):
9 """Multiple primitive surfaces combined into a composite surface"""
10
11 def translate(self, vector, inplace=False):
12 surf = self if inplace else copy(self)
13 for name in self._surface_names:
14 s = getattr(surf, name)
15 setattr(surf, name, s.translate(vector, inplace))
16 return surf
17
18 def rotate(self, rotation, pivot=(0., 0., 0.), order='xyz', inplace=False):
19 surf = copy(self)
20 for name in self._surface_names:
21 s = getattr(surf, name)
22 setattr(surf, name, s.rotate(rotation, pivot, order, inplace))
23 return surf
24
25 @property
26 def boundary_type(self):
27 return getattr(self, self._surface_names[0]).boundary_type
28
29 @boundary_type.setter
30 def boundary_type(self, boundary_type):
31 # Set boundary type on underlying surfaces, but not for ambiguity plane
32 # on one-sided cones
33 for name in self._surface_names:
34 if name != 'plane':
35 getattr(self, name).boundary_type = boundary_type
36
37 def __repr__(self):
38 return "<{} at 0x{:x}>".format(type(self).__name__, id(self))
39
40 @property
41 @abstractmethod
42 def _surface_names(self):
43 """Iterable of attribute names corresponding to underlying surfaces."""
44
45 @abstractmethod
46 def __pos__(self):
47 """Return the positive half-space of the composite surface."""
48
49 @abstractmethod
50 def __neg__(self):
51 """Return the negative half-space of the composite surface."""
52
53
54 class RightCircularCylinder(CompositeSurface):
55 """Right circular cylinder composite surface
56
57 A right circular cylinder is composed of a cylinder and two planar surface
58 perpendicular to the axis of the cylinder. This class acts as a proper
59 surface, meaning that unary `+` and `-` operators applied to it will produce
60 a half-space. The negative side is defined to be the region inside of the
61 right circular cylinder.
62
63 .. versionadded:: 0.12
64
65 Parameters
66 ----------
67 center_base : iterable of float
68 Cartesian coordinate of the center of the base of the cylinder
69 height : float
70 Height of the cylinder
71 radius : float
72 Radius of the cylinder
73 axis : {'x', 'y', 'z'}
74 Axis of the cylinder
75 **kwargs
76 Keyword arguments passed to underlying cylinder and plane classes
77
78 Attributes
79 ----------
80 cyl : openmc.Cylinder
81 Underlying cylinder surface
82 bottom : openmc.Plane
83 Bottom planar surface of the cylinder
84 top : openmc.Plane
85 Top planar surface of the cylinder
86
87 """
88 _surface_names = ('cyl', 'bottom', 'top')
89
90 def __init__(self, center_base, height, radius, axis='z', **kwargs):
91 cx, cy, cz = center_base
92 check_greater_than('cylinder height', height, 0.0)
93 check_greater_than('cylinder radius', radius, 0.0)
94 check_value('cylinder axis', axis, ('x', 'y', 'z'))
95 if axis == 'x':
96 self.cyl = openmc.XCylinder(y0=cy, z0=cz, r=radius, **kwargs)
97 self.bottom = openmc.XPlane(x0=cx, **kwargs)
98 self.top = openmc.XPlane(x0=cx + height, **kwargs)
99 elif axis == 'y':
100 self.cyl = openmc.YCylinder(x0=cx, z0=cz, r=radius, **kwargs)
101 self.bottom = openmc.YPlane(y0=cy, **kwargs)
102 self.top = openmc.YPlane(y0=cy + height, **kwargs)
103 elif axis == 'z':
104 self.cyl = openmc.ZCylinder(x0=cx, y0=cy, r=radius, **kwargs)
105 self.bottom = openmc.ZPlane(z0=cz, **kwargs)
106 self.top = openmc.ZPlane(z0=cz + height, **kwargs)
107
108 def __neg__(self):
109 return -self.cyl & +self.bottom & -self.top
110
111 def __pos__(self):
112 return +self.cyl | -self.bottom | +self.top
113
114
115 class RectangularParallelepiped(CompositeSurface):
116 """Rectangular parallelpiped composite surface
117
118 A rectangular parallelpiped is composed of six planar surfaces. This class
119 acts as a proper surface, meaning that unary `+` and `-` operators applied
120 to it will produce a half-space. The negative side is defined to be the
121 region inside of the rectangular parallelpiped.
122
123 .. versionadded:: 0.12
124
125 Parameters
126 ----------
127 xmin, xmax : float
128 Minimum and maximum x coordinates of the parallelepiped
129 ymin, ymax : float
130 Minimum and maximum y coordinates of the parallelepiped
131 zmin, zmax : float
132 Minimum and maximum z coordinates of the parallelepiped
133 **kwargs
134 Keyword arguments passed to underlying plane classes
135
136 Attributes
137 ----------
138 xmin, xmax : openmc.XPlane
139 Sides of the parallelepiped
140 ymin, ymax : openmc.YPlane
141 Sides of the parallelepiped
142 zmin, zmax : openmc.ZPlane
143 Sides of the parallelepiped
144
145 """
146 _surface_names = ('xmin', 'xmax', 'ymin', 'ymax', 'zmin', 'zmax')
147
148 def __init__(self, xmin, xmax, ymin, ymax, zmin, zmax, **kwargs):
149 if xmin >= xmax:
150 raise ValueError('xmin must be less than xmax')
151 if ymin >= ymax:
152 raise ValueError('ymin must be less than ymax')
153 if zmin >= zmax:
154 raise ValueError('zmin must be less than zmax')
155 self.xmin = openmc.XPlane(x0=xmin, **kwargs)
156 self.xmax = openmc.XPlane(x0=xmax, **kwargs)
157 self.ymin = openmc.YPlane(y0=ymin, **kwargs)
158 self.ymax = openmc.YPlane(y0=ymax, **kwargs)
159 self.zmin = openmc.ZPlane(z0=zmin, **kwargs)
160 self.zmax = openmc.ZPlane(z0=zmax, **kwargs)
161
162 def __neg__(self):
163 return +self.xmin & -self.xmax & +self.ymin & -self.ymax & +self.zmin & -self.zmax
164
165 def __pos__(self):
166 return -self.xmin | +self.ymax | -self.ymin | +self.ymax | -self.zmin | +self.zmax
167
168
169 class XConeOneSided(CompositeSurface):
170 """One-sided cone parallel the x-axis
171
172 A one-sided cone is composed of a normal cone surface and an "ambiguity"
173 surface that eliminates the ambiguity as to which region of space is
174 included. This class acts as a proper surface, meaning that unary `+` and
175 `-` operators applied to it will produce a half-space. The negative side is
176 defined to be the region inside of the cone.
177
178 .. versionadded:: 0.12
179
180 Parameters
181 ----------
182 x0 : float, optional
183 x-coordinate of the apex. Defaults to 0.
184 y0 : float, optional
185 y-coordinate of the apex. Defaults to 0.
186 z0 : float, optional
187 z-coordinate of the apex. Defaults to 0.
188 r2 : float, optional
189 Parameter related to the aperature. Defaults to 1.
190 up : bool
191 Whether to select the side of the cone that extends to infinity in the
192 positive direction of the coordinate axis (the positive half-space of
193 the ambiguity plane)
194 **kwargs
195 Keyword arguments passed to underlying plane classes
196
197 Attributes
198 ----------
199 cone : openmc.XCone
200 Regular two-sided cone
201 plane : openmc.XPlane
202 Ambiguity surface
203 up : bool
204 Whether to select the side of the cone that extends to infinity in the
205 positive direction of the coordinate axis (the positive half-space of
206 the ambiguity plane)
207
208 """
209 _surface_names = ('cone', 'plane')
210
211 def __init__(self, x0=0., y0=0., z0=0., r2=1., up=True, **kwargs):
212 check_greater_than('cone R^2', r2, 0.0)
213 self.cone = openmc.XCone(x0, y0, z0, r2, **kwargs)
214 self.plane = openmc.XPlane(x0)
215 self.up = up
216
217 def __neg__(self):
218 return -self.cone & (+self.plane if self.up else -self.plane)
219
220 def __pos__(self):
221 if self.up:
222 return (+self.cone & +self.plane) | -self.plane
223 else:
224 return (+self.cone & -self.plane) | +self.plane
225
226
227 class YConeOneSided(CompositeSurface):
228 """One-sided cone parallel the y-axis
229
230 A one-sided cone is composed of a normal cone surface and an "ambiguity"
231 surface that eliminates the ambiguity as to which region of space is
232 included. This class acts as a proper surface, meaning that unary `+` and
233 `-` operators applied to it will produce a half-space. The negative side is
234 defined to be the region inside of the cone.
235
236 .. versionadded:: 0.12
237
238 Parameters
239 ----------
240 x0 : float, optional
241 x-coordinate of the apex. Defaults to 0.
242 y0 : float, optional
243 y-coordinate of the apex. Defaults to 0.
244 z0 : float, optional
245 z-coordinate of the apex. Defaults to 0.
246 r2 : float, optional
247 Parameter related to the aperature. Defaults to 1.
248 up : bool
249 Whether to select the side of the cone that extends to infinity in the
250 positive direction of the coordinate axis (the positive half-space of
251 the ambiguity plane)
252 **kwargs
253 Keyword arguments passed to underlying plane classes
254
255 Attributes
256 ----------
257 cone : openmc.YCone
258 Regular two-sided cone
259 plane : openmc.YPlane
260 Ambiguity surface
261 up : bool
262 Whether to select the side of the cone that extends to infinity in the
263 positive direction of the coordinate axis (the positive half-space of
264 the ambiguity plane)
265
266 """
267 _surface_names = ('cone', 'plane')
268
269 def __init__(self, x0=0., y0=0., z0=0., r2=1., up=True, **kwargs):
270 check_greater_than('cone R^2', r2, 0.0)
271 self.cone = openmc.YCone(x0, y0, z0, r2, **kwargs)
272 self.plane = openmc.YPlane(y0)
273 self.up = up
274
275 __neg__ = XConeOneSided.__neg__
276 __pos__ = XConeOneSided.__pos__
277
278
279 class ZConeOneSided(CompositeSurface):
280 """One-sided cone parallel the z-axis
281
282 A one-sided cone is composed of a normal cone surface and an "ambiguity"
283 surface that eliminates the ambiguity as to which region of space is
284 included. This class acts as a proper surface, meaning that unary `+` and
285 `-` operators applied to it will produce a half-space. The negative side is
286 defined to be the region inside of the cone.
287
288 .. versionadded:: 0.12
289
290 Parameters
291 ----------
292 x0 : float, optional
293 x-coordinate of the apex. Defaults to 0.
294 y0 : float, optional
295 y-coordinate of the apex. Defaults to 0.
296 z0 : float, optional
297 z-coordinate of the apex. Defaults to 0.
298 r2 : float, optional
299 Parameter related to the aperature. Defaults to 1.
300 up : bool
301 Whether to select the side of the cone that extends to infinity in the
302 positive direction of the coordinate axis (the positive half-space of
303 the ambiguity plane)
304 **kwargs
305 Keyword arguments passed to underlying plane classes
306
307 Attributes
308 ----------
309 cone : openmc.ZCone
310 Regular two-sided cone
311 plane : openmc.ZPlane
312 Ambiguity surface
313 up : bool
314 Whether to select the side of the cone that extends to infinity in the
315 positive direction of the coordinate axis (the positive half-space of
316 the ambiguity plane)
317
318 """
319 _surface_names = ('cone', 'plane')
320
321 def __init__(self, x0=0., y0=0., z0=0., r2=1., up=True, **kwargs):
322 check_greater_than('cone R^2', r2, 0.0)
323 self.cone = openmc.ZCone(x0, y0, z0, r2, **kwargs)
324 self.plane = openmc.ZPlane(z0)
325 self.up = up
326
327 __neg__ = XConeOneSided.__neg__
328 __pos__ = XConeOneSided.__pos__
329
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/openmc/model/surface_composite.py b/openmc/model/surface_composite.py
--- a/openmc/model/surface_composite.py
+++ b/openmc/model/surface_composite.py
@@ -163,7 +163,7 @@
         return +self.xmin & -self.xmax & +self.ymin & -self.ymax & +self.zmin & -self.zmax
 
     def __pos__(self):
-        return -self.xmin | +self.ymax | -self.ymin | +self.ymax | -self.zmin | +self.zmax
+        return -self.xmin | +self.xmax | -self.ymin | +self.ymax | -self.zmin | +self.zmax
 
 
 class XConeOneSided(CompositeSurface):
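To make the one-character change concrete, here is a short usage sketch. It assumes the class is exposed as `openmc.model.RectangularParallelepiped`, as in the linked Discourse thread, and the box dimensions and cell are made up for illustration.

```python
import openmc

# A 10 cm cube centred at the origin (purely illustrative dimensions).
box = openmc.model.RectangularParallelepiped(-5.0, 5.0, -5.0, 5.0, -5.0, 5.0)

inside = -box    # intersection of the six inner half-spaces
outside = +box   # union of the six outer half-spaces

# Before the patch, `outside` referenced +ymax twice and never +xmax, so the
# half-space beyond the +x face (x > 5 here) was silently missing from the
# region. After the patch, `+box` is the expected complement of `-box`.
outer_cell = openmc.Cell(region=outside)
```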
| {"golden_diff": "diff --git a/openmc/model/surface_composite.py b/openmc/model/surface_composite.py\n--- a/openmc/model/surface_composite.py\n+++ b/openmc/model/surface_composite.py\n@@ -163,7 +163,7 @@\n return +self.xmin & -self.xmax & +self.ymin & -self.ymax & +self.zmin & -self.zmax\n \n def __pos__(self):\n- return -self.xmin | +self.ymax | -self.ymin | +self.ymax | -self.zmin | +self.zmax\n+ return -self.xmin | +self.xmax | -self.ymin | +self.ymax | -self.zmin | +self.zmax\n \n \n class XConeOneSided(CompositeSurface):\n", "issue": "Typo in `RectangularParallelepiped.__pos__` method.\nA user [reported a problem](https://openmc.discourse.group/t/openmc-model-rectangularparallelepiped-usage/858) in the region output when using the positive halfspace of a `RectangularParallelepiped` object recently in discourse.\r\n\r\nIt seems that there was a typo when defining the `RectangularParallelepiped.__pos__` method (`ymax` is used as the positive x bound).\r\n\r\nhttps://github.com/openmc-dev/openmc/blob/88fb7b03491907c7d4cddbdb67cfe289fda813ce/openmc/model/surface_composite.py#L165-L166\r\n\r\nI'll submit a PR for this shortly.\n", "before_files": [{"content": "from abc import ABC, abstractmethod\nfrom copy import copy\n\nimport openmc\nfrom openmc.checkvalue import check_greater_than, check_value\n\n\nclass CompositeSurface(ABC):\n \"\"\"Multiple primitive surfaces combined into a composite surface\"\"\"\n\n def translate(self, vector, inplace=False):\n surf = self if inplace else copy(self)\n for name in self._surface_names:\n s = getattr(surf, name)\n setattr(surf, name, s.translate(vector, inplace))\n return surf\n\n def rotate(self, rotation, pivot=(0., 0., 0.), order='xyz', inplace=False):\n surf = copy(self)\n for name in self._surface_names:\n s = getattr(surf, name)\n setattr(surf, name, s.rotate(rotation, pivot, order, inplace))\n return surf\n\n @property\n def boundary_type(self):\n return getattr(self, self._surface_names[0]).boundary_type\n\n @boundary_type.setter\n def boundary_type(self, boundary_type):\n # Set boundary type on underlying surfaces, but not for ambiguity plane\n # on one-sided cones\n for name in self._surface_names:\n if name != 'plane':\n getattr(self, name).boundary_type = boundary_type\n\n def __repr__(self):\n return \"<{} at 0x{:x}>\".format(type(self).__name__, id(self))\n\n @property\n @abstractmethod\n def _surface_names(self):\n \"\"\"Iterable of attribute names corresponding to underlying surfaces.\"\"\"\n\n @abstractmethod\n def __pos__(self):\n \"\"\"Return the positive half-space of the composite surface.\"\"\"\n\n @abstractmethod\n def __neg__(self):\n \"\"\"Return the negative half-space of the composite surface.\"\"\"\n\n\nclass RightCircularCylinder(CompositeSurface):\n \"\"\"Right circular cylinder composite surface\n\n A right circular cylinder is composed of a cylinder and two planar surface\n perpendicular to the axis of the cylinder. This class acts as a proper\n surface, meaning that unary `+` and `-` operators applied to it will produce\n a half-space. The negative side is defined to be the region inside of the\n right circular cylinder.\n\n .. 
versionadded:: 0.12\n\n Parameters\n ----------\n center_base : iterable of float\n Cartesian coordinate of the center of the base of the cylinder\n height : float\n Height of the cylinder\n radius : float\n Radius of the cylinder\n axis : {'x', 'y', 'z'}\n Axis of the cylinder\n **kwargs\n Keyword arguments passed to underlying cylinder and plane classes\n\n Attributes\n ----------\n cyl : openmc.Cylinder\n Underlying cylinder surface\n bottom : openmc.Plane\n Bottom planar surface of the cylinder\n top : openmc.Plane\n Top planar surface of the cylinder\n\n \"\"\"\n _surface_names = ('cyl', 'bottom', 'top')\n\n def __init__(self, center_base, height, radius, axis='z', **kwargs):\n cx, cy, cz = center_base\n check_greater_than('cylinder height', height, 0.0)\n check_greater_than('cylinder radius', radius, 0.0)\n check_value('cylinder axis', axis, ('x', 'y', 'z'))\n if axis == 'x':\n self.cyl = openmc.XCylinder(y0=cy, z0=cz, r=radius, **kwargs)\n self.bottom = openmc.XPlane(x0=cx, **kwargs)\n self.top = openmc.XPlane(x0=cx + height, **kwargs)\n elif axis == 'y':\n self.cyl = openmc.YCylinder(x0=cx, z0=cz, r=radius, **kwargs)\n self.bottom = openmc.YPlane(y0=cy, **kwargs)\n self.top = openmc.YPlane(y0=cy + height, **kwargs)\n elif axis == 'z':\n self.cyl = openmc.ZCylinder(x0=cx, y0=cy, r=radius, **kwargs)\n self.bottom = openmc.ZPlane(z0=cz, **kwargs)\n self.top = openmc.ZPlane(z0=cz + height, **kwargs)\n\n def __neg__(self):\n return -self.cyl & +self.bottom & -self.top\n\n def __pos__(self):\n return +self.cyl | -self.bottom | +self.top\n\n\nclass RectangularParallelepiped(CompositeSurface):\n \"\"\"Rectangular parallelpiped composite surface\n\n A rectangular parallelpiped is composed of six planar surfaces. This class\n acts as a proper surface, meaning that unary `+` and `-` operators applied\n to it will produce a half-space. The negative side is defined to be the\n region inside of the rectangular parallelpiped.\n\n .. 
versionadded:: 0.12\n\n Parameters\n ----------\n xmin, xmax : float\n Minimum and maximum x coordinates of the parallelepiped\n ymin, ymax : float\n Minimum and maximum y coordinates of the parallelepiped\n zmin, zmax : float\n Minimum and maximum z coordinates of the parallelepiped\n **kwargs\n Keyword arguments passed to underlying plane classes\n\n Attributes\n ----------\n xmin, xmax : openmc.XPlane\n Sides of the parallelepiped\n ymin, ymax : openmc.YPlane\n Sides of the parallelepiped\n zmin, zmax : openmc.ZPlane\n Sides of the parallelepiped\n\n \"\"\"\n _surface_names = ('xmin', 'xmax', 'ymin', 'ymax', 'zmin', 'zmax')\n\n def __init__(self, xmin, xmax, ymin, ymax, zmin, zmax, **kwargs):\n if xmin >= xmax:\n raise ValueError('xmin must be less than xmax')\n if ymin >= ymax:\n raise ValueError('ymin must be less than ymax')\n if zmin >= zmax:\n raise ValueError('zmin must be less than zmax')\n self.xmin = openmc.XPlane(x0=xmin, **kwargs)\n self.xmax = openmc.XPlane(x0=xmax, **kwargs)\n self.ymin = openmc.YPlane(y0=ymin, **kwargs)\n self.ymax = openmc.YPlane(y0=ymax, **kwargs)\n self.zmin = openmc.ZPlane(z0=zmin, **kwargs)\n self.zmax = openmc.ZPlane(z0=zmax, **kwargs)\n\n def __neg__(self):\n return +self.xmin & -self.xmax & +self.ymin & -self.ymax & +self.zmin & -self.zmax\n\n def __pos__(self):\n return -self.xmin | +self.ymax | -self.ymin | +self.ymax | -self.zmin | +self.zmax\n\n\nclass XConeOneSided(CompositeSurface):\n \"\"\"One-sided cone parallel the x-axis\n\n A one-sided cone is composed of a normal cone surface and an \"ambiguity\"\n surface that eliminates the ambiguity as to which region of space is\n included. This class acts as a proper surface, meaning that unary `+` and\n `-` operators applied to it will produce a half-space. The negative side is\n defined to be the region inside of the cone.\n\n .. versionadded:: 0.12\n\n Parameters\n ----------\n x0 : float, optional\n x-coordinate of the apex. Defaults to 0.\n y0 : float, optional\n y-coordinate of the apex. Defaults to 0.\n z0 : float, optional\n z-coordinate of the apex. Defaults to 0.\n r2 : float, optional\n Parameter related to the aperature. Defaults to 1.\n up : bool\n Whether to select the side of the cone that extends to infinity in the\n positive direction of the coordinate axis (the positive half-space of\n the ambiguity plane)\n **kwargs\n Keyword arguments passed to underlying plane classes\n\n Attributes\n ----------\n cone : openmc.XCone\n Regular two-sided cone\n plane : openmc.XPlane\n Ambiguity surface\n up : bool\n Whether to select the side of the cone that extends to infinity in the\n positive direction of the coordinate axis (the positive half-space of\n the ambiguity plane)\n\n \"\"\"\n _surface_names = ('cone', 'plane')\n\n def __init__(self, x0=0., y0=0., z0=0., r2=1., up=True, **kwargs):\n check_greater_than('cone R^2', r2, 0.0)\n self.cone = openmc.XCone(x0, y0, z0, r2, **kwargs)\n self.plane = openmc.XPlane(x0)\n self.up = up\n\n def __neg__(self):\n return -self.cone & (+self.plane if self.up else -self.plane)\n\n def __pos__(self):\n if self.up:\n return (+self.cone & +self.plane) | -self.plane\n else:\n return (+self.cone & -self.plane) | +self.plane\n\n\nclass YConeOneSided(CompositeSurface):\n \"\"\"One-sided cone parallel the y-axis\n\n A one-sided cone is composed of a normal cone surface and an \"ambiguity\"\n surface that eliminates the ambiguity as to which region of space is\n included. 
This class acts as a proper surface, meaning that unary `+` and\n `-` operators applied to it will produce a half-space. The negative side is\n defined to be the region inside of the cone.\n\n .. versionadded:: 0.12\n\n Parameters\n ----------\n x0 : float, optional\n x-coordinate of the apex. Defaults to 0.\n y0 : float, optional\n y-coordinate of the apex. Defaults to 0.\n z0 : float, optional\n z-coordinate of the apex. Defaults to 0.\n r2 : float, optional\n Parameter related to the aperature. Defaults to 1.\n up : bool\n Whether to select the side of the cone that extends to infinity in the\n positive direction of the coordinate axis (the positive half-space of\n the ambiguity plane)\n **kwargs\n Keyword arguments passed to underlying plane classes\n\n Attributes\n ----------\n cone : openmc.YCone\n Regular two-sided cone\n plane : openmc.YPlane\n Ambiguity surface\n up : bool\n Whether to select the side of the cone that extends to infinity in the\n positive direction of the coordinate axis (the positive half-space of\n the ambiguity plane)\n\n \"\"\"\n _surface_names = ('cone', 'plane')\n\n def __init__(self, x0=0., y0=0., z0=0., r2=1., up=True, **kwargs):\n check_greater_than('cone R^2', r2, 0.0)\n self.cone = openmc.YCone(x0, y0, z0, r2, **kwargs)\n self.plane = openmc.YPlane(y0)\n self.up = up\n\n __neg__ = XConeOneSided.__neg__\n __pos__ = XConeOneSided.__pos__\n\n\nclass ZConeOneSided(CompositeSurface):\n \"\"\"One-sided cone parallel the z-axis\n\n A one-sided cone is composed of a normal cone surface and an \"ambiguity\"\n surface that eliminates the ambiguity as to which region of space is\n included. This class acts as a proper surface, meaning that unary `+` and\n `-` operators applied to it will produce a half-space. The negative side is\n defined to be the region inside of the cone.\n\n .. versionadded:: 0.12\n\n Parameters\n ----------\n x0 : float, optional\n x-coordinate of the apex. Defaults to 0.\n y0 : float, optional\n y-coordinate of the apex. Defaults to 0.\n z0 : float, optional\n z-coordinate of the apex. Defaults to 0.\n r2 : float, optional\n Parameter related to the aperature. 
Defaults to 1.\n up : bool\n Whether to select the side of the cone that extends to infinity in the\n positive direction of the coordinate axis (the positive half-space of\n the ambiguity plane)\n **kwargs\n Keyword arguments passed to underlying plane classes\n\n Attributes\n ----------\n cone : openmc.ZCone\n Regular two-sided cone\n plane : openmc.ZPlane\n Ambiguity surface\n up : bool\n Whether to select the side of the cone that extends to infinity in the\n positive direction of the coordinate axis (the positive half-space of\n the ambiguity plane)\n\n \"\"\"\n _surface_names = ('cone', 'plane')\n\n def __init__(self, x0=0., y0=0., z0=0., r2=1., up=True, **kwargs):\n check_greater_than('cone R^2', r2, 0.0)\n self.cone = openmc.ZCone(x0, y0, z0, r2, **kwargs)\n self.plane = openmc.ZPlane(z0)\n self.up = up\n\n __neg__ = XConeOneSided.__neg__\n __pos__ = XConeOneSided.__pos__\n", "path": "openmc/model/surface_composite.py"}], "after_files": [{"content": "from abc import ABC, abstractmethod\nfrom copy import copy\n\nimport openmc\nfrom openmc.checkvalue import check_greater_than, check_value\n\n\nclass CompositeSurface(ABC):\n \"\"\"Multiple primitive surfaces combined into a composite surface\"\"\"\n\n def translate(self, vector, inplace=False):\n surf = self if inplace else copy(self)\n for name in self._surface_names:\n s = getattr(surf, name)\n setattr(surf, name, s.translate(vector, inplace))\n return surf\n\n def rotate(self, rotation, pivot=(0., 0., 0.), order='xyz', inplace=False):\n surf = copy(self)\n for name in self._surface_names:\n s = getattr(surf, name)\n setattr(surf, name, s.rotate(rotation, pivot, order, inplace))\n return surf\n\n @property\n def boundary_type(self):\n return getattr(self, self._surface_names[0]).boundary_type\n\n @boundary_type.setter\n def boundary_type(self, boundary_type):\n # Set boundary type on underlying surfaces, but not for ambiguity plane\n # on one-sided cones\n for name in self._surface_names:\n if name != 'plane':\n getattr(self, name).boundary_type = boundary_type\n\n def __repr__(self):\n return \"<{} at 0x{:x}>\".format(type(self).__name__, id(self))\n\n @property\n @abstractmethod\n def _surface_names(self):\n \"\"\"Iterable of attribute names corresponding to underlying surfaces.\"\"\"\n\n @abstractmethod\n def __pos__(self):\n \"\"\"Return the positive half-space of the composite surface.\"\"\"\n\n @abstractmethod\n def __neg__(self):\n \"\"\"Return the negative half-space of the composite surface.\"\"\"\n\n\nclass RightCircularCylinder(CompositeSurface):\n \"\"\"Right circular cylinder composite surface\n\n A right circular cylinder is composed of a cylinder and two planar surface\n perpendicular to the axis of the cylinder. This class acts as a proper\n surface, meaning that unary `+` and `-` operators applied to it will produce\n a half-space. The negative side is defined to be the region inside of the\n right circular cylinder.\n\n .. 
versionadded:: 0.12\n\n Parameters\n ----------\n center_base : iterable of float\n Cartesian coordinate of the center of the base of the cylinder\n height : float\n Height of the cylinder\n radius : float\n Radius of the cylinder\n axis : {'x', 'y', 'z'}\n Axis of the cylinder\n **kwargs\n Keyword arguments passed to underlying cylinder and plane classes\n\n Attributes\n ----------\n cyl : openmc.Cylinder\n Underlying cylinder surface\n bottom : openmc.Plane\n Bottom planar surface of the cylinder\n top : openmc.Plane\n Top planar surface of the cylinder\n\n \"\"\"\n _surface_names = ('cyl', 'bottom', 'top')\n\n def __init__(self, center_base, height, radius, axis='z', **kwargs):\n cx, cy, cz = center_base\n check_greater_than('cylinder height', height, 0.0)\n check_greater_than('cylinder radius', radius, 0.0)\n check_value('cylinder axis', axis, ('x', 'y', 'z'))\n if axis == 'x':\n self.cyl = openmc.XCylinder(y0=cy, z0=cz, r=radius, **kwargs)\n self.bottom = openmc.XPlane(x0=cx, **kwargs)\n self.top = openmc.XPlane(x0=cx + height, **kwargs)\n elif axis == 'y':\n self.cyl = openmc.YCylinder(x0=cx, z0=cz, r=radius, **kwargs)\n self.bottom = openmc.YPlane(y0=cy, **kwargs)\n self.top = openmc.YPlane(y0=cy + height, **kwargs)\n elif axis == 'z':\n self.cyl = openmc.ZCylinder(x0=cx, y0=cy, r=radius, **kwargs)\n self.bottom = openmc.ZPlane(z0=cz, **kwargs)\n self.top = openmc.ZPlane(z0=cz + height, **kwargs)\n\n def __neg__(self):\n return -self.cyl & +self.bottom & -self.top\n\n def __pos__(self):\n return +self.cyl | -self.bottom | +self.top\n\n\nclass RectangularParallelepiped(CompositeSurface):\n \"\"\"Rectangular parallelpiped composite surface\n\n A rectangular parallelpiped is composed of six planar surfaces. This class\n acts as a proper surface, meaning that unary `+` and `-` operators applied\n to it will produce a half-space. The negative side is defined to be the\n region inside of the rectangular parallelpiped.\n\n .. 
versionadded:: 0.12\n\n Parameters\n ----------\n xmin, xmax : float\n Minimum and maximum x coordinates of the parallelepiped\n ymin, ymax : float\n Minimum and maximum y coordinates of the parallelepiped\n zmin, zmax : float\n Minimum and maximum z coordinates of the parallelepiped\n **kwargs\n Keyword arguments passed to underlying plane classes\n\n Attributes\n ----------\n xmin, xmax : openmc.XPlane\n Sides of the parallelepiped\n ymin, ymax : openmc.YPlane\n Sides of the parallelepiped\n zmin, zmax : openmc.ZPlane\n Sides of the parallelepiped\n\n \"\"\"\n _surface_names = ('xmin', 'xmax', 'ymin', 'ymax', 'zmin', 'zmax')\n\n def __init__(self, xmin, xmax, ymin, ymax, zmin, zmax, **kwargs):\n if xmin >= xmax:\n raise ValueError('xmin must be less than xmax')\n if ymin >= ymax:\n raise ValueError('ymin must be less than ymax')\n if zmin >= zmax:\n raise ValueError('zmin must be less than zmax')\n self.xmin = openmc.XPlane(x0=xmin, **kwargs)\n self.xmax = openmc.XPlane(x0=xmax, **kwargs)\n self.ymin = openmc.YPlane(y0=ymin, **kwargs)\n self.ymax = openmc.YPlane(y0=ymax, **kwargs)\n self.zmin = openmc.ZPlane(z0=zmin, **kwargs)\n self.zmax = openmc.ZPlane(z0=zmax, **kwargs)\n\n def __neg__(self):\n return +self.xmin & -self.xmax & +self.ymin & -self.ymax & +self.zmin & -self.zmax\n\n def __pos__(self):\n return -self.xmin | +self.xmax | -self.ymin | +self.ymax | -self.zmin | +self.zmax\n\n\nclass XConeOneSided(CompositeSurface):\n \"\"\"One-sided cone parallel the x-axis\n\n A one-sided cone is composed of a normal cone surface and an \"ambiguity\"\n surface that eliminates the ambiguity as to which region of space is\n included. This class acts as a proper surface, meaning that unary `+` and\n `-` operators applied to it will produce a half-space. The negative side is\n defined to be the region inside of the cone.\n\n .. versionadded:: 0.12\n\n Parameters\n ----------\n x0 : float, optional\n x-coordinate of the apex. Defaults to 0.\n y0 : float, optional\n y-coordinate of the apex. Defaults to 0.\n z0 : float, optional\n z-coordinate of the apex. Defaults to 0.\n r2 : float, optional\n Parameter related to the aperature. Defaults to 1.\n up : bool\n Whether to select the side of the cone that extends to infinity in the\n positive direction of the coordinate axis (the positive half-space of\n the ambiguity plane)\n **kwargs\n Keyword arguments passed to underlying plane classes\n\n Attributes\n ----------\n cone : openmc.XCone\n Regular two-sided cone\n plane : openmc.XPlane\n Ambiguity surface\n up : bool\n Whether to select the side of the cone that extends to infinity in the\n positive direction of the coordinate axis (the positive half-space of\n the ambiguity plane)\n\n \"\"\"\n _surface_names = ('cone', 'plane')\n\n def __init__(self, x0=0., y0=0., z0=0., r2=1., up=True, **kwargs):\n check_greater_than('cone R^2', r2, 0.0)\n self.cone = openmc.XCone(x0, y0, z0, r2, **kwargs)\n self.plane = openmc.XPlane(x0)\n self.up = up\n\n def __neg__(self):\n return -self.cone & (+self.plane if self.up else -self.plane)\n\n def __pos__(self):\n if self.up:\n return (+self.cone & +self.plane) | -self.plane\n else:\n return (+self.cone & -self.plane) | +self.plane\n\n\nclass YConeOneSided(CompositeSurface):\n \"\"\"One-sided cone parallel the y-axis\n\n A one-sided cone is composed of a normal cone surface and an \"ambiguity\"\n surface that eliminates the ambiguity as to which region of space is\n included. 
This class acts as a proper surface, meaning that unary `+` and\n `-` operators applied to it will produce a half-space. The negative side is\n defined to be the region inside of the cone.\n\n .. versionadded:: 0.12\n\n Parameters\n ----------\n x0 : float, optional\n x-coordinate of the apex. Defaults to 0.\n y0 : float, optional\n y-coordinate of the apex. Defaults to 0.\n z0 : float, optional\n z-coordinate of the apex. Defaults to 0.\n r2 : float, optional\n Parameter related to the aperature. Defaults to 1.\n up : bool\n Whether to select the side of the cone that extends to infinity in the\n positive direction of the coordinate axis (the positive half-space of\n the ambiguity plane)\n **kwargs\n Keyword arguments passed to underlying plane classes\n\n Attributes\n ----------\n cone : openmc.YCone\n Regular two-sided cone\n plane : openmc.YPlane\n Ambiguity surface\n up : bool\n Whether to select the side of the cone that extends to infinity in the\n positive direction of the coordinate axis (the positive half-space of\n the ambiguity plane)\n\n \"\"\"\n _surface_names = ('cone', 'plane')\n\n def __init__(self, x0=0., y0=0., z0=0., r2=1., up=True, **kwargs):\n check_greater_than('cone R^2', r2, 0.0)\n self.cone = openmc.YCone(x0, y0, z0, r2, **kwargs)\n self.plane = openmc.YPlane(y0)\n self.up = up\n\n __neg__ = XConeOneSided.__neg__\n __pos__ = XConeOneSided.__pos__\n\n\nclass ZConeOneSided(CompositeSurface):\n \"\"\"One-sided cone parallel the z-axis\n\n A one-sided cone is composed of a normal cone surface and an \"ambiguity\"\n surface that eliminates the ambiguity as to which region of space is\n included. This class acts as a proper surface, meaning that unary `+` and\n `-` operators applied to it will produce a half-space. The negative side is\n defined to be the region inside of the cone.\n\n .. versionadded:: 0.12\n\n Parameters\n ----------\n x0 : float, optional\n x-coordinate of the apex. Defaults to 0.\n y0 : float, optional\n y-coordinate of the apex. Defaults to 0.\n z0 : float, optional\n z-coordinate of the apex. Defaults to 0.\n r2 : float, optional\n Parameter related to the aperature. Defaults to 1.\n up : bool\n Whether to select the side of the cone that extends to infinity in the\n positive direction of the coordinate axis (the positive half-space of\n the ambiguity plane)\n **kwargs\n Keyword arguments passed to underlying plane classes\n\n Attributes\n ----------\n cone : openmc.ZCone\n Regular two-sided cone\n plane : openmc.ZPlane\n Ambiguity surface\n up : bool\n Whether to select the side of the cone that extends to infinity in the\n positive direction of the coordinate axis (the positive half-space of\n the ambiguity plane)\n\n \"\"\"\n _surface_names = ('cone', 'plane')\n\n def __init__(self, x0=0., y0=0., z0=0., r2=1., up=True, **kwargs):\n check_greater_than('cone R^2', r2, 0.0)\n self.cone = openmc.ZCone(x0, y0, z0, r2, **kwargs)\n self.plane = openmc.ZPlane(z0)\n self.up = up\n\n __neg__ = XConeOneSided.__neg__\n __pos__ = XConeOneSided.__pos__\n", "path": "openmc/model/surface_composite.py"}]} |
gh_patches_debug_1154 | rasdani/github-patches | git_diff | jupyterhub__jupyterhub-3837 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Maybe a bug about module checking
### Bug description
If I use conda to install only jupyterhub and python (conda install -c conda-forge python=3.9 jupyterhub), the following message is shown when someone tries to log in:
```
Failed to set groups [Errno 1] Operation not permitted
Traceback (most recent call last):
File "/home/someone/bin/anaconda3/envs/py39jupyterhub222/bin/jupyterhub-singleuser", line 7, in <module>
from jupyterhub.singleuser import main
File "/home/someone/bin/anaconda3/envs/py39jupyterhub222/lib/python3.9/site-packages/jupyterhub/singleuser/__init__.py", line 5, in <module>
from .app import main
File "/home/someone/bin/anaconda3/envs/py39jupyterhub222/lib/python3.9/site-packages/jupyterhub/singleuser/app.py", line 38, in <module>
raise _import_error
TypeError: exceptions must derive from BaseException
```
I think the problem is in lines 32 to 36 of jupyterhub/singleuser/app.py:
```
except ImportError as e:
continue
if _import_error is None:
_import_error = e
else:
break
```
I replaced that with:
```
except ImportError as e:
if _import_error is None:
_import_error = e
else:
break
continue
```
Then a more helpful message was shown:
```
Failed to set groups [Errno 1] Operation not permitted
Traceback (most recent call last):
File "/home/someone/bin/anaconda3/envs/py39jupyterhub222/bin/jupyterhub-singleuser", line 7, in <module>
from jupyterhub.singleuser import main
File "/home/someone/bin/anaconda3/envs/py39jupyterhub222/lib/python3.9/site-packages/jupyterhub/singleuser/__init__.py", line 5, in <module>
from .app import main
File "/home/someone/bin/anaconda3/envs/py39jupyterhub222/lib/python3.9/site-packages/jupyterhub/singleuser/app.py", line 38, in <module>
raise _import_error
File "/home/someone/bin/anaconda3/envs/py39jupyterhub222/lib/python3.9/site-packages/jupyterhub/singleuser/app.py", line 30, in <module>
App = import_item(JUPYTERHUB_SINGLEUSER_APP)
File "/home/someone/bin/anaconda3/envs/py39jupyterhub222/lib/python3.9/site-packages/traitlets/utils/importstring.py", line 30, in import_item
module = __import__(package, fromlist=[obj])
ModuleNotFoundError: No module named 'jupyter_server'
```
The above message let me know that I have to install jupyter_server.
This issue can be closed anytime.
Any suggestion is welcome.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `jupyterhub/singleuser/app.py`
Content:
```
1 """Make a single-user app based on the environment:
2
3 - $JUPYTERHUB_SINGLEUSER_APP, the base Application class, to be wrapped in JupyterHub authentication.
4 default: jupyter_server.serverapp.ServerApp
5
6 .. versionchanged:: 2.0
7
8 Default app changed to launch `jupyter labhub`.
9 Use JUPYTERHUB_SINGLEUSER_APP=notebook.notebookapp.NotebookApp for the legacy 'classic' notebook server.
10 """
11 import os
12
13 from traitlets import import_item
14
15 from .mixins import make_singleuser_app
16
17 JUPYTERHUB_SINGLEUSER_APP = os.environ.get("JUPYTERHUB_SINGLEUSER_APP")
18
19
20 if JUPYTERHUB_SINGLEUSER_APP:
21 App = import_item(JUPYTERHUB_SINGLEUSER_APP)
22 else:
23 App = None
24 _import_error = None
25 for JUPYTERHUB_SINGLEUSER_APP in (
26 "jupyter_server.serverapp.ServerApp",
27 "notebook.notebookapp.NotebookApp",
28 ):
29 try:
30 App = import_item(JUPYTERHUB_SINGLEUSER_APP)
31 except ImportError as e:
32 continue
33 if _import_error is None:
34 _import_error = e
35 else:
36 break
37 if App is None:
38 raise _import_error
39
40
41 SingleUserNotebookApp = make_singleuser_app(App)
42
43
44 def main():
45 """Launch a jupyterhub single-user server"""
46 if not os.environ.get("JUPYTERHUB_SINGLEUSER_APP"):
47 # app not specified, launch jupyter-labhub by default,
48 # if jupyterlab is recent enough (3.1).
49 # This is a minimally extended ServerApp that does:
50 # 1. ensure lab extension is enabled, and
51 # 2. set default URL to `/lab`
52 import re
53
54 _version_pat = re.compile(r"(\d+)\.(\d+)")
55 try:
56 import jupyterlab
57 from jupyterlab.labhubapp import SingleUserLabApp
58
59 m = _version_pat.match(jupyterlab.__version__)
60 except Exception:
61 m = None
62
63 if m is not None:
64 version_tuple = tuple(int(v) for v in m.groups())
65 if version_tuple >= (3, 1):
66 return SingleUserLabApp.launch_instance()
67
68 return SingleUserNotebookApp.launch_instance()
69
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/jupyterhub/singleuser/app.py b/jupyterhub/singleuser/app.py
--- a/jupyterhub/singleuser/app.py
+++ b/jupyterhub/singleuser/app.py
@@ -29,9 +29,9 @@
try:
App = import_item(JUPYTERHUB_SINGLEUSER_APP)
except ImportError as e:
- continue
if _import_error is None:
_import_error = e
+ continue
else:
break
if App is None:
| {"golden_diff": "diff --git a/jupyterhub/singleuser/app.py b/jupyterhub/singleuser/app.py\n--- a/jupyterhub/singleuser/app.py\n+++ b/jupyterhub/singleuser/app.py\n@@ -29,9 +29,9 @@\n try:\n App = import_item(JUPYTERHUB_SINGLEUSER_APP)\n except ImportError as e:\n- continue\n if _import_error is None:\n _import_error = e\n+ continue\n else:\n break\n if App is None:\n", "issue": "Maybe a bug about module checking\n### Bug description\r\n<!-- Use this section to clearly and concisely describe the bug. -->\r\nIf I use conda to install only jupyterhub and python (conda install -c conda-forge python=3.9 jupyterhub), the following message showed as someone try to login:\r\n\r\n```\r\nFailed to set groups [Errno 1] Operation not permitted\r\nTraceback (most recent call last):\r\n File \"/home/someone/bin/anaconda3/envs/py39jupyterhub222/bin/jupyterhub-singleuser\", line 7, in <module>\r\n from jupyterhub.singleuser import main\r\n File \"/home/someone/bin/anaconda3/envs/py39jupyterhub222/lib/python3.9/site-packages/jupyterhub/singleuser/__init__.py\", line 5, in <module>\r\n from .app import main\r\n File \"/home/someone/bin/anaconda3/envs/py39jupyterhub222/lib/python3.9/site-packages/jupyterhub/singleuser/app.py\", line 38, in <module>\r\n raise _import_error\r\nTypeError: exceptions must derive from BaseException\r\n```\r\nI think the problem is the lines from 32 to 36 in jupyterhub/singleuser/app.py\r\n```\r\n except ImportError as e:\r\n continue\r\n if _import_error is None:\r\n _import_error = e\r\n else:\r\n break\r\n```\r\n\r\nI changed that with:\r\n```\r\n except ImportError as e:\r\n if _import_error is None:\r\n _import_error = e\r\n else:\r\n break\r\n continue\r\n```\r\nthen the better message showed:\r\n```\r\nFailed to set groups [Errno 1] Operation not permitted\r\nTraceback (most recent call last):\r\n File \"/home/someone/bin/anaconda3/envs/py39jupyterhub222/bin/jupyterhub-singleuser\", line 7, in <module>\r\n from jupyterhub.singleuser import main\r\n File \"/home/someone/bin/anaconda3/envs/py39jupyterhub222/lib/python3.9/site-packages/jupyterhub/singleuser/__init__.py\", line 5, in <module>\r\n from .app import main\r\n File \"/home/someone/bin/anaconda3/envs/py39jupyterhub222/lib/python3.9/site-packages/jupyterhub/singleuser/app.py\", line 38, in <module>\r\n raise _import_error\r\n File \"/home/someone/bin/anaconda3/envs/py39jupyterhub222/lib/python3.9/site-packages/jupyterhub/singleuser/app.py\", line 30, in <module>\r\n App = import_item(JUPYTERHUB_SINGLEUSER_APP)\r\n File \"/home/someone/bin/anaconda3/envs/py39jupyterhub222/lib/python3.9/site-packages/traitlets/utils/importstring.py\", line 30, in import_item\r\n module = __import__(package, fromlist=[obj])\r\nModuleNotFoundError: No module named 'jupyter_server'\r\n```\r\nThe above message let me know that I have to install jupyter_server.\r\nThis issue can be closed anytime.\r\nAny suggestion is welcome.\r\n\n", "before_files": [{"content": "\"\"\"Make a single-user app based on the environment:\n\n- $JUPYTERHUB_SINGLEUSER_APP, the base Application class, to be wrapped in JupyterHub authentication.\n default: jupyter_server.serverapp.ServerApp\n\n.. 
versionchanged:: 2.0\n\n Default app changed to launch `jupyter labhub`.\n Use JUPYTERHUB_SINGLEUSER_APP=notebook.notebookapp.NotebookApp for the legacy 'classic' notebook server.\n\"\"\"\nimport os\n\nfrom traitlets import import_item\n\nfrom .mixins import make_singleuser_app\n\nJUPYTERHUB_SINGLEUSER_APP = os.environ.get(\"JUPYTERHUB_SINGLEUSER_APP\")\n\n\nif JUPYTERHUB_SINGLEUSER_APP:\n App = import_item(JUPYTERHUB_SINGLEUSER_APP)\nelse:\n App = None\n _import_error = None\n for JUPYTERHUB_SINGLEUSER_APP in (\n \"jupyter_server.serverapp.ServerApp\",\n \"notebook.notebookapp.NotebookApp\",\n ):\n try:\n App = import_item(JUPYTERHUB_SINGLEUSER_APP)\n except ImportError as e:\n continue\n if _import_error is None:\n _import_error = e\n else:\n break\n if App is None:\n raise _import_error\n\n\nSingleUserNotebookApp = make_singleuser_app(App)\n\n\ndef main():\n \"\"\"Launch a jupyterhub single-user server\"\"\"\n if not os.environ.get(\"JUPYTERHUB_SINGLEUSER_APP\"):\n # app not specified, launch jupyter-labhub by default,\n # if jupyterlab is recent enough (3.1).\n # This is a minimally extended ServerApp that does:\n # 1. ensure lab extension is enabled, and\n # 2. set default URL to `/lab`\n import re\n\n _version_pat = re.compile(r\"(\\d+)\\.(\\d+)\")\n try:\n import jupyterlab\n from jupyterlab.labhubapp import SingleUserLabApp\n\n m = _version_pat.match(jupyterlab.__version__)\n except Exception:\n m = None\n\n if m is not None:\n version_tuple = tuple(int(v) for v in m.groups())\n if version_tuple >= (3, 1):\n return SingleUserLabApp.launch_instance()\n\n return SingleUserNotebookApp.launch_instance()\n", "path": "jupyterhub/singleuser/app.py"}], "after_files": [{"content": "\"\"\"Make a single-user app based on the environment:\n\n- $JUPYTERHUB_SINGLEUSER_APP, the base Application class, to be wrapped in JupyterHub authentication.\n default: jupyter_server.serverapp.ServerApp\n\n.. versionchanged:: 2.0\n\n Default app changed to launch `jupyter labhub`.\n Use JUPYTERHUB_SINGLEUSER_APP=notebook.notebookapp.NotebookApp for the legacy 'classic' notebook server.\n\"\"\"\nimport os\n\nfrom traitlets import import_item\n\nfrom .mixins import make_singleuser_app\n\nJUPYTERHUB_SINGLEUSER_APP = os.environ.get(\"JUPYTERHUB_SINGLEUSER_APP\")\n\n\nif JUPYTERHUB_SINGLEUSER_APP:\n App = import_item(JUPYTERHUB_SINGLEUSER_APP)\nelse:\n App = None\n _import_error = None\n for JUPYTERHUB_SINGLEUSER_APP in (\n \"jupyter_server.serverapp.ServerApp\",\n \"notebook.notebookapp.NotebookApp\",\n ):\n try:\n App = import_item(JUPYTERHUB_SINGLEUSER_APP)\n except ImportError as e:\n if _import_error is None:\n _import_error = e\n continue\n else:\n break\n if App is None:\n raise _import_error\n\n\nSingleUserNotebookApp = make_singleuser_app(App)\n\n\ndef main():\n \"\"\"Launch a jupyterhub single-user server\"\"\"\n if not os.environ.get(\"JUPYTERHUB_SINGLEUSER_APP\"):\n # app not specified, launch jupyter-labhub by default,\n # if jupyterlab is recent enough (3.1).\n # This is a minimally extended ServerApp that does:\n # 1. ensure lab extension is enabled, and\n # 2. 
set default URL to `/lab`\n import re\n\n _version_pat = re.compile(r\"(\\d+)\\.(\\d+)\")\n try:\n import jupyterlab\n from jupyterlab.labhubapp import SingleUserLabApp\n\n m = _version_pat.match(jupyterlab.__version__)\n except Exception:\n m = None\n\n if m is not None:\n version_tuple = tuple(int(v) for v in m.groups())\n if version_tuple >= (3, 1):\n return SingleUserLabApp.launch_instance()\n\n return SingleUserNotebookApp.launch_instance()\n", "path": "jupyterhub/singleuser/app.py"}]} |
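The fix above turns on when the caught `ImportError` is remembered: it must be stored before `continue`, otherwise `raise _import_error` re-raises `None` and produces the reported TypeError. Below is a minimal, self-contained sketch of that fallback-import pattern; the candidate module names and the `import_first_available` helper are illustrative stand-ins, not JupyterHub's actual code.

```python
from importlib import import_module

CANDIDATE_MODULES = (
    "jupyter_server.serverapp",
    "notebook.notebookapp",
)


def import_first_available(candidates=CANDIDATE_MODULES):
    """Return the first importable module; otherwise re-raise the first ImportError."""
    first_error = None
    for name in candidates:
        try:
            return import_module(name)
        except ImportError as e:
            # Store the error *before* moving on -- skipping this step is the
            # bug in the issue, which later turns `raise None` into a TypeError.
            if first_error is None:
                first_error = e
            continue
    if first_error is not None:
        raise first_error
    raise ImportError("no candidate modules were given")


if __name__ == "__main__":
    try:
        module = import_first_available()
        print("imported", module.__name__)
    except ImportError as exc:
        print("no candidate importable:", exc)
```

Keeping the first error (rather than the last) preserves the most informative message, since later candidates usually fail for the same underlying reason.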
gh_patches_debug_1155 | rasdani/github-patches | git_diff | espnet__espnet-3073 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Question on chunk shift in ChunkIterFactory.build_iter()
In the code, the shift width is calculated as a ratio of the utterance length as follows:
S = int(L * self.chunk_shift_ratio)
Shouldn't the shift width be calculated as a ratio of the chunk length instead, like below?
S = int(W * self.chunk_shift_ratio)
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `espnet2/iterators/chunk_iter_factory.py`
Content:
```
1 import logging
2 from typing import Any
3 from typing import Dict
4 from typing import Iterator
5 from typing import List
6 from typing import Sequence
7 from typing import Tuple
8 from typing import Union
9
10 import numpy as np
11 import torch
12 from typeguard import check_argument_types
13
14 from espnet2.iterators.abs_iter_factory import AbsIterFactory
15 from espnet2.iterators.sequence_iter_factory import SequenceIterFactory
16 from espnet2.samplers.abs_sampler import AbsSampler
17
18
19 class ChunkIterFactory(AbsIterFactory):
20 """Creates chunks from a sequence
21
22 Examples:
23 >>> batches = [["id1"], ["id2"], ...]
24 >>> batch_size = 128
25 >>> chunk_length = 1000
26 >>> iter_factory = ChunkIterFactory(dataset, batches, batch_size, chunk_length)
27 >>> it = iter_factory.build_iter(epoch)
28 >>> for ids, batch in it:
29 ... ...
30
31 - The number of mini-batches are varied in each epochs and
32 we can't get the number in advance
33 because IterFactory doesn't be given to the length information.
34 - Since the first reason, "num_iters_per_epoch" can't be implemented
35 for this iterator. Instead of it, "num_samples_per_epoch" is implemented.
36
37 """
38
39 def __init__(
40 self,
41 dataset,
42 batch_size: int,
43 batches: Union[AbsSampler, Sequence[Sequence[Any]]],
44 chunk_length: Union[int, str],
45 chunk_shift_ratio: float = 0.5,
46 num_cache_chunks: int = 1024,
47 num_samples_per_epoch: int = None,
48 seed: int = 0,
49 shuffle: bool = False,
50 num_workers: int = 0,
51 collate_fn=None,
52 pin_memory: bool = False,
53 ):
54 assert check_argument_types()
55 assert all(len(x) == 1 for x in batches), "batch-size must be 1"
56
57 self.per_sample_iter_factory = SequenceIterFactory(
58 dataset=dataset,
59 batches=batches,
60 num_iters_per_epoch=num_samples_per_epoch,
61 seed=seed,
62 shuffle=shuffle,
63 num_workers=num_workers,
64 collate_fn=collate_fn,
65 pin_memory=pin_memory,
66 )
67
68 self.num_cache_chunks = max(num_cache_chunks, batch_size)
69 if isinstance(chunk_length, str):
70 if len(chunk_length) == 0:
71 raise ValueError("e.g. 5,8 or 3-5: but got empty string")
72
73 self.chunk_lengths = []
74 for x in chunk_length.split(","):
75 try:
76 sps = list(map(int, x.split("-")))
77 except ValueError:
78 raise ValueError(f"e.g. 5,8 or 3-5: but got {chunk_length}")
79
80 if len(sps) > 2:
81 raise ValueError(f"e.g. 5,8 or 3-5: but got {chunk_length}")
82 elif len(sps) == 2:
83 # Append all numbers between the range into the candidates
84 self.chunk_lengths += list(range(sps[0], sps[1] + 1))
85 else:
86 self.chunk_lengths += [sps[0]]
87 else:
88 # Single candidates: Fixed chunk length
89 self.chunk_lengths = [chunk_length]
90
91 self.chunk_shift_ratio = chunk_shift_ratio
92 self.batch_size = batch_size
93 self.seed = seed
94 self.shuffle = shuffle
95
96 def build_iter(
97 self,
98 epoch: int,
99 shuffle: bool = None,
100 ) -> Iterator[Tuple[List[str], Dict[str, torch.Tensor]]]:
101 per_sample_loader = self.per_sample_iter_factory.build_iter(epoch, shuffle)
102
103 if shuffle is None:
104 shuffle = self.shuffle
105 state = np.random.RandomState(epoch + self.seed)
106
107 # NOTE(kamo):
108 # This iterator supports multiple chunk lengths and
109 # keep chunks for each lenghts here until collecting specified numbers
110 cache_chunks_dict = {}
111 cache_id_list_dict = {}
112 for ids, batch in per_sample_loader:
113 # Must be per-sample-loader
114 assert len(ids) == 1, f"Must be per-sample-loader: {len(ids)}"
115 assert all(len(x) == 1 for x in batch.values())
116
117 # Get keys of sequence data
118 sequence_keys = []
119 for key in batch:
120 if key + "_lengths" in batch:
121 sequence_keys.append(key)
122 # Remove lengths data and get the first sample
123 batch = {k: v[0] for k, v in batch.items() if not k.endswith("_lengths")}
124 id_ = ids[0]
125
126 for key in sequence_keys:
127 if len(batch[key]) != len(batch[sequence_keys[0]]):
128 raise RuntimeError(
129 f"All sequences must has same length: "
130 f"{len(batch[key])} != {len(batch[sequence_keys[0]])}"
131 )
132
133 L = len(batch[sequence_keys[0]])
134 # Select chunk length
135 chunk_lengths = [lg for lg in self.chunk_lengths if lg < L]
136 if len(chunk_lengths) == 0:
137 logging.warning(
138 f"The length of '{id_}' is {L}, but it is shorter than "
139 f"any candidates of chunk-length: {self.chunk_lengths}"
140 )
141 continue
142
143 W = int(state.choice(chunk_lengths, 1))
144 cache_id_list = cache_id_list_dict.setdefault(W, [])
145 cache_chunks = cache_chunks_dict.setdefault(W, {})
146
147 # Shift width to the next chunk
148 S = int(L * self.chunk_shift_ratio)
149 # Number of chunks
150 N = (L - W) // S + 1
151 if shuffle:
152 Z = state.randint(0, (L - W) % S + 1)
153 else:
154 Z = 0
155
156 # Split a sequence into chunks.
157 # Note that the marginal frames divided by chunk length are discarded
158 for k, v in batch.items():
159 if k not in cache_chunks:
160 cache_chunks[k] = []
161 if k in sequence_keys:
162 # Shift chunks with overlapped length for data augmentation
163 cache_chunks[k] += [v[Z + i * S : Z + i * S + W] for i in range(N)]
164 else:
165 # If not sequence, use whole data instead of chunk
166 cache_chunks[k] += [v for _ in range(N)]
167 cache_id_list += [id_ for _ in range(N)]
168
169 if len(cache_id_list) > self.num_cache_chunks:
170 cache_id_list, cache_chunks = yield from self._generate_mini_batches(
171 cache_id_list,
172 cache_chunks,
173 shuffle,
174 state,
175 )
176
177 cache_id_list_dict[W] = cache_id_list
178 cache_chunks_dict[W] = cache_chunks
179
180 else:
181 for W in cache_id_list_dict:
182 cache_id_list = cache_id_list_dict.setdefault(W, [])
183 cache_chunks = cache_chunks_dict.setdefault(W, {})
184
185 yield from self._generate_mini_batches(
186 cache_id_list,
187 cache_chunks,
188 shuffle,
189 state,
190 )
191
192 def _generate_mini_batches(
193 self,
194 id_list: List[str],
195 batches: Dict[str, List[torch.Tensor]],
196 shuffle: bool,
197 state: np.random.RandomState,
198 ):
199 if shuffle:
200 indices = np.arange(0, len(id_list))
201 state.shuffle(indices)
202 batches = {k: [v[i] for i in indices] for k, v in batches.items()}
203 id_list = [id_list[i] for i in indices]
204
205 bs = self.batch_size
206 while len(id_list) >= bs:
207 # Make mini-batch and yield
208 yield (
209 id_list[:bs],
210 {k: torch.stack(v[:bs], 0) for k, v in batches.items()},
211 )
212 id_list = id_list[bs:]
213 batches = {k: v[bs:] for k, v in batches.items()}
214
215 return id_list, batches
216
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/espnet2/iterators/chunk_iter_factory.py b/espnet2/iterators/chunk_iter_factory.py
--- a/espnet2/iterators/chunk_iter_factory.py
+++ b/espnet2/iterators/chunk_iter_factory.py
@@ -145,7 +145,7 @@
cache_chunks = cache_chunks_dict.setdefault(W, {})
# Shift width to the next chunk
- S = int(L * self.chunk_shift_ratio)
+ S = int(W * self.chunk_shift_ratio)
# Number of chunks
N = (L - W) // S + 1
if shuffle:
| {"golden_diff": "diff --git a/espnet2/iterators/chunk_iter_factory.py b/espnet2/iterators/chunk_iter_factory.py\n--- a/espnet2/iterators/chunk_iter_factory.py\n+++ b/espnet2/iterators/chunk_iter_factory.py\n@@ -145,7 +145,7 @@\n cache_chunks = cache_chunks_dict.setdefault(W, {})\n \n # Shift width to the next chunk\n- S = int(L * self.chunk_shift_ratio)\n+ S = int(W * self.chunk_shift_ratio)\n # Number of chunks\n N = (L - W) // S + 1\n if shuffle:\n", "issue": "Question on chunk shift in ChunkIterFactory.build_iter()\nIn the code, shift width is calculated as a ratio of utterance length as follows:\r\nS = int(L * self.chunk_shift_ratio)\r\n\r\nShouldn't shift width be calculated as a ratio of chunk length like below ?\r\nS = int(W * self.chunk_shift_ratio)\r\n\n", "before_files": [{"content": "import logging\nfrom typing import Any\nfrom typing import Dict\nfrom typing import Iterator\nfrom typing import List\nfrom typing import Sequence\nfrom typing import Tuple\nfrom typing import Union\n\nimport numpy as np\nimport torch\nfrom typeguard import check_argument_types\n\nfrom espnet2.iterators.abs_iter_factory import AbsIterFactory\nfrom espnet2.iterators.sequence_iter_factory import SequenceIterFactory\nfrom espnet2.samplers.abs_sampler import AbsSampler\n\n\nclass ChunkIterFactory(AbsIterFactory):\n \"\"\"Creates chunks from a sequence\n\n Examples:\n >>> batches = [[\"id1\"], [\"id2\"], ...]\n >>> batch_size = 128\n >>> chunk_length = 1000\n >>> iter_factory = ChunkIterFactory(dataset, batches, batch_size, chunk_length)\n >>> it = iter_factory.build_iter(epoch)\n >>> for ids, batch in it:\n ... ...\n\n - The number of mini-batches are varied in each epochs and\n we can't get the number in advance\n because IterFactory doesn't be given to the length information.\n - Since the first reason, \"num_iters_per_epoch\" can't be implemented\n for this iterator. Instead of it, \"num_samples_per_epoch\" is implemented.\n\n \"\"\"\n\n def __init__(\n self,\n dataset,\n batch_size: int,\n batches: Union[AbsSampler, Sequence[Sequence[Any]]],\n chunk_length: Union[int, str],\n chunk_shift_ratio: float = 0.5,\n num_cache_chunks: int = 1024,\n num_samples_per_epoch: int = None,\n seed: int = 0,\n shuffle: bool = False,\n num_workers: int = 0,\n collate_fn=None,\n pin_memory: bool = False,\n ):\n assert check_argument_types()\n assert all(len(x) == 1 for x in batches), \"batch-size must be 1\"\n\n self.per_sample_iter_factory = SequenceIterFactory(\n dataset=dataset,\n batches=batches,\n num_iters_per_epoch=num_samples_per_epoch,\n seed=seed,\n shuffle=shuffle,\n num_workers=num_workers,\n collate_fn=collate_fn,\n pin_memory=pin_memory,\n )\n\n self.num_cache_chunks = max(num_cache_chunks, batch_size)\n if isinstance(chunk_length, str):\n if len(chunk_length) == 0:\n raise ValueError(\"e.g. 5,8 or 3-5: but got empty string\")\n\n self.chunk_lengths = []\n for x in chunk_length.split(\",\"):\n try:\n sps = list(map(int, x.split(\"-\")))\n except ValueError:\n raise ValueError(f\"e.g. 5,8 or 3-5: but got {chunk_length}\")\n\n if len(sps) > 2:\n raise ValueError(f\"e.g. 
5,8 or 3-5: but got {chunk_length}\")\n elif len(sps) == 2:\n # Append all numbers between the range into the candidates\n self.chunk_lengths += list(range(sps[0], sps[1] + 1))\n else:\n self.chunk_lengths += [sps[0]]\n else:\n # Single candidates: Fixed chunk length\n self.chunk_lengths = [chunk_length]\n\n self.chunk_shift_ratio = chunk_shift_ratio\n self.batch_size = batch_size\n self.seed = seed\n self.shuffle = shuffle\n\n def build_iter(\n self,\n epoch: int,\n shuffle: bool = None,\n ) -> Iterator[Tuple[List[str], Dict[str, torch.Tensor]]]:\n per_sample_loader = self.per_sample_iter_factory.build_iter(epoch, shuffle)\n\n if shuffle is None:\n shuffle = self.shuffle\n state = np.random.RandomState(epoch + self.seed)\n\n # NOTE(kamo):\n # This iterator supports multiple chunk lengths and\n # keep chunks for each lenghts here until collecting specified numbers\n cache_chunks_dict = {}\n cache_id_list_dict = {}\n for ids, batch in per_sample_loader:\n # Must be per-sample-loader\n assert len(ids) == 1, f\"Must be per-sample-loader: {len(ids)}\"\n assert all(len(x) == 1 for x in batch.values())\n\n # Get keys of sequence data\n sequence_keys = []\n for key in batch:\n if key + \"_lengths\" in batch:\n sequence_keys.append(key)\n # Remove lengths data and get the first sample\n batch = {k: v[0] for k, v in batch.items() if not k.endswith(\"_lengths\")}\n id_ = ids[0]\n\n for key in sequence_keys:\n if len(batch[key]) != len(batch[sequence_keys[0]]):\n raise RuntimeError(\n f\"All sequences must has same length: \"\n f\"{len(batch[key])} != {len(batch[sequence_keys[0]])}\"\n )\n\n L = len(batch[sequence_keys[0]])\n # Select chunk length\n chunk_lengths = [lg for lg in self.chunk_lengths if lg < L]\n if len(chunk_lengths) == 0:\n logging.warning(\n f\"The length of '{id_}' is {L}, but it is shorter than \"\n f\"any candidates of chunk-length: {self.chunk_lengths}\"\n )\n continue\n\n W = int(state.choice(chunk_lengths, 1))\n cache_id_list = cache_id_list_dict.setdefault(W, [])\n cache_chunks = cache_chunks_dict.setdefault(W, {})\n\n # Shift width to the next chunk\n S = int(L * self.chunk_shift_ratio)\n # Number of chunks\n N = (L - W) // S + 1\n if shuffle:\n Z = state.randint(0, (L - W) % S + 1)\n else:\n Z = 0\n\n # Split a sequence into chunks.\n # Note that the marginal frames divided by chunk length are discarded\n for k, v in batch.items():\n if k not in cache_chunks:\n cache_chunks[k] = []\n if k in sequence_keys:\n # Shift chunks with overlapped length for data augmentation\n cache_chunks[k] += [v[Z + i * S : Z + i * S + W] for i in range(N)]\n else:\n # If not sequence, use whole data instead of chunk\n cache_chunks[k] += [v for _ in range(N)]\n cache_id_list += [id_ for _ in range(N)]\n\n if len(cache_id_list) > self.num_cache_chunks:\n cache_id_list, cache_chunks = yield from self._generate_mini_batches(\n cache_id_list,\n cache_chunks,\n shuffle,\n state,\n )\n\n cache_id_list_dict[W] = cache_id_list\n cache_chunks_dict[W] = cache_chunks\n\n else:\n for W in cache_id_list_dict:\n cache_id_list = cache_id_list_dict.setdefault(W, [])\n cache_chunks = cache_chunks_dict.setdefault(W, {})\n\n yield from self._generate_mini_batches(\n cache_id_list,\n cache_chunks,\n shuffle,\n state,\n )\n\n def _generate_mini_batches(\n self,\n id_list: List[str],\n batches: Dict[str, List[torch.Tensor]],\n shuffle: bool,\n state: np.random.RandomState,\n ):\n if shuffle:\n indices = np.arange(0, len(id_list))\n state.shuffle(indices)\n batches = {k: [v[i] for i in indices] for k, v in 
batches.items()}\n id_list = [id_list[i] for i in indices]\n\n bs = self.batch_size\n while len(id_list) >= bs:\n # Make mini-batch and yield\n yield (\n id_list[:bs],\n {k: torch.stack(v[:bs], 0) for k, v in batches.items()},\n )\n id_list = id_list[bs:]\n batches = {k: v[bs:] for k, v in batches.items()}\n\n return id_list, batches\n", "path": "espnet2/iterators/chunk_iter_factory.py"}], "after_files": [{"content": "import logging\nfrom typing import Any\nfrom typing import Dict\nfrom typing import Iterator\nfrom typing import List\nfrom typing import Sequence\nfrom typing import Tuple\nfrom typing import Union\n\nimport numpy as np\nimport torch\nfrom typeguard import check_argument_types\n\nfrom espnet2.iterators.abs_iter_factory import AbsIterFactory\nfrom espnet2.iterators.sequence_iter_factory import SequenceIterFactory\nfrom espnet2.samplers.abs_sampler import AbsSampler\n\n\nclass ChunkIterFactory(AbsIterFactory):\n \"\"\"Creates chunks from a sequence\n\n Examples:\n >>> batches = [[\"id1\"], [\"id2\"], ...]\n >>> batch_size = 128\n >>> chunk_length = 1000\n >>> iter_factory = ChunkIterFactory(dataset, batches, batch_size, chunk_length)\n >>> it = iter_factory.build_iter(epoch)\n >>> for ids, batch in it:\n ... ...\n\n - The number of mini-batches are varied in each epochs and\n we can't get the number in advance\n because IterFactory doesn't be given to the length information.\n - Since the first reason, \"num_iters_per_epoch\" can't be implemented\n for this iterator. Instead of it, \"num_samples_per_epoch\" is implemented.\n\n \"\"\"\n\n def __init__(\n self,\n dataset,\n batch_size: int,\n batches: Union[AbsSampler, Sequence[Sequence[Any]]],\n chunk_length: Union[int, str],\n chunk_shift_ratio: float = 0.5,\n num_cache_chunks: int = 1024,\n num_samples_per_epoch: int = None,\n seed: int = 0,\n shuffle: bool = False,\n num_workers: int = 0,\n collate_fn=None,\n pin_memory: bool = False,\n ):\n assert check_argument_types()\n assert all(len(x) == 1 for x in batches), \"batch-size must be 1\"\n\n self.per_sample_iter_factory = SequenceIterFactory(\n dataset=dataset,\n batches=batches,\n num_iters_per_epoch=num_samples_per_epoch,\n seed=seed,\n shuffle=shuffle,\n num_workers=num_workers,\n collate_fn=collate_fn,\n pin_memory=pin_memory,\n )\n\n self.num_cache_chunks = max(num_cache_chunks, batch_size)\n if isinstance(chunk_length, str):\n if len(chunk_length) == 0:\n raise ValueError(\"e.g. 5,8 or 3-5: but got empty string\")\n\n self.chunk_lengths = []\n for x in chunk_length.split(\",\"):\n try:\n sps = list(map(int, x.split(\"-\")))\n except ValueError:\n raise ValueError(f\"e.g. 5,8 or 3-5: but got {chunk_length}\")\n\n if len(sps) > 2:\n raise ValueError(f\"e.g. 
5,8 or 3-5: but got {chunk_length}\")\n elif len(sps) == 2:\n # Append all numbers between the range into the candidates\n self.chunk_lengths += list(range(sps[0], sps[1] + 1))\n else:\n self.chunk_lengths += [sps[0]]\n else:\n # Single candidates: Fixed chunk length\n self.chunk_lengths = [chunk_length]\n\n self.chunk_shift_ratio = chunk_shift_ratio\n self.batch_size = batch_size\n self.seed = seed\n self.shuffle = shuffle\n\n def build_iter(\n self,\n epoch: int,\n shuffle: bool = None,\n ) -> Iterator[Tuple[List[str], Dict[str, torch.Tensor]]]:\n per_sample_loader = self.per_sample_iter_factory.build_iter(epoch, shuffle)\n\n if shuffle is None:\n shuffle = self.shuffle\n state = np.random.RandomState(epoch + self.seed)\n\n # NOTE(kamo):\n # This iterator supports multiple chunk lengths and\n # keep chunks for each lenghts here until collecting specified numbers\n cache_chunks_dict = {}\n cache_id_list_dict = {}\n for ids, batch in per_sample_loader:\n # Must be per-sample-loader\n assert len(ids) == 1, f\"Must be per-sample-loader: {len(ids)}\"\n assert all(len(x) == 1 for x in batch.values())\n\n # Get keys of sequence data\n sequence_keys = []\n for key in batch:\n if key + \"_lengths\" in batch:\n sequence_keys.append(key)\n # Remove lengths data and get the first sample\n batch = {k: v[0] for k, v in batch.items() if not k.endswith(\"_lengths\")}\n id_ = ids[0]\n\n for key in sequence_keys:\n if len(batch[key]) != len(batch[sequence_keys[0]]):\n raise RuntimeError(\n f\"All sequences must has same length: \"\n f\"{len(batch[key])} != {len(batch[sequence_keys[0]])}\"\n )\n\n L = len(batch[sequence_keys[0]])\n # Select chunk length\n chunk_lengths = [lg for lg in self.chunk_lengths if lg < L]\n if len(chunk_lengths) == 0:\n logging.warning(\n f\"The length of '{id_}' is {L}, but it is shorter than \"\n f\"any candidates of chunk-length: {self.chunk_lengths}\"\n )\n continue\n\n W = int(state.choice(chunk_lengths, 1))\n cache_id_list = cache_id_list_dict.setdefault(W, [])\n cache_chunks = cache_chunks_dict.setdefault(W, {})\n\n # Shift width to the next chunk\n S = int(W * self.chunk_shift_ratio)\n # Number of chunks\n N = (L - W) // S + 1\n if shuffle:\n Z = state.randint(0, (L - W) % S + 1)\n else:\n Z = 0\n\n # Split a sequence into chunks.\n # Note that the marginal frames divided by chunk length are discarded\n for k, v in batch.items():\n if k not in cache_chunks:\n cache_chunks[k] = []\n if k in sequence_keys:\n # Shift chunks with overlapped length for data augmentation\n cache_chunks[k] += [v[Z + i * S : Z + i * S + W] for i in range(N)]\n else:\n # If not sequence, use whole data instead of chunk\n cache_chunks[k] += [v for _ in range(N)]\n cache_id_list += [id_ for _ in range(N)]\n\n if len(cache_id_list) > self.num_cache_chunks:\n cache_id_list, cache_chunks = yield from self._generate_mini_batches(\n cache_id_list,\n cache_chunks,\n shuffle,\n state,\n )\n\n cache_id_list_dict[W] = cache_id_list\n cache_chunks_dict[W] = cache_chunks\n\n else:\n for W in cache_id_list_dict:\n cache_id_list = cache_id_list_dict.setdefault(W, [])\n cache_chunks = cache_chunks_dict.setdefault(W, {})\n\n yield from self._generate_mini_batches(\n cache_id_list,\n cache_chunks,\n shuffle,\n state,\n )\n\n def _generate_mini_batches(\n self,\n id_list: List[str],\n batches: Dict[str, List[torch.Tensor]],\n shuffle: bool,\n state: np.random.RandomState,\n ):\n if shuffle:\n indices = np.arange(0, len(id_list))\n state.shuffle(indices)\n batches = {k: [v[i] for i in indices] for k, v in 
batches.items()}\n id_list = [id_list[i] for i in indices]\n\n bs = self.batch_size\n while len(id_list) >= bs:\n # Make mini-batch and yield\n yield (\n id_list[:bs],\n {k: torch.stack(v[:bs], 0) for k, v in batches.items()},\n )\n id_list = id_list[bs:]\n batches = {k: v[bs:] for k, v in batches.items()}\n\n return id_list, batches\n", "path": "espnet2/iterators/chunk_iter_factory.py"}]} |
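The one-line change above swaps the utterance length `L` for the chunk width `W` when computing the shift `S`. The following stand-alone sketch (not espnet code, with made-up numbers) shows how the two choices affect the resulting chunk start positions.

```python
def chunk_starts(L, W, shift_ratio, shift_from_chunk=True):
    """Return chunk start indices for a sequence of length L and chunk width W."""
    S = int((W if shift_from_chunk else L) * shift_ratio)  # shift width
    N = (L - W) // S + 1                                    # number of chunks
    return [i * S for i in range(N)]


if __name__ == "__main__":
    L, W, ratio = 1000, 200, 0.5
    # Shift derived from the chunk width, as the issue proposes: chunks every 100 frames.
    print(chunk_starts(L, W, ratio, shift_from_chunk=True))
    # Shift derived from the utterance length, as in the original code: only two
    # chunks 500 frames apart, leaving large stretches of the utterance unused.
    print(chunk_starts(L, W, ratio, shift_from_chunk=False))
```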
gh_patches_debug_1156 | rasdani/github-patches | git_diff | getsentry__sentry-python-1852 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Empty DSN crashes Otel integration
### How do you use Sentry?
Sentry Saas (sentry.io)
### Version
1.12.0
### Steps to Reproduce
Do not give a DSN when initializing the SDK with the Otel integration.
You will get this:
```
recommendation-service | 2023-01-18 10:24:50,723 ERROR [grpc._server] [_server.py:454] [trace_id=0 span_id=0 resource.service.name=recommendationservice] - Exception calling application: Unsupported scheme ''
recommendation-service | Traceback (most recent call last):
recommendation-service | File "/usr/local/lib/python3.10/site-packages/grpc/_server.py", line 444, in _call_behavior
recommendation-service | response_or_iterator = behavior(argument, context)
recommendation-service | File "/usr/local/lib/python3.10/site-packages/opentelemetry/instrumentation/grpc/_server.py", line 282, in telemetry_interceptor
recommendation-service | with self._start_span(
recommendation-service | File "/usr/local/lib/python3.10/contextlib.py", line 135, in __enter__
recommendation-service | return next(self.gen)
recommendation-service | File "/usr/local/lib/python3.10/site-packages/opentelemetry/sdk/trace/__init__.py", line 1017, in start_as_current_span
recommendation-service | span = self.start_span(
recommendation-service | File "/usr/local/lib/python3.10/site-packages/opentelemetry/sdk/trace/__init__.py", line 1107, in start_span
recommendation-service | span.start(start_time=start_time, parent_context=context)
recommendation-service | File "/usr/local/lib/python3.10/site-packages/opentelemetry/sdk/trace/__init__.py", line 870, in start
recommendation-service | self._span_processor.on_start(self, parent_context=parent_context)
recommendation-service | File "/usr/local/lib/python3.10/site-packages/opentelemetry/sdk/trace/__init__.py", line 162, in on_start
recommendation-service | sp.on_start(span, parent_context=parent_context)
recommendation-service | File "/usr/local/lib/python3.10/site-packages/sentry_sdk/integrations/opentelemetry/span_processor.py", line 107, in on_start
recommendation-service | if self._is_sentry_span(hub, otel_span):
recommendation-service | File "/usr/local/lib/python3.10/site-packages/sentry_sdk/integrations/opentelemetry/span_processor.py", line 177, in _is_sentry_span
recommendation-service | dsn_url = hub.client and Dsn(hub.client.dsn or "").netloc
recommendation-service | File "/usr/local/lib/python3.10/site-packages/sentry_sdk/utils.py", line 200, in __init__
recommendation-service | raise BadDsn("Unsupported scheme %r" % parts.scheme)
recommendation-service | sentry_sdk.utils.BadDsn: Unsupported scheme ''
```
### Expected Result
The Sentry SDK should just emit a warning and do nothing, NOT crash everything.
### Actual Result
Crash
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `sentry_sdk/integrations/opentelemetry/span_processor.py`
Content:
```
1 from datetime import datetime
2
3 from opentelemetry.context import get_value # type: ignore
4 from opentelemetry.sdk.trace import SpanProcessor # type: ignore
5 from opentelemetry.semconv.trace import SpanAttributes # type: ignore
6 from opentelemetry.trace import ( # type: ignore
7 format_span_id,
8 format_trace_id,
9 get_current_span,
10 SpanContext,
11 Span as OTelSpan,
12 SpanKind,
13 )
14 from opentelemetry.trace.span import ( # type: ignore
15 INVALID_SPAN_ID,
16 INVALID_TRACE_ID,
17 )
18 from sentry_sdk.consts import INSTRUMENTER
19 from sentry_sdk.hub import Hub
20 from sentry_sdk.integrations.opentelemetry.consts import (
21 SENTRY_BAGGAGE_KEY,
22 SENTRY_TRACE_KEY,
23 )
24 from sentry_sdk.scope import add_global_event_processor
25 from sentry_sdk.tracing import Transaction, Span as SentrySpan
26 from sentry_sdk.utils import Dsn
27 from sentry_sdk._types import MYPY
28
29 from urllib3.util import parse_url as urlparse # type: ignore
30
31 if MYPY:
32 from typing import Any
33 from typing import Dict
34 from typing import Union
35 from sentry_sdk._types import Event, Hint
36
37 OPEN_TELEMETRY_CONTEXT = "otel"
38
39
40 def link_trace_context_to_error_event(event, otel_span_map):
41 # type: (Event, Dict[str, Union[Transaction, OTelSpan]]) -> Event
42 hub = Hub.current
43 if not hub:
44 return event
45
46 if hub.client and hub.client.options["instrumenter"] != INSTRUMENTER.OTEL:
47 return event
48
49 if hasattr(event, "type") and event["type"] == "transaction":
50 return event
51
52 otel_span = get_current_span()
53 if not otel_span:
54 return event
55
56 ctx = otel_span.get_span_context()
57 trace_id = format_trace_id(ctx.trace_id)
58 span_id = format_span_id(ctx.span_id)
59
60 if trace_id == INVALID_TRACE_ID or span_id == INVALID_SPAN_ID:
61 return event
62
63 sentry_span = otel_span_map.get(span_id, None)
64 if not sentry_span:
65 return event
66
67 contexts = event.setdefault("contexts", {})
68 contexts.setdefault("trace", {}).update(sentry_span.get_trace_context())
69
70 return event
71
72
73 class SentrySpanProcessor(SpanProcessor): # type: ignore
74 """
75 Converts OTel spans into Sentry spans so they can be sent to the Sentry backend.
76 """
77
78 # The mapping from otel span ids to sentry spans
79 otel_span_map = {} # type: Dict[str, Union[Transaction, OTelSpan]]
80
81 def __new__(cls):
82 # type: () -> SentrySpanProcessor
83 if not hasattr(cls, "instance"):
84 cls.instance = super(SentrySpanProcessor, cls).__new__(cls)
85
86 return cls.instance
87
88 def __init__(self):
89 # type: () -> None
90 @add_global_event_processor
91 def global_event_processor(event, hint):
92 # type: (Event, Hint) -> Event
93 return link_trace_context_to_error_event(event, self.otel_span_map)
94
95 def on_start(self, otel_span, parent_context=None):
96 # type: (OTelSpan, SpanContext) -> None
97 hub = Hub.current
98 if not hub:
99 return
100
101 if hub.client and hub.client.options["instrumenter"] != INSTRUMENTER.OTEL:
102 return
103
104 if not otel_span.context.is_valid:
105 return
106
107 if self._is_sentry_span(hub, otel_span):
108 return
109
110 trace_data = self._get_trace_data(otel_span, parent_context)
111
112 parent_span_id = trace_data["parent_span_id"]
113 sentry_parent_span = (
114 self.otel_span_map.get(parent_span_id, None) if parent_span_id else None
115 )
116
117 sentry_span = None
118 if sentry_parent_span:
119 sentry_span = sentry_parent_span.start_child(
120 span_id=trace_data["span_id"],
121 description=otel_span.name,
122 start_timestamp=datetime.fromtimestamp(otel_span.start_time / 1e9),
123 instrumenter=INSTRUMENTER.OTEL,
124 )
125 else:
126 sentry_span = hub.start_transaction(
127 name=otel_span.name,
128 span_id=trace_data["span_id"],
129 parent_span_id=parent_span_id,
130 trace_id=trace_data["trace_id"],
131 baggage=trace_data["baggage"],
132 start_timestamp=datetime.fromtimestamp(otel_span.start_time / 1e9),
133 instrumenter=INSTRUMENTER.OTEL,
134 )
135
136 self.otel_span_map[trace_data["span_id"]] = sentry_span
137
138 def on_end(self, otel_span):
139 # type: (OTelSpan) -> None
140 hub = Hub.current
141 if not hub:
142 return
143
144 if hub.client and hub.client.options["instrumenter"] != INSTRUMENTER.OTEL:
145 return
146
147 if not otel_span.context.is_valid:
148 return
149
150 span_id = format_span_id(otel_span.context.span_id)
151 sentry_span = self.otel_span_map.pop(span_id, None)
152 if not sentry_span:
153 return
154
155 sentry_span.op = otel_span.name
156
157 if isinstance(sentry_span, Transaction):
158 sentry_span.name = otel_span.name
159 sentry_span.set_context(
160 OPEN_TELEMETRY_CONTEXT, self._get_otel_context(otel_span)
161 )
162
163 else:
164 self._update_span_with_otel_data(sentry_span, otel_span)
165
166 sentry_span.finish(
167 end_timestamp=datetime.fromtimestamp(otel_span.end_time / 1e9)
168 )
169
170 def _is_sentry_span(self, hub, otel_span):
171 # type: (Hub, OTelSpan) -> bool
172 """
173 Break infinite loop:
174 HTTP requests to Sentry are caught by OTel and send again to Sentry.
175 """
176 otel_span_url = otel_span.attributes.get(SpanAttributes.HTTP_URL, None)
177 dsn_url = hub.client and Dsn(hub.client.dsn or "").netloc
178
179 if otel_span_url and dsn_url in otel_span_url:
180 return True
181
182 return False
183
184 def _get_otel_context(self, otel_span):
185 # type: (OTelSpan) -> Dict[str, Any]
186 """
187 Returns the OTel context for Sentry.
188 See: https://develop.sentry.dev/sdk/performance/opentelemetry/#step-5-add-opentelemetry-context
189 """
190 ctx = {}
191
192 if otel_span.attributes:
193 ctx["attributes"] = dict(otel_span.attributes)
194
195 if otel_span.resource.attributes:
196 ctx["resource"] = dict(otel_span.resource.attributes)
197
198 return ctx
199
200 def _get_trace_data(self, otel_span, parent_context):
201 # type: (OTelSpan, SpanContext) -> Dict[str, Any]
202 """
203 Extracts tracing information from one OTel span and its parent OTel context.
204 """
205 trace_data = {}
206
207 span_id = format_span_id(otel_span.context.span_id)
208 trace_data["span_id"] = span_id
209
210 trace_id = format_trace_id(otel_span.context.trace_id)
211 trace_data["trace_id"] = trace_id
212
213 parent_span_id = (
214 format_span_id(otel_span.parent.span_id) if otel_span.parent else None
215 )
216 trace_data["parent_span_id"] = parent_span_id
217
218 sentry_trace_data = get_value(SENTRY_TRACE_KEY, parent_context)
219 trace_data["parent_sampled"] = (
220 sentry_trace_data["parent_sampled"] if sentry_trace_data else None
221 )
222
223 baggage = get_value(SENTRY_BAGGAGE_KEY, parent_context)
224 trace_data["baggage"] = baggage
225
226 return trace_data
227
228 def _update_span_with_otel_data(self, sentry_span, otel_span):
229 # type: (SentrySpan, OTelSpan) -> None
230 """
231 Convert OTel span data and update the Sentry span with it.
232 This should eventually happen on the server when ingesting the spans.
233 """
234 for key, val in otel_span.attributes.items():
235 sentry_span.set_data(key, val)
236
237 sentry_span.set_data("otel.kind", otel_span.kind)
238
239 op = otel_span.name
240 description = otel_span.name
241
242 http_method = otel_span.attributes.get(SpanAttributes.HTTP_METHOD, None)
243 db_query = otel_span.attributes.get(SpanAttributes.DB_SYSTEM, None)
244
245 if http_method:
246 op = "http"
247
248 if otel_span.kind == SpanKind.SERVER:
249 op += ".server"
250 elif otel_span.kind == SpanKind.CLIENT:
251 op += ".client"
252
253 description = http_method
254
255 peer_name = otel_span.attributes.get(SpanAttributes.NET_PEER_NAME, None)
256 if peer_name:
257 description += " {}".format(peer_name)
258
259 target = otel_span.attributes.get(SpanAttributes.HTTP_TARGET, None)
260 if target:
261 description += " {}".format(target)
262
263 if not peer_name and not target:
264 url = otel_span.attributes.get(SpanAttributes.HTTP_URL, None)
265 if url:
266 parsed_url = urlparse(url)
267 url = f"{parsed_url.scheme}://{parsed_url.netloc}{parsed_url.path}"
268 description += " {}".format(url)
269
270 status_code = otel_span.attributes.get(
271 SpanAttributes.HTTP_STATUS_CODE, None
272 )
273 if status_code:
274 sentry_span.set_http_status(status_code)
275
276 elif db_query:
277 op = "db"
278 statement = otel_span.attributes.get(SpanAttributes.DB_STATEMENT, None)
279 if statement:
280 description = statement
281
282 sentry_span.op = op
283 sentry_span.description = description
284
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/sentry_sdk/integrations/opentelemetry/span_processor.py b/sentry_sdk/integrations/opentelemetry/span_processor.py
--- a/sentry_sdk/integrations/opentelemetry/span_processor.py
+++ b/sentry_sdk/integrations/opentelemetry/span_processor.py
@@ -98,6 +98,14 @@
if not hub:
return
+ if not hub.client or (hub.client and not hub.client.dsn):
+ return
+
+ try:
+ _ = Dsn(hub.client.dsn or "")
+ except Exception:
+ return
+
if hub.client and hub.client.options["instrumenter"] != INSTRUMENTER.OTEL:
return
| {"golden_diff": "diff --git a/sentry_sdk/integrations/opentelemetry/span_processor.py b/sentry_sdk/integrations/opentelemetry/span_processor.py\n--- a/sentry_sdk/integrations/opentelemetry/span_processor.py\n+++ b/sentry_sdk/integrations/opentelemetry/span_processor.py\n@@ -98,6 +98,14 @@\n if not hub:\n return\n \n+ if not hub.client or (hub.client and not hub.client.dsn):\n+ return\n+\n+ try:\n+ _ = Dsn(hub.client.dsn or \"\")\n+ except Exception:\n+ return\n+\n if hub.client and hub.client.options[\"instrumenter\"] != INSTRUMENTER.OTEL:\n return\n", "issue": "Empty DSN crashes Otel integration\n### How do you use Sentry?\n\nSentry Saas (sentry.io)\n\n### Version\n\n1.12.0\n\n### Steps to Reproduce\n\nDo not give a DSN when initializing the SDK with the Otel integration. \r\n\r\nYou will get this:\r\n\r\n```\r\nrecommendation-service | 2023-01-18 10:24:50,723 ERROR [grpc._server] [_server.py:454] [trace_id=0 span_id=0 resource.service.name=recommendationservice] - Exce\r\nption calling application: Unsupported scheme ''\r\nrecommendation-service | Traceback (most recent call last):\r\nrecommendation-service | File \"/usr/local/lib/python3.10/site-packages/grpc/_server.py\", line 444, in _call_behavior\r\nrecommendation-service | response_or_iterator = behavior(argument, context)\r\nrecommendation-service | File \"/usr/local/lib/python3.10/site-packages/opentelemetry/instrumentation/grpc/_server.py\", line 282, in telemetry_interceptor\r\nrecommendation-service | with self._start_span(\r\nrecommendation-service | File \"/usr/local/lib/python3.10/contextlib.py\", line 135, in __enter__\r\nrecommendation-service | return next(self.gen)\r\nrecommendation-service | File \"/usr/local/lib/python3.10/site-packages/opentelemetry/sdk/trace/__init__.py\", line 1017, in start_as_current_span\r\nrecommendation-service | span = self.start_span(\r\nrecommendation-service | File \"/usr/local/lib/python3.10/site-packages/opentelemetry/sdk/trace/__init__.py\", line 1107, in start_span\r\nrecommendation-service | span.start(start_time=start_time, parent_context=context)\r\nrecommendation-service | File \"/usr/local/lib/python3.10/site-packages/opentelemetry/sdk/trace/__init__.py\", line 870, in start\r\nrecommendation-service | self._span_processor.on_start(self, parent_context=parent_context)\r\nrecommendation-service | File \"/usr/local/lib/python3.10/site-packages/opentelemetry/sdk/trace/__init__.py\", line 162, in on_start\r\nrecommendation-service | sp.on_start(span, parent_context=parent_context)\r\nrecommendation-service | File \"/usr/local/lib/python3.10/site-packages/sentry_sdk/integrations/opentelemetry/span_processor.py\", line 107, in on_start\r\nrecommendation-service | if self._is_sentry_span(hub, otel_span):\r\nrecommendation-service | File \"/usr/local/lib/python3.10/site-packages/sentry_sdk/integrations/opentelemetry/span_processor.py\", line 177, in _is_sentry_span\r\nrecommendation-service | dsn_url = hub.client and Dsn(hub.client.dsn or \"\").netloc\r\nrecommendation-service | File \"/usr/local/lib/python3.10/site-packages/sentry_sdk/utils.py\", line 200, in __init__\r\nrecommendation-service | raise BadDsn(\"Unsupported scheme %r\" % parts.scheme)\r\nrecommendation-service | sentry_sdk.utils.BadDsn: Unsupported scheme ''\r\n```\n\n### Expected Result\n\nThe Sentry SDK should just give a Warning and then do nothing and NOT crash everything.\n\n### Actual Result\n\nCrash\n", "before_files": [{"content": "from datetime import datetime\n\nfrom opentelemetry.context import get_value # 
type: ignore\nfrom opentelemetry.sdk.trace import SpanProcessor # type: ignore\nfrom opentelemetry.semconv.trace import SpanAttributes # type: ignore\nfrom opentelemetry.trace import ( # type: ignore\n format_span_id,\n format_trace_id,\n get_current_span,\n SpanContext,\n Span as OTelSpan,\n SpanKind,\n)\nfrom opentelemetry.trace.span import ( # type: ignore\n INVALID_SPAN_ID,\n INVALID_TRACE_ID,\n)\nfrom sentry_sdk.consts import INSTRUMENTER\nfrom sentry_sdk.hub import Hub\nfrom sentry_sdk.integrations.opentelemetry.consts import (\n SENTRY_BAGGAGE_KEY,\n SENTRY_TRACE_KEY,\n)\nfrom sentry_sdk.scope import add_global_event_processor\nfrom sentry_sdk.tracing import Transaction, Span as SentrySpan\nfrom sentry_sdk.utils import Dsn\nfrom sentry_sdk._types import MYPY\n\nfrom urllib3.util import parse_url as urlparse # type: ignore\n\nif MYPY:\n from typing import Any\n from typing import Dict\n from typing import Union\n from sentry_sdk._types import Event, Hint\n\nOPEN_TELEMETRY_CONTEXT = \"otel\"\n\n\ndef link_trace_context_to_error_event(event, otel_span_map):\n # type: (Event, Dict[str, Union[Transaction, OTelSpan]]) -> Event\n hub = Hub.current\n if not hub:\n return event\n\n if hub.client and hub.client.options[\"instrumenter\"] != INSTRUMENTER.OTEL:\n return event\n\n if hasattr(event, \"type\") and event[\"type\"] == \"transaction\":\n return event\n\n otel_span = get_current_span()\n if not otel_span:\n return event\n\n ctx = otel_span.get_span_context()\n trace_id = format_trace_id(ctx.trace_id)\n span_id = format_span_id(ctx.span_id)\n\n if trace_id == INVALID_TRACE_ID or span_id == INVALID_SPAN_ID:\n return event\n\n sentry_span = otel_span_map.get(span_id, None)\n if not sentry_span:\n return event\n\n contexts = event.setdefault(\"contexts\", {})\n contexts.setdefault(\"trace\", {}).update(sentry_span.get_trace_context())\n\n return event\n\n\nclass SentrySpanProcessor(SpanProcessor): # type: ignore\n \"\"\"\n Converts OTel spans into Sentry spans so they can be sent to the Sentry backend.\n \"\"\"\n\n # The mapping from otel span ids to sentry spans\n otel_span_map = {} # type: Dict[str, Union[Transaction, OTelSpan]]\n\n def __new__(cls):\n # type: () -> SentrySpanProcessor\n if not hasattr(cls, \"instance\"):\n cls.instance = super(SentrySpanProcessor, cls).__new__(cls)\n\n return cls.instance\n\n def __init__(self):\n # type: () -> None\n @add_global_event_processor\n def global_event_processor(event, hint):\n # type: (Event, Hint) -> Event\n return link_trace_context_to_error_event(event, self.otel_span_map)\n\n def on_start(self, otel_span, parent_context=None):\n # type: (OTelSpan, SpanContext) -> None\n hub = Hub.current\n if not hub:\n return\n\n if hub.client and hub.client.options[\"instrumenter\"] != INSTRUMENTER.OTEL:\n return\n\n if not otel_span.context.is_valid:\n return\n\n if self._is_sentry_span(hub, otel_span):\n return\n\n trace_data = self._get_trace_data(otel_span, parent_context)\n\n parent_span_id = trace_data[\"parent_span_id\"]\n sentry_parent_span = (\n self.otel_span_map.get(parent_span_id, None) if parent_span_id else None\n )\n\n sentry_span = None\n if sentry_parent_span:\n sentry_span = sentry_parent_span.start_child(\n span_id=trace_data[\"span_id\"],\n description=otel_span.name,\n start_timestamp=datetime.fromtimestamp(otel_span.start_time / 1e9),\n instrumenter=INSTRUMENTER.OTEL,\n )\n else:\n sentry_span = hub.start_transaction(\n name=otel_span.name,\n span_id=trace_data[\"span_id\"],\n parent_span_id=parent_span_id,\n 
trace_id=trace_data[\"trace_id\"],\n baggage=trace_data[\"baggage\"],\n start_timestamp=datetime.fromtimestamp(otel_span.start_time / 1e9),\n instrumenter=INSTRUMENTER.OTEL,\n )\n\n self.otel_span_map[trace_data[\"span_id\"]] = sentry_span\n\n def on_end(self, otel_span):\n # type: (OTelSpan) -> None\n hub = Hub.current\n if not hub:\n return\n\n if hub.client and hub.client.options[\"instrumenter\"] != INSTRUMENTER.OTEL:\n return\n\n if not otel_span.context.is_valid:\n return\n\n span_id = format_span_id(otel_span.context.span_id)\n sentry_span = self.otel_span_map.pop(span_id, None)\n if not sentry_span:\n return\n\n sentry_span.op = otel_span.name\n\n if isinstance(sentry_span, Transaction):\n sentry_span.name = otel_span.name\n sentry_span.set_context(\n OPEN_TELEMETRY_CONTEXT, self._get_otel_context(otel_span)\n )\n\n else:\n self._update_span_with_otel_data(sentry_span, otel_span)\n\n sentry_span.finish(\n end_timestamp=datetime.fromtimestamp(otel_span.end_time / 1e9)\n )\n\n def _is_sentry_span(self, hub, otel_span):\n # type: (Hub, OTelSpan) -> bool\n \"\"\"\n Break infinite loop:\n HTTP requests to Sentry are caught by OTel and send again to Sentry.\n \"\"\"\n otel_span_url = otel_span.attributes.get(SpanAttributes.HTTP_URL, None)\n dsn_url = hub.client and Dsn(hub.client.dsn or \"\").netloc\n\n if otel_span_url and dsn_url in otel_span_url:\n return True\n\n return False\n\n def _get_otel_context(self, otel_span):\n # type: (OTelSpan) -> Dict[str, Any]\n \"\"\"\n Returns the OTel context for Sentry.\n See: https://develop.sentry.dev/sdk/performance/opentelemetry/#step-5-add-opentelemetry-context\n \"\"\"\n ctx = {}\n\n if otel_span.attributes:\n ctx[\"attributes\"] = dict(otel_span.attributes)\n\n if otel_span.resource.attributes:\n ctx[\"resource\"] = dict(otel_span.resource.attributes)\n\n return ctx\n\n def _get_trace_data(self, otel_span, parent_context):\n # type: (OTelSpan, SpanContext) -> Dict[str, Any]\n \"\"\"\n Extracts tracing information from one OTel span and its parent OTel context.\n \"\"\"\n trace_data = {}\n\n span_id = format_span_id(otel_span.context.span_id)\n trace_data[\"span_id\"] = span_id\n\n trace_id = format_trace_id(otel_span.context.trace_id)\n trace_data[\"trace_id\"] = trace_id\n\n parent_span_id = (\n format_span_id(otel_span.parent.span_id) if otel_span.parent else None\n )\n trace_data[\"parent_span_id\"] = parent_span_id\n\n sentry_trace_data = get_value(SENTRY_TRACE_KEY, parent_context)\n trace_data[\"parent_sampled\"] = (\n sentry_trace_data[\"parent_sampled\"] if sentry_trace_data else None\n )\n\n baggage = get_value(SENTRY_BAGGAGE_KEY, parent_context)\n trace_data[\"baggage\"] = baggage\n\n return trace_data\n\n def _update_span_with_otel_data(self, sentry_span, otel_span):\n # type: (SentrySpan, OTelSpan) -> None\n \"\"\"\n Convert OTel span data and update the Sentry span with it.\n This should eventually happen on the server when ingesting the spans.\n \"\"\"\n for key, val in otel_span.attributes.items():\n sentry_span.set_data(key, val)\n\n sentry_span.set_data(\"otel.kind\", otel_span.kind)\n\n op = otel_span.name\n description = otel_span.name\n\n http_method = otel_span.attributes.get(SpanAttributes.HTTP_METHOD, None)\n db_query = otel_span.attributes.get(SpanAttributes.DB_SYSTEM, None)\n\n if http_method:\n op = \"http\"\n\n if otel_span.kind == SpanKind.SERVER:\n op += \".server\"\n elif otel_span.kind == SpanKind.CLIENT:\n op += \".client\"\n\n description = http_method\n\n peer_name = 
otel_span.attributes.get(SpanAttributes.NET_PEER_NAME, None)\n if peer_name:\n description += \" {}\".format(peer_name)\n\n target = otel_span.attributes.get(SpanAttributes.HTTP_TARGET, None)\n if target:\n description += \" {}\".format(target)\n\n if not peer_name and not target:\n url = otel_span.attributes.get(SpanAttributes.HTTP_URL, None)\n if url:\n parsed_url = urlparse(url)\n url = f\"{parsed_url.scheme}://{parsed_url.netloc}{parsed_url.path}\"\n description += \" {}\".format(url)\n\n status_code = otel_span.attributes.get(\n SpanAttributes.HTTP_STATUS_CODE, None\n )\n if status_code:\n sentry_span.set_http_status(status_code)\n\n elif db_query:\n op = \"db\"\n statement = otel_span.attributes.get(SpanAttributes.DB_STATEMENT, None)\n if statement:\n description = statement\n\n sentry_span.op = op\n sentry_span.description = description\n", "path": "sentry_sdk/integrations/opentelemetry/span_processor.py"}], "after_files": [{"content": "from datetime import datetime\n\nfrom opentelemetry.context import get_value # type: ignore\nfrom opentelemetry.sdk.trace import SpanProcessor # type: ignore\nfrom opentelemetry.semconv.trace import SpanAttributes # type: ignore\nfrom opentelemetry.trace import ( # type: ignore\n format_span_id,\n format_trace_id,\n get_current_span,\n SpanContext,\n Span as OTelSpan,\n SpanKind,\n)\nfrom opentelemetry.trace.span import ( # type: ignore\n INVALID_SPAN_ID,\n INVALID_TRACE_ID,\n)\nfrom sentry_sdk.consts import INSTRUMENTER\nfrom sentry_sdk.hub import Hub\nfrom sentry_sdk.integrations.opentelemetry.consts import (\n SENTRY_BAGGAGE_KEY,\n SENTRY_TRACE_KEY,\n)\nfrom sentry_sdk.scope import add_global_event_processor\nfrom sentry_sdk.tracing import Transaction, Span as SentrySpan\nfrom sentry_sdk.utils import Dsn\nfrom sentry_sdk._types import MYPY\n\nfrom urllib3.util import parse_url as urlparse # type: ignore\n\nif MYPY:\n from typing import Any\n from typing import Dict\n from typing import Union\n from sentry_sdk._types import Event, Hint\n\nOPEN_TELEMETRY_CONTEXT = \"otel\"\n\n\ndef link_trace_context_to_error_event(event, otel_span_map):\n # type: (Event, Dict[str, Union[Transaction, OTelSpan]]) -> Event\n hub = Hub.current\n if not hub:\n return event\n\n if hub.client and hub.client.options[\"instrumenter\"] != INSTRUMENTER.OTEL:\n return event\n\n if hasattr(event, \"type\") and event[\"type\"] == \"transaction\":\n return event\n\n otel_span = get_current_span()\n if not otel_span:\n return event\n\n ctx = otel_span.get_span_context()\n trace_id = format_trace_id(ctx.trace_id)\n span_id = format_span_id(ctx.span_id)\n\n if trace_id == INVALID_TRACE_ID or span_id == INVALID_SPAN_ID:\n return event\n\n sentry_span = otel_span_map.get(span_id, None)\n if not sentry_span:\n return event\n\n contexts = event.setdefault(\"contexts\", {})\n contexts.setdefault(\"trace\", {}).update(sentry_span.get_trace_context())\n\n return event\n\n\nclass SentrySpanProcessor(SpanProcessor): # type: ignore\n \"\"\"\n Converts OTel spans into Sentry spans so they can be sent to the Sentry backend.\n \"\"\"\n\n # The mapping from otel span ids to sentry spans\n otel_span_map = {} # type: Dict[str, Union[Transaction, OTelSpan]]\n\n def __new__(cls):\n # type: () -> SentrySpanProcessor\n if not hasattr(cls, \"instance\"):\n cls.instance = super(SentrySpanProcessor, cls).__new__(cls)\n\n return cls.instance\n\n def __init__(self):\n # type: () -> None\n @add_global_event_processor\n def global_event_processor(event, hint):\n # type: (Event, Hint) -> Event\n return 
link_trace_context_to_error_event(event, self.otel_span_map)\n\n def on_start(self, otel_span, parent_context=None):\n # type: (OTelSpan, SpanContext) -> None\n hub = Hub.current\n if not hub:\n return\n\n if not hub.client or (hub.client and not hub.client.dsn):\n return\n\n try:\n _ = Dsn(hub.client.dsn or \"\")\n except Exception:\n return\n\n if hub.client and hub.client.options[\"instrumenter\"] != INSTRUMENTER.OTEL:\n return\n\n if not otel_span.context.is_valid:\n return\n\n if self._is_sentry_span(hub, otel_span):\n return\n\n trace_data = self._get_trace_data(otel_span, parent_context)\n\n parent_span_id = trace_data[\"parent_span_id\"]\n sentry_parent_span = (\n self.otel_span_map.get(parent_span_id, None) if parent_span_id else None\n )\n\n sentry_span = None\n if sentry_parent_span:\n sentry_span = sentry_parent_span.start_child(\n span_id=trace_data[\"span_id\"],\n description=otel_span.name,\n start_timestamp=datetime.fromtimestamp(otel_span.start_time / 1e9),\n instrumenter=INSTRUMENTER.OTEL,\n )\n else:\n sentry_span = hub.start_transaction(\n name=otel_span.name,\n span_id=trace_data[\"span_id\"],\n parent_span_id=parent_span_id,\n trace_id=trace_data[\"trace_id\"],\n baggage=trace_data[\"baggage\"],\n start_timestamp=datetime.fromtimestamp(otel_span.start_time / 1e9),\n instrumenter=INSTRUMENTER.OTEL,\n )\n\n self.otel_span_map[trace_data[\"span_id\"]] = sentry_span\n\n def on_end(self, otel_span):\n # type: (OTelSpan) -> None\n hub = Hub.current\n if not hub:\n return\n\n if hub.client and hub.client.options[\"instrumenter\"] != INSTRUMENTER.OTEL:\n return\n\n if not otel_span.context.is_valid:\n return\n\n span_id = format_span_id(otel_span.context.span_id)\n sentry_span = self.otel_span_map.pop(span_id, None)\n if not sentry_span:\n return\n\n sentry_span.op = otel_span.name\n\n if isinstance(sentry_span, Transaction):\n sentry_span.name = otel_span.name\n sentry_span.set_context(\n OPEN_TELEMETRY_CONTEXT, self._get_otel_context(otel_span)\n )\n\n else:\n self._update_span_with_otel_data(sentry_span, otel_span)\n\n sentry_span.finish(\n end_timestamp=datetime.fromtimestamp(otel_span.end_time / 1e9)\n )\n\n def _is_sentry_span(self, hub, otel_span):\n # type: (Hub, OTelSpan) -> bool\n \"\"\"\n Break infinite loop:\n HTTP requests to Sentry are caught by OTel and send again to Sentry.\n \"\"\"\n otel_span_url = otel_span.attributes.get(SpanAttributes.HTTP_URL, None)\n dsn_url = hub.client and Dsn(hub.client.dsn or \"\").netloc\n\n if otel_span_url and dsn_url in otel_span_url:\n return True\n\n return False\n\n def _get_otel_context(self, otel_span):\n # type: (OTelSpan) -> Dict[str, Any]\n \"\"\"\n Returns the OTel context for Sentry.\n See: https://develop.sentry.dev/sdk/performance/opentelemetry/#step-5-add-opentelemetry-context\n \"\"\"\n ctx = {}\n\n if otel_span.attributes:\n ctx[\"attributes\"] = dict(otel_span.attributes)\n\n if otel_span.resource.attributes:\n ctx[\"resource\"] = dict(otel_span.resource.attributes)\n\n return ctx\n\n def _get_trace_data(self, otel_span, parent_context):\n # type: (OTelSpan, SpanContext) -> Dict[str, Any]\n \"\"\"\n Extracts tracing information from one OTel span and its parent OTel context.\n \"\"\"\n trace_data = {}\n\n span_id = format_span_id(otel_span.context.span_id)\n trace_data[\"span_id\"] = span_id\n\n trace_id = format_trace_id(otel_span.context.trace_id)\n trace_data[\"trace_id\"] = trace_id\n\n parent_span_id = (\n format_span_id(otel_span.parent.span_id) if otel_span.parent else None\n )\n 
trace_data[\"parent_span_id\"] = parent_span_id\n\n sentry_trace_data = get_value(SENTRY_TRACE_KEY, parent_context)\n trace_data[\"parent_sampled\"] = (\n sentry_trace_data[\"parent_sampled\"] if sentry_trace_data else None\n )\n\n baggage = get_value(SENTRY_BAGGAGE_KEY, parent_context)\n trace_data[\"baggage\"] = baggage\n\n return trace_data\n\n def _update_span_with_otel_data(self, sentry_span, otel_span):\n # type: (SentrySpan, OTelSpan) -> None\n \"\"\"\n Convert OTel span data and update the Sentry span with it.\n This should eventually happen on the server when ingesting the spans.\n \"\"\"\n for key, val in otel_span.attributes.items():\n sentry_span.set_data(key, val)\n\n sentry_span.set_data(\"otel.kind\", otel_span.kind)\n\n op = otel_span.name\n description = otel_span.name\n\n http_method = otel_span.attributes.get(SpanAttributes.HTTP_METHOD, None)\n db_query = otel_span.attributes.get(SpanAttributes.DB_SYSTEM, None)\n\n if http_method:\n op = \"http\"\n\n if otel_span.kind == SpanKind.SERVER:\n op += \".server\"\n elif otel_span.kind == SpanKind.CLIENT:\n op += \".client\"\n\n description = http_method\n\n peer_name = otel_span.attributes.get(SpanAttributes.NET_PEER_NAME, None)\n if peer_name:\n description += \" {}\".format(peer_name)\n\n target = otel_span.attributes.get(SpanAttributes.HTTP_TARGET, None)\n if target:\n description += \" {}\".format(target)\n\n if not peer_name and not target:\n url = otel_span.attributes.get(SpanAttributes.HTTP_URL, None)\n if url:\n parsed_url = urlparse(url)\n url = f\"{parsed_url.scheme}://{parsed_url.netloc}{parsed_url.path}\"\n description += \" {}\".format(url)\n\n status_code = otel_span.attributes.get(\n SpanAttributes.HTTP_STATUS_CODE, None\n )\n if status_code:\n sentry_span.set_http_status(status_code)\n\n elif db_query:\n op = \"db\"\n statement = otel_span.attributes.get(SpanAttributes.DB_STATEMENT, None)\n if statement:\n description = statement\n\n sentry_span.op = op\n sentry_span.description = description\n", "path": "sentry_sdk/integrations/opentelemetry/span_processor.py"}]} |
gh_patches_debug_1157 | rasdani/github-patches | git_diff | DDMAL__CantusDB-819 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Source Edit Page: sidebar: source navigation links not displayed
breaking this issue out from #483.
OldCantus - https://cantus.uwaterloo.ca/node/711311/edit:

NewCantus - http://206.12.88.113/edit-source/123653:

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `django/cantusdb_project/main_app/views/source.py`
Content:
```
1 from django.views.generic import DetailView, ListView, CreateView, UpdateView
2 from django.db.models import Q, Prefetch
3 from main_app.models import Source, Provenance, Century
4 from main_app.forms import SourceCreateForm, SourceEditForm
5 from django.contrib import messages
6 from django.urls import reverse
7 from django.contrib.auth.mixins import LoginRequiredMixin
8 from django.http import HttpResponseRedirect
9 from django.contrib.auth.mixins import UserPassesTestMixin
10 from django.core.exceptions import PermissionDenied
11 from django.shortcuts import get_object_or_404
12 from main_app.views.chant import get_feast_selector_options
13
14
15 class SourceDetailView(DetailView):
16 model = Source
17 context_object_name = "source"
18 template_name = "source_detail.html"
19
20 def get_context_data(self, **kwargs):
21 source = self.get_object()
22 display_unpublished = self.request.user.is_authenticated
23 if (source.published is False) and (not display_unpublished):
24 raise PermissionDenied()
25
26 context = super().get_context_data(**kwargs)
27
28 if source.segment and source.segment.id == 4064:
29 # if this is a sequence source
30 context["sequences"] = source.sequence_set.order_by("s_sequence")
31 context["folios"] = (
32 source.sequence_set.values_list("folio", flat=True)
33 .distinct()
34 .order_by("folio")
35 )
36 else:
37 # if this is a chant source
38 folios = (
39 source.chant_set.values_list("folio", flat=True)
40 .distinct()
41 .order_by("folio")
42 )
43 context["folios"] = folios
44 # the options for the feast selector on the right, only chant sources have this
45 context["feasts_with_folios"] = get_feast_selector_options(source, folios)
46 return context
47
48
49 class SourceListView(ListView):
50 paginate_by = 100
51 context_object_name = "sources"
52 template_name = "source_list.html"
53
54 def get_context_data(self, **kwargs):
55 context = super().get_context_data(**kwargs)
56 context["provenances"] = (
57 Provenance.objects.all().order_by("name").values("id", "name")
58 )
59 context["centuries"] = (
60 Century.objects.all().order_by("name").values("id", "name")
61 )
62 return context
63
64 def get_queryset(self):
65 # use select_related() for foreign keys to reduce DB queries
66 queryset = Source.objects.select_related(
67 "rism_siglum", "segment", "provenance"
68 ).order_by("siglum")
69
70 display_unpublished = self.request.user.is_authenticated
71 if display_unpublished:
72 q_obj_filter = Q()
73 else:
74 q_obj_filter = Q(published=True)
75
76 if self.request.GET.get("century"):
77 century_name = Century.objects.get(id=self.request.GET.get("century")).name
78 q_obj_filter &= Q(century__name__icontains=century_name)
79
80 if self.request.GET.get("provenance"):
81 provenance_id = int(self.request.GET.get("provenance"))
82 q_obj_filter &= Q(provenance__id=provenance_id)
83 if self.request.GET.get("segment"):
84 segment_id = int(self.request.GET.get("segment"))
85 q_obj_filter &= Q(segment__id=segment_id)
86 if self.request.GET.get("fullSource") in ["true", "false"]:
87 full_source_str = self.request.GET.get("fullSource")
88 if full_source_str == "true":
89 full_source_q = Q(full_source=True) | Q(full_source=None)
90 q_obj_filter &= full_source_q
91 else:
92 q_obj_filter &= Q(full_source=False)
93
94 if self.request.GET.get("general"):
95 # Strip spaces at the beginning and end. Then make list of terms split on spaces
96 general_search_terms = self.request.GET.get("general").strip(" ").split(" ")
97 # We need a Q Object for each field we're gonna look into
98 title_q = Q()
99 siglum_q = Q()
100 rism_siglum_q = Q()
101 description_q = Q()
102 # it seems that old cantus don't look into title and provenance for the general search terms
103 # cantus.uwaterloo.ca/source/123901 this source cannot be found by searching its provenance 'Kremsmünster' in the general search field
104 # provenance_q = Q()
105 summary_q = Q()
106
107 # For each term, add it to the Q object of each field with an OR operation.
108 # We split the terms so that the words can be separated in the actual
109 # field, allowing for a more flexible search, and a field needs
110 # to match only one of the terms
111 for term in general_search_terms:
112 title_q |= Q(title__icontains=term)
113 siglum_q |= Q(siglum__icontains=term)
114 rism_siglum_q |= Q(rism_siglum__name__icontains=term) | Q(
115 rism_siglum__description__icontains=term
116 )
117 description_q |= Q(description__icontains=term)
118 summary_q |= Q(summary__icontains=term)
119 # provenance_q |= Q(provenance__name__icontains=term)
120 # All the Q objects are put together with OR.
121 # The end result is that at least one term has to match in at least one
122 # field
123 # general_search_q = (
124 # title_q | siglum_q | rism_siglum_q | description_q | provenance_q
125 # )
126 general_search_q = (
127 title_q | siglum_q | rism_siglum_q | description_q | summary_q
128 )
129 q_obj_filter &= general_search_q
130
131 # For the indexing notes search we follow the same procedure as above but with
132 # different fields
133 if self.request.GET.get("indexing"):
134 # Make list of terms split on spaces
135 indexing_search_terms = self.request.GET.get("indexing").split(" ")
136 # We need a Q Object for each field we're gonna look into
137 inventoried_by_q = Q()
138 full_text_entered_by_q = Q()
139 melodies_entered_by_q = Q()
140 proofreaders_q = Q()
141 other_editors_q = Q()
142 indexing_notes_q = Q()
143 # For each term, add it to the Q object of each field with an OR operation.
144 # We split the terms so that the words can be separated in the actual
145 # field, allowing for a more flexible search, and a field needs
146 # to match only one of the terms
147 for term in indexing_search_terms:
148 inventoried_by_q |= Q(inventoried_by__full_name__icontains=term)
149 full_text_entered_by_q |= Q(
150 full_text_entered_by__full_name__icontains=term
151 )
152 melodies_entered_by_q |= Q(
153 melodies_entered_by__full_name__icontains=term
154 )
155 proofreaders_q |= Q(proofreaders__full_name__icontains=term)
156 other_editors_q |= Q(other_editors__full_name__icontains=term)
157 indexing_notes_q |= Q(indexing_notes__icontains=term)
158 # All the Q objects are put together with OR.
159 # The end result is that at least one term has to match in at least one
160 # field
161 indexing_search_q = (
162 inventoried_by_q
163 | full_text_entered_by_q
164 | melodies_entered_by_q
165 | proofreaders_q
166 | other_editors_q
167 | indexing_notes_q
168 )
169 q_obj_filter &= indexing_search_q
170
171 return queryset.filter(q_obj_filter).prefetch_related(
172 Prefetch("century", queryset=Century.objects.all().order_by("id"))
173 )
174
175
176 class SourceCreateView(LoginRequiredMixin, UserPassesTestMixin, CreateView):
177 model = Source
178 template_name = "source_create_form.html"
179 form_class = SourceCreateForm
180
181 def test_func(self):
182 user = self.request.user
183 # checks if the user is allowed to create sources
184 is_authorized = user.groups.filter(
185 Q(name="project manager") | Q(name="editor") | Q(name="contributor")
186 ).exists()
187
188 if is_authorized:
189 return True
190 else:
191 return False
192
193 def get_success_url(self):
194 return reverse("source-create")
195
196 def form_valid(self, form):
197 form.instance.created_by = self.request.user
198 source = form.save()
199
200 # assign this source to the "current_editors"
201 current_editors = source.current_editors.all()
202
203 for editor in current_editors:
204 editor.sources_user_can_edit.add(source)
205
206 messages.success(
207 self.request,
208 "Source created successfully!",
209 )
210
211 return HttpResponseRedirect(self.get_success_url())
212
213
214 class SourceEditView(LoginRequiredMixin, UserPassesTestMixin, UpdateView):
215 template_name = "source_edit.html"
216 model = Source
217 form_class = SourceEditForm
218 pk_url_kwarg = "source_id"
219
220 def test_func(self):
221 user = self.request.user
222 source_id = self.kwargs.get(self.pk_url_kwarg)
223 source = get_object_or_404(Source, id=source_id)
224
225 assigned_to_source = user.sources_user_can_edit.filter(id=source_id)
226
227 # checks if the user is a project manager
228 is_project_manager = user.groups.filter(name="project manager").exists()
229 # checks if the user is an editor
230 is_editor = user.groups.filter(name="editor").exists()
231 # checks if the user is a contributor
232 is_contributor = user.groups.filter(name="contributor").exists()
233
234 if (
235 (is_project_manager)
236 or (is_editor and assigned_to_source)
237 or (is_editor and source.created_by == user)
238 or (is_contributor and source.created_by == user)
239 ):
240 return True
241 else:
242 return False
243
244 def form_valid(self, form):
245 form.instance.last_updated_by = self.request.user
246
247 # remove this source from the old "current_editors"
248 # assign this source to the new "current_editors"
249
250 old_current_editors = list(
251 Source.objects.get(id=form.instance.id).current_editors.all()
252 )
253 new_current_editors = form.cleaned_data["current_editors"]
254 source = form.save()
255
256 for old_editor in old_current_editors:
257 old_editor.sources_user_can_edit.remove(source)
258
259 for new_editor in new_current_editors:
260 new_editor.sources_user_can_edit.add(source)
261
262 return HttpResponseRedirect(self.get_success_url())
263
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/django/cantusdb_project/main_app/views/source.py b/django/cantusdb_project/main_app/views/source.py
--- a/django/cantusdb_project/main_app/views/source.py
+++ b/django/cantusdb_project/main_app/views/source.py
@@ -211,7 +211,9 @@
return HttpResponseRedirect(self.get_success_url())
-class SourceEditView(LoginRequiredMixin, UserPassesTestMixin, UpdateView):
+class SourceEditView(
+ LoginRequiredMixin, UserPassesTestMixin, UpdateView, SourceDetailView
+):
template_name = "source_edit.html"
model = Source
form_class = SourceEditForm
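The one-line base-class change above works because Django's `get_context_data()` implementations call `super()` cooperatively: mixing `SourceDetailView` into `SourceEditView` lets the detail view's context (the `folios` list and `feasts_with_folios`) reach the edit template, which is presumably what the sidebar's source-navigation links are rendered from. The sketch below is a hypothetical alternative, not the merged fix: it rebuilds that context by hand inside `SourceEditView` (assuming the imports already present in `views/source.py`) and only covers the chant-source branch, which is exactly why reusing `SourceDetailView` is the tidier choice — the chant/sequence branching stays in one place.

```python
# Hypothetical alternative (not the merged fix): duplicate the sidebar context
# in SourceEditView instead of inheriting it from SourceDetailView.
class SourceEditView(LoginRequiredMixin, UserPassesTestMixin, UpdateView):
    template_name = "source_edit.html"
    model = Source
    form_class = SourceEditForm
    pk_url_kwarg = "source_id"
    # test_func() and form_valid() stay exactly as in the listing above.

    def get_context_data(self, **kwargs):
        context = super().get_context_data(**kwargs)
        source = self.object
        # Chant sources only; a sequence source (segment id 4064) would need the
        # other branch of SourceDetailView.get_context_data() repeated here too.
        folios = (
            source.chant_set.values_list("folio", flat=True)
            .distinct()
            .order_by("folio")
        )
        context["folios"] = folios
        context["feasts_with_folios"] = get_feast_selector_options(source, folios)
        return context
```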
| {"golden_diff": "diff --git a/django/cantusdb_project/main_app/views/source.py b/django/cantusdb_project/main_app/views/source.py\n--- a/django/cantusdb_project/main_app/views/source.py\n+++ b/django/cantusdb_project/main_app/views/source.py\n@@ -211,7 +211,9 @@\n return HttpResponseRedirect(self.get_success_url())\n \n \n-class SourceEditView(LoginRequiredMixin, UserPassesTestMixin, UpdateView):\n+class SourceEditView(\n+ LoginRequiredMixin, UserPassesTestMixin, UpdateView, SourceDetailView\n+):\n template_name = \"source_edit.html\"\n model = Source\n form_class = SourceEditForm\n", "issue": "Source Edit Page: sidebar: source navigation links not displayed\nbreaking this issue out from #483. \r\n\r\nOldCantus - https://cantus.uwaterloo.ca/node/711311/edit:\r\n\r\n\r\nNewCantus - http://206.12.88.113/edit-source/123653:\r\n\n", "before_files": [{"content": "from django.views.generic import DetailView, ListView, CreateView, UpdateView\nfrom django.db.models import Q, Prefetch\nfrom main_app.models import Source, Provenance, Century\nfrom main_app.forms import SourceCreateForm, SourceEditForm\nfrom django.contrib import messages\nfrom django.urls import reverse\nfrom django.contrib.auth.mixins import LoginRequiredMixin\nfrom django.http import HttpResponseRedirect\nfrom django.contrib.auth.mixins import UserPassesTestMixin\nfrom django.core.exceptions import PermissionDenied\nfrom django.shortcuts import get_object_or_404\nfrom main_app.views.chant import get_feast_selector_options\n\n\nclass SourceDetailView(DetailView):\n model = Source\n context_object_name = \"source\"\n template_name = \"source_detail.html\"\n\n def get_context_data(self, **kwargs):\n source = self.get_object()\n display_unpublished = self.request.user.is_authenticated\n if (source.published is False) and (not display_unpublished):\n raise PermissionDenied()\n\n context = super().get_context_data(**kwargs)\n\n if source.segment and source.segment.id == 4064:\n # if this is a sequence source\n context[\"sequences\"] = source.sequence_set.order_by(\"s_sequence\")\n context[\"folios\"] = (\n source.sequence_set.values_list(\"folio\", flat=True)\n .distinct()\n .order_by(\"folio\")\n )\n else:\n # if this is a chant source\n folios = (\n source.chant_set.values_list(\"folio\", flat=True)\n .distinct()\n .order_by(\"folio\")\n )\n context[\"folios\"] = folios\n # the options for the feast selector on the right, only chant sources have this\n context[\"feasts_with_folios\"] = get_feast_selector_options(source, folios)\n return context\n\n\nclass SourceListView(ListView):\n paginate_by = 100\n context_object_name = \"sources\"\n template_name = \"source_list.html\"\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context[\"provenances\"] = (\n Provenance.objects.all().order_by(\"name\").values(\"id\", \"name\")\n )\n context[\"centuries\"] = (\n Century.objects.all().order_by(\"name\").values(\"id\", \"name\")\n )\n return context\n\n def get_queryset(self):\n # use select_related() for foreign keys to reduce DB queries\n queryset = Source.objects.select_related(\n \"rism_siglum\", \"segment\", \"provenance\"\n ).order_by(\"siglum\")\n\n display_unpublished = self.request.user.is_authenticated\n if display_unpublished:\n q_obj_filter = Q()\n else:\n q_obj_filter = Q(published=True)\n\n if self.request.GET.get(\"century\"):\n century_name = Century.objects.get(id=self.request.GET.get(\"century\")).name\n q_obj_filter &= Q(century__name__icontains=century_name)\n\n if 
self.request.GET.get(\"provenance\"):\n provenance_id = int(self.request.GET.get(\"provenance\"))\n q_obj_filter &= Q(provenance__id=provenance_id)\n if self.request.GET.get(\"segment\"):\n segment_id = int(self.request.GET.get(\"segment\"))\n q_obj_filter &= Q(segment__id=segment_id)\n if self.request.GET.get(\"fullSource\") in [\"true\", \"false\"]:\n full_source_str = self.request.GET.get(\"fullSource\")\n if full_source_str == \"true\":\n full_source_q = Q(full_source=True) | Q(full_source=None)\n q_obj_filter &= full_source_q\n else:\n q_obj_filter &= Q(full_source=False)\n\n if self.request.GET.get(\"general\"):\n # Strip spaces at the beginning and end. Then make list of terms split on spaces\n general_search_terms = self.request.GET.get(\"general\").strip(\" \").split(\" \")\n # We need a Q Object for each field we're gonna look into\n title_q = Q()\n siglum_q = Q()\n rism_siglum_q = Q()\n description_q = Q()\n # it seems that old cantus don't look into title and provenance for the general search terms\n # cantus.uwaterloo.ca/source/123901 this source cannot be found by searching its provenance 'Kremsm\u00fcnster' in the general search field\n # provenance_q = Q()\n summary_q = Q()\n\n # For each term, add it to the Q object of each field with an OR operation.\n # We split the terms so that the words can be separated in the actual\n # field, allowing for a more flexible search, and a field needs\n # to match only one of the terms\n for term in general_search_terms:\n title_q |= Q(title__icontains=term)\n siglum_q |= Q(siglum__icontains=term)\n rism_siglum_q |= Q(rism_siglum__name__icontains=term) | Q(\n rism_siglum__description__icontains=term\n )\n description_q |= Q(description__icontains=term)\n summary_q |= Q(summary__icontains=term)\n # provenance_q |= Q(provenance__name__icontains=term)\n # All the Q objects are put together with OR.\n # The end result is that at least one term has to match in at least one\n # field\n # general_search_q = (\n # title_q | siglum_q | rism_siglum_q | description_q | provenance_q\n # )\n general_search_q = (\n title_q | siglum_q | rism_siglum_q | description_q | summary_q\n )\n q_obj_filter &= general_search_q\n\n # For the indexing notes search we follow the same procedure as above but with\n # different fields\n if self.request.GET.get(\"indexing\"):\n # Make list of terms split on spaces\n indexing_search_terms = self.request.GET.get(\"indexing\").split(\" \")\n # We need a Q Object for each field we're gonna look into\n inventoried_by_q = Q()\n full_text_entered_by_q = Q()\n melodies_entered_by_q = Q()\n proofreaders_q = Q()\n other_editors_q = Q()\n indexing_notes_q = Q()\n # For each term, add it to the Q object of each field with an OR operation.\n # We split the terms so that the words can be separated in the actual\n # field, allowing for a more flexible search, and a field needs\n # to match only one of the terms\n for term in indexing_search_terms:\n inventoried_by_q |= Q(inventoried_by__full_name__icontains=term)\n full_text_entered_by_q |= Q(\n full_text_entered_by__full_name__icontains=term\n )\n melodies_entered_by_q |= Q(\n melodies_entered_by__full_name__icontains=term\n )\n proofreaders_q |= Q(proofreaders__full_name__icontains=term)\n other_editors_q |= Q(other_editors__full_name__icontains=term)\n indexing_notes_q |= Q(indexing_notes__icontains=term)\n # All the Q objects are put together with OR.\n # The end result is that at least one term has to match in at least one\n # field\n indexing_search_q = (\n inventoried_by_q\n | 
full_text_entered_by_q\n | melodies_entered_by_q\n | proofreaders_q\n | other_editors_q\n | indexing_notes_q\n )\n q_obj_filter &= indexing_search_q\n\n return queryset.filter(q_obj_filter).prefetch_related(\n Prefetch(\"century\", queryset=Century.objects.all().order_by(\"id\"))\n )\n\n\nclass SourceCreateView(LoginRequiredMixin, UserPassesTestMixin, CreateView):\n model = Source\n template_name = \"source_create_form.html\"\n form_class = SourceCreateForm\n\n def test_func(self):\n user = self.request.user\n # checks if the user is allowed to create sources\n is_authorized = user.groups.filter(\n Q(name=\"project manager\") | Q(name=\"editor\") | Q(name=\"contributor\")\n ).exists()\n\n if is_authorized:\n return True\n else:\n return False\n\n def get_success_url(self):\n return reverse(\"source-create\")\n\n def form_valid(self, form):\n form.instance.created_by = self.request.user\n source = form.save()\n\n # assign this source to the \"current_editors\"\n current_editors = source.current_editors.all()\n\n for editor in current_editors:\n editor.sources_user_can_edit.add(source)\n\n messages.success(\n self.request,\n \"Source created successfully!\",\n )\n\n return HttpResponseRedirect(self.get_success_url())\n\n\nclass SourceEditView(LoginRequiredMixin, UserPassesTestMixin, UpdateView):\n template_name = \"source_edit.html\"\n model = Source\n form_class = SourceEditForm\n pk_url_kwarg = \"source_id\"\n\n def test_func(self):\n user = self.request.user\n source_id = self.kwargs.get(self.pk_url_kwarg)\n source = get_object_or_404(Source, id=source_id)\n\n assigned_to_source = user.sources_user_can_edit.filter(id=source_id)\n\n # checks if the user is a project manager\n is_project_manager = user.groups.filter(name=\"project manager\").exists()\n # checks if the user is an editor\n is_editor = user.groups.filter(name=\"editor\").exists()\n # checks if the user is a contributor\n is_contributor = user.groups.filter(name=\"contributor\").exists()\n\n if (\n (is_project_manager)\n or (is_editor and assigned_to_source)\n or (is_editor and source.created_by == user)\n or (is_contributor and source.created_by == user)\n ):\n return True\n else:\n return False\n\n def form_valid(self, form):\n form.instance.last_updated_by = self.request.user\n\n # remove this source from the old \"current_editors\"\n # assign this source to the new \"current_editors\"\n\n old_current_editors = list(\n Source.objects.get(id=form.instance.id).current_editors.all()\n )\n new_current_editors = form.cleaned_data[\"current_editors\"]\n source = form.save()\n\n for old_editor in old_current_editors:\n old_editor.sources_user_can_edit.remove(source)\n\n for new_editor in new_current_editors:\n new_editor.sources_user_can_edit.add(source)\n\n return HttpResponseRedirect(self.get_success_url())\n", "path": "django/cantusdb_project/main_app/views/source.py"}], "after_files": [{"content": "from django.views.generic import DetailView, ListView, CreateView, UpdateView\nfrom django.db.models import Q, Prefetch\nfrom main_app.models import Source, Provenance, Century\nfrom main_app.forms import SourceCreateForm, SourceEditForm\nfrom django.contrib import messages\nfrom django.urls import reverse\nfrom django.contrib.auth.mixins import LoginRequiredMixin\nfrom django.http import HttpResponseRedirect\nfrom django.contrib.auth.mixins import UserPassesTestMixin\nfrom django.core.exceptions import PermissionDenied\nfrom django.shortcuts import get_object_or_404\nfrom main_app.views.chant import 
get_feast_selector_options\n\n\nclass SourceDetailView(DetailView):\n model = Source\n context_object_name = \"source\"\n template_name = \"source_detail.html\"\n\n def get_context_data(self, **kwargs):\n source = self.get_object()\n display_unpublished = self.request.user.is_authenticated\n if (source.published is False) and (not display_unpublished):\n raise PermissionDenied()\n\n context = super().get_context_data(**kwargs)\n\n if source.segment and source.segment.id == 4064:\n # if this is a sequence source\n context[\"sequences\"] = source.sequence_set.order_by(\"s_sequence\")\n context[\"folios\"] = (\n source.sequence_set.values_list(\"folio\", flat=True)\n .distinct()\n .order_by(\"folio\")\n )\n else:\n # if this is a chant source\n folios = (\n source.chant_set.values_list(\"folio\", flat=True)\n .distinct()\n .order_by(\"folio\")\n )\n context[\"folios\"] = folios\n # the options for the feast selector on the right, only chant sources have this\n context[\"feasts_with_folios\"] = get_feast_selector_options(source, folios)\n return context\n\n\nclass SourceListView(ListView):\n paginate_by = 100\n context_object_name = \"sources\"\n template_name = \"source_list.html\"\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context[\"provenances\"] = (\n Provenance.objects.all().order_by(\"name\").values(\"id\", \"name\")\n )\n context[\"centuries\"] = (\n Century.objects.all().order_by(\"name\").values(\"id\", \"name\")\n )\n return context\n\n def get_queryset(self):\n # use select_related() for foreign keys to reduce DB queries\n queryset = Source.objects.select_related(\n \"rism_siglum\", \"segment\", \"provenance\"\n ).order_by(\"siglum\")\n\n display_unpublished = self.request.user.is_authenticated\n if display_unpublished:\n q_obj_filter = Q()\n else:\n q_obj_filter = Q(published=True)\n\n if self.request.GET.get(\"century\"):\n century_name = Century.objects.get(id=self.request.GET.get(\"century\")).name\n q_obj_filter &= Q(century__name__icontains=century_name)\n\n if self.request.GET.get(\"provenance\"):\n provenance_id = int(self.request.GET.get(\"provenance\"))\n q_obj_filter &= Q(provenance__id=provenance_id)\n if self.request.GET.get(\"segment\"):\n segment_id = int(self.request.GET.get(\"segment\"))\n q_obj_filter &= Q(segment__id=segment_id)\n if self.request.GET.get(\"fullSource\") in [\"true\", \"false\"]:\n full_source_str = self.request.GET.get(\"fullSource\")\n if full_source_str == \"true\":\n full_source_q = Q(full_source=True) | Q(full_source=None)\n q_obj_filter &= full_source_q\n else:\n q_obj_filter &= Q(full_source=False)\n\n if self.request.GET.get(\"general\"):\n # Strip spaces at the beginning and end. 
Then make list of terms split on spaces\n general_search_terms = self.request.GET.get(\"general\").strip(\" \").split(\" \")\n # We need a Q Object for each field we're gonna look into\n title_q = Q()\n siglum_q = Q()\n rism_siglum_q = Q()\n description_q = Q()\n # it seems that old cantus don't look into title and provenance for the general search terms\n # cantus.uwaterloo.ca/source/123901 this source cannot be found by searching its provenance 'Kremsm\u00fcnster' in the general search field\n # provenance_q = Q()\n summary_q = Q()\n\n # For each term, add it to the Q object of each field with an OR operation.\n # We split the terms so that the words can be separated in the actual\n # field, allowing for a more flexible search, and a field needs\n # to match only one of the terms\n for term in general_search_terms:\n title_q |= Q(title__icontains=term)\n siglum_q |= Q(siglum__icontains=term)\n rism_siglum_q |= Q(rism_siglum__name__icontains=term) | Q(\n rism_siglum__description__icontains=term\n )\n description_q |= Q(description__icontains=term)\n summary_q |= Q(summary__icontains=term)\n # provenance_q |= Q(provenance__name__icontains=term)\n # All the Q objects are put together with OR.\n # The end result is that at least one term has to match in at least one\n # field\n # general_search_q = (\n # title_q | siglum_q | rism_siglum_q | description_q | provenance_q\n # )\n general_search_q = (\n title_q | siglum_q | rism_siglum_q | description_q | summary_q\n )\n q_obj_filter &= general_search_q\n\n # For the indexing notes search we follow the same procedure as above but with\n # different fields\n if self.request.GET.get(\"indexing\"):\n # Make list of terms split on spaces\n indexing_search_terms = self.request.GET.get(\"indexing\").split(\" \")\n # We need a Q Object for each field we're gonna look into\n inventoried_by_q = Q()\n full_text_entered_by_q = Q()\n melodies_entered_by_q = Q()\n proofreaders_q = Q()\n other_editors_q = Q()\n indexing_notes_q = Q()\n # For each term, add it to the Q object of each field with an OR operation.\n # We split the terms so that the words can be separated in the actual\n # field, allowing for a more flexible search, and a field needs\n # to match only one of the terms\n for term in indexing_search_terms:\n inventoried_by_q |= Q(inventoried_by__full_name__icontains=term)\n full_text_entered_by_q |= Q(\n full_text_entered_by__full_name__icontains=term\n )\n melodies_entered_by_q |= Q(\n melodies_entered_by__full_name__icontains=term\n )\n proofreaders_q |= Q(proofreaders__full_name__icontains=term)\n other_editors_q |= Q(other_editors__full_name__icontains=term)\n indexing_notes_q |= Q(indexing_notes__icontains=term)\n # All the Q objects are put together with OR.\n # The end result is that at least one term has to match in at least one\n # field\n indexing_search_q = (\n inventoried_by_q\n | full_text_entered_by_q\n | melodies_entered_by_q\n | proofreaders_q\n | other_editors_q\n | indexing_notes_q\n )\n q_obj_filter &= indexing_search_q\n\n return queryset.filter(q_obj_filter).prefetch_related(\n Prefetch(\"century\", queryset=Century.objects.all().order_by(\"id\"))\n )\n\n\nclass SourceCreateView(LoginRequiredMixin, UserPassesTestMixin, CreateView):\n model = Source\n template_name = \"source_create_form.html\"\n form_class = SourceCreateForm\n\n def test_func(self):\n user = self.request.user\n # checks if the user is allowed to create sources\n is_authorized = user.groups.filter(\n Q(name=\"project manager\") | Q(name=\"editor\") | 
Q(name=\"contributor\")\n ).exists()\n\n if is_authorized:\n return True\n else:\n return False\n\n def get_success_url(self):\n return reverse(\"source-create\")\n\n def form_valid(self, form):\n form.instance.created_by = self.request.user\n source = form.save()\n\n # assign this source to the \"current_editors\"\n current_editors = source.current_editors.all()\n\n for editor in current_editors:\n editor.sources_user_can_edit.add(source)\n\n messages.success(\n self.request,\n \"Source created successfully!\",\n )\n\n return HttpResponseRedirect(self.get_success_url())\n\n\nclass SourceEditView(\n LoginRequiredMixin, UserPassesTestMixin, UpdateView, SourceDetailView\n):\n template_name = \"source_edit.html\"\n model = Source\n form_class = SourceEditForm\n pk_url_kwarg = \"source_id\"\n\n def test_func(self):\n user = self.request.user\n source_id = self.kwargs.get(self.pk_url_kwarg)\n source = get_object_or_404(Source, id=source_id)\n\n assigned_to_source = user.sources_user_can_edit.filter(id=source_id)\n\n # checks if the user is a project manager\n is_project_manager = user.groups.filter(name=\"project manager\").exists()\n # checks if the user is an editor\n is_editor = user.groups.filter(name=\"editor\").exists()\n # checks if the user is a contributor\n is_contributor = user.groups.filter(name=\"contributor\").exists()\n\n if (\n (is_project_manager)\n or (is_editor and assigned_to_source)\n or (is_editor and source.created_by == user)\n or (is_contributor and source.created_by == user)\n ):\n return True\n else:\n return False\n\n def form_valid(self, form):\n form.instance.last_updated_by = self.request.user\n\n # remove this source from the old \"current_editors\"\n # assign this source to the new \"current_editors\"\n\n old_current_editors = list(\n Source.objects.get(id=form.instance.id).current_editors.all()\n )\n new_current_editors = form.cleaned_data[\"current_editors\"]\n source = form.save()\n\n for old_editor in old_current_editors:\n old_editor.sources_user_can_edit.remove(source)\n\n for new_editor in new_current_editors:\n new_editor.sources_user_can_edit.add(source)\n\n return HttpResponseRedirect(self.get_success_url())\n", "path": "django/cantusdb_project/main_app/views/source.py"}]} |
gh_patches_debug_1158 | rasdani/github-patches | git_diff | buildbot__buildbot-6121 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Git poller state exceeds database field size after some time
When the Git poller covers all branches, it can happen that eventually the `lastRev` state gets too large so that it cannot be stored in the DB anymore (resulting in `Data too long for column 'value_json' at row 1` exceptions). This is because it never forgets branches that used to exist in the past (https://github.com/buildbot/buildbot/blob/master/master/buildbot/changes/gitpoller.py#L239), so its `lastRev` state never shrinks.
I'm wondering whether the `lastRev` state really needs to be updated (and not replaced) by the current set of references. The only reason why update is used (instead of replacing) I can think of is that if a branch is deleted and then restored (with the same HEAD), it will not trigger a change.
I'd like to create a PR which removes the updating, or at least makes it configurable. What is the preferred solution here?
--- END ISSUE ---
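The trade-off described in the issue above is easy to see with plain dictionaries: `dict.update()` only ever adds or overwrites keys, so a branch that disappears upstream stays in the poller's saved state forever, and the JSON serialization of that state keeps growing until it no longer fits the database column. A toy sketch (branch names and hashes made up):

```python
# State saved after an earlier poll.
last_rev = {"main": "a1b2c3", "feature/old": "d4e5f6"}

# Result of the current poll: "feature/old" has been deleted upstream.
revs = {"main": "0a1b2c"}

merged = dict(last_rev)
merged.update(revs)        # current GitPoller behaviour
print(merged)              # {'main': '0a1b2c', 'feature/old': 'd4e5f6'} -- stale entry kept

replaced = dict(revs)      # proposed behaviour: state mirrors the branches just polled
print(replaced)            # {'main': '0a1b2c'} -- state shrinks with the repository
```

The only thing the merge semantics buy, as the issue itself notes, is that a branch deleted and later restored with the same head is still "known" and therefore does not produce a second change.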
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `master/buildbot/changes/gitpoller.py`
Content:
```
1 # This file is part of Buildbot. Buildbot is free software: you can
2 # redistribute it and/or modify it under the terms of the GNU General Public
3 # License as published by the Free Software Foundation, version 2.
4 #
5 # This program is distributed in the hope that it will be useful, but WITHOUT
6 # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
7 # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
8 # details.
9 #
10 # You should have received a copy of the GNU General Public License along with
11 # this program; if not, write to the Free Software Foundation, Inc., 51
12 # Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
13 #
14 # Copyright Buildbot Team Members
15
16 import os
17 import re
18 import stat
19 from urllib.parse import quote as urlquote
20
21 from twisted.internet import defer
22 from twisted.python import log
23
24 from buildbot import config
25 from buildbot.changes import base
26 from buildbot.util import bytes2unicode
27 from buildbot.util import private_tempdir
28 from buildbot.util import runprocess
29 from buildbot.util.git import GitMixin
30 from buildbot.util.git import getSshKnownHostsContents
31 from buildbot.util.misc import writeLocalFile
32 from buildbot.util.state import StateMixin
33
34
35 class GitError(Exception):
36
37 """Raised when git exits with code 128."""
38
39
40 class GitPoller(base.PollingChangeSource, StateMixin, GitMixin):
41
42 """This source will poll a remote git repo for changes and submit
43 them to the change master."""
44
45 compare_attrs = ("repourl", "branches", "workdir", "pollInterval", "gitbin", "usetimestamps",
46 "category", "project", "pollAtLaunch", "buildPushesWithNoCommits",
47 "sshPrivateKey", "sshHostKey", "sshKnownHosts", "pollRandomDelayMin",
48 "pollRandomDelayMax")
49
50 secrets = ("sshPrivateKey", "sshHostKey", "sshKnownHosts")
51
52 def __init__(self, repourl, branches=None, branch=None, workdir=None, pollInterval=10 * 60,
53 gitbin="git", usetimestamps=True, category=None, project=None, pollinterval=-2,
54 fetch_refspec=None, encoding="utf-8", name=None, pollAtLaunch=False,
55 buildPushesWithNoCommits=False, only_tags=False, sshPrivateKey=None,
56 sshHostKey=None, sshKnownHosts=None, pollRandomDelayMin=0, pollRandomDelayMax=0):
57
58 # for backward compatibility; the parameter used to be spelled with 'i'
59 if pollinterval != -2:
60 pollInterval = pollinterval
61
62 if name is None:
63 name = repourl
64
65 super().__init__(name=name, pollInterval=pollInterval, pollAtLaunch=pollAtLaunch,
66 pollRandomDelayMin=pollRandomDelayMin,
67 pollRandomDelayMax=pollRandomDelayMax, sshPrivateKey=sshPrivateKey,
68 sshHostKey=sshHostKey, sshKnownHosts=sshKnownHosts)
69
70 if project is None:
71 project = ''
72
73 if only_tags and (branch or branches):
74 config.error("GitPoller: can't specify only_tags and branch/branches")
75 if branch and branches:
76 config.error("GitPoller: can't specify both branch and branches")
77 elif branch:
78 branches = [branch]
79 elif not branches:
80 if only_tags:
81 branches = lambda ref: ref.startswith('refs/tags/') # noqa: E731
82 else:
83 branches = ['master']
84
85 self.repourl = repourl
86 self.branches = branches
87 self.encoding = encoding
88 self.buildPushesWithNoCommits = buildPushesWithNoCommits
89 self.gitbin = gitbin
90 self.workdir = workdir
91 self.usetimestamps = usetimestamps
92 self.category = category if callable(
93 category) else bytes2unicode(category, encoding=self.encoding)
94 self.project = bytes2unicode(project, encoding=self.encoding)
95 self.changeCount = 0
96 self.lastRev = {}
97 self.sshPrivateKey = sshPrivateKey
98 self.sshHostKey = sshHostKey
99 self.sshKnownHosts = sshKnownHosts
100 self.setupGit(logname='GitPoller')
101
102 if fetch_refspec is not None:
103 config.error("GitPoller: fetch_refspec is no longer supported. "
104 "Instead, only the given branches are downloaded.")
105
106 if self.workdir is None:
107 self.workdir = 'gitpoller-work'
108
109 @defer.inlineCallbacks
110 def _checkGitFeatures(self):
111 stdout = yield self._dovccmd('--version', [])
112
113 self.parseGitFeatures(stdout)
114 if not self.gitInstalled:
115 raise EnvironmentError('Git is not installed')
116
117 if (self.sshPrivateKey is not None and
118 not self.supportsSshPrivateKeyAsEnvOption):
119 raise EnvironmentError('SSH private keys require Git 2.3.0 or newer')
120
121 @defer.inlineCallbacks
122 def activate(self):
123 # make our workdir absolute, relative to the master's basedir
124 if not os.path.isabs(self.workdir):
125 self.workdir = os.path.join(self.master.basedir, self.workdir)
126 log.msg("gitpoller: using workdir '{}'".format(self.workdir))
127
128 try:
129 self.lastRev = yield self.getState('lastRev', {})
130
131 super().activate()
132 except Exception as e:
133 log.err(e, 'while initializing GitPoller repository')
134
135 def describe(self):
136 str = ('GitPoller watching the remote git repository ' +
137 bytes2unicode(self.repourl, self.encoding))
138
139 if self.branches:
140 if self.branches is True:
141 str += ', branches: ALL'
142 elif not callable(self.branches):
143 str += ', branches: ' + ', '.join(self.branches)
144
145 if not self.master:
146 str += " [STOPPED - check log]"
147
148 return str
149
150 def _getBranches(self):
151 d = self._dovccmd('ls-remote', ['--refs', self.repourl])
152
153 @d.addCallback
154 def parseRemote(rows):
155 branches = []
156 for row in rows.splitlines():
157 if '\t' not in row:
158 # Not a useful line
159 continue
160 sha, ref = row.split("\t")
161 branches.append(ref)
162 return branches
163 return d
164
165 def _headsFilter(self, branch):
166 """Filter out remote references that don't begin with 'refs/heads'."""
167 return branch.startswith("refs/heads/")
168
169 def _removeHeads(self, branch):
170 """Remove 'refs/heads/' prefix from remote references."""
171 if branch.startswith("refs/heads/"):
172 branch = branch[11:]
173 return branch
174
175 def _trackerBranch(self, branch):
176 # manually quote tilde for Python 3.7
177 url = urlquote(self.repourl, '').replace('~', '%7E')
178 return "refs/buildbot/{}/{}".format(url, self._removeHeads(branch))
179
180 def poll_should_exit(self):
181 # A single gitpoller loop may take a while on a loaded master, which would block
182 # reconfiguration, so we try to exit early.
183 return not self.doPoll.running
184
185 @defer.inlineCallbacks
186 def poll(self):
187 yield self._checkGitFeatures()
188
189 try:
190 yield self._dovccmd('init', ['--bare', self.workdir])
191 except GitError as e:
192 log.msg(e.args[0])
193 return
194
195 branches = self.branches if self.branches else []
196 remote_refs = yield self._getBranches()
197
198 if self.poll_should_exit():
199 return
200
201 if branches is True or callable(branches):
202 if callable(self.branches):
203 branches = [b for b in remote_refs if self.branches(b)]
204 else:
205 branches = [b for b in remote_refs if self._headsFilter(b)]
206 elif branches and remote_refs:
207 remote_branches = [self._removeHeads(b) for b in remote_refs]
208 branches = sorted(list(set(branches) & set(remote_branches)))
209
210 refspecs = [
211 '+{}:{}'.format(self._removeHeads(branch), self._trackerBranch(branch))
212 for branch in branches
213 ]
214
215 try:
216 yield self._dovccmd('fetch', [self.repourl] + refspecs,
217 path=self.workdir)
218 except GitError as e:
219 log.msg(e.args[0])
220 return
221
222 revs = {}
223 log.msg('gitpoller: processing changes from "{}"'.format(self.repourl))
224 for branch in branches:
225 try:
226 if self.poll_should_exit(): # pragma: no cover
227 # Note that we still want to update the last known revisions for the branches
228 # we did process
229 break
230
231 rev = yield self._dovccmd(
232 'rev-parse', [self._trackerBranch(branch)], path=self.workdir)
233 revs[branch] = bytes2unicode(rev, self.encoding)
234 yield self._process_changes(revs[branch], branch)
235 except Exception:
236 log.err(_why="trying to poll branch {} of {}".format(
237 branch, self.repourl))
238
239 self.lastRev.update(revs)
240 yield self.setState('lastRev', self.lastRev)
241
242 def _get_commit_comments(self, rev):
243 args = ['--no-walk', r'--format=%s%n%b', rev, '--']
244 d = self._dovccmd('log', args, path=self.workdir)
245 return d
246
247 def _get_commit_timestamp(self, rev):
248 # unix timestamp
249 args = ['--no-walk', r'--format=%ct', rev, '--']
250 d = self._dovccmd('log', args, path=self.workdir)
251
252 @d.addCallback
253 def process(git_output):
254 if self.usetimestamps:
255 try:
256 stamp = int(git_output)
257 except Exception as e:
258 log.msg(('gitpoller: caught exception converting output \'{}\' to timestamp'
259 ).format(git_output))
260 raise e
261 return stamp
262 return None
263 return d
264
265 def _get_commit_files(self, rev):
266 args = ['--name-only', '--no-walk', r'--format=%n', rev, '--']
267 d = self._dovccmd('log', args, path=self.workdir)
268
269 def decode_file(file):
270 # git use octal char sequences in quotes when non ASCII
271 match = re.match('^"(.*)"$', file)
272 if match:
273 file = bytes2unicode(match.groups()[0], encoding=self.encoding,
274 errors='unicode_escape')
275 return bytes2unicode(file, encoding=self.encoding)
276
277 @d.addCallback
278 def process(git_output):
279 fileList = [decode_file(file)
280 for file in
281 [s for s in git_output.splitlines() if len(s)]]
282 return fileList
283 return d
284
285 def _get_commit_author(self, rev):
286 args = ['--no-walk', r'--format=%aN <%aE>', rev, '--']
287 d = self._dovccmd('log', args, path=self.workdir)
288
289 @d.addCallback
290 def process(git_output):
291 if not git_output:
292 raise EnvironmentError('could not get commit author for rev')
293 return git_output
294 return d
295
296 @defer.inlineCallbacks
297 def _get_commit_committer(self, rev):
298 args = ['--no-walk', r'--format=%cN <%cE>', rev, '--']
299 res = yield self._dovccmd('log', args, path=self.workdir)
300 if not res:
301 raise EnvironmentError('could not get commit committer for rev')
302 return res
303
304 @defer.inlineCallbacks
305 def _process_changes(self, newRev, branch):
306 """
307 Read changes since last change.
308
309 - Read list of commit hashes.
310 - Extract details from each commit.
311 - Add changes to database.
312 """
313
314 # initial run, don't parse all history
315 if not self.lastRev:
316 return
317
318 # get the change list
319 revListArgs = (['--ignore-missing'] +
320 ['--format=%H', '{}'.format(newRev)] +
321 ['^' + rev
322 for rev in sorted(self.lastRev.values())] +
323 ['--'])
324 self.changeCount = 0
325 results = yield self._dovccmd('log', revListArgs, path=self.workdir)
326
327 # process oldest change first
328 revList = results.split()
329 revList.reverse()
330
331 if self.buildPushesWithNoCommits and not revList:
332 existingRev = self.lastRev.get(branch)
333 if existingRev != newRev:
334 revList = [newRev]
335 if existingRev is None:
336 # This branch was completely unknown, rebuild
337 log.msg('gitpoller: rebuilding {} for new branch "{}"'.format(
338 newRev, branch))
339 else:
340 # This branch is known, but it now points to a different
341 # commit than last time we saw it, rebuild.
342 log.msg('gitpoller: rebuilding {} for updated branch "{}"'.format(
343 newRev, branch))
344
345 self.changeCount = len(revList)
346 self.lastRev[branch] = newRev
347
348 if self.changeCount:
349 log.msg('gitpoller: processing {} changes: {} from "{}" branch "{}"'.format(
350 self.changeCount, revList, self.repourl, branch))
351
352 for rev in revList:
353 dl = defer.DeferredList([
354 self._get_commit_timestamp(rev),
355 self._get_commit_author(rev),
356 self._get_commit_committer(rev),
357 self._get_commit_files(rev),
358 self._get_commit_comments(rev),
359 ], consumeErrors=True)
360
361 results = yield dl
362
363 # check for failures
364 failures = [r[1] for r in results if not r[0]]
365 if failures:
366 for failure in failures:
367 log.err(
368 failure, "while processing changes for {} {}".format(newRev, branch))
369 # just fail on the first error; they're probably all related!
370 failures[0].raiseException()
371
372 timestamp, author, committer, files, comments = [r[1] for r in results]
373
374 yield self.master.data.updates.addChange(
375 author=author,
376 committer=committer,
377 revision=bytes2unicode(rev, encoding=self.encoding),
378 files=files, comments=comments, when_timestamp=timestamp,
379 branch=bytes2unicode(self._removeHeads(branch)),
380 project=self.project,
381 repository=bytes2unicode(self.repourl, encoding=self.encoding),
382 category=self.category, src='git')
383
384 def _isSshPrivateKeyNeededForCommand(self, command):
385 commandsThatNeedKey = [
386 'fetch',
387 'ls-remote',
388 ]
389 if self.sshPrivateKey is not None and command in commandsThatNeedKey:
390 return True
391 return False
392
393 def _downloadSshPrivateKey(self, keyPath):
394 # We change the permissions of the key file to be user-readable only so
395 # that ssh does not complain. This is not used for security because the
396 # parent directory will have proper permissions.
397 writeLocalFile(keyPath, self.sshPrivateKey, mode=stat.S_IRUSR)
398
399 def _downloadSshKnownHosts(self, path):
400 if self.sshKnownHosts is not None:
401 contents = self.sshKnownHosts
402 else:
403 contents = getSshKnownHostsContents(self.sshHostKey)
404 writeLocalFile(path, contents)
405
406 def _getSshPrivateKeyPath(self, ssh_data_path):
407 return os.path.join(ssh_data_path, 'ssh-key')
408
409 def _getSshKnownHostsPath(self, ssh_data_path):
410 return os.path.join(ssh_data_path, 'ssh-known-hosts')
411
412 @defer.inlineCallbacks
413 def _dovccmd(self, command, args, path=None):
414 if self._isSshPrivateKeyNeededForCommand(command):
415 with private_tempdir.PrivateTemporaryDirectory(
416 dir=self.workdir, prefix='.buildbot-ssh') as tmp_path:
417 stdout = yield self._dovccmdImpl(command, args, path, tmp_path)
418 else:
419 stdout = yield self._dovccmdImpl(command, args, path, None)
420 return stdout
421
422 @defer.inlineCallbacks
423 def _dovccmdImpl(self, command, args, path, ssh_workdir):
424 full_args = []
425 full_env = os.environ.copy()
426
427 if self._isSshPrivateKeyNeededForCommand(command):
428 key_path = self._getSshPrivateKeyPath(ssh_workdir)
429 self._downloadSshPrivateKey(key_path)
430
431 known_hosts_path = None
432 if self.sshHostKey is not None or self.sshKnownHosts is not None:
433 known_hosts_path = self._getSshKnownHostsPath(ssh_workdir)
434 self._downloadSshKnownHosts(known_hosts_path)
435
436 self.adjustCommandParamsForSshPrivateKey(full_args, full_env,
437 key_path, None,
438 known_hosts_path)
439
440 full_args += [command] + args
441
442 res = yield runprocess.run_process(self.master.reactor, [self.gitbin] + full_args, path,
443 env=full_env)
444 (code, stdout, stderr) = res
445 stdout = bytes2unicode(stdout, self.encoding)
446 stderr = bytes2unicode(stderr, self.encoding)
447 if code != 0:
448 if code == 128:
449 raise GitError('command {} in {} on repourl {} failed with exit code {}: {}'.format(
450 full_args, path, self.repourl, code, stderr))
451 raise EnvironmentError(('command {} in {} on repourl {} failed with exit code {}: {}'
452 ).format(full_args, path, self.repourl, code, stderr))
453 return stdout.strip()
454
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/master/buildbot/changes/gitpoller.py b/master/buildbot/changes/gitpoller.py
--- a/master/buildbot/changes/gitpoller.py
+++ b/master/buildbot/changes/gitpoller.py
@@ -236,7 +236,7 @@
log.err(_why="trying to poll branch {} of {}".format(
branch, self.repourl))
- self.lastRev.update(revs)
+ self.lastRev = revs
yield self.setState('lastRev', self.lastRev)
def _get_commit_comments(self, rev):
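The merged fix is just the assignment shown above: after each poll, `lastRev` mirrors exactly the branches that were processed, so entries for deleted branches are dropped and the serialized state stops growing. As far as I can tell from `_process_changes()` in the listing, the cost is the one the issue anticipated: a branch that is deleted and later restored is treated as new on the next poll (and, with `buildPushesWithNoCommits=True`, triggers one extra build). If someone needed the old behaviour, a small opt-in flag would be enough — the helper below is purely hypothetical and not part of the merged change:

```python
# Hypothetical opt-in (not in the merged fix): keep the old merge semantics
# behind a constructor flag, defaulting to the new replace semantics.
def _save_last_rev(self, revs):
    if getattr(self, "merge_last_rev", False):   # assumed GitPoller.__init__ kwarg
        self.lastRev.update(revs)                # old behaviour: entries never dropped
    else:
        self.lastRev = revs                      # new behaviour: state mirrors this poll
    return self.setState("lastRev", self.lastRev)
```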
| {"golden_diff": "diff --git a/master/buildbot/changes/gitpoller.py b/master/buildbot/changes/gitpoller.py\n--- a/master/buildbot/changes/gitpoller.py\n+++ b/master/buildbot/changes/gitpoller.py\n@@ -236,7 +236,7 @@\n log.err(_why=\"trying to poll branch {} of {}\".format(\n branch, self.repourl))\n \n- self.lastRev.update(revs)\n+ self.lastRev = revs\n yield self.setState('lastRev', self.lastRev)\n \n def _get_commit_comments(self, rev):\n", "issue": "Git poller state exceeds database field size after some time\nWhen the Git poller covers all branches, it can happen that eventually the `lastRev` state gets too large so that it cannot be stored in the DB anymore (resulting in `Data too long for column 'value_json' at row 1` exceptions). This is because it never forgets branches that used to exist in the past (https://github.com/buildbot/buildbot/blob/master/master/buildbot/changes/gitpoller.py#L239), so its `lastRev` state never shrinks.\r\n\r\nI'm wondering whether the `lastRev` state really needs to be updated (and not replaced) by the current set of references. The only reason why update is used (instead of replacing) I can think of is that if a branch is deleted and then restored (with the same HEAD), it will not trigger a change.\r\n\r\nI'd like to create a PR which removes the updating, or at least makes it configurable. What is the preferred solution here?\n", "before_files": [{"content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. 
See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\nimport os\nimport re\nimport stat\nfrom urllib.parse import quote as urlquote\n\nfrom twisted.internet import defer\nfrom twisted.python import log\n\nfrom buildbot import config\nfrom buildbot.changes import base\nfrom buildbot.util import bytes2unicode\nfrom buildbot.util import private_tempdir\nfrom buildbot.util import runprocess\nfrom buildbot.util.git import GitMixin\nfrom buildbot.util.git import getSshKnownHostsContents\nfrom buildbot.util.misc import writeLocalFile\nfrom buildbot.util.state import StateMixin\n\n\nclass GitError(Exception):\n\n \"\"\"Raised when git exits with code 128.\"\"\"\n\n\nclass GitPoller(base.PollingChangeSource, StateMixin, GitMixin):\n\n \"\"\"This source will poll a remote git repo for changes and submit\n them to the change master.\"\"\"\n\n compare_attrs = (\"repourl\", \"branches\", \"workdir\", \"pollInterval\", \"gitbin\", \"usetimestamps\",\n \"category\", \"project\", \"pollAtLaunch\", \"buildPushesWithNoCommits\",\n \"sshPrivateKey\", \"sshHostKey\", \"sshKnownHosts\", \"pollRandomDelayMin\",\n \"pollRandomDelayMax\")\n\n secrets = (\"sshPrivateKey\", \"sshHostKey\", \"sshKnownHosts\")\n\n def __init__(self, repourl, branches=None, branch=None, workdir=None, pollInterval=10 * 60,\n gitbin=\"git\", usetimestamps=True, category=None, project=None, pollinterval=-2,\n fetch_refspec=None, encoding=\"utf-8\", name=None, pollAtLaunch=False,\n buildPushesWithNoCommits=False, only_tags=False, sshPrivateKey=None,\n sshHostKey=None, sshKnownHosts=None, pollRandomDelayMin=0, pollRandomDelayMax=0):\n\n # for backward compatibility; the parameter used to be spelled with 'i'\n if pollinterval != -2:\n pollInterval = pollinterval\n\n if name is None:\n name = repourl\n\n super().__init__(name=name, pollInterval=pollInterval, pollAtLaunch=pollAtLaunch,\n pollRandomDelayMin=pollRandomDelayMin,\n pollRandomDelayMax=pollRandomDelayMax, sshPrivateKey=sshPrivateKey,\n sshHostKey=sshHostKey, sshKnownHosts=sshKnownHosts)\n\n if project is None:\n project = ''\n\n if only_tags and (branch or branches):\n config.error(\"GitPoller: can't specify only_tags and branch/branches\")\n if branch and branches:\n config.error(\"GitPoller: can't specify both branch and branches\")\n elif branch:\n branches = [branch]\n elif not branches:\n if only_tags:\n branches = lambda ref: ref.startswith('refs/tags/') # noqa: E731\n else:\n branches = ['master']\n\n self.repourl = repourl\n self.branches = branches\n self.encoding = encoding\n self.buildPushesWithNoCommits = buildPushesWithNoCommits\n self.gitbin = gitbin\n self.workdir = workdir\n self.usetimestamps = usetimestamps\n self.category = category if callable(\n category) else bytes2unicode(category, encoding=self.encoding)\n self.project = bytes2unicode(project, encoding=self.encoding)\n self.changeCount = 0\n self.lastRev = {}\n self.sshPrivateKey = sshPrivateKey\n self.sshHostKey = sshHostKey\n self.sshKnownHosts = sshKnownHosts\n self.setupGit(logname='GitPoller')\n\n if fetch_refspec is not None:\n config.error(\"GitPoller: fetch_refspec is no longer supported. 
\"\n \"Instead, only the given branches are downloaded.\")\n\n if self.workdir is None:\n self.workdir = 'gitpoller-work'\n\n @defer.inlineCallbacks\n def _checkGitFeatures(self):\n stdout = yield self._dovccmd('--version', [])\n\n self.parseGitFeatures(stdout)\n if not self.gitInstalled:\n raise EnvironmentError('Git is not installed')\n\n if (self.sshPrivateKey is not None and\n not self.supportsSshPrivateKeyAsEnvOption):\n raise EnvironmentError('SSH private keys require Git 2.3.0 or newer')\n\n @defer.inlineCallbacks\n def activate(self):\n # make our workdir absolute, relative to the master's basedir\n if not os.path.isabs(self.workdir):\n self.workdir = os.path.join(self.master.basedir, self.workdir)\n log.msg(\"gitpoller: using workdir '{}'\".format(self.workdir))\n\n try:\n self.lastRev = yield self.getState('lastRev', {})\n\n super().activate()\n except Exception as e:\n log.err(e, 'while initializing GitPoller repository')\n\n def describe(self):\n str = ('GitPoller watching the remote git repository ' +\n bytes2unicode(self.repourl, self.encoding))\n\n if self.branches:\n if self.branches is True:\n str += ', branches: ALL'\n elif not callable(self.branches):\n str += ', branches: ' + ', '.join(self.branches)\n\n if not self.master:\n str += \" [STOPPED - check log]\"\n\n return str\n\n def _getBranches(self):\n d = self._dovccmd('ls-remote', ['--refs', self.repourl])\n\n @d.addCallback\n def parseRemote(rows):\n branches = []\n for row in rows.splitlines():\n if '\\t' not in row:\n # Not a useful line\n continue\n sha, ref = row.split(\"\\t\")\n branches.append(ref)\n return branches\n return d\n\n def _headsFilter(self, branch):\n \"\"\"Filter out remote references that don't begin with 'refs/heads'.\"\"\"\n return branch.startswith(\"refs/heads/\")\n\n def _removeHeads(self, branch):\n \"\"\"Remove 'refs/heads/' prefix from remote references.\"\"\"\n if branch.startswith(\"refs/heads/\"):\n branch = branch[11:]\n return branch\n\n def _trackerBranch(self, branch):\n # manually quote tilde for Python 3.7\n url = urlquote(self.repourl, '').replace('~', '%7E')\n return \"refs/buildbot/{}/{}\".format(url, self._removeHeads(branch))\n\n def poll_should_exit(self):\n # A single gitpoller loop may take a while on a loaded master, which would block\n # reconfiguration, so we try to exit early.\n return not self.doPoll.running\n\n @defer.inlineCallbacks\n def poll(self):\n yield self._checkGitFeatures()\n\n try:\n yield self._dovccmd('init', ['--bare', self.workdir])\n except GitError as e:\n log.msg(e.args[0])\n return\n\n branches = self.branches if self.branches else []\n remote_refs = yield self._getBranches()\n\n if self.poll_should_exit():\n return\n\n if branches is True or callable(branches):\n if callable(self.branches):\n branches = [b for b in remote_refs if self.branches(b)]\n else:\n branches = [b for b in remote_refs if self._headsFilter(b)]\n elif branches and remote_refs:\n remote_branches = [self._removeHeads(b) for b in remote_refs]\n branches = sorted(list(set(branches) & set(remote_branches)))\n\n refspecs = [\n '+{}:{}'.format(self._removeHeads(branch), self._trackerBranch(branch))\n for branch in branches\n ]\n\n try:\n yield self._dovccmd('fetch', [self.repourl] + refspecs,\n path=self.workdir)\n except GitError as e:\n log.msg(e.args[0])\n return\n\n revs = {}\n log.msg('gitpoller: processing changes from \"{}\"'.format(self.repourl))\n for branch in branches:\n try:\n if self.poll_should_exit(): # pragma: no cover\n # Note that we still want to update the last 
known revisions for the branches\n # we did process\n break\n\n rev = yield self._dovccmd(\n 'rev-parse', [self._trackerBranch(branch)], path=self.workdir)\n revs[branch] = bytes2unicode(rev, self.encoding)\n yield self._process_changes(revs[branch], branch)\n except Exception:\n log.err(_why=\"trying to poll branch {} of {}\".format(\n branch, self.repourl))\n\n self.lastRev.update(revs)\n yield self.setState('lastRev', self.lastRev)\n\n def _get_commit_comments(self, rev):\n args = ['--no-walk', r'--format=%s%n%b', rev, '--']\n d = self._dovccmd('log', args, path=self.workdir)\n return d\n\n def _get_commit_timestamp(self, rev):\n # unix timestamp\n args = ['--no-walk', r'--format=%ct', rev, '--']\n d = self._dovccmd('log', args, path=self.workdir)\n\n @d.addCallback\n def process(git_output):\n if self.usetimestamps:\n try:\n stamp = int(git_output)\n except Exception as e:\n log.msg(('gitpoller: caught exception converting output \\'{}\\' to timestamp'\n ).format(git_output))\n raise e\n return stamp\n return None\n return d\n\n def _get_commit_files(self, rev):\n args = ['--name-only', '--no-walk', r'--format=%n', rev, '--']\n d = self._dovccmd('log', args, path=self.workdir)\n\n def decode_file(file):\n # git use octal char sequences in quotes when non ASCII\n match = re.match('^\"(.*)\"$', file)\n if match:\n file = bytes2unicode(match.groups()[0], encoding=self.encoding,\n errors='unicode_escape')\n return bytes2unicode(file, encoding=self.encoding)\n\n @d.addCallback\n def process(git_output):\n fileList = [decode_file(file)\n for file in\n [s for s in git_output.splitlines() if len(s)]]\n return fileList\n return d\n\n def _get_commit_author(self, rev):\n args = ['--no-walk', r'--format=%aN <%aE>', rev, '--']\n d = self._dovccmd('log', args, path=self.workdir)\n\n @d.addCallback\n def process(git_output):\n if not git_output:\n raise EnvironmentError('could not get commit author for rev')\n return git_output\n return d\n\n @defer.inlineCallbacks\n def _get_commit_committer(self, rev):\n args = ['--no-walk', r'--format=%cN <%cE>', rev, '--']\n res = yield self._dovccmd('log', args, path=self.workdir)\n if not res:\n raise EnvironmentError('could not get commit committer for rev')\n return res\n\n @defer.inlineCallbacks\n def _process_changes(self, newRev, branch):\n \"\"\"\n Read changes since last change.\n\n - Read list of commit hashes.\n - Extract details from each commit.\n - Add changes to database.\n \"\"\"\n\n # initial run, don't parse all history\n if not self.lastRev:\n return\n\n # get the change list\n revListArgs = (['--ignore-missing'] +\n ['--format=%H', '{}'.format(newRev)] +\n ['^' + rev\n for rev in sorted(self.lastRev.values())] +\n ['--'])\n self.changeCount = 0\n results = yield self._dovccmd('log', revListArgs, path=self.workdir)\n\n # process oldest change first\n revList = results.split()\n revList.reverse()\n\n if self.buildPushesWithNoCommits and not revList:\n existingRev = self.lastRev.get(branch)\n if existingRev != newRev:\n revList = [newRev]\n if existingRev is None:\n # This branch was completely unknown, rebuild\n log.msg('gitpoller: rebuilding {} for new branch \"{}\"'.format(\n newRev, branch))\n else:\n # This branch is known, but it now points to a different\n # commit than last time we saw it, rebuild.\n log.msg('gitpoller: rebuilding {} for updated branch \"{}\"'.format(\n newRev, branch))\n\n self.changeCount = len(revList)\n self.lastRev[branch] = newRev\n\n if self.changeCount:\n log.msg('gitpoller: processing {} changes: {} from \"{}\" 
branch \"{}\"'.format(\n self.changeCount, revList, self.repourl, branch))\n\n for rev in revList:\n dl = defer.DeferredList([\n self._get_commit_timestamp(rev),\n self._get_commit_author(rev),\n self._get_commit_committer(rev),\n self._get_commit_files(rev),\n self._get_commit_comments(rev),\n ], consumeErrors=True)\n\n results = yield dl\n\n # check for failures\n failures = [r[1] for r in results if not r[0]]\n if failures:\n for failure in failures:\n log.err(\n failure, \"while processing changes for {} {}\".format(newRev, branch))\n # just fail on the first error; they're probably all related!\n failures[0].raiseException()\n\n timestamp, author, committer, files, comments = [r[1] for r in results]\n\n yield self.master.data.updates.addChange(\n author=author,\n committer=committer,\n revision=bytes2unicode(rev, encoding=self.encoding),\n files=files, comments=comments, when_timestamp=timestamp,\n branch=bytes2unicode(self._removeHeads(branch)),\n project=self.project,\n repository=bytes2unicode(self.repourl, encoding=self.encoding),\n category=self.category, src='git')\n\n def _isSshPrivateKeyNeededForCommand(self, command):\n commandsThatNeedKey = [\n 'fetch',\n 'ls-remote',\n ]\n if self.sshPrivateKey is not None and command in commandsThatNeedKey:\n return True\n return False\n\n def _downloadSshPrivateKey(self, keyPath):\n # We change the permissions of the key file to be user-readable only so\n # that ssh does not complain. This is not used for security because the\n # parent directory will have proper permissions.\n writeLocalFile(keyPath, self.sshPrivateKey, mode=stat.S_IRUSR)\n\n def _downloadSshKnownHosts(self, path):\n if self.sshKnownHosts is not None:\n contents = self.sshKnownHosts\n else:\n contents = getSshKnownHostsContents(self.sshHostKey)\n writeLocalFile(path, contents)\n\n def _getSshPrivateKeyPath(self, ssh_data_path):\n return os.path.join(ssh_data_path, 'ssh-key')\n\n def _getSshKnownHostsPath(self, ssh_data_path):\n return os.path.join(ssh_data_path, 'ssh-known-hosts')\n\n @defer.inlineCallbacks\n def _dovccmd(self, command, args, path=None):\n if self._isSshPrivateKeyNeededForCommand(command):\n with private_tempdir.PrivateTemporaryDirectory(\n dir=self.workdir, prefix='.buildbot-ssh') as tmp_path:\n stdout = yield self._dovccmdImpl(command, args, path, tmp_path)\n else:\n stdout = yield self._dovccmdImpl(command, args, path, None)\n return stdout\n\n @defer.inlineCallbacks\n def _dovccmdImpl(self, command, args, path, ssh_workdir):\n full_args = []\n full_env = os.environ.copy()\n\n if self._isSshPrivateKeyNeededForCommand(command):\n key_path = self._getSshPrivateKeyPath(ssh_workdir)\n self._downloadSshPrivateKey(key_path)\n\n known_hosts_path = None\n if self.sshHostKey is not None or self.sshKnownHosts is not None:\n known_hosts_path = self._getSshKnownHostsPath(ssh_workdir)\n self._downloadSshKnownHosts(known_hosts_path)\n\n self.adjustCommandParamsForSshPrivateKey(full_args, full_env,\n key_path, None,\n known_hosts_path)\n\n full_args += [command] + args\n\n res = yield runprocess.run_process(self.master.reactor, [self.gitbin] + full_args, path,\n env=full_env)\n (code, stdout, stderr) = res\n stdout = bytes2unicode(stdout, self.encoding)\n stderr = bytes2unicode(stderr, self.encoding)\n if code != 0:\n if code == 128:\n raise GitError('command {} in {} on repourl {} failed with exit code {}: {}'.format(\n full_args, path, self.repourl, code, stderr))\n raise EnvironmentError(('command {} in {} on repourl {} failed with exit code {}: {}'\n 
).format(full_args, path, self.repourl, code, stderr))\n return stdout.strip()\n", "path": "master/buildbot/changes/gitpoller.py"}], "after_files": [{"content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\nimport os\nimport re\nimport stat\nfrom urllib.parse import quote as urlquote\n\nfrom twisted.internet import defer\nfrom twisted.python import log\n\nfrom buildbot import config\nfrom buildbot.changes import base\nfrom buildbot.util import bytes2unicode\nfrom buildbot.util import private_tempdir\nfrom buildbot.util import runprocess\nfrom buildbot.util.git import GitMixin\nfrom buildbot.util.git import getSshKnownHostsContents\nfrom buildbot.util.misc import writeLocalFile\nfrom buildbot.util.state import StateMixin\n\n\nclass GitError(Exception):\n\n \"\"\"Raised when git exits with code 128.\"\"\"\n\n\nclass GitPoller(base.PollingChangeSource, StateMixin, GitMixin):\n\n \"\"\"This source will poll a remote git repo for changes and submit\n them to the change master.\"\"\"\n\n compare_attrs = (\"repourl\", \"branches\", \"workdir\", \"pollInterval\", \"gitbin\", \"usetimestamps\",\n \"category\", \"project\", \"pollAtLaunch\", \"buildPushesWithNoCommits\",\n \"sshPrivateKey\", \"sshHostKey\", \"sshKnownHosts\", \"pollRandomDelayMin\",\n \"pollRandomDelayMax\")\n\n secrets = (\"sshPrivateKey\", \"sshHostKey\", \"sshKnownHosts\")\n\n def __init__(self, repourl, branches=None, branch=None, workdir=None, pollInterval=10 * 60,\n gitbin=\"git\", usetimestamps=True, category=None, project=None, pollinterval=-2,\n fetch_refspec=None, encoding=\"utf-8\", name=None, pollAtLaunch=False,\n buildPushesWithNoCommits=False, only_tags=False, sshPrivateKey=None,\n sshHostKey=None, sshKnownHosts=None, pollRandomDelayMin=0, pollRandomDelayMax=0):\n\n # for backward compatibility; the parameter used to be spelled with 'i'\n if pollinterval != -2:\n pollInterval = pollinterval\n\n if name is None:\n name = repourl\n\n super().__init__(name=name, pollInterval=pollInterval, pollAtLaunch=pollAtLaunch,\n pollRandomDelayMin=pollRandomDelayMin,\n pollRandomDelayMax=pollRandomDelayMax, sshPrivateKey=sshPrivateKey,\n sshHostKey=sshHostKey, sshKnownHosts=sshKnownHosts)\n\n if project is None:\n project = ''\n\n if only_tags and (branch or branches):\n config.error(\"GitPoller: can't specify only_tags and branch/branches\")\n if branch and branches:\n config.error(\"GitPoller: can't specify both branch and branches\")\n elif branch:\n branches = [branch]\n elif not branches:\n if only_tags:\n branches = lambda ref: ref.startswith('refs/tags/') # noqa: E731\n else:\n branches = ['master']\n\n self.repourl = repourl\n self.branches = branches\n self.encoding = encoding\n self.buildPushesWithNoCommits = buildPushesWithNoCommits\n self.gitbin = gitbin\n self.workdir = workdir\n self.usetimestamps = usetimestamps\n self.category = category if callable(\n 
category) else bytes2unicode(category, encoding=self.encoding)\n self.project = bytes2unicode(project, encoding=self.encoding)\n self.changeCount = 0\n self.lastRev = {}\n self.sshPrivateKey = sshPrivateKey\n self.sshHostKey = sshHostKey\n self.sshKnownHosts = sshKnownHosts\n self.setupGit(logname='GitPoller')\n\n if fetch_refspec is not None:\n config.error(\"GitPoller: fetch_refspec is no longer supported. \"\n \"Instead, only the given branches are downloaded.\")\n\n if self.workdir is None:\n self.workdir = 'gitpoller-work'\n\n @defer.inlineCallbacks\n def _checkGitFeatures(self):\n stdout = yield self._dovccmd('--version', [])\n\n self.parseGitFeatures(stdout)\n if not self.gitInstalled:\n raise EnvironmentError('Git is not installed')\n\n if (self.sshPrivateKey is not None and\n not self.supportsSshPrivateKeyAsEnvOption):\n raise EnvironmentError('SSH private keys require Git 2.3.0 or newer')\n\n @defer.inlineCallbacks\n def activate(self):\n # make our workdir absolute, relative to the master's basedir\n if not os.path.isabs(self.workdir):\n self.workdir = os.path.join(self.master.basedir, self.workdir)\n log.msg(\"gitpoller: using workdir '{}'\".format(self.workdir))\n\n try:\n self.lastRev = yield self.getState('lastRev', {})\n\n super().activate()\n except Exception as e:\n log.err(e, 'while initializing GitPoller repository')\n\n def describe(self):\n str = ('GitPoller watching the remote git repository ' +\n bytes2unicode(self.repourl, self.encoding))\n\n if self.branches:\n if self.branches is True:\n str += ', branches: ALL'\n elif not callable(self.branches):\n str += ', branches: ' + ', '.join(self.branches)\n\n if not self.master:\n str += \" [STOPPED - check log]\"\n\n return str\n\n def _getBranches(self):\n d = self._dovccmd('ls-remote', ['--refs', self.repourl])\n\n @d.addCallback\n def parseRemote(rows):\n branches = []\n for row in rows.splitlines():\n if '\\t' not in row:\n # Not a useful line\n continue\n sha, ref = row.split(\"\\t\")\n branches.append(ref)\n return branches\n return d\n\n def _headsFilter(self, branch):\n \"\"\"Filter out remote references that don't begin with 'refs/heads'.\"\"\"\n return branch.startswith(\"refs/heads/\")\n\n def _removeHeads(self, branch):\n \"\"\"Remove 'refs/heads/' prefix from remote references.\"\"\"\n if branch.startswith(\"refs/heads/\"):\n branch = branch[11:]\n return branch\n\n def _trackerBranch(self, branch):\n # manually quote tilde for Python 3.7\n url = urlquote(self.repourl, '').replace('~', '%7E')\n return \"refs/buildbot/{}/{}\".format(url, self._removeHeads(branch))\n\n def poll_should_exit(self):\n # A single gitpoller loop may take a while on a loaded master, which would block\n # reconfiguration, so we try to exit early.\n return not self.doPoll.running\n\n @defer.inlineCallbacks\n def poll(self):\n yield self._checkGitFeatures()\n\n try:\n yield self._dovccmd('init', ['--bare', self.workdir])\n except GitError as e:\n log.msg(e.args[0])\n return\n\n branches = self.branches if self.branches else []\n remote_refs = yield self._getBranches()\n\n if self.poll_should_exit():\n return\n\n if branches is True or callable(branches):\n if callable(self.branches):\n branches = [b for b in remote_refs if self.branches(b)]\n else:\n branches = [b for b in remote_refs if self._headsFilter(b)]\n elif branches and remote_refs:\n remote_branches = [self._removeHeads(b) for b in remote_refs]\n branches = sorted(list(set(branches) & set(remote_branches)))\n\n refspecs = [\n '+{}:{}'.format(self._removeHeads(branch), 
self._trackerBranch(branch))\n for branch in branches\n ]\n\n try:\n yield self._dovccmd('fetch', [self.repourl] + refspecs,\n path=self.workdir)\n except GitError as e:\n log.msg(e.args[0])\n return\n\n revs = {}\n log.msg('gitpoller: processing changes from \"{}\"'.format(self.repourl))\n for branch in branches:\n try:\n if self.poll_should_exit(): # pragma: no cover\n # Note that we still want to update the last known revisions for the branches\n # we did process\n break\n\n rev = yield self._dovccmd(\n 'rev-parse', [self._trackerBranch(branch)], path=self.workdir)\n revs[branch] = bytes2unicode(rev, self.encoding)\n yield self._process_changes(revs[branch], branch)\n except Exception:\n log.err(_why=\"trying to poll branch {} of {}\".format(\n branch, self.repourl))\n\n self.lastRev = revs\n yield self.setState('lastRev', self.lastRev)\n\n def _get_commit_comments(self, rev):\n args = ['--no-walk', r'--format=%s%n%b', rev, '--']\n d = self._dovccmd('log', args, path=self.workdir)\n return d\n\n def _get_commit_timestamp(self, rev):\n # unix timestamp\n args = ['--no-walk', r'--format=%ct', rev, '--']\n d = self._dovccmd('log', args, path=self.workdir)\n\n @d.addCallback\n def process(git_output):\n if self.usetimestamps:\n try:\n stamp = int(git_output)\n except Exception as e:\n log.msg(('gitpoller: caught exception converting output \\'{}\\' to timestamp'\n ).format(git_output))\n raise e\n return stamp\n return None\n return d\n\n def _get_commit_files(self, rev):\n args = ['--name-only', '--no-walk', r'--format=%n', rev, '--']\n d = self._dovccmd('log', args, path=self.workdir)\n\n def decode_file(file):\n # git use octal char sequences in quotes when non ASCII\n match = re.match('^\"(.*)\"$', file)\n if match:\n file = bytes2unicode(match.groups()[0], encoding=self.encoding,\n errors='unicode_escape')\n return bytes2unicode(file, encoding=self.encoding)\n\n @d.addCallback\n def process(git_output):\n fileList = [decode_file(file)\n for file in\n [s for s in git_output.splitlines() if len(s)]]\n return fileList\n return d\n\n def _get_commit_author(self, rev):\n args = ['--no-walk', r'--format=%aN <%aE>', rev, '--']\n d = self._dovccmd('log', args, path=self.workdir)\n\n @d.addCallback\n def process(git_output):\n if not git_output:\n raise EnvironmentError('could not get commit author for rev')\n return git_output\n return d\n\n @defer.inlineCallbacks\n def _get_commit_committer(self, rev):\n args = ['--no-walk', r'--format=%cN <%cE>', rev, '--']\n res = yield self._dovccmd('log', args, path=self.workdir)\n if not res:\n raise EnvironmentError('could not get commit committer for rev')\n return res\n\n @defer.inlineCallbacks\n def _process_changes(self, newRev, branch):\n \"\"\"\n Read changes since last change.\n\n - Read list of commit hashes.\n - Extract details from each commit.\n - Add changes to database.\n \"\"\"\n\n # initial run, don't parse all history\n if not self.lastRev:\n return\n\n # get the change list\n revListArgs = (['--ignore-missing'] +\n ['--format=%H', '{}'.format(newRev)] +\n ['^' + rev\n for rev in sorted(self.lastRev.values())] +\n ['--'])\n self.changeCount = 0\n results = yield self._dovccmd('log', revListArgs, path=self.workdir)\n\n # process oldest change first\n revList = results.split()\n revList.reverse()\n\n if self.buildPushesWithNoCommits and not revList:\n existingRev = self.lastRev.get(branch)\n if existingRev != newRev:\n revList = [newRev]\n if existingRev is None:\n # This branch was completely unknown, rebuild\n log.msg('gitpoller: 
rebuilding {} for new branch \"{}\"'.format(\n newRev, branch))\n else:\n # This branch is known, but it now points to a different\n # commit than last time we saw it, rebuild.\n log.msg('gitpoller: rebuilding {} for updated branch \"{}\"'.format(\n newRev, branch))\n\n self.changeCount = len(revList)\n self.lastRev[branch] = newRev\n\n if self.changeCount:\n log.msg('gitpoller: processing {} changes: {} from \"{}\" branch \"{}\"'.format(\n self.changeCount, revList, self.repourl, branch))\n\n for rev in revList:\n dl = defer.DeferredList([\n self._get_commit_timestamp(rev),\n self._get_commit_author(rev),\n self._get_commit_committer(rev),\n self._get_commit_files(rev),\n self._get_commit_comments(rev),\n ], consumeErrors=True)\n\n results = yield dl\n\n # check for failures\n failures = [r[1] for r in results if not r[0]]\n if failures:\n for failure in failures:\n log.err(\n failure, \"while processing changes for {} {}\".format(newRev, branch))\n # just fail on the first error; they're probably all related!\n failures[0].raiseException()\n\n timestamp, author, committer, files, comments = [r[1] for r in results]\n\n yield self.master.data.updates.addChange(\n author=author,\n committer=committer,\n revision=bytes2unicode(rev, encoding=self.encoding),\n files=files, comments=comments, when_timestamp=timestamp,\n branch=bytes2unicode(self._removeHeads(branch)),\n project=self.project,\n repository=bytes2unicode(self.repourl, encoding=self.encoding),\n category=self.category, src='git')\n\n def _isSshPrivateKeyNeededForCommand(self, command):\n commandsThatNeedKey = [\n 'fetch',\n 'ls-remote',\n ]\n if self.sshPrivateKey is not None and command in commandsThatNeedKey:\n return True\n return False\n\n def _downloadSshPrivateKey(self, keyPath):\n # We change the permissions of the key file to be user-readable only so\n # that ssh does not complain. 
This is not used for security because the\n # parent directory will have proper permissions.\n writeLocalFile(keyPath, self.sshPrivateKey, mode=stat.S_IRUSR)\n\n def _downloadSshKnownHosts(self, path):\n if self.sshKnownHosts is not None:\n contents = self.sshKnownHosts\n else:\n contents = getSshKnownHostsContents(self.sshHostKey)\n writeLocalFile(path, contents)\n\n def _getSshPrivateKeyPath(self, ssh_data_path):\n return os.path.join(ssh_data_path, 'ssh-key')\n\n def _getSshKnownHostsPath(self, ssh_data_path):\n return os.path.join(ssh_data_path, 'ssh-known-hosts')\n\n @defer.inlineCallbacks\n def _dovccmd(self, command, args, path=None):\n if self._isSshPrivateKeyNeededForCommand(command):\n with private_tempdir.PrivateTemporaryDirectory(\n dir=self.workdir, prefix='.buildbot-ssh') as tmp_path:\n stdout = yield self._dovccmdImpl(command, args, path, tmp_path)\n else:\n stdout = yield self._dovccmdImpl(command, args, path, None)\n return stdout\n\n @defer.inlineCallbacks\n def _dovccmdImpl(self, command, args, path, ssh_workdir):\n full_args = []\n full_env = os.environ.copy()\n\n if self._isSshPrivateKeyNeededForCommand(command):\n key_path = self._getSshPrivateKeyPath(ssh_workdir)\n self._downloadSshPrivateKey(key_path)\n\n known_hosts_path = None\n if self.sshHostKey is not None or self.sshKnownHosts is not None:\n known_hosts_path = self._getSshKnownHostsPath(ssh_workdir)\n self._downloadSshKnownHosts(known_hosts_path)\n\n self.adjustCommandParamsForSshPrivateKey(full_args, full_env,\n key_path, None,\n known_hosts_path)\n\n full_args += [command] + args\n\n res = yield runprocess.run_process(self.master.reactor, [self.gitbin] + full_args, path,\n env=full_env)\n (code, stdout, stderr) = res\n stdout = bytes2unicode(stdout, self.encoding)\n stderr = bytes2unicode(stderr, self.encoding)\n if code != 0:\n if code == 128:\n raise GitError('command {} in {} on repourl {} failed with exit code {}: {}'.format(\n full_args, path, self.repourl, code, stderr))\n raise EnvironmentError(('command {} in {} on repourl {} failed with exit code {}: {}'\n ).format(full_args, path, self.repourl, code, stderr))\n return stdout.strip()\n", "path": "master/buildbot/changes/gitpoller.py"}]} |
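A small illustration of what the one-line change in the golden diff above buys: merging the freshly polled revisions into the stored `lastRev` keeps entries for branches that no longer exist on the remote, so the persisted state can only grow, while assigning the new mapping lets it shrink again. The issue also notes the trade-off: a branch that is deleted and later restored at the same head will no longer trigger a change. The sketch below uses made-up branch names and revision strings and is not Buildbot code.

```python
# Hypothetical data; only contrasts dict.update() with plain assignment.
stored = {"main": "aaa111", "old-feature": "bbb222"}  # lastRev saved by an earlier poll
polled = {"main": "ccc333"}                           # branches that exist right now

merged = dict(stored)
merged.update(polled)   # old behaviour: {'main': 'ccc333', 'old-feature': 'bbb222'} -- never shrinks
replaced = polled       # patched behaviour: {'main': 'ccc333'} -- deleted branches are dropped

print(merged)
print(replaced)
```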
gh_patches_debug_1159 | rasdani/github-patches | git_diff | pwr-Solaar__Solaar-1301 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Support for m500s mouse
**Information**
<!-- Please update to Solaar from this repository before asking for a new feature. -->
- Solaar version 1.0.7
- Distribution: Fedora 34
- Kernel version: Linux 5.14.13-200.fc34.x86_64 x86_64 GNU/Linux
- Output of `solaar show` for the target device (if applicable):
**Is your feature request related to a problem? Please describe.**
The Logitech m500s mouse is not detected.
**Describe the solution you'd like**
Primarily the ability to adjust the DPI, but any other available features would be nice.
**Additional context**
Here is some output based on the information requested in issue 1225. My Logitech MX Master 2 works fine and was unplugged for this output. The hidraw device nodes appear to get the correct rw permissions when the mouse is plugged in and unplugged.
```
lsusb
Bus 001 Device 005: ID 046d:c093 Logitech, Inc. Advanced Corded Mouse M500s
ls -l /dev/hidraw*
crw-------. 1 root root 240, 2 Oct 23 13:44 /dev/hidraw2
crw-------. 1 root root 240, 3 Oct 23 13:24 /dev/hidraw3
crw-------. 1 root root 240, 4 Oct 23 13:24 /dev/hidraw4
crw-rw----+ 1 root root 240, 5 Oct 23 14:28 /dev/hidraw5
crw-rw----+ 1 root root 240, 6 Oct 23 14:28 /dev/hidraw6
solaar -dd show
14:38:09,697 DEBUG [MainThread] logitech_receiver.diversion: rule Key assuming action "pressed" for "Brightness Down"
14:38:09,697 DEBUG [MainThread] logitech_receiver.diversion: rule Key assuming action "pressed" for "Brightness Up"
14:38:09,698 DEBUG [MainThread] solaar.ui.tray: using AppIndicator3
14:38:09,708 DEBUG [MainThread] hidapi.udev: Found device BID 0003 VID 0000046D PID 0000C093 INTERFACE 0 FILTER 2
14:38:09,709 DEBUG [MainThread] hidapi.udev: Found device BID 0003 VID 0000046D PID 0000C093 INTERFACE 1 FILTER 2
solaar: error: Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/solaar/cli/__init__.py", line 203, in run
raise Exception('No devices found')
Exception: No devices found
```
Please let me know if there is any additional information needed. Thank you.
--- END ISSUE ---
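The `hidapi.udev` lines in the log above show the mouse being enumerated (VID 046D, PID C093), so the `No devices found` failure points at that product ID simply not being listed in the descriptor table included below. A minimal way to confirm this, assuming the `lib/logitech_receiver/descriptors.py` module shown below is importable as `logitech_receiver.descriptors` (a module path inferred from the file path, not verified here):

```python
# Sketch only: look up the product ID reported by lsusb (046d:c093).
from logitech_receiver import descriptors

# With the unpatched descriptor table there is no entry with usbid 0xc093,
# so the lookup comes back empty and the corded M500s is never recognised.
print(descriptors.get_usbid(0xC093))   # -> None
```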
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lib/logitech_receiver/descriptors.py`
Content:
```
1 # -*- python-mode -*-
2 # -*- coding: UTF-8 -*-
3
4 ## Copyright (C) 2012-2013 Daniel Pavel
5 ##
6 ## This program is free software; you can redistribute it and/or modify
7 ## it under the terms of the GNU General Public License as published by
8 ## the Free Software Foundation; either version 2 of the License, or
9 ## (at your option) any later version.
10 ##
11 ## This program is distributed in the hope that it will be useful,
12 ## but WITHOUT ANY WARRANTY; without even the implied warranty of
13 ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 ## GNU General Public License for more details.
15 ##
16 ## You should have received a copy of the GNU General Public License along
17 ## with this program; if not, write to the Free Software Foundation, Inc.,
18 ## 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
19
20 from __future__ import absolute_import, division, print_function, unicode_literals
21
22 from collections import namedtuple
23
24 from .common import NamedInts as _NamedInts
25 from .hidpp10 import DEVICE_KIND as _DK
26 from .hidpp10 import REGISTERS as _R
27 from .settings_templates import FeatureSettings as _FS
28 from .settings_templates import RegisterSettings as _RS
29
30 #
31 #
32 #
33
34 _DeviceDescriptor = namedtuple(
35 '_DeviceDescriptor',
36 ('name', 'kind', 'wpid', 'codename', 'protocol', 'registers', 'settings', 'persister', 'usbid', 'interface', 'btid')
37 )
38 del namedtuple
39
40 DEVICES_WPID = {}
41 DEVICES = {}
42
43
44 def _D(
45 name,
46 codename=None,
47 kind=None,
48 wpid=None,
49 protocol=None,
50 registers=None,
51 settings=None,
52 persister=None,
53 usbid=None,
54 interface=None,
55 btid=None,
56 ):
57 assert name
58
59 if kind is None:
60 kind = (
61 _DK.mouse if 'Mouse' in name else _DK.keyboard if 'Keyboard' in name else _DK.numpad
62 if 'Number Pad' in name else _DK.touchpad if 'Touchpad' in name else _DK.trackball if 'Trackball' in name else None
63 )
64 assert kind is not None, 'descriptor for %s does not have kind set' % name
65
66 # heuristic: the codename is the last word in the device name
67 if codename is None and ' ' in name:
68 codename = name.split(' ')[-1]
69 assert codename is not None, 'descriptor for %s does not have codename set' % name
70
71 if protocol is not None:
72 # ? 2.0 devices should not have any registers
73 _kind = lambda s: s._rw.kind if hasattr(s, '_rw') else s._rw_kind
74 if protocol < 2.0:
75 assert settings is None or all(_kind(s) == 1 for s in settings)
76 else:
77 assert registers is None
78 assert settings is None or all(_kind(s) == 2 for s in settings)
79
80 if wpid:
81 for w in wpid if isinstance(wpid, tuple) else (wpid, ):
82 if protocol > 1.0:
83 assert w[0:1] == '4', '%s has protocol %0.1f, wpid %s' % (name, protocol, w)
84 else:
85 if w[0:1] == '1':
86 assert kind == _DK.mouse, '%s has protocol %0.1f, wpid %s' % (name, protocol, w)
87 elif w[0:1] == '2':
88 assert kind in (_DK.keyboard, _DK.numpad), '%s has protocol %0.1f, wpid %s' % (name, protocol, w)
89
90 device_descriptor = _DeviceDescriptor(
91 name=name,
92 kind=kind,
93 wpid=wpid,
94 codename=codename,
95 protocol=protocol,
96 registers=registers,
97 settings=settings,
98 persister=persister,
99 usbid=usbid,
100 interface=interface,
101 btid=btid
102 )
103
104 if usbid:
105 found = get_usbid(usbid)
106 assert found is None, 'duplicate usbid in device descriptors: %s' % (found, )
107 if btid:
108 found = get_btid(btid)
109 assert found is None, 'duplicate btid in device descriptors: %s' % (found, )
110
111 assert codename not in DEVICES, 'duplicate codename in device descriptors: %s' % (DEVICES[codename], )
112 DEVICES[codename] = device_descriptor
113
114 if wpid:
115 for w in wpid if isinstance(wpid, tuple) else (wpid, ):
116 assert w not in DEVICES_WPID, 'duplicate wpid in device descriptors: %s' % (DEVICES_WPID[w], )
117 DEVICES_WPID[w] = device_descriptor
118
119
120 def get_wpid(wpid):
121 return DEVICES_WPID.get(wpid)
122
123
124 def get_codename(codename):
125 return DEVICES.get(codename)
126
127
128 def get_usbid(usbid):
129 if isinstance(usbid, str):
130 usbid = int(usbid, 16)
131 found = next((x for x in DEVICES.values() if x.usbid == usbid), None)
132 return found
133
134
135 def get_btid(btid):
136 if isinstance(btid, str):
137 btid = int(btid, 16)
138 found = next((x for x in DEVICES.values() if x.btid == btid), None)
139 return found
140
141
142 #
143 #
144 #
145
146 _PERFORMANCE_MX_DPIS = _NamedInts.range(0x81, 0x8F, lambda x: str((x - 0x80) * 100))
147
148 #
149 #
150 #
151
152 # Some HID++1.0 registers and HID++2.0 features can be discovered at run-time,
153 # so they are not specified here.
154 #
155 # For known registers, however, please do specify them here -- avoids
156 # unnecessary communication with the device and makes it easier to make certain
157 # decisions when querying the device's state.
158 #
159 # Specify a negative value to blacklist a certain register for a device.
160 #
161 # Usually, state registers (battery, leds, some features, etc) are only used by
162 # HID++ 1.0 devices, while HID++ 2.0 devices use features for the same
163 # functionalities. This is a rule that's been discovered by trial-and-error,
164 # so it may change in the future.
165
166 # Well-known registers (in hex):
167 # * 00 - notification flags (all devices)
168 # 01 - mice: smooth scrolling
169 # 07 - battery status
170 # 09 - keyboards: FN swap (if it has the FN key)
171 # 0D - battery charge
172 # a device may have either the 07 or 0D register available;
173 # no known device uses both
174 # 51 - leds
175 # 63 - mice: DPI
176 # * F1 - firmware info
177 # Some registers appear to be universally supported, no matter the HID++ version
178 # (marked with *). The rest may or may not be supported, and their values may or
179 # may not mean the same thing across different devices.
180
181 # The 'codename' and 'kind' fields are usually guessed from the device name,
182 # but in some cases (like the Logitech Cube) that heuristic fails and they have
183 # to be specified.
184 #
185 # The 'protocol' and 'wpid' fields are optional (they can be discovered at
186 # runtime), but specifying them here speeds up device discovery and reduces the
187 # USB traffic Solaar has to do to fully identify peripherals.
188 # Same goes for HID++ 2.0 feature settings (like _feature_fn_swap).
189 #
190 # The 'registers' field indicates read-only registers, specifying a state. These
191 # are valid (AFAIK) only to HID++ 1.0 devices.
192 # The 'settings' field indicates a read/write register; based on them Solaar
193 # generates, at runtime, the settings controls in the device panel. HID++ 1.0
194 # devices may only have register-based settings; HID++ 2.0 devices may only have
195 # feature-based settings.
196
197 # Keyboards
198
199 _D('Wireless Keyboard K230', protocol=2.0, wpid='400D')
200 _D('Wireless Keyboard K270(unifying)', protocol=2.0, wpid='4003')
201 _D(
202 'Wireless Keyboard MK270',
203 protocol=2.0,
204 wpid='4023',
205 settings=[_FS.fn_swap()],
206 )
207 _D(
208 'Wireless Keyboard K270',
209 protocol=1.0,
210 registers=(_R.battery_status, ),
211 )
212 _D(
213 'Wireless Keyboard MK300',
214 protocol=1.0,
215 wpid='0068',
216 registers=(_R.battery_status, ),
217 )
218
219 _D(
220 'Wireless Keyboard MK320',
221 protocol=1.0,
222 wpid='200F',
223 registers=(_R.battery_status, ),
224 )
225 _D('Wireless Keyboard MK330')
226 _D(
227 'Wireless Compact Keyboard K340',
228 protocol=1.0,
229 wpid='2007',
230 registers=(_R.battery_status, ),
231 )
232 _D(
233 'Wireless Wave Keyboard K350',
234 protocol=1.0,
235 wpid='200A',
236 registers=(_R.battery_status, ),
237 )
238 _D(
239 'Wireless Keyboard K360',
240 protocol=2.0,
241 wpid='4004',
242 settings=[_FS.fn_swap()],
243 )
244 _D(
245 'Wireless Keyboard K375s',
246 protocol=2.0,
247 wpid='4061',
248 settings=[_FS.k375s_fn_swap()],
249 )
250 _D(
251 'Wireless Touch Keyboard K400',
252 protocol=2.0,
253 wpid=('400E', '4024'),
254 settings=[_FS.fn_swap()],
255 )
256 _D(
257 'Wireless Touch Keyboard K400 Plus',
258 codename='K400 Plus',
259 protocol=2.0,
260 wpid='404D',
261 settings=[
262 _FS.new_fn_swap(),
263 _FS.reprogrammable_keys(),
264 _FS.disable_keyboard_keys(),
265 _FS.gesture2_gestures(),
266 _FS.gesture2_params(),
267 ],
268 )
269 _D(
270 'Wireless Keyboard K520',
271 protocol=1.0,
272 wpid='2011',
273 registers=(_R.battery_status, ),
274 settings=[
275 _RS.fn_swap(),
276 ],
277 )
278 _D(
279 'Number Pad N545',
280 protocol=1.0,
281 wpid='2006',
282 registers=(_R.battery_status, ),
283 )
284 _D('Wireless Keyboard MK550')
285 _D(
286 'Wireless Keyboard MK700',
287 protocol=1.0,
288 wpid='2008',
289 registers=(_R.battery_status, ),
290 settings=[
291 _RS.fn_swap(),
292 ],
293 )
294 _D(
295 'Wireless Solar Keyboard K750',
296 protocol=2.0,
297 wpid='4002',
298 settings=[_FS.fn_swap()],
299 )
300 _D(
301 'Wireless Multi-Device Keyboard K780',
302 protocol=4.5,
303 wpid='405B',
304 settings=[_FS.new_fn_swap()],
305 )
306 _D(
307 'Wireless Illuminated Keyboard K800',
308 protocol=1.0,
309 wpid='2010',
310 registers=(
311 _R.battery_status,
312 _R.three_leds,
313 ),
314 settings=[
315 _RS.fn_swap(),
316 _RS.hand_detection(),
317 ],
318 )
319 _D(
320 'Wireless Illuminated Keyboard K800 new',
321 codename='K800 new',
322 protocol=4.5,
323 wpid='406E',
324 settings=[_FS.fn_swap()],
325 )
326 _D(
327 'Illuminated Living-Room Keyboard K830',
328 protocol=2.0,
329 wpid='4032',
330 settings=[_FS.new_fn_swap()],
331 )
332 _D('Craft Advanced Keyboard', codename='Craft', protocol=4.5, wpid='4066', btid=0xB350)
333 _D('MX Keys Keyboard', codename='MX Keys', protocol=4.5, wpid='408A', btid=0xB35B)
334 _D(
335 'Wireless Keyboard S510',
336 codename='S510',
337 protocol=1.0,
338 wpid='0056',
339 registers=(_R.battery_status, ),
340 )
341 _D(
342 'Wireless Keyboard EX100',
343 codename='EX100',
344 protocol=1.0,
345 wpid='0065',
346 registers=(_R.battery_status, ),
347 )
348
349 # Mice
350
351 _D('Wireless Mouse M150', protocol=2.0, wpid='4022')
352 _D('Wireless Mouse M175', protocol=2.0, wpid='4008')
353 _D(
354 'Wireless Mouse M185 new',
355 codename='M185n',
356 protocol=4.5,
357 wpid='4054',
358 settings=[
359 _FS.lowres_smooth_scroll(),
360 _FS.pointer_speed(),
361 ]
362 )
363 # Apparently Logitech uses wpid 4055 for three different mice
364 # That's not so strange, as M185 is used on both Unifying-ready and non-Unifying-ready mice
365 _D(
366 'Wireless Mouse M185/M235/M310',
367 codename='M185/M235/M310',
368 protocol=4.5,
369 wpid='4055',
370 settings=[
371 _FS.lowres_smooth_scroll(),
372 _FS.pointer_speed(),
373 ]
374 )
375 _D('Wireless Mouse M185', protocol=2.0, wpid='4038')
376 _D('Wireless Mouse M187', protocol=2.0, wpid='4019')
377 _D('Wireless Mouse M215', protocol=1.0, wpid='1020')
378 _D(
379 'Wireless Mouse M305',
380 protocol=1.0,
381 wpid='101F',
382 registers=(_R.battery_status, ),
383 settings=[
384 _RS.side_scroll(),
385 ],
386 )
387 _D(
388 'Wireless Mouse M310',
389 protocol=1.0,
390 wpid='1024',
391 registers=(_R.battery_status, ),
392 )
393 _D('Wireless Mouse M315')
394 _D('Wireless Mouse M317')
395 _D('Wireless Mouse M325', protocol=2.0, wpid='400A', settings=[
396 _FS.hi_res_scroll(),
397 ])
398 _D('Wireless Mouse M345', protocol=2.0, wpid='4017')
399 _D(
400 'Wireless Mouse M350',
401 protocol=1.0,
402 wpid='101C',
403 registers=(_R.battery_charge, ),
404 )
405 _D('Wireless Mouse Pebble M350', codename='Pebble', protocol=2.0, wpid='4080')
406 _D(
407 'Wireless Mouse M505',
408 codename='M505/B605',
409 protocol=1.0,
410 wpid='101D',
411 registers=(_R.battery_charge, ),
412 settings=[
413 _RS.smooth_scroll(),
414 _RS.side_scroll(),
415 ],
416 )
417 _D(
418 'Wireless Mouse M510',
419 protocol=1.0,
420 wpid='1025',
421 registers=(_R.battery_status, ),
422 settings=[
423 # _RS.smooth_scroll(), # writing the bit to the register doesn't cause an error, but the bit doesn't turn on
424 _RS.side_scroll(),
425 ],
426 )
427 _D('Wireless Mouse M510', codename='M510v2', protocol=2.0, wpid='4051', settings=[
428 _FS.lowres_smooth_scroll(),
429 ])
430 _D('Couch Mouse M515', protocol=2.0, wpid='4007')
431 _D('Wireless Mouse M525', protocol=2.0, wpid='4013')
432 _D(
433 'Multi Device Silent Mouse M585/M590',
434 codename='M585/M590',
435 protocol=4.5,
436 wpid='406B',
437 settings=[
438 _FS.lowres_smooth_scroll(),
439 _FS.pointer_speed(),
440 ],
441 )
442 _D('Touch Mouse M600', protocol=2.0, wpid='401A')
443 _D(
444 'Marathon Mouse M705 (M-R0009)',
445 codename='M705 (M-R0009)',
446 protocol=1.0,
447 wpid='101B',
448 registers=(_R.battery_charge, ),
449 settings=[
450 _RS.smooth_scroll(),
451 _RS.side_scroll(),
452 ],
453 )
454 _D(
455 'Marathon Mouse M705 (M-R0073)',
456 codename='M705 (M-R0073)',
457 protocol=4.5,
458 wpid='406D',
459 settings=[
460 _FS.hires_smooth_invert(),
461 # _FS.hires_smooth_resolution(),
462 _FS.pointer_speed(),
463 ]
464 )
465 _D('Zone Touch Mouse T400')
466 _D('Touch Mouse T620', protocol=2.0)
467 _D('Logitech Cube', kind=_DK.mouse, protocol=2.0)
468 _D(
469 'Anywhere Mouse MX',
470 codename='Anywhere MX',
471 protocol=1.0,
472 wpid='1017',
473 registers=(_R.battery_charge, ),
474 settings=[
475 _RS.smooth_scroll(),
476 _RS.side_scroll(),
477 ],
478 )
479 _D(
480 'Anywhere Mouse MX 2',
481 codename='Anywhere MX 2',
482 protocol=4.5,
483 wpid='404A',
484 settings=[
485 _FS.hires_smooth_invert(),
486 # _FS.hires_smooth_resolution(),
487 ],
488 )
489 _D(
490 'Performance Mouse MX',
491 codename='Performance MX',
492 protocol=1.0,
493 wpid='101A',
494 registers=(
495 _R.battery_status,
496 _R.three_leds,
497 ),
498 settings=[
499 _RS.dpi(choices=_PERFORMANCE_MX_DPIS),
500 _RS.smooth_scroll(),
501 _RS.side_scroll(),
502 ],
503 )
504
505 _D(
506 'Wireless Mouse MX Master',
507 codename='MX Master',
508 protocol=4.5,
509 wpid='4041',
510 btid=0xb012,
511 settings=[
512 _FS.hires_smooth_invert(),
513 # _FS.hires_smooth_resolution(),
514 ],
515 )
516
517 _D(
518 'Wireless Mouse MX Master 2S',
519 codename='MX Master 2S',
520 protocol=4.5,
521 wpid='4069',
522 btid=0xb019,
523 settings=[
524 _FS.hires_smooth_invert(),
525 # _FS.hires_smooth_resolution(),
526 _FS.gesture2_gestures(),
527 ],
528 )
529
530 _D('MX Master 3 Wireless Mouse', codename='MX Master 3', protocol=4.5, wpid='4082', btid=0xb023)
531
532 _D('MX Vertical Wireless Mouse', codename='MX Vertical', protocol=4.5, wpid='407B', btid=0xb020, usbid=0xc08a)
533
534 _D(
535 'G7 Cordless Laser Mouse',
536 codename='G7',
537 protocol=1.0,
538 wpid='1002',
539 registers=(_R.battery_status, ),
540 )
541 _D(
542 'G700 Gaming Mouse',
543 codename='G700',
544 protocol=1.0,
545 wpid='1023',
546 usbid=0xc06b,
547 interface=1,
548 registers=(
549 _R.battery_status,
550 _R.three_leds,
551 ),
552 settings=[
553 _RS.smooth_scroll(),
554 _RS.side_scroll(),
555 ],
556 )
557 _D(
558 'G700s Gaming Mouse',
559 codename='G700s',
560 protocol=1.0,
561 wpid='102A',
562 usbid=0xc07c,
563 interface=1,
564 registers=(
565 _R.battery_status,
566 _R.three_leds,
567 ),
568 settings=[
569 _RS.smooth_scroll(),
570 _RS.side_scroll(),
571 ],
572 )
573
574 _D('G102 Lightsync Mouse', codename='G102', usbid=0xc092, interface=1)
575 _D('G403 Gaming Mouse', codename='G403', usbid=0xc082)
576 _D('G502 Hero Gaming Mouse', codename='G502 Hero', usbid=0xc08d)
577 _D('G703 Lightspeed Gaming Mouse', codename='G703', usbid=0xc087)
578 _D('G703 Hero Gaming Mouse', codename='G703 Hero', usbid=0xc090)
579 _D('G900 Chaos Spectrum Gaming Mouse', codename='G900', usbid=0xc081)
580 _D('G903 Lightspeed Gaming Mouse', codename='G903', usbid=0xc086)
581 _D('G903 Hero Gaming Mouse', codename='G903 Hero', usbid=0xc091)
582 _D('GPro Gaming Mouse', codename='GPro', usbid=0xc088)
583
584 _D(
585 'LX5 Cordless Mouse',
586 codename='LX5',
587 protocol=1.0,
588 wpid='0036',
589 registers=(_R.battery_status, ),
590 )
591 _D(
592 'Wireless Mouse M30',
593 codename='M30',
594 protocol=1.0,
595 wpid='0085',
596 registers=(_R.battery_status, ),
597 )
598 _D(
599 'Wireless Mouse EX100',
600 codename='EX100m',
601 protocol=1.0,
602 wpid='003F',
603 registers=(_R.battery_status, ),
604 # settings=[ _RS.smooth_scroll(), ], # command accepted, but no change in whell action
605 )
606
607 # Trackballs
608
609 _D('Wireless Trackball M570')
610
611 # Touchpads
612
613 _D('Wireless Rechargeable Touchpad T650', protocol=2.0, wpid='4101')
614 _D('Wireless Touchpad', codename='Wireless Touch', protocol=2.0, wpid='4011')
615
616 #
617 # Classic Nano peripherals (that don't support the Unifying protocol).
618 # A wpid is necessary to properly identify them.
619 #
620
621 _D(
622 'VX Nano Cordless Laser Mouse',
623 codename='VX Nano',
624 protocol=1.0,
625 wpid=('100B', '100F'),
626 registers=(_R.battery_charge, ),
627 settings=[
628 _RS.smooth_scroll(),
629 _RS.side_scroll(),
630 ],
631 )
632 _D(
633 'V450 Nano Cordless Laser Mouse',
634 codename='V450 Nano',
635 protocol=1.0,
636 wpid='1011',
637 registers=(_R.battery_charge, ),
638 )
639 _D(
640 'V550 Nano Cordless Laser Mouse',
641 codename='V550 Nano',
642 protocol=1.0,
643 wpid='1013',
644 registers=(_R.battery_charge, ),
645 settings=[
646 _RS.smooth_scroll(),
647 _RS.side_scroll(),
648 ],
649 )
650
651 # Mini receiver mice
652
653 _D(
654 'MX610 Laser Cordless Mouse',
655 codename='MX610',
656 protocol=1.0,
657 wpid='1001',
658 registers=(_R.battery_status, ),
659 )
660 _D(
661 'MX620 Laser Cordless Mouse',
662 codename='MX620',
663 protocol=1.0,
664 wpid=('100A', '1016'),
665 registers=(_R.battery_charge, ),
666 )
667 _D(
668 'MX610 Left-Handled Mouse',
669 codename='MX610L',
670 protocol=1.0,
671 wpid='1004',
672 registers=(_R.battery_status, ),
673 )
674 _D(
675 'V400 Laser Cordless Mouse',
676 codename='V400',
677 protocol=1.0,
678 wpid='1003',
679 registers=(_R.battery_status, ),
680 )
681 _D(
682 'V450 Laser Cordless Mouse',
683 codename='V450',
684 protocol=1.0,
685 wpid='1005',
686 registers=(_R.battery_status, ),
687 )
688 _D(
689 'VX Revolution',
690 codename='VX Revolution',
691 kind=_DK.mouse,
692 protocol=1.0,
693 wpid=('1006', '100D', '0612'), # WPID 0612 from Issue #921
694 registers=(_R.battery_charge, ),
695 )
696 _D(
697 'MX Air',
698 codename='MX Air',
699 protocol=1.0,
700 kind=_DK.mouse,
701 wpid=('1007', '100E'),
702 registers=(_R.battery_charge, ),
703 )
704 _D(
705 'MX Revolution',
706 codename='MX Revolution',
707 protocol=1.0,
708 kind=_DK.mouse,
709 wpid=('1008', '100C'),
710 registers=(_R.battery_charge, ),
711 )
712 _D(
713 'MX 1100 Cordless Laser Mouse',
714 codename='MX 1100',
715 protocol=1.0,
716 kind=_DK.mouse,
717 wpid='1014',
718 registers=(_R.battery_charge, ),
719 settings=[
720 _RS.smooth_scroll(),
721 _RS.side_scroll(),
722 ],
723 )
724
725 # Some exotics...
726
727 _D('Fujitsu Sonic Mouse', codename='Sonic', protocol=1.0, wpid='1029')
728
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/lib/logitech_receiver/descriptors.py b/lib/logitech_receiver/descriptors.py
--- a/lib/logitech_receiver/descriptors.py
+++ b/lib/logitech_receiver/descriptors.py
@@ -580,6 +580,7 @@
_D('G903 Lightspeed Gaming Mouse', codename='G903', usbid=0xc086)
_D('G903 Hero Gaming Mouse', codename='G903 Hero', usbid=0xc091)
_D('GPro Gaming Mouse', codename='GPro', usbid=0xc088)
+_D('M500S Mouse', codename='M500S', usbid=0xc093, interface=1)
_D(
'LX5 Cordless Mouse',
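To see what the added descriptor line does at runtime, a short sketch, assuming the patched module is installed and importable as `logitech_receiver.descriptors`: once the `M500S` entry exists, the USB product ID reported by `lsusb` resolves to a full descriptor, so the corded mouse can be matched on interface 1.

```python
# Sketch: after the patch above, the lookup that previously returned None
# yields the new descriptor entry.
from logitech_receiver import descriptors

d = descriptors.get_usbid(0xC093)   # the hex string 'c093' is accepted as well
if d is not None:
    print(d.name, d.codename, hex(d.usbid), d.interface)
    # -> M500S Mouse M500S 0xc093 1
```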
| {"golden_diff": "diff --git a/lib/logitech_receiver/descriptors.py b/lib/logitech_receiver/descriptors.py\n--- a/lib/logitech_receiver/descriptors.py\n+++ b/lib/logitech_receiver/descriptors.py\n@@ -580,6 +580,7 @@\n _D('G903 Lightspeed Gaming Mouse', codename='G903', usbid=0xc086)\n _D('G903 Hero Gaming Mouse', codename='G903 Hero', usbid=0xc091)\n _D('GPro Gaming Mouse', codename='GPro', usbid=0xc088)\n+_D('M500S Mouse', codename='M500S', usbid=0xc093, interface=1)\n \n _D(\n 'LX5 Cordless Mouse',\n", "issue": "Support for m500s mouse\n**Information**\r\n<!-- Please update to Solaar from this repository before asking for a new feature. -->\r\n- Solaar version 1.0.7\r\n- Distribution: Fedora 34\r\n- Kernel version: Linux 5.14.13-200.fc34.x86_64 x86_64 GNU/Linux\r\n- Output of `solaar show` for the target device (if applicable):\r\n\r\n**Is your feature request related to a problem? Please describe.**\r\nThe Logitech m500s mouse is not detected.\r\n**Describe the solution you'd like**\r\nThe ability to adjust the dpi primarily, but any other available features would be nice.\r\n**Additional context**\r\nHere is some output based off the information requested in issue 1225. My logitech mx master 2 works fine and was unplugged for this output. The hidraw appears to give the correct rw permissions when the mouse is plugged and unplugged.\r\n```\r\nlsusb\r\nBus 001 Device 005: ID 046d:c093 Logitech, Inc. Advanced Corded Mouse M500s\r\n\r\nls -l /dev/hidraw*\r\ncrw-------. 1 root root 240, 2 Oct 23 13:44 /dev/hidraw2\r\ncrw-------. 1 root root 240, 3 Oct 23 13:24 /dev/hidraw3\r\ncrw-------. 1 root root 240, 4 Oct 23 13:24 /dev/hidraw4\r\ncrw-rw----+ 1 root root 240, 5 Oct 23 14:28 /dev/hidraw5\r\ncrw-rw----+ 1 root root 240, 6 Oct 23 14:28 /dev/hidraw6\r\n\r\nsolaar -dd show\r\n14:38:09,697 DEBUG [MainThread] logitech_receiver.diversion: rule Key assuming action \"pressed\" for \"Brightness Down\"\r\n14:38:09,697 DEBUG [MainThread] logitech_receiver.diversion: rule Key assuming action \"pressed\" for \"Brightness Up\"\r\n14:38:09,698 DEBUG [MainThread] solaar.ui.tray: using AppIndicator3\r\n14:38:09,708 DEBUG [MainThread] hidapi.udev: Found device BID 0003 VID 0000046D PID 0000C093 INTERFACE 0 FILTER 2\r\n14:38:09,709 DEBUG [MainThread] hidapi.udev: Found device BID 0003 VID 0000046D PID 0000C093 INTERFACE 1 FILTER 2\r\nsolaar: error: Traceback (most recent call last):\r\n File \"/usr/lib/python3.9/site-packages/solaar/cli/__init__.py\", line 203, in run\r\n raise Exception('No devices found')\r\nException: No devices found\r\n```\r\nPlease let me know if there is any additional information needed. Thank you.\n", "before_files": [{"content": "# -*- python-mode -*-\n# -*- coding: UTF-8 -*-\n\n## Copyright (C) 2012-2013 Daniel Pavel\n##\n## This program is free software; you can redistribute it and/or modify\n## it under the terms of the GNU General Public License as published by\n## the Free Software Foundation; either version 2 of the License, or\n## (at your option) any later version.\n##\n## This program is distributed in the hope that it will be useful,\n## but WITHOUT ANY WARRANTY; without even the implied warranty of\n## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the\n## GNU General Public License for more details.\n##\n## You should have received a copy of the GNU General Public License along\n## with this program; if not, write to the Free Software Foundation, Inc.,\n## 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom collections import namedtuple\n\nfrom .common import NamedInts as _NamedInts\nfrom .hidpp10 import DEVICE_KIND as _DK\nfrom .hidpp10 import REGISTERS as _R\nfrom .settings_templates import FeatureSettings as _FS\nfrom .settings_templates import RegisterSettings as _RS\n\n#\n#\n#\n\n_DeviceDescriptor = namedtuple(\n '_DeviceDescriptor',\n ('name', 'kind', 'wpid', 'codename', 'protocol', 'registers', 'settings', 'persister', 'usbid', 'interface', 'btid')\n)\ndel namedtuple\n\nDEVICES_WPID = {}\nDEVICES = {}\n\n\ndef _D(\n name,\n codename=None,\n kind=None,\n wpid=None,\n protocol=None,\n registers=None,\n settings=None,\n persister=None,\n usbid=None,\n interface=None,\n btid=None,\n):\n assert name\n\n if kind is None:\n kind = (\n _DK.mouse if 'Mouse' in name else _DK.keyboard if 'Keyboard' in name else _DK.numpad\n if 'Number Pad' in name else _DK.touchpad if 'Touchpad' in name else _DK.trackball if 'Trackball' in name else None\n )\n assert kind is not None, 'descriptor for %s does not have kind set' % name\n\n # heuristic: the codename is the last word in the device name\n if codename is None and ' ' in name:\n codename = name.split(' ')[-1]\n assert codename is not None, 'descriptor for %s does not have codename set' % name\n\n if protocol is not None:\n # ? 2.0 devices should not have any registers\n _kind = lambda s: s._rw.kind if hasattr(s, '_rw') else s._rw_kind\n if protocol < 2.0:\n assert settings is None or all(_kind(s) == 1 for s in settings)\n else:\n assert registers is None\n assert settings is None or all(_kind(s) == 2 for s in settings)\n\n if wpid:\n for w in wpid if isinstance(wpid, tuple) else (wpid, ):\n if protocol > 1.0:\n assert w[0:1] == '4', '%s has protocol %0.1f, wpid %s' % (name, protocol, w)\n else:\n if w[0:1] == '1':\n assert kind == _DK.mouse, '%s has protocol %0.1f, wpid %s' % (name, protocol, w)\n elif w[0:1] == '2':\n assert kind in (_DK.keyboard, _DK.numpad), '%s has protocol %0.1f, wpid %s' % (name, protocol, w)\n\n device_descriptor = _DeviceDescriptor(\n name=name,\n kind=kind,\n wpid=wpid,\n codename=codename,\n protocol=protocol,\n registers=registers,\n settings=settings,\n persister=persister,\n usbid=usbid,\n interface=interface,\n btid=btid\n )\n\n if usbid:\n found = get_usbid(usbid)\n assert found is None, 'duplicate usbid in device descriptors: %s' % (found, )\n if btid:\n found = get_btid(btid)\n assert found is None, 'duplicate btid in device descriptors: %s' % (found, )\n\n assert codename not in DEVICES, 'duplicate codename in device descriptors: %s' % (DEVICES[codename], )\n DEVICES[codename] = device_descriptor\n\n if wpid:\n for w in wpid if isinstance(wpid, tuple) else (wpid, ):\n assert w not in DEVICES_WPID, 'duplicate wpid in device descriptors: %s' % (DEVICES_WPID[w], )\n DEVICES_WPID[w] = device_descriptor\n\n\ndef get_wpid(wpid):\n return DEVICES_WPID.get(wpid)\n\n\ndef get_codename(codename):\n return DEVICES.get(codename)\n\n\ndef get_usbid(usbid):\n if isinstance(usbid, str):\n usbid = int(usbid, 16)\n found = next((x for x in DEVICES.values() if x.usbid == usbid), None)\n return found\n\n\ndef get_btid(btid):\n if isinstance(btid, str):\n btid = int(btid, 
16)\n found = next((x for x in DEVICES.values() if x.btid == btid), None)\n return found\n\n\n#\n#\n#\n\n_PERFORMANCE_MX_DPIS = _NamedInts.range(0x81, 0x8F, lambda x: str((x - 0x80) * 100))\n\n#\n#\n#\n\n# Some HID++1.0 registers and HID++2.0 features can be discovered at run-time,\n# so they are not specified here.\n#\n# For known registers, however, please do specify them here -- avoids\n# unnecessary communication with the device and makes it easier to make certain\n# decisions when querying the device's state.\n#\n# Specify a negative value to blacklist a certain register for a device.\n#\n# Usually, state registers (battery, leds, some features, etc) are only used by\n# HID++ 1.0 devices, while HID++ 2.0 devices use features for the same\n# functionalities. This is a rule that's been discovered by trial-and-error,\n# so it may change in the future.\n\n# Well-known registers (in hex):\n# * 00 - notification flags (all devices)\n# 01 - mice: smooth scrolling\n# 07 - battery status\n# 09 - keyboards: FN swap (if it has the FN key)\n# 0D - battery charge\n# a device may have either the 07 or 0D register available;\n# no known device uses both\n# 51 - leds\n# 63 - mice: DPI\n# * F1 - firmware info\n# Some registers appear to be universally supported, no matter the HID++ version\n# (marked with *). The rest may or may not be supported, and their values may or\n# may not mean the same thing across different devices.\n\n# The 'codename' and 'kind' fields are usually guessed from the device name,\n# but in some cases (like the Logitech Cube) that heuristic fails and they have\n# to be specified.\n#\n# The 'protocol' and 'wpid' fields are optional (they can be discovered at\n# runtime), but specifying them here speeds up device discovery and reduces the\n# USB traffic Solaar has to do to fully identify peripherals.\n# Same goes for HID++ 2.0 feature settings (like _feature_fn_swap).\n#\n# The 'registers' field indicates read-only registers, specifying a state. These\n# are valid (AFAIK) only to HID++ 1.0 devices.\n# The 'settings' field indicates a read/write register; based on them Solaar\n# generates, at runtime, the settings controls in the device panel. 
HID++ 1.0\n# devices may only have register-based settings; HID++ 2.0 devices may only have\n# feature-based settings.\n\n# Keyboards\n\n_D('Wireless Keyboard K230', protocol=2.0, wpid='400D')\n_D('Wireless Keyboard K270(unifying)', protocol=2.0, wpid='4003')\n_D(\n 'Wireless Keyboard MK270',\n protocol=2.0,\n wpid='4023',\n settings=[_FS.fn_swap()],\n)\n_D(\n 'Wireless Keyboard K270',\n protocol=1.0,\n registers=(_R.battery_status, ),\n)\n_D(\n 'Wireless Keyboard MK300',\n protocol=1.0,\n wpid='0068',\n registers=(_R.battery_status, ),\n)\n\n_D(\n 'Wireless Keyboard MK320',\n protocol=1.0,\n wpid='200F',\n registers=(_R.battery_status, ),\n)\n_D('Wireless Keyboard MK330')\n_D(\n 'Wireless Compact Keyboard K340',\n protocol=1.0,\n wpid='2007',\n registers=(_R.battery_status, ),\n)\n_D(\n 'Wireless Wave Keyboard K350',\n protocol=1.0,\n wpid='200A',\n registers=(_R.battery_status, ),\n)\n_D(\n 'Wireless Keyboard K360',\n protocol=2.0,\n wpid='4004',\n settings=[_FS.fn_swap()],\n)\n_D(\n 'Wireless Keyboard K375s',\n protocol=2.0,\n wpid='4061',\n settings=[_FS.k375s_fn_swap()],\n)\n_D(\n 'Wireless Touch Keyboard K400',\n protocol=2.0,\n wpid=('400E', '4024'),\n settings=[_FS.fn_swap()],\n)\n_D(\n 'Wireless Touch Keyboard K400 Plus',\n codename='K400 Plus',\n protocol=2.0,\n wpid='404D',\n settings=[\n _FS.new_fn_swap(),\n _FS.reprogrammable_keys(),\n _FS.disable_keyboard_keys(),\n _FS.gesture2_gestures(),\n _FS.gesture2_params(),\n ],\n)\n_D(\n 'Wireless Keyboard K520',\n protocol=1.0,\n wpid='2011',\n registers=(_R.battery_status, ),\n settings=[\n _RS.fn_swap(),\n ],\n)\n_D(\n 'Number Pad N545',\n protocol=1.0,\n wpid='2006',\n registers=(_R.battery_status, ),\n)\n_D('Wireless Keyboard MK550')\n_D(\n 'Wireless Keyboard MK700',\n protocol=1.0,\n wpid='2008',\n registers=(_R.battery_status, ),\n settings=[\n _RS.fn_swap(),\n ],\n)\n_D(\n 'Wireless Solar Keyboard K750',\n protocol=2.0,\n wpid='4002',\n settings=[_FS.fn_swap()],\n)\n_D(\n 'Wireless Multi-Device Keyboard K780',\n protocol=4.5,\n wpid='405B',\n settings=[_FS.new_fn_swap()],\n)\n_D(\n 'Wireless Illuminated Keyboard K800',\n protocol=1.0,\n wpid='2010',\n registers=(\n _R.battery_status,\n _R.three_leds,\n ),\n settings=[\n _RS.fn_swap(),\n _RS.hand_detection(),\n ],\n)\n_D(\n 'Wireless Illuminated Keyboard K800 new',\n codename='K800 new',\n protocol=4.5,\n wpid='406E',\n settings=[_FS.fn_swap()],\n)\n_D(\n 'Illuminated Living-Room Keyboard K830',\n protocol=2.0,\n wpid='4032',\n settings=[_FS.new_fn_swap()],\n)\n_D('Craft Advanced Keyboard', codename='Craft', protocol=4.5, wpid='4066', btid=0xB350)\n_D('MX Keys Keyboard', codename='MX Keys', protocol=4.5, wpid='408A', btid=0xB35B)\n_D(\n 'Wireless Keyboard S510',\n codename='S510',\n protocol=1.0,\n wpid='0056',\n registers=(_R.battery_status, ),\n)\n_D(\n 'Wireless Keyboard EX100',\n codename='EX100',\n protocol=1.0,\n wpid='0065',\n registers=(_R.battery_status, ),\n)\n\n# Mice\n\n_D('Wireless Mouse M150', protocol=2.0, wpid='4022')\n_D('Wireless Mouse M175', protocol=2.0, wpid='4008')\n_D(\n 'Wireless Mouse M185 new',\n codename='M185n',\n protocol=4.5,\n wpid='4054',\n settings=[\n _FS.lowres_smooth_scroll(),\n _FS.pointer_speed(),\n ]\n)\n# Apparently Logitech uses wpid 4055 for three different mice\n# That's not so strange, as M185 is used on both Unifying-ready and non-Unifying-ready mice\n_D(\n 'Wireless Mouse M185/M235/M310',\n codename='M185/M235/M310',\n protocol=4.5,\n wpid='4055',\n settings=[\n _FS.lowres_smooth_scroll(),\n _FS.pointer_speed(),\n ]\n)\n_D('Wireless 
Mouse M185', protocol=2.0, wpid='4038')\n_D('Wireless Mouse M187', protocol=2.0, wpid='4019')\n_D('Wireless Mouse M215', protocol=1.0, wpid='1020')\n_D(\n 'Wireless Mouse M305',\n protocol=1.0,\n wpid='101F',\n registers=(_R.battery_status, ),\n settings=[\n _RS.side_scroll(),\n ],\n)\n_D(\n 'Wireless Mouse M310',\n protocol=1.0,\n wpid='1024',\n registers=(_R.battery_status, ),\n)\n_D('Wireless Mouse M315')\n_D('Wireless Mouse M317')\n_D('Wireless Mouse M325', protocol=2.0, wpid='400A', settings=[\n _FS.hi_res_scroll(),\n])\n_D('Wireless Mouse M345', protocol=2.0, wpid='4017')\n_D(\n 'Wireless Mouse M350',\n protocol=1.0,\n wpid='101C',\n registers=(_R.battery_charge, ),\n)\n_D('Wireless Mouse Pebble M350', codename='Pebble', protocol=2.0, wpid='4080')\n_D(\n 'Wireless Mouse M505',\n codename='M505/B605',\n protocol=1.0,\n wpid='101D',\n registers=(_R.battery_charge, ),\n settings=[\n _RS.smooth_scroll(),\n _RS.side_scroll(),\n ],\n)\n_D(\n 'Wireless Mouse M510',\n protocol=1.0,\n wpid='1025',\n registers=(_R.battery_status, ),\n settings=[\n # _RS.smooth_scroll(),\t# writing the bit to the register doesn't cause an error, but the bit doesn't turn on\n _RS.side_scroll(),\n ],\n)\n_D('Wireless Mouse M510', codename='M510v2', protocol=2.0, wpid='4051', settings=[\n _FS.lowres_smooth_scroll(),\n])\n_D('Couch Mouse M515', protocol=2.0, wpid='4007')\n_D('Wireless Mouse M525', protocol=2.0, wpid='4013')\n_D(\n 'Multi Device Silent Mouse M585/M590',\n codename='M585/M590',\n protocol=4.5,\n wpid='406B',\n settings=[\n _FS.lowres_smooth_scroll(),\n _FS.pointer_speed(),\n ],\n)\n_D('Touch Mouse M600', protocol=2.0, wpid='401A')\n_D(\n 'Marathon Mouse M705 (M-R0009)',\n codename='M705 (M-R0009)',\n protocol=1.0,\n wpid='101B',\n registers=(_R.battery_charge, ),\n settings=[\n _RS.smooth_scroll(),\n _RS.side_scroll(),\n ],\n)\n_D(\n 'Marathon Mouse M705 (M-R0073)',\n codename='M705 (M-R0073)',\n protocol=4.5,\n wpid='406D',\n settings=[\n _FS.hires_smooth_invert(),\n # _FS.hires_smooth_resolution(),\n _FS.pointer_speed(),\n ]\n)\n_D('Zone Touch Mouse T400')\n_D('Touch Mouse T620', protocol=2.0)\n_D('Logitech Cube', kind=_DK.mouse, protocol=2.0)\n_D(\n 'Anywhere Mouse MX',\n codename='Anywhere MX',\n protocol=1.0,\n wpid='1017',\n registers=(_R.battery_charge, ),\n settings=[\n _RS.smooth_scroll(),\n _RS.side_scroll(),\n ],\n)\n_D(\n 'Anywhere Mouse MX 2',\n codename='Anywhere MX 2',\n protocol=4.5,\n wpid='404A',\n settings=[\n _FS.hires_smooth_invert(),\n # _FS.hires_smooth_resolution(),\n ],\n)\n_D(\n 'Performance Mouse MX',\n codename='Performance MX',\n protocol=1.0,\n wpid='101A',\n registers=(\n _R.battery_status,\n _R.three_leds,\n ),\n settings=[\n _RS.dpi(choices=_PERFORMANCE_MX_DPIS),\n _RS.smooth_scroll(),\n _RS.side_scroll(),\n ],\n)\n\n_D(\n 'Wireless Mouse MX Master',\n codename='MX Master',\n protocol=4.5,\n wpid='4041',\n btid=0xb012,\n settings=[\n _FS.hires_smooth_invert(),\n # _FS.hires_smooth_resolution(),\n ],\n)\n\n_D(\n 'Wireless Mouse MX Master 2S',\n codename='MX Master 2S',\n protocol=4.5,\n wpid='4069',\n btid=0xb019,\n settings=[\n _FS.hires_smooth_invert(),\n # _FS.hires_smooth_resolution(),\n _FS.gesture2_gestures(),\n ],\n)\n\n_D('MX Master 3 Wireless Mouse', codename='MX Master 3', protocol=4.5, wpid='4082', btid=0xb023)\n\n_D('MX Vertical Wireless Mouse', codename='MX Vertical', protocol=4.5, wpid='407B', btid=0xb020, usbid=0xc08a)\n\n_D(\n 'G7 Cordless Laser Mouse',\n codename='G7',\n protocol=1.0,\n wpid='1002',\n registers=(_R.battery_status, ),\n)\n_D(\n 'G700 
Gaming Mouse',\n codename='G700',\n protocol=1.0,\n wpid='1023',\n usbid=0xc06b,\n interface=1,\n registers=(\n _R.battery_status,\n _R.three_leds,\n ),\n settings=[\n _RS.smooth_scroll(),\n _RS.side_scroll(),\n ],\n)\n_D(\n 'G700s Gaming Mouse',\n codename='G700s',\n protocol=1.0,\n wpid='102A',\n usbid=0xc07c,\n interface=1,\n registers=(\n _R.battery_status,\n _R.three_leds,\n ),\n settings=[\n _RS.smooth_scroll(),\n _RS.side_scroll(),\n ],\n)\n\n_D('G102 Lightsync Mouse', codename='G102', usbid=0xc092, interface=1)\n_D('G403 Gaming Mouse', codename='G403', usbid=0xc082)\n_D('G502 Hero Gaming Mouse', codename='G502 Hero', usbid=0xc08d)\n_D('G703 Lightspeed Gaming Mouse', codename='G703', usbid=0xc087)\n_D('G703 Hero Gaming Mouse', codename='G703 Hero', usbid=0xc090)\n_D('G900 Chaos Spectrum Gaming Mouse', codename='G900', usbid=0xc081)\n_D('G903 Lightspeed Gaming Mouse', codename='G903', usbid=0xc086)\n_D('G903 Hero Gaming Mouse', codename='G903 Hero', usbid=0xc091)\n_D('GPro Gaming Mouse', codename='GPro', usbid=0xc088)\n\n_D(\n 'LX5 Cordless Mouse',\n codename='LX5',\n protocol=1.0,\n wpid='0036',\n registers=(_R.battery_status, ),\n)\n_D(\n 'Wireless Mouse M30',\n codename='M30',\n protocol=1.0,\n wpid='0085',\n registers=(_R.battery_status, ),\n)\n_D(\n 'Wireless Mouse EX100',\n codename='EX100m',\n protocol=1.0,\n wpid='003F',\n registers=(_R.battery_status, ),\n # settings=[ _RS.smooth_scroll(), ], # command accepted, but no change in whell action\n)\n\n# Trackballs\n\n_D('Wireless Trackball M570')\n\n# Touchpads\n\n_D('Wireless Rechargeable Touchpad T650', protocol=2.0, wpid='4101')\n_D('Wireless Touchpad', codename='Wireless Touch', protocol=2.0, wpid='4011')\n\n#\n# Classic Nano peripherals (that don't support the Unifying protocol).\n# A wpid is necessary to properly identify them.\n#\n\n_D(\n 'VX Nano Cordless Laser Mouse',\n codename='VX Nano',\n protocol=1.0,\n wpid=('100B', '100F'),\n registers=(_R.battery_charge, ),\n settings=[\n _RS.smooth_scroll(),\n _RS.side_scroll(),\n ],\n)\n_D(\n 'V450 Nano Cordless Laser Mouse',\n codename='V450 Nano',\n protocol=1.0,\n wpid='1011',\n registers=(_R.battery_charge, ),\n)\n_D(\n 'V550 Nano Cordless Laser Mouse',\n codename='V550 Nano',\n protocol=1.0,\n wpid='1013',\n registers=(_R.battery_charge, ),\n settings=[\n _RS.smooth_scroll(),\n _RS.side_scroll(),\n ],\n)\n\n# Mini receiver mice\n\n_D(\n 'MX610 Laser Cordless Mouse',\n codename='MX610',\n protocol=1.0,\n wpid='1001',\n registers=(_R.battery_status, ),\n)\n_D(\n 'MX620 Laser Cordless Mouse',\n codename='MX620',\n protocol=1.0,\n wpid=('100A', '1016'),\n registers=(_R.battery_charge, ),\n)\n_D(\n 'MX610 Left-Handled Mouse',\n codename='MX610L',\n protocol=1.0,\n wpid='1004',\n registers=(_R.battery_status, ),\n)\n_D(\n 'V400 Laser Cordless Mouse',\n codename='V400',\n protocol=1.0,\n wpid='1003',\n registers=(_R.battery_status, ),\n)\n_D(\n 'V450 Laser Cordless Mouse',\n codename='V450',\n protocol=1.0,\n wpid='1005',\n registers=(_R.battery_status, ),\n)\n_D(\n 'VX Revolution',\n codename='VX Revolution',\n kind=_DK.mouse,\n protocol=1.0,\n wpid=('1006', '100D', '0612'), # WPID 0612 from Issue #921\n registers=(_R.battery_charge, ),\n)\n_D(\n 'MX Air',\n codename='MX Air',\n protocol=1.0,\n kind=_DK.mouse,\n wpid=('1007', '100E'),\n registers=(_R.battery_charge, ),\n)\n_D(\n 'MX Revolution',\n codename='MX Revolution',\n protocol=1.0,\n kind=_DK.mouse,\n wpid=('1008', '100C'),\n registers=(_R.battery_charge, ),\n)\n_D(\n 'MX 1100 Cordless Laser Mouse',\n codename='MX 
1100',\n protocol=1.0,\n kind=_DK.mouse,\n wpid='1014',\n registers=(_R.battery_charge, ),\n settings=[\n _RS.smooth_scroll(),\n _RS.side_scroll(),\n ],\n)\n\n# Some exotics...\n\n_D('Fujitsu Sonic Mouse', codename='Sonic', protocol=1.0, wpid='1029')\n", "path": "lib/logitech_receiver/descriptors.py"}], "after_files": [{"content": "# -*- python-mode -*-\n# -*- coding: UTF-8 -*-\n\n## Copyright (C) 2012-2013 Daniel Pavel\n##\n## This program is free software; you can redistribute it and/or modify\n## it under the terms of the GNU General Public License as published by\n## the Free Software Foundation; either version 2 of the License, or\n## (at your option) any later version.\n##\n## This program is distributed in the hope that it will be useful,\n## but WITHOUT ANY WARRANTY; without even the implied warranty of\n## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n## GNU General Public License for more details.\n##\n## You should have received a copy of the GNU General Public License along\n## with this program; if not, write to the Free Software Foundation, Inc.,\n## 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom collections import namedtuple\n\nfrom .common import NamedInts as _NamedInts\nfrom .hidpp10 import DEVICE_KIND as _DK\nfrom .hidpp10 import REGISTERS as _R\nfrom .settings_templates import FeatureSettings as _FS\nfrom .settings_templates import RegisterSettings as _RS\n\n#\n#\n#\n\n_DeviceDescriptor = namedtuple(\n '_DeviceDescriptor',\n ('name', 'kind', 'wpid', 'codename', 'protocol', 'registers', 'settings', 'persister', 'usbid', 'interface', 'btid')\n)\ndel namedtuple\n\nDEVICES_WPID = {}\nDEVICES = {}\n\n\ndef _D(\n name,\n codename=None,\n kind=None,\n wpid=None,\n protocol=None,\n registers=None,\n settings=None,\n persister=None,\n usbid=None,\n interface=None,\n btid=None,\n):\n assert name\n\n if kind is None:\n kind = (\n _DK.mouse if 'Mouse' in name else _DK.keyboard if 'Keyboard' in name else _DK.numpad\n if 'Number Pad' in name else _DK.touchpad if 'Touchpad' in name else _DK.trackball if 'Trackball' in name else None\n )\n assert kind is not None, 'descriptor for %s does not have kind set' % name\n\n # heuristic: the codename is the last word in the device name\n if codename is None and ' ' in name:\n codename = name.split(' ')[-1]\n assert codename is not None, 'descriptor for %s does not have codename set' % name\n\n if protocol is not None:\n # ? 
2.0 devices should not have any registers\n _kind = lambda s: s._rw.kind if hasattr(s, '_rw') else s._rw_kind\n if protocol < 2.0:\n assert settings is None or all(_kind(s) == 1 for s in settings)\n else:\n assert registers is None\n assert settings is None or all(_kind(s) == 2 for s in settings)\n\n if wpid:\n for w in wpid if isinstance(wpid, tuple) else (wpid, ):\n if protocol > 1.0:\n assert w[0:1] == '4', '%s has protocol %0.1f, wpid %s' % (name, protocol, w)\n else:\n if w[0:1] == '1':\n assert kind == _DK.mouse, '%s has protocol %0.1f, wpid %s' % (name, protocol, w)\n elif w[0:1] == '2':\n assert kind in (_DK.keyboard, _DK.numpad), '%s has protocol %0.1f, wpid %s' % (name, protocol, w)\n\n device_descriptor = _DeviceDescriptor(\n name=name,\n kind=kind,\n wpid=wpid,\n codename=codename,\n protocol=protocol,\n registers=registers,\n settings=settings,\n persister=persister,\n usbid=usbid,\n interface=interface,\n btid=btid\n )\n\n if usbid:\n found = get_usbid(usbid)\n assert found is None, 'duplicate usbid in device descriptors: %s' % (found, )\n if btid:\n found = get_btid(btid)\n assert found is None, 'duplicate btid in device descriptors: %s' % (found, )\n\n assert codename not in DEVICES, 'duplicate codename in device descriptors: %s' % (DEVICES[codename], )\n DEVICES[codename] = device_descriptor\n\n if wpid:\n for w in wpid if isinstance(wpid, tuple) else (wpid, ):\n assert w not in DEVICES_WPID, 'duplicate wpid in device descriptors: %s' % (DEVICES_WPID[w], )\n DEVICES_WPID[w] = device_descriptor\n\n\ndef get_wpid(wpid):\n return DEVICES_WPID.get(wpid)\n\n\ndef get_codename(codename):\n return DEVICES.get(codename)\n\n\ndef get_usbid(usbid):\n if isinstance(usbid, str):\n usbid = int(usbid, 16)\n found = next((x for x in DEVICES.values() if x.usbid == usbid), None)\n return found\n\n\ndef get_btid(btid):\n if isinstance(btid, str):\n btid = int(btid, 16)\n found = next((x for x in DEVICES.values() if x.btid == btid), None)\n return found\n\n\n#\n#\n#\n\n_PERFORMANCE_MX_DPIS = _NamedInts.range(0x81, 0x8F, lambda x: str((x - 0x80) * 100))\n\n#\n#\n#\n\n# Some HID++1.0 registers and HID++2.0 features can be discovered at run-time,\n# so they are not specified here.\n#\n# For known registers, however, please do specify them here -- avoids\n# unnecessary communication with the device and makes it easier to make certain\n# decisions when querying the device's state.\n#\n# Specify a negative value to blacklist a certain register for a device.\n#\n# Usually, state registers (battery, leds, some features, etc) are only used by\n# HID++ 1.0 devices, while HID++ 2.0 devices use features for the same\n# functionalities. This is a rule that's been discovered by trial-and-error,\n# so it may change in the future.\n\n# Well-known registers (in hex):\n# * 00 - notification flags (all devices)\n# 01 - mice: smooth scrolling\n# 07 - battery status\n# 09 - keyboards: FN swap (if it has the FN key)\n# 0D - battery charge\n# a device may have either the 07 or 0D register available;\n# no known device uses both\n# 51 - leds\n# 63 - mice: DPI\n# * F1 - firmware info\n# Some registers appear to be universally supported, no matter the HID++ version\n# (marked with *). 
The rest may or may not be supported, and their values may or\n# may not mean the same thing across different devices.\n\n# The 'codename' and 'kind' fields are usually guessed from the device name,\n# but in some cases (like the Logitech Cube) that heuristic fails and they have\n# to be specified.\n#\n# The 'protocol' and 'wpid' fields are optional (they can be discovered at\n# runtime), but specifying them here speeds up device discovery and reduces the\n# USB traffic Solaar has to do to fully identify peripherals.\n# Same goes for HID++ 2.0 feature settings (like _feature_fn_swap).\n#\n# The 'registers' field indicates read-only registers, specifying a state. These\n# are valid (AFAIK) only to HID++ 1.0 devices.\n# The 'settings' field indicates a read/write register; based on them Solaar\n# generates, at runtime, the settings controls in the device panel. HID++ 1.0\n# devices may only have register-based settings; HID++ 2.0 devices may only have\n# feature-based settings.\n\n# Keyboards\n\n_D('Wireless Keyboard K230', protocol=2.0, wpid='400D')\n_D('Wireless Keyboard K270(unifying)', protocol=2.0, wpid='4003')\n_D(\n 'Wireless Keyboard MK270',\n protocol=2.0,\n wpid='4023',\n settings=[_FS.fn_swap()],\n)\n_D(\n 'Wireless Keyboard K270',\n protocol=1.0,\n registers=(_R.battery_status, ),\n)\n_D(\n 'Wireless Keyboard MK300',\n protocol=1.0,\n wpid='0068',\n registers=(_R.battery_status, ),\n)\n\n_D(\n 'Wireless Keyboard MK320',\n protocol=1.0,\n wpid='200F',\n registers=(_R.battery_status, ),\n)\n_D('Wireless Keyboard MK330')\n_D(\n 'Wireless Compact Keyboard K340',\n protocol=1.0,\n wpid='2007',\n registers=(_R.battery_status, ),\n)\n_D(\n 'Wireless Wave Keyboard K350',\n protocol=1.0,\n wpid='200A',\n registers=(_R.battery_status, ),\n)\n_D(\n 'Wireless Keyboard K360',\n protocol=2.0,\n wpid='4004',\n settings=[_FS.fn_swap()],\n)\n_D(\n 'Wireless Keyboard K375s',\n protocol=2.0,\n wpid='4061',\n settings=[_FS.k375s_fn_swap()],\n)\n_D(\n 'Wireless Touch Keyboard K400',\n protocol=2.0,\n wpid=('400E', '4024'),\n settings=[_FS.fn_swap()],\n)\n_D(\n 'Wireless Touch Keyboard K400 Plus',\n codename='K400 Plus',\n protocol=2.0,\n wpid='404D',\n settings=[\n _FS.new_fn_swap(),\n _FS.reprogrammable_keys(),\n _FS.disable_keyboard_keys(),\n _FS.gesture2_gestures(),\n _FS.gesture2_params(),\n ],\n)\n_D(\n 'Wireless Keyboard K520',\n protocol=1.0,\n wpid='2011',\n registers=(_R.battery_status, ),\n settings=[\n _RS.fn_swap(),\n ],\n)\n_D(\n 'Number Pad N545',\n protocol=1.0,\n wpid='2006',\n registers=(_R.battery_status, ),\n)\n_D('Wireless Keyboard MK550')\n_D(\n 'Wireless Keyboard MK700',\n protocol=1.0,\n wpid='2008',\n registers=(_R.battery_status, ),\n settings=[\n _RS.fn_swap(),\n ],\n)\n_D(\n 'Wireless Solar Keyboard K750',\n protocol=2.0,\n wpid='4002',\n settings=[_FS.fn_swap()],\n)\n_D(\n 'Wireless Multi-Device Keyboard K780',\n protocol=4.5,\n wpid='405B',\n settings=[_FS.new_fn_swap()],\n)\n_D(\n 'Wireless Illuminated Keyboard K800',\n protocol=1.0,\n wpid='2010',\n registers=(\n _R.battery_status,\n _R.three_leds,\n ),\n settings=[\n _RS.fn_swap(),\n _RS.hand_detection(),\n ],\n)\n_D(\n 'Wireless Illuminated Keyboard K800 new',\n codename='K800 new',\n protocol=4.5,\n wpid='406E',\n settings=[_FS.fn_swap()],\n)\n_D(\n 'Illuminated Living-Room Keyboard K830',\n protocol=2.0,\n wpid='4032',\n settings=[_FS.new_fn_swap()],\n)\n_D('Craft Advanced Keyboard', codename='Craft', protocol=4.5, wpid='4066', btid=0xB350)\n_D('MX Keys Keyboard', codename='MX Keys', protocol=4.5, wpid='408A', 
btid=0xB35B)\n_D(\n 'Wireless Keyboard S510',\n codename='S510',\n protocol=1.0,\n wpid='0056',\n registers=(_R.battery_status, ),\n)\n_D(\n 'Wireless Keyboard EX100',\n codename='EX100',\n protocol=1.0,\n wpid='0065',\n registers=(_R.battery_status, ),\n)\n\n# Mice\n\n_D('Wireless Mouse M150', protocol=2.0, wpid='4022')\n_D('Wireless Mouse M175', protocol=2.0, wpid='4008')\n_D(\n 'Wireless Mouse M185 new',\n codename='M185n',\n protocol=4.5,\n wpid='4054',\n settings=[\n _FS.lowres_smooth_scroll(),\n _FS.pointer_speed(),\n ]\n)\n# Apparently Logitech uses wpid 4055 for three different mice\n# That's not so strange, as M185 is used on both Unifying-ready and non-Unifying-ready mice\n_D(\n 'Wireless Mouse M185/M235/M310',\n codename='M185/M235/M310',\n protocol=4.5,\n wpid='4055',\n settings=[\n _FS.lowres_smooth_scroll(),\n _FS.pointer_speed(),\n ]\n)\n_D('Wireless Mouse M185', protocol=2.0, wpid='4038')\n_D('Wireless Mouse M187', protocol=2.0, wpid='4019')\n_D('Wireless Mouse M215', protocol=1.0, wpid='1020')\n_D(\n 'Wireless Mouse M305',\n protocol=1.0,\n wpid='101F',\n registers=(_R.battery_status, ),\n settings=[\n _RS.side_scroll(),\n ],\n)\n_D(\n 'Wireless Mouse M310',\n protocol=1.0,\n wpid='1024',\n registers=(_R.battery_status, ),\n)\n_D('Wireless Mouse M315')\n_D('Wireless Mouse M317')\n_D('Wireless Mouse M325', protocol=2.0, wpid='400A', settings=[\n _FS.hi_res_scroll(),\n])\n_D('Wireless Mouse M345', protocol=2.0, wpid='4017')\n_D(\n 'Wireless Mouse M350',\n protocol=1.0,\n wpid='101C',\n registers=(_R.battery_charge, ),\n)\n_D('Wireless Mouse Pebble M350', codename='Pebble', protocol=2.0, wpid='4080')\n_D(\n 'Wireless Mouse M505',\n codename='M505/B605',\n protocol=1.0,\n wpid='101D',\n registers=(_R.battery_charge, ),\n settings=[\n _RS.smooth_scroll(),\n _RS.side_scroll(),\n ],\n)\n_D(\n 'Wireless Mouse M510',\n protocol=1.0,\n wpid='1025',\n registers=(_R.battery_status, ),\n settings=[\n # _RS.smooth_scroll(),\t# writing the bit to the register doesn't cause an error, but the bit doesn't turn on\n _RS.side_scroll(),\n ],\n)\n_D('Wireless Mouse M510', codename='M510v2', protocol=2.0, wpid='4051', settings=[\n _FS.lowres_smooth_scroll(),\n])\n_D('Couch Mouse M515', protocol=2.0, wpid='4007')\n_D('Wireless Mouse M525', protocol=2.0, wpid='4013')\n_D(\n 'Multi Device Silent Mouse M585/M590',\n codename='M585/M590',\n protocol=4.5,\n wpid='406B',\n settings=[\n _FS.lowres_smooth_scroll(),\n _FS.pointer_speed(),\n ],\n)\n_D('Touch Mouse M600', protocol=2.0, wpid='401A')\n_D(\n 'Marathon Mouse M705 (M-R0009)',\n codename='M705 (M-R0009)',\n protocol=1.0,\n wpid='101B',\n registers=(_R.battery_charge, ),\n settings=[\n _RS.smooth_scroll(),\n _RS.side_scroll(),\n ],\n)\n_D(\n 'Marathon Mouse M705 (M-R0073)',\n codename='M705 (M-R0073)',\n protocol=4.5,\n wpid='406D',\n settings=[\n _FS.hires_smooth_invert(),\n # _FS.hires_smooth_resolution(),\n _FS.pointer_speed(),\n ]\n)\n_D('Zone Touch Mouse T400')\n_D('Touch Mouse T620', protocol=2.0)\n_D('Logitech Cube', kind=_DK.mouse, protocol=2.0)\n_D(\n 'Anywhere Mouse MX',\n codename='Anywhere MX',\n protocol=1.0,\n wpid='1017',\n registers=(_R.battery_charge, ),\n settings=[\n _RS.smooth_scroll(),\n _RS.side_scroll(),\n ],\n)\n_D(\n 'Anywhere Mouse MX 2',\n codename='Anywhere MX 2',\n protocol=4.5,\n wpid='404A',\n settings=[\n _FS.hires_smooth_invert(),\n # _FS.hires_smooth_resolution(),\n ],\n)\n_D(\n 'Performance Mouse MX',\n codename='Performance MX',\n protocol=1.0,\n wpid='101A',\n registers=(\n _R.battery_status,\n _R.three_leds,\n 
),\n settings=[\n _RS.dpi(choices=_PERFORMANCE_MX_DPIS),\n _RS.smooth_scroll(),\n _RS.side_scroll(),\n ],\n)\n\n_D(\n 'Wireless Mouse MX Master',\n codename='MX Master',\n protocol=4.5,\n wpid='4041',\n btid=0xb012,\n settings=[\n _FS.hires_smooth_invert(),\n # _FS.hires_smooth_resolution(),\n ],\n)\n\n_D(\n 'Wireless Mouse MX Master 2S',\n codename='MX Master 2S',\n protocol=4.5,\n wpid='4069',\n btid=0xb019,\n settings=[\n _FS.hires_smooth_invert(),\n # _FS.hires_smooth_resolution(),\n _FS.gesture2_gestures(),\n ],\n)\n\n_D('MX Master 3 Wireless Mouse', codename='MX Master 3', protocol=4.5, wpid='4082', btid=0xb023)\n\n_D('MX Vertical Wireless Mouse', codename='MX Vertical', protocol=4.5, wpid='407B', btid=0xb020, usbid=0xc08a)\n\n_D(\n 'G7 Cordless Laser Mouse',\n codename='G7',\n protocol=1.0,\n wpid='1002',\n registers=(_R.battery_status, ),\n)\n_D(\n 'G700 Gaming Mouse',\n codename='G700',\n protocol=1.0,\n wpid='1023',\n usbid=0xc06b,\n interface=1,\n registers=(\n _R.battery_status,\n _R.three_leds,\n ),\n settings=[\n _RS.smooth_scroll(),\n _RS.side_scroll(),\n ],\n)\n_D(\n 'G700s Gaming Mouse',\n codename='G700s',\n protocol=1.0,\n wpid='102A',\n usbid=0xc07c,\n interface=1,\n registers=(\n _R.battery_status,\n _R.three_leds,\n ),\n settings=[\n _RS.smooth_scroll(),\n _RS.side_scroll(),\n ],\n)\n\n_D('G102 Lightsync Mouse', codename='G102', usbid=0xc092, interface=1)\n_D('G403 Gaming Mouse', codename='G403', usbid=0xc082)\n_D('G502 Hero Gaming Mouse', codename='G502 Hero', usbid=0xc08d)\n_D('G703 Lightspeed Gaming Mouse', codename='G703', usbid=0xc087)\n_D('G703 Hero Gaming Mouse', codename='G703 Hero', usbid=0xc090)\n_D('G900 Chaos Spectrum Gaming Mouse', codename='G900', usbid=0xc081)\n_D('G903 Lightspeed Gaming Mouse', codename='G903', usbid=0xc086)\n_D('G903 Hero Gaming Mouse', codename='G903 Hero', usbid=0xc091)\n_D('GPro Gaming Mouse', codename='GPro', usbid=0xc088)\n_D('M500S Mouse', codename='M500S', usbid=0xc093, interface=1)\n\n_D(\n 'LX5 Cordless Mouse',\n codename='LX5',\n protocol=1.0,\n wpid='0036',\n registers=(_R.battery_status, ),\n)\n_D(\n 'Wireless Mouse M30',\n codename='M30',\n protocol=1.0,\n wpid='0085',\n registers=(_R.battery_status, ),\n)\n_D(\n 'Wireless Mouse EX100',\n codename='EX100m',\n protocol=1.0,\n wpid='003F',\n registers=(_R.battery_status, ),\n # settings=[ _RS.smooth_scroll(), ], # command accepted, but no change in whell action\n)\n\n# Trackballs\n\n_D('Wireless Trackball M570')\n\n# Touchpads\n\n_D('Wireless Rechargeable Touchpad T650', protocol=2.0, wpid='4101')\n_D('Wireless Touchpad', codename='Wireless Touch', protocol=2.0, wpid='4011')\n\n#\n# Classic Nano peripherals (that don't support the Unifying protocol).\n# A wpid is necessary to properly identify them.\n#\n\n_D(\n 'VX Nano Cordless Laser Mouse',\n codename='VX Nano',\n protocol=1.0,\n wpid=('100B', '100F'),\n registers=(_R.battery_charge, ),\n settings=[\n _RS.smooth_scroll(),\n _RS.side_scroll(),\n ],\n)\n_D(\n 'V450 Nano Cordless Laser Mouse',\n codename='V450 Nano',\n protocol=1.0,\n wpid='1011',\n registers=(_R.battery_charge, ),\n)\n_D(\n 'V550 Nano Cordless Laser Mouse',\n codename='V550 Nano',\n protocol=1.0,\n wpid='1013',\n registers=(_R.battery_charge, ),\n settings=[\n _RS.smooth_scroll(),\n _RS.side_scroll(),\n ],\n)\n\n# Mini receiver mice\n\n_D(\n 'MX610 Laser Cordless Mouse',\n codename='MX610',\n protocol=1.0,\n wpid='1001',\n registers=(_R.battery_status, ),\n)\n_D(\n 'MX620 Laser Cordless Mouse',\n codename='MX620',\n protocol=1.0,\n wpid=('100A', '1016'),\n 
registers=(_R.battery_charge, ),\n)\n_D(\n 'MX610 Left-Handled Mouse',\n codename='MX610L',\n protocol=1.0,\n wpid='1004',\n registers=(_R.battery_status, ),\n)\n_D(\n 'V400 Laser Cordless Mouse',\n codename='V400',\n protocol=1.0,\n wpid='1003',\n registers=(_R.battery_status, ),\n)\n_D(\n 'V450 Laser Cordless Mouse',\n codename='V450',\n protocol=1.0,\n wpid='1005',\n registers=(_R.battery_status, ),\n)\n_D(\n 'VX Revolution',\n codename='VX Revolution',\n kind=_DK.mouse,\n protocol=1.0,\n wpid=('1006', '100D', '0612'), # WPID 0612 from Issue #921\n registers=(_R.battery_charge, ),\n)\n_D(\n 'MX Air',\n codename='MX Air',\n protocol=1.0,\n kind=_DK.mouse,\n wpid=('1007', '100E'),\n registers=(_R.battery_charge, ),\n)\n_D(\n 'MX Revolution',\n codename='MX Revolution',\n protocol=1.0,\n kind=_DK.mouse,\n wpid=('1008', '100C'),\n registers=(_R.battery_charge, ),\n)\n_D(\n 'MX 1100 Cordless Laser Mouse',\n codename='MX 1100',\n protocol=1.0,\n kind=_DK.mouse,\n wpid='1014',\n registers=(_R.battery_charge, ),\n settings=[\n _RS.smooth_scroll(),\n _RS.side_scroll(),\n ],\n)\n\n# Some exotics...\n\n_D('Fujitsu Sonic Mouse', codename='Sonic', protocol=1.0, wpid='1029')\n", "path": "lib/logitech_receiver/descriptors.py"}]} |
gh_patches_debug_1160 | rasdani/github-patches | git_diff | uclapi__uclapi-128 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Bug] Search People should return HTTP status 400 when query is missing
Currently, the `/search/people` returns a HTTP 200 code when even for an incorrect API request. For example, if you leave out the `query` param it returns the following body:
```json
{ "error": "No query provided", "ok": false}
```
Yet, the HTTP status code is 200, while it should be 400.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `backend/uclapi/search/views.py`
Content:
```
1 from rest_framework.decorators import api_view
2 from django.http import JsonResponse
3
4 from roombookings.decorators import does_token_exist, log_api_call, throttle
5
6 import os
7 import requests
8
9
10 @api_view(['GET'])
11 @does_token_exist
12 @throttle
13 @log_api_call
14 def people(request):
15 if "query" not in request.GET:
16 return JsonResponse({
17 "ok": False,
18 "error": "No query provided"
19 })
20
21 query = request.GET["query"]
22
23 url = (
24 "{}?{}={}"
25 .format(
26 os.environ["SEARCH_API_URL"],
27 os.environ["SEARCH_API_QUERY_PARAMS"],
28 query,
29 )
30 )
31
32 r = requests.get(url)
33
34 results = r.json()["response"]["resultPacket"]["results"][:20]
35
36 def serialize_person(person):
37 return {
38 "name": person["title"],
39 "department": person["metaData"].get("7", ""),
40 "email": person["metaData"].get("E", ""),
41 "status": person["metaData"].get("g", ""),
42 }
43
44 people = [serialize_person(person) for person in results]
45
46 return JsonResponse({
47 "ok": True,
48 "people": people
49 })
50
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/backend/uclapi/search/views.py b/backend/uclapi/search/views.py
--- a/backend/uclapi/search/views.py
+++ b/backend/uclapi/search/views.py
@@ -13,10 +13,12 @@
@log_api_call
def people(request):
if "query" not in request.GET:
- return JsonResponse({
+ response = JsonResponse({
"ok": False,
- "error": "No query provided"
+ "error": "No query provided."
})
+ response.status_code = 400
+ return response
query = request.GET["query"]
| {"golden_diff": "diff --git a/backend/uclapi/search/views.py b/backend/uclapi/search/views.py\n--- a/backend/uclapi/search/views.py\n+++ b/backend/uclapi/search/views.py\n@@ -13,10 +13,12 @@\n @log_api_call\n def people(request):\n if \"query\" not in request.GET:\n- return JsonResponse({\n+ response = JsonResponse({\n \"ok\": False,\n- \"error\": \"No query provided\"\n+ \"error\": \"No query provided.\"\n })\n+ response.status_code = 400\n+ return response\n \n query = request.GET[\"query\"]\n", "issue": "[Bug] Search People should return HTTP status 400 when query is missing\nCurrently, the `/search/people` returns a HTTP 200 code when even for an incorrect API request. For example, if you leave out the `query` param it returns the following body:\r\n\r\n```json\r\n{ \"error\": \"No query provided\", \"ok\": false}\r\n```\r\n\r\nYet, the HTTP status code is 200, while it should be 400.\r\n\n", "before_files": [{"content": "from rest_framework.decorators import api_view\nfrom django.http import JsonResponse\n\nfrom roombookings.decorators import does_token_exist, log_api_call, throttle\n\nimport os\nimport requests\n\n\n@api_view(['GET'])\n@does_token_exist\n@throttle\n@log_api_call\ndef people(request):\n if \"query\" not in request.GET:\n return JsonResponse({\n \"ok\": False,\n \"error\": \"No query provided\"\n })\n\n query = request.GET[\"query\"]\n\n url = (\n \"{}?{}={}\"\n .format(\n os.environ[\"SEARCH_API_URL\"],\n os.environ[\"SEARCH_API_QUERY_PARAMS\"],\n query,\n )\n )\n\n r = requests.get(url)\n\n results = r.json()[\"response\"][\"resultPacket\"][\"results\"][:20]\n\n def serialize_person(person):\n return {\n \"name\": person[\"title\"],\n \"department\": person[\"metaData\"].get(\"7\", \"\"),\n \"email\": person[\"metaData\"].get(\"E\", \"\"),\n \"status\": person[\"metaData\"].get(\"g\", \"\"),\n }\n\n people = [serialize_person(person) for person in results]\n\n return JsonResponse({\n \"ok\": True,\n \"people\": people\n })\n", "path": "backend/uclapi/search/views.py"}], "after_files": [{"content": "from rest_framework.decorators import api_view\nfrom django.http import JsonResponse\n\nfrom roombookings.decorators import does_token_exist, log_api_call, throttle\n\nimport os\nimport requests\n\n\n@api_view(['GET'])\n@does_token_exist\n@throttle\n@log_api_call\ndef people(request):\n if \"query\" not in request.GET:\n response = JsonResponse({\n \"ok\": False,\n \"error\": \"No query provided.\"\n })\n response.status_code = 400\n return response\n\n query = request.GET[\"query\"]\n\n url = (\n \"{}?{}={}\"\n .format(\n os.environ[\"SEARCH_API_URL\"],\n os.environ[\"SEARCH_API_QUERY_PARAMS\"],\n query,\n )\n )\n\n r = requests.get(url)\n\n results = r.json()[\"response\"][\"resultPacket\"][\"results\"][:20]\n\n def serialize_person(person):\n return {\n \"name\": person[\"title\"],\n \"department\": person[\"metaData\"].get(\"7\", \"\"),\n \"email\": person[\"metaData\"].get(\"E\", \"\"),\n \"status\": person[\"metaData\"].get(\"g\", \"\"),\n }\n\n people = [serialize_person(person) for person in results]\n\n return JsonResponse({\n \"ok\": True,\n \"people\": people\n })\n", "path": "backend/uclapi/search/views.py"}]} |
gh_patches_debug_1161 | rasdani/github-patches | git_diff | jupyterhub__jupyterhub-1749 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Jupyterhub remote host issue
Hi,
I'm trying to use jupyterhub from a docker. I was able to get it to run properly on my local using this dockerfile but it doesn't work on the remote host.
```
FROM jupyterhub/jupyterhub
RUN apt-get -y update
RUN conda install -y jupyter
RUN jupyterhub --generate-config
RUN useradd -ms /bin/bash abc
RUN echo "abc:123" | chpasswd
EXPOSE 8000
CMD jupyterhub
```
I'm able to see the login page but when I enter the credentials, I get the following error:
```
[I 2018-03-19 12:48:32.236 JupyterHub app:925] Writing cookie_secret to /srv/jupyterhub/jupyterhub_cookie_secret
[I 2018-03-19 12:48:32.257 alembic.runtime.migration migration:117] Context impl SQLiteImpl.
[I 2018-03-19 12:48:32.257 alembic.runtime.migration migration:122] Will assume non-transactional DDL.
[I 2018-03-19 12:48:32.266 alembic.runtime.migration migration:327] Running stamp_revision -> 56cc5a70207e
[W 2018-03-19 12:48:32.344 JupyterHub app:1008] No admin users, admin interface will be unavailable.
[W 2018-03-19 12:48:32.345 JupyterHub app:1009] Add any administrative users to `c.Authenticator.admin_users` in config.
[I 2018-03-19 12:48:32.345 JupyterHub app:1036] Not using whitelist. Any authenticated user will be allowed.
[I 2018-03-19 12:48:32.374 JupyterHub app:1615] Hub API listening on http://127.0.0.1:8081/hub/
[W 2018-03-19 12:48:32.375 JupyterHub proxy:392]
Generating CONFIGPROXY_AUTH_TOKEN. Restarting the Hub will require restarting the proxy.
Set CONFIGPROXY_AUTH_TOKEN env or JupyterHub.proxy_auth_token config to avoid this message.
[W 2018-03-19 12:48:32.375 JupyterHub proxy:434] Running JupyterHub without SSL. I hope there is SSL termination happening somewhere else...
[I 2018-03-19 12:48:32.375 JupyterHub proxy:436] Starting proxy @ http://*:8000/
12:48:32.558 - info: [ConfigProxy] Proxying http://*:8000 to (no default)
12:48:32.561 - info: [ConfigProxy] Proxy API at http://127.0.0.1:8001/api/routes
[W 2018-03-19 12:48:32.742 JupyterHub proxy:289] Adding missing default route
12:48:32.742 - info: [ConfigProxy] 200 GET /api/routes
[I 2018-03-19 12:48:32.743 JupyterHub proxy:348] Adding default route for Hub: / => http://127.0.0.1:8081
12:48:32.746 - info: [ConfigProxy] Adding route / -> http://127.0.0.1:8081
12:48:32.747 - info: [ConfigProxy] 201 POST /api/routes/
[I 2018-03-19 12:48:32.747 JupyterHub app:1668] JupyterHub is now running at http://:8000/
[I 2018-03-19 12:49:13.084 JupyterHub log:134] 200 GET /hub/login (@::ffff:172.17.0.1) 28.40ms
[E 2018-03-19 12:49:16.642 JupyterHub web:1591] Uncaught exception POST /hub/login?next= (::ffff:172.17.0.1)
HTTPServerRequest(protocol='http', host='localhost:8000', method='POST', uri='/hub/login?next=', version='HTTP/1.1', remote_ip='::ffff:172.17.0.1', headers={'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.140 Safari/537.36', 'Content-Type': 'application/x-www-form-urlencoded', 'X-Forwarded-Host': 'localhost:8000', 'X-Forwarded-For': '::ffff:172.17.0.1', 'X-Forwarded-Port': '8000', 'X-Forwarded-Proto': 'http', 'Upgrade-Insecure-Requests': '1', 'Origin': 'http://localhost:8000', 'Cache-Control': 'max-age=0', 'Accept-Language': 'en-GB,en-US;q=0.9,en;q=0.8', 'Host': 'localhost:8000', 'Referer': 'http://localhost:8000/hub/login', 'Connection': 'close', 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8', 'Content-Length': '30', 'Accept-Encoding': 'gzip, deflate, br'})
Traceback (most recent call last):
File "/opt/conda/lib/python3.5/site-packages/tornado/web.py", line 1512, in _execute
result = yield result
File "<string>", line 6, in _wrap_awaitable
File "/opt/conda/lib/python3.5/site-packages/jupyterhub/handlers/login.py", line 81, in post
user = await self.login_user(data)
File "/opt/conda/lib/python3.5/site-packages/jupyterhub/handlers/base.py", line 414, in login_user
authenticated = await self.authenticate(data)
File "/opt/conda/lib/python3.5/asyncio/futures.py", line 381, in __iter__
yield self # This tells Task to wait for completion.
File "/opt/conda/lib/python3.5/asyncio/tasks.py", line 240, in _step
result = coro.send(None)
File "/opt/conda/lib/python3.5/site-packages/jupyterhub/auth.py", line 228, in get_authenticated_user
authenticated = await self.authenticate(handler, data)
TypeError: object Future can't be used in 'await' expression
[E 2018-03-19 12:49:16.654 JupyterHub log:126] {
"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.140 Safari/537.36",
"Content-Type": "application/x-www-form-urlencoded",
"X-Forwarded-Host": "localhost:8000",
"X-Forwarded-For": "::ffff:172.17.0.1",
"X-Forwarded-Port": "8000",
"X-Forwarded-Proto": "http",
"Upgrade-Insecure-Requests": "1",
"Origin": "http://localhost:8000",
"Cache-Control": "max-age=0",
"Accept-Language": "en-GB,en-US;q=0.9,en;q=0.8",
"Host": "localhost:8000",
"Referer": "http://localhost:8000/hub/login",
"Connection": "close",
"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
"Content-Length": "30",
"Accept-Encoding": "gzip, deflate, br"
}
```
Any ideas what I am doing wrong?
--- END ISSUE ---
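The traceback bottoms out in `get_authenticated_user` awaiting the result of `PAMAuthenticator.authenticate`, which is decorated with `@run_on_executor`; with the tornado version in this environment that call apparently hands back a `concurrent.futures.Future`, which plain `await` cannot consume. A minimal, self-contained reproduction of that mismatch (illustrative names only, not JupyterHub's code):

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(1)


def blocking_authenticate(data):
    # Stand-in for the PAM call that runs in a worker thread.
    return data.get("username")


async def get_authenticated_user(data):
    fut = executor.submit(blocking_authenticate, data)
    # `await fut` would raise "object Future can't be used in 'await' expression"
    # because fut is a concurrent.futures.Future, not an asyncio awaitable.
    return await asyncio.wrap_future(fut)


print(asyncio.run(get_authenticated_user({"username": "abc"})))
```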
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `jupyterhub/auth.py`
Content:
```
1 """Base Authenticator class and the default PAM Authenticator"""
2
3 # Copyright (c) IPython Development Team.
4 # Distributed under the terms of the Modified BSD License.
5
6 from concurrent.futures import ThreadPoolExecutor
7 import pipes
8 import re
9 from shutil import which
10 import sys
11 from subprocess import Popen, PIPE, STDOUT
12
13 try:
14 import pamela
15 except Exception as e:
16 pamela = None
17 _pamela_error = e
18
19 from tornado.concurrent import run_on_executor
20 from tornado import gen
21
22 from traitlets.config import LoggingConfigurable
23 from traitlets import Bool, Set, Unicode, Dict, Any, default, observe
24
25 from .handlers.login import LoginHandler
26 from .utils import maybe_future, url_path_join
27 from .traitlets import Command
28
29
30 def getgrnam(name):
31 """Wrapper function to protect against `grp` not being available
32 on Windows
33 """
34 import grp
35 return grp.getgrnam(name)
36
37
38 class Authenticator(LoggingConfigurable):
39 """Base class for implementing an authentication provider for JupyterHub"""
40
41 db = Any()
42
43 enable_auth_state = Bool(False, config=True,
44 help="""Enable persisting auth_state (if available).
45
46 auth_state will be encrypted and stored in the Hub's database.
47 This can include things like authentication tokens, etc.
48 to be passed to Spawners as environment variables.
49
50 Encrypting auth_state requires the cryptography package.
51
52 Additionally, the JUPYTERHUB_CRYPTO_KEY envirionment variable must
53 contain one (or more, separated by ;) 32B encryption keys.
54 These can be either base64 or hex-encoded.
55
56 If encryption is unavailable, auth_state cannot be persisted.
57
58 New in JupyterHub 0.8
59 """,
60 )
61
62 admin_users = Set(
63 help="""
64 Set of users that will have admin rights on this JupyterHub.
65
66 Admin users have extra privileges:
67 - Use the admin panel to see list of users logged in
68 - Add / remove users in some authenticators
69 - Restart / halt the hub
70 - Start / stop users' single-user servers
71 - Can access each individual users' single-user server (if configured)
72
73 Admin access should be treated the same way root access is.
74
75 Defaults to an empty set, in which case no user has admin access.
76 """
77 ).tag(config=True)
78
79 whitelist = Set(
80 help="""
81 Whitelist of usernames that are allowed to log in.
82
83 Use this with supported authenticators to restrict which users can log in. This is an
84 additional whitelist that further restricts users, beyond whatever restrictions the
85 authenticator has in place.
86
87 If empty, does not perform any additional restriction.
88 """
89 ).tag(config=True)
90
91 @observe('whitelist')
92 def _check_whitelist(self, change):
93 short_names = [name for name in change['new'] if len(name) <= 1]
94 if short_names:
95 sorted_names = sorted(short_names)
96 single = ''.join(sorted_names)
97 string_set_typo = "set('%s')" % single
98 self.log.warning("whitelist contains single-character names: %s; did you mean set([%r]) instead of %s?",
99 sorted_names[:8], single, string_set_typo,
100 )
101
102 custom_html = Unicode(
103 help="""
104 HTML form to be overridden by authenticators if they want a custom authentication form.
105
106 Defaults to an empty string, which shows the default username/password form.
107 """
108 )
109
110 login_service = Unicode(
111 help="""
112 Name of the login service that this authenticator is providing using to authenticate users.
113
114 Example: GitHub, MediaWiki, Google, etc.
115
116 Setting this value replaces the login form with a "Login with <login_service>" button.
117
118 Any authenticator that redirects to an external service (e.g. using OAuth) should set this.
119 """
120 )
121
122 username_pattern = Unicode(
123 help="""
124 Regular expression pattern that all valid usernames must match.
125
126 If a username does not match the pattern specified here, authentication will not be attempted.
127
128 If not set, allow any username.
129 """
130 ).tag(config=True)
131
132 @observe('username_pattern')
133 def _username_pattern_changed(self, change):
134 if not change['new']:
135 self.username_regex = None
136 self.username_regex = re.compile(change['new'])
137
138 username_regex = Any(
139 help="""
140 Compiled regex kept in sync with `username_pattern`
141 """
142 )
143
144 def validate_username(self, username):
145 """Validate a normalized username
146
147 Return True if username is valid, False otherwise.
148 """
149 if '/' in username:
150 # / is not allowed in usernames
151 return False
152 if not username:
153 # empty usernames are not allowed
154 return False
155 if not self.username_regex:
156 return True
157 return bool(self.username_regex.match(username))
158
159 username_map = Dict(
160 help="""Dictionary mapping authenticator usernames to JupyterHub users.
161
162 Primarily used to normalize OAuth user names to local users.
163 """
164 ).tag(config=True)
165
166 delete_invalid_users = Bool(False,
167 help="""Delete any users from the database that do not pass validation
168
169 When JupyterHub starts, `.add_user` will be called
170 on each user in the database to verify that all users are still valid.
171
172 If `delete_invalid_users` is True,
173 any users that do not pass validation will be deleted from the database.
174 Use this if users might be deleted from an external system,
175 such as local user accounts.
176
177 If False (default), invalid users remain in the Hub's database
178 and a warning will be issued.
179 This is the default to avoid data loss due to config changes.
180 """
181 )
182
183 def normalize_username(self, username):
184 """Normalize the given username and return it
185
186 Override in subclasses if usernames need different normalization rules.
187
188 The default attempts to lowercase the username and apply `username_map` if it is
189 set.
190 """
191 username = username.lower()
192 username = self.username_map.get(username, username)
193 return username
194
195 def check_whitelist(self, username):
196 """Check if a username is allowed to authenticate based on whitelist configuration
197
198 Return True if username is allowed, False otherwise.
199 No whitelist means any username is allowed.
200
201 Names are normalized *before* being checked against the whitelist.
202 """
203 if not self.whitelist:
204 # No whitelist means any name is allowed
205 return True
206 return username in self.whitelist
207
208 async def get_authenticated_user(self, handler, data):
209 """Authenticate the user who is attempting to log in
210
211 Returns user dict if successful, None otherwise.
212
213 This calls `authenticate`, which should be overridden in subclasses,
214 normalizes the username if any normalization should be done,
215 and then validates the name in the whitelist.
216
217 This is the outer API for authenticating a user.
218 Subclasses should not override this method.
219
220 The various stages can be overridden separately:
221 - `authenticate` turns formdata into a username
222 - `normalize_username` normalizes the username
223 - `check_whitelist` checks against the user whitelist
224
225 .. versionchanged:: 0.8
226 return dict instead of username
227 """
228 authenticated = await self.authenticate(handler, data)
229 if authenticated is None:
230 return
231 if isinstance(authenticated, dict):
232 if 'name' not in authenticated:
233 raise ValueError("user missing a name: %r" % authenticated)
234 else:
235 authenticated = {
236 'name': authenticated,
237 }
238 authenticated.setdefault('auth_state', None)
239 authenticated.setdefault('admin', None)
240
241 # normalize the username
242 authenticated['name'] = username = self.normalize_username(authenticated['name'])
243 if not self.validate_username(username):
244 self.log.warning("Disallowing invalid username %r.", username)
245 return
246
247 whitelist_pass = await maybe_future(self.check_whitelist(username))
248 if whitelist_pass:
249 return authenticated
250 else:
251 self.log.warning("User %r not in whitelist.", username)
252 return
253
254 async def authenticate(self, handler, data):
255 """Authenticate a user with login form data
256
257 This must be a tornado gen.coroutine.
258 It must return the username on successful authentication,
259 and return None on failed authentication.
260
261 Checking the whitelist is handled separately by the caller.
262
263 .. versionchanged:: 0.8
264 Allow `authenticate` to return a dict containing auth_state.
265
266 Args:
267 handler (tornado.web.RequestHandler): the current request handler
268 data (dict): The formdata of the login form.
269 The default form has 'username' and 'password' fields.
270 Returns:
271 user (str or dict or None): The username of the authenticated user,
272 or None if Authentication failed.
273 The Authenticator may return a dict instead, which MUST have a
274 key 'name' holding the username, and may have two optional keys
275 set - 'auth_state', a dictionary of of auth state that will be
276 persisted; and 'admin', the admin setting value for the user.
277 """
278
279 def pre_spawn_start(self, user, spawner):
280 """Hook called before spawning a user's server
281
282 Can be used to do auth-related startup, e.g. opening PAM sessions.
283 """
284
285 def post_spawn_stop(self, user, spawner):
286 """Hook called after stopping a user container
287
288 Can be used to do auth-related cleanup, e.g. closing PAM sessions.
289 """
290
291 def add_user(self, user):
292 """Hook called when a user is added to JupyterHub
293
294 This is called:
295 - When a user first authenticates
296 - When the hub restarts, for all users.
297
298 This method may be a coroutine.
299
300 By default, this just adds the user to the whitelist.
301
302 Subclasses may do more extensive things, such as adding actual unix users,
303 but they should call super to ensure the whitelist is updated.
304
305 Note that this should be idempotent, since it is called whenever the hub restarts
306 for all users.
307
308 Args:
309 user (User): The User wrapper object
310 """
311 if not self.validate_username(user.name):
312 raise ValueError("Invalid username: %s" % user.name)
313 if self.whitelist:
314 self.whitelist.add(user.name)
315
316 def delete_user(self, user):
317 """Hook called when a user is deleted
318
319 Removes the user from the whitelist.
320 Subclasses should call super to ensure the whitelist is updated.
321
322 Args:
323 user (User): The User wrapper object
324 """
325 self.whitelist.discard(user.name)
326
327 auto_login = Bool(False, config=True,
328 help="""Automatically begin the login process
329
330 rather than starting with a "Login with..." link at `/hub/login`
331
332 To work, `.login_url()` must give a URL other than the default `/hub/login`,
333 such as an oauth handler or another automatic login handler,
334 registered with `.get_handlers()`.
335
336 .. versionadded:: 0.8
337 """
338 )
339
340 def login_url(self, base_url):
341 """Override this when registering a custom login handler
342
343 Generally used by authenticators that do not use simple form-based authentication.
344
345 The subclass overriding this is responsible for making sure there is a handler
346 available to handle the URL returned from this method, using the `get_handlers`
347 method.
348
349 Args:
350 base_url (str): the base URL of the Hub (e.g. /hub/)
351
352 Returns:
353 str: The login URL, e.g. '/hub/login'
354 """
355 return url_path_join(base_url, 'login')
356
357 def logout_url(self, base_url):
358 """Override when registering a custom logout handler
359
360 The subclass overriding this is responsible for making sure there is a handler
361 available to handle the URL returned from this method, using the `get_handlers`
362 method.
363
364 Args:
365 base_url (str): the base URL of the Hub (e.g. /hub/)
366
367 Returns:
368 str: The logout URL, e.g. '/hub/logout'
369 """
370 return url_path_join(base_url, 'logout')
371
372 def get_handlers(self, app):
373 """Return any custom handlers the authenticator needs to register
374
375 Used in conjugation with `login_url` and `logout_url`.
376
377 Args:
378 app (JupyterHub Application):
379 the application object, in case it needs to be accessed for info.
380 Returns:
381 handlers (list):
382 list of ``('/url', Handler)`` tuples passed to tornado.
383 The Hub prefix is added to any URLs.
384 """
385 return [
386 ('/login', LoginHandler),
387 ]
388
389
390 class LocalAuthenticator(Authenticator):
391 """Base class for Authenticators that work with local Linux/UNIX users
392
393 Checks for local users, and can attempt to create them if they exist.
394 """
395
396 create_system_users = Bool(False,
397 help="""
398 If set to True, will attempt to create local system users if they do not exist already.
399
400 Supports Linux and BSD variants only.
401 """
402 ).tag(config=True)
403
404 add_user_cmd = Command(
405 help="""
406 The command to use for creating users as a list of strings
407
408 For each element in the list, the string USERNAME will be replaced with
409 the user's username. The username will also be appended as the final argument.
410
411 For Linux, the default value is:
412
413 ['adduser', '-q', '--gecos', '""', '--disabled-password']
414
415 To specify a custom home directory, set this to:
416
417 ['adduser', '-q', '--gecos', '""', '--home', '/customhome/USERNAME', '--disabled-password']
418
419 This will run the command:
420
421 adduser -q --gecos "" --home /customhome/river --disabled-password river
422
423 when the user 'river' is created.
424 """
425 ).tag(config=True)
426
427 @default('add_user_cmd')
428 def _add_user_cmd_default(self):
429 """Guess the most likely-to-work adduser command for each platform"""
430 if sys.platform == 'darwin':
431 raise ValueError("I don't know how to create users on OS X")
432 elif which('pw'):
433 # Probably BSD
434 return ['pw', 'useradd', '-m']
435 else:
436 # This appears to be the Linux non-interactive adduser command:
437 return ['adduser', '-q', '--gecos', '""', '--disabled-password']
438
439 group_whitelist = Set(
440 help="""
441 Whitelist all users from this UNIX group.
442
443 This makes the username whitelist ineffective.
444 """
445 ).tag(config=True)
446
447 @observe('group_whitelist')
448 def _group_whitelist_changed(self, change):
449 """
450 Log a warning if both group_whitelist and user whitelist are set.
451 """
452 if self.whitelist:
453 self.log.warning(
454 "Ignoring username whitelist because group whitelist supplied!"
455 )
456
457 def check_whitelist(self, username):
458 if self.group_whitelist:
459 return self.check_group_whitelist(username)
460 else:
461 return super().check_whitelist(username)
462
463 def check_group_whitelist(self, username):
464 """
465 If group_whitelist is configured, check if authenticating user is part of group.
466 """
467 if not self.group_whitelist:
468 return False
469 for grnam in self.group_whitelist:
470 try:
471 group = getgrnam(grnam)
472 except KeyError:
473 self.log.error('No such group: [%s]' % grnam)
474 continue
475 if username in group.gr_mem:
476 return True
477 return False
478
479 async def add_user(self, user):
480 """Hook called whenever a new user is added
481
482 If self.create_system_users, the user will attempt to be created if it doesn't exist.
483 """
484 user_exists = await maybe_future(self.system_user_exists(user))
485 if not user_exists:
486 if self.create_system_users:
487 await maybe_future(self.add_system_user(user))
488 else:
489 raise KeyError("User %s does not exist." % user.name)
490
491 await maybe_future(super().add_user(user))
492
493 @staticmethod
494 def system_user_exists(user):
495 """Check if the user exists on the system"""
496 import pwd
497 try:
498 pwd.getpwnam(user.name)
499 except KeyError:
500 return False
501 else:
502 return True
503
504 def add_system_user(self, user):
505 """Create a new local UNIX user on the system.
506
507 Tested to work on FreeBSD and Linux, at least.
508 """
509 name = user.name
510 cmd = [ arg.replace('USERNAME', name) for arg in self.add_user_cmd ] + [name]
511 self.log.info("Creating user: %s", ' '.join(map(pipes.quote, cmd)))
512 p = Popen(cmd, stdout=PIPE, stderr=STDOUT)
513 p.wait()
514 if p.returncode:
515 err = p.stdout.read().decode('utf8', 'replace')
516 raise RuntimeError("Failed to create system user %s: %s" % (name, err))
517
518
519 class PAMAuthenticator(LocalAuthenticator):
520 """Authenticate local UNIX users with PAM"""
521
522 # run PAM in a thread, since it can be slow
523 executor = Any()
524 @default('executor')
525 def _default_executor(self):
526 return ThreadPoolExecutor(1)
527
528 encoding = Unicode('utf8',
529 help="""
530 The text encoding to use when communicating with PAM
531 """
532 ).tag(config=True)
533
534 service = Unicode('login',
535 help="""
536 The name of the PAM service to use for authentication
537 """
538 ).tag(config=True)
539
540 open_sessions = Bool(True,
541 help="""
542 Whether to open a new PAM session when spawners are started.
543
544 This may trigger things like mounting shared filsystems,
545 loading credentials, etc. depending on system configuration,
546 but it does not always work.
547
548 If any errors are encountered when opening/closing PAM sessions,
549 this is automatically set to False.
550 """
551 ).tag(config=True)
552
553 check_account = Bool(True,
554 help="""
555 Whether to check the user's account status via PAM during authentication.
556
557 The PAM account stack performs non-authentication based account
558 management. It is typically used to restrict/permit access to a
559 service and this step is needed to access the host's user access control.
560
561 Disabling this can be dangerous as authenticated but unauthorized users may
562 be granted access and, therefore, arbitrary execution on the system.
563 """
564 ).tag(config=True)
565
566 def __init__(self, **kwargs):
567 if pamela is None:
568 raise _pamela_error from None
569 super().__init__(**kwargs)
570
571 @run_on_executor
572 def authenticate(self, handler, data):
573 """Authenticate with PAM, and return the username if login is successful.
574
575 Return None otherwise.
576 """
577 username = data['username']
578 try:
579 pamela.authenticate(username, data['password'], service=self.service, encoding=self.encoding)
580 except pamela.PAMError as e:
581 if handler is not None:
582 self.log.warning("PAM Authentication failed (%s@%s): %s", username, handler.request.remote_ip, e)
583 else:
584 self.log.warning("PAM Authentication failed: %s", e)
585 else:
586 if not self.check_account:
587 return username
588 try:
589 pamela.check_account(username, service=self.service, encoding=self.encoding)
590 except pamela.PAMError as e:
591 if handler is not None:
592 self.log.warning("PAM Account Check failed (%s@%s): %s", username, handler.request.remote_ip, e)
593 else:
594 self.log.warning("PAM Account Check failed: %s", e)
595 else:
596 return username
597
598 @run_on_executor
599 def pre_spawn_start(self, user, spawner):
600 """Open PAM session for user if so configured"""
601 if not self.open_sessions:
602 return
603 try:
604 pamela.open_session(user.name, service=self.service, encoding=self.encoding)
605 except pamela.PAMError as e:
606 self.log.warning("Failed to open PAM session for %s: %s", user.name, e)
607 self.log.warning("Disabling PAM sessions from now on.")
608 self.open_sessions = False
609
610 @run_on_executor
611 def post_spawn_stop(self, user, spawner):
612 """Close PAM session for user if we were configured to opened one"""
613 if not self.open_sessions:
614 return
615 try:
616 pamela.close_session(user.name, service=self.service, encoding=self.encoding)
617 except pamela.PAMError as e:
618 self.log.warning("Failed to close PAM session for %s: %s", user.name, e)
619 self.log.warning("Disabling PAM sessions from now on.")
620 self.open_sessions = False
621
```
--- END FILES ---
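The listing imports `maybe_future` from the package's `utils` module, and the reference patch further below leans on it to make the possibly-non-awaitable return value of `authenticate` safe to await. A rough sketch of the semantics such a helper needs (an assumption inferred from the import, not the repository's exact implementation):

```python
import asyncio
from concurrent.futures import Future as ConcurrentFuture


def maybe_future(obj):
    """Return an awaitable for coroutines, either kind of future, or plain values."""
    if asyncio.iscoroutine(obj):
        return asyncio.ensure_future(obj)
    if isinstance(obj, ConcurrentFuture):
        return asyncio.wrap_future(obj)  # bridge executor results into asyncio
    if asyncio.isfuture(obj):
        return obj
    # Plain value: hand back an already-resolved future.
    fut = asyncio.get_event_loop().create_future()
    fut.set_result(obj)
    return fut
```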
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/jupyterhub/auth.py b/jupyterhub/auth.py
--- a/jupyterhub/auth.py
+++ b/jupyterhub/auth.py
@@ -225,7 +225,7 @@
.. versionchanged:: 0.8
return dict instead of username
"""
- authenticated = await self.authenticate(handler, data)
+ authenticated = await maybe_future(self.authenticate(handler, data))
if authenticated is None:
return
if isinstance(authenticated, dict):
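For context on the one-line fix above: `get_authenticated_user` awaits the result of `authenticate`, but `PAMAuthenticator.authenticate` is decorated with `@run_on_executor` (see the file excerpt above), so depending on the Tornado version it can hand back a `concurrent.futures.Future`, which a bare `await` rejects with exactly the `TypeError: object Future can't be used in 'await' expression` reported in the issue. Wrapping the call in `maybe_future(...)` normalizes coroutines, executor-backed futures, and plain return values into something awaitable. Below is a self-contained sketch of that distinction; it is illustrative only, and the helper names (`blocking_check`, `broken`, `fixed`) are invented, not JupyterHub's code.
```python
# Illustrative sketch, not JupyterHub's implementation.
import asyncio
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(1)


def blocking_check(username):
    # stands in for the blocking PAM call that runs on the executor
    return username


async def broken(username):
    fut = executor.submit(blocking_check, username)  # concurrent.futures.Future
    return await fut  # TypeError: object Future can't be used in 'await' expression


async def fixed(username):
    fut = executor.submit(blocking_check, username)
    # Wrapping the concurrent future (roughly what a maybe_future-style helper
    # does for this case) yields an asyncio Future, which *is* awaitable.
    return await asyncio.wrap_future(fut)


print(asyncio.run(fixed("river")))  # -> river
```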
| {"golden_diff": "diff --git a/jupyterhub/auth.py b/jupyterhub/auth.py\n--- a/jupyterhub/auth.py\n+++ b/jupyterhub/auth.py\n@@ -225,7 +225,7 @@\n .. versionchanged:: 0.8\n return dict instead of username\n \"\"\"\n- authenticated = await self.authenticate(handler, data)\n+ authenticated = await maybe_future(self.authenticate(handler, data))\n if authenticated is None:\n return\n if isinstance(authenticated, dict):\n", "issue": "Jupyterhub remote host issue\nHi,\r\nI'm trying to use jupyterhub from a docker. I was able to get it to run properly on my local using this dockerfile but it doesn't work on the remote host.\r\n```\r\nFROM jupyterhub/jupyterhub\r\nRUN apt-get -y update\r\nRUN conda install -y jupyter\r\nRUN jupyterhub --generate-config\r\nRUN useradd -ms /bin/bash abc\r\nRUN echo \"abc:123\" | chpasswd\r\nEXPOSE 8000\r\nCMD jupyterhub\r\n```\r\n\r\nI'm able to see the login page but when I enter the credentials, I get the following error:\r\n```\r\n[I 2018-03-19 12:48:32.236 JupyterHub app:925] Writing cookie_secret to /srv/jupyterhub/jupyterhub_cookie_secret\r\n[I 2018-03-19 12:48:32.257 alembic.runtime.migration migration:117] Context impl SQLiteImpl.\r\n[I 2018-03-19 12:48:32.257 alembic.runtime.migration migration:122] Will assume non-transactional DDL.\r\n[I 2018-03-19 12:48:32.266 alembic.runtime.migration migration:327] Running stamp_revision -> 56cc5a70207e\r\n[W 2018-03-19 12:48:32.344 JupyterHub app:1008] No admin users, admin interface will be unavailable.\r\n[W 2018-03-19 12:48:32.345 JupyterHub app:1009] Add any administrative users to `c.Authenticator.admin_users` in config.\r\n[I 2018-03-19 12:48:32.345 JupyterHub app:1036] Not using whitelist. Any authenticated user will be allowed.\r\n[I 2018-03-19 12:48:32.374 JupyterHub app:1615] Hub API listening on http://127.0.0.1:8081/hub/\r\n[W 2018-03-19 12:48:32.375 JupyterHub proxy:392] \r\n Generating CONFIGPROXY_AUTH_TOKEN. Restarting the Hub will require restarting the proxy.\r\n Set CONFIGPROXY_AUTH_TOKEN env or JupyterHub.proxy_auth_token config to avoid this message.\r\n \r\n[W 2018-03-19 12:48:32.375 JupyterHub proxy:434] Running JupyterHub without SSL. 
I hope there is SSL termination happening somewhere else...\r\n[I 2018-03-19 12:48:32.375 JupyterHub proxy:436] Starting proxy @ http://*:8000/\r\n12:48:32.558 - info: [ConfigProxy] Proxying http://*:8000 to (no default)\r\n12:48:32.561 - info: [ConfigProxy] Proxy API at http://127.0.0.1:8001/api/routes\r\n[W 2018-03-19 12:48:32.742 JupyterHub proxy:289] Adding missing default route\r\n12:48:32.742 - info: [ConfigProxy] 200 GET /api/routes \r\n[I 2018-03-19 12:48:32.743 JupyterHub proxy:348] Adding default route for Hub: / => http://127.0.0.1:8081\r\n12:48:32.746 - info: [ConfigProxy] Adding route / -> http://127.0.0.1:8081\r\n12:48:32.747 - info: [ConfigProxy] 201 POST /api/routes/ \r\n[I 2018-03-19 12:48:32.747 JupyterHub app:1668] JupyterHub is now running at http://:8000/\r\n[I 2018-03-19 12:49:13.084 JupyterHub log:134] 200 GET /hub/login (@::ffff:172.17.0.1) 28.40ms\r\n[E 2018-03-19 12:49:16.642 JupyterHub web:1591] Uncaught exception POST /hub/login?next= (::ffff:172.17.0.1)\r\n HTTPServerRequest(protocol='http', host='localhost:8000', method='POST', uri='/hub/login?next=', version='HTTP/1.1', remote_ip='::ffff:172.17.0.1', headers={'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.140 Safari/537.36', 'Content-Type': 'application/x-www-form-urlencoded', 'X-Forwarded-Host': 'localhost:8000', 'X-Forwarded-For': '::ffff:172.17.0.1', 'X-Forwarded-Port': '8000', 'X-Forwarded-Proto': 'http', 'Upgrade-Insecure-Requests': '1', 'Origin': 'http://localhost:8000', 'Cache-Control': 'max-age=0', 'Accept-Language': 'en-GB,en-US;q=0.9,en;q=0.8', 'Host': 'localhost:8000', 'Referer': 'http://localhost:8000/hub/login', 'Connection': 'close', 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8', 'Content-Length': '30', 'Accept-Encoding': 'gzip, deflate, br'})\r\n Traceback (most recent call last):\r\n File \"/opt/conda/lib/python3.5/site-packages/tornado/web.py\", line 1512, in _execute\r\n result = yield result\r\n File \"<string>\", line 6, in _wrap_awaitable\r\n File \"/opt/conda/lib/python3.5/site-packages/jupyterhub/handlers/login.py\", line 81, in post\r\n user = await self.login_user(data)\r\n File \"/opt/conda/lib/python3.5/site-packages/jupyterhub/handlers/base.py\", line 414, in login_user\r\n authenticated = await self.authenticate(data)\r\n File \"/opt/conda/lib/python3.5/asyncio/futures.py\", line 381, in __iter__\r\n yield self # This tells Task to wait for completion.\r\n File \"/opt/conda/lib/python3.5/asyncio/tasks.py\", line 240, in _step\r\n result = coro.send(None)\r\n File \"/opt/conda/lib/python3.5/site-packages/jupyterhub/auth.py\", line 228, in get_authenticated_user\r\n authenticated = await self.authenticate(handler, data)\r\n TypeError: object Future can't be used in 'await' expression\r\n \r\n[E 2018-03-19 12:49:16.654 JupyterHub log:126] {\r\n \"User-Agent\": \"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.140 Safari/537.36\",\r\n \"Content-Type\": \"application/x-www-form-urlencoded\",\r\n \"X-Forwarded-Host\": \"localhost:8000\",\r\n \"X-Forwarded-For\": \"::ffff:172.17.0.1\",\r\n \"X-Forwarded-Port\": \"8000\",\r\n \"X-Forwarded-Proto\": \"http\",\r\n \"Upgrade-Insecure-Requests\": \"1\",\r\n \"Origin\": \"http://localhost:8000\",\r\n \"Cache-Control\": \"max-age=0\",\r\n \"Accept-Language\": \"en-GB,en-US;q=0.9,en;q=0.8\",\r\n \"Host\": \"localhost:8000\",\r\n \"Referer\": \"http://localhost:8000/hub/login\",\r\n \"Connection\": 
\"close\",\r\n \"Accept\": \"text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8\",\r\n \"Content-Length\": \"30\",\r\n \"Accept-Encoding\": \"gzip, deflate, br\"\r\n }\r\n```\r\n\r\nAny ideas what I am doing wrong?\n", "before_files": [{"content": "\"\"\"Base Authenticator class and the default PAM Authenticator\"\"\"\n\n# Copyright (c) IPython Development Team.\n# Distributed under the terms of the Modified BSD License.\n\nfrom concurrent.futures import ThreadPoolExecutor\nimport pipes\nimport re\nfrom shutil import which\nimport sys\nfrom subprocess import Popen, PIPE, STDOUT\n\ntry:\n import pamela\nexcept Exception as e:\n pamela = None\n _pamela_error = e\n\nfrom tornado.concurrent import run_on_executor\nfrom tornado import gen\n\nfrom traitlets.config import LoggingConfigurable\nfrom traitlets import Bool, Set, Unicode, Dict, Any, default, observe\n\nfrom .handlers.login import LoginHandler\nfrom .utils import maybe_future, url_path_join\nfrom .traitlets import Command\n\n\ndef getgrnam(name):\n \"\"\"Wrapper function to protect against `grp` not being available\n on Windows\n \"\"\"\n import grp\n return grp.getgrnam(name)\n\n\nclass Authenticator(LoggingConfigurable):\n \"\"\"Base class for implementing an authentication provider for JupyterHub\"\"\"\n\n db = Any()\n\n enable_auth_state = Bool(False, config=True,\n help=\"\"\"Enable persisting auth_state (if available).\n\n auth_state will be encrypted and stored in the Hub's database.\n This can include things like authentication tokens, etc.\n to be passed to Spawners as environment variables.\n\n Encrypting auth_state requires the cryptography package.\n\n Additionally, the JUPYTERHUB_CRYPTO_KEY envirionment variable must\n contain one (or more, separated by ;) 32B encryption keys.\n These can be either base64 or hex-encoded.\n\n If encryption is unavailable, auth_state cannot be persisted.\n\n New in JupyterHub 0.8\n \"\"\",\n )\n\n admin_users = Set(\n help=\"\"\"\n Set of users that will have admin rights on this JupyterHub.\n\n Admin users have extra privileges:\n - Use the admin panel to see list of users logged in\n - Add / remove users in some authenticators\n - Restart / halt the hub\n - Start / stop users' single-user servers\n - Can access each individual users' single-user server (if configured)\n\n Admin access should be treated the same way root access is.\n\n Defaults to an empty set, in which case no user has admin access.\n \"\"\"\n ).tag(config=True)\n\n whitelist = Set(\n help=\"\"\"\n Whitelist of usernames that are allowed to log in.\n\n Use this with supported authenticators to restrict which users can log in. 
This is an\n additional whitelist that further restricts users, beyond whatever restrictions the\n authenticator has in place.\n\n If empty, does not perform any additional restriction.\n \"\"\"\n ).tag(config=True)\n\n @observe('whitelist')\n def _check_whitelist(self, change):\n short_names = [name for name in change['new'] if len(name) <= 1]\n if short_names:\n sorted_names = sorted(short_names)\n single = ''.join(sorted_names)\n string_set_typo = \"set('%s')\" % single\n self.log.warning(\"whitelist contains single-character names: %s; did you mean set([%r]) instead of %s?\",\n sorted_names[:8], single, string_set_typo,\n )\n\n custom_html = Unicode(\n help=\"\"\"\n HTML form to be overridden by authenticators if they want a custom authentication form.\n\n Defaults to an empty string, which shows the default username/password form.\n \"\"\"\n )\n\n login_service = Unicode(\n help=\"\"\"\n Name of the login service that this authenticator is providing using to authenticate users.\n\n Example: GitHub, MediaWiki, Google, etc.\n\n Setting this value replaces the login form with a \"Login with <login_service>\" button.\n\n Any authenticator that redirects to an external service (e.g. using OAuth) should set this.\n \"\"\"\n )\n\n username_pattern = Unicode(\n help=\"\"\"\n Regular expression pattern that all valid usernames must match.\n\n If a username does not match the pattern specified here, authentication will not be attempted.\n\n If not set, allow any username.\n \"\"\"\n ).tag(config=True)\n\n @observe('username_pattern')\n def _username_pattern_changed(self, change):\n if not change['new']:\n self.username_regex = None\n self.username_regex = re.compile(change['new'])\n\n username_regex = Any(\n help=\"\"\"\n Compiled regex kept in sync with `username_pattern`\n \"\"\"\n )\n\n def validate_username(self, username):\n \"\"\"Validate a normalized username\n\n Return True if username is valid, False otherwise.\n \"\"\"\n if '/' in username:\n # / is not allowed in usernames\n return False\n if not username:\n # empty usernames are not allowed\n return False\n if not self.username_regex:\n return True\n return bool(self.username_regex.match(username))\n\n username_map = Dict(\n help=\"\"\"Dictionary mapping authenticator usernames to JupyterHub users.\n\n Primarily used to normalize OAuth user names to local users.\n \"\"\"\n ).tag(config=True)\n\n delete_invalid_users = Bool(False,\n help=\"\"\"Delete any users from the database that do not pass validation\n\n When JupyterHub starts, `.add_user` will be called\n on each user in the database to verify that all users are still valid.\n\n If `delete_invalid_users` is True,\n any users that do not pass validation will be deleted from the database.\n Use this if users might be deleted from an external system,\n such as local user accounts.\n\n If False (default), invalid users remain in the Hub's database\n and a warning will be issued.\n This is the default to avoid data loss due to config changes.\n \"\"\"\n )\n\n def normalize_username(self, username):\n \"\"\"Normalize the given username and return it\n\n Override in subclasses if usernames need different normalization rules.\n\n The default attempts to lowercase the username and apply `username_map` if it is\n set.\n \"\"\"\n username = username.lower()\n username = self.username_map.get(username, username)\n return username\n\n def check_whitelist(self, username):\n \"\"\"Check if a username is allowed to authenticate based on whitelist configuration\n\n Return True if username is 
allowed, False otherwise.\n No whitelist means any username is allowed.\n\n Names are normalized *before* being checked against the whitelist.\n \"\"\"\n if not self.whitelist:\n # No whitelist means any name is allowed\n return True\n return username in self.whitelist\n\n async def get_authenticated_user(self, handler, data):\n \"\"\"Authenticate the user who is attempting to log in\n\n Returns user dict if successful, None otherwise.\n\n This calls `authenticate`, which should be overridden in subclasses,\n normalizes the username if any normalization should be done,\n and then validates the name in the whitelist.\n\n This is the outer API for authenticating a user.\n Subclasses should not override this method.\n\n The various stages can be overridden separately:\n - `authenticate` turns formdata into a username\n - `normalize_username` normalizes the username\n - `check_whitelist` checks against the user whitelist\n\n .. versionchanged:: 0.8\n return dict instead of username\n \"\"\"\n authenticated = await self.authenticate(handler, data)\n if authenticated is None:\n return\n if isinstance(authenticated, dict):\n if 'name' not in authenticated:\n raise ValueError(\"user missing a name: %r\" % authenticated)\n else:\n authenticated = {\n 'name': authenticated,\n }\n authenticated.setdefault('auth_state', None)\n authenticated.setdefault('admin', None)\n\n # normalize the username\n authenticated['name'] = username = self.normalize_username(authenticated['name'])\n if not self.validate_username(username):\n self.log.warning(\"Disallowing invalid username %r.\", username)\n return\n\n whitelist_pass = await maybe_future(self.check_whitelist(username))\n if whitelist_pass:\n return authenticated\n else:\n self.log.warning(\"User %r not in whitelist.\", username)\n return\n\n async def authenticate(self, handler, data):\n \"\"\"Authenticate a user with login form data\n\n This must be a tornado gen.coroutine.\n It must return the username on successful authentication,\n and return None on failed authentication.\n\n Checking the whitelist is handled separately by the caller.\n\n .. versionchanged:: 0.8\n Allow `authenticate` to return a dict containing auth_state.\n\n Args:\n handler (tornado.web.RequestHandler): the current request handler\n data (dict): The formdata of the login form.\n The default form has 'username' and 'password' fields.\n Returns:\n user (str or dict or None): The username of the authenticated user,\n or None if Authentication failed.\n The Authenticator may return a dict instead, which MUST have a\n key 'name' holding the username, and may have two optional keys\n set - 'auth_state', a dictionary of of auth state that will be\n persisted; and 'admin', the admin setting value for the user.\n \"\"\"\n\n def pre_spawn_start(self, user, spawner):\n \"\"\"Hook called before spawning a user's server\n\n Can be used to do auth-related startup, e.g. opening PAM sessions.\n \"\"\"\n\n def post_spawn_stop(self, user, spawner):\n \"\"\"Hook called after stopping a user container\n\n Can be used to do auth-related cleanup, e.g. 
closing PAM sessions.\n \"\"\"\n\n def add_user(self, user):\n \"\"\"Hook called when a user is added to JupyterHub\n\n This is called:\n - When a user first authenticates\n - When the hub restarts, for all users.\n\n This method may be a coroutine.\n\n By default, this just adds the user to the whitelist.\n\n Subclasses may do more extensive things, such as adding actual unix users,\n but they should call super to ensure the whitelist is updated.\n\n Note that this should be idempotent, since it is called whenever the hub restarts\n for all users.\n\n Args:\n user (User): The User wrapper object\n \"\"\"\n if not self.validate_username(user.name):\n raise ValueError(\"Invalid username: %s\" % user.name)\n if self.whitelist:\n self.whitelist.add(user.name)\n\n def delete_user(self, user):\n \"\"\"Hook called when a user is deleted\n\n Removes the user from the whitelist.\n Subclasses should call super to ensure the whitelist is updated.\n\n Args:\n user (User): The User wrapper object\n \"\"\"\n self.whitelist.discard(user.name)\n\n auto_login = Bool(False, config=True,\n help=\"\"\"Automatically begin the login process\n\n rather than starting with a \"Login with...\" link at `/hub/login`\n\n To work, `.login_url()` must give a URL other than the default `/hub/login`,\n such as an oauth handler or another automatic login handler,\n registered with `.get_handlers()`.\n\n .. versionadded:: 0.8\n \"\"\"\n )\n\n def login_url(self, base_url):\n \"\"\"Override this when registering a custom login handler\n\n Generally used by authenticators that do not use simple form-based authentication.\n\n The subclass overriding this is responsible for making sure there is a handler\n available to handle the URL returned from this method, using the `get_handlers`\n method.\n\n Args:\n base_url (str): the base URL of the Hub (e.g. /hub/)\n\n Returns:\n str: The login URL, e.g. '/hub/login'\n \"\"\"\n return url_path_join(base_url, 'login')\n\n def logout_url(self, base_url):\n \"\"\"Override when registering a custom logout handler\n\n The subclass overriding this is responsible for making sure there is a handler\n available to handle the URL returned from this method, using the `get_handlers`\n method.\n\n Args:\n base_url (str): the base URL of the Hub (e.g. /hub/)\n\n Returns:\n str: The logout URL, e.g. '/hub/logout'\n \"\"\"\n return url_path_join(base_url, 'logout')\n\n def get_handlers(self, app):\n \"\"\"Return any custom handlers the authenticator needs to register\n\n Used in conjugation with `login_url` and `logout_url`.\n\n Args:\n app (JupyterHub Application):\n the application object, in case it needs to be accessed for info.\n Returns:\n handlers (list):\n list of ``('/url', Handler)`` tuples passed to tornado.\n The Hub prefix is added to any URLs.\n \"\"\"\n return [\n ('/login', LoginHandler),\n ]\n\n\nclass LocalAuthenticator(Authenticator):\n \"\"\"Base class for Authenticators that work with local Linux/UNIX users\n\n Checks for local users, and can attempt to create them if they exist.\n \"\"\"\n\n create_system_users = Bool(False,\n help=\"\"\"\n If set to True, will attempt to create local system users if they do not exist already.\n\n Supports Linux and BSD variants only.\n \"\"\"\n ).tag(config=True)\n\n add_user_cmd = Command(\n help=\"\"\"\n The command to use for creating users as a list of strings\n\n For each element in the list, the string USERNAME will be replaced with\n the user's username. 
The username will also be appended as the final argument.\n\n For Linux, the default value is:\n\n ['adduser', '-q', '--gecos', '\"\"', '--disabled-password']\n\n To specify a custom home directory, set this to:\n\n ['adduser', '-q', '--gecos', '\"\"', '--home', '/customhome/USERNAME', '--disabled-password']\n\n This will run the command:\n\n adduser -q --gecos \"\" --home /customhome/river --disabled-password river\n\n when the user 'river' is created.\n \"\"\"\n ).tag(config=True)\n\n @default('add_user_cmd')\n def _add_user_cmd_default(self):\n \"\"\"Guess the most likely-to-work adduser command for each platform\"\"\"\n if sys.platform == 'darwin':\n raise ValueError(\"I don't know how to create users on OS X\")\n elif which('pw'):\n # Probably BSD\n return ['pw', 'useradd', '-m']\n else:\n # This appears to be the Linux non-interactive adduser command:\n return ['adduser', '-q', '--gecos', '\"\"', '--disabled-password']\n\n group_whitelist = Set(\n help=\"\"\"\n Whitelist all users from this UNIX group.\n\n This makes the username whitelist ineffective.\n \"\"\"\n ).tag(config=True)\n\n @observe('group_whitelist')\n def _group_whitelist_changed(self, change):\n \"\"\"\n Log a warning if both group_whitelist and user whitelist are set.\n \"\"\"\n if self.whitelist:\n self.log.warning(\n \"Ignoring username whitelist because group whitelist supplied!\"\n )\n\n def check_whitelist(self, username):\n if self.group_whitelist:\n return self.check_group_whitelist(username)\n else:\n return super().check_whitelist(username)\n\n def check_group_whitelist(self, username):\n \"\"\"\n If group_whitelist is configured, check if authenticating user is part of group.\n \"\"\"\n if not self.group_whitelist:\n return False\n for grnam in self.group_whitelist:\n try:\n group = getgrnam(grnam)\n except KeyError:\n self.log.error('No such group: [%s]' % grnam)\n continue\n if username in group.gr_mem:\n return True\n return False\n\n async def add_user(self, user):\n \"\"\"Hook called whenever a new user is added\n\n If self.create_system_users, the user will attempt to be created if it doesn't exist.\n \"\"\"\n user_exists = await maybe_future(self.system_user_exists(user))\n if not user_exists:\n if self.create_system_users:\n await maybe_future(self.add_system_user(user))\n else:\n raise KeyError(\"User %s does not exist.\" % user.name)\n\n await maybe_future(super().add_user(user))\n\n @staticmethod\n def system_user_exists(user):\n \"\"\"Check if the user exists on the system\"\"\"\n import pwd\n try:\n pwd.getpwnam(user.name)\n except KeyError:\n return False\n else:\n return True\n\n def add_system_user(self, user):\n \"\"\"Create a new local UNIX user on the system.\n\n Tested to work on FreeBSD and Linux, at least.\n \"\"\"\n name = user.name\n cmd = [ arg.replace('USERNAME', name) for arg in self.add_user_cmd ] + [name]\n self.log.info(\"Creating user: %s\", ' '.join(map(pipes.quote, cmd)))\n p = Popen(cmd, stdout=PIPE, stderr=STDOUT)\n p.wait()\n if p.returncode:\n err = p.stdout.read().decode('utf8', 'replace')\n raise RuntimeError(\"Failed to create system user %s: %s\" % (name, err))\n\n\nclass PAMAuthenticator(LocalAuthenticator):\n \"\"\"Authenticate local UNIX users with PAM\"\"\"\n\n # run PAM in a thread, since it can be slow\n executor = Any()\n @default('executor')\n def _default_executor(self):\n return ThreadPoolExecutor(1)\n\n encoding = Unicode('utf8',\n help=\"\"\"\n The text encoding to use when communicating with PAM\n \"\"\"\n ).tag(config=True)\n\n service = 
Unicode('login',\n help=\"\"\"\n The name of the PAM service to use for authentication\n \"\"\"\n ).tag(config=True)\n\n open_sessions = Bool(True,\n help=\"\"\"\n Whether to open a new PAM session when spawners are started.\n\n This may trigger things like mounting shared filsystems,\n loading credentials, etc. depending on system configuration,\n but it does not always work.\n\n If any errors are encountered when opening/closing PAM sessions,\n this is automatically set to False.\n \"\"\"\n ).tag(config=True)\n\n check_account = Bool(True,\n help=\"\"\"\n Whether to check the user's account status via PAM during authentication.\n\n The PAM account stack performs non-authentication based account \n management. It is typically used to restrict/permit access to a \n service and this step is needed to access the host's user access control.\n\n Disabling this can be dangerous as authenticated but unauthorized users may\n be granted access and, therefore, arbitrary execution on the system.\n \"\"\"\n ).tag(config=True)\n\n def __init__(self, **kwargs):\n if pamela is None:\n raise _pamela_error from None\n super().__init__(**kwargs)\n\n @run_on_executor\n def authenticate(self, handler, data):\n \"\"\"Authenticate with PAM, and return the username if login is successful.\n\n Return None otherwise.\n \"\"\"\n username = data['username']\n try:\n pamela.authenticate(username, data['password'], service=self.service, encoding=self.encoding)\n except pamela.PAMError as e:\n if handler is not None:\n self.log.warning(\"PAM Authentication failed (%s@%s): %s\", username, handler.request.remote_ip, e)\n else:\n self.log.warning(\"PAM Authentication failed: %s\", e)\n else:\n if not self.check_account:\n return username\n try:\n pamela.check_account(username, service=self.service, encoding=self.encoding)\n except pamela.PAMError as e:\n if handler is not None:\n self.log.warning(\"PAM Account Check failed (%s@%s): %s\", username, handler.request.remote_ip, e)\n else:\n self.log.warning(\"PAM Account Check failed: %s\", e)\n else:\n return username\n\n @run_on_executor\n def pre_spawn_start(self, user, spawner):\n \"\"\"Open PAM session for user if so configured\"\"\"\n if not self.open_sessions:\n return\n try:\n pamela.open_session(user.name, service=self.service, encoding=self.encoding)\n except pamela.PAMError as e:\n self.log.warning(\"Failed to open PAM session for %s: %s\", user.name, e)\n self.log.warning(\"Disabling PAM sessions from now on.\")\n self.open_sessions = False\n\n @run_on_executor\n def post_spawn_stop(self, user, spawner):\n \"\"\"Close PAM session for user if we were configured to opened one\"\"\"\n if not self.open_sessions:\n return\n try:\n pamela.close_session(user.name, service=self.service, encoding=self.encoding)\n except pamela.PAMError as e:\n self.log.warning(\"Failed to close PAM session for %s: %s\", user.name, e)\n self.log.warning(\"Disabling PAM sessions from now on.\")\n self.open_sessions = False\n", "path": "jupyterhub/auth.py"}], "after_files": [{"content": "\"\"\"Base Authenticator class and the default PAM Authenticator\"\"\"\n\n# Copyright (c) IPython Development Team.\n# Distributed under the terms of the Modified BSD License.\n\nfrom concurrent.futures import ThreadPoolExecutor\nimport pipes\nimport re\nfrom shutil import which\nimport sys\nfrom subprocess import Popen, PIPE, STDOUT\n\ntry:\n import pamela\nexcept Exception as e:\n pamela = None\n _pamela_error = e\n\nfrom tornado.concurrent import run_on_executor\nfrom tornado import gen\n\nfrom 
traitlets.config import LoggingConfigurable\nfrom traitlets import Bool, Set, Unicode, Dict, Any, default, observe\n\nfrom .handlers.login import LoginHandler\nfrom .utils import maybe_future, url_path_join\nfrom .traitlets import Command\n\n\ndef getgrnam(name):\n \"\"\"Wrapper function to protect against `grp` not being available\n on Windows\n \"\"\"\n import grp\n return grp.getgrnam(name)\n\n\nclass Authenticator(LoggingConfigurable):\n \"\"\"Base class for implementing an authentication provider for JupyterHub\"\"\"\n\n db = Any()\n\n enable_auth_state = Bool(False, config=True,\n help=\"\"\"Enable persisting auth_state (if available).\n\n auth_state will be encrypted and stored in the Hub's database.\n This can include things like authentication tokens, etc.\n to be passed to Spawners as environment variables.\n\n Encrypting auth_state requires the cryptography package.\n\n Additionally, the JUPYTERHUB_CRYPTO_KEY envirionment variable must\n contain one (or more, separated by ;) 32B encryption keys.\n These can be either base64 or hex-encoded.\n\n If encryption is unavailable, auth_state cannot be persisted.\n\n New in JupyterHub 0.8\n \"\"\",\n )\n\n admin_users = Set(\n help=\"\"\"\n Set of users that will have admin rights on this JupyterHub.\n\n Admin users have extra privileges:\n - Use the admin panel to see list of users logged in\n - Add / remove users in some authenticators\n - Restart / halt the hub\n - Start / stop users' single-user servers\n - Can access each individual users' single-user server (if configured)\n\n Admin access should be treated the same way root access is.\n\n Defaults to an empty set, in which case no user has admin access.\n \"\"\"\n ).tag(config=True)\n\n whitelist = Set(\n help=\"\"\"\n Whitelist of usernames that are allowed to log in.\n\n Use this with supported authenticators to restrict which users can log in. This is an\n additional whitelist that further restricts users, beyond whatever restrictions the\n authenticator has in place.\n\n If empty, does not perform any additional restriction.\n \"\"\"\n ).tag(config=True)\n\n @observe('whitelist')\n def _check_whitelist(self, change):\n short_names = [name for name in change['new'] if len(name) <= 1]\n if short_names:\n sorted_names = sorted(short_names)\n single = ''.join(sorted_names)\n string_set_typo = \"set('%s')\" % single\n self.log.warning(\"whitelist contains single-character names: %s; did you mean set([%r]) instead of %s?\",\n sorted_names[:8], single, string_set_typo,\n )\n\n custom_html = Unicode(\n help=\"\"\"\n HTML form to be overridden by authenticators if they want a custom authentication form.\n\n Defaults to an empty string, which shows the default username/password form.\n \"\"\"\n )\n\n login_service = Unicode(\n help=\"\"\"\n Name of the login service that this authenticator is providing using to authenticate users.\n\n Example: GitHub, MediaWiki, Google, etc.\n\n Setting this value replaces the login form with a \"Login with <login_service>\" button.\n\n Any authenticator that redirects to an external service (e.g. 
using OAuth) should set this.\n \"\"\"\n )\n\n username_pattern = Unicode(\n help=\"\"\"\n Regular expression pattern that all valid usernames must match.\n\n If a username does not match the pattern specified here, authentication will not be attempted.\n\n If not set, allow any username.\n \"\"\"\n ).tag(config=True)\n\n @observe('username_pattern')\n def _username_pattern_changed(self, change):\n if not change['new']:\n self.username_regex = None\n self.username_regex = re.compile(change['new'])\n\n username_regex = Any(\n help=\"\"\"\n Compiled regex kept in sync with `username_pattern`\n \"\"\"\n )\n\n def validate_username(self, username):\n \"\"\"Validate a normalized username\n\n Return True if username is valid, False otherwise.\n \"\"\"\n if '/' in username:\n # / is not allowed in usernames\n return False\n if not username:\n # empty usernames are not allowed\n return False\n if not self.username_regex:\n return True\n return bool(self.username_regex.match(username))\n\n username_map = Dict(\n help=\"\"\"Dictionary mapping authenticator usernames to JupyterHub users.\n\n Primarily used to normalize OAuth user names to local users.\n \"\"\"\n ).tag(config=True)\n\n delete_invalid_users = Bool(False,\n help=\"\"\"Delete any users from the database that do not pass validation\n\n When JupyterHub starts, `.add_user` will be called\n on each user in the database to verify that all users are still valid.\n\n If `delete_invalid_users` is True,\n any users that do not pass validation will be deleted from the database.\n Use this if users might be deleted from an external system,\n such as local user accounts.\n\n If False (default), invalid users remain in the Hub's database\n and a warning will be issued.\n This is the default to avoid data loss due to config changes.\n \"\"\"\n )\n\n def normalize_username(self, username):\n \"\"\"Normalize the given username and return it\n\n Override in subclasses if usernames need different normalization rules.\n\n The default attempts to lowercase the username and apply `username_map` if it is\n set.\n \"\"\"\n username = username.lower()\n username = self.username_map.get(username, username)\n return username\n\n def check_whitelist(self, username):\n \"\"\"Check if a username is allowed to authenticate based on whitelist configuration\n\n Return True if username is allowed, False otherwise.\n No whitelist means any username is allowed.\n\n Names are normalized *before* being checked against the whitelist.\n \"\"\"\n if not self.whitelist:\n # No whitelist means any name is allowed\n return True\n return username in self.whitelist\n\n async def get_authenticated_user(self, handler, data):\n \"\"\"Authenticate the user who is attempting to log in\n\n Returns user dict if successful, None otherwise.\n\n This calls `authenticate`, which should be overridden in subclasses,\n normalizes the username if any normalization should be done,\n and then validates the name in the whitelist.\n\n This is the outer API for authenticating a user.\n Subclasses should not override this method.\n\n The various stages can be overridden separately:\n - `authenticate` turns formdata into a username\n - `normalize_username` normalizes the username\n - `check_whitelist` checks against the user whitelist\n\n .. 
versionchanged:: 0.8\n return dict instead of username\n \"\"\"\n authenticated = await maybe_future(self.authenticate(handler, data))\n if authenticated is None:\n return\n if isinstance(authenticated, dict):\n if 'name' not in authenticated:\n raise ValueError(\"user missing a name: %r\" % authenticated)\n else:\n authenticated = {\n 'name': authenticated,\n }\n authenticated.setdefault('auth_state', None)\n authenticated.setdefault('admin', None)\n\n # normalize the username\n authenticated['name'] = username = self.normalize_username(authenticated['name'])\n if not self.validate_username(username):\n self.log.warning(\"Disallowing invalid username %r.\", username)\n return\n\n whitelist_pass = await maybe_future(self.check_whitelist(username))\n if whitelist_pass:\n return authenticated\n else:\n self.log.warning(\"User %r not in whitelist.\", username)\n return\n\n async def authenticate(self, handler, data):\n \"\"\"Authenticate a user with login form data\n\n This must be a tornado gen.coroutine.\n It must return the username on successful authentication,\n and return None on failed authentication.\n\n Checking the whitelist is handled separately by the caller.\n\n .. versionchanged:: 0.8\n Allow `authenticate` to return a dict containing auth_state.\n\n Args:\n handler (tornado.web.RequestHandler): the current request handler\n data (dict): The formdata of the login form.\n The default form has 'username' and 'password' fields.\n Returns:\n user (str or dict or None): The username of the authenticated user,\n or None if Authentication failed.\n The Authenticator may return a dict instead, which MUST have a\n key 'name' holding the username, and may have two optional keys\n set - 'auth_state', a dictionary of of auth state that will be\n persisted; and 'admin', the admin setting value for the user.\n \"\"\"\n\n def pre_spawn_start(self, user, spawner):\n \"\"\"Hook called before spawning a user's server\n\n Can be used to do auth-related startup, e.g. opening PAM sessions.\n \"\"\"\n\n def post_spawn_stop(self, user, spawner):\n \"\"\"Hook called after stopping a user container\n\n Can be used to do auth-related cleanup, e.g. 
closing PAM sessions.\n \"\"\"\n\n def add_user(self, user):\n \"\"\"Hook called when a user is added to JupyterHub\n\n This is called:\n - When a user first authenticates\n - When the hub restarts, for all users.\n\n This method may be a coroutine.\n\n By default, this just adds the user to the whitelist.\n\n Subclasses may do more extensive things, such as adding actual unix users,\n but they should call super to ensure the whitelist is updated.\n\n Note that this should be idempotent, since it is called whenever the hub restarts\n for all users.\n\n Args:\n user (User): The User wrapper object\n \"\"\"\n if not self.validate_username(user.name):\n raise ValueError(\"Invalid username: %s\" % user.name)\n if self.whitelist:\n self.whitelist.add(user.name)\n\n def delete_user(self, user):\n \"\"\"Hook called when a user is deleted\n\n Removes the user from the whitelist.\n Subclasses should call super to ensure the whitelist is updated.\n\n Args:\n user (User): The User wrapper object\n \"\"\"\n self.whitelist.discard(user.name)\n\n auto_login = Bool(False, config=True,\n help=\"\"\"Automatically begin the login process\n\n rather than starting with a \"Login with...\" link at `/hub/login`\n\n To work, `.login_url()` must give a URL other than the default `/hub/login`,\n such as an oauth handler or another automatic login handler,\n registered with `.get_handlers()`.\n\n .. versionadded:: 0.8\n \"\"\"\n )\n\n def login_url(self, base_url):\n \"\"\"Override this when registering a custom login handler\n\n Generally used by authenticators that do not use simple form-based authentication.\n\n The subclass overriding this is responsible for making sure there is a handler\n available to handle the URL returned from this method, using the `get_handlers`\n method.\n\n Args:\n base_url (str): the base URL of the Hub (e.g. /hub/)\n\n Returns:\n str: The login URL, e.g. '/hub/login'\n \"\"\"\n return url_path_join(base_url, 'login')\n\n def logout_url(self, base_url):\n \"\"\"Override when registering a custom logout handler\n\n The subclass overriding this is responsible for making sure there is a handler\n available to handle the URL returned from this method, using the `get_handlers`\n method.\n\n Args:\n base_url (str): the base URL of the Hub (e.g. /hub/)\n\n Returns:\n str: The logout URL, e.g. '/hub/logout'\n \"\"\"\n return url_path_join(base_url, 'logout')\n\n def get_handlers(self, app):\n \"\"\"Return any custom handlers the authenticator needs to register\n\n Used in conjugation with `login_url` and `logout_url`.\n\n Args:\n app (JupyterHub Application):\n the application object, in case it needs to be accessed for info.\n Returns:\n handlers (list):\n list of ``('/url', Handler)`` tuples passed to tornado.\n The Hub prefix is added to any URLs.\n \"\"\"\n return [\n ('/login', LoginHandler),\n ]\n\n\nclass LocalAuthenticator(Authenticator):\n \"\"\"Base class for Authenticators that work with local Linux/UNIX users\n\n Checks for local users, and can attempt to create them if they exist.\n \"\"\"\n\n create_system_users = Bool(False,\n help=\"\"\"\n If set to True, will attempt to create local system users if they do not exist already.\n\n Supports Linux and BSD variants only.\n \"\"\"\n ).tag(config=True)\n\n add_user_cmd = Command(\n help=\"\"\"\n The command to use for creating users as a list of strings\n\n For each element in the list, the string USERNAME will be replaced with\n the user's username. 
The username will also be appended as the final argument.\n\n For Linux, the default value is:\n\n ['adduser', '-q', '--gecos', '\"\"', '--disabled-password']\n\n To specify a custom home directory, set this to:\n\n ['adduser', '-q', '--gecos', '\"\"', '--home', '/customhome/USERNAME', '--disabled-password']\n\n This will run the command:\n\n adduser -q --gecos \"\" --home /customhome/river --disabled-password river\n\n when the user 'river' is created.\n \"\"\"\n ).tag(config=True)\n\n @default('add_user_cmd')\n def _add_user_cmd_default(self):\n \"\"\"Guess the most likely-to-work adduser command for each platform\"\"\"\n if sys.platform == 'darwin':\n raise ValueError(\"I don't know how to create users on OS X\")\n elif which('pw'):\n # Probably BSD\n return ['pw', 'useradd', '-m']\n else:\n # This appears to be the Linux non-interactive adduser command:\n return ['adduser', '-q', '--gecos', '\"\"', '--disabled-password']\n\n group_whitelist = Set(\n help=\"\"\"\n Whitelist all users from this UNIX group.\n\n This makes the username whitelist ineffective.\n \"\"\"\n ).tag(config=True)\n\n @observe('group_whitelist')\n def _group_whitelist_changed(self, change):\n \"\"\"\n Log a warning if both group_whitelist and user whitelist are set.\n \"\"\"\n if self.whitelist:\n self.log.warning(\n \"Ignoring username whitelist because group whitelist supplied!\"\n )\n\n def check_whitelist(self, username):\n if self.group_whitelist:\n return self.check_group_whitelist(username)\n else:\n return super().check_whitelist(username)\n\n def check_group_whitelist(self, username):\n \"\"\"\n If group_whitelist is configured, check if authenticating user is part of group.\n \"\"\"\n if not self.group_whitelist:\n return False\n for grnam in self.group_whitelist:\n try:\n group = getgrnam(grnam)\n except KeyError:\n self.log.error('No such group: [%s]' % grnam)\n continue\n if username in group.gr_mem:\n return True\n return False\n\n async def add_user(self, user):\n \"\"\"Hook called whenever a new user is added\n\n If self.create_system_users, the user will attempt to be created if it doesn't exist.\n \"\"\"\n user_exists = await maybe_future(self.system_user_exists(user))\n if not user_exists:\n if self.create_system_users:\n await maybe_future(self.add_system_user(user))\n else:\n raise KeyError(\"User %s does not exist.\" % user.name)\n\n await maybe_future(super().add_user(user))\n\n @staticmethod\n def system_user_exists(user):\n \"\"\"Check if the user exists on the system\"\"\"\n import pwd\n try:\n pwd.getpwnam(user.name)\n except KeyError:\n return False\n else:\n return True\n\n def add_system_user(self, user):\n \"\"\"Create a new local UNIX user on the system.\n\n Tested to work on FreeBSD and Linux, at least.\n \"\"\"\n name = user.name\n cmd = [ arg.replace('USERNAME', name) for arg in self.add_user_cmd ] + [name]\n self.log.info(\"Creating user: %s\", ' '.join(map(pipes.quote, cmd)))\n p = Popen(cmd, stdout=PIPE, stderr=STDOUT)\n p.wait()\n if p.returncode:\n err = p.stdout.read().decode('utf8', 'replace')\n raise RuntimeError(\"Failed to create system user %s: %s\" % (name, err))\n\n\nclass PAMAuthenticator(LocalAuthenticator):\n \"\"\"Authenticate local UNIX users with PAM\"\"\"\n\n # run PAM in a thread, since it can be slow\n executor = Any()\n @default('executor')\n def _default_executor(self):\n return ThreadPoolExecutor(1)\n\n encoding = Unicode('utf8',\n help=\"\"\"\n The text encoding to use when communicating with PAM\n \"\"\"\n ).tag(config=True)\n\n service = 
Unicode('login',\n help=\"\"\"\n The name of the PAM service to use for authentication\n \"\"\"\n ).tag(config=True)\n\n open_sessions = Bool(True,\n help=\"\"\"\n Whether to open a new PAM session when spawners are started.\n\n This may trigger things like mounting shared filsystems,\n loading credentials, etc. depending on system configuration,\n but it does not always work.\n\n If any errors are encountered when opening/closing PAM sessions,\n this is automatically set to False.\n \"\"\"\n ).tag(config=True)\n\n check_account = Bool(True,\n help=\"\"\"\n Whether to check the user's account status via PAM during authentication.\n\n The PAM account stack performs non-authentication based account \n management. It is typically used to restrict/permit access to a \n service and this step is needed to access the host's user access control.\n\n Disabling this can be dangerous as authenticated but unauthorized users may\n be granted access and, therefore, arbitrary execution on the system.\n \"\"\"\n ).tag(config=True)\n\n def __init__(self, **kwargs):\n if pamela is None:\n raise _pamela_error from None\n super().__init__(**kwargs)\n\n @run_on_executor\n def authenticate(self, handler, data):\n \"\"\"Authenticate with PAM, and return the username if login is successful.\n\n Return None otherwise.\n \"\"\"\n username = data['username']\n try:\n pamela.authenticate(username, data['password'], service=self.service, encoding=self.encoding)\n except pamela.PAMError as e:\n if handler is not None:\n self.log.warning(\"PAM Authentication failed (%s@%s): %s\", username, handler.request.remote_ip, e)\n else:\n self.log.warning(\"PAM Authentication failed: %s\", e)\n else:\n if not self.check_account:\n return username\n try:\n pamela.check_account(username, service=self.service, encoding=self.encoding)\n except pamela.PAMError as e:\n if handler is not None:\n self.log.warning(\"PAM Account Check failed (%s@%s): %s\", username, handler.request.remote_ip, e)\n else:\n self.log.warning(\"PAM Account Check failed: %s\", e)\n else:\n return username\n\n @run_on_executor\n def pre_spawn_start(self, user, spawner):\n \"\"\"Open PAM session for user if so configured\"\"\"\n if not self.open_sessions:\n return\n try:\n pamela.open_session(user.name, service=self.service, encoding=self.encoding)\n except pamela.PAMError as e:\n self.log.warning(\"Failed to open PAM session for %s: %s\", user.name, e)\n self.log.warning(\"Disabling PAM sessions from now on.\")\n self.open_sessions = False\n\n @run_on_executor\n def post_spawn_stop(self, user, spawner):\n \"\"\"Close PAM session for user if we were configured to opened one\"\"\"\n if not self.open_sessions:\n return\n try:\n pamela.close_session(user.name, service=self.service, encoding=self.encoding)\n except pamela.PAMError as e:\n self.log.warning(\"Failed to close PAM session for %s: %s\", user.name, e)\n self.log.warning(\"Disabling PAM sessions from now on.\")\n self.open_sessions = False\n", "path": "jupyterhub/auth.py"}]} |
gh_patches_debug_1162 | rasdani/github-patches | git_diff | napari__napari-1293 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
UnboundLocalError in create_worker
## 🐛 Bug
When creating a local worker with multiple callbacks connected to a single signal (multiple yield connections), I get an `UnboundLocalError`; a minimal sketch of this kind of call follows the traceback below:
```pytb
-------------------------------------------------------------------------
UnboundLocalError Traceback (most recent call last)
<ipython-input-23-1749d5f75cac> in <module>
52
53 viewer.window.add_dock_widget(loss_canvas)
---> 54 worker = train(model, data_loader, 500)
~/projects/napari/napari/_qt/threading.py in worker_function(*args, **kwargs)
628 kwargs['_worker_class'] = kwargs.get('_worker_class', worker_class)
629 kwargs['_ignore_errors'] = kwargs.get('_ignore_errors', ignore_errors)
--> 630 return create_worker(function, *args, **kwargs,)
631
632 return worker_function
~/projects/napari/napari/_qt/threading.py in create_worker(func, _start_thread, _connect, _worker_class, _ignore_errors, *args, **kwargs)
505 if not isinstance(val, (tuple, list)):
506 _val = [val]
--> 507 for v in _val:
508 if not callable(v):
509 raise TypeError(
UnboundLocalError: local variable '_val' referenced before assignment
```
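The `train` function itself is not shown in the issue, so the following is a hypothetical minimal sketch of the kind of call that reaches the failing `for v in _val` loop: passing a *list* of callbacks for one signal via `_connect`. The names `long_task`, `update_plot`, and `update_status` are invented for illustration. Because `val` is already a list, the `_val = [val]` branch at line 506 is skipped and `_val` is never assigned.
```python
# Hypothetical repro sketch; the generator and callback names are invented.
from napari._qt.threading import create_worker


def long_task():
    # stand-in for a long-running training loop that yields progress values
    for i in range(3):
        yield i


def update_plot(value):
    print("plot:", value)


def update_status(value):
    print("status:", value)


# A list of callables for one signal means ``val`` is already a list inside
# ``create_worker``; the ``_val = [val]`` branch is skipped, so the later
# ``for v in _val`` loop raises UnboundLocalError.
worker = create_worker(
    long_task,
    _connect={"yielded": [update_plot, update_status]},
)
```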
napari info:
```
napari: 0.3.2rc0
Platform: Linux-4.15.0-29-generic-x86_64-with-glibc2.10
Python: 3.8.2 (default, Mar 26 2020, 15:53:00) [GCC 7.3.0]
Qt: 5.14.2
PyQt5: 5.14.2
NumPy: 1.19.0rc1
SciPy: 1.4.1
Dask: 2.15.0
VisPy: 0.6.4
GL version: 3.0 Mesa 19.2.8
MAX_TEXTURE_SIZE: 16384
Plugins:
- napari-plugin-engine: 0.1.4
- ome_zarr: 0.0.7
- svg: 0.1.2
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `napari/_qt/threading.py`
Content:
```
1 import inspect
2 import time
3 from functools import wraps
4 from typing import Any, Callable, Dict, Optional, Sequence, Set, Type, Union
5
6 import toolz as tz
7 from qtpy.QtCore import QObject, QRunnable, QThread, QThreadPool, Signal, Slot
8
9
10 def as_generator_function(func: Callable) -> Callable:
11 """Turns a regular function (single return) into a generator function."""
12
13 @wraps(func)
14 def genwrapper(*args, **kwargs):
15 yield
16 return func(*args, **kwargs)
17
18 return genwrapper
19
20
21 class WorkerBaseSignals(QObject):
22
23 started = Signal() # emitted when the work is started
24 finished = Signal() # emitted when the work is finished
25 returned = Signal(object) # emitted with return value
26 errored = Signal(object) # emitted with error object on Exception
27
28
29 class WorkerBase(QRunnable):
30 """Base class for creating a Worker that can run in another thread.
31
32 Parameters
33 ----------
34 SignalsClass : type, optional
35 A QObject subclass that contains signals, by default WorkerBaseSignals
36 """
37
38 #: A set of Workers. Add to set using :meth:`WorkerBase.start`
39 _worker_set: Set['WorkerBase'] = set()
40
41 def __init__(
42 self, *args, SignalsClass: Type[QObject] = WorkerBaseSignals, **kwargs
43 ) -> None:
44 super().__init__()
45 self._abort_requested = False
46 self._running = False
47 self._signals = SignalsClass()
48
49 def __getattr__(self, name):
50 """Pass through attr requests to signals to simplify connection API.
51
52 The goal is to enable ``worker.signal.connect`` instead of
53 ``worker.signals.yielded.connect``. Because multiple inheritance of Qt
54 classes is not well supported in PyQt, we have to use composition here
55 (signals are provided by QObjects, and QRunnable is not a QObject). So
56 this passthrough allows us to connect to signals on the ``_signals``
57 object.
58 """
59 # the Signal object is actually a class attribute
60 attr = getattr(self._signals.__class__, name, None)
61 if isinstance(attr, Signal):
62 # but what we need to connect to is the instantiated signal
63 # (which is of type `SignalInstance` in PySide and
64 # `pyqtBoundSignal` in PyQt)
65 return getattr(self._signals, name)
66
67 def quit(self) -> None:
68 """Send a request to abort the worker.
69
70 .. note::
71
72 It is entirely up to subclasses to honor this method by checking
73 ``self.abort_requested`` periodically in their ``worker.work``
74 method, and exiting if ``True``.
75 """
76 self._abort_requested = True
77
78 @property
79 def abort_requested(self) -> bool:
80 """Whether the worker has been requested to stop."""
81 return self._abort_requested
82
83 @property
84 def is_running(self) -> bool:
85 """Whether the worker has been started"""
86 return self._running
87
88 @Slot()
89 def run(self):
90 """Start the worker.
91
92 The end-user should never need to call this function.
93 But it cannot be made private or renamed, since it is called by Qt.
94
95 The order of method calls when starting a worker is:
96
97 .. code-block:: none
98
99 calls QThreadPool.globalInstance().start(worker)
100 | triggered by the QThreadPool.start() method
101 | | called by worker.run
102 | | |
103 V V V
104 worker.start -> worker.run -> worker.work
105
106 **This** is the function that actually gets called when calling
107 :func:`QThreadPool.start(worker)`. It simply wraps the :meth:`work`
108 method, and emits a few signals. Subclasses should NOT override this
109 method (except with good reason), and instead should implement
110 :meth:`work`.
111 """
112 self.started.emit()
113 self._running = True
114 try:
115 result = self.work()
116 self.returned.emit(result)
117 except Exception as exc:
118 self.errored.emit(exc)
119 self.finished.emit()
120
121 def work(self):
122 """Main method to execute the worker.
123
124 The end-user should never need to call this function.
125 But subclasses must implement this method (See
126 :meth:`GeneratorFunction.work` for an example implementation).
127 Minimally, it should check ``self.abort_requested`` periodically and
128 exit if True.
129
130 Examples
131 --------
132
133 .. code-block:: python
134
135 class MyWorker(WorkerBase):
136
137 def work(self):
138 i = 0
139 while True:
140 if self.abort_requested:
141 self.aborted.emit()
142 break
143 i += 1
144 if i > max_iters:
145 break
146 time.sleep(0.5)
147 """
148 raise NotImplementedError(
149 f'"{self.__class__.__name__}" failed to define work() method'
150 )
151
152 def start(self):
153 """Start this worker in a thread and add it to the global threadpool.
154
155 The order of method calls when starting a worker is:
156
157 .. code-block:: none
158
159 calls QThreadPool.globalInstance().start(worker)
160 | triggered by the QThreadPool.start() method
161 | | called by worker.run
162 | | |
163 V V V
164 worker.start -> worker.run -> worker.work
165 """
166 if self in WorkerBase._worker_set:
167 raise RuntimeError('This worker is already started!')
168
169 # This will raise a RunTimeError if the worker is already deleted
170 repr(self)
171
172 WorkerBase._worker_set.add(self)
173 self.finished.connect(lambda: WorkerBase._worker_set.discard(self))
174 QThreadPool.globalInstance().start(self)
175
176
177 class FunctionWorker(WorkerBase):
178 """QRunnable with signals that wraps a simple long-running function.
179
180 .. note::
181
182 ``FunctionWorker`` does not provide a way to stop a very long-running
183 function (e.g. ``time.sleep(10000)``). So whenever possible, it is
184 better to implement your long running function as a generator that
185 yields periodically, and use the :class:`GeneratorWorker` instead.
186
187 Parameters
188 ----------
189 func : Callable
190 A function to call in another thread
191 *args
192 will be passed to the function
193 **kwargs
194 will be passed to the function
195
196 Raises
197 ------
198 TypeError
199 If ``func`` is a generator function and not a regular function.
200 """
201
202 def __init__(self, func: Callable, *args, **kwargs):
203 if inspect.isgeneratorfunction(func):
204 raise TypeError(
205 f"Generator function {func} cannot be used with "
206 "FunctionWorker, use GeneratorWorker instead"
207 )
208 super().__init__()
209
210 self._func = func
211 self._args = args
212 self._kwargs = kwargs
213
214 def work(self):
215 return self._func(*self._args, **self._kwargs)
216
217
218 class GeneratorWorkerSignals(WorkerBaseSignals):
219
220 yielded = Signal(object) # emitted with yielded values (if generator used)
221 paused = Signal() # emitted when a running job has successfully paused
222 resumed = Signal() # emitted when a paused job has successfully resumed
223 aborted = Signal() # emitted when a running job is successfully aborted
224
225
226 class GeneratorWorker(WorkerBase):
227 """QRunnable with signals that wraps a long-running generator.
228
229 Provides a convenient way to run a generator function in another thread,
230 while allowing 2-way communication between threads, using plain-python
231 generator syntax in the original function.
232
233 Parameters
234 ----------
235 func : callable
236 The function being run in another thread. May be a generator function.
237 SignalsClass : type, optional
238 A QObject subclass that contains signals, by default
239 GeneratorWorkerSignals
240 *args
241 Will be passed to func on instantiation
242 **kwargs
243 Will be passed to func on instantiation
244 """
245
246 def __init__(
247 self,
248 func: Callable,
249 *args,
250 SignalsClass: Type[QObject] = GeneratorWorkerSignals,
251 **kwargs,
252 ):
253 if not inspect.isgeneratorfunction(func):
254 raise TypeError(
255 f"Regular function {func} cannot be used with "
256 "GeneratorWorker, use FunctionWorker instead"
257 )
258 super().__init__(SignalsClass=SignalsClass)
259
260 self._gen = func(*args, **kwargs)
261 self._incoming_value = None
262 self._pause_requested = False
263 self._resume_requested = False
264 self._paused = False
265 # polling interval: ONLY relevant if the user paused a running worker
266 self._pause_interval = 0.01
267
268 def work(self) -> None:
269 """Core event loop that calls the original function.
270
271 Enters a continual loop, yielding and returning from the original
272 function. Checks for various events (quit, pause, resume, etc...).
273 (To clarify: we are creating a rudimentary event loop here because
274 there IS NO Qt event loop running in the other thread to hook into)
275 """
276 while True:
277 if self.abort_requested:
278 self.aborted.emit()
279 break
280 if self._paused:
281 if self._resume_requested:
282 self._paused = False
283 self._resume_requested = False
284 self.resumed.emit()
285 else:
286 time.sleep(self._pause_interval)
287 continue
288 elif self._pause_requested:
289 self._paused = True
290 self._pause_requested = False
291 self.paused.emit()
292 continue
293 try:
294 self.yielded.emit(self._gen.send(self._next_value()))
295 except StopIteration as exc:
296 return exc.value
297
298 def send(self, value: Any):
299 """Send a value into the function (if a generator was used)."""
300 self._incoming_value = value
301
302 def _next_value(self) -> Any:
303 out = None
304 if self._incoming_value is not None:
305 out = self._incoming_value
306 self._incoming_value = None
307 return out
308
309 @property
310 def is_paused(self) -> bool:
311 """Whether the worker is currently paused."""
312 return self._paused
313
314 def toggle_pause(self) -> None:
315 """Request to pause the worker if playing or resume if paused."""
316 if self.is_paused:
317 self._resume_requested = True
318 else:
319 self._pause_requested = True
320
321 def pause(self) -> None:
322 """Request to pause the worker."""
323 if not self.is_paused:
324 self._pause_requested = True
325
326 def resume(self) -> None:
327 """Send a request to resume the worker.
328 """
329 if self.is_paused:
330 self._resume_requested = True
331
332
333 ############################################################################
334
335 # public API
336
337 # For now, the next three functions simply wrap the QThreadPool API, and allow
338 # us to track and cleanup all workers that were started with ``start_worker``,
339 # provided that ``wait_for_workers_to_quit`` is called at shutdown.
340 # In the future, this could wrap any API, or a pure python threadpool.
341
342
343 def set_max_thread_count(num: int):
344 """Set the maximum number of threads used by the thread pool.
345
346 Note: The thread pool will always use at least 1 thread, even if
347 maxThreadCount limit is zero or negative.
348 """
349 QThreadPool.globalInstance().setMaxThreadCount(num)
350
351
352 def wait_for_workers_to_quit(msecs: int = None):
353 """Ask all workers to quit, and wait up to `msec` for quit.
354
355 Attempts to clean up all running workers by calling ``worker.quit()``
356 method. Any workers in the ``WorkerBase._worker_set`` set will have this
357 method.
358
359 By default, this function will block indefinitely, until worker threads
360 finish. If a timeout is provided, a ``RuntimeError`` will be raised if
361     the workers do not gracefully exit in the time requested, but the threads
362 will NOT be killed. It is (currently) left to the user to use their OS
363 to force-quit rogue threads.
364
365 .. important::
366
367 If the user does not put any yields in their function, and the function
368 is super long, it will just hang... For instance, there's no graceful
369 way to kill this thread in python:
370
371 .. code-block:: python
372
373 @thread_worker
374 def ZZZzzz():
375 time.sleep(10000000)
376
377 This is why it's always advisable to use a generator that periodically
378 yields for long-running computations in another thread.
379
380 See `this stack-overflow post
381 <https://stackoverflow.com/questions/323972/is-there-any-way-to-kill-a-thread>`_
382 for a good discussion on the difficulty of killing a rogue python thread:
383
384 Parameters
385 ----------
386 msecs : int, optional
387 Waits up to msecs milliseconds for all threads to exit and removes all
388 threads from the thread pool. If msecs is `None` (the default), the
389 timeout is ignored (waits for the last thread to exit).
390
391 Raises
392 -------
393 RuntimeError
394 If a timeout is provided and workers do not quit successfully within
395         the time allotted.
396 """
397 for worker in WorkerBase._worker_set:
398 worker.quit()
399
400 msecs = msecs if msecs is not None else -1
401 if not QThreadPool.globalInstance().waitForDone(msecs):
402 raise RuntimeError(
403             f"Workers did not quit gracefully in the time allotted ({msecs} ms)"
404 )
405
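# A minimal shutdown sketch for ``wait_for_workers_to_quit`` (the 5000 ms
# timeout is an arbitrary choice):
#
#     try:
#         wait_for_workers_to_quit(msecs=5000)
#     except RuntimeError:
#         pass  # some workers did not exit in time; their threads are NOT killed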
406
407 def active_thread_count() -> int:
408 """Return the number of active threads in the global ThreadPool."""
409 return QThreadPool.globalInstance().activeThreadCount()
410
411
412 #############################################################################
413
414 # convenience functions for creating Worker instances
415
416
417 def create_worker(
418 func: Callable,
419 *args,
420 _start_thread: Optional[bool] = None,
421 _connect: Optional[Dict[str, Union[Callable, Sequence[Callable]]]] = None,
422 _worker_class: Optional[Type[WorkerBase]] = None,
423 _ignore_errors: bool = False,
424 **kwargs,
425 ) -> WorkerBase:
426 """Convenience function to start a function in another thread.
427
428     By default, uses :class:`FunctionWorker` for regular functions and
429     :class:`GeneratorWorker` for generator functions, but a custom subclass of
430     :class:`WorkerBase` (which defines the standard signals and run method) may be provided.
431
432 Parameters
433 ----------
434 func : Callable
435 The function to call in another thread.
436 _start_thread : bool, optional
437         Whether to immediately start the thread. If False, the returned worker
438         must be manually started with ``worker.start()``. By default it will be
439         ``False`` if the ``_connect`` argument is ``None``, otherwise ``True``.
440 _connect : Dict[str, Union[Callable, Sequence]], optional
441 A mapping of ``"signal_name"`` -> ``callable`` or list of ``callable``:
442 callback functions to connect to the various signals offered by the
443 worker class. by default None
444 _worker_class : Type[WorkerBase], optional
445         The :class:`WorkerBase` to instantiate, by default
446 :class:`FunctionWorker` will be used if ``func`` is a regular function,
447 and :class:`GeneratorWorker` will be used if it is a generator.
448 _ignore_errors : bool, optional
449 If ``False`` (the default), errors raised in the other thread will be
450 reraised in the main thread (makes debugging significantly easier).
451 *args
452 will be passed to ``func``
453 **kwargs
454 will be passed to ``func``
455
456 Returns
457 -------
458 worker : WorkerBase
459 An instantiated worker. If ``_start_thread`` was ``False``, the worker
460 will have a `.start()` method that can be used to start the thread.
461
462 Raises
463 ------
464 TypeError
465 If a worker_class is provided that is not a subclass of WorkerBase.
466 TypeError
467 If _connect is provided and is not a dict of ``{str: callable}``
468
469 Examples
470 --------
471
472 .. code-block:: python
473
474 def long_function(duration):
475 import time
476 time.sleep(duration)
477
478 worker = create_worker(long_function, 10)
479
480 """
481 if not _worker_class:
482 if inspect.isgeneratorfunction(func):
483 _worker_class = GeneratorWorker
484 else:
485 _worker_class = FunctionWorker
486
487 if not (
488 inspect.isclass(_worker_class)
489 and issubclass(_worker_class, WorkerBase)
490 ):
491 raise TypeError(
492 f'Worker {_worker_class} must be a subclass of WorkerBase'
493 )
494
495 worker = _worker_class(func, *args, **kwargs)
496
497 if _connect is not None:
498 if not isinstance(_connect, dict):
499 raise TypeError("The '_connect' argument must be a dict")
500
501 if _start_thread is None:
502 _start_thread = True
503
504 for key, val in _connect.items():
505 if not isinstance(val, (tuple, list)):
506 _val = [val]
507 for v in _val:
508 if not callable(v):
509 raise TypeError(
510 f'"_connect[{key!r}]" must be a function or '
511 'sequence of functions'
512 )
513 getattr(worker, key).connect(v)
514
515 # if the user has not provided a default connection for the "errored"
516 # signal... and they have not explicitly set ``ignore_errors=True``
517     # then re-raise any errors from the thread.
518 if not _ignore_errors and not (_connect or {}).get('errored', False):
519
520 def reraise(e):
521 raise e
522
523 worker.errored.connect(reraise)
524
525 if _start_thread:
526 worker.start()
527 return worker
528
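# A minimal sketch of ``create_worker``'s ``_connect`` mapping (the function and
# callback names are hypothetical); each value may be a single callable or a
# sequence of callables:
#
#     worker = create_worker(
#         some_generator_function,
#         _connect={"yielded": [update_progress, print], "errored": on_error},
#     )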
529
530 @tz.curry
531 def thread_worker(
532 function: Callable,
533 start_thread: Optional[bool] = None,
534 connect: Optional[Dict[str, Union[Callable, Sequence[Callable]]]] = None,
535 worker_class: Optional[Type[WorkerBase]] = None,
536 ignore_errors: bool = False,
537 ) -> Callable:
538     """Decorator that runs a function in a separate thread when called.
539
540 When called, the decorated function returns a :class:`WorkerBase`. See
541 :func:`create_worker` for additional keyword arguments that can be used
542 when calling the function.
543
544 The returned worker will have these signals:
545
546 - *started*: emitted when the work is started
547 - *finished*: emitted when the work is finished
548 - *returned*: emitted with return value
549 - *errored*: emitted with error object on Exception
550
551 It will also have a ``worker.start()`` method that can be used to start
552 execution of the function in another thread. (useful if you need to connect
553 callbacks to signals prior to execution)
554
555 If the decorated function is a generator, the returned worker will also
556 provide these signals:
557
558 - *yielded*: emitted with yielded values
559 - *paused*: emitted when a running job has successfully paused
560 - *resumed*: emitted when a paused job has successfully resumed
561 - *aborted*: emitted when a running job is successfully aborted
562
563 And these methods:
564
565 - *quit*: ask the thread to quit
566     - *toggle_pause*: toggle the running state of the thread.
567 - *send*: send a value into the generator. (This requires that your
568       decorated function uses the ``value = yield`` syntax)
569
570
571 Parameters
572 ----------
573     function : callable
574         Function to call in another thread. May be a generator function if
575         two-way communication between threads is needed.
576 start_thread : bool, optional
577         Whether to immediately start the thread. If False, the returned worker
578         must be manually started with ``worker.start()``. By default it will be
579         ``False`` if the ``connect`` argument is ``None``, otherwise ``True``.
580 connect : Dict[str, Union[Callable, Sequence]], optional
581 A mapping of ``"signal_name"`` -> ``callable`` or list of ``callable``:
582 callback functions to connect to the various signals offered by the
583 worker class. by default None
584 worker_class : Type[WorkerBase], optional
585         The :class:`WorkerBase` to instantiate, by default
586 :class:`FunctionWorker` will be used if ``func`` is a regular function,
587 and :class:`GeneratorWorker` will be used if it is a generator.
588 ignore_errors : bool, optional
589 If ``False`` (the default), errors raised in the other thread will be
590 reraised in the main thread (makes debugging significantly easier).
591
592 Returns
593 -------
594 callable
595 function that creates a worker, puts it in a new thread and returns
596 the worker instance.
597
598 Examples
599 --------
600
601 .. code-block:: python
602
603 @thread_worker
604 def long_function(start, end):
605 # do work, periodically yielding
606 i = start
607 while i <= end:
608 time.sleep(0.1)
609 yield i
610
611 # do teardown
612 return 'anything'
613
614 # call the function to start running in another thread.
615 worker = long_function()
616 # connect signals here if desired... or they may be added using the
617 # `connect` argument in the `@thread_worker` decorator... in which
618 # case the worker will start immediately when long_function() is called
619 worker.start()
620 """
621
622 @wraps(function)
623 def worker_function(*args, **kwargs):
624 # decorator kwargs can be overridden at call time by using the
625 # underscore-prefixed version of the kwarg.
626 kwargs['_start_thread'] = kwargs.get('_start_thread', start_thread)
627 kwargs['_connect'] = kwargs.get('_connect', connect)
628 kwargs['_worker_class'] = kwargs.get('_worker_class', worker_class)
629 kwargs['_ignore_errors'] = kwargs.get('_ignore_errors', ignore_errors)
630 return create_worker(function, *args, **kwargs,)
631
632 return worker_function
633
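# Because ``thread_worker`` is curried, decorator keyword arguments may also be
# supplied directly. A minimal sketch (the callback below is hypothetical):
#
#     @thread_worker(connect={"yielded": print})
#     def count_to(n):
#         yield from range(n)
#
#     count_to(3)   # starts immediately because ``connect`` was supplied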
634
635 ############################################################################
636
637 # This is a variant on the above pattern, it uses QThread instead of Qrunnable
638 # see https://doc.qt.io/qt-5/threads-technologies.html#comparison-of-solutions
639 # (it appears from that table that QRunnable cannot emit or receive signals,
640 # but we circumvent that here with our WorkerBase class that also inherits from
641 # QObject... providing signals/slots).
642 #
643 # A benefit of the QRunnable pattern is that Qt manages the threads for you,
644 # in the QThreadPool.globalInstance() ... making it easier to reuse threads,
645 # and reduce overhead.
646 #
647 # However, a disadvantage is that you have no access to (and therefore less
648 # control over) the QThread itself. See for example all of the methods
649 # provided on the QThread object: https://doc.qt.io/qt-5/qthread.html
650
651
652 # TODO: potentially remove this altogether, by refactoring the dims
653 # AnimationWorker to subclass WorkerBase
654
655
656 def _new_worker_qthread(
657 Worker: Type[QObject],
658 *args,
659 _start_thread: bool = False,
660 _connect: Dict[str, Callable] = None,
661 **kwargs,
662 ):
663     """This is a convenience function to start a worker in a QThread.
664
665 In most cases, the @thread_worker decorator is sufficient and preferable.
666 But this allows the user to completely customize the Worker object.
667 However, they must then maintain control over the thread and clean up
668 appropriately.
669
670 It follows the pattern described here:
671 https://www.qt.io/blog/2010/06/17/youre-doing-it-wrong
672 and
673 https://doc.qt.io/qt-5/qthread.html#details
674
675 see also:
676 https://mayaposch.wordpress.com/2011/11/01/how-to-really-truly-use-qthreads-the-full-explanation/
677
678 A QThread object is not a thread! It should be thought of as a class to
679 *manage* a thread, not as the actual code or object that runs in that
680 thread. The QThread object is created on the main thread and lives there.
681
682 Worker objects which derive from QObject are the things that actually do
683 the work. They can be moved to a QThread as is done here.
684
685 .. note:: Mostly ignorable detail
686
687 While the signals/slots syntax of the worker looks very similar to
688 standard "single-threaded" signals & slots, note that inter-thread
689 signals and slots (automatically) use an event-based QueuedConnection,
690 while intra-thread signals use a DirectConnection. See `Signals and
691 Slots Across Threads
692 <https://doc.qt.io/qt-5/threads-qobject.html#signals-and-slots-across-threads>`_
693
694 Parameters
695 ----------
696 Worker : QObject
697 QObject type that implements a work() method. The Worker should also
698 emit a finished signal when the work is done.
699     _start_thread : bool
700         If True, the thread will be started immediately; otherwise it must
701         be manually started with thread.start().
702     _connect : dict, optional
703         Optional dictionary of {signal: function} to connect to the new worker,
704         for instance: _connect = {'incremented': myfunc} will result in:
705         worker.incremented.connect(myfunc)
706 *args
707 will be passed to the Worker class on instantiation.
708 **kwargs
709 will be passed to the Worker class on instantiation.
710
711 Returns
712 -------
713 worker : WorkerBase
714 The created worker.
715 thread : QThread
716 The thread on which the worker is running.
717
718 Examples
719 --------
720 Create some QObject that has a long-running work method:
721
722 .. code-block:: python
723
724 class Worker(QObject):
725
726 finished = Signal()
727 increment = Signal(int)
728
729 def __init__(self, argument):
730 super().__init__()
731 self.argument = argument
732
733 @Slot()
734 def work(self):
735 # some long running task...
736 import time
737 for i in range(10):
738 time.sleep(1)
739 self.increment.emit(i)
740 self.finished.emit()
741
742 worker, thread = _new_worker_qthread(
743 Worker,
744 'argument',
745             _start_thread=True,
746             _connect={'increment': print},
747 )
748
749 """
750
751 if _connect and not isinstance(_connect, dict):
752 raise TypeError('_connect parameter must be a dict')
753
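    # Standard QThread pattern: create the thread, move the worker onto it, and
    # wire up lifetime management so both objects are deleted once the work
    # finishes.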
754 thread = QThread()
755 worker = Worker(*args, **kwargs)
756 worker.moveToThread(thread)
757 thread.started.connect(worker.work)
758 worker.finished.connect(thread.quit)
759 worker.finished.connect(worker.deleteLater)
760 thread.finished.connect(thread.deleteLater)
761
762 if _connect:
763 [getattr(worker, key).connect(val) for key, val in _connect.items()]
764
765 if _start_thread:
766 thread.start() # sometimes need to connect stuff before starting
767 return worker, thread
768
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/napari/_qt/threading.py b/napari/_qt/threading.py
--- a/napari/_qt/threading.py
+++ b/napari/_qt/threading.py
@@ -502,8 +502,7 @@
_start_thread = True
for key, val in _connect.items():
- if not isinstance(val, (tuple, list)):
- _val = [val]
+ _val = val if isinstance(val, (tuple, list)) else [val]
for v in _val:
if not callable(v):
raise TypeError(
| {"golden_diff": "diff --git a/napari/_qt/threading.py b/napari/_qt/threading.py\n--- a/napari/_qt/threading.py\n+++ b/napari/_qt/threading.py\n@@ -502,8 +502,7 @@\n _start_thread = True\n \n for key, val in _connect.items():\n- if not isinstance(val, (tuple, list)):\n- _val = [val]\n+ _val = val if isinstance(val, (tuple, list)) else [val]\n for v in _val:\n if not callable(v):\n raise TypeError(\n", "issue": "UnboundLocalError in create_worker\n## \ud83d\udc1b Bug\r\n\r\nWhen creating a local worker with multiple yield connections, I get an unboundlocalerror:\r\n\r\n```pytb\r\n-------------------------------------------------------------------------\r\nUnboundLocalError Traceback (most recent call last)\r\n<ipython-input-23-1749d5f75cac> in <module>\r\n 52 \r\n 53 viewer.window.add_dock_widget(loss_canvas)\r\n---> 54 worker = train(model, data_loader, 500)\r\n\r\n~/projects/napari/napari/_qt/threading.py in worker_function(*args, **kwargs)\r\n 628 kwargs['_worker_class'] = kwargs.get('_worker_class', worker_class)\r\n 629 kwargs['_ignore_errors'] = kwargs.get('_ignore_errors', ignore_errors)\r\n--> 630 return create_worker(function, *args, **kwargs,)\r\n 631 \r\n 632 return worker_function\r\n\r\n~/projects/napari/napari/_qt/threading.py in create_worker(func, _start_thread, _connect, _worker_class, _ignore_errors, *args, **kwargs)\r\n 505 if not isinstance(val, (tuple, list)):\r\n 506 _val = [val]\r\n--> 507 for v in _val:\r\n 508 if not callable(v):\r\n 509 raise TypeError(\r\n\r\nUnboundLocalError: local variable '_val' referenced before assignment\r\n```\r\n\r\nnapari info:\r\n\r\n```\r\nnapari: 0.3.2rc0\r\nPlatform: Linux-4.15.0-29-generic-x86_64-with-glibc2.10\r\nPython: 3.8.2 (default, Mar 26 2020, 15:53:00) [GCC 7.3.0]\r\nQt: 5.14.2\r\nPyQt5: 5.14.2\r\nNumPy: 1.19.0rc1\r\nSciPy: 1.4.1\r\nDask: 2.15.0\r\nVisPy: 0.6.4\r\n\r\nGL version: 3.0 Mesa 19.2.8\r\nMAX_TEXTURE_SIZE: 16384\r\n\r\nPlugins:\r\n- napari-plugin-engine: 0.1.4\r\n- ome_zarr: 0.0.7\r\n- svg: 0.1.2\r\n```\n", "before_files": [{"content": "import inspect\nimport time\nfrom functools import wraps\nfrom typing import Any, Callable, Dict, Optional, Sequence, Set, Type, Union\n\nimport toolz as tz\nfrom qtpy.QtCore import QObject, QRunnable, QThread, QThreadPool, Signal, Slot\n\n\ndef as_generator_function(func: Callable) -> Callable:\n \"\"\"Turns a regular function (single return) into a generator function.\"\"\"\n\n @wraps(func)\n def genwrapper(*args, **kwargs):\n yield\n return func(*args, **kwargs)\n\n return genwrapper\n\n\nclass WorkerBaseSignals(QObject):\n\n started = Signal() # emitted when the work is started\n finished = Signal() # emitted when the work is finished\n returned = Signal(object) # emitted with return value\n errored = Signal(object) # emitted with error object on Exception\n\n\nclass WorkerBase(QRunnable):\n \"\"\"Base class for creating a Worker that can run in another thread.\n\n Parameters\n ----------\n SignalsClass : type, optional\n A QObject subclass that contains signals, by default WorkerBaseSignals\n \"\"\"\n\n #: A set of Workers. 
Add to set using :meth:`WorkerBase.start`\n _worker_set: Set['WorkerBase'] = set()\n\n def __init__(\n self, *args, SignalsClass: Type[QObject] = WorkerBaseSignals, **kwargs\n ) -> None:\n super().__init__()\n self._abort_requested = False\n self._running = False\n self._signals = SignalsClass()\n\n def __getattr__(self, name):\n \"\"\"Pass through attr requests to signals to simplify connection API.\n\n The goal is to enable ``worker.signal.connect`` instead of\n ``worker.signals.yielded.connect``. Because multiple inheritance of Qt\n classes is not well supported in PyQt, we have to use composition here\n (signals are provided by QObjects, and QRunnable is not a QObject). So\n this passthrough allows us to connect to signals on the ``_signals``\n object.\n \"\"\"\n # the Signal object is actually a class attribute\n attr = getattr(self._signals.__class__, name, None)\n if isinstance(attr, Signal):\n # but what we need to connect to is the instantiated signal\n # (which is of type `SignalInstance` in PySide and\n # `pyqtBoundSignal` in PyQt)\n return getattr(self._signals, name)\n\n def quit(self) -> None:\n \"\"\"Send a request to abort the worker.\n\n .. note::\n\n It is entirely up to subclasses to honor this method by checking\n ``self.abort_requested`` periodically in their ``worker.work``\n method, and exiting if ``True``.\n \"\"\"\n self._abort_requested = True\n\n @property\n def abort_requested(self) -> bool:\n \"\"\"Whether the worker has been requested to stop.\"\"\"\n return self._abort_requested\n\n @property\n def is_running(self) -> bool:\n \"\"\"Whether the worker has been started\"\"\"\n return self._running\n\n @Slot()\n def run(self):\n \"\"\"Start the worker.\n\n The end-user should never need to call this function.\n But it cannot be made private or renamed, since it is called by Qt.\n\n The order of method calls when starting a worker is:\n\n .. code-block:: none\n\n calls QThreadPool.globalInstance().start(worker)\n | triggered by the QThreadPool.start() method\n | | called by worker.run\n | | |\n V V V\n worker.start -> worker.run -> worker.work\n\n **This** is the function that actually gets called when calling\n :func:`QThreadPool.start(worker)`. It simply wraps the :meth:`work`\n method, and emits a few signals. Subclasses should NOT override this\n method (except with good reason), and instead should implement\n :meth:`work`.\n \"\"\"\n self.started.emit()\n self._running = True\n try:\n result = self.work()\n self.returned.emit(result)\n except Exception as exc:\n self.errored.emit(exc)\n self.finished.emit()\n\n def work(self):\n \"\"\"Main method to execute the worker.\n\n The end-user should never need to call this function.\n But subclasses must implement this method (See\n :meth:`GeneratorFunction.work` for an example implementation).\n Minimally, it should check ``self.abort_requested`` periodically and\n exit if True.\n\n Examples\n --------\n\n .. code-block:: python\n\n class MyWorker(WorkerBase):\n\n def work(self):\n i = 0\n while True:\n if self.abort_requested:\n self.aborted.emit()\n break\n i += 1\n if i > max_iters:\n break\n time.sleep(0.5)\n \"\"\"\n raise NotImplementedError(\n f'\"{self.__class__.__name__}\" failed to define work() method'\n )\n\n def start(self):\n \"\"\"Start this worker in a thread and add it to the global threadpool.\n\n The order of method calls when starting a worker is:\n\n .. 
code-block:: none\n\n calls QThreadPool.globalInstance().start(worker)\n | triggered by the QThreadPool.start() method\n | | called by worker.run\n | | |\n V V V\n worker.start -> worker.run -> worker.work\n \"\"\"\n if self in WorkerBase._worker_set:\n raise RuntimeError('This worker is already started!')\n\n # This will raise a RunTimeError if the worker is already deleted\n repr(self)\n\n WorkerBase._worker_set.add(self)\n self.finished.connect(lambda: WorkerBase._worker_set.discard(self))\n QThreadPool.globalInstance().start(self)\n\n\nclass FunctionWorker(WorkerBase):\n \"\"\"QRunnable with signals that wraps a simple long-running function.\n\n .. note::\n\n ``FunctionWorker`` does not provide a way to stop a very long-running\n function (e.g. ``time.sleep(10000)``). So whenever possible, it is\n better to implement your long running function as a generator that\n yields periodically, and use the :class:`GeneratorWorker` instead.\n\n Parameters\n ----------\n func : Callable\n A function to call in another thread\n *args\n will be passed to the function\n **kwargs\n will be passed to the function\n\n Raises\n ------\n TypeError\n If ``func`` is a generator function and not a regular function.\n \"\"\"\n\n def __init__(self, func: Callable, *args, **kwargs):\n if inspect.isgeneratorfunction(func):\n raise TypeError(\n f\"Generator function {func} cannot be used with \"\n \"FunctionWorker, use GeneratorWorker instead\"\n )\n super().__init__()\n\n self._func = func\n self._args = args\n self._kwargs = kwargs\n\n def work(self):\n return self._func(*self._args, **self._kwargs)\n\n\nclass GeneratorWorkerSignals(WorkerBaseSignals):\n\n yielded = Signal(object) # emitted with yielded values (if generator used)\n paused = Signal() # emitted when a running job has successfully paused\n resumed = Signal() # emitted when a paused job has successfully resumed\n aborted = Signal() # emitted when a running job is successfully aborted\n\n\nclass GeneratorWorker(WorkerBase):\n \"\"\"QRunnable with signals that wraps a long-running generator.\n\n Provides a convenient way to run a generator function in another thread,\n while allowing 2-way communication between threads, using plain-python\n generator syntax in the original function.\n\n Parameters\n ----------\n func : callable\n The function being run in another thread. May be a generator function.\n SignalsClass : type, optional\n A QObject subclass that contains signals, by default\n GeneratorWorkerSignals\n *args\n Will be passed to func on instantiation\n **kwargs\n Will be passed to func on instantiation\n \"\"\"\n\n def __init__(\n self,\n func: Callable,\n *args,\n SignalsClass: Type[QObject] = GeneratorWorkerSignals,\n **kwargs,\n ):\n if not inspect.isgeneratorfunction(func):\n raise TypeError(\n f\"Regular function {func} cannot be used with \"\n \"GeneratorWorker, use FunctionWorker instead\"\n )\n super().__init__(SignalsClass=SignalsClass)\n\n self._gen = func(*args, **kwargs)\n self._incoming_value = None\n self._pause_requested = False\n self._resume_requested = False\n self._paused = False\n # polling interval: ONLY relevant if the user paused a running worker\n self._pause_interval = 0.01\n\n def work(self) -> None:\n \"\"\"Core event loop that calls the original function.\n\n Enters a continual loop, yielding and returning from the original\n function. 
Checks for various events (quit, pause, resume, etc...).\n (To clarify: we are creating a rudimentary event loop here because\n there IS NO Qt event loop running in the other thread to hook into)\n \"\"\"\n while True:\n if self.abort_requested:\n self.aborted.emit()\n break\n if self._paused:\n if self._resume_requested:\n self._paused = False\n self._resume_requested = False\n self.resumed.emit()\n else:\n time.sleep(self._pause_interval)\n continue\n elif self._pause_requested:\n self._paused = True\n self._pause_requested = False\n self.paused.emit()\n continue\n try:\n self.yielded.emit(self._gen.send(self._next_value()))\n except StopIteration as exc:\n return exc.value\n\n def send(self, value: Any):\n \"\"\"Send a value into the function (if a generator was used).\"\"\"\n self._incoming_value = value\n\n def _next_value(self) -> Any:\n out = None\n if self._incoming_value is not None:\n out = self._incoming_value\n self._incoming_value = None\n return out\n\n @property\n def is_paused(self) -> bool:\n \"\"\"Whether the worker is currently paused.\"\"\"\n return self._paused\n\n def toggle_pause(self) -> None:\n \"\"\"Request to pause the worker if playing or resume if paused.\"\"\"\n if self.is_paused:\n self._resume_requested = True\n else:\n self._pause_requested = True\n\n def pause(self) -> None:\n \"\"\"Request to pause the worker.\"\"\"\n if not self.is_paused:\n self._pause_requested = True\n\n def resume(self) -> None:\n \"\"\"Send a request to resume the worker.\n \"\"\"\n if self.is_paused:\n self._resume_requested = True\n\n\n############################################################################\n\n# public API\n\n# For now, the next three functions simply wrap the QThreadPool API, and allow\n# us to track and cleanup all workers that were started with ``start_worker``,\n# provided that ``wait_for_workers_to_quit`` is called at shutdown.\n# In the future, this could wrap any API, or a pure python threadpool.\n\n\ndef set_max_thread_count(num: int):\n \"\"\"Set the maximum number of threads used by the thread pool.\n\n Note: The thread pool will always use at least 1 thread, even if\n maxThreadCount limit is zero or negative.\n \"\"\"\n QThreadPool.globalInstance().setMaxThreadCount(num)\n\n\ndef wait_for_workers_to_quit(msecs: int = None):\n \"\"\"Ask all workers to quit, and wait up to `msec` for quit.\n\n Attempts to clean up all running workers by calling ``worker.quit()``\n method. Any workers in the ``WorkerBase._worker_set`` set will have this\n method.\n\n By default, this function will block indefinitely, until worker threads\n finish. If a timeout is provided, a ``RuntimeError`` will be raised if\n the workers do not gracefully exit in the time requests, but the threads\n will NOT be killed. It is (currently) left to the user to use their OS\n to force-quit rogue threads.\n\n .. important::\n\n If the user does not put any yields in their function, and the function\n is super long, it will just hang... For instance, there's no graceful\n way to kill this thread in python:\n\n .. 
code-block:: python\n\n @thread_worker\n def ZZZzzz():\n time.sleep(10000000)\n\n This is why it's always advisable to use a generator that periodically\n yields for long-running computations in another thread.\n\n See `this stack-overflow post\n <https://stackoverflow.com/questions/323972/is-there-any-way-to-kill-a-thread>`_\n for a good discussion on the difficulty of killing a rogue python thread:\n\n Parameters\n ----------\n msecs : int, optional\n Waits up to msecs milliseconds for all threads to exit and removes all\n threads from the thread pool. If msecs is `None` (the default), the\n timeout is ignored (waits for the last thread to exit).\n\n Raises\n -------\n RuntimeError\n If a timeout is provided and workers do not quit successfully within\n the time alotted.\n \"\"\"\n for worker in WorkerBase._worker_set:\n worker.quit()\n\n msecs = msecs if msecs is not None else -1\n if not QThreadPool.globalInstance().waitForDone(msecs):\n raise RuntimeError(\n f\"Workers did not quit gracefully in the time alotted ({msecs} ms)\"\n )\n\n\ndef active_thread_count() -> int:\n \"\"\"Return the number of active threads in the global ThreadPool.\"\"\"\n return QThreadPool.globalInstance().activeThreadCount()\n\n\n#############################################################################\n\n# convenience functions for creating Worker instances\n\n\ndef create_worker(\n func: Callable,\n *args,\n _start_thread: Optional[bool] = None,\n _connect: Optional[Dict[str, Union[Callable, Sequence[Callable]]]] = None,\n _worker_class: Optional[Type[WorkerBase]] = None,\n _ignore_errors: bool = False,\n **kwargs,\n) -> WorkerBase:\n \"\"\"Convenience function to start a function in another thread.\n\n By default, uses :class:`Worker`, but a custom ``WorkerBase`` subclass may\n be provided. If so, it must be a subclass of :class:`Worker`, which\n defines a standard set of signals and a run method.\n\n Parameters\n ----------\n func : Callable\n The function to call in another thread.\n _start_thread : bool, optional\n Whether to immediaetly start the thread. If False, the returned worker\n must be manually started with ``worker.start()``. by default it will be\n ``False`` if the ``_connect`` argument is ``None``, otherwise ``True``.\n _connect : Dict[str, Union[Callable, Sequence]], optional\n A mapping of ``\"signal_name\"`` -> ``callable`` or list of ``callable``:\n callback functions to connect to the various signals offered by the\n worker class. by default None\n _worker_class : Type[WorkerBase], optional\n The :class`WorkerBase` to instantiate, by default\n :class:`FunctionWorker` will be used if ``func`` is a regular function,\n and :class:`GeneratorWorker` will be used if it is a generator.\n _ignore_errors : bool, optional\n If ``False`` (the default), errors raised in the other thread will be\n reraised in the main thread (makes debugging significantly easier).\n *args\n will be passed to ``func``\n **kwargs\n will be passed to ``func``\n\n Returns\n -------\n worker : WorkerBase\n An instantiated worker. If ``_start_thread`` was ``False``, the worker\n will have a `.start()` method that can be used to start the thread.\n\n Raises\n ------\n TypeError\n If a worker_class is provided that is not a subclass of WorkerBase.\n TypeError\n If _connect is provided and is not a dict of ``{str: callable}``\n\n Examples\n --------\n\n .. 
code-block:: python\n\n def long_function(duration):\n import time\n time.sleep(duration)\n\n worker = create_worker(long_function, 10)\n\n \"\"\"\n if not _worker_class:\n if inspect.isgeneratorfunction(func):\n _worker_class = GeneratorWorker\n else:\n _worker_class = FunctionWorker\n\n if not (\n inspect.isclass(_worker_class)\n and issubclass(_worker_class, WorkerBase)\n ):\n raise TypeError(\n f'Worker {_worker_class} must be a subclass of WorkerBase'\n )\n\n worker = _worker_class(func, *args, **kwargs)\n\n if _connect is not None:\n if not isinstance(_connect, dict):\n raise TypeError(\"The '_connect' argument must be a dict\")\n\n if _start_thread is None:\n _start_thread = True\n\n for key, val in _connect.items():\n if not isinstance(val, (tuple, list)):\n _val = [val]\n for v in _val:\n if not callable(v):\n raise TypeError(\n f'\"_connect[{key!r}]\" must be a function or '\n 'sequence of functions'\n )\n getattr(worker, key).connect(v)\n\n # if the user has not provided a default connection for the \"errored\"\n # signal... and they have not explicitly set ``ignore_errors=True``\n # Then rereaise any errors from the thread.\n if not _ignore_errors and not (_connect or {}).get('errored', False):\n\n def reraise(e):\n raise e\n\n worker.errored.connect(reraise)\n\n if _start_thread:\n worker.start()\n return worker\n\n\[email protected]\ndef thread_worker(\n function: Callable,\n start_thread: Optional[bool] = None,\n connect: Optional[Dict[str, Union[Callable, Sequence[Callable]]]] = None,\n worker_class: Optional[Type[WorkerBase]] = None,\n ignore_errors: bool = False,\n) -> Callable:\n \"\"\"Decorator that runs a function in a seperate thread when called.\n\n When called, the decorated function returns a :class:`WorkerBase`. See\n :func:`create_worker` for additional keyword arguments that can be used\n when calling the function.\n\n The returned worker will have these signals:\n\n - *started*: emitted when the work is started\n - *finished*: emitted when the work is finished\n - *returned*: emitted with return value\n - *errored*: emitted with error object on Exception\n\n It will also have a ``worker.start()`` method that can be used to start\n execution of the function in another thread. (useful if you need to connect\n callbacks to signals prior to execution)\n\n If the decorated function is a generator, the returned worker will also\n provide these signals:\n\n - *yielded*: emitted with yielded values\n - *paused*: emitted when a running job has successfully paused\n - *resumed*: emitted when a paused job has successfully resumed\n - *aborted*: emitted when a running job is successfully aborted\n\n And these methods:\n\n - *quit*: ask the thread to quit\n - *toggle_paused*: toggle the running state of the thread.\n - *send*: send a value into the generator. (This requires that your\n decorator function uses the ``value = yield`` syntax)\n\n\n Parameters\n ----------\n func : callable\n Function to call in another thread. For communication between threads\n may be a generator function.\n start_thread : bool, optional\n Whether to immediaetly start the thread. If False, the returned worker\n must be manually started with ``worker.start()``. by default it will be\n ``False`` if the ``_connect`` argument is ``None``, otherwise ``True``.\n connect : Dict[str, Union[Callable, Sequence]], optional\n A mapping of ``\"signal_name\"`` -> ``callable`` or list of ``callable``:\n callback functions to connect to the various signals offered by the\n worker class. 
by default None\n worker_class : Type[WorkerBase], optional\n The :class`WorkerBase` to instantiate, by default\n :class:`FunctionWorker` will be used if ``func`` is a regular function,\n and :class:`GeneratorWorker` will be used if it is a generator.\n ignore_errors : bool, optional\n If ``False`` (the default), errors raised in the other thread will be\n reraised in the main thread (makes debugging significantly easier).\n\n Returns\n -------\n callable\n function that creates a worker, puts it in a new thread and returns\n the worker instance.\n\n Examples\n --------\n\n .. code-block:: python\n\n @thread_worker\n def long_function(start, end):\n # do work, periodically yielding\n i = start\n while i <= end:\n time.sleep(0.1)\n yield i\n\n # do teardown\n return 'anything'\n\n # call the function to start running in another thread.\n worker = long_function()\n # connect signals here if desired... or they may be added using the\n # `connect` argument in the `@thread_worker` decorator... in which\n # case the worker will start immediately when long_function() is called\n worker.start()\n \"\"\"\n\n @wraps(function)\n def worker_function(*args, **kwargs):\n # decorator kwargs can be overridden at call time by using the\n # underscore-prefixed version of the kwarg.\n kwargs['_start_thread'] = kwargs.get('_start_thread', start_thread)\n kwargs['_connect'] = kwargs.get('_connect', connect)\n kwargs['_worker_class'] = kwargs.get('_worker_class', worker_class)\n kwargs['_ignore_errors'] = kwargs.get('_ignore_errors', ignore_errors)\n return create_worker(function, *args, **kwargs,)\n\n return worker_function\n\n\n############################################################################\n\n# This is a variant on the above pattern, it uses QThread instead of Qrunnable\n# see https://doc.qt.io/qt-5/threads-technologies.html#comparison-of-solutions\n# (it appears from that table that QRunnable cannot emit or receive signals,\n# but we circumvent that here with our WorkerBase class that also inherits from\n# QObject... providing signals/slots).\n#\n# A benefit of the QRunnable pattern is that Qt manages the threads for you,\n# in the QThreadPool.globalInstance() ... making it easier to reuse threads,\n# and reduce overhead.\n#\n# However, a disadvantage is that you have no access to (and therefore less\n# control over) the QThread itself. See for example all of the methods\n# provided on the QThread object: https://doc.qt.io/qt-5/qthread.html\n\n\n# TODO: potentially remove this altogether, by refactoring the dims\n# AnimationWorker to subclass WorkerBase\n\n\ndef _new_worker_qthread(\n Worker: Type[QObject],\n *args,\n _start_thread: bool = False,\n _connect: Dict[str, Callable] = None,\n **kwargs,\n):\n \"\"\"This is a convenience function to start a worker in a Qthread.\n\n In most cases, the @thread_worker decorator is sufficient and preferable.\n But this allows the user to completely customize the Worker object.\n However, they must then maintain control over the thread and clean up\n appropriately.\n\n It follows the pattern described here:\n https://www.qt.io/blog/2010/06/17/youre-doing-it-wrong\n and\n https://doc.qt.io/qt-5/qthread.html#details\n\n see also:\n https://mayaposch.wordpress.com/2011/11/01/how-to-really-truly-use-qthreads-the-full-explanation/\n\n A QThread object is not a thread! It should be thought of as a class to\n *manage* a thread, not as the actual code or object that runs in that\n thread. 
The QThread object is created on the main thread and lives there.\n\n Worker objects which derive from QObject are the things that actually do\n the work. They can be moved to a QThread as is done here.\n\n .. note:: Mostly ignorable detail\n\n While the signals/slots syntax of the worker looks very similar to\n standard \"single-threaded\" signals & slots, note that inter-thread\n signals and slots (automatically) use an event-based QueuedConnection,\n while intra-thread signals use a DirectConnection. See `Signals and\n Slots Across Threads\n <https://doc.qt.io/qt-5/threads-qobject.html#signals-and-slots-across-threads>`_\n\n Parameters\n ----------\n Worker : QObject\n QObject type that implements a work() method. The Worker should also\n emit a finished signal when the work is done.\n start_thread : bool\n If True, thread will be started immediately, otherwise, thread must\n be manually started with thread.start().\n connections: dict, optional\n Optional dictionary of {signal: function} to connect to the new worker.\n for instance: connections = {'incremented': myfunc} will result in:\n worker.incremented.connect(myfunc)\n *args\n will be passed to the Worker class on instantiation.\n **kwargs\n will be passed to the Worker class on instantiation.\n\n Returns\n -------\n worker : WorkerBase\n The created worker.\n thread : QThread\n The thread on which the worker is running.\n\n Examples\n --------\n Create some QObject that has a long-running work method:\n\n .. code-block:: python\n\n class Worker(QObject):\n\n finished = Signal()\n increment = Signal(int)\n\n def __init__(self, argument):\n super().__init__()\n self.argument = argument\n\n @Slot()\n def work(self):\n # some long running task...\n import time\n for i in range(10):\n time.sleep(1)\n self.increment.emit(i)\n self.finished.emit()\n\n worker, thread = _new_worker_qthread(\n Worker,\n 'argument',\n start_thread=True,\n connections={'increment': print},\n )\n\n \"\"\"\n\n if _connect and not isinstance(_connect, dict):\n raise TypeError('_connect parameter must be a dict')\n\n thread = QThread()\n worker = Worker(*args, **kwargs)\n worker.moveToThread(thread)\n thread.started.connect(worker.work)\n worker.finished.connect(thread.quit)\n worker.finished.connect(worker.deleteLater)\n thread.finished.connect(thread.deleteLater)\n\n if _connect:\n [getattr(worker, key).connect(val) for key, val in _connect.items()]\n\n if _start_thread:\n thread.start() # sometimes need to connect stuff before starting\n return worker, thread\n", "path": "napari/_qt/threading.py"}], "after_files": [{"content": "import inspect\nimport time\nfrom functools import wraps\nfrom typing import Any, Callable, Dict, Optional, Sequence, Set, Type, Union\n\nimport toolz as tz\nfrom qtpy.QtCore import QObject, QRunnable, QThread, QThreadPool, Signal, Slot\n\n\ndef as_generator_function(func: Callable) -> Callable:\n \"\"\"Turns a regular function (single return) into a generator function.\"\"\"\n\n @wraps(func)\n def genwrapper(*args, **kwargs):\n yield\n return func(*args, **kwargs)\n\n return genwrapper\n\n\nclass WorkerBaseSignals(QObject):\n\n started = Signal() # emitted when the work is started\n finished = Signal() # emitted when the work is finished\n returned = Signal(object) # emitted with return value\n errored = Signal(object) # emitted with error object on Exception\n\n\nclass WorkerBase(QRunnable):\n \"\"\"Base class for creating a Worker that can run in another thread.\n\n Parameters\n ----------\n SignalsClass : type, optional\n A QObject 
subclass that contains signals, by default WorkerBaseSignals\n \"\"\"\n\n #: A set of Workers. Add to set using :meth:`WorkerBase.start`\n _worker_set: Set['WorkerBase'] = set()\n\n def __init__(\n self, *args, SignalsClass: Type[QObject] = WorkerBaseSignals, **kwargs\n ) -> None:\n super().__init__()\n self._abort_requested = False\n self._running = False\n self._signals = SignalsClass()\n\n def __getattr__(self, name):\n \"\"\"Pass through attr requests to signals to simplify connection API.\n\n The goal is to enable ``worker.signal.connect`` instead of\n ``worker.signals.yielded.connect``. Because multiple inheritance of Qt\n classes is not well supported in PyQt, we have to use composition here\n (signals are provided by QObjects, and QRunnable is not a QObject). So\n this passthrough allows us to connect to signals on the ``_signals``\n object.\n \"\"\"\n # the Signal object is actually a class attribute\n attr = getattr(self._signals.__class__, name, None)\n if isinstance(attr, Signal):\n # but what we need to connect to is the instantiated signal\n # (which is of type `SignalInstance` in PySide and\n # `pyqtBoundSignal` in PyQt)\n return getattr(self._signals, name)\n\n def quit(self) -> None:\n \"\"\"Send a request to abort the worker.\n\n .. note::\n\n It is entirely up to subclasses to honor this method by checking\n ``self.abort_requested`` periodically in their ``worker.work``\n method, and exiting if ``True``.\n \"\"\"\n self._abort_requested = True\n\n @property\n def abort_requested(self) -> bool:\n \"\"\"Whether the worker has been requested to stop.\"\"\"\n return self._abort_requested\n\n @property\n def is_running(self) -> bool:\n \"\"\"Whether the worker has been started\"\"\"\n return self._running\n\n @Slot()\n def run(self):\n \"\"\"Start the worker.\n\n The end-user should never need to call this function.\n But it cannot be made private or renamed, since it is called by Qt.\n\n The order of method calls when starting a worker is:\n\n .. code-block:: none\n\n calls QThreadPool.globalInstance().start(worker)\n | triggered by the QThreadPool.start() method\n | | called by worker.run\n | | |\n V V V\n worker.start -> worker.run -> worker.work\n\n **This** is the function that actually gets called when calling\n :func:`QThreadPool.start(worker)`. It simply wraps the :meth:`work`\n method, and emits a few signals. Subclasses should NOT override this\n method (except with good reason), and instead should implement\n :meth:`work`.\n \"\"\"\n self.started.emit()\n self._running = True\n try:\n result = self.work()\n self.returned.emit(result)\n except Exception as exc:\n self.errored.emit(exc)\n self.finished.emit()\n\n def work(self):\n \"\"\"Main method to execute the worker.\n\n The end-user should never need to call this function.\n But subclasses must implement this method (See\n :meth:`GeneratorFunction.work` for an example implementation).\n Minimally, it should check ``self.abort_requested`` periodically and\n exit if True.\n\n Examples\n --------\n\n .. code-block:: python\n\n class MyWorker(WorkerBase):\n\n def work(self):\n i = 0\n while True:\n if self.abort_requested:\n self.aborted.emit()\n break\n i += 1\n if i > max_iters:\n break\n time.sleep(0.5)\n \"\"\"\n raise NotImplementedError(\n f'\"{self.__class__.__name__}\" failed to define work() method'\n )\n\n def start(self):\n \"\"\"Start this worker in a thread and add it to the global threadpool.\n\n The order of method calls when starting a worker is:\n\n .. 
code-block:: none\n\n calls QThreadPool.globalInstance().start(worker)\n | triggered by the QThreadPool.start() method\n | | called by worker.run\n | | |\n V V V\n worker.start -> worker.run -> worker.work\n \"\"\"\n if self in WorkerBase._worker_set:\n raise RuntimeError('This worker is already started!')\n\n # This will raise a RunTimeError if the worker is already deleted\n repr(self)\n\n WorkerBase._worker_set.add(self)\n self.finished.connect(lambda: WorkerBase._worker_set.discard(self))\n QThreadPool.globalInstance().start(self)\n\n\nclass FunctionWorker(WorkerBase):\n \"\"\"QRunnable with signals that wraps a simple long-running function.\n\n .. note::\n\n ``FunctionWorker`` does not provide a way to stop a very long-running\n function (e.g. ``time.sleep(10000)``). So whenever possible, it is\n better to implement your long running function as a generator that\n yields periodically, and use the :class:`GeneratorWorker` instead.\n\n Parameters\n ----------\n func : Callable\n A function to call in another thread\n *args\n will be passed to the function\n **kwargs\n will be passed to the function\n\n Raises\n ------\n TypeError\n If ``func`` is a generator function and not a regular function.\n \"\"\"\n\n def __init__(self, func: Callable, *args, **kwargs):\n if inspect.isgeneratorfunction(func):\n raise TypeError(\n f\"Generator function {func} cannot be used with \"\n \"FunctionWorker, use GeneratorWorker instead\"\n )\n super().__init__()\n\n self._func = func\n self._args = args\n self._kwargs = kwargs\n\n def work(self):\n return self._func(*self._args, **self._kwargs)\n\n\nclass GeneratorWorkerSignals(WorkerBaseSignals):\n\n yielded = Signal(object) # emitted with yielded values (if generator used)\n paused = Signal() # emitted when a running job has successfully paused\n resumed = Signal() # emitted when a paused job has successfully resumed\n aborted = Signal() # emitted when a running job is successfully aborted\n\n\nclass GeneratorWorker(WorkerBase):\n \"\"\"QRunnable with signals that wraps a long-running generator.\n\n Provides a convenient way to run a generator function in another thread,\n while allowing 2-way communication between threads, using plain-python\n generator syntax in the original function.\n\n Parameters\n ----------\n func : callable\n The function being run in another thread. May be a generator function.\n SignalsClass : type, optional\n A QObject subclass that contains signals, by default\n GeneratorWorkerSignals\n *args\n Will be passed to func on instantiation\n **kwargs\n Will be passed to func on instantiation\n \"\"\"\n\n def __init__(\n self,\n func: Callable,\n *args,\n SignalsClass: Type[QObject] = GeneratorWorkerSignals,\n **kwargs,\n ):\n if not inspect.isgeneratorfunction(func):\n raise TypeError(\n f\"Regular function {func} cannot be used with \"\n \"GeneratorWorker, use FunctionWorker instead\"\n )\n super().__init__(SignalsClass=SignalsClass)\n\n self._gen = func(*args, **kwargs)\n self._incoming_value = None\n self._pause_requested = False\n self._resume_requested = False\n self._paused = False\n # polling interval: ONLY relevant if the user paused a running worker\n self._pause_interval = 0.01\n\n def work(self) -> None:\n \"\"\"Core event loop that calls the original function.\n\n Enters a continual loop, yielding and returning from the original\n function. 
Checks for various events (quit, pause, resume, etc...).\n (To clarify: we are creating a rudimentary event loop here because\n there IS NO Qt event loop running in the other thread to hook into)\n \"\"\"\n while True:\n if self.abort_requested:\n self.aborted.emit()\n break\n if self._paused:\n if self._resume_requested:\n self._paused = False\n self._resume_requested = False\n self.resumed.emit()\n else:\n time.sleep(self._pause_interval)\n continue\n elif self._pause_requested:\n self._paused = True\n self._pause_requested = False\n self.paused.emit()\n continue\n try:\n self.yielded.emit(self._gen.send(self._next_value()))\n except StopIteration as exc:\n return exc.value\n\n def send(self, value: Any):\n \"\"\"Send a value into the function (if a generator was used).\"\"\"\n self._incoming_value = value\n\n def _next_value(self) -> Any:\n out = None\n if self._incoming_value is not None:\n out = self._incoming_value\n self._incoming_value = None\n return out\n\n @property\n def is_paused(self) -> bool:\n \"\"\"Whether the worker is currently paused.\"\"\"\n return self._paused\n\n def toggle_pause(self) -> None:\n \"\"\"Request to pause the worker if playing or resume if paused.\"\"\"\n if self.is_paused:\n self._resume_requested = True\n else:\n self._pause_requested = True\n\n def pause(self) -> None:\n \"\"\"Request to pause the worker.\"\"\"\n if not self.is_paused:\n self._pause_requested = True\n\n def resume(self) -> None:\n \"\"\"Send a request to resume the worker.\n \"\"\"\n if self.is_paused:\n self._resume_requested = True\n\n\n############################################################################\n\n# public API\n\n# For now, the next three functions simply wrap the QThreadPool API, and allow\n# us to track and cleanup all workers that were started with ``start_worker``,\n# provided that ``wait_for_workers_to_quit`` is called at shutdown.\n# In the future, this could wrap any API, or a pure python threadpool.\n\n\ndef set_max_thread_count(num: int):\n \"\"\"Set the maximum number of threads used by the thread pool.\n\n Note: The thread pool will always use at least 1 thread, even if\n maxThreadCount limit is zero or negative.\n \"\"\"\n QThreadPool.globalInstance().setMaxThreadCount(num)\n\n\ndef wait_for_workers_to_quit(msecs: int = None):\n \"\"\"Ask all workers to quit, and wait up to `msec` for quit.\n\n Attempts to clean up all running workers by calling ``worker.quit()``\n method. Any workers in the ``WorkerBase._worker_set`` set will have this\n method.\n\n By default, this function will block indefinitely, until worker threads\n finish. If a timeout is provided, a ``RuntimeError`` will be raised if\n the workers do not gracefully exit in the time requests, but the threads\n will NOT be killed. It is (currently) left to the user to use their OS\n to force-quit rogue threads.\n\n .. important::\n\n If the user does not put any yields in their function, and the function\n is super long, it will just hang... For instance, there's no graceful\n way to kill this thread in python:\n\n .. 
code-block:: python\n\n @thread_worker\n def ZZZzzz():\n time.sleep(10000000)\n\n This is why it's always advisable to use a generator that periodically\n yields for long-running computations in another thread.\n\n See `this stack-overflow post\n <https://stackoverflow.com/questions/323972/is-there-any-way-to-kill-a-thread>`_\n for a good discussion on the difficulty of killing a rogue python thread:\n\n Parameters\n ----------\n msecs : int, optional\n Waits up to msecs milliseconds for all threads to exit and removes all\n threads from the thread pool. If msecs is `None` (the default), the\n timeout is ignored (waits for the last thread to exit).\n\n Raises\n -------\n RuntimeError\n If a timeout is provided and workers do not quit successfully within\n the time alotted.\n \"\"\"\n for worker in WorkerBase._worker_set:\n worker.quit()\n\n msecs = msecs if msecs is not None else -1\n if not QThreadPool.globalInstance().waitForDone(msecs):\n raise RuntimeError(\n f\"Workers did not quit gracefully in the time alotted ({msecs} ms)\"\n )\n\n\ndef active_thread_count() -> int:\n \"\"\"Return the number of active threads in the global ThreadPool.\"\"\"\n return QThreadPool.globalInstance().activeThreadCount()\n\n\n#############################################################################\n\n# convenience functions for creating Worker instances\n\n\ndef create_worker(\n func: Callable,\n *args,\n _start_thread: Optional[bool] = None,\n _connect: Optional[Dict[str, Union[Callable, Sequence[Callable]]]] = None,\n _worker_class: Optional[Type[WorkerBase]] = None,\n _ignore_errors: bool = False,\n **kwargs,\n) -> WorkerBase:\n \"\"\"Convenience function to start a function in another thread.\n\n By default, uses :class:`Worker`, but a custom ``WorkerBase`` subclass may\n be provided. If so, it must be a subclass of :class:`Worker`, which\n defines a standard set of signals and a run method.\n\n Parameters\n ----------\n func : Callable\n The function to call in another thread.\n _start_thread : bool, optional\n Whether to immediaetly start the thread. If False, the returned worker\n must be manually started with ``worker.start()``. by default it will be\n ``False`` if the ``_connect`` argument is ``None``, otherwise ``True``.\n _connect : Dict[str, Union[Callable, Sequence]], optional\n A mapping of ``\"signal_name\"`` -> ``callable`` or list of ``callable``:\n callback functions to connect to the various signals offered by the\n worker class. by default None\n _worker_class : Type[WorkerBase], optional\n The :class`WorkerBase` to instantiate, by default\n :class:`FunctionWorker` will be used if ``func`` is a regular function,\n and :class:`GeneratorWorker` will be used if it is a generator.\n _ignore_errors : bool, optional\n If ``False`` (the default), errors raised in the other thread will be\n reraised in the main thread (makes debugging significantly easier).\n *args\n will be passed to ``func``\n **kwargs\n will be passed to ``func``\n\n Returns\n -------\n worker : WorkerBase\n An instantiated worker. If ``_start_thread`` was ``False``, the worker\n will have a `.start()` method that can be used to start the thread.\n\n Raises\n ------\n TypeError\n If a worker_class is provided that is not a subclass of WorkerBase.\n TypeError\n If _connect is provided and is not a dict of ``{str: callable}``\n\n Examples\n --------\n\n .. 
code-block:: python\n\n def long_function(duration):\n import time\n time.sleep(duration)\n\n worker = create_worker(long_function, 10)\n\n \"\"\"\n if not _worker_class:\n if inspect.isgeneratorfunction(func):\n _worker_class = GeneratorWorker\n else:\n _worker_class = FunctionWorker\n\n if not (\n inspect.isclass(_worker_class)\n and issubclass(_worker_class, WorkerBase)\n ):\n raise TypeError(\n f'Worker {_worker_class} must be a subclass of WorkerBase'\n )\n\n worker = _worker_class(func, *args, **kwargs)\n\n if _connect is not None:\n if not isinstance(_connect, dict):\n raise TypeError(\"The '_connect' argument must be a dict\")\n\n if _start_thread is None:\n _start_thread = True\n\n for key, val in _connect.items():\n _val = val if isinstance(val, (tuple, list)) else [val]\n for v in _val:\n if not callable(v):\n raise TypeError(\n f'\"_connect[{key!r}]\" must be a function or '\n 'sequence of functions'\n )\n getattr(worker, key).connect(v)\n\n # if the user has not provided a default connection for the \"errored\"\n # signal... and they have not explicitly set ``ignore_errors=True``\n # Then rereaise any errors from the thread.\n if not _ignore_errors and not (_connect or {}).get('errored', False):\n\n def reraise(e):\n raise e\n\n worker.errored.connect(reraise)\n\n if _start_thread:\n worker.start()\n return worker\n\n\[email protected]\ndef thread_worker(\n function: Callable,\n start_thread: Optional[bool] = None,\n connect: Optional[Dict[str, Union[Callable, Sequence[Callable]]]] = None,\n worker_class: Optional[Type[WorkerBase]] = None,\n ignore_errors: bool = False,\n) -> Callable:\n \"\"\"Decorator that runs a function in a seperate thread when called.\n\n When called, the decorated function returns a :class:`WorkerBase`. See\n :func:`create_worker` for additional keyword arguments that can be used\n when calling the function.\n\n The returned worker will have these signals:\n\n - *started*: emitted when the work is started\n - *finished*: emitted when the work is finished\n - *returned*: emitted with return value\n - *errored*: emitted with error object on Exception\n\n It will also have a ``worker.start()`` method that can be used to start\n execution of the function in another thread. (useful if you need to connect\n callbacks to signals prior to execution)\n\n If the decorated function is a generator, the returned worker will also\n provide these signals:\n\n - *yielded*: emitted with yielded values\n - *paused*: emitted when a running job has successfully paused\n - *resumed*: emitted when a paused job has successfully resumed\n - *aborted*: emitted when a running job is successfully aborted\n\n And these methods:\n\n - *quit*: ask the thread to quit\n - *toggle_paused*: toggle the running state of the thread.\n - *send*: send a value into the generator. (This requires that your\n decorator function uses the ``value = yield`` syntax)\n\n\n Parameters\n ----------\n func : callable\n Function to call in another thread. For communication between threads\n may be a generator function.\n start_thread : bool, optional\n Whether to immediaetly start the thread. If False, the returned worker\n must be manually started with ``worker.start()``. by default it will be\n ``False`` if the ``_connect`` argument is ``None``, otherwise ``True``.\n connect : Dict[str, Union[Callable, Sequence]], optional\n A mapping of ``\"signal_name\"`` -> ``callable`` or list of ``callable``:\n callback functions to connect to the various signals offered by the\n worker class. 
by default None\n worker_class : Type[WorkerBase], optional\n The :class`WorkerBase` to instantiate, by default\n :class:`FunctionWorker` will be used if ``func`` is a regular function,\n and :class:`GeneratorWorker` will be used if it is a generator.\n ignore_errors : bool, optional\n If ``False`` (the default), errors raised in the other thread will be\n reraised in the main thread (makes debugging significantly easier).\n\n Returns\n -------\n callable\n function that creates a worker, puts it in a new thread and returns\n the worker instance.\n\n Examples\n --------\n\n .. code-block:: python\n\n @thread_worker\n def long_function(start, end):\n # do work, periodically yielding\n i = start\n while i <= end:\n time.sleep(0.1)\n yield i\n\n # do teardown\n return 'anything'\n\n # call the function to start running in another thread.\n worker = long_function()\n # connect signals here if desired... or they may be added using the\n # `connect` argument in the `@thread_worker` decorator... in which\n # case the worker will start immediately when long_function() is called\n worker.start()\n \"\"\"\n\n @wraps(function)\n def worker_function(*args, **kwargs):\n # decorator kwargs can be overridden at call time by using the\n # underscore-prefixed version of the kwarg.\n kwargs['_start_thread'] = kwargs.get('_start_thread', start_thread)\n kwargs['_connect'] = kwargs.get('_connect', connect)\n kwargs['_worker_class'] = kwargs.get('_worker_class', worker_class)\n kwargs['_ignore_errors'] = kwargs.get('_ignore_errors', ignore_errors)\n return create_worker(function, *args, **kwargs,)\n\n return worker_function\n\n\n############################################################################\n\n# This is a variant on the above pattern, it uses QThread instead of Qrunnable\n# see https://doc.qt.io/qt-5/threads-technologies.html#comparison-of-solutions\n# (it appears from that table that QRunnable cannot emit or receive signals,\n# but we circumvent that here with our WorkerBase class that also inherits from\n# QObject... providing signals/slots).\n#\n# A benefit of the QRunnable pattern is that Qt manages the threads for you,\n# in the QThreadPool.globalInstance() ... making it easier to reuse threads,\n# and reduce overhead.\n#\n# However, a disadvantage is that you have no access to (and therefore less\n# control over) the QThread itself. See for example all of the methods\n# provided on the QThread object: https://doc.qt.io/qt-5/qthread.html\n\n\n# TODO: potentially remove this altogether, by refactoring the dims\n# AnimationWorker to subclass WorkerBase\n\n\ndef _new_worker_qthread(\n Worker: Type[QObject],\n *args,\n _start_thread: bool = False,\n _connect: Dict[str, Callable] = None,\n **kwargs,\n):\n \"\"\"This is a convenience function to start a worker in a Qthread.\n\n In most cases, the @thread_worker decorator is sufficient and preferable.\n But this allows the user to completely customize the Worker object.\n However, they must then maintain control over the thread and clean up\n appropriately.\n\n It follows the pattern described here:\n https://www.qt.io/blog/2010/06/17/youre-doing-it-wrong\n and\n https://doc.qt.io/qt-5/qthread.html#details\n\n see also:\n https://mayaposch.wordpress.com/2011/11/01/how-to-really-truly-use-qthreads-the-full-explanation/\n\n A QThread object is not a thread! It should be thought of as a class to\n *manage* a thread, not as the actual code or object that runs in that\n thread. 
The QThread object is created on the main thread and lives there.\n\n Worker objects which derive from QObject are the things that actually do\n the work. They can be moved to a QThread as is done here.\n\n .. note:: Mostly ignorable detail\n\n While the signals/slots syntax of the worker looks very similar to\n standard \"single-threaded\" signals & slots, note that inter-thread\n signals and slots (automatically) use an event-based QueuedConnection,\n while intra-thread signals use a DirectConnection. See `Signals and\n Slots Across Threads\n <https://doc.qt.io/qt-5/threads-qobject.html#signals-and-slots-across-threads>`_\n\n Parameters\n ----------\n Worker : QObject\n QObject type that implements a work() method. The Worker should also\n emit a finished signal when the work is done.\n start_thread : bool\n If True, thread will be started immediately, otherwise, thread must\n be manually started with thread.start().\n connections: dict, optional\n Optional dictionary of {signal: function} to connect to the new worker.\n for instance: connections = {'incremented': myfunc} will result in:\n worker.incremented.connect(myfunc)\n *args\n will be passed to the Worker class on instantiation.\n **kwargs\n will be passed to the Worker class on instantiation.\n\n Returns\n -------\n worker : WorkerBase\n The created worker.\n thread : QThread\n The thread on which the worker is running.\n\n Examples\n --------\n Create some QObject that has a long-running work method:\n\n .. code-block:: python\n\n class Worker(QObject):\n\n finished = Signal()\n increment = Signal(int)\n\n def __init__(self, argument):\n super().__init__()\n self.argument = argument\n\n @Slot()\n def work(self):\n # some long running task...\n import time\n for i in range(10):\n time.sleep(1)\n self.increment.emit(i)\n self.finished.emit()\n\n worker, thread = _new_worker_qthread(\n Worker,\n 'argument',\n start_thread=True,\n connections={'increment': print},\n )\n\n \"\"\"\n\n if _connect and not isinstance(_connect, dict):\n raise TypeError('_connect parameter must be a dict')\n\n thread = QThread()\n worker = Worker(*args, **kwargs)\n worker.moveToThread(thread)\n thread.started.connect(worker.work)\n worker.finished.connect(thread.quit)\n worker.finished.connect(worker.deleteLater)\n thread.finished.connect(thread.deleteLater)\n\n if _connect:\n [getattr(worker, key).connect(val) for key, val in _connect.items()]\n\n if _start_thread:\n thread.start() # sometimes need to connect stuff before starting\n return worker, thread\n", "path": "napari/_qt/threading.py"}]} |
gh_patches_debug_1163 | rasdani/github-patches | git_diff | translate__pootle-5588 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Forever loading after captcha
At the first loading of Pootle, I mean after this:
```
2016-07-22 11:38:22,805 INFO Loading custom settings from '/var/www/translate.tamrielunlimited.it/pootle.conf'...
2016-07-22 11:38:23,146 INFO Using Python PO
2016-07-22 11:38:24,357 INFO Loading custom settings from '/var/www/translate.tamrielunlimited.it/pootle.conf'...
2016-07-22 11:38:24,700 INFO Using Python PO
```
If an anonymous user makes a suggestion, he has this problem: it keeps loading forever after the captcha... the only way to resolve this is reloading the page (sometimes several times) and then it works... I think it's the captcha, because registered members don't have this problem...
I found this problem a few days ago, but I wanted to test it with some friends and they all had the same problem after the captcha... when they entered it and then reloaded the page (some of them more than once), it worked without any other problem...
Here is the test as a video, check it out to understand: https://dl.dropboxusercontent.com/u/89718989/Screenshot/pootle/pootle-suggestion.mp4
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pootle/apps/pootle_store/views.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) Pootle contributors.
4 #
5 # This file is a part of the Pootle project. It is distributed under the GPL3
6 # or later license. See the LICENSE file for a copy of the license and the
7 # AUTHORS file for copyright and authorship information.
8
9 import calendar
10
11 from translate.lang import data
12
13 from django import forms
14 from django.conf import settings
15 from django.core.exceptions import ObjectDoesNotExist, PermissionDenied
16 from django.http import Http404, QueryDict
17 from django.shortcuts import redirect
18 from django.template import loader
19 from django.utils import timezone
20 from django.utils.functional import cached_property
21 from django.utils.lru_cache import lru_cache
22 from django.utils.translation import to_locale
23 from django.utils.translation.trans_real import parse_accept_lang_header
24 from django.views.decorators.http import require_http_methods
25
26 from pootle.core.delegate import review, search_backend
27 from pootle.core.exceptions import Http400
28 from pootle.core.http import JsonResponse, JsonResponseBadRequest
29 from pootle.core.utils import dateformat
30 from pootle.core.views import PootleJSON
31 from pootle.i18n.gettext import ugettext as _
32 from pootle.local.dates import timesince
33 from pootle_app.models.directory import Directory
34 from pootle_app.models.permissions import (check_permission,
35 check_user_permission)
36 from pootle_comment.forms import UnsecuredCommentForm
37 from pootle_language.models import Language
38 from pootle_misc.util import ajax_required
39 from pootle_statistics.models import (Submission, SubmissionFields,
40 SubmissionTypes)
41
42 from .decorators import get_unit_context
43 from .forms import UnitSearchForm, unit_comment_form_factory, unit_form_factory
44 from .models import Suggestion, Unit
45 from .templatetags.store_tags import pluralize_source, pluralize_target
46 from .unit.results import GroupedResults
47 from .unit.timeline import Timeline
48 from .util import find_altsrcs
49
50
51 def get_alt_src_langs(request, user, translation_project):
52 language = translation_project.language
53 project = translation_project.project
54 source_language = project.source_language
55
56 langs = user.alt_src_langs.exclude(
57 id__in=(language.id, source_language.id)
58 ).filter(translationproject__project=project)
59
60 if not user.alt_src_langs.count():
61 accept = request.META.get('HTTP_ACCEPT_LANGUAGE', '')
62
63 for accept_lang, __ in parse_accept_lang_header(accept):
64 if accept_lang == '*':
65 continue
66
67 simplified = data.simplify_to_common(accept_lang)
68 normalized = to_locale(data.normalize_code(simplified))
69 code = to_locale(accept_lang)
70 if (normalized in
71 ('en', 'en_US', source_language.code, language.code) or
72 code in ('en', 'en_US', source_language.code, language.code)):
73 continue
74
75 langs = Language.objects.filter(
76 code__in=(normalized, code),
77 translationproject__project=project,
78 )
79 if langs.count():
80 break
81
82 return langs
83
84
85 #
86 # Views used with XMLHttpRequest requests.
87 #
88
89 def _filter_ctx_units(units_qs, unit, how_many, gap=0):
90 """Returns ``how_many``*2 units that are before and after ``index``."""
91 result = {'before': [], 'after': []}
92
93 if how_many and unit.index - gap > 0:
94 before = units_qs.filter(store=unit.store_id, index__lt=unit.index) \
95 .order_by('-index')[gap:how_many+gap]
96 result['before'] = _build_units_list(before, reverse=True)
97 result['before'].reverse()
98
99 # FIXME: can we avoid this query if length is known?
100 if how_many:
101 after = units_qs.filter(store=unit.store_id,
102 index__gt=unit.index)[gap:how_many+gap]
103 result['after'] = _build_units_list(after)
104
105 return result
106
107
108 def _prepare_unit(unit):
109 """Constructs a dictionary with relevant `unit` data."""
110 return {
111 'id': unit.id,
112 'url': unit.get_translate_url(),
113 'isfuzzy': unit.isfuzzy(),
114 'source': [source[1] for source in pluralize_source(unit)],
115 'target': [target[1] for target in pluralize_target(unit)],
116 }
117
118
119 def _build_units_list(units, reverse=False):
120 """Given a list/queryset of units, builds a list with the unit data
121 contained in a dictionary ready to be returned as JSON.
122
123 :return: A list with unit id, source, and target texts. In case of
124 having plural forms, a title for the plural form is also provided.
125 """
126 return_units = []
127
128 for unit in iter(units):
129 return_units.append(_prepare_unit(unit))
130
131 return return_units
132
133
134 def _get_critical_checks_snippet(request, unit):
135 """Retrieves the critical checks snippet.
136
137 :param request: an `HttpRequest` object
138 :param unit: a `Unit` instance for which critical checks need to be
139 rendered.
140 :return: rendered HTML snippet with the failing checks, or `None` if
141 there are no critical failing checks.
142 """
143 if not unit.has_critical_checks():
144 return None
145
146 can_review = check_user_permission(request.user, 'review',
147 unit.store.parent)
148 ctx = {
149 'canreview': can_review,
150 'unit': unit,
151 }
152 template = loader.get_template('editor/units/xhr_checks.html')
153 return template.render(context=ctx, request=request)
154
155
156 @ajax_required
157 def get_units(request, **kwargs_):
158 """Gets source and target texts and its metadata.
159
160 :return: A JSON-encoded string containing the source and target texts
161 grouped by the store they belong to.
162
163 The optional `count` GET parameter defines the chunk size to
164 consider. The user's preference will be used by default.
165
166 When the `initial` GET parameter is present, a sorted list of
167 the result set ids will be returned too.
168 """
169 search_form = UnitSearchForm(request.GET, user=request.user)
170
171 if not search_form.is_valid():
172 errors = search_form.errors.as_data()
173 if "path" in errors:
174 for error in errors["path"]:
175 if error.code == "max_length":
176 raise Http400(_('Path too long.'))
177 elif error.code == "required":
178 raise Http400(_('Arguments missing.'))
179 raise Http404(forms.ValidationError(search_form.errors).messages)
180
181 total, start, end, units_qs = search_backend.get(Unit)(
182 request.user, **search_form.cleaned_data).search()
183 return JsonResponse(
184 {'start': start,
185 'end': end,
186 'total': total,
187 'unitGroups': GroupedResults(units_qs).data})
188
189
190 @ajax_required
191 @get_unit_context('view')
192 def get_more_context(request, unit, **kwargs_):
193 """Retrieves more context units.
194
195 :return: An object in JSON notation that contains the source and target
196 texts for units that are in the context of unit ``uid``.
197 """
198 store = request.store
199 json = {}
200 gap = int(request.GET.get('gap', 0))
201 qty = int(request.GET.get('qty', 1))
202
203 json["ctx"] = _filter_ctx_units(store.units, unit, qty, gap)
204 return JsonResponse(json)
205
206
207 @ajax_required
208 @require_http_methods(['POST', 'DELETE'])
209 @get_unit_context('translate')
210 def comment(request, unit, **kwargs_):
211 """Dispatches the comment action according to the HTTP verb."""
212 if request.method == 'DELETE':
213 return delete_comment(request, unit)
214 elif request.method == 'POST':
215 return save_comment(request, unit)
216
217
218 def delete_comment(request, unit, **kwargs_):
219 """Deletes a comment by blanking its contents and records a new
220 submission.
221 """
222 unit.commented_by = None
223 unit.commented_on = None
224
225 language = request.translation_project.language
226 comment_form_class = unit_comment_form_factory(language)
227 form = comment_form_class({}, instance=unit, request=request)
228
229 if form.is_valid():
230 form.save()
231 return JsonResponse({})
232
233 return JsonResponseBadRequest({'msg': _("Failed to remove comment.")})
234
235
236 def save_comment(request, unit):
237 """Stores a new comment for the given ``unit``.
238
239 :return: If the form validates, the cleaned comment is returned.
240 An error message is returned otherwise.
241 """
242 # Update current unit instance's attributes
243 unit.commented_by = request.user
244 unit.commented_on = timezone.now().replace(microsecond=0)
245
246 language = request.translation_project.language
247 form = unit_comment_form_factory(language)(request.POST, instance=unit,
248 request=request)
249
250 if form.is_valid():
251 form.save()
252
253 user = request.user
254 directory = unit.store.parent
255
256 ctx = {
257 'unit': unit,
258 'language': language,
259 'cantranslate': check_user_permission(user, 'translate',
260 directory),
261 'cansuggest': check_user_permission(user, 'suggest', directory),
262 }
263 t = loader.get_template('editor/units/xhr_comment.html')
264
265 return JsonResponse({'comment': t.render(context=ctx,
266 request=request)})
267
268 return JsonResponseBadRequest({'msg': _("Comment submission failed.")})
269
270
271 class PootleUnitJSON(PootleJSON):
272 model = Unit
273 pk_url_kwarg = "uid"
274
275 @cached_property
276 def permission_context(self):
277 self.object = self.get_object()
278 return Directory.objects.select_related("tp", "tp__project").get(
279 pk=self.store.parent_id)
280
281 @property
282 def pootle_path(self):
283 return self.store.pootle_path
284
285 @cached_property
286 def tp(self):
287 return self.store.translation_project
288
289 @cached_property
290 def store(self):
291 return self.object.store
292
293 @cached_property
294 def source_language(self):
295 return self.project.source_language
296
297 @cached_property
298 def directory(self):
299 return self.store.parent
300
301 @lru_cache()
302 def get_object(self):
303 return super(PootleUnitJSON, self).get_object()
304
305
306 class UnitTimelineJSON(PootleUnitJSON):
307
308 model = Unit
309 pk_url_kwarg = "uid"
310
311 template_name = 'editor/units/xhr_timeline.html'
312
313 @property
314 def language(self):
315 return self.object.store.translation_project.language
316
317 @cached_property
318 def permission_context(self):
319 self.object = self.get_object()
320 return self.project.directory
321
322 @property
323 def project(self):
324 return self.object.store.translation_project.project
325
326 @property
327 def timeline(self):
328 return Timeline(self.object)
329
330 def get_context_data(self, *args, **kwargs):
331 return dict(
332 entries_group=self.timeline.grouped_entries,
333 language=self.language)
334
335 def get_queryset(self):
336 return Unit.objects.get_translatable(self.request.user).select_related(
337 "store__translation_project__language",
338 "store__translation_project__project__directory")
339
340 def get_response_data(self, context):
341 return {
342 'uid': self.object.id,
343 'entries_group': self.get_entries_group_data(context),
344 'timeline': self.render_timeline(context)}
345
346 def render_timeline(self, context):
347 return loader.get_template(self.template_name).render(context=context)
348
349 def get_entries_group_data(self, context):
350 result = []
351 for entry_group in context['entries_group']:
352 display_dt = entry_group['datetime']
353 if display_dt is not None:
354 display_dt = dateformat.format(display_dt)
355 iso_dt = entry_group['datetime'].isoformat()
356 relative_time = timesince(
357 calendar.timegm(entry_group['datetime'].timetuple()))
358 else:
359 iso_dt = None
360 relative_time = None
361 result.append({
362 "display_datetime": display_dt,
363 "iso_datetime": iso_dt,
364 "relative_time": relative_time,
365 "via_upload": entry_group.get('via_upload', False),
366 })
367 return result
368
369
370 class UnitEditJSON(PootleUnitJSON):
371
372 def get_edit_template(self):
373 if self.project.is_terminology or self.store.has_terminology:
374 return loader.get_template('editor/units/term_edit.html')
375 return loader.get_template('editor/units/edit.html')
376
377 def render_edit_template(self, context):
378 return self.get_edit_template().render(context=context,
379 request=self.request)
380
381 def get_source_nplurals(self):
382 if self.object.hasplural():
383 return len(self.object.source.strings)
384 return None
385
386 def get_target_nplurals(self):
387 source_nplurals = self.get_source_nplurals()
388 return self.language.nplurals if source_nplurals is not None else 1
389
390 def get_unit_values(self):
391 target_nplurals = self.get_target_nplurals()
392 unit_values = [value for value in self.object.target_f.strings]
393 if len(unit_values) < target_nplurals:
394 return unit_values + ((target_nplurals - len(unit_values)) * [''])
395 return unit_values
396
397 def get_unit_edit_form(self):
398 form_class = unit_form_factory(self.language,
399 self.get_source_nplurals(),
400 self.request)
401 return form_class(instance=self.object, request=self.request)
402
403 def get_unit_comment_form(self):
404 comment_form_class = unit_comment_form_factory(self.language)
405 return comment_form_class({}, instance=self.object, request=self.request)
406
407 @lru_cache()
408 def get_alt_srcs(self):
409 return find_altsrcs(
410 self.object,
411 get_alt_src_langs(self.request, self.request.user, self.tp),
412 store=self.store,
413 project=self.project)
414
415 def get_queryset(self):
416 return Unit.objects.get_translatable(self.request.user).select_related(
417 "store",
418 "store__filetype",
419 "store__parent",
420 "store__translation_project",
421 "store__translation_project__project",
422 "store__translation_project__project__source_language",
423 "store__translation_project__language")
424
425 def get_sources(self):
426 sources = {
427 unit.language_code: unit.target.strings
428 for unit in self.get_alt_srcs()}
429 sources[self.source_language.code] = self.object.source_f.strings
430 return sources
431
432 def get_context_data(self, *args, **kwargs):
433 priority = (
434 self.store.priority
435 if 'virtualfolder' in settings.INSTALLED_APPS
436 else None)
437 suggestions = self.object.get_suggestions()
438 return {
439 'unit': self.object,
440 'form': self.get_unit_edit_form(),
441 'comment_form': self.get_unit_comment_form(),
442 'priority': priority,
443 'store': self.store,
444 'directory': self.directory,
445 'user': self.request.user,
446 'project': self.project,
447 'language': self.language,
448 'source_language': self.source_language,
449 'cantranslate': check_user_permission(self.request.user,
450 "translate",
451 self.directory),
452 'cantranslatexlang': check_user_permission(self.request.user,
453 "administrate",
454 self.project.directory),
455 'cansuggest': check_user_permission(self.request.user,
456 "suggest",
457 self.directory),
458 'canreview': check_user_permission(self.request.user,
459 "review",
460 self.directory),
461 'has_admin_access': check_user_permission(self.request.user,
462 'administrate',
463 self.directory),
464 'altsrcs': {x.id: x.data for x in self.get_alt_srcs()},
465 'unit_values': self.get_unit_values(),
466 'target_nplurals': self.get_target_nplurals(),
467 'has_plurals': self.object.hasplural(),
468 'filetype': self.object.store.filetype.name,
469 'suggestions': suggestions,
470 'suggestions_dict': {x.id: dict(id=x.id, target=x.target.strings)
471 for x in suggestions},
472 }
473
474 def get_response_data(self, context):
475 return {
476 'editor': self.render_edit_template(context),
477 'tm_suggestions': self.object.get_tm_suggestions(),
478 'is_obsolete': self.object.isobsolete(),
479 'sources': self.get_sources()}
480
481
482 @get_unit_context('view')
483 def permalink_redirect(request, unit):
484 return redirect(request.build_absolute_uri(unit.get_translate_url()))
485
486
487 @ajax_required
488 @get_unit_context('translate')
489 def submit(request, unit, **kwargs_):
490 """Processes translation submissions and stores them in the database.
491
492 :return: An object in JSON notation that contains the previous and last
493 units for the unit next to unit ``uid``.
494 """
495 json = {}
496
497 translation_project = request.translation_project
498 language = translation_project.language
499
500 if unit.hasplural():
501 snplurals = len(unit.source.strings)
502 else:
503 snplurals = None
504
505 # Store current time so that it is the same for all submissions
506 current_time = timezone.now()
507
508 form_class = unit_form_factory(language, snplurals, request)
509 form = form_class(request.POST, instance=unit, request=request)
510
511 if form.is_valid():
512 suggestion = form.cleaned_data['suggestion']
513 if suggestion:
514 review.get(Suggestion)([suggestion], request.user).accept()
515 if form.cleaned_data['comment']:
516 kwargs = dict(
517 comment=form.cleaned_data['comment'],
518 user=request.user,
519 )
520 comment_form = UnsecuredCommentForm(suggestion, kwargs)
521 if comment_form.is_valid():
522 comment_form.save()
523
524 if form.updated_fields:
525 for field, old_value, new_value in form.updated_fields:
526 if field == SubmissionFields.TARGET and suggestion:
527 old_value = str(suggestion.target_f)
528 sub = Submission(
529 creation_time=current_time,
530 translation_project=translation_project,
531 submitter=request.user,
532 unit=unit,
533 store=unit.store,
534 field=field,
535 type=SubmissionTypes.NORMAL,
536 old_value=old_value,
537 new_value=new_value,
538 similarity=form.cleaned_data['similarity'],
539 mt_similarity=form.cleaned_data['mt_similarity'],
540 )
541 sub.save()
542
543 # Update current unit instance's attributes
544 # important to set these attributes after saving Submission
545 # because we need to access the unit's state before it was saved
546 if SubmissionFields.TARGET in (f[0] for f in form.updated_fields):
547 form.instance.submitted_by = request.user
548 form.instance.submitted_on = current_time
549 form.instance.reviewed_by = None
550 form.instance.reviewed_on = None
551
552 form.instance._log_user = request.user
553
554 form.save()
555
556 json['checks'] = _get_critical_checks_snippet(request, unit)
557
558 json['user_score'] = request.user.public_score
559 json['newtargets'] = [target for target in form.instance.target.strings]
560
561 return JsonResponse(json)
562
563 return JsonResponseBadRequest({'msg': _("Failed to process submission.")})
564
565
566 @ajax_required
567 @get_unit_context('suggest')
568 def suggest(request, unit, **kwargs_):
569 """Processes translation suggestions and stores them in the database.
570
571 :return: An object in JSON notation that contains the previous and last
572 units for the unit next to unit ``uid``.
573 """
574 json = {}
575
576 translation_project = request.translation_project
577 language = translation_project.language
578
579 if unit.hasplural():
580 snplurals = len(unit.source.strings)
581 else:
582 snplurals = None
583
584 form_class = unit_form_factory(language, snplurals, request)
585 form = form_class(request.POST, instance=unit, request=request)
586
587 if form.is_valid():
588 if form.cleaned_data.get("target_updated"):
589 # TODO: Review if this hackish method is still necessary
590 # HACKISH: django 1.2 stupidly modifies instance on model form
591 # validation, reload unit from db
592 unit = Unit.objects.get(id=unit.id)
593 review.get(Suggestion)().add(
594 unit,
595 form.cleaned_data['target_f'],
596 user=request.user,
597 similarity=form.cleaned_data['similarity'],
598 mt_similarity=form.cleaned_data['mt_similarity'])
599
600 json['user_score'] = request.user.public_score
601
602 return JsonResponse(json)
603
604 return JsonResponseBadRequest({'msg': _("Failed to process suggestion.")})
605
606
607 @ajax_required
608 @require_http_methods(['POST', 'DELETE'])
609 def manage_suggestion(request, uid, sugg_id, **kwargs_):
610 """Dispatches the suggestion action according to the HTTP verb."""
611 if request.method == 'DELETE':
612 return reject_suggestion(request, uid, sugg_id)
613 elif request.method == 'POST':
614 return accept_suggestion(request, uid, sugg_id)
615
616
617 @get_unit_context()
618 def reject_suggestion(request, unit, suggid, **kwargs_):
619 try:
620 suggestion = unit.suggestion_set.get(id=suggid)
621 except ObjectDoesNotExist:
622 raise Http404
623
624 # In order to be able to reject a suggestion, users have to either:
625 # 1. Have `review` rights, or
626 # 2. Be the author of the suggestion being rejected
627 has_permission = (
628 check_permission('review', request)
629 or (not request.user.is_anonymous
630 and request.user == suggestion.user))
631 if not has_permission:
632 raise PermissionDenied(
633 _('Insufficient rights to access review mode.'))
634 review.get(Suggestion)(
635 [suggestion],
636 request.user).reject(QueryDict(request.body).get("comment"))
637 json = {
638 'udbid': unit.id,
639 'sugid': suggid,
640 'user_score': request.user.public_score,
641 }
642 return JsonResponse(json)
643
644
645 @get_unit_context('review')
646 def accept_suggestion(request, unit, suggid, **kwargs_):
647 try:
648 suggestion = unit.suggestion_set.get(id=suggid)
649 except ObjectDoesNotExist:
650 raise Http404
651 review.get(Suggestion)(
652 [suggestion], request.user).accept(request.POST.get("comment"))
653 json = {
654 'udbid': unit.id,
655 'sugid': suggid,
656 'user_score': request.user.public_score,
657 'newtargets': [target for target in unit.target.strings],
658 'checks': _get_critical_checks_snippet(request, unit),
659 }
660 return JsonResponse(json)
661
662
663 @ajax_required
664 @get_unit_context('review')
665 def toggle_qualitycheck(request, unit, check_id, **kwargs_):
666 try:
667 unit.toggle_qualitycheck(check_id, 'mute' in request.POST, request.user)
668 except ObjectDoesNotExist:
669 raise Http404
670
671 return JsonResponse({})
672
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pootle/apps/pootle_store/views.py b/pootle/apps/pootle_store/views.py
--- a/pootle/apps/pootle_store/views.py
+++ b/pootle/apps/pootle_store/views.py
@@ -597,7 +597,8 @@
similarity=form.cleaned_data['similarity'],
mt_similarity=form.cleaned_data['mt_similarity'])
- json['user_score'] = request.user.public_score
+ if not request.user.is_anonymous:
+ json['user_score'] = request.user.public_score
return JsonResponse(json)
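
The guard in the hunk above plausibly works because `request.user` for a guest suggestion is Django's `AnonymousUser`, which has no `public_score` attribute; the resulting error would turn the AJAX reply into a failed response that the editor never resolves, leaving the spinner running after the captcha. That reading is an inference from the diff, not something stated in the issue. Below is a minimal, self-contained sketch of the guarded-access pattern only — the classes and helper are illustrative stand-ins, not Pootle's real code.

```python
# Illustrative stand-ins for Django's user objects (not Pootle's real classes).
class AnonymousUser:
    is_anonymous = True          # like Django's AnonymousUser, no public_score attribute


class Member:
    is_anonymous = False

    def __init__(self, public_score):
        self.public_score = public_score


def suggest_response(user):
    """Build the JSON payload the way the patched view does."""
    json = {}
    if not user.is_anonymous:    # guard prevents attribute access failing for guests
        json['user_score'] = user.public_score
    return json


print(suggest_response(AnonymousUser()))  # {}
print(suggest_response(Member(42)))       # {'user_score': 42}
```

Registered users still get their score back, while anonymous users simply receive a payload without it, which matches the behaviour of the accepted patch.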
| {"golden_diff": "diff --git a/pootle/apps/pootle_store/views.py b/pootle/apps/pootle_store/views.py\n--- a/pootle/apps/pootle_store/views.py\n+++ b/pootle/apps/pootle_store/views.py\n@@ -597,7 +597,8 @@\n similarity=form.cleaned_data['similarity'],\n mt_similarity=form.cleaned_data['mt_similarity'])\n \n- json['user_score'] = request.user.public_score\n+ if not request.user.is_anonymous:\n+ json['user_score'] = request.user.public_score\n \n return JsonResponse(json)\n", "issue": "Forever loading after captcha\nAt the first loading of poolte, i mean after this\n\n```\n2016-07-22 11:38:22,805 INFO Loading custom settings from '/var/www/translate.tamrielunlimited.it/pootle.conf'...\n2016-07-22 11:38:23,146 INFO Using Python PO\n2016-07-22 11:38:24,357 INFO Loading custom settings from '/var/www/translate.tamrielunlimited.it/pootle.conf'...\n2016-07-22 11:38:24,700 INFO Using Python PO\n```\n\nIf an anonymous user make a suggestion, he has this problem, it keeps loading forever after the captcha... the only way to resolve this is reloading the page (??several times) and then it works... i think it's the captcha, cause the registered members have not this problem...\n\nI found out this problem few days ago, but i wanted to test with some friends and they had all the same problem after the captcha... when they inserted it and then reloaded the page (someone more than one time), it worked without any other problem...\n\nthis is the test as video, check it out to understand: https://dl.dropboxusercontent.com/u/89718989/Screenshot/pootle/pootle-suggestion.mp4\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. 
See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport calendar\n\nfrom translate.lang import data\n\nfrom django import forms\nfrom django.conf import settings\nfrom django.core.exceptions import ObjectDoesNotExist, PermissionDenied\nfrom django.http import Http404, QueryDict\nfrom django.shortcuts import redirect\nfrom django.template import loader\nfrom django.utils import timezone\nfrom django.utils.functional import cached_property\nfrom django.utils.lru_cache import lru_cache\nfrom django.utils.translation import to_locale\nfrom django.utils.translation.trans_real import parse_accept_lang_header\nfrom django.views.decorators.http import require_http_methods\n\nfrom pootle.core.delegate import review, search_backend\nfrom pootle.core.exceptions import Http400\nfrom pootle.core.http import JsonResponse, JsonResponseBadRequest\nfrom pootle.core.utils import dateformat\nfrom pootle.core.views import PootleJSON\nfrom pootle.i18n.gettext import ugettext as _\nfrom pootle.local.dates import timesince\nfrom pootle_app.models.directory import Directory\nfrom pootle_app.models.permissions import (check_permission,\n check_user_permission)\nfrom pootle_comment.forms import UnsecuredCommentForm\nfrom pootle_language.models import Language\nfrom pootle_misc.util import ajax_required\nfrom pootle_statistics.models import (Submission, SubmissionFields,\n SubmissionTypes)\n\nfrom .decorators import get_unit_context\nfrom .forms import UnitSearchForm, unit_comment_form_factory, unit_form_factory\nfrom .models import Suggestion, Unit\nfrom .templatetags.store_tags import pluralize_source, pluralize_target\nfrom .unit.results import GroupedResults\nfrom .unit.timeline import Timeline\nfrom .util import find_altsrcs\n\n\ndef get_alt_src_langs(request, user, translation_project):\n language = translation_project.language\n project = translation_project.project\n source_language = project.source_language\n\n langs = user.alt_src_langs.exclude(\n id__in=(language.id, source_language.id)\n ).filter(translationproject__project=project)\n\n if not user.alt_src_langs.count():\n accept = request.META.get('HTTP_ACCEPT_LANGUAGE', '')\n\n for accept_lang, __ in parse_accept_lang_header(accept):\n if accept_lang == '*':\n continue\n\n simplified = data.simplify_to_common(accept_lang)\n normalized = to_locale(data.normalize_code(simplified))\n code = to_locale(accept_lang)\n if (normalized in\n ('en', 'en_US', source_language.code, language.code) or\n code in ('en', 'en_US', source_language.code, language.code)):\n continue\n\n langs = Language.objects.filter(\n code__in=(normalized, code),\n translationproject__project=project,\n )\n if langs.count():\n break\n\n return langs\n\n\n#\n# Views used with XMLHttpRequest requests.\n#\n\ndef _filter_ctx_units(units_qs, unit, how_many, gap=0):\n \"\"\"Returns ``how_many``*2 units that are before and after ``index``.\"\"\"\n result = {'before': [], 'after': []}\n\n if how_many and unit.index - gap > 0:\n before = units_qs.filter(store=unit.store_id, index__lt=unit.index) \\\n .order_by('-index')[gap:how_many+gap]\n result['before'] = _build_units_list(before, reverse=True)\n result['before'].reverse()\n\n # FIXME: can we avoid this query if length is known?\n if how_many:\n after = units_qs.filter(store=unit.store_id,\n index__gt=unit.index)[gap:how_many+gap]\n result['after'] = _build_units_list(after)\n\n return result\n\n\ndef _prepare_unit(unit):\n \"\"\"Constructs a dictionary with relevant `unit` 
data.\"\"\"\n return {\n 'id': unit.id,\n 'url': unit.get_translate_url(),\n 'isfuzzy': unit.isfuzzy(),\n 'source': [source[1] for source in pluralize_source(unit)],\n 'target': [target[1] for target in pluralize_target(unit)],\n }\n\n\ndef _build_units_list(units, reverse=False):\n \"\"\"Given a list/queryset of units, builds a list with the unit data\n contained in a dictionary ready to be returned as JSON.\n\n :return: A list with unit id, source, and target texts. In case of\n having plural forms, a title for the plural form is also provided.\n \"\"\"\n return_units = []\n\n for unit in iter(units):\n return_units.append(_prepare_unit(unit))\n\n return return_units\n\n\ndef _get_critical_checks_snippet(request, unit):\n \"\"\"Retrieves the critical checks snippet.\n\n :param request: an `HttpRequest` object\n :param unit: a `Unit` instance for which critical checks need to be\n rendered.\n :return: rendered HTML snippet with the failing checks, or `None` if\n there are no critical failing checks.\n \"\"\"\n if not unit.has_critical_checks():\n return None\n\n can_review = check_user_permission(request.user, 'review',\n unit.store.parent)\n ctx = {\n 'canreview': can_review,\n 'unit': unit,\n }\n template = loader.get_template('editor/units/xhr_checks.html')\n return template.render(context=ctx, request=request)\n\n\n@ajax_required\ndef get_units(request, **kwargs_):\n \"\"\"Gets source and target texts and its metadata.\n\n :return: A JSON-encoded string containing the source and target texts\n grouped by the store they belong to.\n\n The optional `count` GET parameter defines the chunk size to\n consider. The user's preference will be used by default.\n\n When the `initial` GET parameter is present, a sorted list of\n the result set ids will be returned too.\n \"\"\"\n search_form = UnitSearchForm(request.GET, user=request.user)\n\n if not search_form.is_valid():\n errors = search_form.errors.as_data()\n if \"path\" in errors:\n for error in errors[\"path\"]:\n if error.code == \"max_length\":\n raise Http400(_('Path too long.'))\n elif error.code == \"required\":\n raise Http400(_('Arguments missing.'))\n raise Http404(forms.ValidationError(search_form.errors).messages)\n\n total, start, end, units_qs = search_backend.get(Unit)(\n request.user, **search_form.cleaned_data).search()\n return JsonResponse(\n {'start': start,\n 'end': end,\n 'total': total,\n 'unitGroups': GroupedResults(units_qs).data})\n\n\n@ajax_required\n@get_unit_context('view')\ndef get_more_context(request, unit, **kwargs_):\n \"\"\"Retrieves more context units.\n\n :return: An object in JSON notation that contains the source and target\n texts for units that are in the context of unit ``uid``.\n \"\"\"\n store = request.store\n json = {}\n gap = int(request.GET.get('gap', 0))\n qty = int(request.GET.get('qty', 1))\n\n json[\"ctx\"] = _filter_ctx_units(store.units, unit, qty, gap)\n return JsonResponse(json)\n\n\n@ajax_required\n@require_http_methods(['POST', 'DELETE'])\n@get_unit_context('translate')\ndef comment(request, unit, **kwargs_):\n \"\"\"Dispatches the comment action according to the HTTP verb.\"\"\"\n if request.method == 'DELETE':\n return delete_comment(request, unit)\n elif request.method == 'POST':\n return save_comment(request, unit)\n\n\ndef delete_comment(request, unit, **kwargs_):\n \"\"\"Deletes a comment by blanking its contents and records a new\n submission.\n \"\"\"\n unit.commented_by = None\n unit.commented_on = None\n\n language = request.translation_project.language\n 
comment_form_class = unit_comment_form_factory(language)\n form = comment_form_class({}, instance=unit, request=request)\n\n if form.is_valid():\n form.save()\n return JsonResponse({})\n\n return JsonResponseBadRequest({'msg': _(\"Failed to remove comment.\")})\n\n\ndef save_comment(request, unit):\n \"\"\"Stores a new comment for the given ``unit``.\n\n :return: If the form validates, the cleaned comment is returned.\n An error message is returned otherwise.\n \"\"\"\n # Update current unit instance's attributes\n unit.commented_by = request.user\n unit.commented_on = timezone.now().replace(microsecond=0)\n\n language = request.translation_project.language\n form = unit_comment_form_factory(language)(request.POST, instance=unit,\n request=request)\n\n if form.is_valid():\n form.save()\n\n user = request.user\n directory = unit.store.parent\n\n ctx = {\n 'unit': unit,\n 'language': language,\n 'cantranslate': check_user_permission(user, 'translate',\n directory),\n 'cansuggest': check_user_permission(user, 'suggest', directory),\n }\n t = loader.get_template('editor/units/xhr_comment.html')\n\n return JsonResponse({'comment': t.render(context=ctx,\n request=request)})\n\n return JsonResponseBadRequest({'msg': _(\"Comment submission failed.\")})\n\n\nclass PootleUnitJSON(PootleJSON):\n model = Unit\n pk_url_kwarg = \"uid\"\n\n @cached_property\n def permission_context(self):\n self.object = self.get_object()\n return Directory.objects.select_related(\"tp\", \"tp__project\").get(\n pk=self.store.parent_id)\n\n @property\n def pootle_path(self):\n return self.store.pootle_path\n\n @cached_property\n def tp(self):\n return self.store.translation_project\n\n @cached_property\n def store(self):\n return self.object.store\n\n @cached_property\n def source_language(self):\n return self.project.source_language\n\n @cached_property\n def directory(self):\n return self.store.parent\n\n @lru_cache()\n def get_object(self):\n return super(PootleUnitJSON, self).get_object()\n\n\nclass UnitTimelineJSON(PootleUnitJSON):\n\n model = Unit\n pk_url_kwarg = \"uid\"\n\n template_name = 'editor/units/xhr_timeline.html'\n\n @property\n def language(self):\n return self.object.store.translation_project.language\n\n @cached_property\n def permission_context(self):\n self.object = self.get_object()\n return self.project.directory\n\n @property\n def project(self):\n return self.object.store.translation_project.project\n\n @property\n def timeline(self):\n return Timeline(self.object)\n\n def get_context_data(self, *args, **kwargs):\n return dict(\n entries_group=self.timeline.grouped_entries,\n language=self.language)\n\n def get_queryset(self):\n return Unit.objects.get_translatable(self.request.user).select_related(\n \"store__translation_project__language\",\n \"store__translation_project__project__directory\")\n\n def get_response_data(self, context):\n return {\n 'uid': self.object.id,\n 'entries_group': self.get_entries_group_data(context),\n 'timeline': self.render_timeline(context)}\n\n def render_timeline(self, context):\n return loader.get_template(self.template_name).render(context=context)\n\n def get_entries_group_data(self, context):\n result = []\n for entry_group in context['entries_group']:\n display_dt = entry_group['datetime']\n if display_dt is not None:\n display_dt = dateformat.format(display_dt)\n iso_dt = entry_group['datetime'].isoformat()\n relative_time = timesince(\n calendar.timegm(entry_group['datetime'].timetuple()))\n else:\n iso_dt = None\n relative_time = None\n result.append({\n 
\"display_datetime\": display_dt,\n \"iso_datetime\": iso_dt,\n \"relative_time\": relative_time,\n \"via_upload\": entry_group.get('via_upload', False),\n })\n return result\n\n\nclass UnitEditJSON(PootleUnitJSON):\n\n def get_edit_template(self):\n if self.project.is_terminology or self.store.has_terminology:\n return loader.get_template('editor/units/term_edit.html')\n return loader.get_template('editor/units/edit.html')\n\n def render_edit_template(self, context):\n return self.get_edit_template().render(context=context,\n request=self.request)\n\n def get_source_nplurals(self):\n if self.object.hasplural():\n return len(self.object.source.strings)\n return None\n\n def get_target_nplurals(self):\n source_nplurals = self.get_source_nplurals()\n return self.language.nplurals if source_nplurals is not None else 1\n\n def get_unit_values(self):\n target_nplurals = self.get_target_nplurals()\n unit_values = [value for value in self.object.target_f.strings]\n if len(unit_values) < target_nplurals:\n return unit_values + ((target_nplurals - len(unit_values)) * [''])\n return unit_values\n\n def get_unit_edit_form(self):\n form_class = unit_form_factory(self.language,\n self.get_source_nplurals(),\n self.request)\n return form_class(instance=self.object, request=self.request)\n\n def get_unit_comment_form(self):\n comment_form_class = unit_comment_form_factory(self.language)\n return comment_form_class({}, instance=self.object, request=self.request)\n\n @lru_cache()\n def get_alt_srcs(self):\n return find_altsrcs(\n self.object,\n get_alt_src_langs(self.request, self.request.user, self.tp),\n store=self.store,\n project=self.project)\n\n def get_queryset(self):\n return Unit.objects.get_translatable(self.request.user).select_related(\n \"store\",\n \"store__filetype\",\n \"store__parent\",\n \"store__translation_project\",\n \"store__translation_project__project\",\n \"store__translation_project__project__source_language\",\n \"store__translation_project__language\")\n\n def get_sources(self):\n sources = {\n unit.language_code: unit.target.strings\n for unit in self.get_alt_srcs()}\n sources[self.source_language.code] = self.object.source_f.strings\n return sources\n\n def get_context_data(self, *args, **kwargs):\n priority = (\n self.store.priority\n if 'virtualfolder' in settings.INSTALLED_APPS\n else None)\n suggestions = self.object.get_suggestions()\n return {\n 'unit': self.object,\n 'form': self.get_unit_edit_form(),\n 'comment_form': self.get_unit_comment_form(),\n 'priority': priority,\n 'store': self.store,\n 'directory': self.directory,\n 'user': self.request.user,\n 'project': self.project,\n 'language': self.language,\n 'source_language': self.source_language,\n 'cantranslate': check_user_permission(self.request.user,\n \"translate\",\n self.directory),\n 'cantranslatexlang': check_user_permission(self.request.user,\n \"administrate\",\n self.project.directory),\n 'cansuggest': check_user_permission(self.request.user,\n \"suggest\",\n self.directory),\n 'canreview': check_user_permission(self.request.user,\n \"review\",\n self.directory),\n 'has_admin_access': check_user_permission(self.request.user,\n 'administrate',\n self.directory),\n 'altsrcs': {x.id: x.data for x in self.get_alt_srcs()},\n 'unit_values': self.get_unit_values(),\n 'target_nplurals': self.get_target_nplurals(),\n 'has_plurals': self.object.hasplural(),\n 'filetype': self.object.store.filetype.name,\n 'suggestions': suggestions,\n 'suggestions_dict': {x.id: dict(id=x.id, target=x.target.strings)\n for x in 
suggestions},\n }\n\n def get_response_data(self, context):\n return {\n 'editor': self.render_edit_template(context),\n 'tm_suggestions': self.object.get_tm_suggestions(),\n 'is_obsolete': self.object.isobsolete(),\n 'sources': self.get_sources()}\n\n\n@get_unit_context('view')\ndef permalink_redirect(request, unit):\n return redirect(request.build_absolute_uri(unit.get_translate_url()))\n\n\n@ajax_required\n@get_unit_context('translate')\ndef submit(request, unit, **kwargs_):\n \"\"\"Processes translation submissions and stores them in the database.\n\n :return: An object in JSON notation that contains the previous and last\n units for the unit next to unit ``uid``.\n \"\"\"\n json = {}\n\n translation_project = request.translation_project\n language = translation_project.language\n\n if unit.hasplural():\n snplurals = len(unit.source.strings)\n else:\n snplurals = None\n\n # Store current time so that it is the same for all submissions\n current_time = timezone.now()\n\n form_class = unit_form_factory(language, snplurals, request)\n form = form_class(request.POST, instance=unit, request=request)\n\n if form.is_valid():\n suggestion = form.cleaned_data['suggestion']\n if suggestion:\n review.get(Suggestion)([suggestion], request.user).accept()\n if form.cleaned_data['comment']:\n kwargs = dict(\n comment=form.cleaned_data['comment'],\n user=request.user,\n )\n comment_form = UnsecuredCommentForm(suggestion, kwargs)\n if comment_form.is_valid():\n comment_form.save()\n\n if form.updated_fields:\n for field, old_value, new_value in form.updated_fields:\n if field == SubmissionFields.TARGET and suggestion:\n old_value = str(suggestion.target_f)\n sub = Submission(\n creation_time=current_time,\n translation_project=translation_project,\n submitter=request.user,\n unit=unit,\n store=unit.store,\n field=field,\n type=SubmissionTypes.NORMAL,\n old_value=old_value,\n new_value=new_value,\n similarity=form.cleaned_data['similarity'],\n mt_similarity=form.cleaned_data['mt_similarity'],\n )\n sub.save()\n\n # Update current unit instance's attributes\n # important to set these attributes after saving Submission\n # because we need to access the unit's state before it was saved\n if SubmissionFields.TARGET in (f[0] for f in form.updated_fields):\n form.instance.submitted_by = request.user\n form.instance.submitted_on = current_time\n form.instance.reviewed_by = None\n form.instance.reviewed_on = None\n\n form.instance._log_user = request.user\n\n form.save()\n\n json['checks'] = _get_critical_checks_snippet(request, unit)\n\n json['user_score'] = request.user.public_score\n json['newtargets'] = [target for target in form.instance.target.strings]\n\n return JsonResponse(json)\n\n return JsonResponseBadRequest({'msg': _(\"Failed to process submission.\")})\n\n\n@ajax_required\n@get_unit_context('suggest')\ndef suggest(request, unit, **kwargs_):\n \"\"\"Processes translation suggestions and stores them in the database.\n\n :return: An object in JSON notation that contains the previous and last\n units for the unit next to unit ``uid``.\n \"\"\"\n json = {}\n\n translation_project = request.translation_project\n language = translation_project.language\n\n if unit.hasplural():\n snplurals = len(unit.source.strings)\n else:\n snplurals = None\n\n form_class = unit_form_factory(language, snplurals, request)\n form = form_class(request.POST, instance=unit, request=request)\n\n if form.is_valid():\n if form.cleaned_data.get(\"target_updated\"):\n # TODO: Review if this hackish method is still necessary\n # 
HACKISH: django 1.2 stupidly modifies instance on model form\n # validation, reload unit from db\n unit = Unit.objects.get(id=unit.id)\n review.get(Suggestion)().add(\n unit,\n form.cleaned_data['target_f'],\n user=request.user,\n similarity=form.cleaned_data['similarity'],\n mt_similarity=form.cleaned_data['mt_similarity'])\n\n json['user_score'] = request.user.public_score\n\n return JsonResponse(json)\n\n return JsonResponseBadRequest({'msg': _(\"Failed to process suggestion.\")})\n\n\n@ajax_required\n@require_http_methods(['POST', 'DELETE'])\ndef manage_suggestion(request, uid, sugg_id, **kwargs_):\n \"\"\"Dispatches the suggestion action according to the HTTP verb.\"\"\"\n if request.method == 'DELETE':\n return reject_suggestion(request, uid, sugg_id)\n elif request.method == 'POST':\n return accept_suggestion(request, uid, sugg_id)\n\n\n@get_unit_context()\ndef reject_suggestion(request, unit, suggid, **kwargs_):\n try:\n suggestion = unit.suggestion_set.get(id=suggid)\n except ObjectDoesNotExist:\n raise Http404\n\n # In order to be able to reject a suggestion, users have to either:\n # 1. Have `review` rights, or\n # 2. Be the author of the suggestion being rejected\n has_permission = (\n check_permission('review', request)\n or (not request.user.is_anonymous\n and request.user == suggestion.user))\n if not has_permission:\n raise PermissionDenied(\n _('Insufficient rights to access review mode.'))\n review.get(Suggestion)(\n [suggestion],\n request.user).reject(QueryDict(request.body).get(\"comment\"))\n json = {\n 'udbid': unit.id,\n 'sugid': suggid,\n 'user_score': request.user.public_score,\n }\n return JsonResponse(json)\n\n\n@get_unit_context('review')\ndef accept_suggestion(request, unit, suggid, **kwargs_):\n try:\n suggestion = unit.suggestion_set.get(id=suggid)\n except ObjectDoesNotExist:\n raise Http404\n review.get(Suggestion)(\n [suggestion], request.user).accept(request.POST.get(\"comment\"))\n json = {\n 'udbid': unit.id,\n 'sugid': suggid,\n 'user_score': request.user.public_score,\n 'newtargets': [target for target in unit.target.strings],\n 'checks': _get_critical_checks_snippet(request, unit),\n }\n return JsonResponse(json)\n\n\n@ajax_required\n@get_unit_context('review')\ndef toggle_qualitycheck(request, unit, check_id, **kwargs_):\n try:\n unit.toggle_qualitycheck(check_id, 'mute' in request.POST, request.user)\n except ObjectDoesNotExist:\n raise Http404\n\n return JsonResponse({})\n", "path": "pootle/apps/pootle_store/views.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. 
See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport calendar\n\nfrom translate.lang import data\n\nfrom django import forms\nfrom django.conf import settings\nfrom django.core.exceptions import ObjectDoesNotExist, PermissionDenied\nfrom django.http import Http404, QueryDict\nfrom django.shortcuts import redirect\nfrom django.template import loader\nfrom django.utils import timezone\nfrom django.utils.functional import cached_property\nfrom django.utils.lru_cache import lru_cache\nfrom django.utils.translation import to_locale\nfrom django.utils.translation.trans_real import parse_accept_lang_header\nfrom django.views.decorators.http import require_http_methods\n\nfrom pootle.core.delegate import review, search_backend\nfrom pootle.core.exceptions import Http400\nfrom pootle.core.http import JsonResponse, JsonResponseBadRequest\nfrom pootle.core.utils import dateformat\nfrom pootle.core.views import PootleJSON\nfrom pootle.i18n.gettext import ugettext as _\nfrom pootle.local.dates import timesince\nfrom pootle_app.models.directory import Directory\nfrom pootle_app.models.permissions import (check_permission,\n check_user_permission)\nfrom pootle_comment.forms import UnsecuredCommentForm\nfrom pootle_language.models import Language\nfrom pootle_misc.util import ajax_required\nfrom pootle_statistics.models import (Submission, SubmissionFields,\n SubmissionTypes)\n\nfrom .decorators import get_unit_context\nfrom .forms import UnitSearchForm, unit_comment_form_factory, unit_form_factory\nfrom .models import Suggestion, Unit\nfrom .templatetags.store_tags import pluralize_source, pluralize_target\nfrom .unit.results import GroupedResults\nfrom .unit.timeline import Timeline\nfrom .util import find_altsrcs\n\n\ndef get_alt_src_langs(request, user, translation_project):\n language = translation_project.language\n project = translation_project.project\n source_language = project.source_language\n\n langs = user.alt_src_langs.exclude(\n id__in=(language.id, source_language.id)\n ).filter(translationproject__project=project)\n\n if not user.alt_src_langs.count():\n accept = request.META.get('HTTP_ACCEPT_LANGUAGE', '')\n\n for accept_lang, __ in parse_accept_lang_header(accept):\n if accept_lang == '*':\n continue\n\n simplified = data.simplify_to_common(accept_lang)\n normalized = to_locale(data.normalize_code(simplified))\n code = to_locale(accept_lang)\n if (normalized in\n ('en', 'en_US', source_language.code, language.code) or\n code in ('en', 'en_US', source_language.code, language.code)):\n continue\n\n langs = Language.objects.filter(\n code__in=(normalized, code),\n translationproject__project=project,\n )\n if langs.count():\n break\n\n return langs\n\n\n#\n# Views used with XMLHttpRequest requests.\n#\n\ndef _filter_ctx_units(units_qs, unit, how_many, gap=0):\n \"\"\"Returns ``how_many``*2 units that are before and after ``index``.\"\"\"\n result = {'before': [], 'after': []}\n\n if how_many and unit.index - gap > 0:\n before = units_qs.filter(store=unit.store_id, index__lt=unit.index) \\\n .order_by('-index')[gap:how_many+gap]\n result['before'] = _build_units_list(before, reverse=True)\n result['before'].reverse()\n\n # FIXME: can we avoid this query if length is known?\n if how_many:\n after = units_qs.filter(store=unit.store_id,\n index__gt=unit.index)[gap:how_many+gap]\n result['after'] = _build_units_list(after)\n\n return result\n\n\ndef _prepare_unit(unit):\n \"\"\"Constructs a dictionary with relevant `unit` 
data.\"\"\"\n return {\n 'id': unit.id,\n 'url': unit.get_translate_url(),\n 'isfuzzy': unit.isfuzzy(),\n 'source': [source[1] for source in pluralize_source(unit)],\n 'target': [target[1] for target in pluralize_target(unit)],\n }\n\n\ndef _build_units_list(units, reverse=False):\n \"\"\"Given a list/queryset of units, builds a list with the unit data\n contained in a dictionary ready to be returned as JSON.\n\n :return: A list with unit id, source, and target texts. In case of\n having plural forms, a title for the plural form is also provided.\n \"\"\"\n return_units = []\n\n for unit in iter(units):\n return_units.append(_prepare_unit(unit))\n\n return return_units\n\n\ndef _get_critical_checks_snippet(request, unit):\n \"\"\"Retrieves the critical checks snippet.\n\n :param request: an `HttpRequest` object\n :param unit: a `Unit` instance for which critical checks need to be\n rendered.\n :return: rendered HTML snippet with the failing checks, or `None` if\n there are no critical failing checks.\n \"\"\"\n if not unit.has_critical_checks():\n return None\n\n can_review = check_user_permission(request.user, 'review',\n unit.store.parent)\n ctx = {\n 'canreview': can_review,\n 'unit': unit,\n }\n template = loader.get_template('editor/units/xhr_checks.html')\n return template.render(context=ctx, request=request)\n\n\n@ajax_required\ndef get_units(request, **kwargs_):\n \"\"\"Gets source and target texts and its metadata.\n\n :return: A JSON-encoded string containing the source and target texts\n grouped by the store they belong to.\n\n The optional `count` GET parameter defines the chunk size to\n consider. The user's preference will be used by default.\n\n When the `initial` GET parameter is present, a sorted list of\n the result set ids will be returned too.\n \"\"\"\n search_form = UnitSearchForm(request.GET, user=request.user)\n\n if not search_form.is_valid():\n errors = search_form.errors.as_data()\n if \"path\" in errors:\n for error in errors[\"path\"]:\n if error.code == \"max_length\":\n raise Http400(_('Path too long.'))\n elif error.code == \"required\":\n raise Http400(_('Arguments missing.'))\n raise Http404(forms.ValidationError(search_form.errors).messages)\n\n total, start, end, units_qs = search_backend.get(Unit)(\n request.user, **search_form.cleaned_data).search()\n return JsonResponse(\n {'start': start,\n 'end': end,\n 'total': total,\n 'unitGroups': GroupedResults(units_qs).data})\n\n\n@ajax_required\n@get_unit_context('view')\ndef get_more_context(request, unit, **kwargs_):\n \"\"\"Retrieves more context units.\n\n :return: An object in JSON notation that contains the source and target\n texts for units that are in the context of unit ``uid``.\n \"\"\"\n store = request.store\n json = {}\n gap = int(request.GET.get('gap', 0))\n qty = int(request.GET.get('qty', 1))\n\n json[\"ctx\"] = _filter_ctx_units(store.units, unit, qty, gap)\n return JsonResponse(json)\n\n\n@ajax_required\n@require_http_methods(['POST', 'DELETE'])\n@get_unit_context('translate')\ndef comment(request, unit, **kwargs_):\n \"\"\"Dispatches the comment action according to the HTTP verb.\"\"\"\n if request.method == 'DELETE':\n return delete_comment(request, unit)\n elif request.method == 'POST':\n return save_comment(request, unit)\n\n\ndef delete_comment(request, unit, **kwargs_):\n \"\"\"Deletes a comment by blanking its contents and records a new\n submission.\n \"\"\"\n unit.commented_by = None\n unit.commented_on = None\n\n language = request.translation_project.language\n 
comment_form_class = unit_comment_form_factory(language)\n form = comment_form_class({}, instance=unit, request=request)\n\n if form.is_valid():\n form.save()\n return JsonResponse({})\n\n return JsonResponseBadRequest({'msg': _(\"Failed to remove comment.\")})\n\n\ndef save_comment(request, unit):\n \"\"\"Stores a new comment for the given ``unit``.\n\n :return: If the form validates, the cleaned comment is returned.\n An error message is returned otherwise.\n \"\"\"\n # Update current unit instance's attributes\n unit.commented_by = request.user\n unit.commented_on = timezone.now().replace(microsecond=0)\n\n language = request.translation_project.language\n form = unit_comment_form_factory(language)(request.POST, instance=unit,\n request=request)\n\n if form.is_valid():\n form.save()\n\n user = request.user\n directory = unit.store.parent\n\n ctx = {\n 'unit': unit,\n 'language': language,\n 'cantranslate': check_user_permission(user, 'translate',\n directory),\n 'cansuggest': check_user_permission(user, 'suggest', directory),\n }\n t = loader.get_template('editor/units/xhr_comment.html')\n\n return JsonResponse({'comment': t.render(context=ctx,\n request=request)})\n\n return JsonResponseBadRequest({'msg': _(\"Comment submission failed.\")})\n\n\nclass PootleUnitJSON(PootleJSON):\n model = Unit\n pk_url_kwarg = \"uid\"\n\n @cached_property\n def permission_context(self):\n self.object = self.get_object()\n return Directory.objects.select_related(\"tp\", \"tp__project\").get(\n pk=self.store.parent_id)\n\n @property\n def pootle_path(self):\n return self.store.pootle_path\n\n @cached_property\n def tp(self):\n return self.store.translation_project\n\n @cached_property\n def store(self):\n return self.object.store\n\n @cached_property\n def source_language(self):\n return self.project.source_language\n\n @cached_property\n def directory(self):\n return self.store.parent\n\n @lru_cache()\n def get_object(self):\n return super(PootleUnitJSON, self).get_object()\n\n\nclass UnitTimelineJSON(PootleUnitJSON):\n\n model = Unit\n pk_url_kwarg = \"uid\"\n\n template_name = 'editor/units/xhr_timeline.html'\n\n @property\n def language(self):\n return self.object.store.translation_project.language\n\n @cached_property\n def permission_context(self):\n self.object = self.get_object()\n return self.project.directory\n\n @property\n def project(self):\n return self.object.store.translation_project.project\n\n @property\n def timeline(self):\n return Timeline(self.object)\n\n def get_context_data(self, *args, **kwargs):\n return dict(\n entries_group=self.timeline.grouped_entries,\n language=self.language)\n\n def get_queryset(self):\n return Unit.objects.get_translatable(self.request.user).select_related(\n \"store__translation_project__language\",\n \"store__translation_project__project__directory\")\n\n def get_response_data(self, context):\n return {\n 'uid': self.object.id,\n 'entries_group': self.get_entries_group_data(context),\n 'timeline': self.render_timeline(context)}\n\n def render_timeline(self, context):\n return loader.get_template(self.template_name).render(context=context)\n\n def get_entries_group_data(self, context):\n result = []\n for entry_group in context['entries_group']:\n display_dt = entry_group['datetime']\n if display_dt is not None:\n display_dt = dateformat.format(display_dt)\n iso_dt = entry_group['datetime'].isoformat()\n relative_time = timesince(\n calendar.timegm(entry_group['datetime'].timetuple()))\n else:\n iso_dt = None\n relative_time = None\n result.append({\n 
\"display_datetime\": display_dt,\n \"iso_datetime\": iso_dt,\n \"relative_time\": relative_time,\n \"via_upload\": entry_group.get('via_upload', False),\n })\n return result\n\n\nclass UnitEditJSON(PootleUnitJSON):\n\n def get_edit_template(self):\n if self.project.is_terminology or self.store.has_terminology:\n return loader.get_template('editor/units/term_edit.html')\n return loader.get_template('editor/units/edit.html')\n\n def render_edit_template(self, context):\n return self.get_edit_template().render(context=context,\n request=self.request)\n\n def get_source_nplurals(self):\n if self.object.hasplural():\n return len(self.object.source.strings)\n return None\n\n def get_target_nplurals(self):\n source_nplurals = self.get_source_nplurals()\n return self.language.nplurals if source_nplurals is not None else 1\n\n def get_unit_values(self):\n target_nplurals = self.get_target_nplurals()\n unit_values = [value for value in self.object.target_f.strings]\n if len(unit_values) < target_nplurals:\n return unit_values + ((target_nplurals - len(unit_values)) * [''])\n return unit_values\n\n def get_unit_edit_form(self):\n form_class = unit_form_factory(self.language,\n self.get_source_nplurals(),\n self.request)\n return form_class(instance=self.object, request=self.request)\n\n def get_unit_comment_form(self):\n comment_form_class = unit_comment_form_factory(self.language)\n return comment_form_class({}, instance=self.object, request=self.request)\n\n @lru_cache()\n def get_alt_srcs(self):\n return find_altsrcs(\n self.object,\n get_alt_src_langs(self.request, self.request.user, self.tp),\n store=self.store,\n project=self.project)\n\n def get_queryset(self):\n return Unit.objects.get_translatable(self.request.user).select_related(\n \"store\",\n \"store__filetype\",\n \"store__parent\",\n \"store__translation_project\",\n \"store__translation_project__project\",\n \"store__translation_project__project__source_language\",\n \"store__translation_project__language\")\n\n def get_sources(self):\n sources = {\n unit.language_code: unit.target.strings\n for unit in self.get_alt_srcs()}\n sources[self.source_language.code] = self.object.source_f.strings\n return sources\n\n def get_context_data(self, *args, **kwargs):\n priority = (\n self.store.priority\n if 'virtualfolder' in settings.INSTALLED_APPS\n else None)\n suggestions = self.object.get_suggestions()\n return {\n 'unit': self.object,\n 'form': self.get_unit_edit_form(),\n 'comment_form': self.get_unit_comment_form(),\n 'priority': priority,\n 'store': self.store,\n 'directory': self.directory,\n 'user': self.request.user,\n 'project': self.project,\n 'language': self.language,\n 'source_language': self.source_language,\n 'cantranslate': check_user_permission(self.request.user,\n \"translate\",\n self.directory),\n 'cantranslatexlang': check_user_permission(self.request.user,\n \"administrate\",\n self.project.directory),\n 'cansuggest': check_user_permission(self.request.user,\n \"suggest\",\n self.directory),\n 'canreview': check_user_permission(self.request.user,\n \"review\",\n self.directory),\n 'has_admin_access': check_user_permission(self.request.user,\n 'administrate',\n self.directory),\n 'altsrcs': {x.id: x.data for x in self.get_alt_srcs()},\n 'unit_values': self.get_unit_values(),\n 'target_nplurals': self.get_target_nplurals(),\n 'has_plurals': self.object.hasplural(),\n 'filetype': self.object.store.filetype.name,\n 'suggestions': suggestions,\n 'suggestions_dict': {x.id: dict(id=x.id, target=x.target.strings)\n for x in 
suggestions},\n }\n\n def get_response_data(self, context):\n return {\n 'editor': self.render_edit_template(context),\n 'tm_suggestions': self.object.get_tm_suggestions(),\n 'is_obsolete': self.object.isobsolete(),\n 'sources': self.get_sources()}\n\n\n@get_unit_context('view')\ndef permalink_redirect(request, unit):\n return redirect(request.build_absolute_uri(unit.get_translate_url()))\n\n\n@ajax_required\n@get_unit_context('translate')\ndef submit(request, unit, **kwargs_):\n \"\"\"Processes translation submissions and stores them in the database.\n\n :return: An object in JSON notation that contains the previous and last\n units for the unit next to unit ``uid``.\n \"\"\"\n json = {}\n\n translation_project = request.translation_project\n language = translation_project.language\n\n if unit.hasplural():\n snplurals = len(unit.source.strings)\n else:\n snplurals = None\n\n # Store current time so that it is the same for all submissions\n current_time = timezone.now()\n\n form_class = unit_form_factory(language, snplurals, request)\n form = form_class(request.POST, instance=unit, request=request)\n\n if form.is_valid():\n suggestion = form.cleaned_data['suggestion']\n if suggestion:\n review.get(Suggestion)([suggestion], request.user).accept()\n if form.cleaned_data['comment']:\n kwargs = dict(\n comment=form.cleaned_data['comment'],\n user=request.user,\n )\n comment_form = UnsecuredCommentForm(suggestion, kwargs)\n if comment_form.is_valid():\n comment_form.save()\n\n if form.updated_fields:\n for field, old_value, new_value in form.updated_fields:\n if field == SubmissionFields.TARGET and suggestion:\n old_value = str(suggestion.target_f)\n sub = Submission(\n creation_time=current_time,\n translation_project=translation_project,\n submitter=request.user,\n unit=unit,\n store=unit.store,\n field=field,\n type=SubmissionTypes.NORMAL,\n old_value=old_value,\n new_value=new_value,\n similarity=form.cleaned_data['similarity'],\n mt_similarity=form.cleaned_data['mt_similarity'],\n )\n sub.save()\n\n # Update current unit instance's attributes\n # important to set these attributes after saving Submission\n # because we need to access the unit's state before it was saved\n if SubmissionFields.TARGET in (f[0] for f in form.updated_fields):\n form.instance.submitted_by = request.user\n form.instance.submitted_on = current_time\n form.instance.reviewed_by = None\n form.instance.reviewed_on = None\n\n form.instance._log_user = request.user\n\n form.save()\n\n json['checks'] = _get_critical_checks_snippet(request, unit)\n\n json['user_score'] = request.user.public_score\n json['newtargets'] = [target for target in form.instance.target.strings]\n\n return JsonResponse(json)\n\n return JsonResponseBadRequest({'msg': _(\"Failed to process submission.\")})\n\n\n@ajax_required\n@get_unit_context('suggest')\ndef suggest(request, unit, **kwargs_):\n \"\"\"Processes translation suggestions and stores them in the database.\n\n :return: An object in JSON notation that contains the previous and last\n units for the unit next to unit ``uid``.\n \"\"\"\n json = {}\n\n translation_project = request.translation_project\n language = translation_project.language\n\n if unit.hasplural():\n snplurals = len(unit.source.strings)\n else:\n snplurals = None\n\n form_class = unit_form_factory(language, snplurals, request)\n form = form_class(request.POST, instance=unit, request=request)\n\n if form.is_valid():\n if form.cleaned_data.get(\"target_updated\"):\n # TODO: Review if this hackish method is still necessary\n # 
HACKISH: django 1.2 stupidly modifies instance on model form\n # validation, reload unit from db\n unit = Unit.objects.get(id=unit.id)\n review.get(Suggestion)().add(\n unit,\n form.cleaned_data['target_f'],\n user=request.user,\n similarity=form.cleaned_data['similarity'],\n mt_similarity=form.cleaned_data['mt_similarity'])\n\n if not request.user.is_anonymous:\n json['user_score'] = request.user.public_score\n\n return JsonResponse(json)\n\n return JsonResponseBadRequest({'msg': _(\"Failed to process suggestion.\")})\n\n\n@ajax_required\n@require_http_methods(['POST', 'DELETE'])\ndef manage_suggestion(request, uid, sugg_id, **kwargs_):\n \"\"\"Dispatches the suggestion action according to the HTTP verb.\"\"\"\n if request.method == 'DELETE':\n return reject_suggestion(request, uid, sugg_id)\n elif request.method == 'POST':\n return accept_suggestion(request, uid, sugg_id)\n\n\n@get_unit_context()\ndef reject_suggestion(request, unit, suggid, **kwargs_):\n try:\n suggestion = unit.suggestion_set.get(id=suggid)\n except ObjectDoesNotExist:\n raise Http404\n\n # In order to be able to reject a suggestion, users have to either:\n # 1. Have `review` rights, or\n # 2. Be the author of the suggestion being rejected\n has_permission = (\n check_permission('review', request)\n or (not request.user.is_anonymous\n and request.user == suggestion.user))\n if not has_permission:\n raise PermissionDenied(\n _('Insufficient rights to access review mode.'))\n review.get(Suggestion)(\n [suggestion],\n request.user).reject(QueryDict(request.body).get(\"comment\"))\n json = {\n 'udbid': unit.id,\n 'sugid': suggid,\n 'user_score': request.user.public_score,\n }\n return JsonResponse(json)\n\n\n@get_unit_context('review')\ndef accept_suggestion(request, unit, suggid, **kwargs_):\n try:\n suggestion = unit.suggestion_set.get(id=suggid)\n except ObjectDoesNotExist:\n raise Http404\n review.get(Suggestion)(\n [suggestion], request.user).accept(request.POST.get(\"comment\"))\n json = {\n 'udbid': unit.id,\n 'sugid': suggid,\n 'user_score': request.user.public_score,\n 'newtargets': [target for target in unit.target.strings],\n 'checks': _get_critical_checks_snippet(request, unit),\n }\n return JsonResponse(json)\n\n\n@ajax_required\n@get_unit_context('review')\ndef toggle_qualitycheck(request, unit, check_id, **kwargs_):\n try:\n unit.toggle_qualitycheck(check_id, 'mute' in request.POST, request.user)\n except ObjectDoesNotExist:\n raise Http404\n\n return JsonResponse({})\n", "path": "pootle/apps/pootle_store/views.py"}]} |
gh_patches_debug_1164 | rasdani/github-patches | git_diff | microsoft__ptvsd-1376 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Warning in ptvsd on python 3.8
## Environment data
- PTVSD version: 4.2.8
- OS and version: win10
- Python version (& distribution if applicable, e.g. Anaconda): cpython 3.8.0a3 32-bit
- Using VS Code or Visual Studio: VS
## Actual behavior
The following warning is printed:
```
c:\users\huvalo\appdata\local\microsoft\visualstudio\16.0_13ce7c9aexp\extensions\microsoft corporation\python\16.0.0\Packages\ptvsd\_vendored\pydevd\_pydevd_bundle\pydevd_resolver.py:158: SyntaxWarning: "is not" with a literal. Did you mean "!="?
if found.get(name) is not 1:
```
## Expected behavior
No warning is printed.
## Steps to reproduce:
1. Start debugging a Python console app
--- END ISSUE ---
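
Background on the warning (an illustrative sketch, not taken from the issue or the repository): since Python 3.8, CPython emits a `SyntaxWarning` when `is`/`is not` is compared against a literal, because that tests object identity rather than value and only happens to work for small integers through CPython's interning cache. A minimal snippet reproducing the compile-time warning, with hypothetical code strings standing in for the flagged pattern:

```python
import warnings

# Hypothetical code strings mirroring the pattern flagged in pydevd_resolver.py.
buggy = "found = {}\nprint(found.get('name') is not 1)\n"
fixed = "found = {}\nprint(found.get('name') != 1)\n"

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    compile(buggy, "<buggy>", "exec")   # emits SyntaxWarning on Python 3.8+
    compile(fixed, "<fixed>", "exec")   # compiles silently

print([str(w.message) for w in caught])
# On Python 3.8+: ['"is not" with a literal. Did you mean "!="?']
```
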
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/ptvsd/_vendored/pydevd/_pydevd_bundle/pydevd_resolver.py`
Content:
```
1 try:
2 import StringIO
3 except:
4 import io as StringIO
5 import traceback
6 from os.path import basename
7
8 from functools import partial
9 from _pydevd_bundle.pydevd_constants import dict_iter_items, dict_keys, xrange
10 from _pydevd_bundle.pydevd_safe_repr import SafeRepr
11
12 # Note: 300 is already a lot to see in the outline (after that the user should really use the shell to get things)
13 # and this also means we'll pass less information to the client side (which makes debugging faster).
14 MAX_ITEMS_TO_HANDLE = 300
15
16 TOO_LARGE_MSG = 'Too large to show contents. Max items to show: ' + str(MAX_ITEMS_TO_HANDLE)
17 TOO_LARGE_ATTR = 'Unable to handle:'
18
19
20 #=======================================================================================================================
21 # UnableToResolveVariableException
22 #=======================================================================================================================
23 class UnableToResolveVariableException(Exception):
24 pass
25
26
27 #=======================================================================================================================
28 # InspectStub
29 #=======================================================================================================================
30 class InspectStub:
31
32 def isbuiltin(self, _args):
33 return False
34
35 def isroutine(self, object):
36 return False
37
38
39 try:
40 import inspect
41 except:
42 inspect = InspectStub()
43
44 try:
45 from collections import OrderedDict
46 except:
47 OrderedDict = dict
48
49 try:
50 import java.lang # @UnresolvedImport
51 except:
52 pass
53
54 # types does not include a MethodWrapperType
55 try:
56 MethodWrapperType = type([].__str__)
57 except:
58 MethodWrapperType = None
59
60 #=======================================================================================================================
61 # See: pydevd_extension_api module for resolver interface
62 #=======================================================================================================================
63
64
65 def sorted_attributes_key(attr_name):
66 if attr_name.startswith('__'):
67 if attr_name.endswith('__'):
68 # __ double under before and after __
69 return (3, attr_name)
70 else:
71 # __ double under before
72 return (2, attr_name)
73 elif attr_name.startswith('_'):
74 # _ single under
75 return (1, attr_name)
76 else:
77 # Regular (Before anything)
78 return (0, attr_name)
79
80
81 #=======================================================================================================================
82 # DefaultResolver
83 #=======================================================================================================================
84 class DefaultResolver:
85 '''
86 DefaultResolver is the class that'll actually resolve how to show some variable.
87 '''
88
89 def resolve(self, var, attribute):
90 return getattr(var, attribute)
91
92 def get_contents_debug_adapter_protocol(self, obj, fmt=None):
93 if MethodWrapperType:
94 dct, used___dict__ = self._get_py_dictionary(obj)
95 else:
96 dct = self._get_jy_dictionary(obj)[0]
97
98 lst = sorted(dict_iter_items(dct), key=lambda tup: sorted_attributes_key(tup[0]))
99 if used___dict__:
100 return [(attr_name, attr_value, '.__dict__[%s]' % attr_name) for (attr_name, attr_value) in lst]
101 else:
102 return [(attr_name, attr_value, '.%s' % attr_name) for (attr_name, attr_value) in lst]
103
104 def get_dictionary(self, var, names=None, used___dict__=False):
105 if MethodWrapperType:
106 return self._get_py_dictionary(var, names, used___dict__=used___dict__)[0]
107 else:
108 return self._get_jy_dictionary(var)[0]
109
110 def _get_jy_dictionary(self, obj):
111 ret = {}
112 found = java.util.HashMap()
113
114 original = obj
115 if hasattr(obj, '__class__') and obj.__class__ == java.lang.Class:
116
117 # get info about superclasses
118 classes = []
119 classes.append(obj)
120 c = obj.getSuperclass()
121 while c != None:
122 classes.append(c)
123 c = c.getSuperclass()
124
125 # get info about interfaces
126 interfs = []
127 for obj in classes:
128 interfs.extend(obj.getInterfaces())
129 classes.extend(interfs)
130
131 # now is the time when we actually get info on the declared methods and fields
132 for obj in classes:
133
134 declaredMethods = obj.getDeclaredMethods()
135 declaredFields = obj.getDeclaredFields()
136 for i in xrange(len(declaredMethods)):
137 name = declaredMethods[i].getName()
138 ret[name] = declaredMethods[i].toString()
139 found.put(name, 1)
140
141 for i in xrange(len(declaredFields)):
142 name = declaredFields[i].getName()
143 found.put(name, 1)
144 # if declaredFields[i].isAccessible():
145 declaredFields[i].setAccessible(True)
146 # ret[name] = declaredFields[i].get( declaredFields[i] )
147 try:
148 ret[name] = declaredFields[i].get(original)
149 except:
150 ret[name] = declaredFields[i].toString()
151
152 # this simple dir does not always get all the info, that's why we have the part before
153 # (e.g.: if we do a dir on String, some methods that are from other interfaces such as
154 # charAt don't appear)
155 try:
156 d = dir(original)
157 for name in d:
158 if found.get(name) is not 1:
159 ret[name] = getattr(original, name)
160 except:
161 # sometimes we're unable to do a dir
162 pass
163
164 return ret
165
166 def get_names(self, var):
167 used___dict__ = False
168 try:
169 names = dir(var)
170 except TypeError:
171 names = []
172 if not names:
173 if hasattr(var, '__dict__'):
174 names = dict_keys(var.__dict__)
175 used___dict__ = True
176 return names, used___dict__
177
178 def _get_py_dictionary(self, var, names=None, used___dict__=False):
179 '''
180 :return tuple(names, used___dict__), where used___dict__ means we have to access
181 using obj.__dict__[name] instead of getattr(obj, name)
182 '''
183
184 # TODO: Those should be options (would fix https://github.com/Microsoft/ptvsd/issues/66).
185 filter_private = False
186 filter_special = True
187 filter_function = True
188 filter_builtin = True
189
190 if not names:
191 names, used___dict__ = self.get_names(var)
192 d = {}
193
194 # Be aware that the order in which the filters are applied attempts to
195 # optimize the operation by removing as many items as possible in the
196 # first filters, leaving fewer items for later filters
197
198 if filter_builtin or filter_function:
199 for name in names:
200 try:
201 name_as_str = name
202 if name_as_str.__class__ != str:
203 name_as_str = '%r' % (name_as_str,)
204
205 if filter_special:
206 if name_as_str.startswith('__') and name_as_str.endswith('__'):
207 continue
208
209 if filter_private:
210 if name_as_str.startswith('_') or name_as_str.endswith('__'):
211 continue
212 if not used___dict__:
213 attr = getattr(var, name)
214 else:
215 attr = var.__dict__[name]
216
217 # filter builtins?
218 if filter_builtin:
219 if inspect.isbuiltin(attr):
220 continue
221
222 # filter functions?
223 if filter_function:
224 if inspect.isroutine(attr) or isinstance(attr, MethodWrapperType):
225 continue
226 except:
227 # if some error occurs getting it, let's put it to the user.
228 strIO = StringIO.StringIO()
229 traceback.print_exc(file=strIO)
230 attr = strIO.getvalue()
231
232 d[name_as_str] = attr
233
234 return d, used___dict__
235
236
237 #=======================================================================================================================
238 # DictResolver
239 #=======================================================================================================================
240 class DictResolver:
241
242 def resolve(self, dict, key):
243 if key in ('__len__', TOO_LARGE_ATTR):
244 return None
245
246 if '(' not in key:
247 # we have to treat that because the dict resolver is also used to directly resolve the global and local
248 # scopes (which already have the items directly)
249 try:
250 return dict[key]
251 except:
252 return getattr(dict, key)
253
254 # ok, we have to iterate over the items to find the one that matches the id, because that's the only way
255 # to actually find the reference from the string we have before.
256 expected_id = int(key.split('(')[-1][:-1])
257 for key, val in dict_iter_items(dict):
258 if id(key) == expected_id:
259 return val
260
261 raise UnableToResolveVariableException()
262
263 def key_to_str(self, key, fmt=None):
264 if fmt is not None:
265 if fmt.get('hex', False):
266 safe_repr = SafeRepr()
267 safe_repr.convert_to_hex = True
268 return safe_repr(key)
269 return '%r' % (key,)
270
271 def init_dict(self):
272 return {}
273
274 def get_contents_debug_adapter_protocol(self, dct, fmt=None):
275 '''
276 This method is to be used in the case where the variables are all saved by its id (and as
277 such don't need to have the `resolve` method called later on, so, keys don't need to
278 embed the reference in the key).
279
280 Note that the return should be ordered.
281
282 :return list(tuple(name:str, value:object, evaluateName:str))
283 '''
284 ret = []
285
286 i = 0
287 for key, val in dict_iter_items(dct):
288 i += 1
289 key_as_str = self.key_to_str(key, fmt)
290 eval_key_str = self.key_to_str(key) # do not format the key
291 ret.append((key_as_str, val, '[%s]' % (eval_key_str,)))
292 if i > MAX_ITEMS_TO_HANDLE:
293 ret.append((TOO_LARGE_ATTR, TOO_LARGE_MSG, None))
294 break
295
296 ret.append(('__len__', len(dct), partial(_apply_evaluate_name, evaluate_name='len(%s)')))
297 # in case the class extends built-in type and has some additional fields
298 from_default_resolver = defaultResolver.get_contents_debug_adapter_protocol(dct, fmt)
299
300 if from_default_resolver:
301 ret = from_default_resolver + ret
302
303 return sorted(ret, key=lambda tup: sorted_attributes_key(tup[0]))
304
305 def get_dictionary(self, dict):
306 ret = self.init_dict()
307
308 i = 0
309 for key, val in dict_iter_items(dict):
310 i += 1
311 # we need to add the id because otherwise we cannot find the real object to get its contents later on.
312 key = '%s (%s)' % (self.key_to_str(key), id(key))
313 ret[key] = val
314 if i > MAX_ITEMS_TO_HANDLE:
315 ret[TOO_LARGE_ATTR] = TOO_LARGE_MSG
316 break
317
318 ret['__len__'] = len(dict)
319 # in case if the class extends built-in type and has some additional fields
320 additional_fields = defaultResolver.get_dictionary(dict)
321 ret.update(additional_fields)
322 return ret
323
324
325 def _apply_evaluate_name(parent_name, evaluate_name):
326 return evaluate_name % (parent_name,)
327
328
329 #=======================================================================================================================
330 # TupleResolver
331 #=======================================================================================================================
332 class TupleResolver: # to enumerate tuples and lists
333
334 def resolve(self, var, attribute):
335 '''
336 @param var: that's the original attribute
337 @param attribute: that's the key passed in the dict (as a string)
338 '''
339 if attribute in ('__len__', TOO_LARGE_ATTR):
340 return None
341 try:
342 return var[int(attribute)]
343 except:
344 return getattr(var, attribute)
345
346 def get_contents_debug_adapter_protocol(self, lst, fmt=None):
347 '''
348 This method is to be used in the case where the variables are all saved by its id (and as
349 such don't need to have the `resolve` method called later on, so, keys don't need to
350 embed the reference in the key).
351
352 Note that the return should be ordered.
353
354 :return list(tuple(name:str, value:object, evaluateName:str))
355 '''
356 l = len(lst)
357 ret = []
358
359 format_str = '%0' + str(int(len(str(l - 1)))) + 'd'
360 if fmt is not None and fmt.get('hex', False):
361 format_str = '0x%0' + str(int(len(hex(l).lstrip('0x')))) + 'x'
362
363 for i, item in enumerate(lst):
364 ret.append((format_str % i, item, '[%s]' % i))
365
366 if i > MAX_ITEMS_TO_HANDLE:
367 ret.append((TOO_LARGE_ATTR, TOO_LARGE_MSG, None))
368 break
369
370 ret.append(('__len__', len(lst), partial(_apply_evaluate_name, evaluate_name='len(%s)')))
371 # Needed in case the class extends the built-in type and has some additional fields.
372 from_default_resolver = defaultResolver.get_contents_debug_adapter_protocol(lst, fmt=fmt)
373 if from_default_resolver:
374 ret = from_default_resolver + ret
375 return ret
376
377 def get_dictionary(self, var, fmt={}):
378 l = len(var)
379 d = {}
380
381 format_str = '%0' + str(int(len(str(l - 1)))) + 'd'
382 if fmt is not None and fmt.get('hex', False):
383 format_str = '0x%0' + str(int(len(hex(l).lstrip('0x')))) + 'x'
384
385 for i, item in enumerate(var):
386 d[format_str % i] = item
387
388 if i > MAX_ITEMS_TO_HANDLE:
389 d[TOO_LARGE_ATTR] = TOO_LARGE_MSG
390 break
391
392 d['__len__'] = len(var)
393 # in case if the class extends built-in type and has some additional fields
394 additional_fields = defaultResolver.get_dictionary(var)
395 d.update(additional_fields)
396 return d
397
398
399 #=======================================================================================================================
400 # SetResolver
401 #=======================================================================================================================
402 class SetResolver:
403 '''
404 Resolves a set as dict id(object)->object
405 '''
406
407 def resolve(self, var, attribute):
408 if attribute in ('__len__', TOO_LARGE_ATTR):
409 return None
410
411 try:
412 attribute = int(attribute)
413 except:
414 return getattr(var, attribute)
415
416 for v in var:
417 if id(v) == attribute:
418 return v
419
420 raise UnableToResolveVariableException('Unable to resolve %s in %s' % (attribute, var))
421
422 def get_dictionary(self, var):
423 d = {}
424 i = 0
425 for item in var:
426 i += 1
427 d[str(id(item))] = item
428
429 if i > MAX_ITEMS_TO_HANDLE:
430 d[TOO_LARGE_ATTR] = TOO_LARGE_MSG
431 break
432
433 d['__len__'] = len(var)
434 # in case if the class extends built-in type and has some additional fields
435 additional_fields = defaultResolver.get_dictionary(var)
436 d.update(additional_fields)
437 return d
438
439
440 #=======================================================================================================================
441 # InstanceResolver
442 #=======================================================================================================================
443 class InstanceResolver:
444
445 def resolve(self, var, attribute):
446 field = var.__class__.getDeclaredField(attribute)
447 field.setAccessible(True)
448 return field.get(var)
449
450 def get_dictionary(self, obj):
451 ret = {}
452
453 declaredFields = obj.__class__.getDeclaredFields()
454 for i in xrange(len(declaredFields)):
455 name = declaredFields[i].getName()
456 try:
457 declaredFields[i].setAccessible(True)
458 ret[name] = declaredFields[i].get(obj)
459 except:
460 traceback.print_exc()
461
462 return ret
463
464
465 #=======================================================================================================================
466 # JyArrayResolver
467 #=======================================================================================================================
468 class JyArrayResolver:
469 '''
470 This resolves a regular Object[] array from java
471 '''
472
473 def resolve(self, var, attribute):
474 if attribute == '__len__':
475 return None
476 return var[int(attribute)]
477
478 def get_dictionary(self, obj):
479 ret = {}
480
481 for i in xrange(len(obj)):
482 ret[ i ] = obj[i]
483
484 ret['__len__'] = len(obj)
485 return ret
486
487
488 #=======================================================================================================================
489 # MultiValueDictResolver
490 #=======================================================================================================================
491 class MultiValueDictResolver(DictResolver):
492
493 def resolve(self, dict, key):
494 if key in ('__len__', TOO_LARGE_ATTR):
495 return None
496
497 # ok, we have to iterate over the items to find the one that matches the id, because that's the only way
498 # to actually find the reference from the string we have before.
499 expected_id = int(key.split('(')[-1][:-1])
500 for key in dict_keys(dict):
501 val = dict.getlist(key)
502 if id(key) == expected_id:
503 return val
504
505 raise UnableToResolveVariableException()
506
507
508 #=======================================================================================================================
509 # DjangoFormResolver
510 #=======================================================================================================================
511 class DjangoFormResolver(DefaultResolver):
512
513 def get_dictionary(self, var, names=None):
514 # Do not call self.errors because it is a property and has side effects.
515 names, used___dict__ = self.get_names(var)
516
517 has_errors_attr = False
518 if "errors" in names:
519 has_errors_attr = True
520 names.remove("errors")
521
522 d = defaultResolver.get_dictionary(var, names=names, used___dict__=used___dict__)
523 if has_errors_attr:
524 try:
525 errors_attr = getattr(var, "_errors")
526 except:
527 errors_attr = None
528 d["errors"] = errors_attr
529 return d
530
531
532 #=======================================================================================================================
533 # DequeResolver
534 #=======================================================================================================================
535 class DequeResolver(TupleResolver):
536
537 def get_dictionary(self, var):
538 d = TupleResolver.get_dictionary(self, var)
539 d['maxlen'] = getattr(var, 'maxlen', None)
540 return d
541
542
543 #=======================================================================================================================
544 # OrderedDictResolver
545 #=======================================================================================================================
546 class OrderedDictResolver(DictResolver):
547
548 def init_dict(self):
549 return OrderedDict()
550
551
552 #=======================================================================================================================
553 # FrameResolver
554 #=======================================================================================================================
555 class FrameResolver:
556 '''
557 This resolves a frame.
558 '''
559
560 def resolve(self, obj, attribute):
561 if attribute == '__internals__':
562 return defaultResolver.get_dictionary(obj)
563
564 if attribute == 'stack':
565 return self.get_frame_stack(obj)
566
567 if attribute == 'f_locals':
568 return obj.f_locals
569
570 return None
571
572 def get_dictionary(self, obj):
573 ret = {}
574 ret['__internals__'] = defaultResolver.get_dictionary(obj)
575 ret['stack'] = self.get_frame_stack(obj)
576 ret['f_locals'] = obj.f_locals
577 return ret
578
579 def get_frame_stack(self, frame):
580 ret = []
581 if frame is not None:
582 ret.append(self.get_frame_name(frame))
583
584 while frame.f_back:
585 frame = frame.f_back
586 ret.append(self.get_frame_name(frame))
587
588 return ret
589
590 def get_frame_name(self, frame):
591 if frame is None:
592 return 'None'
593 try:
594 name = basename(frame.f_code.co_filename)
595 return 'frame: %s [%s:%s] id:%s' % (frame.f_code.co_name, name, frame.f_lineno, id(frame))
596 except:
597 return 'frame object'
598
599
600 defaultResolver = DefaultResolver()
601 dictResolver = DictResolver()
602 tupleResolver = TupleResolver()
603 instanceResolver = InstanceResolver()
604 jyArrayResolver = JyArrayResolver()
605 setResolver = SetResolver()
606 multiValueDictResolver = MultiValueDictResolver()
607 djangoFormResolver = DjangoFormResolver()
608 dequeResolver = DequeResolver()
609 orderedDictResolver = OrderedDictResolver()
610 frameResolver = FrameResolver()
611
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/ptvsd/_vendored/pydevd/_pydevd_bundle/pydevd_resolver.py b/src/ptvsd/_vendored/pydevd/_pydevd_bundle/pydevd_resolver.py
--- a/src/ptvsd/_vendored/pydevd/_pydevd_bundle/pydevd_resolver.py
+++ b/src/ptvsd/_vendored/pydevd/_pydevd_bundle/pydevd_resolver.py
@@ -155,7 +155,7 @@
try:
d = dir(original)
for name in d:
- if found.get(name) is not 1:
+ if found.get(name) != 1:
ret[name] = getattr(original, name)
except:
# sometimes we're unable to do a dir
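
Why the fix uses `!=` instead of merely silencing the warning (a side note with a hypothetical demo, not taken from the patch): the resolver only needs value equality, and identity between equal integers is a CPython implementation detail, so the old `is not 1` test was fragile as well as noisy.

```python
a = int("1001")   # built at runtime, outside CPython's small-int cache
b = 1001
print(a == b)     # True  -> value equality, which is what the resolver needs
print(a is b)     # False in CPython -> identity against numeric values is unreliable
```
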
| {"golden_diff": "diff --git a/src/ptvsd/_vendored/pydevd/_pydevd_bundle/pydevd_resolver.py b/src/ptvsd/_vendored/pydevd/_pydevd_bundle/pydevd_resolver.py\n--- a/src/ptvsd/_vendored/pydevd/_pydevd_bundle/pydevd_resolver.py\n+++ b/src/ptvsd/_vendored/pydevd/_pydevd_bundle/pydevd_resolver.py\n@@ -155,7 +155,7 @@\n try:\n d = dir(original)\n for name in d:\n- if found.get(name) is not 1:\n+ if found.get(name) != 1:\n ret[name] = getattr(original, name)\n except:\n # sometimes we're unable to do a dir\n", "issue": "Warning in ptvsd on python 3.8\n## Environment data\r\n\r\n- PTVSD version: 4.2.8\r\n- OS and version: win10\r\n- Python version (& distribution if applicable, e.g. Anaconda): cpython 3.8.0a3 32-bit\r\n- Using VS Code or Visual Studio: VS\r\n\r\n## Actual behavior\r\n\r\nWarning prints:\r\n```\r\nc:\\users\\huvalo\\appdata\\local\\microsoft\\visualstudio\\16.0_13ce7c9aexp\\extensions\\microsoft corporation\\python\\16.0.0\\Packages\\ptvsd\\_vendored\\pydevd\\_pydevd_bundle\\pydevd_resolver.py:158: SyntaxWarning: \"is not\" with a literal. Did you mean \"!=\"?\r\n if found.get(name) is not 1:\r\n```\r\n\r\n## Expected behavior\r\n\r\nNo warning printed\r\n\r\n## Steps to reproduce:\r\n1. Start debugging a python console app\r\n\n", "before_files": [{"content": "try:\n import StringIO\nexcept:\n import io as StringIO\nimport traceback\nfrom os.path import basename\n\nfrom functools import partial\nfrom _pydevd_bundle.pydevd_constants import dict_iter_items, dict_keys, xrange\nfrom _pydevd_bundle.pydevd_safe_repr import SafeRepr\n\n# Note: 300 is already a lot to see in the outline (after that the user should really use the shell to get things)\n# and this also means we'll pass less information to the client side (which makes debugging faster).\nMAX_ITEMS_TO_HANDLE = 300\n\nTOO_LARGE_MSG = 'Too large to show contents. 
Max items to show: ' + str(MAX_ITEMS_TO_HANDLE)\nTOO_LARGE_ATTR = 'Unable to handle:'\n\n\n#=======================================================================================================================\n# UnableToResolveVariableException\n#=======================================================================================================================\nclass UnableToResolveVariableException(Exception):\n pass\n\n\n#=======================================================================================================================\n# InspectStub\n#=======================================================================================================================\nclass InspectStub:\n\n def isbuiltin(self, _args):\n return False\n\n def isroutine(self, object):\n return False\n\n\ntry:\n import inspect\nexcept:\n inspect = InspectStub()\n\ntry:\n from collections import OrderedDict\nexcept:\n OrderedDict = dict\n\ntry:\n import java.lang # @UnresolvedImport\nexcept:\n pass\n\n# types does not include a MethodWrapperType\ntry:\n MethodWrapperType = type([].__str__)\nexcept:\n MethodWrapperType = None\n\n#=======================================================================================================================\n# See: pydevd_extension_api module for resolver interface\n#=======================================================================================================================\n\n\ndef sorted_attributes_key(attr_name):\n if attr_name.startswith('__'):\n if attr_name.endswith('__'):\n # __ double under before and after __\n return (3, attr_name)\n else:\n # __ double under before\n return (2, attr_name)\n elif attr_name.startswith('_'):\n # _ single under\n return (1, attr_name)\n else:\n # Regular (Before anything)\n return (0, attr_name)\n\n\n#=======================================================================================================================\n# DefaultResolver\n#=======================================================================================================================\nclass DefaultResolver:\n '''\n DefaultResolver is the class that'll actually resolve how to show some variable.\n '''\n\n def resolve(self, var, attribute):\n return getattr(var, attribute)\n\n def get_contents_debug_adapter_protocol(self, obj, fmt=None):\n if MethodWrapperType:\n dct, used___dict__ = self._get_py_dictionary(obj)\n else:\n dct = self._get_jy_dictionary(obj)[0]\n\n lst = sorted(dict_iter_items(dct), key=lambda tup: sorted_attributes_key(tup[0]))\n if used___dict__:\n return [(attr_name, attr_value, '.__dict__[%s]' % attr_name) for (attr_name, attr_value) in lst]\n else:\n return [(attr_name, attr_value, '.%s' % attr_name) for (attr_name, attr_value) in lst]\n\n def get_dictionary(self, var, names=None, used___dict__=False):\n if MethodWrapperType:\n return self._get_py_dictionary(var, names, used___dict__=used___dict__)[0]\n else:\n return self._get_jy_dictionary(var)[0]\n\n def _get_jy_dictionary(self, obj):\n ret = {}\n found = java.util.HashMap()\n\n original = obj\n if hasattr(obj, '__class__') and obj.__class__ == java.lang.Class:\n\n # get info about superclasses\n classes = []\n classes.append(obj)\n c = obj.getSuperclass()\n while c != None:\n classes.append(c)\n c = c.getSuperclass()\n\n # get info about interfaces\n interfs = []\n for obj in classes:\n interfs.extend(obj.getInterfaces())\n classes.extend(interfs)\n\n # now is the time when we actually get info on the declared methods and fields\n for obj in classes:\n\n 
declaredMethods = obj.getDeclaredMethods()\n declaredFields = obj.getDeclaredFields()\n for i in xrange(len(declaredMethods)):\n name = declaredMethods[i].getName()\n ret[name] = declaredMethods[i].toString()\n found.put(name, 1)\n\n for i in xrange(len(declaredFields)):\n name = declaredFields[i].getName()\n found.put(name, 1)\n # if declaredFields[i].isAccessible():\n declaredFields[i].setAccessible(True)\n # ret[name] = declaredFields[i].get( declaredFields[i] )\n try:\n ret[name] = declaredFields[i].get(original)\n except:\n ret[name] = declaredFields[i].toString()\n\n # this simple dir does not always get all the info, that's why we have the part before\n # (e.g.: if we do a dir on String, some methods that are from other interfaces such as\n # charAt don't appear)\n try:\n d = dir(original)\n for name in d:\n if found.get(name) is not 1:\n ret[name] = getattr(original, name)\n except:\n # sometimes we're unable to do a dir\n pass\n\n return ret\n\n def get_names(self, var):\n used___dict__ = False\n try:\n names = dir(var)\n except TypeError:\n names = []\n if not names:\n if hasattr(var, '__dict__'):\n names = dict_keys(var.__dict__)\n used___dict__ = True\n return names, used___dict__\n\n def _get_py_dictionary(self, var, names=None, used___dict__=False):\n '''\n :return tuple(names, used___dict__), where used___dict__ means we have to access\n using obj.__dict__[name] instead of getattr(obj, name)\n '''\n\n # TODO: Those should be options (would fix https://github.com/Microsoft/ptvsd/issues/66).\n filter_private = False\n filter_special = True\n filter_function = True\n filter_builtin = True\n\n if not names:\n names, used___dict__ = self.get_names(var)\n d = {}\n\n # Be aware that the order in which the filters are applied attempts to\n # optimize the operation by removing as many items as possible in the\n # first filters, leaving fewer items for later filters\n\n if filter_builtin or filter_function:\n for name in names:\n try:\n name_as_str = name\n if name_as_str.__class__ != str:\n name_as_str = '%r' % (name_as_str,)\n\n if filter_special:\n if name_as_str.startswith('__') and name_as_str.endswith('__'):\n continue\n\n if filter_private:\n if name_as_str.startswith('_') or name_as_str.endswith('__'):\n continue\n if not used___dict__:\n attr = getattr(var, name)\n else:\n attr = var.__dict__[name]\n\n # filter builtins?\n if filter_builtin:\n if inspect.isbuiltin(attr):\n continue\n\n # filter functions?\n if filter_function:\n if inspect.isroutine(attr) or isinstance(attr, MethodWrapperType):\n continue\n except:\n # if some error occurs getting it, let's put it to the user.\n strIO = StringIO.StringIO()\n traceback.print_exc(file=strIO)\n attr = strIO.getvalue()\n\n d[name_as_str] = attr\n\n return d, used___dict__\n\n\n#=======================================================================================================================\n# DictResolver\n#=======================================================================================================================\nclass DictResolver:\n\n def resolve(self, dict, key):\n if key in ('__len__', TOO_LARGE_ATTR):\n return None\n\n if '(' not in key:\n # we have to treat that because the dict resolver is also used to directly resolve the global and local\n # scopes (which already have the items directly)\n try:\n return dict[key]\n except:\n return getattr(dict, key)\n\n # ok, we have to iterate over the items to find the one that matches the id, because that's the only way\n # to actually find the reference from the 
string we have before.\n expected_id = int(key.split('(')[-1][:-1])\n for key, val in dict_iter_items(dict):\n if id(key) == expected_id:\n return val\n\n raise UnableToResolveVariableException()\n\n def key_to_str(self, key, fmt=None):\n if fmt is not None:\n if fmt.get('hex', False):\n safe_repr = SafeRepr()\n safe_repr.convert_to_hex = True\n return safe_repr(key)\n return '%r' % (key,)\n\n def init_dict(self):\n return {}\n\n def get_contents_debug_adapter_protocol(self, dct, fmt=None):\n '''\n This method is to be used in the case where the variables are all saved by its id (and as\n such don't need to have the `resolve` method called later on, so, keys don't need to\n embed the reference in the key).\n\n Note that the return should be ordered.\n\n :return list(tuple(name:str, value:object, evaluateName:str))\n '''\n ret = []\n\n i = 0\n for key, val in dict_iter_items(dct):\n i += 1\n key_as_str = self.key_to_str(key, fmt)\n eval_key_str = self.key_to_str(key) # do not format the key\n ret.append((key_as_str, val, '[%s]' % (eval_key_str,)))\n if i > MAX_ITEMS_TO_HANDLE:\n ret.append((TOO_LARGE_ATTR, TOO_LARGE_MSG, None))\n break\n\n ret.append(('__len__', len(dct), partial(_apply_evaluate_name, evaluate_name='len(%s)')))\n # in case the class extends built-in type and has some additional fields\n from_default_resolver = defaultResolver.get_contents_debug_adapter_protocol(dct, fmt)\n\n if from_default_resolver:\n ret = from_default_resolver + ret\n\n return sorted(ret, key=lambda tup: sorted_attributes_key(tup[0]))\n\n def get_dictionary(self, dict):\n ret = self.init_dict()\n\n i = 0\n for key, val in dict_iter_items(dict):\n i += 1\n # we need to add the id because otherwise we cannot find the real object to get its contents later on.\n key = '%s (%s)' % (self.key_to_str(key), id(key))\n ret[key] = val\n if i > MAX_ITEMS_TO_HANDLE:\n ret[TOO_LARGE_ATTR] = TOO_LARGE_MSG\n break\n\n ret['__len__'] = len(dict)\n # in case if the class extends built-in type and has some additional fields\n additional_fields = defaultResolver.get_dictionary(dict)\n ret.update(additional_fields)\n return ret\n\n\ndef _apply_evaluate_name(parent_name, evaluate_name):\n return evaluate_name % (parent_name,)\n\n\n#=======================================================================================================================\n# TupleResolver\n#=======================================================================================================================\nclass TupleResolver: # to enumerate tuples and lists\n\n def resolve(self, var, attribute):\n '''\n @param var: that's the original attribute\n @param attribute: that's the key passed in the dict (as a string)\n '''\n if attribute in ('__len__', TOO_LARGE_ATTR):\n return None\n try:\n return var[int(attribute)]\n except:\n return getattr(var, attribute)\n\n def get_contents_debug_adapter_protocol(self, lst, fmt=None):\n '''\n This method is to be used in the case where the variables are all saved by its id (and as\n such don't need to have the `resolve` method called later on, so, keys don't need to\n embed the reference in the key).\n\n Note that the return should be ordered.\n\n :return list(tuple(name:str, value:object, evaluateName:str))\n '''\n l = len(lst)\n ret = []\n\n format_str = '%0' + str(int(len(str(l - 1)))) + 'd'\n if fmt is not None and fmt.get('hex', False):\n format_str = '0x%0' + str(int(len(hex(l).lstrip('0x')))) + 'x'\n\n for i, item in enumerate(lst):\n ret.append((format_str % i, item, '[%s]' % i))\n\n if i > 
MAX_ITEMS_TO_HANDLE:\n ret.append((TOO_LARGE_ATTR, TOO_LARGE_MSG, None))\n break\n\n ret.append(('__len__', len(lst), partial(_apply_evaluate_name, evaluate_name='len(%s)')))\n # Needed in case the class extends the built-in type and has some additional fields.\n from_default_resolver = defaultResolver.get_contents_debug_adapter_protocol(lst, fmt=fmt)\n if from_default_resolver:\n ret = from_default_resolver + ret\n return ret\n\n def get_dictionary(self, var, fmt={}):\n l = len(var)\n d = {}\n\n format_str = '%0' + str(int(len(str(l - 1)))) + 'd'\n if fmt is not None and fmt.get('hex', False):\n format_str = '0x%0' + str(int(len(hex(l).lstrip('0x')))) + 'x'\n\n for i, item in enumerate(var):\n d[format_str % i] = item\n\n if i > MAX_ITEMS_TO_HANDLE:\n d[TOO_LARGE_ATTR] = TOO_LARGE_MSG\n break\n\n d['__len__'] = len(var)\n # in case if the class extends built-in type and has some additional fields\n additional_fields = defaultResolver.get_dictionary(var)\n d.update(additional_fields)\n return d\n\n\n#=======================================================================================================================\n# SetResolver\n#=======================================================================================================================\nclass SetResolver:\n '''\n Resolves a set as dict id(object)->object\n '''\n\n def resolve(self, var, attribute):\n if attribute in ('__len__', TOO_LARGE_ATTR):\n return None\n\n try:\n attribute = int(attribute)\n except:\n return getattr(var, attribute)\n\n for v in var:\n if id(v) == attribute:\n return v\n\n raise UnableToResolveVariableException('Unable to resolve %s in %s' % (attribute, var))\n\n def get_dictionary(self, var):\n d = {}\n i = 0\n for item in var:\n i += 1\n d[str(id(item))] = item\n\n if i > MAX_ITEMS_TO_HANDLE:\n d[TOO_LARGE_ATTR] = TOO_LARGE_MSG\n break\n\n d['__len__'] = len(var)\n # in case if the class extends built-in type and has some additional fields\n additional_fields = defaultResolver.get_dictionary(var)\n d.update(additional_fields)\n return d\n\n\n#=======================================================================================================================\n# InstanceResolver\n#=======================================================================================================================\nclass InstanceResolver:\n\n def resolve(self, var, attribute):\n field = var.__class__.getDeclaredField(attribute)\n field.setAccessible(True)\n return field.get(var)\n\n def get_dictionary(self, obj):\n ret = {}\n\n declaredFields = obj.__class__.getDeclaredFields()\n for i in xrange(len(declaredFields)):\n name = declaredFields[i].getName()\n try:\n declaredFields[i].setAccessible(True)\n ret[name] = declaredFields[i].get(obj)\n except:\n traceback.print_exc()\n\n return ret\n\n\n#=======================================================================================================================\n# JyArrayResolver\n#=======================================================================================================================\nclass JyArrayResolver:\n '''\n This resolves a regular Object[] array from java\n '''\n\n def resolve(self, var, attribute):\n if attribute == '__len__':\n return None\n return var[int(attribute)]\n\n def get_dictionary(self, obj):\n ret = {}\n\n for i in xrange(len(obj)):\n ret[ i ] = obj[i]\n\n ret['__len__'] = len(obj)\n return ret\n\n\n#=======================================================================================================================\n# 
MultiValueDictResolver\n#=======================================================================================================================\nclass MultiValueDictResolver(DictResolver):\n\n def resolve(self, dict, key):\n if key in ('__len__', TOO_LARGE_ATTR):\n return None\n\n # ok, we have to iterate over the items to find the one that matches the id, because that's the only way\n # to actually find the reference from the string we have before.\n expected_id = int(key.split('(')[-1][:-1])\n for key in dict_keys(dict):\n val = dict.getlist(key)\n if id(key) == expected_id:\n return val\n\n raise UnableToResolveVariableException()\n\n\n#=======================================================================================================================\n# DjangoFormResolver\n#=======================================================================================================================\nclass DjangoFormResolver(DefaultResolver):\n\n def get_dictionary(self, var, names=None):\n # Do not call self.errors because it is a property and has side effects.\n names, used___dict__ = self.get_names(var)\n\n has_errors_attr = False\n if \"errors\" in names:\n has_errors_attr = True\n names.remove(\"errors\")\n\n d = defaultResolver.get_dictionary(var, names=names, used___dict__=used___dict__)\n if has_errors_attr:\n try:\n errors_attr = getattr(var, \"_errors\")\n except:\n errors_attr = None\n d[\"errors\"] = errors_attr\n return d\n\n\n#=======================================================================================================================\n# DequeResolver\n#=======================================================================================================================\nclass DequeResolver(TupleResolver):\n\n def get_dictionary(self, var):\n d = TupleResolver.get_dictionary(self, var)\n d['maxlen'] = getattr(var, 'maxlen', None)\n return d\n\n\n#=======================================================================================================================\n# OrderedDictResolver\n#=======================================================================================================================\nclass OrderedDictResolver(DictResolver):\n\n def init_dict(self):\n return OrderedDict()\n\n\n#=======================================================================================================================\n# FrameResolver\n#=======================================================================================================================\nclass FrameResolver:\n '''\n This resolves a frame.\n '''\n\n def resolve(self, obj, attribute):\n if attribute == '__internals__':\n return defaultResolver.get_dictionary(obj)\n\n if attribute == 'stack':\n return self.get_frame_stack(obj)\n\n if attribute == 'f_locals':\n return obj.f_locals\n\n return None\n\n def get_dictionary(self, obj):\n ret = {}\n ret['__internals__'] = defaultResolver.get_dictionary(obj)\n ret['stack'] = self.get_frame_stack(obj)\n ret['f_locals'] = obj.f_locals\n return ret\n\n def get_frame_stack(self, frame):\n ret = []\n if frame is not None:\n ret.append(self.get_frame_name(frame))\n\n while frame.f_back:\n frame = frame.f_back\n ret.append(self.get_frame_name(frame))\n\n return ret\n\n def get_frame_name(self, frame):\n if frame is None:\n return 'None'\n try:\n name = basename(frame.f_code.co_filename)\n return 'frame: %s [%s:%s] id:%s' % (frame.f_code.co_name, name, frame.f_lineno, id(frame))\n except:\n return 'frame object'\n\n\ndefaultResolver = DefaultResolver()\ndictResolver 
= DictResolver()\ntupleResolver = TupleResolver()\ninstanceResolver = InstanceResolver()\njyArrayResolver = JyArrayResolver()\nsetResolver = SetResolver()\nmultiValueDictResolver = MultiValueDictResolver()\ndjangoFormResolver = DjangoFormResolver()\ndequeResolver = DequeResolver()\norderedDictResolver = OrderedDictResolver()\nframeResolver = FrameResolver()\n", "path": "src/ptvsd/_vendored/pydevd/_pydevd_bundle/pydevd_resolver.py"}], "after_files": [{"content": "try:\n import StringIO\nexcept:\n import io as StringIO\nimport traceback\nfrom os.path import basename\n\nfrom functools import partial\nfrom _pydevd_bundle.pydevd_constants import dict_iter_items, dict_keys, xrange\nfrom _pydevd_bundle.pydevd_safe_repr import SafeRepr\n\n# Note: 300 is already a lot to see in the outline (after that the user should really use the shell to get things)\n# and this also means we'll pass less information to the client side (which makes debugging faster).\nMAX_ITEMS_TO_HANDLE = 300\n\nTOO_LARGE_MSG = 'Too large to show contents. Max items to show: ' + str(MAX_ITEMS_TO_HANDLE)\nTOO_LARGE_ATTR = 'Unable to handle:'\n\n\n#=======================================================================================================================\n# UnableToResolveVariableException\n#=======================================================================================================================\nclass UnableToResolveVariableException(Exception):\n pass\n\n\n#=======================================================================================================================\n# InspectStub\n#=======================================================================================================================\nclass InspectStub:\n\n def isbuiltin(self, _args):\n return False\n\n def isroutine(self, object):\n return False\n\n\ntry:\n import inspect\nexcept:\n inspect = InspectStub()\n\ntry:\n from collections import OrderedDict\nexcept:\n OrderedDict = dict\n\ntry:\n import java.lang # @UnresolvedImport\nexcept:\n pass\n\n# types does not include a MethodWrapperType\ntry:\n MethodWrapperType = type([].__str__)\nexcept:\n MethodWrapperType = None\n\n#=======================================================================================================================\n# See: pydevd_extension_api module for resolver interface\n#=======================================================================================================================\n\n\ndef sorted_attributes_key(attr_name):\n if attr_name.startswith('__'):\n if attr_name.endswith('__'):\n # __ double under before and after __\n return (3, attr_name)\n else:\n # __ double under before\n return (2, attr_name)\n elif attr_name.startswith('_'):\n # _ single under\n return (1, attr_name)\n else:\n # Regular (Before anything)\n return (0, attr_name)\n\n\n#=======================================================================================================================\n# DefaultResolver\n#=======================================================================================================================\nclass DefaultResolver:\n '''\n DefaultResolver is the class that'll actually resolve how to show some variable.\n '''\n\n def resolve(self, var, attribute):\n return getattr(var, attribute)\n\n def get_contents_debug_adapter_protocol(self, obj, fmt=None):\n if MethodWrapperType:\n dct, used___dict__ = self._get_py_dictionary(obj)\n else:\n dct = self._get_jy_dictionary(obj)[0]\n\n lst = sorted(dict_iter_items(dct), key=lambda tup: 
sorted_attributes_key(tup[0]))\n if used___dict__:\n return [(attr_name, attr_value, '.__dict__[%s]' % attr_name) for (attr_name, attr_value) in lst]\n else:\n return [(attr_name, attr_value, '.%s' % attr_name) for (attr_name, attr_value) in lst]\n\n def get_dictionary(self, var, names=None, used___dict__=False):\n if MethodWrapperType:\n return self._get_py_dictionary(var, names, used___dict__=used___dict__)[0]\n else:\n return self._get_jy_dictionary(var)[0]\n\n def _get_jy_dictionary(self, obj):\n ret = {}\n found = java.util.HashMap()\n\n original = obj\n if hasattr(obj, '__class__') and obj.__class__ == java.lang.Class:\n\n # get info about superclasses\n classes = []\n classes.append(obj)\n c = obj.getSuperclass()\n while c != None:\n classes.append(c)\n c = c.getSuperclass()\n\n # get info about interfaces\n interfs = []\n for obj in classes:\n interfs.extend(obj.getInterfaces())\n classes.extend(interfs)\n\n # now is the time when we actually get info on the declared methods and fields\n for obj in classes:\n\n declaredMethods = obj.getDeclaredMethods()\n declaredFields = obj.getDeclaredFields()\n for i in xrange(len(declaredMethods)):\n name = declaredMethods[i].getName()\n ret[name] = declaredMethods[i].toString()\n found.put(name, 1)\n\n for i in xrange(len(declaredFields)):\n name = declaredFields[i].getName()\n found.put(name, 1)\n # if declaredFields[i].isAccessible():\n declaredFields[i].setAccessible(True)\n # ret[name] = declaredFields[i].get( declaredFields[i] )\n try:\n ret[name] = declaredFields[i].get(original)\n except:\n ret[name] = declaredFields[i].toString()\n\n # this simple dir does not always get all the info, that's why we have the part before\n # (e.g.: if we do a dir on String, some methods that are from other interfaces such as\n # charAt don't appear)\n try:\n d = dir(original)\n for name in d:\n if found.get(name) != 1:\n ret[name] = getattr(original, name)\n except:\n # sometimes we're unable to do a dir\n pass\n\n return ret\n\n def get_names(self, var):\n used___dict__ = False\n try:\n names = dir(var)\n except TypeError:\n names = []\n if not names:\n if hasattr(var, '__dict__'):\n names = dict_keys(var.__dict__)\n used___dict__ = True\n return names, used___dict__\n\n def _get_py_dictionary(self, var, names=None, used___dict__=False):\n '''\n :return tuple(names, used___dict__), where used___dict__ means we have to access\n using obj.__dict__[name] instead of getattr(obj, name)\n '''\n\n # TODO: Those should be options (would fix https://github.com/Microsoft/ptvsd/issues/66).\n filter_private = False\n filter_special = True\n filter_function = True\n filter_builtin = True\n\n if not names:\n names, used___dict__ = self.get_names(var)\n d = {}\n\n # Be aware that the order in which the filters are applied attempts to\n # optimize the operation by removing as many items as possible in the\n # first filters, leaving fewer items for later filters\n\n if filter_builtin or filter_function:\n for name in names:\n try:\n name_as_str = name\n if name_as_str.__class__ != str:\n name_as_str = '%r' % (name_as_str,)\n\n if filter_special:\n if name_as_str.startswith('__') and name_as_str.endswith('__'):\n continue\n\n if filter_private:\n if name_as_str.startswith('_') or name_as_str.endswith('__'):\n continue\n if not used___dict__:\n attr = getattr(var, name)\n else:\n attr = var.__dict__[name]\n\n # filter builtins?\n if filter_builtin:\n if inspect.isbuiltin(attr):\n continue\n\n # filter functions?\n if filter_function:\n if inspect.isroutine(attr) or 
isinstance(attr, MethodWrapperType):\n continue\n except:\n # if some error occurs getting it, let's put it to the user.\n strIO = StringIO.StringIO()\n traceback.print_exc(file=strIO)\n attr = strIO.getvalue()\n\n d[name_as_str] = attr\n\n return d, used___dict__\n\n\n#=======================================================================================================================\n# DictResolver\n#=======================================================================================================================\nclass DictResolver:\n\n def resolve(self, dict, key):\n if key in ('__len__', TOO_LARGE_ATTR):\n return None\n\n if '(' not in key:\n # we have to treat that because the dict resolver is also used to directly resolve the global and local\n # scopes (which already have the items directly)\n try:\n return dict[key]\n except:\n return getattr(dict, key)\n\n # ok, we have to iterate over the items to find the one that matches the id, because that's the only way\n # to actually find the reference from the string we have before.\n expected_id = int(key.split('(')[-1][:-1])\n for key, val in dict_iter_items(dict):\n if id(key) == expected_id:\n return val\n\n raise UnableToResolveVariableException()\n\n def key_to_str(self, key, fmt=None):\n if fmt is not None:\n if fmt.get('hex', False):\n safe_repr = SafeRepr()\n safe_repr.convert_to_hex = True\n return safe_repr(key)\n return '%r' % (key,)\n\n def init_dict(self):\n return {}\n\n def get_contents_debug_adapter_protocol(self, dct, fmt=None):\n '''\n This method is to be used in the case where the variables are all saved by its id (and as\n such don't need to have the `resolve` method called later on, so, keys don't need to\n embed the reference in the key).\n\n Note that the return should be ordered.\n\n :return list(tuple(name:str, value:object, evaluateName:str))\n '''\n ret = []\n\n i = 0\n for key, val in dict_iter_items(dct):\n i += 1\n key_as_str = self.key_to_str(key, fmt)\n eval_key_str = self.key_to_str(key) # do not format the key\n ret.append((key_as_str, val, '[%s]' % (eval_key_str,)))\n if i > MAX_ITEMS_TO_HANDLE:\n ret.append((TOO_LARGE_ATTR, TOO_LARGE_MSG, None))\n break\n\n ret.append(('__len__', len(dct), partial(_apply_evaluate_name, evaluate_name='len(%s)')))\n # in case the class extends built-in type and has some additional fields\n from_default_resolver = defaultResolver.get_contents_debug_adapter_protocol(dct, fmt)\n\n if from_default_resolver:\n ret = from_default_resolver + ret\n\n return sorted(ret, key=lambda tup: sorted_attributes_key(tup[0]))\n\n def get_dictionary(self, dict):\n ret = self.init_dict()\n\n i = 0\n for key, val in dict_iter_items(dict):\n i += 1\n # we need to add the id because otherwise we cannot find the real object to get its contents later on.\n key = '%s (%s)' % (self.key_to_str(key), id(key))\n ret[key] = val\n if i > MAX_ITEMS_TO_HANDLE:\n ret[TOO_LARGE_ATTR] = TOO_LARGE_MSG\n break\n\n ret['__len__'] = len(dict)\n # in case if the class extends built-in type and has some additional fields\n additional_fields = defaultResolver.get_dictionary(dict)\n ret.update(additional_fields)\n return ret\n\n\ndef _apply_evaluate_name(parent_name, evaluate_name):\n return evaluate_name % (parent_name,)\n\n\n#=======================================================================================================================\n# TupleResolver\n#=======================================================================================================================\nclass TupleResolver: # 
to enumerate tuples and lists\n\n def resolve(self, var, attribute):\n '''\n @param var: that's the original attribute\n @param attribute: that's the key passed in the dict (as a string)\n '''\n if attribute in ('__len__', TOO_LARGE_ATTR):\n return None\n try:\n return var[int(attribute)]\n except:\n return getattr(var, attribute)\n\n def get_contents_debug_adapter_protocol(self, lst, fmt=None):\n '''\n This method is to be used in the case where the variables are all saved by its id (and as\n such don't need to have the `resolve` method called later on, so, keys don't need to\n embed the reference in the key).\n\n Note that the return should be ordered.\n\n :return list(tuple(name:str, value:object, evaluateName:str))\n '''\n l = len(lst)\n ret = []\n\n format_str = '%0' + str(int(len(str(l - 1)))) + 'd'\n if fmt is not None and fmt.get('hex', False):\n format_str = '0x%0' + str(int(len(hex(l).lstrip('0x')))) + 'x'\n\n for i, item in enumerate(lst):\n ret.append((format_str % i, item, '[%s]' % i))\n\n if i > MAX_ITEMS_TO_HANDLE:\n ret.append((TOO_LARGE_ATTR, TOO_LARGE_MSG, None))\n break\n\n ret.append(('__len__', len(lst), partial(_apply_evaluate_name, evaluate_name='len(%s)')))\n # Needed in case the class extends the built-in type and has some additional fields.\n from_default_resolver = defaultResolver.get_contents_debug_adapter_protocol(lst, fmt=fmt)\n if from_default_resolver:\n ret = from_default_resolver + ret\n return ret\n\n def get_dictionary(self, var, fmt={}):\n l = len(var)\n d = {}\n\n format_str = '%0' + str(int(len(str(l - 1)))) + 'd'\n if fmt is not None and fmt.get('hex', False):\n format_str = '0x%0' + str(int(len(hex(l).lstrip('0x')))) + 'x'\n\n for i, item in enumerate(var):\n d[format_str % i] = item\n\n if i > MAX_ITEMS_TO_HANDLE:\n d[TOO_LARGE_ATTR] = TOO_LARGE_MSG\n break\n\n d['__len__'] = len(var)\n # in case if the class extends built-in type and has some additional fields\n additional_fields = defaultResolver.get_dictionary(var)\n d.update(additional_fields)\n return d\n\n\n#=======================================================================================================================\n# SetResolver\n#=======================================================================================================================\nclass SetResolver:\n '''\n Resolves a set as dict id(object)->object\n '''\n\n def resolve(self, var, attribute):\n if attribute in ('__len__', TOO_LARGE_ATTR):\n return None\n\n try:\n attribute = int(attribute)\n except:\n return getattr(var, attribute)\n\n for v in var:\n if id(v) == attribute:\n return v\n\n raise UnableToResolveVariableException('Unable to resolve %s in %s' % (attribute, var))\n\n def get_dictionary(self, var):\n d = {}\n i = 0\n for item in var:\n i += 1\n d[str(id(item))] = item\n\n if i > MAX_ITEMS_TO_HANDLE:\n d[TOO_LARGE_ATTR] = TOO_LARGE_MSG\n break\n\n d['__len__'] = len(var)\n # in case if the class extends built-in type and has some additional fields\n additional_fields = defaultResolver.get_dictionary(var)\n d.update(additional_fields)\n return d\n\n\n#=======================================================================================================================\n# InstanceResolver\n#=======================================================================================================================\nclass InstanceResolver:\n\n def resolve(self, var, attribute):\n field = var.__class__.getDeclaredField(attribute)\n field.setAccessible(True)\n return field.get(var)\n\n def get_dictionary(self, 
obj):\n ret = {}\n\n declaredFields = obj.__class__.getDeclaredFields()\n for i in xrange(len(declaredFields)):\n name = declaredFields[i].getName()\n try:\n declaredFields[i].setAccessible(True)\n ret[name] = declaredFields[i].get(obj)\n except:\n traceback.print_exc()\n\n return ret\n\n\n#=======================================================================================================================\n# JyArrayResolver\n#=======================================================================================================================\nclass JyArrayResolver:\n '''\n This resolves a regular Object[] array from java\n '''\n\n def resolve(self, var, attribute):\n if attribute == '__len__':\n return None\n return var[int(attribute)]\n\n def get_dictionary(self, obj):\n ret = {}\n\n for i in xrange(len(obj)):\n ret[ i ] = obj[i]\n\n ret['__len__'] = len(obj)\n return ret\n\n\n#=======================================================================================================================\n# MultiValueDictResolver\n#=======================================================================================================================\nclass MultiValueDictResolver(DictResolver):\n\n def resolve(self, dict, key):\n if key in ('__len__', TOO_LARGE_ATTR):\n return None\n\n # ok, we have to iterate over the items to find the one that matches the id, because that's the only way\n # to actually find the reference from the string we have before.\n expected_id = int(key.split('(')[-1][:-1])\n for key in dict_keys(dict):\n val = dict.getlist(key)\n if id(key) == expected_id:\n return val\n\n raise UnableToResolveVariableException()\n\n\n#=======================================================================================================================\n# DjangoFormResolver\n#=======================================================================================================================\nclass DjangoFormResolver(DefaultResolver):\n\n def get_dictionary(self, var, names=None):\n # Do not call self.errors because it is a property and has side effects.\n names, used___dict__ = self.get_names(var)\n\n has_errors_attr = False\n if \"errors\" in names:\n has_errors_attr = True\n names.remove(\"errors\")\n\n d = defaultResolver.get_dictionary(var, names=names, used___dict__=used___dict__)\n if has_errors_attr:\n try:\n errors_attr = getattr(var, \"_errors\")\n except:\n errors_attr = None\n d[\"errors\"] = errors_attr\n return d\n\n\n#=======================================================================================================================\n# DequeResolver\n#=======================================================================================================================\nclass DequeResolver(TupleResolver):\n\n def get_dictionary(self, var):\n d = TupleResolver.get_dictionary(self, var)\n d['maxlen'] = getattr(var, 'maxlen', None)\n return d\n\n\n#=======================================================================================================================\n# OrderedDictResolver\n#=======================================================================================================================\nclass OrderedDictResolver(DictResolver):\n\n def init_dict(self):\n return OrderedDict()\n\n\n#=======================================================================================================================\n# FrameResolver\n#=======================================================================================================================\nclass 
FrameResolver:\n '''\n This resolves a frame.\n '''\n\n def resolve(self, obj, attribute):\n if attribute == '__internals__':\n return defaultResolver.get_dictionary(obj)\n\n if attribute == 'stack':\n return self.get_frame_stack(obj)\n\n if attribute == 'f_locals':\n return obj.f_locals\n\n return None\n\n def get_dictionary(self, obj):\n ret = {}\n ret['__internals__'] = defaultResolver.get_dictionary(obj)\n ret['stack'] = self.get_frame_stack(obj)\n ret['f_locals'] = obj.f_locals\n return ret\n\n def get_frame_stack(self, frame):\n ret = []\n if frame is not None:\n ret.append(self.get_frame_name(frame))\n\n while frame.f_back:\n frame = frame.f_back\n ret.append(self.get_frame_name(frame))\n\n return ret\n\n def get_frame_name(self, frame):\n if frame is None:\n return 'None'\n try:\n name = basename(frame.f_code.co_filename)\n return 'frame: %s [%s:%s] id:%s' % (frame.f_code.co_name, name, frame.f_lineno, id(frame))\n except:\n return 'frame object'\n\n\ndefaultResolver = DefaultResolver()\ndictResolver = DictResolver()\ntupleResolver = TupleResolver()\ninstanceResolver = InstanceResolver()\njyArrayResolver = JyArrayResolver()\nsetResolver = SetResolver()\nmultiValueDictResolver = MultiValueDictResolver()\ndjangoFormResolver = DjangoFormResolver()\ndequeResolver = DequeResolver()\norderedDictResolver = OrderedDictResolver()\nframeResolver = FrameResolver()\n", "path": "src/ptvsd/_vendored/pydevd/_pydevd_bundle/pydevd_resolver.py"}]} |
gh_patches_debug_1165 | rasdani/github-patches | git_diff | inventree__InvenTree-2666 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[BUG] Unable to create build output
**Describe the bug**
On a fully allocated Build Order, I'm unable to create a build output.
**Steps to Reproduce**
1. Create any new Build Order
2. Fully allocate the stock as normal
3. Under the "Pending Items" tab, click "New Build Output"
4. Populate the Quantity field, check "Confirm" then click Submit
**Expected behavior**
The Build Output would be created.
**Actual Behavior**
Blocker: the application reports a "Form errors exist" message and no further action can be taken, even though all required fields are complete.

Further inspection of the network tab in DevTools shows a POST to `/api/build/1/create-output/` with the response:
```json
{"auto_allocate":["This field may not be null."]}
```
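
This matches how Django REST Framework validates an explicit `null`: presumably the web form submits `auto_allocate: null`, and a `BooleanField` rejects `null` unless `allow_null=True` is set, regardless of `required=False`/`default=False`. Below is a minimal stand-alone sketch of that behaviour; the class names and the settings shim are illustrative assumptions, not InvenTree code.

```python
# Hypothetical stand-alone repro; assumes Django and djangorestframework are
# installed. "OutputSketch" / "PatchedSketch" are illustrative names only and
# are not part of the InvenTree code base.
import django
from django.conf import settings

settings.configure(USE_I18N=False)  # minimal settings so DRF can run outside a project
django.setup()

from rest_framework import serializers


class OutputSketch(serializers.Serializer):
    # Same shape as the current field: optional with a default, but
    # allow_null is left at its default of False.
    auto_allocate = serializers.BooleanField(required=False, default=False)


broken = OutputSketch(data={"auto_allocate": None})
broken.is_valid()
print(broken.errors)  # the error from the bug report: "This field may not be null."


class PatchedSketch(serializers.Serializer):
    # With allow_null=True an explicit null is accepted for the optional field.
    auto_allocate = serializers.BooleanField(required=False, default=False, allow_null=True)


fixed = PatchedSketch(data={"auto_allocate": None})
print(fixed.is_valid())  # True
```
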
**Deployment Method**
- Docker
**Version Information**
InvenTree-Version: 0.6.0
Django Version: 3.2.12
Commit Hash: 37bd573
Commit Date: 2022-02-21
Database: postgresql
Debug-Mode: False
Deployed using Docker: True
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `InvenTree/build/serializers.py`
Content:
```
1 """
2 JSON serializers for Build API
3 """
4
5 # -*- coding: utf-8 -*-
6 from __future__ import unicode_literals
7
8 from django.db import transaction
9 from django.core.exceptions import ValidationError as DjangoValidationError
10 from django.utils.translation import ugettext_lazy as _
11
12 from django.db.models import Case, When, Value
13 from django.db.models import BooleanField
14
15 from rest_framework import serializers
16 from rest_framework.serializers import ValidationError
17
18 from InvenTree.serializers import InvenTreeModelSerializer, InvenTreeAttachmentSerializer
19 from InvenTree.serializers import UserSerializerBrief, ReferenceIndexingSerializerMixin
20
21 import InvenTree.helpers
22 from InvenTree.helpers import extract_serial_numbers
23 from InvenTree.serializers import InvenTreeDecimalField
24 from InvenTree.status_codes import StockStatus
25
26 from stock.models import StockItem, StockLocation
27 from stock.serializers import StockItemSerializerBrief, LocationSerializer
28
29 from part.models import BomItem
30 from part.serializers import PartSerializer, PartBriefSerializer
31 from users.serializers import OwnerSerializer
32
33 from .models import Build, BuildItem, BuildOrderAttachment
34
35
36 class BuildSerializer(ReferenceIndexingSerializerMixin, InvenTreeModelSerializer):
37 """
38 Serializes a Build object
39 """
40
41 url = serializers.CharField(source='get_absolute_url', read_only=True)
42 status_text = serializers.CharField(source='get_status_display', read_only=True)
43
44 part_detail = PartBriefSerializer(source='part', many=False, read_only=True)
45
46 quantity = InvenTreeDecimalField()
47
48 overdue = serializers.BooleanField(required=False, read_only=True)
49
50 issued_by_detail = UserSerializerBrief(source='issued_by', read_only=True)
51
52 responsible_detail = OwnerSerializer(source='responsible', read_only=True)
53
54 @staticmethod
55 def annotate_queryset(queryset):
56 """
57 Add custom annotations to the BuildSerializer queryset,
58 performing database queries as efficiently as possible.
59
60 The following annoted fields are added:
61
62 - overdue: True if the build is outstanding *and* the completion date has past
63
64 """
65
66 # Annotate a boolean 'overdue' flag
67
68 queryset = queryset.annotate(
69 overdue=Case(
70 When(
71 Build.OVERDUE_FILTER, then=Value(True, output_field=BooleanField()),
72 ),
73 default=Value(False, output_field=BooleanField())
74 )
75 )
76
77 return queryset
78
79 def __init__(self, *args, **kwargs):
80 part_detail = kwargs.pop('part_detail', True)
81
82 super().__init__(*args, **kwargs)
83
84 if part_detail is not True:
85 self.fields.pop('part_detail')
86
87 class Meta:
88 model = Build
89 fields = [
90 'pk',
91 'url',
92 'title',
93 'batch',
94 'creation_date',
95 'completed',
96 'completion_date',
97 'destination',
98 'parent',
99 'part',
100 'part_detail',
101 'overdue',
102 'reference',
103 'sales_order',
104 'quantity',
105 'status',
106 'status_text',
107 'target_date',
108 'take_from',
109 'notes',
110 'link',
111 'issued_by',
112 'issued_by_detail',
113 'responsible',
114 'responsible_detail',
115 ]
116
117 read_only_fields = [
118 'completed',
119 'creation_date',
120 'completion_data',
121 'status',
122 'status_text',
123 ]
124
125
126 class BuildOutputSerializer(serializers.Serializer):
127 """
128 Serializer for a "BuildOutput"
129
130 Note that a "BuildOutput" is really just a StockItem which is "in production"!
131 """
132
133 output = serializers.PrimaryKeyRelatedField(
134 queryset=StockItem.objects.all(),
135 many=False,
136 allow_null=False,
137 required=True,
138 label=_('Build Output'),
139 )
140
141 def validate_output(self, output):
142
143 build = self.context['build']
144
145 # As this serializer can be used in multiple contexts, we need to work out why we are here
146 to_complete = self.context.get('to_complete', False)
147
148 # The stock item must point to the build
149 if output.build != build:
150 raise ValidationError(_("Build output does not match the parent build"))
151
152 # The part must match!
153 if output.part != build.part:
154 raise ValidationError(_("Output part does not match BuildOrder part"))
155
156 # The build output must be "in production"
157 if not output.is_building:
158 raise ValidationError(_("This build output has already been completed"))
159
160 if to_complete:
161
162 # The build output must have all tracked parts allocated
163 if not build.isFullyAllocated(output):
164 raise ValidationError(_("This build output is not fully allocated"))
165
166 return output
167
168 class Meta:
169 fields = [
170 'output',
171 ]
172
173
174 class BuildOutputCreateSerializer(serializers.Serializer):
175 """
176 Serializer for creating a new BuildOutput against a BuildOrder.
177
178 URL pattern is "/api/build/<pk>/create-output/", where <pk> is the PK of a Build.
179
180 The Build object is provided to the serializer context.
181 """
182
183 quantity = serializers.DecimalField(
184 max_digits=15,
185 decimal_places=5,
186 min_value=0,
187 required=True,
188 label=_('Quantity'),
189 help_text=_('Enter quantity for build output'),
190 )
191
192 def get_build(self):
193 return self.context["build"]
194
195 def get_part(self):
196 return self.get_build().part
197
198 def validate_quantity(self, quantity):
199
200 if quantity < 0:
201 raise ValidationError(_("Quantity must be greater than zero"))
202
203 part = self.get_part()
204
205 if int(quantity) != quantity:
206 # Quantity must be an integer value if the part being built is trackable
207 if part.trackable:
208 raise ValidationError(_("Integer quantity required for trackable parts"))
209
210 if part.has_trackable_parts():
211 raise ValidationError(_("Integer quantity required, as the bill of materials contains trackable parts"))
212
213 return quantity
214
215 batch_code = serializers.CharField(
216 required=False,
217 allow_blank=True,
218 label=_('Batch Code'),
219 help_text=_('Batch code for this build output'),
220 )
221
222 serial_numbers = serializers.CharField(
223 allow_blank=True,
224 required=False,
225 label=_('Serial Numbers'),
226 help_text=_('Enter serial numbers for build outputs'),
227 )
228
229 def validate_serial_numbers(self, serial_numbers):
230
231 serial_numbers = serial_numbers.strip()
232
233 # TODO: Field level validation necessary here?
234 return serial_numbers
235
236 auto_allocate = serializers.BooleanField(
237 required=False,
238 default=False,
239 label=_('Auto Allocate Serial Numbers'),
240 help_text=_('Automatically allocate required items with matching serial numbers'),
241 )
242
243 def validate(self, data):
244 """
245 Perform form validation
246 """
247
248 part = self.get_part()
249
250 # Cache a list of serial numbers (to be used in the "save" method)
251 self.serials = None
252
253 quantity = data['quantity']
254 serial_numbers = data.get('serial_numbers', '')
255
256 if serial_numbers:
257
258 try:
259 self.serials = extract_serial_numbers(serial_numbers, quantity, part.getLatestSerialNumberInt())
260 except DjangoValidationError as e:
261 raise ValidationError({
262 'serial_numbers': e.messages,
263 })
264
265 # Check for conflicting serial numbesr
266 existing = []
267
268 for serial in self.serials:
269 if part.checkIfSerialNumberExists(serial):
270 existing.append(serial)
271
272 if len(existing) > 0:
273
274 msg = _("The following serial numbers already exist")
275 msg += " : "
276 msg += ",".join([str(e) for e in existing])
277
278 raise ValidationError({
279 'serial_numbers': msg,
280 })
281
282 return data
283
284 def save(self):
285 """
286 Generate the new build output(s)
287 """
288
289 data = self.validated_data
290
291 quantity = data['quantity']
292 batch_code = data.get('batch_code', '')
293 auto_allocate = data.get('auto_allocate', False)
294
295 build = self.get_build()
296
297 build.create_build_output(
298 quantity,
299 serials=self.serials,
300 batch=batch_code,
301 auto_allocate=auto_allocate,
302 )
303
304
305 class BuildOutputDeleteSerializer(serializers.Serializer):
306 """
307 DRF serializer for deleting (cancelling) one or more build outputs
308 """
309
310 class Meta:
311 fields = [
312 'outputs',
313 ]
314
315 outputs = BuildOutputSerializer(
316 many=True,
317 required=True,
318 )
319
320 def validate(self, data):
321
322 data = super().validate(data)
323
324 outputs = data.get('outputs', [])
325
326 if len(outputs) == 0:
327 raise ValidationError(_("A list of build outputs must be provided"))
328
329 return data
330
331 def save(self):
332 """
333 'save' the serializer to delete the build outputs
334 """
335
336 data = self.validated_data
337 outputs = data.get('outputs', [])
338
339 build = self.context['build']
340
341 with transaction.atomic():
342 for item in outputs:
343 output = item['output']
344 build.delete_output(output)
345
346
347 class BuildOutputCompleteSerializer(serializers.Serializer):
348 """
349 DRF serializer for completing one or more build outputs
350 """
351
352 class Meta:
353 fields = [
354 'outputs',
355 'location',
356 'status',
357 'notes',
358 ]
359
360 outputs = BuildOutputSerializer(
361 many=True,
362 required=True,
363 )
364
365 location = serializers.PrimaryKeyRelatedField(
366 queryset=StockLocation.objects.all(),
367 required=True,
368 many=False,
369 label=_("Location"),
370 help_text=_("Location for completed build outputs"),
371 )
372
373 status = serializers.ChoiceField(
374 choices=list(StockStatus.items()),
375 default=StockStatus.OK,
376 label=_("Status"),
377 )
378
379 notes = serializers.CharField(
380 label=_("Notes"),
381 required=False,
382 allow_blank=True,
383 )
384
385 def validate(self, data):
386
387 super().validate(data)
388
389 outputs = data.get('outputs', [])
390
391 if len(outputs) == 0:
392 raise ValidationError(_("A list of build outputs must be provided"))
393
394 return data
395
396 def save(self):
397 """
398 "save" the serializer to complete the build outputs
399 """
400
401 build = self.context['build']
402 request = self.context['request']
403
404 data = self.validated_data
405
406 outputs = data.get('outputs', [])
407
408 # Mark the specified build outputs as "complete"
409 with transaction.atomic():
410 for item in outputs:
411
412 output = item['output']
413
414 build.complete_build_output(
415 output,
416 request.user,
417 status=data['status'],
418 notes=data.get('notes', '')
419 )
420
421
422 class BuildCompleteSerializer(serializers.Serializer):
423 """
424 DRF serializer for marking a BuildOrder as complete
425 """
426
427 accept_unallocated = serializers.BooleanField(
428 label=_('Accept Unallocated'),
429 help_text=_('Accept that stock items have not been fully allocated to this build order'),
430 required=False,
431 default=False,
432 )
433
434 def validate_accept_unallocated(self, value):
435
436 build = self.context['build']
437
438 if not build.areUntrackedPartsFullyAllocated() and not value:
439 raise ValidationError(_('Required stock has not been fully allocated'))
440
441 return value
442
443 accept_incomplete = serializers.BooleanField(
444 label=_('Accept Incomplete'),
445 help_text=_('Accept that the required number of build outputs have not been completed'),
446 required=False,
447 default=False,
448 )
449
450 def validate_accept_incomplete(self, value):
451
452 build = self.context['build']
453
454 if build.remaining > 0 and not value:
455 raise ValidationError(_('Required build quantity has not been completed'))
456
457 return value
458
459 def validate(self, data):
460
461 build = self.context['build']
462
463 if build.incomplete_count > 0:
464 raise ValidationError(_("Build order has incomplete outputs"))
465
466 if not build.has_build_outputs():
467 raise ValidationError(_("No build outputs have been created for this build order"))
468
469 return data
470
471 def save(self):
472
473 request = self.context['request']
474 build = self.context['build']
475
476 build.complete_build(request.user)
477
478
479 class BuildUnallocationSerializer(serializers.Serializer):
480 """
481 DRF serializer for unallocating stock from a BuildOrder
482
483 Allocated stock can be unallocated with a number of filters:
484
485 - output: Filter against a particular build output (blank = untracked stock)
486 - bom_item: Filter against a particular BOM line item
487
488 """
489
490 bom_item = serializers.PrimaryKeyRelatedField(
491 queryset=BomItem.objects.all(),
492 many=False,
493 allow_null=True,
494 required=False,
495 label=_('BOM Item'),
496 )
497
498 output = serializers.PrimaryKeyRelatedField(
499 queryset=StockItem.objects.filter(
500 is_building=True,
501 ),
502 many=False,
503 allow_null=True,
504 required=False,
505 label=_("Build output"),
506 )
507
508 def validate_output(self, stock_item):
509
510 # Stock item must point to the same build order!
511 build = self.context['build']
512
513 if stock_item and stock_item.build != build:
514 raise ValidationError(_("Build output must point to the same build"))
515
516 return stock_item
517
518 def save(self):
519 """
520 'Save' the serializer data.
521 This performs the actual unallocation against the build order
522 """
523
524 build = self.context['build']
525
526 data = self.validated_data
527
528 build.unallocateStock(
529 bom_item=data['bom_item'],
530 output=data['output']
531 )
532
533
534 class BuildAllocationItemSerializer(serializers.Serializer):
535 """
536 A serializer for allocating a single stock item against a build order
537 """
538
539 bom_item = serializers.PrimaryKeyRelatedField(
540 queryset=BomItem.objects.all(),
541 many=False,
542 allow_null=False,
543 required=True,
544 label=_('BOM Item'),
545 )
546
547 def validate_bom_item(self, bom_item):
548 """
549 Check if the parts match!
550 """
551
552 build = self.context['build']
553
554 # BomItem should point to the same 'part' as the parent build
555 if build.part != bom_item.part:
556
557 # If not, it may be marked as "inherited" from a parent part
558 if bom_item.inherited and build.part in bom_item.part.get_descendants(include_self=False):
559 pass
560 else:
561 raise ValidationError(_("bom_item.part must point to the same part as the build order"))
562
563 return bom_item
564
565 stock_item = serializers.PrimaryKeyRelatedField(
566 queryset=StockItem.objects.all(),
567 many=False,
568 allow_null=False,
569 required=True,
570 label=_('Stock Item'),
571 )
572
573 def validate_stock_item(self, stock_item):
574
575 if not stock_item.in_stock:
576 raise ValidationError(_("Item must be in stock"))
577
578 return stock_item
579
580 quantity = serializers.DecimalField(
581 max_digits=15,
582 decimal_places=5,
583 min_value=0,
584 required=True
585 )
586
587 def validate_quantity(self, quantity):
588
589 if quantity <= 0:
590 raise ValidationError(_("Quantity must be greater than zero"))
591
592 return quantity
593
594 output = serializers.PrimaryKeyRelatedField(
595 queryset=StockItem.objects.filter(is_building=True),
596 many=False,
597 allow_null=True,
598 required=False,
599 label=_('Build Output'),
600 )
601
602 class Meta:
603 fields = [
604 'bom_item',
605 'stock_item',
606 'quantity',
607 'output',
608 ]
609
610 def validate(self, data):
611
612 super().validate(data)
613
614 bom_item = data['bom_item']
615 stock_item = data['stock_item']
616 quantity = data['quantity']
617 output = data.get('output', None)
618
619 # build = self.context['build']
620
621 # TODO: Check that the "stock item" is valid for the referenced "sub_part"
622 # Note: Because of allow_variants options, it may not be a direct match!
623
624 # Check that the quantity does not exceed the available amount from the stock item
625 q = stock_item.unallocated_quantity()
626
627 if quantity > q:
628
629 q = InvenTree.helpers.clean_decimal(q)
630
631 raise ValidationError({
632 'quantity': _(f"Available quantity ({q}) exceeded")
633 })
634
635 # Output *must* be set for trackable parts
636 if output is None and bom_item.sub_part.trackable:
637 raise ValidationError({
638 'output': _('Build output must be specified for allocation of tracked parts')
639 })
640
641 # Output *cannot* be set for un-tracked parts
642 if output is not None and not bom_item.sub_part.trackable:
643
644 raise ValidationError({
645 'output': _('Build output cannot be specified for allocation of untracked parts')
646 })
647
648 return data
649
650
651 class BuildAllocationSerializer(serializers.Serializer):
652 """
653 DRF serializer for allocation stock items against a build order
654 """
655
656 items = BuildAllocationItemSerializer(many=True)
657
658 class Meta:
659 fields = [
660 'items',
661 ]
662
663 def validate(self, data):
664 """
665 Validation
666 """
667
668 data = super().validate(data)
669
670 items = data.get('items', [])
671
672 if len(items) == 0:
673 raise ValidationError(_('Allocation items must be provided'))
674
675 return data
676
677 def save(self):
678
679 data = self.validated_data
680
681 items = data.get('items', [])
682
683 build = self.context['build']
684
685 with transaction.atomic():
686 for item in items:
687 bom_item = item['bom_item']
688 stock_item = item['stock_item']
689 quantity = item['quantity']
690 output = item.get('output', None)
691
692 try:
693 # Create a new BuildItem to allocate stock
694 BuildItem.objects.create(
695 build=build,
696 bom_item=bom_item,
697 stock_item=stock_item,
698 quantity=quantity,
699 install_into=output
700 )
701 except (ValidationError, DjangoValidationError) as exc:
702 # Catch model errors and re-throw as DRF errors
703 raise ValidationError(detail=serializers.as_serializer_error(exc))
704
705
706 class BuildItemSerializer(InvenTreeModelSerializer):
707 """ Serializes a BuildItem object """
708
709 bom_part = serializers.IntegerField(source='bom_item.sub_part.pk', read_only=True)
710 part = serializers.IntegerField(source='stock_item.part.pk', read_only=True)
711 location = serializers.IntegerField(source='stock_item.location.pk', read_only=True)
712
713 # Extra (optional) detail fields
714 part_detail = PartSerializer(source='stock_item.part', many=False, read_only=True)
715 build_detail = BuildSerializer(source='build', many=False, read_only=True)
716 stock_item_detail = StockItemSerializerBrief(source='stock_item', read_only=True)
717 location_detail = LocationSerializer(source='stock_item.location', read_only=True)
718
719 quantity = InvenTreeDecimalField()
720
721 def __init__(self, *args, **kwargs):
722
723 build_detail = kwargs.pop('build_detail', False)
724 part_detail = kwargs.pop('part_detail', False)
725 location_detail = kwargs.pop('location_detail', False)
726
727 super().__init__(*args, **kwargs)
728
729 if not build_detail:
730 self.fields.pop('build_detail')
731
732 if not part_detail:
733 self.fields.pop('part_detail')
734
735 if not location_detail:
736 self.fields.pop('location_detail')
737
738 class Meta:
739 model = BuildItem
740 fields = [
741 'pk',
742 'bom_part',
743 'build',
744 'build_detail',
745 'install_into',
746 'location',
747 'location_detail',
748 'part',
749 'part_detail',
750 'stock_item',
751 'stock_item_detail',
752 'quantity'
753 ]
754
755
756 class BuildAttachmentSerializer(InvenTreeAttachmentSerializer):
757 """
758 Serializer for a BuildAttachment
759 """
760
761 class Meta:
762 model = BuildOrderAttachment
763
764 fields = [
765 'pk',
766 'build',
767 'attachment',
768 'link',
769 'filename',
770 'comment',
771 'upload_date',
772 ]
773
774 read_only_fields = [
775 'upload_date',
776 ]
777
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/InvenTree/build/serializers.py b/InvenTree/build/serializers.py
--- a/InvenTree/build/serializers.py
+++ b/InvenTree/build/serializers.py
@@ -236,6 +236,7 @@
auto_allocate = serializers.BooleanField(
required=False,
default=False,
+ allow_null=True,
label=_('Auto Allocate Serial Numbers'),
help_text=_('Automatically allocate required items with matching serial numbers'),
)
| {"golden_diff": "diff --git a/InvenTree/build/serializers.py b/InvenTree/build/serializers.py\n--- a/InvenTree/build/serializers.py\n+++ b/InvenTree/build/serializers.py\n@@ -236,6 +236,7 @@\n auto_allocate = serializers.BooleanField(\n required=False,\n default=False,\n+ allow_null=True,\n label=_('Auto Allocate Serial Numbers'),\n help_text=_('Automatically allocate required items with matching serial numbers'),\n )\n", "issue": "[BUG] Unable to create build output\n<!---\r\nEverything inside these brackets is hidden - please remove them where you fill out information.\r\n--->\r\n\r\n\r\n**Describe the bug**\r\nOn a fully allocated Build Order, I'm unable to create a build output.\r\n\r\n**Steps to Reproduce**\r\n\r\n1. Create any new Build Order\r\n2. Fully allocate the stock as normal\r\n3. Under the \"Pending Items\" tab, click \"New Build Output\"\r\n4. Populate the Quantity field, check \"Confirm\" then click Submit\r\n\r\n**Expected behavior**\r\nThe Build Output would be created.\r\n\r\n**Actual Behavior**\r\nBlocker: The application presents \"Form errors exist\" and no further action can be taken (as the required fields are complete)\r\n\r\n\r\nFurther inspection of the network in DevTools, shows a post to /api/build/1/create-output/ with the response:\r\n```json\r\n{\"auto_allocate\":[\"This field may not be null.\"]}\r\n```\r\n\r\n**Deployment Method**\r\n- Docker\r\n\r\n**Version Information**\r\nInvenTree-Version: 0.6.0\r\nDjango Version: 3.2.12\r\nCommit Hash: 37bd573\r\nCommit Date: 2022-02-21\r\nDatabase: postgresql\r\nDebug-Mode: False\r\nDeployed using Docker: True\r\n\n", "before_files": [{"content": "\"\"\"\nJSON serializers for Build API\n\"\"\"\n\n# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\n\nfrom django.db import transaction\nfrom django.core.exceptions import ValidationError as DjangoValidationError\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom django.db.models import Case, When, Value\nfrom django.db.models import BooleanField\n\nfrom rest_framework import serializers\nfrom rest_framework.serializers import ValidationError\n\nfrom InvenTree.serializers import InvenTreeModelSerializer, InvenTreeAttachmentSerializer\nfrom InvenTree.serializers import UserSerializerBrief, ReferenceIndexingSerializerMixin\n\nimport InvenTree.helpers\nfrom InvenTree.helpers import extract_serial_numbers\nfrom InvenTree.serializers import InvenTreeDecimalField\nfrom InvenTree.status_codes import StockStatus\n\nfrom stock.models import StockItem, StockLocation\nfrom stock.serializers import StockItemSerializerBrief, LocationSerializer\n\nfrom part.models import BomItem\nfrom part.serializers import PartSerializer, PartBriefSerializer\nfrom users.serializers import OwnerSerializer\n\nfrom .models import Build, BuildItem, BuildOrderAttachment\n\n\nclass BuildSerializer(ReferenceIndexingSerializerMixin, InvenTreeModelSerializer):\n \"\"\"\n Serializes a Build object\n \"\"\"\n\n url = serializers.CharField(source='get_absolute_url', read_only=True)\n status_text = serializers.CharField(source='get_status_display', read_only=True)\n\n part_detail = PartBriefSerializer(source='part', many=False, read_only=True)\n\n quantity = InvenTreeDecimalField()\n\n overdue = serializers.BooleanField(required=False, read_only=True)\n\n issued_by_detail = UserSerializerBrief(source='issued_by', read_only=True)\n\n responsible_detail = OwnerSerializer(source='responsible', read_only=True)\n\n @staticmethod\n def annotate_queryset(queryset):\n \"\"\"\n Add custom 
annotations to the BuildSerializer queryset,\n performing database queries as efficiently as possible.\n\n The following annoted fields are added:\n\n - overdue: True if the build is outstanding *and* the completion date has past\n\n \"\"\"\n\n # Annotate a boolean 'overdue' flag\n\n queryset = queryset.annotate(\n overdue=Case(\n When(\n Build.OVERDUE_FILTER, then=Value(True, output_field=BooleanField()),\n ),\n default=Value(False, output_field=BooleanField())\n )\n )\n\n return queryset\n\n def __init__(self, *args, **kwargs):\n part_detail = kwargs.pop('part_detail', True)\n\n super().__init__(*args, **kwargs)\n\n if part_detail is not True:\n self.fields.pop('part_detail')\n\n class Meta:\n model = Build\n fields = [\n 'pk',\n 'url',\n 'title',\n 'batch',\n 'creation_date',\n 'completed',\n 'completion_date',\n 'destination',\n 'parent',\n 'part',\n 'part_detail',\n 'overdue',\n 'reference',\n 'sales_order',\n 'quantity',\n 'status',\n 'status_text',\n 'target_date',\n 'take_from',\n 'notes',\n 'link',\n 'issued_by',\n 'issued_by_detail',\n 'responsible',\n 'responsible_detail',\n ]\n\n read_only_fields = [\n 'completed',\n 'creation_date',\n 'completion_data',\n 'status',\n 'status_text',\n ]\n\n\nclass BuildOutputSerializer(serializers.Serializer):\n \"\"\"\n Serializer for a \"BuildOutput\"\n\n Note that a \"BuildOutput\" is really just a StockItem which is \"in production\"!\n \"\"\"\n\n output = serializers.PrimaryKeyRelatedField(\n queryset=StockItem.objects.all(),\n many=False,\n allow_null=False,\n required=True,\n label=_('Build Output'),\n )\n\n def validate_output(self, output):\n\n build = self.context['build']\n\n # As this serializer can be used in multiple contexts, we need to work out why we are here\n to_complete = self.context.get('to_complete', False)\n\n # The stock item must point to the build\n if output.build != build:\n raise ValidationError(_(\"Build output does not match the parent build\"))\n\n # The part must match!\n if output.part != build.part:\n raise ValidationError(_(\"Output part does not match BuildOrder part\"))\n\n # The build output must be \"in production\"\n if not output.is_building:\n raise ValidationError(_(\"This build output has already been completed\"))\n\n if to_complete:\n\n # The build output must have all tracked parts allocated\n if not build.isFullyAllocated(output):\n raise ValidationError(_(\"This build output is not fully allocated\"))\n\n return output\n\n class Meta:\n fields = [\n 'output',\n ]\n\n\nclass BuildOutputCreateSerializer(serializers.Serializer):\n \"\"\"\n Serializer for creating a new BuildOutput against a BuildOrder.\n\n URL pattern is \"/api/build/<pk>/create-output/\", where <pk> is the PK of a Build.\n\n The Build object is provided to the serializer context.\n \"\"\"\n\n quantity = serializers.DecimalField(\n max_digits=15,\n decimal_places=5,\n min_value=0,\n required=True,\n label=_('Quantity'),\n help_text=_('Enter quantity for build output'),\n )\n\n def get_build(self):\n return self.context[\"build\"]\n\n def get_part(self):\n return self.get_build().part\n\n def validate_quantity(self, quantity):\n\n if quantity < 0:\n raise ValidationError(_(\"Quantity must be greater than zero\"))\n\n part = self.get_part()\n\n if int(quantity) != quantity:\n # Quantity must be an integer value if the part being built is trackable\n if part.trackable:\n raise ValidationError(_(\"Integer quantity required for trackable parts\"))\n\n if part.has_trackable_parts():\n raise ValidationError(_(\"Integer quantity required, 
as the bill of materials contains trackable parts\"))\n\n return quantity\n\n batch_code = serializers.CharField(\n required=False,\n allow_blank=True,\n label=_('Batch Code'),\n help_text=_('Batch code for this build output'),\n )\n\n serial_numbers = serializers.CharField(\n allow_blank=True,\n required=False,\n label=_('Serial Numbers'),\n help_text=_('Enter serial numbers for build outputs'),\n )\n\n def validate_serial_numbers(self, serial_numbers):\n\n serial_numbers = serial_numbers.strip()\n\n # TODO: Field level validation necessary here?\n return serial_numbers\n\n auto_allocate = serializers.BooleanField(\n required=False,\n default=False,\n label=_('Auto Allocate Serial Numbers'),\n help_text=_('Automatically allocate required items with matching serial numbers'),\n )\n\n def validate(self, data):\n \"\"\"\n Perform form validation\n \"\"\"\n\n part = self.get_part()\n\n # Cache a list of serial numbers (to be used in the \"save\" method)\n self.serials = None\n\n quantity = data['quantity']\n serial_numbers = data.get('serial_numbers', '')\n\n if serial_numbers:\n\n try:\n self.serials = extract_serial_numbers(serial_numbers, quantity, part.getLatestSerialNumberInt())\n except DjangoValidationError as e:\n raise ValidationError({\n 'serial_numbers': e.messages,\n })\n\n # Check for conflicting serial numbesr\n existing = []\n\n for serial in self.serials:\n if part.checkIfSerialNumberExists(serial):\n existing.append(serial)\n\n if len(existing) > 0:\n\n msg = _(\"The following serial numbers already exist\")\n msg += \" : \"\n msg += \",\".join([str(e) for e in existing])\n\n raise ValidationError({\n 'serial_numbers': msg,\n })\n\n return data\n\n def save(self):\n \"\"\"\n Generate the new build output(s)\n \"\"\"\n\n data = self.validated_data\n\n quantity = data['quantity']\n batch_code = data.get('batch_code', '')\n auto_allocate = data.get('auto_allocate', False)\n\n build = self.get_build()\n\n build.create_build_output(\n quantity,\n serials=self.serials,\n batch=batch_code,\n auto_allocate=auto_allocate,\n )\n\n\nclass BuildOutputDeleteSerializer(serializers.Serializer):\n \"\"\"\n DRF serializer for deleting (cancelling) one or more build outputs\n \"\"\"\n\n class Meta:\n fields = [\n 'outputs',\n ]\n\n outputs = BuildOutputSerializer(\n many=True,\n required=True,\n )\n\n def validate(self, data):\n\n data = super().validate(data)\n\n outputs = data.get('outputs', [])\n\n if len(outputs) == 0:\n raise ValidationError(_(\"A list of build outputs must be provided\"))\n\n return data\n\n def save(self):\n \"\"\"\n 'save' the serializer to delete the build outputs\n \"\"\"\n\n data = self.validated_data\n outputs = data.get('outputs', [])\n\n build = self.context['build']\n\n with transaction.atomic():\n for item in outputs:\n output = item['output']\n build.delete_output(output)\n\n\nclass BuildOutputCompleteSerializer(serializers.Serializer):\n \"\"\"\n DRF serializer for completing one or more build outputs\n \"\"\"\n\n class Meta:\n fields = [\n 'outputs',\n 'location',\n 'status',\n 'notes',\n ]\n\n outputs = BuildOutputSerializer(\n many=True,\n required=True,\n )\n\n location = serializers.PrimaryKeyRelatedField(\n queryset=StockLocation.objects.all(),\n required=True,\n many=False,\n label=_(\"Location\"),\n help_text=_(\"Location for completed build outputs\"),\n )\n\n status = serializers.ChoiceField(\n choices=list(StockStatus.items()),\n default=StockStatus.OK,\n label=_(\"Status\"),\n )\n\n notes = serializers.CharField(\n label=_(\"Notes\"),\n 
required=False,\n allow_blank=True,\n )\n\n def validate(self, data):\n\n super().validate(data)\n\n outputs = data.get('outputs', [])\n\n if len(outputs) == 0:\n raise ValidationError(_(\"A list of build outputs must be provided\"))\n\n return data\n\n def save(self):\n \"\"\"\n \"save\" the serializer to complete the build outputs\n \"\"\"\n\n build = self.context['build']\n request = self.context['request']\n\n data = self.validated_data\n\n outputs = data.get('outputs', [])\n\n # Mark the specified build outputs as \"complete\"\n with transaction.atomic():\n for item in outputs:\n\n output = item['output']\n\n build.complete_build_output(\n output,\n request.user,\n status=data['status'],\n notes=data.get('notes', '')\n )\n\n\nclass BuildCompleteSerializer(serializers.Serializer):\n \"\"\"\n DRF serializer for marking a BuildOrder as complete\n \"\"\"\n\n accept_unallocated = serializers.BooleanField(\n label=_('Accept Unallocated'),\n help_text=_('Accept that stock items have not been fully allocated to this build order'),\n required=False,\n default=False,\n )\n\n def validate_accept_unallocated(self, value):\n\n build = self.context['build']\n\n if not build.areUntrackedPartsFullyAllocated() and not value:\n raise ValidationError(_('Required stock has not been fully allocated'))\n\n return value\n\n accept_incomplete = serializers.BooleanField(\n label=_('Accept Incomplete'),\n help_text=_('Accept that the required number of build outputs have not been completed'),\n required=False,\n default=False,\n )\n\n def validate_accept_incomplete(self, value):\n\n build = self.context['build']\n\n if build.remaining > 0 and not value:\n raise ValidationError(_('Required build quantity has not been completed'))\n\n return value\n\n def validate(self, data):\n\n build = self.context['build']\n\n if build.incomplete_count > 0:\n raise ValidationError(_(\"Build order has incomplete outputs\"))\n\n if not build.has_build_outputs():\n raise ValidationError(_(\"No build outputs have been created for this build order\"))\n\n return data\n\n def save(self):\n\n request = self.context['request']\n build = self.context['build']\n\n build.complete_build(request.user)\n\n\nclass BuildUnallocationSerializer(serializers.Serializer):\n \"\"\"\n DRF serializer for unallocating stock from a BuildOrder\n\n Allocated stock can be unallocated with a number of filters:\n\n - output: Filter against a particular build output (blank = untracked stock)\n - bom_item: Filter against a particular BOM line item\n\n \"\"\"\n\n bom_item = serializers.PrimaryKeyRelatedField(\n queryset=BomItem.objects.all(),\n many=False,\n allow_null=True,\n required=False,\n label=_('BOM Item'),\n )\n\n output = serializers.PrimaryKeyRelatedField(\n queryset=StockItem.objects.filter(\n is_building=True,\n ),\n many=False,\n allow_null=True,\n required=False,\n label=_(\"Build output\"),\n )\n\n def validate_output(self, stock_item):\n\n # Stock item must point to the same build order!\n build = self.context['build']\n\n if stock_item and stock_item.build != build:\n raise ValidationError(_(\"Build output must point to the same build\"))\n\n return stock_item\n\n def save(self):\n \"\"\"\n 'Save' the serializer data.\n This performs the actual unallocation against the build order\n \"\"\"\n\n build = self.context['build']\n\n data = self.validated_data\n\n build.unallocateStock(\n bom_item=data['bom_item'],\n output=data['output']\n )\n\n\nclass BuildAllocationItemSerializer(serializers.Serializer):\n \"\"\"\n A serializer for allocating a 
single stock item against a build order\n \"\"\"\n\n bom_item = serializers.PrimaryKeyRelatedField(\n queryset=BomItem.objects.all(),\n many=False,\n allow_null=False,\n required=True,\n label=_('BOM Item'),\n )\n\n def validate_bom_item(self, bom_item):\n \"\"\"\n Check if the parts match!\n \"\"\"\n\n build = self.context['build']\n\n # BomItem should point to the same 'part' as the parent build\n if build.part != bom_item.part:\n\n # If not, it may be marked as \"inherited\" from a parent part\n if bom_item.inherited and build.part in bom_item.part.get_descendants(include_self=False):\n pass\n else:\n raise ValidationError(_(\"bom_item.part must point to the same part as the build order\"))\n\n return bom_item\n\n stock_item = serializers.PrimaryKeyRelatedField(\n queryset=StockItem.objects.all(),\n many=False,\n allow_null=False,\n required=True,\n label=_('Stock Item'),\n )\n\n def validate_stock_item(self, stock_item):\n\n if not stock_item.in_stock:\n raise ValidationError(_(\"Item must be in stock\"))\n\n return stock_item\n\n quantity = serializers.DecimalField(\n max_digits=15,\n decimal_places=5,\n min_value=0,\n required=True\n )\n\n def validate_quantity(self, quantity):\n\n if quantity <= 0:\n raise ValidationError(_(\"Quantity must be greater than zero\"))\n\n return quantity\n\n output = serializers.PrimaryKeyRelatedField(\n queryset=StockItem.objects.filter(is_building=True),\n many=False,\n allow_null=True,\n required=False,\n label=_('Build Output'),\n )\n\n class Meta:\n fields = [\n 'bom_item',\n 'stock_item',\n 'quantity',\n 'output',\n ]\n\n def validate(self, data):\n\n super().validate(data)\n\n bom_item = data['bom_item']\n stock_item = data['stock_item']\n quantity = data['quantity']\n output = data.get('output', None)\n\n # build = self.context['build']\n\n # TODO: Check that the \"stock item\" is valid for the referenced \"sub_part\"\n # Note: Because of allow_variants options, it may not be a direct match!\n\n # Check that the quantity does not exceed the available amount from the stock item\n q = stock_item.unallocated_quantity()\n\n if quantity > q:\n\n q = InvenTree.helpers.clean_decimal(q)\n\n raise ValidationError({\n 'quantity': _(f\"Available quantity ({q}) exceeded\")\n })\n\n # Output *must* be set for trackable parts\n if output is None and bom_item.sub_part.trackable:\n raise ValidationError({\n 'output': _('Build output must be specified for allocation of tracked parts')\n })\n\n # Output *cannot* be set for un-tracked parts\n if output is not None and not bom_item.sub_part.trackable:\n\n raise ValidationError({\n 'output': _('Build output cannot be specified for allocation of untracked parts')\n })\n\n return data\n\n\nclass BuildAllocationSerializer(serializers.Serializer):\n \"\"\"\n DRF serializer for allocation stock items against a build order\n \"\"\"\n\n items = BuildAllocationItemSerializer(many=True)\n\n class Meta:\n fields = [\n 'items',\n ]\n\n def validate(self, data):\n \"\"\"\n Validation\n \"\"\"\n\n data = super().validate(data)\n\n items = data.get('items', [])\n\n if len(items) == 0:\n raise ValidationError(_('Allocation items must be provided'))\n\n return data\n\n def save(self):\n\n data = self.validated_data\n\n items = data.get('items', [])\n\n build = self.context['build']\n\n with transaction.atomic():\n for item in items:\n bom_item = item['bom_item']\n stock_item = item['stock_item']\n quantity = item['quantity']\n output = item.get('output', None)\n\n try:\n # Create a new BuildItem to allocate stock\n 
BuildItem.objects.create(\n build=build,\n bom_item=bom_item,\n stock_item=stock_item,\n quantity=quantity,\n install_into=output\n )\n except (ValidationError, DjangoValidationError) as exc:\n # Catch model errors and re-throw as DRF errors\n raise ValidationError(detail=serializers.as_serializer_error(exc))\n\n\nclass BuildItemSerializer(InvenTreeModelSerializer):\n \"\"\" Serializes a BuildItem object \"\"\"\n\n bom_part = serializers.IntegerField(source='bom_item.sub_part.pk', read_only=True)\n part = serializers.IntegerField(source='stock_item.part.pk', read_only=True)\n location = serializers.IntegerField(source='stock_item.location.pk', read_only=True)\n\n # Extra (optional) detail fields\n part_detail = PartSerializer(source='stock_item.part', many=False, read_only=True)\n build_detail = BuildSerializer(source='build', many=False, read_only=True)\n stock_item_detail = StockItemSerializerBrief(source='stock_item', read_only=True)\n location_detail = LocationSerializer(source='stock_item.location', read_only=True)\n\n quantity = InvenTreeDecimalField()\n\n def __init__(self, *args, **kwargs):\n\n build_detail = kwargs.pop('build_detail', False)\n part_detail = kwargs.pop('part_detail', False)\n location_detail = kwargs.pop('location_detail', False)\n\n super().__init__(*args, **kwargs)\n\n if not build_detail:\n self.fields.pop('build_detail')\n\n if not part_detail:\n self.fields.pop('part_detail')\n\n if not location_detail:\n self.fields.pop('location_detail')\n\n class Meta:\n model = BuildItem\n fields = [\n 'pk',\n 'bom_part',\n 'build',\n 'build_detail',\n 'install_into',\n 'location',\n 'location_detail',\n 'part',\n 'part_detail',\n 'stock_item',\n 'stock_item_detail',\n 'quantity'\n ]\n\n\nclass BuildAttachmentSerializer(InvenTreeAttachmentSerializer):\n \"\"\"\n Serializer for a BuildAttachment\n \"\"\"\n\n class Meta:\n model = BuildOrderAttachment\n\n fields = [\n 'pk',\n 'build',\n 'attachment',\n 'link',\n 'filename',\n 'comment',\n 'upload_date',\n ]\n\n read_only_fields = [\n 'upload_date',\n ]\n", "path": "InvenTree/build/serializers.py"}], "after_files": [{"content": "\"\"\"\nJSON serializers for Build API\n\"\"\"\n\n# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\n\nfrom django.db import transaction\nfrom django.core.exceptions import ValidationError as DjangoValidationError\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom django.db.models import Case, When, Value\nfrom django.db.models import BooleanField\n\nfrom rest_framework import serializers\nfrom rest_framework.serializers import ValidationError\n\nfrom InvenTree.serializers import InvenTreeModelSerializer, InvenTreeAttachmentSerializer\nfrom InvenTree.serializers import UserSerializerBrief, ReferenceIndexingSerializerMixin\n\nimport InvenTree.helpers\nfrom InvenTree.helpers import extract_serial_numbers\nfrom InvenTree.serializers import InvenTreeDecimalField\nfrom InvenTree.status_codes import StockStatus\n\nfrom stock.models import StockItem, StockLocation\nfrom stock.serializers import StockItemSerializerBrief, LocationSerializer\n\nfrom part.models import BomItem\nfrom part.serializers import PartSerializer, PartBriefSerializer\nfrom users.serializers import OwnerSerializer\n\nfrom .models import Build, BuildItem, BuildOrderAttachment\n\n\nclass BuildSerializer(ReferenceIndexingSerializerMixin, InvenTreeModelSerializer):\n \"\"\"\n Serializes a Build object\n \"\"\"\n\n url = serializers.CharField(source='get_absolute_url', read_only=True)\n status_text = 
serializers.CharField(source='get_status_display', read_only=True)\n\n part_detail = PartBriefSerializer(source='part', many=False, read_only=True)\n\n quantity = InvenTreeDecimalField()\n\n overdue = serializers.BooleanField(required=False, read_only=True)\n\n issued_by_detail = UserSerializerBrief(source='issued_by', read_only=True)\n\n responsible_detail = OwnerSerializer(source='responsible', read_only=True)\n\n @staticmethod\n def annotate_queryset(queryset):\n \"\"\"\n Add custom annotations to the BuildSerializer queryset,\n performing database queries as efficiently as possible.\n\n The following annoted fields are added:\n\n - overdue: True if the build is outstanding *and* the completion date has past\n\n \"\"\"\n\n # Annotate a boolean 'overdue' flag\n\n queryset = queryset.annotate(\n overdue=Case(\n When(\n Build.OVERDUE_FILTER, then=Value(True, output_field=BooleanField()),\n ),\n default=Value(False, output_field=BooleanField())\n )\n )\n\n return queryset\n\n def __init__(self, *args, **kwargs):\n part_detail = kwargs.pop('part_detail', True)\n\n super().__init__(*args, **kwargs)\n\n if part_detail is not True:\n self.fields.pop('part_detail')\n\n class Meta:\n model = Build\n fields = [\n 'pk',\n 'url',\n 'title',\n 'batch',\n 'creation_date',\n 'completed',\n 'completion_date',\n 'destination',\n 'parent',\n 'part',\n 'part_detail',\n 'overdue',\n 'reference',\n 'sales_order',\n 'quantity',\n 'status',\n 'status_text',\n 'target_date',\n 'take_from',\n 'notes',\n 'link',\n 'issued_by',\n 'issued_by_detail',\n 'responsible',\n 'responsible_detail',\n ]\n\n read_only_fields = [\n 'completed',\n 'creation_date',\n 'completion_data',\n 'status',\n 'status_text',\n ]\n\n\nclass BuildOutputSerializer(serializers.Serializer):\n \"\"\"\n Serializer for a \"BuildOutput\"\n\n Note that a \"BuildOutput\" is really just a StockItem which is \"in production\"!\n \"\"\"\n\n output = serializers.PrimaryKeyRelatedField(\n queryset=StockItem.objects.all(),\n many=False,\n allow_null=False,\n required=True,\n label=_('Build Output'),\n )\n\n def validate_output(self, output):\n\n build = self.context['build']\n\n # As this serializer can be used in multiple contexts, we need to work out why we are here\n to_complete = self.context.get('to_complete', False)\n\n # The stock item must point to the build\n if output.build != build:\n raise ValidationError(_(\"Build output does not match the parent build\"))\n\n # The part must match!\n if output.part != build.part:\n raise ValidationError(_(\"Output part does not match BuildOrder part\"))\n\n # The build output must be \"in production\"\n if not output.is_building:\n raise ValidationError(_(\"This build output has already been completed\"))\n\n if to_complete:\n\n # The build output must have all tracked parts allocated\n if not build.isFullyAllocated(output):\n raise ValidationError(_(\"This build output is not fully allocated\"))\n\n return output\n\n class Meta:\n fields = [\n 'output',\n ]\n\n\nclass BuildOutputCreateSerializer(serializers.Serializer):\n \"\"\"\n Serializer for creating a new BuildOutput against a BuildOrder.\n\n URL pattern is \"/api/build/<pk>/create-output/\", where <pk> is the PK of a Build.\n\n The Build object is provided to the serializer context.\n \"\"\"\n\n quantity = serializers.DecimalField(\n max_digits=15,\n decimal_places=5,\n min_value=0,\n required=True,\n label=_('Quantity'),\n help_text=_('Enter quantity for build output'),\n )\n\n def get_build(self):\n return self.context[\"build\"]\n\n def 
get_part(self):\n return self.get_build().part\n\n def validate_quantity(self, quantity):\n\n if quantity < 0:\n raise ValidationError(_(\"Quantity must be greater than zero\"))\n\n part = self.get_part()\n\n if int(quantity) != quantity:\n # Quantity must be an integer value if the part being built is trackable\n if part.trackable:\n raise ValidationError(_(\"Integer quantity required for trackable parts\"))\n\n if part.has_trackable_parts():\n raise ValidationError(_(\"Integer quantity required, as the bill of materials contains trackable parts\"))\n\n return quantity\n\n batch_code = serializers.CharField(\n required=False,\n allow_blank=True,\n label=_('Batch Code'),\n help_text=_('Batch code for this build output'),\n )\n\n serial_numbers = serializers.CharField(\n allow_blank=True,\n required=False,\n label=_('Serial Numbers'),\n help_text=_('Enter serial numbers for build outputs'),\n )\n\n def validate_serial_numbers(self, serial_numbers):\n\n serial_numbers = serial_numbers.strip()\n\n # TODO: Field level validation necessary here?\n return serial_numbers\n\n auto_allocate = serializers.BooleanField(\n required=False,\n default=False,\n allow_null=True,\n label=_('Auto Allocate Serial Numbers'),\n help_text=_('Automatically allocate required items with matching serial numbers'),\n )\n\n def validate(self, data):\n \"\"\"\n Perform form validation\n \"\"\"\n\n part = self.get_part()\n\n # Cache a list of serial numbers (to be used in the \"save\" method)\n self.serials = None\n\n quantity = data['quantity']\n serial_numbers = data.get('serial_numbers', '')\n\n if serial_numbers:\n\n try:\n self.serials = extract_serial_numbers(serial_numbers, quantity, part.getLatestSerialNumberInt())\n except DjangoValidationError as e:\n raise ValidationError({\n 'serial_numbers': e.messages,\n })\n\n # Check for conflicting serial numbesr\n existing = []\n\n for serial in self.serials:\n if part.checkIfSerialNumberExists(serial):\n existing.append(serial)\n\n if len(existing) > 0:\n\n msg = _(\"The following serial numbers already exist\")\n msg += \" : \"\n msg += \",\".join([str(e) for e in existing])\n\n raise ValidationError({\n 'serial_numbers': msg,\n })\n\n return data\n\n def save(self):\n \"\"\"\n Generate the new build output(s)\n \"\"\"\n\n data = self.validated_data\n\n quantity = data['quantity']\n batch_code = data.get('batch_code', '')\n auto_allocate = data.get('auto_allocate', False)\n\n build = self.get_build()\n\n build.create_build_output(\n quantity,\n serials=self.serials,\n batch=batch_code,\n auto_allocate=auto_allocate,\n )\n\n\nclass BuildOutputDeleteSerializer(serializers.Serializer):\n \"\"\"\n DRF serializer for deleting (cancelling) one or more build outputs\n \"\"\"\n\n class Meta:\n fields = [\n 'outputs',\n ]\n\n outputs = BuildOutputSerializer(\n many=True,\n required=True,\n )\n\n def validate(self, data):\n\n data = super().validate(data)\n\n outputs = data.get('outputs', [])\n\n if len(outputs) == 0:\n raise ValidationError(_(\"A list of build outputs must be provided\"))\n\n return data\n\n def save(self):\n \"\"\"\n 'save' the serializer to delete the build outputs\n \"\"\"\n\n data = self.validated_data\n outputs = data.get('outputs', [])\n\n build = self.context['build']\n\n with transaction.atomic():\n for item in outputs:\n output = item['output']\n build.delete_output(output)\n\n\nclass BuildOutputCompleteSerializer(serializers.Serializer):\n \"\"\"\n DRF serializer for completing one or more build outputs\n \"\"\"\n\n class Meta:\n fields = [\n 
'outputs',\n 'location',\n 'status',\n 'notes',\n ]\n\n outputs = BuildOutputSerializer(\n many=True,\n required=True,\n )\n\n location = serializers.PrimaryKeyRelatedField(\n queryset=StockLocation.objects.all(),\n required=True,\n many=False,\n label=_(\"Location\"),\n help_text=_(\"Location for completed build outputs\"),\n )\n\n status = serializers.ChoiceField(\n choices=list(StockStatus.items()),\n default=StockStatus.OK,\n label=_(\"Status\"),\n )\n\n notes = serializers.CharField(\n label=_(\"Notes\"),\n required=False,\n allow_blank=True,\n )\n\n def validate(self, data):\n\n super().validate(data)\n\n outputs = data.get('outputs', [])\n\n if len(outputs) == 0:\n raise ValidationError(_(\"A list of build outputs must be provided\"))\n\n return data\n\n def save(self):\n \"\"\"\n \"save\" the serializer to complete the build outputs\n \"\"\"\n\n build = self.context['build']\n request = self.context['request']\n\n data = self.validated_data\n\n outputs = data.get('outputs', [])\n\n # Mark the specified build outputs as \"complete\"\n with transaction.atomic():\n for item in outputs:\n\n output = item['output']\n\n build.complete_build_output(\n output,\n request.user,\n status=data['status'],\n notes=data.get('notes', '')\n )\n\n\nclass BuildCompleteSerializer(serializers.Serializer):\n \"\"\"\n DRF serializer for marking a BuildOrder as complete\n \"\"\"\n\n accept_unallocated = serializers.BooleanField(\n label=_('Accept Unallocated'),\n help_text=_('Accept that stock items have not been fully allocated to this build order'),\n required=False,\n default=False,\n )\n\n def validate_accept_unallocated(self, value):\n\n build = self.context['build']\n\n if not build.areUntrackedPartsFullyAllocated() and not value:\n raise ValidationError(_('Required stock has not been fully allocated'))\n\n return value\n\n accept_incomplete = serializers.BooleanField(\n label=_('Accept Incomplete'),\n help_text=_('Accept that the required number of build outputs have not been completed'),\n required=False,\n default=False,\n )\n\n def validate_accept_incomplete(self, value):\n\n build = self.context['build']\n\n if build.remaining > 0 and not value:\n raise ValidationError(_('Required build quantity has not been completed'))\n\n return value\n\n def validate(self, data):\n\n build = self.context['build']\n\n if build.incomplete_count > 0:\n raise ValidationError(_(\"Build order has incomplete outputs\"))\n\n if not build.has_build_outputs():\n raise ValidationError(_(\"No build outputs have been created for this build order\"))\n\n return data\n\n def save(self):\n\n request = self.context['request']\n build = self.context['build']\n\n build.complete_build(request.user)\n\n\nclass BuildUnallocationSerializer(serializers.Serializer):\n \"\"\"\n DRF serializer for unallocating stock from a BuildOrder\n\n Allocated stock can be unallocated with a number of filters:\n\n - output: Filter against a particular build output (blank = untracked stock)\n - bom_item: Filter against a particular BOM line item\n\n \"\"\"\n\n bom_item = serializers.PrimaryKeyRelatedField(\n queryset=BomItem.objects.all(),\n many=False,\n allow_null=True,\n required=False,\n label=_('BOM Item'),\n )\n\n output = serializers.PrimaryKeyRelatedField(\n queryset=StockItem.objects.filter(\n is_building=True,\n ),\n many=False,\n allow_null=True,\n required=False,\n label=_(\"Build output\"),\n )\n\n def validate_output(self, stock_item):\n\n # Stock item must point to the same build order!\n build = self.context['build']\n\n if 
stock_item and stock_item.build != build:\n raise ValidationError(_(\"Build output must point to the same build\"))\n\n return stock_item\n\n def save(self):\n \"\"\"\n 'Save' the serializer data.\n This performs the actual unallocation against the build order\n \"\"\"\n\n build = self.context['build']\n\n data = self.validated_data\n\n build.unallocateStock(\n bom_item=data['bom_item'],\n output=data['output']\n )\n\n\nclass BuildAllocationItemSerializer(serializers.Serializer):\n \"\"\"\n A serializer for allocating a single stock item against a build order\n \"\"\"\n\n bom_item = serializers.PrimaryKeyRelatedField(\n queryset=BomItem.objects.all(),\n many=False,\n allow_null=False,\n required=True,\n label=_('BOM Item'),\n )\n\n def validate_bom_item(self, bom_item):\n \"\"\"\n Check if the parts match!\n \"\"\"\n\n build = self.context['build']\n\n # BomItem should point to the same 'part' as the parent build\n if build.part != bom_item.part:\n\n # If not, it may be marked as \"inherited\" from a parent part\n if bom_item.inherited and build.part in bom_item.part.get_descendants(include_self=False):\n pass\n else:\n raise ValidationError(_(\"bom_item.part must point to the same part as the build order\"))\n\n return bom_item\n\n stock_item = serializers.PrimaryKeyRelatedField(\n queryset=StockItem.objects.all(),\n many=False,\n allow_null=False,\n required=True,\n label=_('Stock Item'),\n )\n\n def validate_stock_item(self, stock_item):\n\n if not stock_item.in_stock:\n raise ValidationError(_(\"Item must be in stock\"))\n\n return stock_item\n\n quantity = serializers.DecimalField(\n max_digits=15,\n decimal_places=5,\n min_value=0,\n required=True\n )\n\n def validate_quantity(self, quantity):\n\n if quantity <= 0:\n raise ValidationError(_(\"Quantity must be greater than zero\"))\n\n return quantity\n\n output = serializers.PrimaryKeyRelatedField(\n queryset=StockItem.objects.filter(is_building=True),\n many=False,\n allow_null=True,\n required=False,\n label=_('Build Output'),\n )\n\n class Meta:\n fields = [\n 'bom_item',\n 'stock_item',\n 'quantity',\n 'output',\n ]\n\n def validate(self, data):\n\n super().validate(data)\n\n bom_item = data['bom_item']\n stock_item = data['stock_item']\n quantity = data['quantity']\n output = data.get('output', None)\n\n # build = self.context['build']\n\n # TODO: Check that the \"stock item\" is valid for the referenced \"sub_part\"\n # Note: Because of allow_variants options, it may not be a direct match!\n\n # Check that the quantity does not exceed the available amount from the stock item\n q = stock_item.unallocated_quantity()\n\n if quantity > q:\n\n q = InvenTree.helpers.clean_decimal(q)\n\n raise ValidationError({\n 'quantity': _(f\"Available quantity ({q}) exceeded\")\n })\n\n # Output *must* be set for trackable parts\n if output is None and bom_item.sub_part.trackable:\n raise ValidationError({\n 'output': _('Build output must be specified for allocation of tracked parts')\n })\n\n # Output *cannot* be set for un-tracked parts\n if output is not None and not bom_item.sub_part.trackable:\n\n raise ValidationError({\n 'output': _('Build output cannot be specified for allocation of untracked parts')\n })\n\n return data\n\n\nclass BuildAllocationSerializer(serializers.Serializer):\n \"\"\"\n DRF serializer for allocation stock items against a build order\n \"\"\"\n\n items = BuildAllocationItemSerializer(many=True)\n\n class Meta:\n fields = [\n 'items',\n ]\n\n def validate(self, data):\n \"\"\"\n Validation\n \"\"\"\n\n data = 
super().validate(data)\n\n items = data.get('items', [])\n\n if len(items) == 0:\n raise ValidationError(_('Allocation items must be provided'))\n\n return data\n\n def save(self):\n\n data = self.validated_data\n\n items = data.get('items', [])\n\n build = self.context['build']\n\n with transaction.atomic():\n for item in items:\n bom_item = item['bom_item']\n stock_item = item['stock_item']\n quantity = item['quantity']\n output = item.get('output', None)\n\n try:\n # Create a new BuildItem to allocate stock\n BuildItem.objects.create(\n build=build,\n bom_item=bom_item,\n stock_item=stock_item,\n quantity=quantity,\n install_into=output\n )\n except (ValidationError, DjangoValidationError) as exc:\n # Catch model errors and re-throw as DRF errors\n raise ValidationError(detail=serializers.as_serializer_error(exc))\n\n\nclass BuildItemSerializer(InvenTreeModelSerializer):\n \"\"\" Serializes a BuildItem object \"\"\"\n\n bom_part = serializers.IntegerField(source='bom_item.sub_part.pk', read_only=True)\n part = serializers.IntegerField(source='stock_item.part.pk', read_only=True)\n location = serializers.IntegerField(source='stock_item.location.pk', read_only=True)\n\n # Extra (optional) detail fields\n part_detail = PartSerializer(source='stock_item.part', many=False, read_only=True)\n build_detail = BuildSerializer(source='build', many=False, read_only=True)\n stock_item_detail = StockItemSerializerBrief(source='stock_item', read_only=True)\n location_detail = LocationSerializer(source='stock_item.location', read_only=True)\n\n quantity = InvenTreeDecimalField()\n\n def __init__(self, *args, **kwargs):\n\n build_detail = kwargs.pop('build_detail', False)\n part_detail = kwargs.pop('part_detail', False)\n location_detail = kwargs.pop('location_detail', False)\n\n super().__init__(*args, **kwargs)\n\n if not build_detail:\n self.fields.pop('build_detail')\n\n if not part_detail:\n self.fields.pop('part_detail')\n\n if not location_detail:\n self.fields.pop('location_detail')\n\n class Meta:\n model = BuildItem\n fields = [\n 'pk',\n 'bom_part',\n 'build',\n 'build_detail',\n 'install_into',\n 'location',\n 'location_detail',\n 'part',\n 'part_detail',\n 'stock_item',\n 'stock_item_detail',\n 'quantity'\n ]\n\n\nclass BuildAttachmentSerializer(InvenTreeAttachmentSerializer):\n \"\"\"\n Serializer for a BuildAttachment\n \"\"\"\n\n class Meta:\n model = BuildOrderAttachment\n\n fields = [\n 'pk',\n 'build',\n 'attachment',\n 'link',\n 'filename',\n 'comment',\n 'upload_date',\n ]\n\n read_only_fields = [\n 'upload_date',\n ]\n", "path": "InvenTree/build/serializers.py"}]} |
gh_patches_debug_1166 | rasdani/github-patches | git_diff | svthalia__concrexit-3558 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Separate promotion permissions in eventadmin inline from the main promotion perms
### What?
Currently, people need add/change_promotionrequest permission to make promotionrequests for their events. But with this permission they also get full access to all other promotionrequests. So we should make the inline in the eventadmin bypass that check. For all I care, anyone who can change an event can make a promotion request from the eventadmin (by virtue of their 'change_event' permission and being an organizer of the event), without having the add/change_promotionrequest permission, and thus without seeing the main Promotion Requests changelist page.
### Why?
<!-- A clear and concise motivation why we should consider implementing this. -->
Least privilege principle: many people should be allowed to request promotion for their own events, but don't need to be able to edit unrelated requests. And this way we can have promocie be able to bypass the requirements in #3529, without normal organizers being able to do the same.
### How?
Override the has_xxx_permission() methods on the inline class (see the sketch below). Read the InlineModelAdmin docs for guidance.
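A minimal sketch of that approach, assuming the existing `PromotionRequestInline` in `website/events/admin/inlines.py`; the blanket `return True` policy is one possible choice (it mirrors the accompanying fix), and a finer-grained check could live in these same hooks:

```python
from django.contrib import admin

from promotion.models import PromotionRequest


class PromotionRequestInline(admin.StackedInline):
    model = PromotionRequest
    readonly_fields = ("assigned_to", "status", "drive_folder")
    extra = 0

    # Bypass the global add/change/delete_promotionrequest checks for this
    # inline; access is then governed by the event admin itself (change_event
    # permission plus being an organizer of the event).
    def has_add_permission(self, request, obj=None):
        return True

    def has_view_permission(self, request, obj=None):
        return True

    def has_change_permission(self, request, obj=None):
        return True

    def has_delete_permission(self, request, obj=None):
        return True
```

With these overrides, organizers never need the add/change_promotionrequest permission and therefore never see the main Promotion Requests changelist.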
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `website/events/admin/inlines.py`
Content:
```
1 from django.contrib import admin
2
3 from events import models
4 from pizzas.models import FoodEvent
5 from promotion.models import PromotionRequest
6
7 from .forms import RegistrationInformationFieldForm
8
9
10 class RegistrationInformationFieldInline(admin.TabularInline):
11 """The inline for registration information fields in the Event admin."""
12
13 form = RegistrationInformationFieldForm
14 extra = 0
15 model = models.RegistrationInformationField
16 ordering = ("_order",)
17
18 radio_fields = {"type": admin.VERTICAL}
19
20 def get_formset(self, request, obj=None, **kwargs):
21 formset = super().get_formset(request, obj, **kwargs)
22 if obj is not None:
23 count = obj.registrationinformationfield_set.count()
24 formset.form.declared_fields["order"].initial = count
25 return formset
26
27
28 class PizzaEventInline(admin.StackedInline):
29 """The inline for pizza events in the Event admin."""
30
31 model = FoodEvent
32 extra = 0
33 max_num = 1
34
35
36 class PromotionRequestInline(admin.StackedInline):
37 model = PromotionRequest
38 readonly_fields = (
39 "assigned_to",
40 "status",
41 "drive_folder",
42 )
43 extra = 0
44
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/website/events/admin/inlines.py b/website/events/admin/inlines.py
--- a/website/events/admin/inlines.py
+++ b/website/events/admin/inlines.py
@@ -39,5 +39,19 @@
"assigned_to",
"status",
"drive_folder",
+ "status_updated",
)
+
+ def has_add_permission(self, request, obj=None):
+ return True
+
+ def has_view_permission(self, request, obj=None):
+ return True
+
+ def has_change_permission(self, request, obj=None):
+ return True
+
+ def has_delete_permission(self, request, obj=None):
+ return True
+
extra = 0
| {"golden_diff": "diff --git a/website/events/admin/inlines.py b/website/events/admin/inlines.py\n--- a/website/events/admin/inlines.py\n+++ b/website/events/admin/inlines.py\n@@ -39,5 +39,19 @@\n \"assigned_to\",\n \"status\",\n \"drive_folder\",\n+ \"status_updated\",\n )\n+\n+ def has_add_permission(self, request, obj=None):\n+ return True\n+\n+ def has_view_permission(self, request, obj=None):\n+ return True\n+\n+ def has_change_permission(self, request, obj=None):\n+ return True\n+\n+ def has_delete_permission(self, request, obj=None):\n+ return True\n+\n extra = 0\n", "issue": "Separate promotion permissions in eventadmin inline from the main promotion perms\n### What?\r\nCurrently, people need add/change_promotionrequest permission to make promotionrequests for their events. But with this permission they also get full access to all other promotionrequests. So we should make the inline in the eventadmin bypass that check. For all I care, anyone who can change an event can make a promorquest from the eventadmin (by virtue of their 'change_event' permission and being an organizer of the event), without having the add/change_promotionrequest permission, and thus without seeing the main Promotion Requests changelist page.\r\n\r\n### Why?\r\n<!-- A clear and concise motivation why we should consider implementing this. -->\r\nLeast privilege principle: many people should be allowed to request promotion for their own events, but don't need to be able to edit unrelated requests. And this way we can have promocie be able to bypass the requirements in #3529, without normal organizers being able to do the same.\r\n\r\n### How?\r\nOverride has_xxx_permission() on the inline class. Read the inlinemodeladmin docs for guidance.\r\n\n", "before_files": [{"content": "from django.contrib import admin\n\nfrom events import models\nfrom pizzas.models import FoodEvent\nfrom promotion.models import PromotionRequest\n\nfrom .forms import RegistrationInformationFieldForm\n\n\nclass RegistrationInformationFieldInline(admin.TabularInline):\n \"\"\"The inline for registration information fields in the Event admin.\"\"\"\n\n form = RegistrationInformationFieldForm\n extra = 0\n model = models.RegistrationInformationField\n ordering = (\"_order\",)\n\n radio_fields = {\"type\": admin.VERTICAL}\n\n def get_formset(self, request, obj=None, **kwargs):\n formset = super().get_formset(request, obj, **kwargs)\n if obj is not None:\n count = obj.registrationinformationfield_set.count()\n formset.form.declared_fields[\"order\"].initial = count\n return formset\n\n\nclass PizzaEventInline(admin.StackedInline):\n \"\"\"The inline for pizza events in the Event admin.\"\"\"\n\n model = FoodEvent\n extra = 0\n max_num = 1\n\n\nclass PromotionRequestInline(admin.StackedInline):\n model = PromotionRequest\n readonly_fields = (\n \"assigned_to\",\n \"status\",\n \"drive_folder\",\n )\n extra = 0\n", "path": "website/events/admin/inlines.py"}], "after_files": [{"content": "from django.contrib import admin\n\nfrom events import models\nfrom pizzas.models import FoodEvent\nfrom promotion.models import PromotionRequest\n\nfrom .forms import RegistrationInformationFieldForm\n\n\nclass RegistrationInformationFieldInline(admin.TabularInline):\n \"\"\"The inline for registration information fields in the Event admin.\"\"\"\n\n form = RegistrationInformationFieldForm\n extra = 0\n model = models.RegistrationInformationField\n ordering = (\"_order\",)\n\n radio_fields = {\"type\": admin.VERTICAL}\n\n def get_formset(self, request, 
obj=None, **kwargs):\n formset = super().get_formset(request, obj, **kwargs)\n if obj is not None:\n count = obj.registrationinformationfield_set.count()\n formset.form.declared_fields[\"order\"].initial = count\n return formset\n\n\nclass PizzaEventInline(admin.StackedInline):\n \"\"\"The inline for pizza events in the Event admin.\"\"\"\n\n model = FoodEvent\n extra = 0\n max_num = 1\n\n\nclass PromotionRequestInline(admin.StackedInline):\n model = PromotionRequest\n readonly_fields = (\n \"assigned_to\",\n \"status\",\n \"drive_folder\",\n \"status_updated\",\n )\n\n def has_add_permission(self, request, obj=None):\n return True\n\n def has_view_permission(self, request, obj=None):\n return True\n\n def has_change_permission(self, request, obj=None):\n return True\n\n def has_delete_permission(self, request, obj=None):\n return True\n\n extra = 0\n", "path": "website/events/admin/inlines.py"}]} |
gh_patches_debug_1167 | rasdani/github-patches | git_diff | saleor__saleor-2909 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
OrderCancel mutation throws error when asked not to restock inventory
### What I'm trying to achieve
Cancel order without doing restock.
### Steps to reproduce the problem
1. Execute query
```
mutation {
orderCancel(id: "T3JkZXI6MTQ=", restock: false) {
errors {
field
message
}
order {
id
}
}
}
```
2. Get an error
3. Order is cancelled anyway
### What I expected to happen
To work perfectly. Note: if `restock: true`, mutation executes properly.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `saleor/graphql/order/mutations/orders.py`
Content:
```
1 import graphene
2 from django.utils.translation import pgettext_lazy
3 from graphql_jwt.decorators import permission_required
4 from payments import PaymentError, PaymentStatus
5
6 from ....core.utils.taxes import ZERO_TAXED_MONEY
7 from ....order import CustomPaymentChoices, OrderEvents, models
8 from ....order.utils import cancel_order
9 from ....shipping.models import ShippingMethod as ShippingMethodModel
10 from ...account.types import AddressInput
11 from ...core.mutations import BaseMutation
12 from ...core.types.common import Decimal, Error
13 from ...order.mutations.draft_orders import DraftOrderUpdate
14 from ...order.types import Order, OrderEvent
15 from ...shipping.types import ShippingMethod
16
17
18 def clean_order_update_shipping(order, method, errors):
19 if not method:
20 return errors
21 if not order.shipping_address:
22 errors.append(
23 Error(
24 field='order',
25 message=(
26 'Cannot choose a shipping method for an '
27 'order without the shipping address.')))
28 return errors
29 valid_methods = (
30 ShippingMethodModel.objects.applicable_shipping_methods(
31 price=order.get_subtotal().gross.amount,
32 weight=order.get_total_weight(),
33 country_code=order.shipping_address.country.code))
34 valid_methods = valid_methods.values_list('id', flat=True)
35 if method.pk not in valid_methods:
36 errors.append(
37 Error(
38 field='shippingMethod',
39 message='Shipping method cannot be used with this order.'))
40 return errors
41
42
43 def try_payment_action(action, money, errors):
44 try:
45 action(money)
46 except (PaymentError, ValueError) as e:
47 errors.append(Error(field='payment', message=str(e)))
48
49
50 def clean_order_cancel(order, errors):
51 if order and not order.can_cancel():
52 errors.append(
53 Error(
54 field='order',
55 message='This order can\'t be canceled.'))
56 return errors
57
58
59 def clean_order_mark_as_paid(order, errors):
60 if order and order.payments.exists():
61 errors.append(
62 Error(
63 field='payment',
64 message='Orders with payments can not be manually '
65 'marked as paid.'))
66 return errors
67
68
69 def clean_order_capture(payment, amount, errors):
70 if not payment:
71 errors.append(
72 Error(
73 field='payment',
74 message='There\'s no payment associated with the order.'))
75 return errors
76 if payment.status != PaymentStatus.PREAUTH:
77 errors.append(
78 Error(
79 field='payment',
80 message='Only pre-authorized payments can be captured'))
81 return errors
82
83
84 def clean_release_payment(payment, errors):
85 """Check for payment errors."""
86 if payment.status != PaymentStatus.PREAUTH:
87 errors.append(
88 Error(field='payment',
89 message='Only pre-authorized payments can be released'))
90 try:
91 payment.release()
92 except (PaymentError, ValueError) as e:
93 errors.append(Error(field='payment', message=str(e)))
94 return errors
95
96
97 def clean_refund_payment(payment, amount, errors):
98 if payment.variant == CustomPaymentChoices.MANUAL:
99 errors.append(
100 Error(field='payment',
101 message='Manual payments can not be refunded.'))
102 return errors
103
104
105 class OrderUpdateInput(graphene.InputObjectType):
106 billing_address = AddressInput(
107 description='Billing address of the customer.')
108 user_email = graphene.String(description='Email address of the customer.')
109 shipping_address = AddressInput(
110 description='Shipping address of the customer.')
111
112
113 class OrderUpdate(DraftOrderUpdate):
114 class Arguments:
115 id = graphene.ID(
116 required=True, description='ID of an order to update.')
117 input = OrderUpdateInput(
118 required=True,
119 description='Fields required to update an order.')
120
121 class Meta:
122 description = 'Updates an order.'
123 model = models.Order
124
125
126 class OrderUpdateShippingInput(graphene.InputObjectType):
127 shipping_method = graphene.ID(
128 description='ID of the selected shipping method.',
129 name='shippingMethod')
130
131
132 class OrderUpdateShipping(BaseMutation):
133 order = graphene.Field(
134 Order, description='Order with updated shipping method.')
135
136 class Arguments:
137 id = graphene.ID(
138 required=True, name='order',
139 description='ID of the order to update a shipping method.')
140 input = OrderUpdateShippingInput(
141 description='Fields required to change '
142 'shipping method of the order.')
143
144 class Meta:
145 description = 'Updates a shipping method of the order.'
146
147 @classmethod
148 @permission_required('order.manage_orders')
149 def mutate(cls, root, info, id, input):
150 errors = []
151 order = cls.get_node_or_error(info, id, errors, 'id', Order)
152
153 if not input['shipping_method']:
154 if order.is_shipping_required():
155 cls.add_error(
156 errors, 'shippingMethod',
157 'Shipping method is required for this order.')
158 return OrderUpdateShipping(errors=errors)
159 order.shipping_method = None
160 order.shipping_price = ZERO_TAXED_MONEY
161 order.shipping_method_name = None
162 order.save(
163 update_fields=[
164 'shipping_method', 'shipping_price_net',
165 'shipping_price_gross', 'shipping_method_name'])
166 return OrderUpdateShipping(order=order)
167
168 method = cls.get_node_or_error(
169 info, input['shipping_method'], errors,
170 'shipping_method', ShippingMethod)
171 clean_order_update_shipping(order, method, errors)
172 if errors:
173 return OrderUpdateShipping(errors=errors)
174
175 order.shipping_method = method
176 order.shipping_price = method.get_total_price(info.context.taxes)
177 order.shipping_method_name = method.name
178 order.save(
179 update_fields=[
180 'shipping_method', 'shipping_method_name',
181 'shipping_price_net', 'shipping_price_gross'])
182 return OrderUpdateShipping(order=order)
183
184
185 class OrderAddNoteInput(graphene.InputObjectType):
186 message = graphene.String(description='Note message.', name='message')
187
188
189 class OrderAddNote(BaseMutation):
190 order = graphene.Field(Order, description='Order with the note added.')
191 event = graphene.Field(OrderEvent, description='Order note created.')
192
193 class Arguments:
194 id = graphene.ID(
195 required=True,
196 description='ID of the order to add a note for.', name='order')
197 input = OrderAddNoteInput(
198 required=True,
199 description='Fields required to create a note for the order.')
200
201 class Meta:
202 description = 'Adds note to the order.'
203
204 @classmethod
205 @permission_required('order.manage_orders')
206 def mutate(cls, root, info, id, input):
207 errors = []
208 order = cls.get_node_or_error(info, id, errors, 'id', Order)
209 if errors:
210 return OrderAddNote(errors=errors)
211
212 event = order.events.create(
213 type=OrderEvents.NOTE_ADDED.value,
214 user=info.context.user,
215 parameters={
216 'message': input['message']})
217 return OrderAddNote(order=order, event=event)
218
219
220 class OrderCancel(BaseMutation):
221 order = graphene.Field(Order, description='Canceled order.')
222
223 class Arguments:
224 id = graphene.ID(
225 required=True, description='ID of the order to cancel.')
226 restock = graphene.Boolean(
227 required=True,
228 description='Determine if lines will be restocked or not.')
229
230 class Meta:
231 description = 'Cancel an order.'
232
233 @classmethod
234 @permission_required('order.manage_orders')
235 def mutate(cls, root, info, id, restock):
236 errors = []
237 order = cls.get_node_or_error(info, id, errors, 'id', Order)
238 clean_order_cancel(order, errors)
239 if errors:
240 return OrderCancel(errors=errors)
241
242 cancel_order(order=order, restock=restock)
243 if restock:
244 order.events.create(
245 type=OrderEvents.FULFILLMENT_RESTOCKED_ITEMS.value,
246 user=info.context.user,
247 parameters={'quantity': order.get_total_quantity()})
248 else:
249 order.events.create(
250 type=OrderEvents.ORDER_CANCELED.value,
251 user=info.context.user)
252 return OrderCancel(order=order)
253
254
255 class OrderMarkAsPaid(BaseMutation):
256 order = graphene.Field(Order, description='Order marked as paid.')
257
258 class Arguments:
259 id = graphene.ID(
260 required=True, description='ID of the order to mark paid.')
261
262 class Meta:
263 description = 'Mark order as manually paid.'
264
265 @classmethod
266 @permission_required('order.manage_orders')
267 def mutate(cls, root, info, id):
268 errors = []
269 order = cls.get_node_or_error(info, id, errors, 'id', Order)
270 clean_order_mark_as_paid(order, errors)
271 if errors:
272 return OrderMarkAsPaid(errors=errors)
273
274 defaults = {
275 'total': order.total.gross.amount,
276 'tax': order.total.tax.amount,
277 'currency': order.total.currency,
278 'delivery': order.shipping_price.net.amount,
279 'description': pgettext_lazy(
280 'Payment description', 'Order %(order)s') % {'order': order},
281 'captured_amount': order.total.gross.amount}
282 models.Payment.objects.get_or_create(
283 variant=CustomPaymentChoices.MANUAL,
284 status=PaymentStatus.CONFIRMED, order=order,
285 defaults=defaults)
286
287 order.events.create(
288 type=OrderEvents.ORDER_MARKED_AS_PAID.value,
289 user=info.context.user)
290 return OrderMarkAsPaid(order=order)
291
292
293 class OrderCapture(BaseMutation):
294 order = graphene.Field(Order, description='Captured order.')
295
296 class Arguments:
297 id = graphene.ID(
298 required=True, description='ID of the order to capture.')
299 amount = Decimal(
300 required=True, description='Amount of money to capture.')
301
302 class Meta:
303 description = 'Capture an order.'
304
305 @classmethod
306 @permission_required('order.manage_orders')
307 def mutate(cls, root, info, id, amount):
308 errors = []
309 order = cls.get_node_or_error(info, id, errors, 'id', Order)
310 payment = order.get_last_payment()
311 clean_order_capture(payment, amount, errors)
312 try_payment_action(payment.capture, amount, errors)
313 if errors:
314 return OrderCapture(errors=errors)
315
316 order.events.create(
317 parameters={'amount': amount},
318 type=OrderEvents.PAYMENT_CAPTURED.value,
319 user=info.context.user)
320 return OrderCapture(order=order)
321
322
323 class OrderRelease(BaseMutation):
324 order = graphene.Field(Order, description='A released order.')
325
326 class Arguments:
327 id = graphene.ID(
328 required=True, description='ID of the order to release.')
329
330 class Meta:
331 description = 'Release an order.'
332
333 @classmethod
334 @permission_required('order.manage_orders')
335 def mutate(cls, root, info, id):
336 errors = []
337 order = cls.get_node_or_error(info, id, errors, 'id', Order)
338 if order:
339 payment = order.get_last_payment()
340 clean_release_payment(payment, errors)
341
342 if errors:
343 return OrderRelease(errors=errors)
344
345 order.events.create(
346 type=OrderEvents.PAYMENT_RELEASED.value,
347 user=info.context.user)
348 return OrderRelease(order=order)
349
350
351 class OrderRefund(BaseMutation):
352 order = graphene.Field(Order, description='A refunded order.')
353
354 class Arguments:
355 id = graphene.ID(
356 required=True, description='ID of the order to refund.')
357 amount = Decimal(
358 required=True, description='Amount of money to refund.')
359
360 class Meta:
361 description = 'Refund an order.'
362
363 @classmethod
364 @permission_required('order.manage_orders')
365 def mutate(cls, root, info, id, amount):
366 errors = []
367 order = cls.get_node_or_error(info, id, errors, 'id', Order)
368 if order:
369 payment = order.get_last_payment()
370 clean_refund_payment(payment, amount, errors)
371 try_payment_action(payment.refund, amount, errors)
372 if errors:
373 return OrderRefund(errors=errors)
374
375 order.events.create(
376 type=OrderEvents.PAYMENT_REFUNDED.value,
377 user=info.context.user,
378 parameters={'amount': amount})
379 return OrderRefund(order=order)
380
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/saleor/graphql/order/mutations/orders.py b/saleor/graphql/order/mutations/orders.py
--- a/saleor/graphql/order/mutations/orders.py
+++ b/saleor/graphql/order/mutations/orders.py
@@ -247,8 +247,7 @@
parameters={'quantity': order.get_total_quantity()})
else:
order.events.create(
- type=OrderEvents.ORDER_CANCELED.value,
- user=info.context.user)
+ type=OrderEvents.CANCELED.value, user=info.context.user)
return OrderCancel(order=order)
| {"golden_diff": "diff --git a/saleor/graphql/order/mutations/orders.py b/saleor/graphql/order/mutations/orders.py\n--- a/saleor/graphql/order/mutations/orders.py\n+++ b/saleor/graphql/order/mutations/orders.py\n@@ -247,8 +247,7 @@\n parameters={'quantity': order.get_total_quantity()})\n else:\n order.events.create(\n- type=OrderEvents.ORDER_CANCELED.value,\n- user=info.context.user)\n+ type=OrderEvents.CANCELED.value, user=info.context.user)\n return OrderCancel(order=order)\n", "issue": "OrderCancel mutation throws error when asked not to restock inventory\n### What I'm trying to achieve\r\nCancel order without doing restock.\r\n\r\n### Steps to reproduce the problem\r\n1. Execute query\r\n```\r\nmutation {\r\n orderCancel(id: \"T3JkZXI6MTQ=\", restock: false) {\r\n errors {\r\n field\r\n message\r\n }\r\n order {\r\n id\r\n }\r\n }\r\n}\r\n```\r\n2. Get an error\r\n3. Order is cancelled anyway\r\n\r\n### What I expected to happen\r\nTo work perfectly. Note: if `restock: true`, mutation executes properly.\r\n\n", "before_files": [{"content": "import graphene\nfrom django.utils.translation import pgettext_lazy\nfrom graphql_jwt.decorators import permission_required\nfrom payments import PaymentError, PaymentStatus\n\nfrom ....core.utils.taxes import ZERO_TAXED_MONEY\nfrom ....order import CustomPaymentChoices, OrderEvents, models\nfrom ....order.utils import cancel_order\nfrom ....shipping.models import ShippingMethod as ShippingMethodModel\nfrom ...account.types import AddressInput\nfrom ...core.mutations import BaseMutation\nfrom ...core.types.common import Decimal, Error\nfrom ...order.mutations.draft_orders import DraftOrderUpdate\nfrom ...order.types import Order, OrderEvent\nfrom ...shipping.types import ShippingMethod\n\n\ndef clean_order_update_shipping(order, method, errors):\n if not method:\n return errors\n if not order.shipping_address:\n errors.append(\n Error(\n field='order',\n message=(\n 'Cannot choose a shipping method for an '\n 'order without the shipping address.')))\n return errors\n valid_methods = (\n ShippingMethodModel.objects.applicable_shipping_methods(\n price=order.get_subtotal().gross.amount,\n weight=order.get_total_weight(),\n country_code=order.shipping_address.country.code))\n valid_methods = valid_methods.values_list('id', flat=True)\n if method.pk not in valid_methods:\n errors.append(\n Error(\n field='shippingMethod',\n message='Shipping method cannot be used with this order.'))\n return errors\n\n\ndef try_payment_action(action, money, errors):\n try:\n action(money)\n except (PaymentError, ValueError) as e:\n errors.append(Error(field='payment', message=str(e)))\n\n\ndef clean_order_cancel(order, errors):\n if order and not order.can_cancel():\n errors.append(\n Error(\n field='order',\n message='This order can\\'t be canceled.'))\n return errors\n\n\ndef clean_order_mark_as_paid(order, errors):\n if order and order.payments.exists():\n errors.append(\n Error(\n field='payment',\n message='Orders with payments can not be manually '\n 'marked as paid.'))\n return errors\n\n\ndef clean_order_capture(payment, amount, errors):\n if not payment:\n errors.append(\n Error(\n field='payment',\n message='There\\'s no payment associated with the order.'))\n return errors\n if payment.status != PaymentStatus.PREAUTH:\n errors.append(\n Error(\n field='payment',\n message='Only pre-authorized payments can be captured'))\n return errors\n\n\ndef clean_release_payment(payment, errors):\n \"\"\"Check for payment errors.\"\"\"\n if payment.status != 
PaymentStatus.PREAUTH:\n errors.append(\n Error(field='payment',\n message='Only pre-authorized payments can be released'))\n try:\n payment.release()\n except (PaymentError, ValueError) as e:\n errors.append(Error(field='payment', message=str(e)))\n return errors\n\n\ndef clean_refund_payment(payment, amount, errors):\n if payment.variant == CustomPaymentChoices.MANUAL:\n errors.append(\n Error(field='payment',\n message='Manual payments can not be refunded.'))\n return errors\n\n\nclass OrderUpdateInput(graphene.InputObjectType):\n billing_address = AddressInput(\n description='Billing address of the customer.')\n user_email = graphene.String(description='Email address of the customer.')\n shipping_address = AddressInput(\n description='Shipping address of the customer.')\n\n\nclass OrderUpdate(DraftOrderUpdate):\n class Arguments:\n id = graphene.ID(\n required=True, description='ID of an order to update.')\n input = OrderUpdateInput(\n required=True,\n description='Fields required to update an order.')\n\n class Meta:\n description = 'Updates an order.'\n model = models.Order\n\n\nclass OrderUpdateShippingInput(graphene.InputObjectType):\n shipping_method = graphene.ID(\n description='ID of the selected shipping method.',\n name='shippingMethod')\n\n\nclass OrderUpdateShipping(BaseMutation):\n order = graphene.Field(\n Order, description='Order with updated shipping method.')\n\n class Arguments:\n id = graphene.ID(\n required=True, name='order',\n description='ID of the order to update a shipping method.')\n input = OrderUpdateShippingInput(\n description='Fields required to change '\n 'shipping method of the order.')\n\n class Meta:\n description = 'Updates a shipping method of the order.'\n\n @classmethod\n @permission_required('order.manage_orders')\n def mutate(cls, root, info, id, input):\n errors = []\n order = cls.get_node_or_error(info, id, errors, 'id', Order)\n\n if not input['shipping_method']:\n if order.is_shipping_required():\n cls.add_error(\n errors, 'shippingMethod',\n 'Shipping method is required for this order.')\n return OrderUpdateShipping(errors=errors)\n order.shipping_method = None\n order.shipping_price = ZERO_TAXED_MONEY\n order.shipping_method_name = None\n order.save(\n update_fields=[\n 'shipping_method', 'shipping_price_net',\n 'shipping_price_gross', 'shipping_method_name'])\n return OrderUpdateShipping(order=order)\n\n method = cls.get_node_or_error(\n info, input['shipping_method'], errors,\n 'shipping_method', ShippingMethod)\n clean_order_update_shipping(order, method, errors)\n if errors:\n return OrderUpdateShipping(errors=errors)\n\n order.shipping_method = method\n order.shipping_price = method.get_total_price(info.context.taxes)\n order.shipping_method_name = method.name\n order.save(\n update_fields=[\n 'shipping_method', 'shipping_method_name',\n 'shipping_price_net', 'shipping_price_gross'])\n return OrderUpdateShipping(order=order)\n\n\nclass OrderAddNoteInput(graphene.InputObjectType):\n message = graphene.String(description='Note message.', name='message')\n\n\nclass OrderAddNote(BaseMutation):\n order = graphene.Field(Order, description='Order with the note added.')\n event = graphene.Field(OrderEvent, description='Order note created.')\n\n class Arguments:\n id = graphene.ID(\n required=True,\n description='ID of the order to add a note for.', name='order')\n input = OrderAddNoteInput(\n required=True,\n description='Fields required to create a note for the order.')\n\n class Meta:\n description = 'Adds note to the order.'\n\n 
@classmethod\n @permission_required('order.manage_orders')\n def mutate(cls, root, info, id, input):\n errors = []\n order = cls.get_node_or_error(info, id, errors, 'id', Order)\n if errors:\n return OrderAddNote(errors=errors)\n\n event = order.events.create(\n type=OrderEvents.NOTE_ADDED.value,\n user=info.context.user,\n parameters={\n 'message': input['message']})\n return OrderAddNote(order=order, event=event)\n\n\nclass OrderCancel(BaseMutation):\n order = graphene.Field(Order, description='Canceled order.')\n\n class Arguments:\n id = graphene.ID(\n required=True, description='ID of the order to cancel.')\n restock = graphene.Boolean(\n required=True,\n description='Determine if lines will be restocked or not.')\n\n class Meta:\n description = 'Cancel an order.'\n\n @classmethod\n @permission_required('order.manage_orders')\n def mutate(cls, root, info, id, restock):\n errors = []\n order = cls.get_node_or_error(info, id, errors, 'id', Order)\n clean_order_cancel(order, errors)\n if errors:\n return OrderCancel(errors=errors)\n\n cancel_order(order=order, restock=restock)\n if restock:\n order.events.create(\n type=OrderEvents.FULFILLMENT_RESTOCKED_ITEMS.value,\n user=info.context.user,\n parameters={'quantity': order.get_total_quantity()})\n else:\n order.events.create(\n type=OrderEvents.ORDER_CANCELED.value,\n user=info.context.user)\n return OrderCancel(order=order)\n\n\nclass OrderMarkAsPaid(BaseMutation):\n order = graphene.Field(Order, description='Order marked as paid.')\n\n class Arguments:\n id = graphene.ID(\n required=True, description='ID of the order to mark paid.')\n\n class Meta:\n description = 'Mark order as manually paid.'\n\n @classmethod\n @permission_required('order.manage_orders')\n def mutate(cls, root, info, id):\n errors = []\n order = cls.get_node_or_error(info, id, errors, 'id', Order)\n clean_order_mark_as_paid(order, errors)\n if errors:\n return OrderMarkAsPaid(errors=errors)\n\n defaults = {\n 'total': order.total.gross.amount,\n 'tax': order.total.tax.amount,\n 'currency': order.total.currency,\n 'delivery': order.shipping_price.net.amount,\n 'description': pgettext_lazy(\n 'Payment description', 'Order %(order)s') % {'order': order},\n 'captured_amount': order.total.gross.amount}\n models.Payment.objects.get_or_create(\n variant=CustomPaymentChoices.MANUAL,\n status=PaymentStatus.CONFIRMED, order=order,\n defaults=defaults)\n\n order.events.create(\n type=OrderEvents.ORDER_MARKED_AS_PAID.value,\n user=info.context.user)\n return OrderMarkAsPaid(order=order)\n\n\nclass OrderCapture(BaseMutation):\n order = graphene.Field(Order, description='Captured order.')\n\n class Arguments:\n id = graphene.ID(\n required=True, description='ID of the order to capture.')\n amount = Decimal(\n required=True, description='Amount of money to capture.')\n\n class Meta:\n description = 'Capture an order.'\n\n @classmethod\n @permission_required('order.manage_orders')\n def mutate(cls, root, info, id, amount):\n errors = []\n order = cls.get_node_or_error(info, id, errors, 'id', Order)\n payment = order.get_last_payment()\n clean_order_capture(payment, amount, errors)\n try_payment_action(payment.capture, amount, errors)\n if errors:\n return OrderCapture(errors=errors)\n\n order.events.create(\n parameters={'amount': amount},\n type=OrderEvents.PAYMENT_CAPTURED.value,\n user=info.context.user)\n return OrderCapture(order=order)\n\n\nclass OrderRelease(BaseMutation):\n order = graphene.Field(Order, description='A released order.')\n\n class Arguments:\n id = 
graphene.ID(\n required=True, description='ID of the order to release.')\n\n class Meta:\n description = 'Release an order.'\n\n @classmethod\n @permission_required('order.manage_orders')\n def mutate(cls, root, info, id):\n errors = []\n order = cls.get_node_or_error(info, id, errors, 'id', Order)\n if order:\n payment = order.get_last_payment()\n clean_release_payment(payment, errors)\n\n if errors:\n return OrderRelease(errors=errors)\n\n order.events.create(\n type=OrderEvents.PAYMENT_RELEASED.value,\n user=info.context.user)\n return OrderRelease(order=order)\n\n\nclass OrderRefund(BaseMutation):\n order = graphene.Field(Order, description='A refunded order.')\n\n class Arguments:\n id = graphene.ID(\n required=True, description='ID of the order to refund.')\n amount = Decimal(\n required=True, description='Amount of money to refund.')\n\n class Meta:\n description = 'Refund an order.'\n\n @classmethod\n @permission_required('order.manage_orders')\n def mutate(cls, root, info, id, amount):\n errors = []\n order = cls.get_node_or_error(info, id, errors, 'id', Order)\n if order:\n payment = order.get_last_payment()\n clean_refund_payment(payment, amount, errors)\n try_payment_action(payment.refund, amount, errors)\n if errors:\n return OrderRefund(errors=errors)\n\n order.events.create(\n type=OrderEvents.PAYMENT_REFUNDED.value,\n user=info.context.user,\n parameters={'amount': amount})\n return OrderRefund(order=order)\n", "path": "saleor/graphql/order/mutations/orders.py"}], "after_files": [{"content": "import graphene\nfrom django.utils.translation import pgettext_lazy\nfrom graphql_jwt.decorators import permission_required\nfrom payments import PaymentError, PaymentStatus\n\nfrom ....core.utils.taxes import ZERO_TAXED_MONEY\nfrom ....order import CustomPaymentChoices, OrderEvents, models\nfrom ....order.utils import cancel_order\nfrom ....shipping.models import ShippingMethod as ShippingMethodModel\nfrom ...account.types import AddressInput\nfrom ...core.mutations import BaseMutation\nfrom ...core.types.common import Decimal, Error\nfrom ...order.mutations.draft_orders import DraftOrderUpdate\nfrom ...order.types import Order, OrderEvent\nfrom ...shipping.types import ShippingMethod\n\n\ndef clean_order_update_shipping(order, method, errors):\n if not method:\n return errors\n if not order.shipping_address:\n errors.append(\n Error(\n field='order',\n message=(\n 'Cannot choose a shipping method for an '\n 'order without the shipping address.')))\n return errors\n valid_methods = (\n ShippingMethodModel.objects.applicable_shipping_methods(\n price=order.get_subtotal().gross.amount,\n weight=order.get_total_weight(),\n country_code=order.shipping_address.country.code))\n valid_methods = valid_methods.values_list('id', flat=True)\n if method.pk not in valid_methods:\n errors.append(\n Error(\n field='shippingMethod',\n message='Shipping method cannot be used with this order.'))\n return errors\n\n\ndef try_payment_action(action, money, errors):\n try:\n action(money)\n except (PaymentError, ValueError) as e:\n errors.append(Error(field='payment', message=str(e)))\n\n\ndef clean_order_cancel(order, errors):\n if order and not order.can_cancel():\n errors.append(\n Error(\n field='order',\n message='This order can\\'t be canceled.'))\n return errors\n\n\ndef clean_order_mark_as_paid(order, errors):\n if order and order.payments.exists():\n errors.append(\n Error(\n field='payment',\n message='Orders with payments can not be manually '\n 'marked as paid.'))\n return errors\n\n\ndef 
clean_order_capture(payment, amount, errors):\n if not payment:\n errors.append(\n Error(\n field='payment',\n message='There\\'s no payment associated with the order.'))\n return errors\n if payment.status != PaymentStatus.PREAUTH:\n errors.append(\n Error(\n field='payment',\n message='Only pre-authorized payments can be captured'))\n return errors\n\n\ndef clean_release_payment(payment, errors):\n \"\"\"Check for payment errors.\"\"\"\n if payment.status != PaymentStatus.PREAUTH:\n errors.append(\n Error(field='payment',\n message='Only pre-authorized payments can be released'))\n try:\n payment.release()\n except (PaymentError, ValueError) as e:\n errors.append(Error(field='payment', message=str(e)))\n return errors\n\n\ndef clean_refund_payment(payment, amount, errors):\n if payment.variant == CustomPaymentChoices.MANUAL:\n errors.append(\n Error(field='payment',\n message='Manual payments can not be refunded.'))\n return errors\n\n\nclass OrderUpdateInput(graphene.InputObjectType):\n billing_address = AddressInput(\n description='Billing address of the customer.')\n user_email = graphene.String(description='Email address of the customer.')\n shipping_address = AddressInput(\n description='Shipping address of the customer.')\n\n\nclass OrderUpdate(DraftOrderUpdate):\n class Arguments:\n id = graphene.ID(\n required=True, description='ID of an order to update.')\n input = OrderUpdateInput(\n required=True,\n description='Fields required to update an order.')\n\n class Meta:\n description = 'Updates an order.'\n model = models.Order\n\n\nclass OrderUpdateShippingInput(graphene.InputObjectType):\n shipping_method = graphene.ID(\n description='ID of the selected shipping method.',\n name='shippingMethod')\n\n\nclass OrderUpdateShipping(BaseMutation):\n order = graphene.Field(\n Order, description='Order with updated shipping method.')\n\n class Arguments:\n id = graphene.ID(\n required=True, name='order',\n description='ID of the order to update a shipping method.')\n input = OrderUpdateShippingInput(\n description='Fields required to change '\n 'shipping method of the order.')\n\n class Meta:\n description = 'Updates a shipping method of the order.'\n\n @classmethod\n @permission_required('order.manage_orders')\n def mutate(cls, root, info, id, input):\n errors = []\n order = cls.get_node_or_error(info, id, errors, 'id', Order)\n\n if not input['shipping_method']:\n if order.is_shipping_required():\n cls.add_error(\n errors, 'shippingMethod',\n 'Shipping method is required for this order.')\n return OrderUpdateShipping(errors=errors)\n order.shipping_method = None\n order.shipping_price = ZERO_TAXED_MONEY\n order.shipping_method_name = None\n order.save(\n update_fields=[\n 'shipping_method', 'shipping_price_net',\n 'shipping_price_gross', 'shipping_method_name'])\n return OrderUpdateShipping(order=order)\n\n method = cls.get_node_or_error(\n info, input['shipping_method'], errors,\n 'shipping_method', ShippingMethod)\n clean_order_update_shipping(order, method, errors)\n if errors:\n return OrderUpdateShipping(errors=errors)\n\n order.shipping_method = method\n order.shipping_price = method.get_total_price(info.context.taxes)\n order.shipping_method_name = method.name\n order.save(\n update_fields=[\n 'shipping_method', 'shipping_method_name',\n 'shipping_price_net', 'shipping_price_gross'])\n return OrderUpdateShipping(order=order)\n\n\nclass OrderAddNoteInput(graphene.InputObjectType):\n message = graphene.String(description='Note message.', name='message')\n\n\nclass 
OrderAddNote(BaseMutation):\n order = graphene.Field(Order, description='Order with the note added.')\n event = graphene.Field(OrderEvent, description='Order note created.')\n\n class Arguments:\n id = graphene.ID(\n required=True,\n description='ID of the order to add a note for.', name='order')\n input = OrderAddNoteInput(\n required=True,\n description='Fields required to create a note for the order.')\n\n class Meta:\n description = 'Adds note to the order.'\n\n @classmethod\n @permission_required('order.manage_orders')\n def mutate(cls, root, info, id, input):\n errors = []\n order = cls.get_node_or_error(info, id, errors, 'id', Order)\n if errors:\n return OrderAddNote(errors=errors)\n\n event = order.events.create(\n type=OrderEvents.NOTE_ADDED.value,\n user=info.context.user,\n parameters={\n 'message': input['message']})\n return OrderAddNote(order=order, event=event)\n\n\nclass OrderCancel(BaseMutation):\n order = graphene.Field(Order, description='Canceled order.')\n\n class Arguments:\n id = graphene.ID(\n required=True, description='ID of the order to cancel.')\n restock = graphene.Boolean(\n required=True,\n description='Determine if lines will be restocked or not.')\n\n class Meta:\n description = 'Cancel an order.'\n\n @classmethod\n @permission_required('order.manage_orders')\n def mutate(cls, root, info, id, restock):\n errors = []\n order = cls.get_node_or_error(info, id, errors, 'id', Order)\n clean_order_cancel(order, errors)\n if errors:\n return OrderCancel(errors=errors)\n\n cancel_order(order=order, restock=restock)\n if restock:\n order.events.create(\n type=OrderEvents.FULFILLMENT_RESTOCKED_ITEMS.value,\n user=info.context.user,\n parameters={'quantity': order.get_total_quantity()})\n else:\n order.events.create(\n type=OrderEvents.CANCELED.value, user=info.context.user)\n return OrderCancel(order=order)\n\n\nclass OrderMarkAsPaid(BaseMutation):\n order = graphene.Field(Order, description='Order marked as paid.')\n\n class Arguments:\n id = graphene.ID(\n required=True, description='ID of the order to mark paid.')\n\n class Meta:\n description = 'Mark order as manually paid.'\n\n @classmethod\n @permission_required('order.manage_orders')\n def mutate(cls, root, info, id):\n errors = []\n order = cls.get_node_or_error(info, id, errors, 'id', Order)\n clean_order_mark_as_paid(order, errors)\n if errors:\n return OrderMarkAsPaid(errors=errors)\n\n defaults = {\n 'total': order.total.gross.amount,\n 'tax': order.total.tax.amount,\n 'currency': order.total.currency,\n 'delivery': order.shipping_price.net.amount,\n 'description': pgettext_lazy(\n 'Payment description', 'Order %(order)s') % {'order': order},\n 'captured_amount': order.total.gross.amount}\n models.Payment.objects.get_or_create(\n variant=CustomPaymentChoices.MANUAL,\n status=PaymentStatus.CONFIRMED, order=order,\n defaults=defaults)\n\n order.events.create(\n type=OrderEvents.ORDER_MARKED_AS_PAID.value,\n user=info.context.user)\n return OrderMarkAsPaid(order=order)\n\n\nclass OrderCapture(BaseMutation):\n order = graphene.Field(Order, description='Captured order.')\n\n class Arguments:\n id = graphene.ID(\n required=True, description='ID of the order to capture.')\n amount = Decimal(\n required=True, description='Amount of money to capture.')\n\n class Meta:\n description = 'Capture an order.'\n\n @classmethod\n @permission_required('order.manage_orders')\n def mutate(cls, root, info, id, amount):\n errors = []\n order = cls.get_node_or_error(info, id, errors, 'id', Order)\n payment = 
order.get_last_payment()\n clean_order_capture(payment, amount, errors)\n try_payment_action(payment.capture, amount, errors)\n if errors:\n return OrderCapture(errors=errors)\n\n order.events.create(\n parameters={'amount': amount},\n type=OrderEvents.PAYMENT_CAPTURED.value,\n user=info.context.user)\n return OrderCapture(order=order)\n\n\nclass OrderRelease(BaseMutation):\n order = graphene.Field(Order, description='A released order.')\n\n class Arguments:\n id = graphene.ID(\n required=True, description='ID of the order to release.')\n\n class Meta:\n description = 'Release an order.'\n\n @classmethod\n @permission_required('order.manage_orders')\n def mutate(cls, root, info, id):\n errors = []\n order = cls.get_node_or_error(info, id, errors, 'id', Order)\n if order:\n payment = order.get_last_payment()\n clean_release_payment(payment, errors)\n\n if errors:\n return OrderRelease(errors=errors)\n\n order.events.create(\n type=OrderEvents.PAYMENT_RELEASED.value,\n user=info.context.user)\n return OrderRelease(order=order)\n\n\nclass OrderRefund(BaseMutation):\n order = graphene.Field(Order, description='A refunded order.')\n\n class Arguments:\n id = graphene.ID(\n required=True, description='ID of the order to refund.')\n amount = Decimal(\n required=True, description='Amount of money to refund.')\n\n class Meta:\n description = 'Refund an order.'\n\n @classmethod\n @permission_required('order.manage_orders')\n def mutate(cls, root, info, id, amount):\n errors = []\n order = cls.get_node_or_error(info, id, errors, 'id', Order)\n if order:\n payment = order.get_last_payment()\n clean_refund_payment(payment, amount, errors)\n try_payment_action(payment.refund, amount, errors)\n if errors:\n return OrderRefund(errors=errors)\n\n order.events.create(\n type=OrderEvents.PAYMENT_REFUNDED.value,\n user=info.context.user,\n parameters={'amount': amount})\n return OrderRefund(order=order)\n", "path": "saleor/graphql/order/mutations/orders.py"}]} |
gh_patches_debug_1168 | rasdani/github-patches | git_diff | pymeasure__pymeasure-433 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Pyvisa no longer supports ask, replace with query
In resources.py
`idn = res.ask('*idn?')[:-1]`
Should be:
`idn = res.query('*idn?')[:-1]`
--- END ISSUE ---
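As context for this record: in current PyVISA releases, message-based resources expose `query()` (a combined write and read), and the legacy `ask()` alias the issue refers to is gone. Below is a minimal sketch of the modern call, assuming a reachable instrument; the resource name is only a placeholder, not something taken from this repository:

```python
import pyvisa

rm = pyvisa.ResourceManager()
# "GPIB0::22::INSTR" is a placeholder resource name used purely for illustration.
inst = rm.open_resource("GPIB0::22::INSTR")
try:
    # query() writes the command and reads the reply in one call,
    # replacing the removed ask() alias.
    print(inst.query("*IDN?").strip())
finally:
    inst.close()
    rm.close()
```

The accepted fix for this record is exactly that one-line substitution inside `list_resources()`.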
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pymeasure/instruments/resources.py`
Content:
```
1 #
2 # This file is part of the PyMeasure package.
3 #
4 # Copyright (c) 2013-2021 PyMeasure Developers
5 #
6 # Permission is hereby granted, free of charge, to any person obtaining a copy
7 # of this software and associated documentation files (the "Software"), to deal
8 # in the Software without restriction, including without limitation the rights
9 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
10 # copies of the Software, and to permit persons to whom the Software is
11 # furnished to do so, subject to the following conditions:
12 #
13 # The above copyright notice and this permission notice shall be included in
14 # all copies or substantial portions of the Software.
15 #
16 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
17 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
18 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
19 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
20 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
21 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
22 # THE SOFTWARE.
23 #
24
25 import pyvisa
26
27
28 def list_resources():
29 """
30 Prints the available resources, and returns a list of VISA resource names
31
32 .. code-block:: python
33
34 resources = list_resources()
35 #prints (e.g.)
36 #0 : GPIB0::22::INSTR : Agilent Technologies,34410A,******
37 #1 : GPIB0::26::INSTR : Keithley Instruments Inc., Model 2612, *****
38 dmm = Agilent34410(resources[0])
39
40 """
41 rm = pyvisa.ResourceManager()
42 instrs = rm.list_resources()
43 for n, instr in enumerate(instrs):
44 # trying to catch errors in comunication
45 try:
46 res = rm.open_resource(instr)
47 # try to avoid errors from *idn?
48 try:
49 # noinspection PyUnresolvedReferences
50 idn = res.ask('*idn?')[:-1]
51 except pyvisa.Error:
52 idn = "Not known"
53 finally:
54 res.close()
55 print(n, ":", instr, ":", idn)
56 except pyvisa.VisaIOError as e:
57 print(n, ":", instr, ":", "Visa IO Error: check connections")
58 print(e)
59 rm.close()
60 return instrs
61
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pymeasure/instruments/resources.py b/pymeasure/instruments/resources.py
--- a/pymeasure/instruments/resources.py
+++ b/pymeasure/instruments/resources.py
@@ -47,7 +47,7 @@
# try to avoid errors from *idn?
try:
# noinspection PyUnresolvedReferences
- idn = res.ask('*idn?')[:-1]
+ idn = res.query('*idn?')[:-1]
except pyvisa.Error:
idn = "Not known"
finally:
| {"golden_diff": "diff --git a/pymeasure/instruments/resources.py b/pymeasure/instruments/resources.py\n--- a/pymeasure/instruments/resources.py\n+++ b/pymeasure/instruments/resources.py\n@@ -47,7 +47,7 @@\n # try to avoid errors from *idn?\n try:\n # noinspection PyUnresolvedReferences\n- idn = res.ask('*idn?')[:-1]\n+ idn = res.query('*idn?')[:-1]\n except pyvisa.Error:\n idn = \"Not known\"\n finally:\n", "issue": "Pyvisa no longer support ask, replace with query\nIn resources.py\r\n`idn = res.ask('*idn?')[:-1]`\r\nShould be:\r\n`idn = res.query('*idn?')[:-1]`\n", "before_files": [{"content": "#\n# This file is part of the PyMeasure package.\n#\n# Copyright (c) 2013-2021 PyMeasure Developers\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n# THE SOFTWARE.\n#\n\nimport pyvisa\n\n\ndef list_resources():\n \"\"\"\n Prints the available resources, and returns a list of VISA resource names\n \n .. 
code-block:: python\n\n resources = list_resources()\n #prints (e.g.)\n #0 : GPIB0::22::INSTR : Agilent Technologies,34410A,******\n #1 : GPIB0::26::INSTR : Keithley Instruments Inc., Model 2612, *****\n dmm = Agilent34410(resources[0])\n \n \"\"\"\n rm = pyvisa.ResourceManager()\n instrs = rm.list_resources()\n for n, instr in enumerate(instrs):\n # trying to catch errors in comunication\n try:\n res = rm.open_resource(instr)\n # try to avoid errors from *idn?\n try:\n # noinspection PyUnresolvedReferences\n idn = res.ask('*idn?')[:-1]\n except pyvisa.Error:\n idn = \"Not known\"\n finally:\n res.close()\n print(n, \":\", instr, \":\", idn)\n except pyvisa.VisaIOError as e:\n print(n, \":\", instr, \":\", \"Visa IO Error: check connections\")\n print(e)\n rm.close()\n return instrs\n", "path": "pymeasure/instruments/resources.py"}], "after_files": [{"content": "#\n# This file is part of the PyMeasure package.\n#\n# Copyright (c) 2013-2021 PyMeasure Developers\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n# THE SOFTWARE.\n#\n\nimport pyvisa\n\n\ndef list_resources():\n \"\"\"\n Prints the available resources, and returns a list of VISA resource names\n \n .. code-block:: python\n\n resources = list_resources()\n #prints (e.g.)\n #0 : GPIB0::22::INSTR : Agilent Technologies,34410A,******\n #1 : GPIB0::26::INSTR : Keithley Instruments Inc., Model 2612, *****\n dmm = Agilent34410(resources[0])\n \n \"\"\"\n rm = pyvisa.ResourceManager()\n instrs = rm.list_resources()\n for n, instr in enumerate(instrs):\n # trying to catch errors in comunication\n try:\n res = rm.open_resource(instr)\n # try to avoid errors from *idn?\n try:\n # noinspection PyUnresolvedReferences\n idn = res.query('*idn?')[:-1]\n except pyvisa.Error:\n idn = \"Not known\"\n finally:\n res.close()\n print(n, \":\", instr, \":\", idn)\n except pyvisa.VisaIOError as e:\n print(n, \":\", instr, \":\", \"Visa IO Error: check connections\")\n print(e)\n rm.close()\n return instrs\n", "path": "pymeasure/instruments/resources.py"}]} |
gh_patches_debug_1169 | rasdani/github-patches | git_diff | pypa__pip-12281 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
TypeError when installing from git and git version contains a letter
### Description
I am trying to install a dependency from a git source and get an exception because the git version cannot be parsed.
Going through the code (I have pip version 23.2.1 installed), it seems the issue is with my git version that contains an alphabetical patch version…
```bash
$ git version
git version 2.37.GIT
```
In `git.py:100` the match produces 3 groups with the last one being `None` because of the way the `GIT_VERSION_REGEX` is built. That in turn creates a problem in `git.py:104` because `tuple(int(c) for c in match.groups())` doesn't work with the `None` value.
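A minimal, self-contained reproduction of that failure (the regex is copied verbatim from the `git.py` listing further down, and the version string comes from the `git version` output above; no git installation is required):

```python
import re

# Copied from pip's src/pip/_internal/vcs/git.py
GIT_VERSION_REGEX = re.compile(
    r"^git version "  # Prefix.
    r"(\d+)"          # Major.
    r"\.(\d+)"        # Dot, minor.
    r"(?:\.(\d+))?"   # Optional dot, patch.
    r".*$"            # Suffix.
)

match = GIT_VERSION_REGEX.match("git version 2.37.GIT")
print(match.groups())  # ('2', '37', None): the optional patch group did not participate
# Same expression as git.py line 104; int(None) raises the reported TypeError.
tuple(int(c) for c in match.groups())
```

The final line fails with the same `TypeError` shown in the stack trace below.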
### Expected behavior
I would expect pip to accept the major and minor git version as they are and go on without a numeric patch version. But I can't tell why it is checking the version numbers.
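For reference, the parsed version is only compared against the thresholds `(2, 17)` (partial clone) and `(1, 9)` (tag fetching) in the listing below, so a plain `(major, minor)` tuple is enough; Python compares tuples lexicographically:

```python
# Lexicographic tuple comparison: the patch component is not needed
# for the two threshold checks pip performs in git.py.
print((2, 37) >= (2, 17))     # True  -> clone with --filter=blob:none
print((2, 37) >= (1, 9))      # True  -> run "fetch -q --tags"
print((2, 16) >= (2, 17))     # False
print((2, 17, 1) >= (2, 17))  # True  (a third component does not change these outcomes)
```

That is what the accepted patch does: it returns `(int(match.group(1)), int(match.group(2)))` and simply drops the optional patch group.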
### pip version
23.2.1
### Python version
3.9.9
### OS
Debian 11.5
### How to Reproduce
I can't share the code but what I do is simply `pip install "my_package @ git+https://my_gitlab/my_repo"`
### Output
This is the full stack trace:
```
ERROR: Exception:
Traceback (most recent call last):
File "/home/lcottereau/my_project/env/lib/python3.9/site-packages/pip/_internal/cli/base_command.py", line 180, in exc_logging_wrapper
status = run_func(*args)
File "/home/lcottereau/my_project/env/lib/python3.9/site-packages/pip/_internal/cli/req_command.py", line 248, in wrapper
return func(self, options, args)
File "/home/lcottereau/my_project/env/lib/python3.9/site-packages/pip/_internal/commands/install.py", line 377, in run
requirement_set = resolver.resolve(
File "/home/lcottereau/my_project/env/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/resolver.py", line 73, in resolve
collected = self.factory.collect_root_requirements(root_reqs)
File "/home/lcottereau/my_project/env/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/factory.py", line 491, in collect_root_requirements
req = self._make_requirement_from_install_req(
File "/home/lcottereau/my_project/env/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/factory.py", line 453, in _make_requirement_from_install_req
cand = self._make_candidate_from_link(
File "/home/lcottereau/my_project/env/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/factory.py", line 206, in _make_candidate_from_link
self._link_candidate_cache[link] = LinkCandidate(
File "/home/lcottereau/my_project/env/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 293, in __init__
super().__init__(
File "/home/lcottereau/my_project/env/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 156, in __init__
self.dist = self._prepare()
File "/home/lcottereau/my_project/env/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 225, in _prepare
dist = self._prepare_distribution()
File "/home/lcottereau/my_project/env/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 304, in _prepare_distribution
return preparer.prepare_linked_requirement(self._ireq, parallel_builds=True)
File "/home/lcottereau/my_project/env/lib/python3.9/site-packages/pip/_internal/operations/prepare.py", line 538, in prepare_linked_requirement
return self._prepare_linked_requirement(req, parallel_builds)
File "/home/lcottereau/my_project/env/lib/python3.9/site-packages/pip/_internal/operations/prepare.py", line 609, in _prepare_linked_requirement
local_file = unpack_url(
File "/home/lcottereau/my_project/env/lib/python3.9/site-packages/pip/_internal/operations/prepare.py", line 155, in unpack_url
unpack_vcs_link(link, location, verbosity=verbosity)
File "/home/lcottereau/my_project/env/lib/python3.9/site-packages/pip/_internal/operations/prepare.py", line 78, in unpack_vcs_link
vcs_backend.unpack(location, url=hide_url(link.url), verbosity=verbosity)
File "/home/lcottereau/my_project/env/lib/python3.9/site-packages/pip/_internal/vcs/versioncontrol.py", line 608, in unpack
self.obtain(location, url=url, verbosity=verbosity)
File "/home/lcottereau/my_project/env/lib/python3.9/site-packages/pip/_internal/vcs/versioncontrol.py", line 521, in obtain
self.fetch_new(dest, url, rev_options, verbosity=verbosity)
File "/home/lcottereau/my_project/env/lib/python3.9/site-packages/pip/_internal/vcs/git.py", line 272, in fetch_new
if self.get_git_version() >= (2, 17):
File "/home/lcottereau/my_project/env/lib/python3.9/site-packages/pip/_internal/vcs/git.py", line 104, in get_git_version
return tuple(int(c) for c in match.groups())
File "/home/lcottereau/my_project/env/lib/python3.9/site-packages/pip/_internal/vcs/git.py", line 104, in <genexpr>
return tuple(int(c) for c in match.groups())
TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
```
### Code of Conduct
- [X] I agree to follow the [PSF Code of Conduct](https://www.python.org/psf/conduct/).
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/pip/_internal/vcs/git.py`
Content:
```
1 import logging
2 import os.path
3 import pathlib
4 import re
5 import urllib.parse
6 import urllib.request
7 from typing import List, Optional, Tuple
8
9 from pip._internal.exceptions import BadCommand, InstallationError
10 from pip._internal.utils.misc import HiddenText, display_path, hide_url
11 from pip._internal.utils.subprocess import make_command
12 from pip._internal.vcs.versioncontrol import (
13 AuthInfo,
14 RemoteNotFoundError,
15 RemoteNotValidError,
16 RevOptions,
17 VersionControl,
18 find_path_to_project_root_from_repo_root,
19 vcs,
20 )
21
22 urlsplit = urllib.parse.urlsplit
23 urlunsplit = urllib.parse.urlunsplit
24
25
26 logger = logging.getLogger(__name__)
27
28
29 GIT_VERSION_REGEX = re.compile(
30 r"^git version " # Prefix.
31 r"(\d+)" # Major.
32 r"\.(\d+)" # Dot, minor.
33 r"(?:\.(\d+))?" # Optional dot, patch.
34 r".*$" # Suffix, including any pre- and post-release segments we don't care about.
35 )
36
37 HASH_REGEX = re.compile("^[a-fA-F0-9]{40}$")
38
39 # SCP (Secure copy protocol) shorthand. e.g. '[email protected]:foo/bar.git'
40 SCP_REGEX = re.compile(
41 r"""^
42 # Optional user, e.g. 'git@'
43 (\w+@)?
44 # Server, e.g. 'github.com'.
45 ([^/:]+):
46 # The server-side path. e.g. 'user/project.git'. Must start with an
47 # alphanumeric character so as not to be confusable with a Windows paths
48 # like 'C:/foo/bar' or 'C:\foo\bar'.
49 (\w[^:]*)
50 $""",
51 re.VERBOSE,
52 )
53
54
55 def looks_like_hash(sha: str) -> bool:
56 return bool(HASH_REGEX.match(sha))
57
58
59 class Git(VersionControl):
60 name = "git"
61 dirname = ".git"
62 repo_name = "clone"
63 schemes = (
64 "git+http",
65 "git+https",
66 "git+ssh",
67 "git+git",
68 "git+file",
69 )
70 # Prevent the user's environment variables from interfering with pip:
71 # https://github.com/pypa/pip/issues/1130
72 unset_environ = ("GIT_DIR", "GIT_WORK_TREE")
73 default_arg_rev = "HEAD"
74
75 @staticmethod
76 def get_base_rev_args(rev: str) -> List[str]:
77 return [rev]
78
79 def is_immutable_rev_checkout(self, url: str, dest: str) -> bool:
80 _, rev_options = self.get_url_rev_options(hide_url(url))
81 if not rev_options.rev:
82 return False
83 if not self.is_commit_id_equal(dest, rev_options.rev):
84 # the current commit is different from rev,
85 # which means rev was something else than a commit hash
86 return False
87 # return False in the rare case rev is both a commit hash
88 # and a tag or a branch; we don't want to cache in that case
89 # because that branch/tag could point to something else in the future
90 is_tag_or_branch = bool(self.get_revision_sha(dest, rev_options.rev)[0])
91 return not is_tag_or_branch
92
93 def get_git_version(self) -> Tuple[int, ...]:
94 version = self.run_command(
95 ["version"],
96 command_desc="git version",
97 show_stdout=False,
98 stdout_only=True,
99 )
100 match = GIT_VERSION_REGEX.match(version)
101 if not match:
102 logger.warning("Can't parse git version: %s", version)
103 return ()
104 return tuple(int(c) for c in match.groups())
105
106 @classmethod
107 def get_current_branch(cls, location: str) -> Optional[str]:
108 """
109 Return the current branch, or None if HEAD isn't at a branch
110 (e.g. detached HEAD).
111 """
112 # git-symbolic-ref exits with empty stdout if "HEAD" is a detached
113 # HEAD rather than a symbolic ref. In addition, the -q causes the
114 # command to exit with status code 1 instead of 128 in this case
115 # and to suppress the message to stderr.
116 args = ["symbolic-ref", "-q", "HEAD"]
117 output = cls.run_command(
118 args,
119 extra_ok_returncodes=(1,),
120 show_stdout=False,
121 stdout_only=True,
122 cwd=location,
123 )
124 ref = output.strip()
125
126 if ref.startswith("refs/heads/"):
127 return ref[len("refs/heads/") :]
128
129 return None
130
131 @classmethod
132 def get_revision_sha(cls, dest: str, rev: str) -> Tuple[Optional[str], bool]:
133 """
134 Return (sha_or_none, is_branch), where sha_or_none is a commit hash
135 if the revision names a remote branch or tag, otherwise None.
136
137 Args:
138 dest: the repository directory.
139 rev: the revision name.
140 """
141 # Pass rev to pre-filter the list.
142 output = cls.run_command(
143 ["show-ref", rev],
144 cwd=dest,
145 show_stdout=False,
146 stdout_only=True,
147 on_returncode="ignore",
148 )
149 refs = {}
150 # NOTE: We do not use splitlines here since that would split on other
151 # unicode separators, which can be maliciously used to install a
152 # different revision.
153 for line in output.strip().split("\n"):
154 line = line.rstrip("\r")
155 if not line:
156 continue
157 try:
158 ref_sha, ref_name = line.split(" ", maxsplit=2)
159 except ValueError:
160 # Include the offending line to simplify troubleshooting if
161 # this error ever occurs.
162 raise ValueError(f"unexpected show-ref line: {line!r}")
163
164 refs[ref_name] = ref_sha
165
166 branch_ref = f"refs/remotes/origin/{rev}"
167 tag_ref = f"refs/tags/{rev}"
168
169 sha = refs.get(branch_ref)
170 if sha is not None:
171 return (sha, True)
172
173 sha = refs.get(tag_ref)
174
175 return (sha, False)
176
177 @classmethod
178 def _should_fetch(cls, dest: str, rev: str) -> bool:
179 """
180 Return true if rev is a ref or is a commit that we don't have locally.
181
182 Branches and tags are not considered in this method because they are
183 assumed to be always available locally (which is a normal outcome of
184 ``git clone`` and ``git fetch --tags``).
185 """
186 if rev.startswith("refs/"):
187 # Always fetch remote refs.
188 return True
189
190 if not looks_like_hash(rev):
191 # Git fetch would fail with abbreviated commits.
192 return False
193
194 if cls.has_commit(dest, rev):
195 # Don't fetch if we have the commit locally.
196 return False
197
198 return True
199
200 @classmethod
201 def resolve_revision(
202 cls, dest: str, url: HiddenText, rev_options: RevOptions
203 ) -> RevOptions:
204 """
205 Resolve a revision to a new RevOptions object with the SHA1 of the
206 branch, tag, or ref if found.
207
208 Args:
209 rev_options: a RevOptions object.
210 """
211 rev = rev_options.arg_rev
212 # The arg_rev property's implementation for Git ensures that the
213 # rev return value is always non-None.
214 assert rev is not None
215
216 sha, is_branch = cls.get_revision_sha(dest, rev)
217
218 if sha is not None:
219 rev_options = rev_options.make_new(sha)
220 rev_options.branch_name = rev if is_branch else None
221
222 return rev_options
223
224 # Do not show a warning for the common case of something that has
225 # the form of a Git commit hash.
226 if not looks_like_hash(rev):
227 logger.warning(
228 "Did not find branch or tag '%s', assuming revision or ref.",
229 rev,
230 )
231
232 if not cls._should_fetch(dest, rev):
233 return rev_options
234
235 # fetch the requested revision
236 cls.run_command(
237 make_command("fetch", "-q", url, rev_options.to_args()),
238 cwd=dest,
239 )
240 # Change the revision to the SHA of the ref we fetched
241 sha = cls.get_revision(dest, rev="FETCH_HEAD")
242 rev_options = rev_options.make_new(sha)
243
244 return rev_options
245
246 @classmethod
247 def is_commit_id_equal(cls, dest: str, name: Optional[str]) -> bool:
248 """
249 Return whether the current commit hash equals the given name.
250
251 Args:
252 dest: the repository directory.
253 name: a string name.
254 """
255 if not name:
256 # Then avoid an unnecessary subprocess call.
257 return False
258
259 return cls.get_revision(dest) == name
260
261 def fetch_new(
262 self, dest: str, url: HiddenText, rev_options: RevOptions, verbosity: int
263 ) -> None:
264 rev_display = rev_options.to_display()
265 logger.info("Cloning %s%s to %s", url, rev_display, display_path(dest))
266 if verbosity <= 0:
267 flags: Tuple[str, ...] = ("--quiet",)
268 elif verbosity == 1:
269 flags = ()
270 else:
271 flags = ("--verbose", "--progress")
272 if self.get_git_version() >= (2, 17):
273 # Git added support for partial clone in 2.17
274 # https://git-scm.com/docs/partial-clone
275 # Speeds up cloning by functioning without a complete copy of repository
276 self.run_command(
277 make_command(
278 "clone",
279 "--filter=blob:none",
280 *flags,
281 url,
282 dest,
283 )
284 )
285 else:
286 self.run_command(make_command("clone", *flags, url, dest))
287
288 if rev_options.rev:
289 # Then a specific revision was requested.
290 rev_options = self.resolve_revision(dest, url, rev_options)
291 branch_name = getattr(rev_options, "branch_name", None)
292 logger.debug("Rev options %s, branch_name %s", rev_options, branch_name)
293 if branch_name is None:
294 # Only do a checkout if the current commit id doesn't match
295 # the requested revision.
296 if not self.is_commit_id_equal(dest, rev_options.rev):
297 cmd_args = make_command(
298 "checkout",
299 "-q",
300 rev_options.to_args(),
301 )
302 self.run_command(cmd_args, cwd=dest)
303 elif self.get_current_branch(dest) != branch_name:
304 # Then a specific branch was requested, and that branch
305 # is not yet checked out.
306 track_branch = f"origin/{branch_name}"
307 cmd_args = [
308 "checkout",
309 "-b",
310 branch_name,
311 "--track",
312 track_branch,
313 ]
314 self.run_command(cmd_args, cwd=dest)
315 else:
316 sha = self.get_revision(dest)
317 rev_options = rev_options.make_new(sha)
318
319 logger.info("Resolved %s to commit %s", url, rev_options.rev)
320
321 #: repo may contain submodules
322 self.update_submodules(dest)
323
324 def switch(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None:
325 self.run_command(
326 make_command("config", "remote.origin.url", url),
327 cwd=dest,
328 )
329 cmd_args = make_command("checkout", "-q", rev_options.to_args())
330 self.run_command(cmd_args, cwd=dest)
331
332 self.update_submodules(dest)
333
334 def update(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None:
335 # First fetch changes from the default remote
336 if self.get_git_version() >= (1, 9):
337 # fetch tags in addition to everything else
338 self.run_command(["fetch", "-q", "--tags"], cwd=dest)
339 else:
340 self.run_command(["fetch", "-q"], cwd=dest)
341 # Then reset to wanted revision (maybe even origin/master)
342 rev_options = self.resolve_revision(dest, url, rev_options)
343 cmd_args = make_command("reset", "--hard", "-q", rev_options.to_args())
344 self.run_command(cmd_args, cwd=dest)
345 #: update submodules
346 self.update_submodules(dest)
347
348 @classmethod
349 def get_remote_url(cls, location: str) -> str:
350 """
351 Return URL of the first remote encountered.
352
353 Raises RemoteNotFoundError if the repository does not have a remote
354 url configured.
355 """
356 # We need to pass 1 for extra_ok_returncodes since the command
357 # exits with return code 1 if there are no matching lines.
358 stdout = cls.run_command(
359 ["config", "--get-regexp", r"remote\..*\.url"],
360 extra_ok_returncodes=(1,),
361 show_stdout=False,
362 stdout_only=True,
363 cwd=location,
364 )
365 remotes = stdout.splitlines()
366 try:
367 found_remote = remotes[0]
368 except IndexError:
369 raise RemoteNotFoundError
370
371 for remote in remotes:
372 if remote.startswith("remote.origin.url "):
373 found_remote = remote
374 break
375 url = found_remote.split(" ")[1]
376 return cls._git_remote_to_pip_url(url.strip())
377
378 @staticmethod
379 def _git_remote_to_pip_url(url: str) -> str:
380 """
381 Convert a remote url from what git uses to what pip accepts.
382
383 There are 3 legal forms **url** may take:
384
385 1. A fully qualified url: ssh://[email protected]/foo/bar.git
386 2. A local project.git folder: /path/to/bare/repository.git
387 3. SCP shorthand for form 1: [email protected]:foo/bar.git
388
389 Form 1 is output as-is. Form 2 must be converted to URI and form 3 must
390 be converted to form 1.
391
392 See the corresponding test test_git_remote_url_to_pip() for examples of
393 sample inputs/outputs.
394 """
395 if re.match(r"\w+://", url):
396 # This is already valid. Pass it though as-is.
397 return url
398 if os.path.exists(url):
399 # A local bare remote (git clone --mirror).
400 # Needs a file:// prefix.
401 return pathlib.PurePath(url).as_uri()
402 scp_match = SCP_REGEX.match(url)
403 if scp_match:
404 # Add an ssh:// prefix and replace the ':' with a '/'.
405 return scp_match.expand(r"ssh://\1\2/\3")
406 # Otherwise, bail out.
407 raise RemoteNotValidError(url)
408
409 @classmethod
410 def has_commit(cls, location: str, rev: str) -> bool:
411 """
412 Check if rev is a commit that is available in the local repository.
413 """
414 try:
415 cls.run_command(
416 ["rev-parse", "-q", "--verify", "sha^" + rev],
417 cwd=location,
418 log_failed_cmd=False,
419 )
420 except InstallationError:
421 return False
422 else:
423 return True
424
425 @classmethod
426 def get_revision(cls, location: str, rev: Optional[str] = None) -> str:
427 if rev is None:
428 rev = "HEAD"
429 current_rev = cls.run_command(
430 ["rev-parse", rev],
431 show_stdout=False,
432 stdout_only=True,
433 cwd=location,
434 )
435 return current_rev.strip()
436
437 @classmethod
438 def get_subdirectory(cls, location: str) -> Optional[str]:
439 """
440 Return the path to Python project root, relative to the repo root.
441 Return None if the project root is in the repo root.
442 """
443 # find the repo root
444 git_dir = cls.run_command(
445 ["rev-parse", "--git-dir"],
446 show_stdout=False,
447 stdout_only=True,
448 cwd=location,
449 ).strip()
450 if not os.path.isabs(git_dir):
451 git_dir = os.path.join(location, git_dir)
452 repo_root = os.path.abspath(os.path.join(git_dir, ".."))
453 return find_path_to_project_root_from_repo_root(location, repo_root)
454
455 @classmethod
456 def get_url_rev_and_auth(cls, url: str) -> Tuple[str, Optional[str], AuthInfo]:
457 """
458 Prefixes stub URLs like 'user@hostname:user/repo.git' with 'ssh://'.
459 That's required because although they use SSH they sometimes don't
460 work with a ssh:// scheme (e.g. GitHub). But we need a scheme for
461 parsing. Hence we remove it again afterwards and return it as a stub.
462 """
463 # Works around an apparent Git bug
464 # (see https://article.gmane.org/gmane.comp.version-control.git/146500)
465 scheme, netloc, path, query, fragment = urlsplit(url)
466 if scheme.endswith("file"):
467 initial_slashes = path[: -len(path.lstrip("/"))]
468 newpath = initial_slashes + urllib.request.url2pathname(path).replace(
469 "\\", "/"
470 ).lstrip("/")
471 after_plus = scheme.find("+") + 1
472 url = scheme[:after_plus] + urlunsplit(
473 (scheme[after_plus:], netloc, newpath, query, fragment),
474 )
475
476 if "://" not in url:
477 assert "file:" not in url
478 url = url.replace("git+", "git+ssh://")
479 url, rev, user_pass = super().get_url_rev_and_auth(url)
480 url = url.replace("ssh://", "")
481 else:
482 url, rev, user_pass = super().get_url_rev_and_auth(url)
483
484 return url, rev, user_pass
485
486 @classmethod
487 def update_submodules(cls, location: str) -> None:
488 if not os.path.exists(os.path.join(location, ".gitmodules")):
489 return
490 cls.run_command(
491 ["submodule", "update", "--init", "--recursive", "-q"],
492 cwd=location,
493 )
494
495 @classmethod
496 def get_repository_root(cls, location: str) -> Optional[str]:
497 loc = super().get_repository_root(location)
498 if loc:
499 return loc
500 try:
501 r = cls.run_command(
502 ["rev-parse", "--show-toplevel"],
503 cwd=location,
504 show_stdout=False,
505 stdout_only=True,
506 on_returncode="raise",
507 log_failed_cmd=False,
508 )
509 except BadCommand:
510 logger.debug(
511 "could not determine if %s is under git control "
512 "because git is not available",
513 location,
514 )
515 return None
516 except InstallationError:
517 return None
518 return os.path.normpath(r.rstrip("\r\n"))
519
520 @staticmethod
521 def should_add_vcs_url_prefix(repo_url: str) -> bool:
522 """In either https or ssh form, requirements must be prefixed with git+."""
523 return True
524
525
526 vcs.register(Git)
527
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/pip/_internal/vcs/git.py b/src/pip/_internal/vcs/git.py
--- a/src/pip/_internal/vcs/git.py
+++ b/src/pip/_internal/vcs/git.py
@@ -101,7 +101,7 @@
if not match:
logger.warning("Can't parse git version: %s", version)
return ()
- return tuple(int(c) for c in match.groups())
+ return (int(match.group(1)), int(match.group(2)))
@classmethod
def get_current_branch(cls, location: str) -> Optional[str]:
| {"golden_diff": "diff --git a/src/pip/_internal/vcs/git.py b/src/pip/_internal/vcs/git.py\n--- a/src/pip/_internal/vcs/git.py\n+++ b/src/pip/_internal/vcs/git.py\n@@ -101,7 +101,7 @@\n if not match:\n logger.warning(\"Can't parse git version: %s\", version)\n return ()\n- return tuple(int(c) for c in match.groups())\n+ return (int(match.group(1)), int(match.group(2)))\n \n @classmethod\n def get_current_branch(cls, location: str) -> Optional[str]:\n", "issue": "TypeError when installing from git and git version contains a letter\n### Description\n\nI am trying to install a dependency from a git source and get an exception because the git version cannot be parsed\r\n\r\nGoing through the code (I have pip version 23.2.1 installed), it seems the issue is with my git version that contains an alphabetical patch version\u2026\r\n\r\n```bash\r\n$ git version\r\ngit version 2.37.GIT\r\n```\r\n\r\nIn `git.py:100` the match produces 3 groups with the last one being `None` because of the way the `GIT_VERSION_REGEX` is built. That in turns create a problem in `git.py:104` because `tuple(int(c) for c in match.groups())` doesn't work with the `None` value.\n\n### Expected behavior\n\nI would expect pip to accept the major and minor git version as they are and go on without a numeric patch version. But I can't tell why it is checking the version numbers.\n\n### pip version\n\n23.2.1\n\n### Python version\n\n3.9.9\n\n### OS\n\nDebian 11.5\n\n### How to Reproduce\n\nI can't share the code but what I do is simply `pip install \"my_package @ git+https://my_gitlab/my_repo\"`\n\n### Output\n\nThis is the full stacktrace :\r\n\r\n```\r\nERROR: Exception:\r\nTraceback (most recent call last):\r\n File \"/home/lcottereau/my_project/env/lib/python3.9/site-packages/pip/_internal/cli/base_command.py\", line 180, in exc_logging_wrapper\r\n status = run_func(*args)\r\n File \"/home/lcottereau/my_project/env/lib/python3.9/site-packages/pip/_internal/cli/req_command.py\", line 248, in wrapper\r\n return func(self, options, args)\r\n File \"/home/lcottereau/my_project/env/lib/python3.9/site-packages/pip/_internal/commands/install.py\", line 377, in run\r\n requirement_set = resolver.resolve(\r\n File \"/home/lcottereau/my_project/env/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/resolver.py\", line 73, in resolve\r\n collected = self.factory.collect_root_requirements(root_reqs)\r\n File \"/home/lcottereau/my_project/env/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/factory.py\", line 491, in collect_root_requirements\r\n req = self._make_requirement_from_install_req(\r\n File \"/home/lcottereau/my_project/env/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/factory.py\", line 453, in _make_requirement_from_install_req\r\n cand = self._make_candidate_from_link(\r\n File \"/home/lcottereau/my_project/env/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/factory.py\", line 206, in _make_candidate_from_link\r\n self._link_candidate_cache[link] = LinkCandidate(\r\n File \"/home/lcottereau/my_project/env/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/candidates.py\", line 293, in __init__\r\n super().__init__(\r\n File \"/home/lcottereau/my_project/env/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/candidates.py\", line 156, in __init__\r\n self.dist = self._prepare()\r\n File \"/home/lcottereau/my_project/env/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/candidates.py\", line 225, in _prepare\r\n dist = 
self._prepare_distribution()\r\n File \"/home/lcottereau/my_project/env/lib/python3.9/site-packages/pip/_internal/resolution/resolvelib/candidates.py\", line 304, in _prepare_distribution\r\n return preparer.prepare_linked_requirement(self._ireq, parallel_builds=True)\r\n File \"/home/lcottereau/my_project/env/lib/python3.9/site-packages/pip/_internal/operations/prepare.py\", line 538, in prepare_linked_requirement\r\n return self._prepare_linked_requirement(req, parallel_builds)\r\n File \"/home/lcottereau/my_project/env/lib/python3.9/site-packages/pip/_internal/operations/prepare.py\", line 609, in _prepare_linked_requirement\r\n local_file = unpack_url(\r\n File \"/home/lcottereau/my_project/env/lib/python3.9/site-packages/pip/_internal/operations/prepare.py\", line 155, in unpack_url\r\n unpack_vcs_link(link, location, verbosity=verbosity)\r\n File \"/home/lcottereau/my_project/env/lib/python3.9/site-packages/pip/_internal/operations/prepare.py\", line 78, in unpack_vcs_link\r\n vcs_backend.unpack(location, url=hide_url(link.url), verbosity=verbosity)\r\n File \"/home/lcottereau/my_project/env/lib/python3.9/site-packages/pip/_internal/vcs/versioncontrol.py\", line 608, in unpack\r\n self.obtain(location, url=url, verbosity=verbosity)\r\n File \"/home/lcottereau/my_project/env/lib/python3.9/site-packages/pip/_internal/vcs/versioncontrol.py\", line 521, in obtain\r\n self.fetch_new(dest, url, rev_options, verbosity=verbosity)\r\n File \"/home/lcottereau/my_project/env/lib/python3.9/site-packages/pip/_internal/vcs/git.py\", line 272, in fetch_new\r\n if self.get_git_version() >= (2, 17):\r\n File \"/home/lcottereau/my_project/env/lib/python3.9/site-packages/pip/_internal/vcs/git.py\", line 104, in get_git_version\r\n return tuple(int(c) for c in match.groups())\r\n File \"/home/lcottereau/my_project/env/lib/python3.9/site-packages/pip/_internal/vcs/git.py\", line 104, in <genexpr>\r\n return tuple(int(c) for c in match.groups())\r\nTypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'\r\n```\n\n### Code of Conduct\n\n- [X] I agree to follow the [PSF Code of Conduct](https://www.python.org/psf/conduct/).\n", "before_files": [{"content": "import logging\nimport os.path\nimport pathlib\nimport re\nimport urllib.parse\nimport urllib.request\nfrom typing import List, Optional, Tuple\n\nfrom pip._internal.exceptions import BadCommand, InstallationError\nfrom pip._internal.utils.misc import HiddenText, display_path, hide_url\nfrom pip._internal.utils.subprocess import make_command\nfrom pip._internal.vcs.versioncontrol import (\n AuthInfo,\n RemoteNotFoundError,\n RemoteNotValidError,\n RevOptions,\n VersionControl,\n find_path_to_project_root_from_repo_root,\n vcs,\n)\n\nurlsplit = urllib.parse.urlsplit\nurlunsplit = urllib.parse.urlunsplit\n\n\nlogger = logging.getLogger(__name__)\n\n\nGIT_VERSION_REGEX = re.compile(\n r\"^git version \" # Prefix.\n r\"(\\d+)\" # Major.\n r\"\\.(\\d+)\" # Dot, minor.\n r\"(?:\\.(\\d+))?\" # Optional dot, patch.\n r\".*$\" # Suffix, including any pre- and post-release segments we don't care about.\n)\n\nHASH_REGEX = re.compile(\"^[a-fA-F0-9]{40}$\")\n\n# SCP (Secure copy protocol) shorthand. e.g. '[email protected]:foo/bar.git'\nSCP_REGEX = re.compile(\n r\"\"\"^\n # Optional user, e.g. 'git@'\n (\\w+@)?\n # Server, e.g. 'github.com'.\n ([^/:]+):\n # The server-side path. e.g. 'user/project.git'. 
Must start with an\n # alphanumeric character so as not to be confusable with a Windows paths\n # like 'C:/foo/bar' or 'C:\\foo\\bar'.\n (\\w[^:]*)\n $\"\"\",\n re.VERBOSE,\n)\n\n\ndef looks_like_hash(sha: str) -> bool:\n return bool(HASH_REGEX.match(sha))\n\n\nclass Git(VersionControl):\n name = \"git\"\n dirname = \".git\"\n repo_name = \"clone\"\n schemes = (\n \"git+http\",\n \"git+https\",\n \"git+ssh\",\n \"git+git\",\n \"git+file\",\n )\n # Prevent the user's environment variables from interfering with pip:\n # https://github.com/pypa/pip/issues/1130\n unset_environ = (\"GIT_DIR\", \"GIT_WORK_TREE\")\n default_arg_rev = \"HEAD\"\n\n @staticmethod\n def get_base_rev_args(rev: str) -> List[str]:\n return [rev]\n\n def is_immutable_rev_checkout(self, url: str, dest: str) -> bool:\n _, rev_options = self.get_url_rev_options(hide_url(url))\n if not rev_options.rev:\n return False\n if not self.is_commit_id_equal(dest, rev_options.rev):\n # the current commit is different from rev,\n # which means rev was something else than a commit hash\n return False\n # return False in the rare case rev is both a commit hash\n # and a tag or a branch; we don't want to cache in that case\n # because that branch/tag could point to something else in the future\n is_tag_or_branch = bool(self.get_revision_sha(dest, rev_options.rev)[0])\n return not is_tag_or_branch\n\n def get_git_version(self) -> Tuple[int, ...]:\n version = self.run_command(\n [\"version\"],\n command_desc=\"git version\",\n show_stdout=False,\n stdout_only=True,\n )\n match = GIT_VERSION_REGEX.match(version)\n if not match:\n logger.warning(\"Can't parse git version: %s\", version)\n return ()\n return tuple(int(c) for c in match.groups())\n\n @classmethod\n def get_current_branch(cls, location: str) -> Optional[str]:\n \"\"\"\n Return the current branch, or None if HEAD isn't at a branch\n (e.g. detached HEAD).\n \"\"\"\n # git-symbolic-ref exits with empty stdout if \"HEAD\" is a detached\n # HEAD rather than a symbolic ref. 
In addition, the -q causes the\n # command to exit with status code 1 instead of 128 in this case\n # and to suppress the message to stderr.\n args = [\"symbolic-ref\", \"-q\", \"HEAD\"]\n output = cls.run_command(\n args,\n extra_ok_returncodes=(1,),\n show_stdout=False,\n stdout_only=True,\n cwd=location,\n )\n ref = output.strip()\n\n if ref.startswith(\"refs/heads/\"):\n return ref[len(\"refs/heads/\") :]\n\n return None\n\n @classmethod\n def get_revision_sha(cls, dest: str, rev: str) -> Tuple[Optional[str], bool]:\n \"\"\"\n Return (sha_or_none, is_branch), where sha_or_none is a commit hash\n if the revision names a remote branch or tag, otherwise None.\n\n Args:\n dest: the repository directory.\n rev: the revision name.\n \"\"\"\n # Pass rev to pre-filter the list.\n output = cls.run_command(\n [\"show-ref\", rev],\n cwd=dest,\n show_stdout=False,\n stdout_only=True,\n on_returncode=\"ignore\",\n )\n refs = {}\n # NOTE: We do not use splitlines here since that would split on other\n # unicode separators, which can be maliciously used to install a\n # different revision.\n for line in output.strip().split(\"\\n\"):\n line = line.rstrip(\"\\r\")\n if not line:\n continue\n try:\n ref_sha, ref_name = line.split(\" \", maxsplit=2)\n except ValueError:\n # Include the offending line to simplify troubleshooting if\n # this error ever occurs.\n raise ValueError(f\"unexpected show-ref line: {line!r}\")\n\n refs[ref_name] = ref_sha\n\n branch_ref = f\"refs/remotes/origin/{rev}\"\n tag_ref = f\"refs/tags/{rev}\"\n\n sha = refs.get(branch_ref)\n if sha is not None:\n return (sha, True)\n\n sha = refs.get(tag_ref)\n\n return (sha, False)\n\n @classmethod\n def _should_fetch(cls, dest: str, rev: str) -> bool:\n \"\"\"\n Return true if rev is a ref or is a commit that we don't have locally.\n\n Branches and tags are not considered in this method because they are\n assumed to be always available locally (which is a normal outcome of\n ``git clone`` and ``git fetch --tags``).\n \"\"\"\n if rev.startswith(\"refs/\"):\n # Always fetch remote refs.\n return True\n\n if not looks_like_hash(rev):\n # Git fetch would fail with abbreviated commits.\n return False\n\n if cls.has_commit(dest, rev):\n # Don't fetch if we have the commit locally.\n return False\n\n return True\n\n @classmethod\n def resolve_revision(\n cls, dest: str, url: HiddenText, rev_options: RevOptions\n ) -> RevOptions:\n \"\"\"\n Resolve a revision to a new RevOptions object with the SHA1 of the\n branch, tag, or ref if found.\n\n Args:\n rev_options: a RevOptions object.\n \"\"\"\n rev = rev_options.arg_rev\n # The arg_rev property's implementation for Git ensures that the\n # rev return value is always non-None.\n assert rev is not None\n\n sha, is_branch = cls.get_revision_sha(dest, rev)\n\n if sha is not None:\n rev_options = rev_options.make_new(sha)\n rev_options.branch_name = rev if is_branch else None\n\n return rev_options\n\n # Do not show a warning for the common case of something that has\n # the form of a Git commit hash.\n if not looks_like_hash(rev):\n logger.warning(\n \"Did not find branch or tag '%s', assuming revision or ref.\",\n rev,\n )\n\n if not cls._should_fetch(dest, rev):\n return rev_options\n\n # fetch the requested revision\n cls.run_command(\n make_command(\"fetch\", \"-q\", url, rev_options.to_args()),\n cwd=dest,\n )\n # Change the revision to the SHA of the ref we fetched\n sha = cls.get_revision(dest, rev=\"FETCH_HEAD\")\n rev_options = rev_options.make_new(sha)\n\n return rev_options\n\n 
@classmethod\n def is_commit_id_equal(cls, dest: str, name: Optional[str]) -> bool:\n \"\"\"\n Return whether the current commit hash equals the given name.\n\n Args:\n dest: the repository directory.\n name: a string name.\n \"\"\"\n if not name:\n # Then avoid an unnecessary subprocess call.\n return False\n\n return cls.get_revision(dest) == name\n\n def fetch_new(\n self, dest: str, url: HiddenText, rev_options: RevOptions, verbosity: int\n ) -> None:\n rev_display = rev_options.to_display()\n logger.info(\"Cloning %s%s to %s\", url, rev_display, display_path(dest))\n if verbosity <= 0:\n flags: Tuple[str, ...] = (\"--quiet\",)\n elif verbosity == 1:\n flags = ()\n else:\n flags = (\"--verbose\", \"--progress\")\n if self.get_git_version() >= (2, 17):\n # Git added support for partial clone in 2.17\n # https://git-scm.com/docs/partial-clone\n # Speeds up cloning by functioning without a complete copy of repository\n self.run_command(\n make_command(\n \"clone\",\n \"--filter=blob:none\",\n *flags,\n url,\n dest,\n )\n )\n else:\n self.run_command(make_command(\"clone\", *flags, url, dest))\n\n if rev_options.rev:\n # Then a specific revision was requested.\n rev_options = self.resolve_revision(dest, url, rev_options)\n branch_name = getattr(rev_options, \"branch_name\", None)\n logger.debug(\"Rev options %s, branch_name %s\", rev_options, branch_name)\n if branch_name is None:\n # Only do a checkout if the current commit id doesn't match\n # the requested revision.\n if not self.is_commit_id_equal(dest, rev_options.rev):\n cmd_args = make_command(\n \"checkout\",\n \"-q\",\n rev_options.to_args(),\n )\n self.run_command(cmd_args, cwd=dest)\n elif self.get_current_branch(dest) != branch_name:\n # Then a specific branch was requested, and that branch\n # is not yet checked out.\n track_branch = f\"origin/{branch_name}\"\n cmd_args = [\n \"checkout\",\n \"-b\",\n branch_name,\n \"--track\",\n track_branch,\n ]\n self.run_command(cmd_args, cwd=dest)\n else:\n sha = self.get_revision(dest)\n rev_options = rev_options.make_new(sha)\n\n logger.info(\"Resolved %s to commit %s\", url, rev_options.rev)\n\n #: repo may contain submodules\n self.update_submodules(dest)\n\n def switch(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None:\n self.run_command(\n make_command(\"config\", \"remote.origin.url\", url),\n cwd=dest,\n )\n cmd_args = make_command(\"checkout\", \"-q\", rev_options.to_args())\n self.run_command(cmd_args, cwd=dest)\n\n self.update_submodules(dest)\n\n def update(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None:\n # First fetch changes from the default remote\n if self.get_git_version() >= (1, 9):\n # fetch tags in addition to everything else\n self.run_command([\"fetch\", \"-q\", \"--tags\"], cwd=dest)\n else:\n self.run_command([\"fetch\", \"-q\"], cwd=dest)\n # Then reset to wanted revision (maybe even origin/master)\n rev_options = self.resolve_revision(dest, url, rev_options)\n cmd_args = make_command(\"reset\", \"--hard\", \"-q\", rev_options.to_args())\n self.run_command(cmd_args, cwd=dest)\n #: update submodules\n self.update_submodules(dest)\n\n @classmethod\n def get_remote_url(cls, location: str) -> str:\n \"\"\"\n Return URL of the first remote encountered.\n\n Raises RemoteNotFoundError if the repository does not have a remote\n url configured.\n \"\"\"\n # We need to pass 1 for extra_ok_returncodes since the command\n # exits with return code 1 if there are no matching lines.\n stdout = cls.run_command(\n [\"config\", 
\"--get-regexp\", r\"remote\\..*\\.url\"],\n extra_ok_returncodes=(1,),\n show_stdout=False,\n stdout_only=True,\n cwd=location,\n )\n remotes = stdout.splitlines()\n try:\n found_remote = remotes[0]\n except IndexError:\n raise RemoteNotFoundError\n\n for remote in remotes:\n if remote.startswith(\"remote.origin.url \"):\n found_remote = remote\n break\n url = found_remote.split(\" \")[1]\n return cls._git_remote_to_pip_url(url.strip())\n\n @staticmethod\n def _git_remote_to_pip_url(url: str) -> str:\n \"\"\"\n Convert a remote url from what git uses to what pip accepts.\n\n There are 3 legal forms **url** may take:\n\n 1. A fully qualified url: ssh://[email protected]/foo/bar.git\n 2. A local project.git folder: /path/to/bare/repository.git\n 3. SCP shorthand for form 1: [email protected]:foo/bar.git\n\n Form 1 is output as-is. Form 2 must be converted to URI and form 3 must\n be converted to form 1.\n\n See the corresponding test test_git_remote_url_to_pip() for examples of\n sample inputs/outputs.\n \"\"\"\n if re.match(r\"\\w+://\", url):\n # This is already valid. Pass it though as-is.\n return url\n if os.path.exists(url):\n # A local bare remote (git clone --mirror).\n # Needs a file:// prefix.\n return pathlib.PurePath(url).as_uri()\n scp_match = SCP_REGEX.match(url)\n if scp_match:\n # Add an ssh:// prefix and replace the ':' with a '/'.\n return scp_match.expand(r\"ssh://\\1\\2/\\3\")\n # Otherwise, bail out.\n raise RemoteNotValidError(url)\n\n @classmethod\n def has_commit(cls, location: str, rev: str) -> bool:\n \"\"\"\n Check if rev is a commit that is available in the local repository.\n \"\"\"\n try:\n cls.run_command(\n [\"rev-parse\", \"-q\", \"--verify\", \"sha^\" + rev],\n cwd=location,\n log_failed_cmd=False,\n )\n except InstallationError:\n return False\n else:\n return True\n\n @classmethod\n def get_revision(cls, location: str, rev: Optional[str] = None) -> str:\n if rev is None:\n rev = \"HEAD\"\n current_rev = cls.run_command(\n [\"rev-parse\", rev],\n show_stdout=False,\n stdout_only=True,\n cwd=location,\n )\n return current_rev.strip()\n\n @classmethod\n def get_subdirectory(cls, location: str) -> Optional[str]:\n \"\"\"\n Return the path to Python project root, relative to the repo root.\n Return None if the project root is in the repo root.\n \"\"\"\n # find the repo root\n git_dir = cls.run_command(\n [\"rev-parse\", \"--git-dir\"],\n show_stdout=False,\n stdout_only=True,\n cwd=location,\n ).strip()\n if not os.path.isabs(git_dir):\n git_dir = os.path.join(location, git_dir)\n repo_root = os.path.abspath(os.path.join(git_dir, \"..\"))\n return find_path_to_project_root_from_repo_root(location, repo_root)\n\n @classmethod\n def get_url_rev_and_auth(cls, url: str) -> Tuple[str, Optional[str], AuthInfo]:\n \"\"\"\n Prefixes stub URLs like 'user@hostname:user/repo.git' with 'ssh://'.\n That's required because although they use SSH they sometimes don't\n work with a ssh:// scheme (e.g. GitHub). But we need a scheme for\n parsing. 
Hence we remove it again afterwards and return it as a stub.\n \"\"\"\n # Works around an apparent Git bug\n # (see https://article.gmane.org/gmane.comp.version-control.git/146500)\n scheme, netloc, path, query, fragment = urlsplit(url)\n if scheme.endswith(\"file\"):\n initial_slashes = path[: -len(path.lstrip(\"/\"))]\n newpath = initial_slashes + urllib.request.url2pathname(path).replace(\n \"\\\\\", \"/\"\n ).lstrip(\"/\")\n after_plus = scheme.find(\"+\") + 1\n url = scheme[:after_plus] + urlunsplit(\n (scheme[after_plus:], netloc, newpath, query, fragment),\n )\n\n if \"://\" not in url:\n assert \"file:\" not in url\n url = url.replace(\"git+\", \"git+ssh://\")\n url, rev, user_pass = super().get_url_rev_and_auth(url)\n url = url.replace(\"ssh://\", \"\")\n else:\n url, rev, user_pass = super().get_url_rev_and_auth(url)\n\n return url, rev, user_pass\n\n @classmethod\n def update_submodules(cls, location: str) -> None:\n if not os.path.exists(os.path.join(location, \".gitmodules\")):\n return\n cls.run_command(\n [\"submodule\", \"update\", \"--init\", \"--recursive\", \"-q\"],\n cwd=location,\n )\n\n @classmethod\n def get_repository_root(cls, location: str) -> Optional[str]:\n loc = super().get_repository_root(location)\n if loc:\n return loc\n try:\n r = cls.run_command(\n [\"rev-parse\", \"--show-toplevel\"],\n cwd=location,\n show_stdout=False,\n stdout_only=True,\n on_returncode=\"raise\",\n log_failed_cmd=False,\n )\n except BadCommand:\n logger.debug(\n \"could not determine if %s is under git control \"\n \"because git is not available\",\n location,\n )\n return None\n except InstallationError:\n return None\n return os.path.normpath(r.rstrip(\"\\r\\n\"))\n\n @staticmethod\n def should_add_vcs_url_prefix(repo_url: str) -> bool:\n \"\"\"In either https or ssh form, requirements must be prefixed with git+.\"\"\"\n return True\n\n\nvcs.register(Git)\n", "path": "src/pip/_internal/vcs/git.py"}], "after_files": [{"content": "import logging\nimport os.path\nimport pathlib\nimport re\nimport urllib.parse\nimport urllib.request\nfrom typing import List, Optional, Tuple\n\nfrom pip._internal.exceptions import BadCommand, InstallationError\nfrom pip._internal.utils.misc import HiddenText, display_path, hide_url\nfrom pip._internal.utils.subprocess import make_command\nfrom pip._internal.vcs.versioncontrol import (\n AuthInfo,\n RemoteNotFoundError,\n RemoteNotValidError,\n RevOptions,\n VersionControl,\n find_path_to_project_root_from_repo_root,\n vcs,\n)\n\nurlsplit = urllib.parse.urlsplit\nurlunsplit = urllib.parse.urlunsplit\n\n\nlogger = logging.getLogger(__name__)\n\n\nGIT_VERSION_REGEX = re.compile(\n r\"^git version \" # Prefix.\n r\"(\\d+)\" # Major.\n r\"\\.(\\d+)\" # Dot, minor.\n r\"(?:\\.(\\d+))?\" # Optional dot, patch.\n r\".*$\" # Suffix, including any pre- and post-release segments we don't care about.\n)\n\nHASH_REGEX = re.compile(\"^[a-fA-F0-9]{40}$\")\n\n# SCP (Secure copy protocol) shorthand. e.g. '[email protected]:foo/bar.git'\nSCP_REGEX = re.compile(\n r\"\"\"^\n # Optional user, e.g. 'git@'\n (\\w+@)?\n # Server, e.g. 'github.com'.\n ([^/:]+):\n # The server-side path. e.g. 'user/project.git'. 
Must start with an\n # alphanumeric character so as not to be confusable with a Windows paths\n # like 'C:/foo/bar' or 'C:\\foo\\bar'.\n (\\w[^:]*)\n $\"\"\",\n re.VERBOSE,\n)\n\n\ndef looks_like_hash(sha: str) -> bool:\n return bool(HASH_REGEX.match(sha))\n\n\nclass Git(VersionControl):\n name = \"git\"\n dirname = \".git\"\n repo_name = \"clone\"\n schemes = (\n \"git+http\",\n \"git+https\",\n \"git+ssh\",\n \"git+git\",\n \"git+file\",\n )\n # Prevent the user's environment variables from interfering with pip:\n # https://github.com/pypa/pip/issues/1130\n unset_environ = (\"GIT_DIR\", \"GIT_WORK_TREE\")\n default_arg_rev = \"HEAD\"\n\n @staticmethod\n def get_base_rev_args(rev: str) -> List[str]:\n return [rev]\n\n def is_immutable_rev_checkout(self, url: str, dest: str) -> bool:\n _, rev_options = self.get_url_rev_options(hide_url(url))\n if not rev_options.rev:\n return False\n if not self.is_commit_id_equal(dest, rev_options.rev):\n # the current commit is different from rev,\n # which means rev was something else than a commit hash\n return False\n # return False in the rare case rev is both a commit hash\n # and a tag or a branch; we don't want to cache in that case\n # because that branch/tag could point to something else in the future\n is_tag_or_branch = bool(self.get_revision_sha(dest, rev_options.rev)[0])\n return not is_tag_or_branch\n\n def get_git_version(self) -> Tuple[int, ...]:\n version = self.run_command(\n [\"version\"],\n command_desc=\"git version\",\n show_stdout=False,\n stdout_only=True,\n )\n match = GIT_VERSION_REGEX.match(version)\n if not match:\n logger.warning(\"Can't parse git version: %s\", version)\n return ()\n return (int(match.group(1)), int(match.group(2)))\n\n @classmethod\n def get_current_branch(cls, location: str) -> Optional[str]:\n \"\"\"\n Return the current branch, or None if HEAD isn't at a branch\n (e.g. detached HEAD).\n \"\"\"\n # git-symbolic-ref exits with empty stdout if \"HEAD\" is a detached\n # HEAD rather than a symbolic ref. 
In addition, the -q causes the\n # command to exit with status code 1 instead of 128 in this case\n # and to suppress the message to stderr.\n args = [\"symbolic-ref\", \"-q\", \"HEAD\"]\n output = cls.run_command(\n args,\n extra_ok_returncodes=(1,),\n show_stdout=False,\n stdout_only=True,\n cwd=location,\n )\n ref = output.strip()\n\n if ref.startswith(\"refs/heads/\"):\n return ref[len(\"refs/heads/\") :]\n\n return None\n\n @classmethod\n def get_revision_sha(cls, dest: str, rev: str) -> Tuple[Optional[str], bool]:\n \"\"\"\n Return (sha_or_none, is_branch), where sha_or_none is a commit hash\n if the revision names a remote branch or tag, otherwise None.\n\n Args:\n dest: the repository directory.\n rev: the revision name.\n \"\"\"\n # Pass rev to pre-filter the list.\n output = cls.run_command(\n [\"show-ref\", rev],\n cwd=dest,\n show_stdout=False,\n stdout_only=True,\n on_returncode=\"ignore\",\n )\n refs = {}\n # NOTE: We do not use splitlines here since that would split on other\n # unicode separators, which can be maliciously used to install a\n # different revision.\n for line in output.strip().split(\"\\n\"):\n line = line.rstrip(\"\\r\")\n if not line:\n continue\n try:\n ref_sha, ref_name = line.split(\" \", maxsplit=2)\n except ValueError:\n # Include the offending line to simplify troubleshooting if\n # this error ever occurs.\n raise ValueError(f\"unexpected show-ref line: {line!r}\")\n\n refs[ref_name] = ref_sha\n\n branch_ref = f\"refs/remotes/origin/{rev}\"\n tag_ref = f\"refs/tags/{rev}\"\n\n sha = refs.get(branch_ref)\n if sha is not None:\n return (sha, True)\n\n sha = refs.get(tag_ref)\n\n return (sha, False)\n\n @classmethod\n def _should_fetch(cls, dest: str, rev: str) -> bool:\n \"\"\"\n Return true if rev is a ref or is a commit that we don't have locally.\n\n Branches and tags are not considered in this method because they are\n assumed to be always available locally (which is a normal outcome of\n ``git clone`` and ``git fetch --tags``).\n \"\"\"\n if rev.startswith(\"refs/\"):\n # Always fetch remote refs.\n return True\n\n if not looks_like_hash(rev):\n # Git fetch would fail with abbreviated commits.\n return False\n\n if cls.has_commit(dest, rev):\n # Don't fetch if we have the commit locally.\n return False\n\n return True\n\n @classmethod\n def resolve_revision(\n cls, dest: str, url: HiddenText, rev_options: RevOptions\n ) -> RevOptions:\n \"\"\"\n Resolve a revision to a new RevOptions object with the SHA1 of the\n branch, tag, or ref if found.\n\n Args:\n rev_options: a RevOptions object.\n \"\"\"\n rev = rev_options.arg_rev\n # The arg_rev property's implementation for Git ensures that the\n # rev return value is always non-None.\n assert rev is not None\n\n sha, is_branch = cls.get_revision_sha(dest, rev)\n\n if sha is not None:\n rev_options = rev_options.make_new(sha)\n rev_options.branch_name = rev if is_branch else None\n\n return rev_options\n\n # Do not show a warning for the common case of something that has\n # the form of a Git commit hash.\n if not looks_like_hash(rev):\n logger.warning(\n \"Did not find branch or tag '%s', assuming revision or ref.\",\n rev,\n )\n\n if not cls._should_fetch(dest, rev):\n return rev_options\n\n # fetch the requested revision\n cls.run_command(\n make_command(\"fetch\", \"-q\", url, rev_options.to_args()),\n cwd=dest,\n )\n # Change the revision to the SHA of the ref we fetched\n sha = cls.get_revision(dest, rev=\"FETCH_HEAD\")\n rev_options = rev_options.make_new(sha)\n\n return rev_options\n\n 
@classmethod\n def is_commit_id_equal(cls, dest: str, name: Optional[str]) -> bool:\n \"\"\"\n Return whether the current commit hash equals the given name.\n\n Args:\n dest: the repository directory.\n name: a string name.\n \"\"\"\n if not name:\n # Then avoid an unnecessary subprocess call.\n return False\n\n return cls.get_revision(dest) == name\n\n def fetch_new(\n self, dest: str, url: HiddenText, rev_options: RevOptions, verbosity: int\n ) -> None:\n rev_display = rev_options.to_display()\n logger.info(\"Cloning %s%s to %s\", url, rev_display, display_path(dest))\n if verbosity <= 0:\n flags: Tuple[str, ...] = (\"--quiet\",)\n elif verbosity == 1:\n flags = ()\n else:\n flags = (\"--verbose\", \"--progress\")\n if self.get_git_version() >= (2, 17):\n # Git added support for partial clone in 2.17\n # https://git-scm.com/docs/partial-clone\n # Speeds up cloning by functioning without a complete copy of repository\n self.run_command(\n make_command(\n \"clone\",\n \"--filter=blob:none\",\n *flags,\n url,\n dest,\n )\n )\n else:\n self.run_command(make_command(\"clone\", *flags, url, dest))\n\n if rev_options.rev:\n # Then a specific revision was requested.\n rev_options = self.resolve_revision(dest, url, rev_options)\n branch_name = getattr(rev_options, \"branch_name\", None)\n logger.debug(\"Rev options %s, branch_name %s\", rev_options, branch_name)\n if branch_name is None:\n # Only do a checkout if the current commit id doesn't match\n # the requested revision.\n if not self.is_commit_id_equal(dest, rev_options.rev):\n cmd_args = make_command(\n \"checkout\",\n \"-q\",\n rev_options.to_args(),\n )\n self.run_command(cmd_args, cwd=dest)\n elif self.get_current_branch(dest) != branch_name:\n # Then a specific branch was requested, and that branch\n # is not yet checked out.\n track_branch = f\"origin/{branch_name}\"\n cmd_args = [\n \"checkout\",\n \"-b\",\n branch_name,\n \"--track\",\n track_branch,\n ]\n self.run_command(cmd_args, cwd=dest)\n else:\n sha = self.get_revision(dest)\n rev_options = rev_options.make_new(sha)\n\n logger.info(\"Resolved %s to commit %s\", url, rev_options.rev)\n\n #: repo may contain submodules\n self.update_submodules(dest)\n\n def switch(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None:\n self.run_command(\n make_command(\"config\", \"remote.origin.url\", url),\n cwd=dest,\n )\n cmd_args = make_command(\"checkout\", \"-q\", rev_options.to_args())\n self.run_command(cmd_args, cwd=dest)\n\n self.update_submodules(dest)\n\n def update(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None:\n # First fetch changes from the default remote\n if self.get_git_version() >= (1, 9):\n # fetch tags in addition to everything else\n self.run_command([\"fetch\", \"-q\", \"--tags\"], cwd=dest)\n else:\n self.run_command([\"fetch\", \"-q\"], cwd=dest)\n # Then reset to wanted revision (maybe even origin/master)\n rev_options = self.resolve_revision(dest, url, rev_options)\n cmd_args = make_command(\"reset\", \"--hard\", \"-q\", rev_options.to_args())\n self.run_command(cmd_args, cwd=dest)\n #: update submodules\n self.update_submodules(dest)\n\n @classmethod\n def get_remote_url(cls, location: str) -> str:\n \"\"\"\n Return URL of the first remote encountered.\n\n Raises RemoteNotFoundError if the repository does not have a remote\n url configured.\n \"\"\"\n # We need to pass 1 for extra_ok_returncodes since the command\n # exits with return code 1 if there are no matching lines.\n stdout = cls.run_command(\n [\"config\", 
\"--get-regexp\", r\"remote\\..*\\.url\"],\n extra_ok_returncodes=(1,),\n show_stdout=False,\n stdout_only=True,\n cwd=location,\n )\n remotes = stdout.splitlines()\n try:\n found_remote = remotes[0]\n except IndexError:\n raise RemoteNotFoundError\n\n for remote in remotes:\n if remote.startswith(\"remote.origin.url \"):\n found_remote = remote\n break\n url = found_remote.split(\" \")[1]\n return cls._git_remote_to_pip_url(url.strip())\n\n @staticmethod\n def _git_remote_to_pip_url(url: str) -> str:\n \"\"\"\n Convert a remote url from what git uses to what pip accepts.\n\n There are 3 legal forms **url** may take:\n\n 1. A fully qualified url: ssh://[email protected]/foo/bar.git\n 2. A local project.git folder: /path/to/bare/repository.git\n 3. SCP shorthand for form 1: [email protected]:foo/bar.git\n\n Form 1 is output as-is. Form 2 must be converted to URI and form 3 must\n be converted to form 1.\n\n See the corresponding test test_git_remote_url_to_pip() for examples of\n sample inputs/outputs.\n \"\"\"\n if re.match(r\"\\w+://\", url):\n # This is already valid. Pass it though as-is.\n return url\n if os.path.exists(url):\n # A local bare remote (git clone --mirror).\n # Needs a file:// prefix.\n return pathlib.PurePath(url).as_uri()\n scp_match = SCP_REGEX.match(url)\n if scp_match:\n # Add an ssh:// prefix and replace the ':' with a '/'.\n return scp_match.expand(r\"ssh://\\1\\2/\\3\")\n # Otherwise, bail out.\n raise RemoteNotValidError(url)\n\n @classmethod\n def has_commit(cls, location: str, rev: str) -> bool:\n \"\"\"\n Check if rev is a commit that is available in the local repository.\n \"\"\"\n try:\n cls.run_command(\n [\"rev-parse\", \"-q\", \"--verify\", \"sha^\" + rev],\n cwd=location,\n log_failed_cmd=False,\n )\n except InstallationError:\n return False\n else:\n return True\n\n @classmethod\n def get_revision(cls, location: str, rev: Optional[str] = None) -> str:\n if rev is None:\n rev = \"HEAD\"\n current_rev = cls.run_command(\n [\"rev-parse\", rev],\n show_stdout=False,\n stdout_only=True,\n cwd=location,\n )\n return current_rev.strip()\n\n @classmethod\n def get_subdirectory(cls, location: str) -> Optional[str]:\n \"\"\"\n Return the path to Python project root, relative to the repo root.\n Return None if the project root is in the repo root.\n \"\"\"\n # find the repo root\n git_dir = cls.run_command(\n [\"rev-parse\", \"--git-dir\"],\n show_stdout=False,\n stdout_only=True,\n cwd=location,\n ).strip()\n if not os.path.isabs(git_dir):\n git_dir = os.path.join(location, git_dir)\n repo_root = os.path.abspath(os.path.join(git_dir, \"..\"))\n return find_path_to_project_root_from_repo_root(location, repo_root)\n\n @classmethod\n def get_url_rev_and_auth(cls, url: str) -> Tuple[str, Optional[str], AuthInfo]:\n \"\"\"\n Prefixes stub URLs like 'user@hostname:user/repo.git' with 'ssh://'.\n That's required because although they use SSH they sometimes don't\n work with a ssh:// scheme (e.g. GitHub). But we need a scheme for\n parsing. 
Hence we remove it again afterwards and return it as a stub.\n \"\"\"\n # Works around an apparent Git bug\n # (see https://article.gmane.org/gmane.comp.version-control.git/146500)\n scheme, netloc, path, query, fragment = urlsplit(url)\n if scheme.endswith(\"file\"):\n initial_slashes = path[: -len(path.lstrip(\"/\"))]\n newpath = initial_slashes + urllib.request.url2pathname(path).replace(\n \"\\\\\", \"/\"\n ).lstrip(\"/\")\n after_plus = scheme.find(\"+\") + 1\n url = scheme[:after_plus] + urlunsplit(\n (scheme[after_plus:], netloc, newpath, query, fragment),\n )\n\n if \"://\" not in url:\n assert \"file:\" not in url\n url = url.replace(\"git+\", \"git+ssh://\")\n url, rev, user_pass = super().get_url_rev_and_auth(url)\n url = url.replace(\"ssh://\", \"\")\n else:\n url, rev, user_pass = super().get_url_rev_and_auth(url)\n\n return url, rev, user_pass\n\n @classmethod\n def update_submodules(cls, location: str) -> None:\n if not os.path.exists(os.path.join(location, \".gitmodules\")):\n return\n cls.run_command(\n [\"submodule\", \"update\", \"--init\", \"--recursive\", \"-q\"],\n cwd=location,\n )\n\n @classmethod\n def get_repository_root(cls, location: str) -> Optional[str]:\n loc = super().get_repository_root(location)\n if loc:\n return loc\n try:\n r = cls.run_command(\n [\"rev-parse\", \"--show-toplevel\"],\n cwd=location,\n show_stdout=False,\n stdout_only=True,\n on_returncode=\"raise\",\n log_failed_cmd=False,\n )\n except BadCommand:\n logger.debug(\n \"could not determine if %s is under git control \"\n \"because git is not available\",\n location,\n )\n return None\n except InstallationError:\n return None\n return os.path.normpath(r.rstrip(\"\\r\\n\"))\n\n @staticmethod\n def should_add_vcs_url_prefix(repo_url: str) -> bool:\n \"\"\"In either https or ssh form, requirements must be prefixed with git+.\"\"\"\n return True\n\n\nvcs.register(Git)\n", "path": "src/pip/_internal/vcs/git.py"}]} |
gh_patches_debug_1170 | rasdani/github-patches | git_diff | facebookresearch__ParlAI-3393 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
HuggingFace model only works with transformers <4.0.0
**Bug description**
With transformers 4.0 or greater:
```
$ parlai dm -t convai2 -m hugging_face/gpt2 -bs 1
```
```python-traceback
Traceback (most recent call last):
File "/private/home/roller/.conda/envs/chat202001/bin/parlai", line 33, in <module>
sys.exit(load_entry_point('parlai', 'console_scripts', 'parlai')())
File "/private/home/roller/working/parlai/parlai/__main__.py", line 14, in main
superscript_main()
File "/private/home/roller/working/parlai/parlai/core/script.py", line 307, in superscript_main
return SCRIPT_REGISTRY[cmd].klass._run_from_parser_and_opt(opt, parser)
File "/private/home/roller/working/parlai/parlai/core/script.py", line 90, in _run_from_parser_and_opt
return script.run()
File "/private/home/roller/working/parlai/parlai/scripts/display_model.py", line 91, in run
display_model(self.opt)
File "/private/home/roller/working/parlai/parlai/scripts/display_model.py", line 70, in display_model
world.parley()
File "/private/home/roller/working/parlai/parlai/core/worlds.py", line 346, in parley
acts[1] = agents[1].act()
File "/private/home/roller/working/parlai/parlai/core/torch_agent.py", line 1946, in act
response = self.batch_act([self.observation])[0]
File "/private/home/roller/working/parlai/parlai/core/torch_agent.py", line 2007, in batch_act
output = self.eval_step(batch)
File "/private/home/roller/working/parlai/parlai/core/torch_generator_agent.py", line 880, in eval_step
loss, model_output = self.compute_loss(batch, return_output=True)
File "/private/home/roller/working/parlai/parlai/core/torch_generator_agent.py", line 710, in compute_loss
model_output = self.model(*self._model_input(batch), ys=batch.label_vec)
File "/private/home/roller/.conda/envs/chat202001/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/private/home/roller/working/parlai/parlai/core/torch_generator_agent.py", line 328, in forward
scores, preds = self.decode_forced(encoder_states, ys)
File "/private/home/roller/working/parlai/parlai/agents/hugging_face/gpt2.py", line 197, in decode_forced
latent, _ = self.decoder(inputs, encoder_states)
File "/private/home/roller/.conda/envs/chat202001/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/private/home/roller/working/parlai/parlai/agents/hugging_face/gpt2.py", line 113, in forward
transformer_outputs = self.transformer(
File "/private/home/roller/.conda/envs/chat202001/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
TypeError: forward() got an unexpected keyword argument 'past'
```
Upgrade path is unclear given fbcode dependency.
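
For context, the `TypeError` in the traceback comes from the keyword rename in transformers 4.x: `GPT2Model.forward` now accepts `past_key_values` where earlier releases accepted `past`. A minimal, hedged sketch of the 4.x-style call (checkpoint and dummy input chosen arbitrarily for illustration):

```python
import torch
from transformers import GPT2Model

model = GPT2Model.from_pretrained("distilgpt2")
input_ids = torch.tensor([[50256]])  # arbitrary single-token batch

# transformers < 4.0 accepted:  model(input_ids, past=None, ...)
# transformers >= 4.0 expects:
out = model(input_ids, past_key_values=None, attention_mask=None)
hidden_states, presents = out[0], out[1]  # hidden states and the updated cache
```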
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `parlai/agents/hugging_face/gpt2.py`
Content:
```
1 #!/usr/bin/env python3
2
3 # Copyright (c) Facebook, Inc. and its affiliates.
4 # This source code is licensed under the MIT license found in the
5 # LICENSE file in the root directory of this source tree.
6
7 from typing import Optional
8 from parlai.core.params import ParlaiParser
9 from parlai.core.opt import Opt
10 import os
11
12 import torch
13 from parlai.agents.hugging_face.dict import Gpt2DictionaryAgent
14 from parlai.core.torch_generator_agent import TorchGeneratorAgent, TorchGeneratorModel
15 from parlai.utils.io import PathManager
16 from parlai.utils.misc import warn_once
17 from parlai.utils.torch import IdentityLayer, padded_tensor
18
19 try:
20 from transformers import GPT2Model
21 except ImportError:
22 raise ImportError("Please run `pip install transformers`.")
23
24
25 ############################################
26 ## Modules
27 ############################################
28
29
30 class GPT2Decoder(torch.nn.Module):
31 """
32 GPT2 Decoder.
33
34 This decoder is initialized with the pretrained model from Hugging Face.
35 """
36
37 def __init__(self, opt, dict):
38 super().__init__()
39 self.transformer = self._init_from_pretrained(opt)
40 # add special tokens
41 if opt["add_special_tokens"]:
42 size_before = self.transformer.wte.weight.size(0)
43 self.transformer.resize_token_embeddings(len(dict.tokenizer))
44 with torch.no_grad():
45 # first reduce the random jitter of the initialization
46 self.transformer.wte.weight[size_before:] *= 0.1
47 # next center it on the endoftext token
48 self.transformer.wte.weight[
49 size_before:
50 ] += self.transformer.wte.weight[size_before - 1].unsqueeze(0)
51
52 self.add_start_token = opt["add_start_token"]
53 self.START_IDX = dict.start_idx
54 self.NULL_IDX = dict.null_idx
55 self.END_IDX = dict.end_idx
56 # use cuda
57 self.use_cuda = not opt["no_cuda"] and torch.cuda.is_available()
58
59 def _init_from_pretrained(self, opt):
60 # load model
61 model_sz = opt["gpt2_size"]
62 if model_sz == "small":
63 model_key = "gpt2"
64 elif model_sz == "distilgpt2":
65 model_key = "distilgpt2"
66 else:
67 model_key = f"gpt2-{model_sz}"
68
69 # check if datapath has the files that hugging face code looks for
70 hf_dir = os.path.join(opt["datapath"], "hf", model_key)
71 if all(
72 PathManager.exists(os.path.join(hf_dir, file_name))
73 for file_name in ["pytorch_model.bin", "config.json"]
74 ):
75 fle_key = PathManager.get_local_path(hf_dir, recursive=True)
76 else:
77 fle_key = model_key
78 return GPT2Model.from_pretrained(fle_key)
79
80 def forward(self, input, encoder_state, incr_state=None):
81 attention_mask = None
82 position_ids = None
83 if incr_state is None:
84 # first step
85 if (
86 not self.add_start_token
87 and input.size(1) == 1
88 and int(input[0][0]) == self.START_IDX
89 ):
90 # generating: ignore the start token
91 # without deep copy, the padding_idx (-1) in encoder_state can be reset to 0 with clamp_ inplace operation
92 model_input = encoder_state.clone()
93 else:
94 # forced decoding: concatenate the context
95 # with the labels
96 model_input = torch.cat([encoder_state, input], dim=-1)
97 attention_mask = model_input != self.NULL_IDX
98 position_ids = (
99 attention_mask.cumsum(dim=-1, dtype=torch.int64) - 1
100 ).clamp_(min=0)
101 else:
102 if not self.add_start_token:
103 input = input[:, 1:]
104 # generating with continuation
105 # get the position ids
106 position_ids = (encoder_state != self.NULL_IDX).sum(
107 -1, True, dtype=torch.int64
108 ) - 1
109 delta = ((input != self.NULL_IDX)).sum(-1, True, dtype=torch.int64)
110 position_ids += delta
111 # generation: get the last token input
112 model_input = input[:, -1:]
113 attention_mask = torch.cat([encoder_state, input], dim=-1) != self.NULL_IDX
114
115 model_input = model_input.clamp_(min=0)
116 transformer_outputs = self.transformer(
117 model_input,
118 past=incr_state,
119 attention_mask=attention_mask,
120 position_ids=position_ids,
121 )
122 hidden_states = transformer_outputs[0]
123 new_incr_state = transformer_outputs[1]
124
125 if incr_state is None:
126 # pull out only the hidden states for the label tokens
127 output = hidden_states[:, -input.size(1) - 1 + int(self.add_start_token) :]
128 # hack: we need the last state of the encoder-side to be the first
129 # element of the decoder-side
130 lengths = (input != self.NULL_IDX).sum(dim=-1)
131 for i in range(input.size(0)):
132 output[i, input.size(1) - lengths[i]] = output[i, 0]
133
134 else:
135 # generation, we're only doing one token at a time. no need to
136 # shove things back in
137 output = hidden_states
138
139 return output, new_incr_state
140
141
142 class HFGPT2Model(TorchGeneratorModel):
143 """
144 Hugging Face GPT2 Model.
145
146 GPT2 is a multi-layer decoder-only Transformer. As such, the encoder
147 is simply an identity layer. The decoder is initialized with pretrained
148 weights from Hugging Face. Read more about this model here
149 <https://huggingface.co/transformers/model_doc/gpt2.html>.
150 """
151
152 def __init__(self, opt, dict):
153 self.add_start_token = opt["add_start_token"]
154 super().__init__(*self._get_special_tokens(opt, dict))
155
156 # init the model
157 self.encoder = IdentityLayer()
158 self.decoder = self._get_decoder(opt, dict)
159 self.config = self.decoder.transformer.config
160 self.lm_head = torch.nn.Linear(
161 self.config.n_embd, self.config.vocab_size, bias=False
162 )
163 self._tie_weights(self.lm_head, self.decoder.transformer.wte)
164 # add start token
165
166 def _get_decoder(self, opt, dict):
167 return GPT2Decoder(opt, dict)
168
169 def _tie_weights(self, output_embeddings, input_embeddings):
170 output_embeddings.weight = input_embeddings.weight
171
172 def _get_special_tokens(self, opt, dict):
173 return dict.null_idx, dict.start_idx, dict.end_idx
174
175 def reorder_encoder_states(self, encoder_states, indices):
176 enc = torch.index_select(encoder_states, 0, indices)
177 return enc
178
179 def output(self, tensor):
180 """
181 Compute output logits.
182 """
183 return self.lm_head(tensor)
184
185 def reorder_decoder_incremental_state(self, incremental_state, inds):
186 new_incr_state = []
187 for layer_past in incremental_state:
188 new_incr_state.append(torch.index_select(layer_past, 1, inds))
189
190 return tuple(new_incr_state)
191
192 def decode_forced(self, encoder_states, ys):
193 """
194 Override to get rid of start token input.
195 """
196 if self.add_start_token:
197 return super().decode_forced(encoder_states, ys)
198 seqlen = ys.size(1)
199 inputs = ys.narrow(1, 0, seqlen - 1)
200 latent, _ = self.decoder(inputs, encoder_states)
201 logits = self.output(latent)
202 _, preds = logits.max(dim=2)
203 return logits, preds
204
205
206 ############################################
207 ## Agent
208 ############################################
209
210
211 class Gpt2Agent(TorchGeneratorAgent):
212 """
213 Hugging Face GPT2 Agent.
214
215 GPT2 is a multi-layer decoder-only Transformer.
216 The decoder is initialized with pretrained weights from Hugging Face.
217 Read more about this model here
218 <https://huggingface.co/transformers/model_doc/gpt2.html>.
219
220 GPT2 comes in five sizes: distilgpt2, small, medium, large, XL. Use the
221 flag `--gpt2-size` to choose the size.
222
223 If you are finetuning the Gpt2 agent as a dialogue agent, be sure
224 to run `--add-special-tokens True`. To examine the performance of the
225 agent out of the box, run with `--add-special-tokens False`, and make
226 sure that the batch size is 1.
227 """
228
229 @classmethod
230 def add_cmdline_args(
231 cls, parser: ParlaiParser, partial_opt: Optional[Opt] = None
232 ) -> ParlaiParser:
233 agent = parser.add_argument_group("Gpt2 Args")
234 agent.add_argument(
235 "--gpt2-size",
236 type=str,
237 default="small",
238 choices=["small", "medium", "large", "xl", "distilgpt2"],
239 help="Which size model to initialize.",
240 )
241 agent.add_argument(
242 "--add-special-tokens",
243 type="bool",
244 default=True,
245 help="Add special tokens (like PAD, etc.). If False, "
246 "Can only use with batch size 1.",
247 )
248 agent.add_argument(
249 "--add-start-token",
250 type="bool",
251 default=False,
252 help="Add start tokens when finetuning.",
253 )
254 parser.set_defaults(
255 text_truncate=768,
256 label_truncate=256,
257 dict_maxexs=0, # skip building dictionary
258 )
259 super().add_cmdline_args(parser, partial_opt=partial_opt)
260 warn_once("WARNING: this model is in beta and the API is subject to change.")
261 return agent
262
263 def __init__(self, opt, shared=None):
264 if not opt["add_special_tokens"] and opt.get('batchsize', 1) > 1:
265 # *** STOP ***
266 # You may be a future researcher who has stumbled upon this odd
267 # restriction, and is tempted to comment this out. After all, the
268 # code still runs when it's uncommented, why shouldn't you?
269 # You should know this has serious implications, as gpt2 doesn't have
270 # padding tokens. This is incompatible with ParlAI's batching,
271 # which puts conversations of different length in the same
272 # batch. Without a padding token, nonsense will be inserted into
273 # the context, and the generations & PPL you get will be wrong.
274 raise RuntimeError(
275 "If using batchsize > 1, --add-special-tokens must be True."
276 )
277 if not opt["add_special_tokens"] and opt["add_start_token"]:
278 raise RuntimeError(
279 "--add-start-token true requires --add-special-tokens true"
280 )
281 super().__init__(opt, shared)
282 if hasattr(self.model, "module"):
283 self.START_IDX = self.model.module.START_IDX
284 self.END_IDX = self.model.module.END_IDX
285 self.NULL_IDX = self.model.module.NULL_IDX
286 else:
287 self.START_IDX = self.model.START_IDX
288 self.END_IDX = self.model.END_IDX
289 self.NULL_IDX = self.model.NULL_IDX
290
291 @staticmethod
292 def dictionary_class():
293 """
294 Return the dictionary class that this agent expects to use.
295
296 Can be overriden if a more complex dictionary is required.
297 """
298 return Gpt2DictionaryAgent
299
300 def build_model(self, states=None):
301 """
302 Build and return model.
303 """
304 return HFGPT2Model(self.opt, self.dict)
305
306 def _encoder_input(self, batch):
307 return (batch.text_vec,)
308
309 def _pad_tensor(self, items):
310 """
311 Override to always set fp16friendly to False and left_pad to True.
312 """
313 return padded_tensor(
314 items,
315 pad_idx=self.NULL_IDX,
316 use_cuda=self.use_cuda,
317 left_padded=True,
318 fp16friendly=False,
319 )
320
321 def load_state_dict(self, state_dict):
322 # 2020-11-10: some very old transformer model points (pre v3.0.1) are
323 # missing a field called transformer.h.0.attn.masked_bias. This hacks
324 # around that. See
325 # https://github.com/huggingface/transformers/issues/4309.
326 current_sd = self.model.state_dict()
327 missing = set(current_sd.keys()) - set(state_dict.keys())
328 for m in missing:
329 if 'masked_bias' in m:
330 state_dict[m] = current_sd[m]
331 return super().load_state_dict(state_dict)
332
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/parlai/agents/hugging_face/gpt2.py b/parlai/agents/hugging_face/gpt2.py
--- a/parlai/agents/hugging_face/gpt2.py
+++ b/parlai/agents/hugging_face/gpt2.py
@@ -115,7 +115,7 @@
model_input = model_input.clamp_(min=0)
transformer_outputs = self.transformer(
model_input,
- past=incr_state,
+ past_key_values=incr_state,
attention_mask=attention_mask,
position_ids=position_ids,
)
| {"golden_diff": "diff --git a/parlai/agents/hugging_face/gpt2.py b/parlai/agents/hugging_face/gpt2.py\n--- a/parlai/agents/hugging_face/gpt2.py\n+++ b/parlai/agents/hugging_face/gpt2.py\n@@ -115,7 +115,7 @@\n model_input = model_input.clamp_(min=0)\n transformer_outputs = self.transformer(\n model_input,\n- past=incr_state,\n+ past_key_values=incr_state,\n attention_mask=attention_mask,\n position_ids=position_ids,\n )\n", "issue": "HuggingFace model only works with transformers <4.0.0\n**Bug description**\r\n\r\nWith transfomers 4.0 or greater:\r\n```\r\n$ parlai dm -t convai2 -m hugging_face/gpt2 -bs 1\r\n```\r\n```python-traceback\r\nTraceback (most recent call last):\r\n File \"/private/home/roller/.conda/envs/chat202001/bin/parlai\", line 33, in <module>\r\n sys.exit(load_entry_point('parlai', 'console_scripts', 'parlai')())\r\n File \"/private/home/roller/working/parlai/parlai/__main__.py\", line 14, in main\r\n superscript_main()\r\n File \"/private/home/roller/working/parlai/parlai/core/script.py\", line 307, in superscript_main\r\n return SCRIPT_REGISTRY[cmd].klass._run_from_parser_and_opt(opt, parser)\r\n File \"/private/home/roller/working/parlai/parlai/core/script.py\", line 90, in _run_from_parser_and_opt\r\n return script.run()\r\n File \"/private/home/roller/working/parlai/parlai/scripts/display_model.py\", line 91, in run\r\n display_model(self.opt)\r\n File \"/private/home/roller/working/parlai/parlai/scripts/display_model.py\", line 70, in display_model\r\n world.parley()\r\n File \"/private/home/roller/working/parlai/parlai/core/worlds.py\", line 346, in parley\r\n acts[1] = agents[1].act()\r\n File \"/private/home/roller/working/parlai/parlai/core/torch_agent.py\", line 1946, in act\r\n response = self.batch_act([self.observation])[0]\r\n File \"/private/home/roller/working/parlai/parlai/core/torch_agent.py\", line 2007, in batch_act\r\n output = self.eval_step(batch)\r\n File \"/private/home/roller/working/parlai/parlai/core/torch_generator_agent.py\", line 880, in eval_step\r\n loss, model_output = self.compute_loss(batch, return_output=True)\r\n File \"/private/home/roller/working/parlai/parlai/core/torch_generator_agent.py\", line 710, in compute_loss\r\n model_output = self.model(*self._model_input(batch), ys=batch.label_vec)\r\n File \"/private/home/roller/.conda/envs/chat202001/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 727, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/private/home/roller/working/parlai/parlai/core/torch_generator_agent.py\", line 328, in forward\r\n scores, preds = self.decode_forced(encoder_states, ys)\r\n File \"/private/home/roller/working/parlai/parlai/agents/hugging_face/gpt2.py\", line 197, in decode_forced\r\n latent, _ = self.decoder(inputs, encoder_states)\r\n File \"/private/home/roller/.conda/envs/chat202001/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 727, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/private/home/roller/working/parlai/parlai/agents/hugging_face/gpt2.py\", line 113, in forward\r\n transformer_outputs = self.transformer(\r\n File \"/private/home/roller/.conda/envs/chat202001/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 727, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\nTypeError: forward() got an unexpected keyword argument 'past'\r\n```\r\n\r\nUpgrade path is unclear given fbcode dependency.\n", "before_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. 
and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\nfrom typing import Optional\nfrom parlai.core.params import ParlaiParser\nfrom parlai.core.opt import Opt\nimport os\n\nimport torch\nfrom parlai.agents.hugging_face.dict import Gpt2DictionaryAgent\nfrom parlai.core.torch_generator_agent import TorchGeneratorAgent, TorchGeneratorModel\nfrom parlai.utils.io import PathManager\nfrom parlai.utils.misc import warn_once\nfrom parlai.utils.torch import IdentityLayer, padded_tensor\n\ntry:\n from transformers import GPT2Model\nexcept ImportError:\n raise ImportError(\"Please run `pip install transformers`.\")\n\n\n############################################\n## Modules\n############################################\n\n\nclass GPT2Decoder(torch.nn.Module):\n \"\"\"\n GPT2 Decoder.\n\n This decoder is initialized with the pretrained model from Hugging Face.\n \"\"\"\n\n def __init__(self, opt, dict):\n super().__init__()\n self.transformer = self._init_from_pretrained(opt)\n # add special tokens\n if opt[\"add_special_tokens\"]:\n size_before = self.transformer.wte.weight.size(0)\n self.transformer.resize_token_embeddings(len(dict.tokenizer))\n with torch.no_grad():\n # first reduce the random jitter of the initialization\n self.transformer.wte.weight[size_before:] *= 0.1\n # next center it on the endoftext token\n self.transformer.wte.weight[\n size_before:\n ] += self.transformer.wte.weight[size_before - 1].unsqueeze(0)\n\n self.add_start_token = opt[\"add_start_token\"]\n self.START_IDX = dict.start_idx\n self.NULL_IDX = dict.null_idx\n self.END_IDX = dict.end_idx\n # use cuda\n self.use_cuda = not opt[\"no_cuda\"] and torch.cuda.is_available()\n\n def _init_from_pretrained(self, opt):\n # load model\n model_sz = opt[\"gpt2_size\"]\n if model_sz == \"small\":\n model_key = \"gpt2\"\n elif model_sz == \"distilgpt2\":\n model_key = \"distilgpt2\"\n else:\n model_key = f\"gpt2-{model_sz}\"\n\n # check if datapath has the files that hugging face code looks for\n hf_dir = os.path.join(opt[\"datapath\"], \"hf\", model_key)\n if all(\n PathManager.exists(os.path.join(hf_dir, file_name))\n for file_name in [\"pytorch_model.bin\", \"config.json\"]\n ):\n fle_key = PathManager.get_local_path(hf_dir, recursive=True)\n else:\n fle_key = model_key\n return GPT2Model.from_pretrained(fle_key)\n\n def forward(self, input, encoder_state, incr_state=None):\n attention_mask = None\n position_ids = None\n if incr_state is None:\n # first step\n if (\n not self.add_start_token\n and input.size(1) == 1\n and int(input[0][0]) == self.START_IDX\n ):\n # generating: ignore the start token\n # without deep copy, the padding_idx (-1) in encoder_state can be reset to 0 with clamp_ inplace operation\n model_input = encoder_state.clone()\n else:\n # forced decoding: concatenate the context\n # with the labels\n model_input = torch.cat([encoder_state, input], dim=-1)\n attention_mask = model_input != self.NULL_IDX\n position_ids = (\n attention_mask.cumsum(dim=-1, dtype=torch.int64) - 1\n ).clamp_(min=0)\n else:\n if not self.add_start_token:\n input = input[:, 1:]\n # generating with continuation\n # get the position ids\n position_ids = (encoder_state != self.NULL_IDX).sum(\n -1, True, dtype=torch.int64\n ) - 1\n delta = ((input != self.NULL_IDX)).sum(-1, True, dtype=torch.int64)\n position_ids += delta\n # generation: get the last token input\n model_input = input[:, -1:]\n attention_mask = torch.cat([encoder_state, input], 
dim=-1) != self.NULL_IDX\n\n model_input = model_input.clamp_(min=0)\n transformer_outputs = self.transformer(\n model_input,\n past=incr_state,\n attention_mask=attention_mask,\n position_ids=position_ids,\n )\n hidden_states = transformer_outputs[0]\n new_incr_state = transformer_outputs[1]\n\n if incr_state is None:\n # pull out only the hidden states for the label tokens\n output = hidden_states[:, -input.size(1) - 1 + int(self.add_start_token) :]\n # hack: we need the last state of the encoder-side to be the first\n # element of the decoder-side\n lengths = (input != self.NULL_IDX).sum(dim=-1)\n for i in range(input.size(0)):\n output[i, input.size(1) - lengths[i]] = output[i, 0]\n\n else:\n # generation, we're only doing one token at a time. no need to\n # shove things back in\n output = hidden_states\n\n return output, new_incr_state\n\n\nclass HFGPT2Model(TorchGeneratorModel):\n \"\"\"\n Hugging Face GPT2 Model.\n\n GPT2 is a multi-layer decoder-only Transformer. As such, the encoder\n is simply an identity layer. The decoder is initialized with pretrained\n weights from Hugging Face. Read more about this model here\n <https://huggingface.co/transformers/model_doc/gpt2.html>.\n \"\"\"\n\n def __init__(self, opt, dict):\n self.add_start_token = opt[\"add_start_token\"]\n super().__init__(*self._get_special_tokens(opt, dict))\n\n # init the model\n self.encoder = IdentityLayer()\n self.decoder = self._get_decoder(opt, dict)\n self.config = self.decoder.transformer.config\n self.lm_head = torch.nn.Linear(\n self.config.n_embd, self.config.vocab_size, bias=False\n )\n self._tie_weights(self.lm_head, self.decoder.transformer.wte)\n # add start token\n\n def _get_decoder(self, opt, dict):\n return GPT2Decoder(opt, dict)\n\n def _tie_weights(self, output_embeddings, input_embeddings):\n output_embeddings.weight = input_embeddings.weight\n\n def _get_special_tokens(self, opt, dict):\n return dict.null_idx, dict.start_idx, dict.end_idx\n\n def reorder_encoder_states(self, encoder_states, indices):\n enc = torch.index_select(encoder_states, 0, indices)\n return enc\n\n def output(self, tensor):\n \"\"\"\n Compute output logits.\n \"\"\"\n return self.lm_head(tensor)\n\n def reorder_decoder_incremental_state(self, incremental_state, inds):\n new_incr_state = []\n for layer_past in incremental_state:\n new_incr_state.append(torch.index_select(layer_past, 1, inds))\n\n return tuple(new_incr_state)\n\n def decode_forced(self, encoder_states, ys):\n \"\"\"\n Override to get rid of start token input.\n \"\"\"\n if self.add_start_token:\n return super().decode_forced(encoder_states, ys)\n seqlen = ys.size(1)\n inputs = ys.narrow(1, 0, seqlen - 1)\n latent, _ = self.decoder(inputs, encoder_states)\n logits = self.output(latent)\n _, preds = logits.max(dim=2)\n return logits, preds\n\n\n############################################\n## Agent\n############################################\n\n\nclass Gpt2Agent(TorchGeneratorAgent):\n \"\"\"\n Hugging Face GPT2 Agent.\n\n GPT2 is a multi-layer decoder-only Transformer.\n The decoder is initialized with pretrained weights from Hugging Face.\n Read more about this model here\n <https://huggingface.co/transformers/model_doc/gpt2.html>.\n\n GPT2 comes in five sizes: distilgpt2, small, medium, large, XL. Use the\n flag `--gpt2-size` to choose the size.\n\n If you are finetuning the Gpt2 agent as a dialogue agent, be sure\n to run `--add-special-tokens True`. 
To examine the performance of the\n agent out of the box, run with `--add-special-tokens False`, and make\n sure that the batch size is 1.\n \"\"\"\n\n @classmethod\n def add_cmdline_args(\n cls, parser: ParlaiParser, partial_opt: Optional[Opt] = None\n ) -> ParlaiParser:\n agent = parser.add_argument_group(\"Gpt2 Args\")\n agent.add_argument(\n \"--gpt2-size\",\n type=str,\n default=\"small\",\n choices=[\"small\", \"medium\", \"large\", \"xl\", \"distilgpt2\"],\n help=\"Which size model to initialize.\",\n )\n agent.add_argument(\n \"--add-special-tokens\",\n type=\"bool\",\n default=True,\n help=\"Add special tokens (like PAD, etc.). If False, \"\n \"Can only use with batch size 1.\",\n )\n agent.add_argument(\n \"--add-start-token\",\n type=\"bool\",\n default=False,\n help=\"Add start tokens when finetuning.\",\n )\n parser.set_defaults(\n text_truncate=768,\n label_truncate=256,\n dict_maxexs=0, # skip building dictionary\n )\n super().add_cmdline_args(parser, partial_opt=partial_opt)\n warn_once(\"WARNING: this model is in beta and the API is subject to change.\")\n return agent\n\n def __init__(self, opt, shared=None):\n if not opt[\"add_special_tokens\"] and opt.get('batchsize', 1) > 1:\n # *** STOP ***\n # You may be a future researcher who has stumbled upon this odd\n # restriction, and is tempted to comment this out. After all, the\n # code still runs when it's uncommented, why shouldn't you?\n # You should know this has serious implications, as gpt2 doesn't have\n # padding tokens. This is incompatible with ParlAI's batching,\n # which puts conversations of different length in the same\n # batch. Without a padding token, nonsense will be inserted into\n # the context, and the generations & PPL you get will be wrong.\n raise RuntimeError(\n \"If using batchsize > 1, --add-special-tokens must be True.\"\n )\n if not opt[\"add_special_tokens\"] and opt[\"add_start_token\"]:\n raise RuntimeError(\n \"--add-start-token true requires --add-special-tokens true\"\n )\n super().__init__(opt, shared)\n if hasattr(self.model, \"module\"):\n self.START_IDX = self.model.module.START_IDX\n self.END_IDX = self.model.module.END_IDX\n self.NULL_IDX = self.model.module.NULL_IDX\n else:\n self.START_IDX = self.model.START_IDX\n self.END_IDX = self.model.END_IDX\n self.NULL_IDX = self.model.NULL_IDX\n\n @staticmethod\n def dictionary_class():\n \"\"\"\n Return the dictionary class that this agent expects to use.\n\n Can be overriden if a more complex dictionary is required.\n \"\"\"\n return Gpt2DictionaryAgent\n\n def build_model(self, states=None):\n \"\"\"\n Build and return model.\n \"\"\"\n return HFGPT2Model(self.opt, self.dict)\n\n def _encoder_input(self, batch):\n return (batch.text_vec,)\n\n def _pad_tensor(self, items):\n \"\"\"\n Override to always set fp16friendly to False and left_pad to True.\n \"\"\"\n return padded_tensor(\n items,\n pad_idx=self.NULL_IDX,\n use_cuda=self.use_cuda,\n left_padded=True,\n fp16friendly=False,\n )\n\n def load_state_dict(self, state_dict):\n # 2020-11-10: some very old transformer model points (pre v3.0.1) are\n # missing a field called transformer.h.0.attn.masked_bias. This hacks\n # around that. 
See\n # https://github.com/huggingface/transformers/issues/4309.\n current_sd = self.model.state_dict()\n missing = set(current_sd.keys()) - set(state_dict.keys())\n for m in missing:\n if 'masked_bias' in m:\n state_dict[m] = current_sd[m]\n return super().load_state_dict(state_dict)\n", "path": "parlai/agents/hugging_face/gpt2.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\nfrom typing import Optional\nfrom parlai.core.params import ParlaiParser\nfrom parlai.core.opt import Opt\nimport os\n\nimport torch\nfrom parlai.agents.hugging_face.dict import Gpt2DictionaryAgent\nfrom parlai.core.torch_generator_agent import TorchGeneratorAgent, TorchGeneratorModel\nfrom parlai.utils.io import PathManager\nfrom parlai.utils.misc import warn_once\nfrom parlai.utils.torch import IdentityLayer, padded_tensor\n\ntry:\n from transformers import GPT2Model\nexcept ImportError:\n raise ImportError(\"Please run `pip install transformers`.\")\n\n\n############################################\n## Modules\n############################################\n\n\nclass GPT2Decoder(torch.nn.Module):\n \"\"\"\n GPT2 Decoder.\n\n This decoder is initialized with the pretrained model from Hugging Face.\n \"\"\"\n\n def __init__(self, opt, dict):\n super().__init__()\n self.transformer = self._init_from_pretrained(opt)\n # add special tokens\n if opt[\"add_special_tokens\"]:\n size_before = self.transformer.wte.weight.size(0)\n self.transformer.resize_token_embeddings(len(dict.tokenizer))\n with torch.no_grad():\n # first reduce the random jitter of the initialization\n self.transformer.wte.weight[size_before:] *= 0.1\n # next center it on the endoftext token\n self.transformer.wte.weight[\n size_before:\n ] += self.transformer.wte.weight[size_before - 1].unsqueeze(0)\n\n self.add_start_token = opt[\"add_start_token\"]\n self.START_IDX = dict.start_idx\n self.NULL_IDX = dict.null_idx\n self.END_IDX = dict.end_idx\n # use cuda\n self.use_cuda = not opt[\"no_cuda\"] and torch.cuda.is_available()\n\n def _init_from_pretrained(self, opt):\n # load model\n model_sz = opt[\"gpt2_size\"]\n if model_sz == \"small\":\n model_key = \"gpt2\"\n elif model_sz == \"distilgpt2\":\n model_key = \"distilgpt2\"\n else:\n model_key = f\"gpt2-{model_sz}\"\n\n # check if datapath has the files that hugging face code looks for\n hf_dir = os.path.join(opt[\"datapath\"], \"hf\", model_key)\n if all(\n PathManager.exists(os.path.join(hf_dir, file_name))\n for file_name in [\"pytorch_model.bin\", \"config.json\"]\n ):\n fle_key = PathManager.get_local_path(hf_dir, recursive=True)\n else:\n fle_key = model_key\n return GPT2Model.from_pretrained(fle_key)\n\n def forward(self, input, encoder_state, incr_state=None):\n attention_mask = None\n position_ids = None\n if incr_state is None:\n # first step\n if (\n not self.add_start_token\n and input.size(1) == 1\n and int(input[0][0]) == self.START_IDX\n ):\n # generating: ignore the start token\n # without deep copy, the padding_idx (-1) in encoder_state can be reset to 0 with clamp_ inplace operation\n model_input = encoder_state.clone()\n else:\n # forced decoding: concatenate the context\n # with the labels\n model_input = torch.cat([encoder_state, input], dim=-1)\n attention_mask = model_input != self.NULL_IDX\n position_ids = (\n attention_mask.cumsum(dim=-1, dtype=torch.int64) - 1\n ).clamp_(min=0)\n else:\n 
if not self.add_start_token:\n input = input[:, 1:]\n # generating with continuation\n # get the position ids\n position_ids = (encoder_state != self.NULL_IDX).sum(\n -1, True, dtype=torch.int64\n ) - 1\n delta = ((input != self.NULL_IDX)).sum(-1, True, dtype=torch.int64)\n position_ids += delta\n # generation: get the last token input\n model_input = input[:, -1:]\n attention_mask = torch.cat([encoder_state, input], dim=-1) != self.NULL_IDX\n\n model_input = model_input.clamp_(min=0)\n transformer_outputs = self.transformer(\n model_input,\n past_key_values=incr_state,\n attention_mask=attention_mask,\n position_ids=position_ids,\n )\n hidden_states = transformer_outputs[0]\n new_incr_state = transformer_outputs[1]\n\n if incr_state is None:\n # pull out only the hidden states for the label tokens\n output = hidden_states[:, -input.size(1) - 1 + int(self.add_start_token) :]\n # hack: we need the last state of the encoder-side to be the first\n # element of the decoder-side\n lengths = (input != self.NULL_IDX).sum(dim=-1)\n for i in range(input.size(0)):\n output[i, input.size(1) - lengths[i]] = output[i, 0]\n\n else:\n # generation, we're only doing one token at a time. no need to\n # shove things back in\n output = hidden_states\n\n return output, new_incr_state\n\n\nclass HFGPT2Model(TorchGeneratorModel):\n \"\"\"\n Hugging Face GPT2 Model.\n\n GPT2 is a multi-layer decoder-only Transformer. As such, the encoder\n is simply an identity layer. The decoder is initialized with pretrained\n weights from Hugging Face. Read more about this model here\n <https://huggingface.co/transformers/model_doc/gpt2.html>.\n \"\"\"\n\n def __init__(self, opt, dict):\n self.add_start_token = opt[\"add_start_token\"]\n super().__init__(*self._get_special_tokens(opt, dict))\n\n # init the model\n self.encoder = IdentityLayer()\n self.decoder = self._get_decoder(opt, dict)\n self.config = self.decoder.transformer.config\n self.lm_head = torch.nn.Linear(\n self.config.n_embd, self.config.vocab_size, bias=False\n )\n self._tie_weights(self.lm_head, self.decoder.transformer.wte)\n # add start token\n\n def _get_decoder(self, opt, dict):\n return GPT2Decoder(opt, dict)\n\n def _tie_weights(self, output_embeddings, input_embeddings):\n output_embeddings.weight = input_embeddings.weight\n\n def _get_special_tokens(self, opt, dict):\n return dict.null_idx, dict.start_idx, dict.end_idx\n\n def reorder_encoder_states(self, encoder_states, indices):\n enc = torch.index_select(encoder_states, 0, indices)\n return enc\n\n def output(self, tensor):\n \"\"\"\n Compute output logits.\n \"\"\"\n return self.lm_head(tensor)\n\n def reorder_decoder_incremental_state(self, incremental_state, inds):\n new_incr_state = []\n for layer_past in incremental_state:\n new_incr_state.append(torch.index_select(layer_past, 1, inds))\n\n return tuple(new_incr_state)\n\n def decode_forced(self, encoder_states, ys):\n \"\"\"\n Override to get rid of start token input.\n \"\"\"\n if self.add_start_token:\n return super().decode_forced(encoder_states, ys)\n seqlen = ys.size(1)\n inputs = ys.narrow(1, 0, seqlen - 1)\n latent, _ = self.decoder(inputs, encoder_states)\n logits = self.output(latent)\n _, preds = logits.max(dim=2)\n return logits, preds\n\n\n############################################\n## Agent\n############################################\n\n\nclass Gpt2Agent(TorchGeneratorAgent):\n \"\"\"\n Hugging Face GPT2 Agent.\n\n GPT2 is a multi-layer decoder-only Transformer.\n The decoder is initialized with pretrained weights from 
Hugging Face.\n Read more about this model here\n <https://huggingface.co/transformers/model_doc/gpt2.html>.\n\n GPT2 comes in five sizes: distilgpt2, small, medium, large, XL. Use the\n flag `--gpt2-size` to choose the size.\n\n If you are finetuning the Gpt2 agent as a dialogue agent, be sure\n to run `--add-special-tokens True`. To examine the performance of the\n agent out of the box, run with `--add-special-tokens False`, and make\n sure that the batch size is 1.\n \"\"\"\n\n @classmethod\n def add_cmdline_args(\n cls, parser: ParlaiParser, partial_opt: Optional[Opt] = None\n ) -> ParlaiParser:\n agent = parser.add_argument_group(\"Gpt2 Args\")\n agent.add_argument(\n \"--gpt2-size\",\n type=str,\n default=\"small\",\n choices=[\"small\", \"medium\", \"large\", \"xl\", \"distilgpt2\"],\n help=\"Which size model to initialize.\",\n )\n agent.add_argument(\n \"--add-special-tokens\",\n type=\"bool\",\n default=True,\n help=\"Add special tokens (like PAD, etc.). If False, \"\n \"Can only use with batch size 1.\",\n )\n agent.add_argument(\n \"--add-start-token\",\n type=\"bool\",\n default=False,\n help=\"Add start tokens when finetuning.\",\n )\n parser.set_defaults(\n text_truncate=768,\n label_truncate=256,\n dict_maxexs=0, # skip building dictionary\n )\n super().add_cmdline_args(parser, partial_opt=partial_opt)\n warn_once(\"WARNING: this model is in beta and the API is subject to change.\")\n return agent\n\n def __init__(self, opt, shared=None):\n if not opt[\"add_special_tokens\"] and opt.get('batchsize', 1) > 1:\n # *** STOP ***\n # You may be a future researcher who has stumbled upon this odd\n # restriction, and is tempted to comment this out. After all, the\n # code still runs when it's uncommented, why shouldn't you?\n # You should know this has serious implications, as gpt2 doesn't have\n # padding tokens. This is incompatible with ParlAI's batching,\n # which puts conversations of different length in the same\n # batch. Without a padding token, nonsense will be inserted into\n # the context, and the generations & PPL you get will be wrong.\n raise RuntimeError(\n \"If using batchsize > 1, --add-special-tokens must be True.\"\n )\n if not opt[\"add_special_tokens\"] and opt[\"add_start_token\"]:\n raise RuntimeError(\n \"--add-start-token true requires --add-special-tokens true\"\n )\n super().__init__(opt, shared)\n if hasattr(self.model, \"module\"):\n self.START_IDX = self.model.module.START_IDX\n self.END_IDX = self.model.module.END_IDX\n self.NULL_IDX = self.model.module.NULL_IDX\n else:\n self.START_IDX = self.model.START_IDX\n self.END_IDX = self.model.END_IDX\n self.NULL_IDX = self.model.NULL_IDX\n\n @staticmethod\n def dictionary_class():\n \"\"\"\n Return the dictionary class that this agent expects to use.\n\n Can be overriden if a more complex dictionary is required.\n \"\"\"\n return Gpt2DictionaryAgent\n\n def build_model(self, states=None):\n \"\"\"\n Build and return model.\n \"\"\"\n return HFGPT2Model(self.opt, self.dict)\n\n def _encoder_input(self, batch):\n return (batch.text_vec,)\n\n def _pad_tensor(self, items):\n \"\"\"\n Override to always set fp16friendly to False and left_pad to True.\n \"\"\"\n return padded_tensor(\n items,\n pad_idx=self.NULL_IDX,\n use_cuda=self.use_cuda,\n left_padded=True,\n fp16friendly=False,\n )\n\n def load_state_dict(self, state_dict):\n # 2020-11-10: some very old transformer model points (pre v3.0.1) are\n # missing a field called transformer.h.0.attn.masked_bias. This hacks\n # around that. 
See\n # https://github.com/huggingface/transformers/issues/4309.\n current_sd = self.model.state_dict()\n missing = set(current_sd.keys()) - set(state_dict.keys())\n for m in missing:\n if 'masked_bias' in m:\n state_dict[m] = current_sd[m]\n return super().load_state_dict(state_dict)\n", "path": "parlai/agents/hugging_face/gpt2.py"}]} |
gh_patches_debug_1171 | rasdani/github-patches | git_diff | Lightning-AI__pytorch-lightning-2510 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
example_input_array dtype
Currently it is assumed that the example_input_array dtype is equal to the model dtype. This is not necessarily correct - e.g. if the input is a vector of INTs.
https://github.com/PyTorchLightning/pytorch-lightning/blob/7dc58bd286b1e81ca4d293f05bddff5e93361020/pytorch_lightning/core/memory.py#L192
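
A minimal sketch of why a blanket cast can break things, assuming an integer example_input_array that feeds an embedding layer (shapes and sizes made up for illustration):

```python
import torch
import torch.nn as nn

embedding = nn.Embedding(num_embeddings=1000, embedding_dim=32)
example_input_array = torch.randint(0, 1000, (10, 256))  # token ids, dtype torch.int64

# What the linked line effectively does before the example forward pass:
cast = example_input_array.type(torch.float32)  # model.dtype for a float32 model
# embedding(cast) would fail: embedding indices must stay Long/Int, not float.
out = embedding(example_input_array)  # keeping the original integer dtype works
```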
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pytorch_lightning/core/memory.py`
Content:
```
1 import os
2 import subprocess
3 from collections import OrderedDict
4 from subprocess import PIPE
5 from typing import Tuple, Dict, Union, List, Any
6
7 import numpy as np
8 import torch
9 import torch.nn as nn
10 from torch.utils.hooks import RemovableHandle
11
12
13 from pytorch_lightning.utilities import NATIVE_AMP_AVALAIBLE
14 from pytorch_lightning.utilities.apply_func import apply_to_collection
15
16 PARAMETER_NUM_UNITS = [" ", "K", "M", "B", "T"]
17 UNKNOWN_SIZE = "?"
18
19
20 class LayerSummary(object):
21 """
22 Summary class for a single layer in a :class:`~pytorch_lightning.core.lightning.LightningModule`.
23 It collects the following information:
24
25 - Type of the layer (e.g. Linear, BatchNorm1d, ...)
26 - Input shape
27 - Output shape
28 - Number of parameters
29
30 The input and output shapes are only known after the example input array was
31 passed through the model.
32
33 Example::
34
35 >>> model = torch.nn.Conv2d(3, 8, 3)
36 >>> summary = LayerSummary(model)
37 >>> summary.num_parameters
38 224
39 >>> summary.layer_type
40 'Conv2d'
41 >>> output = model(torch.rand(1, 3, 5, 5))
42 >>> summary.in_size
43 [1, 3, 5, 5]
44 >>> summary.out_size
45 [1, 8, 3, 3]
46
47 Args:
48 module: A module to summarize
49
50 """
51
52 def __init__(self, module: nn.Module):
53 super().__init__()
54 self._module = module
55 self._hook_handle = self._register_hook()
56 self._in_size = None
57 self._out_size = None
58
59 def __del__(self):
60 self.detach_hook()
61
62 def _register_hook(self) -> RemovableHandle:
63 """
64 Registers a hook on the module that computes the input- and output size(s) on the first forward pass.
65 If the hook is called, it will remove itself from the from the module, meaning that
66 recursive models will only record their input- and output shapes once.
67
68 Return:
69 A handle for the installed hook.
70 """
71
72 def hook(module, inp, out):
73 if len(inp) == 1:
74 inp = inp[0]
75 self._in_size = parse_batch_shape(inp)
76 self._out_size = parse_batch_shape(out)
77 self._hook_handle.remove()
78
79 return self._module.register_forward_hook(hook)
80
81 def detach_hook(self):
82 """
83 Removes the forward hook if it was not already removed in the forward pass.
84 Will be called after the summary is created.
85 """
86 if self._hook_handle is not None:
87 self._hook_handle.remove()
88
89 @property
90 def in_size(self) -> Union[str, List]:
91 return self._in_size or UNKNOWN_SIZE
92
93 @property
94 def out_size(self) -> Union[str, List]:
95 return self._out_size or UNKNOWN_SIZE
96
97 @property
98 def layer_type(self) -> str:
99 """ Returns the class name of the module. """
100 return str(self._module.__class__.__name__)
101
102 @property
103 def num_parameters(self) -> int:
104 """ Returns the number of parameters in this module. """
105 return sum(np.prod(p.shape) for p in self._module.parameters())
106
107
108 class ModelSummary(object):
109 """
110 Generates a summary of all layers in a :class:`~pytorch_lightning.core.lightning.LightningModule`.
111
112 Args:
113 model: The model to summarize (also referred to as the root module)
114 mode: Can be one of
115
116 - `top` (default): only the top-level modules will be recorded (the children of the root module)
117 - `full`: summarizes all layers and their submodules in the root module
118
119 The string representation of this summary prints a table with columns containing
120 the name, type and number of parameters for each layer.
121
122 The root module may also have an attribute ``example_input_array`` as shown in the example below.
123 If present, the root module will be called with it as input to determine the
124 intermediate input- and output shapes of all layers. Supported are tensors and
125 nested lists and tuples of tensors. All other types of inputs will be skipped and show as `?`
126 in the summary table. The summary will also display `?` for layers not used in the forward pass.
127
128 Example::
129
130 >>> import pytorch_lightning as pl
131 >>> class LitModel(pl.LightningModule):
132 ...
133 ... def __init__(self):
134 ... super().__init__()
135 ... self.net = nn.Sequential(nn.Linear(256, 512), nn.BatchNorm1d(512))
136 ... self.example_input_array = torch.zeros(10, 256) # optional
137 ...
138 ... def forward(self, x):
139 ... return self.net(x)
140 ...
141 >>> model = LitModel()
142 >>> ModelSummary(model, mode='top') # doctest: +NORMALIZE_WHITESPACE
143 | Name | Type | Params | In sizes | Out sizes
144 ------------------------------------------------------------
145 0 | net | Sequential | 132 K | [10, 256] | [10, 512]
146 >>> ModelSummary(model, mode='full') # doctest: +NORMALIZE_WHITESPACE
147 | Name | Type | Params | In sizes | Out sizes
148 --------------------------------------------------------------
149 0 | net | Sequential | 132 K | [10, 256] | [10, 512]
150 1 | net.0 | Linear | 131 K | [10, 256] | [10, 512]
151 2 | net.1 | BatchNorm1d | 1 K | [10, 512] | [10, 512]
152 """
153
154 MODE_TOP = "top"
155 MODE_FULL = "full"
156 MODE_DEFAULT = MODE_TOP
157 MODES = [MODE_FULL, MODE_TOP]
158
159 def __init__(self, model, mode: str = MODE_DEFAULT):
160 self._model = model
161 self._mode = mode
162 self._layer_summary = self.summarize()
163
164 @property
165 def named_modules(self) -> List[Tuple[str, nn.Module]]:
166 if self._mode == ModelSummary.MODE_FULL:
167 mods = self._model.named_modules()
168 mods = list(mods)[1:] # do not include root module (LightningModule)
169 elif self._mode == ModelSummary.MODE_TOP:
170 # the children are the top-level modules
171 mods = self._model.named_children()
172 else:
173 mods = []
174 return list(mods)
175
176 @property
177 def layer_names(self) -> List[str]:
178 return list(self._layer_summary.keys())
179
180 @property
181 def layer_types(self) -> List[str]:
182 return [layer.layer_type for layer in self._layer_summary.values()]
183
184 @property
185 def in_sizes(self) -> List:
186 return [layer.in_size for layer in self._layer_summary.values()]
187
188 @property
189 def out_sizes(self) -> List:
190 return [layer.out_size for layer in self._layer_summary.values()]
191
192 @property
193 def param_nums(self) -> List[int]:
194 return [layer.num_parameters for layer in self._layer_summary.values()]
195
196 def summarize(self) -> Dict[str, LayerSummary]:
197 summary = OrderedDict((name, LayerSummary(module)) for name, module in self.named_modules)
198 if self._model.example_input_array is not None:
199 self._forward_example_input()
200 for layer in summary.values():
201 layer.detach_hook()
202 return summary
203
204 def _forward_example_input(self) -> None:
205 """ Run the example input through each layer to get input- and output sizes. """
206 model = self._model
207 trainer = self._model.trainer
208
209 input_ = model.example_input_array
210 input_ = model.transfer_batch_to_device(input_, model.device)
211 input_ = apply_to_collection(input_, torch.Tensor, lambda x: x.type(model.dtype))
212
213 if trainer is not None and trainer.use_amp:
214 if NATIVE_AMP_AVALAIBLE:
215 model.forward = torch.cuda.amp.autocast()(model.forward)
216
217 mode = model.training
218 model.eval()
219 with torch.no_grad():
220 # let the model hooks collect the input- and output shapes
221 if isinstance(input_, (list, tuple)):
222 model(*input_)
223 elif isinstance(input_, dict):
224 model(**input_)
225 else:
226 model(input_)
227 model.train(mode) # restore mode of module
228
229 def __str__(self):
230 """
231 Makes a summary listing with:
232
233 Layer Name, Layer Type, Number of Parameters, Input Sizes, Output Sizes
234 """
235 arrays = [
236 [" ", list(map(str, range(len(self._layer_summary))))],
237 ["Name", self.layer_names],
238 ["Type", self.layer_types],
239 ["Params", list(map(get_human_readable_count, self.param_nums))],
240 ]
241 if self._model.example_input_array is not None:
242 arrays.append(["In sizes", self.in_sizes])
243 arrays.append(["Out sizes", self.out_sizes])
244
245 return _format_summary_table(*arrays)
246
247 def __repr__(self):
248 return str(self)
249
250
251 def parse_batch_shape(batch: Any) -> Union[str, List]:
252 if hasattr(batch, "shape"):
253 return list(batch.shape)
254
255 if isinstance(batch, (list, tuple)):
256 shape = [parse_batch_shape(el) for el in batch]
257 return shape
258
259 return UNKNOWN_SIZE
260
261
262 def _format_summary_table(*cols) -> str:
263 """
264 Takes in a number of arrays, each specifying a column in
265 the summary table, and combines them all into one big
266     string defining the summary table that is nicely formatted.
267 """
268 n_rows = len(cols[0][1])
269 n_cols = 1 + len(cols)
270
271 # Get formatting width of each column
272 col_widths = []
273 for c in cols:
274 col_width = max(len(str(a)) for a in c[1]) if n_rows else 0
275 col_width = max(col_width, len(c[0])) # minimum length is header length
276 col_widths.append(col_width)
277
278 # Formatting
279 s = "{:<{}}"
280 total_width = sum(col_widths) + 3 * n_cols
281 header = [s.format(c[0], l) for c, l in zip(cols, col_widths)]
282
283 # Summary = header + divider + Rest of table
284 summary = " | ".join(header) + "\n" + "-" * total_width
285 for i in range(n_rows):
286 line = []
287 for c, l in zip(cols, col_widths):
288 line.append(s.format(str(c[1][i]), l))
289 summary += "\n" + " | ".join(line)
290
291 return summary
292
293
294 def get_memory_profile(mode: str) -> Union[Dict[str, int], Dict[int, int]]:
295 """ Get a profile of the current memory usage.
296
297 Args:
298 mode: There are two modes:
299
300 - 'all' means return memory for all gpus
301 - 'min_max' means return memory for max and min
302
303 Return:
304 A dictionary in which the keys are device ids as integers and
305 values are memory usage as integers in MB.
306 If mode is 'min_max', the dictionary will also contain two additional keys:
307
308 - 'min_gpu_mem': the minimum memory usage in MB
309 - 'max_gpu_mem': the maximum memory usage in MB
310 """
311 memory_map = get_gpu_memory_map()
312
313 if mode == "min_max":
314 min_index, min_memory = min(memory_map.items(), key=lambda item: item[1])
315 max_index, max_memory = max(memory_map.items(), key=lambda item: item[1])
316
317 memory_map = {"min_gpu_mem": min_memory, "max_gpu_mem": max_memory}
318
319 return memory_map
320
321
322 def get_gpu_memory_map() -> Dict[str, int]:
323 """Get the current gpu usage.
324
325 Return:
326 A dictionary in which the keys are device ids as integers and
327 values are memory usage as integers in MB.
328 """
329 result = subprocess.run(
330 ["nvidia-smi", "--query-gpu=memory.used", "--format=csv,nounits,noheader",],
331 encoding="utf-8",
332 # capture_output=True, # valid for python version >=3.7
333 stdout=PIPE,
334 stderr=PIPE, # for backward compatibility with python version 3.6
335 check=True,
336 )
337 # Convert lines into a dictionary
338 gpu_memory = [int(x) for x in result.stdout.strip().split(os.linesep)]
339 gpu_memory_map = {f"gpu_{index}": memory for index, memory in enumerate(gpu_memory)}
340 return gpu_memory_map
341
342
343 def get_human_readable_count(number: int) -> str:
344 """
345 Abbreviates an integer number with K, M, B, T for thousands, millions,
346 billions and trillions, respectively.
347
348 Examples:
349 >>> get_human_readable_count(123)
350 '123 '
351 >>> get_human_readable_count(1234) # (one thousand)
352 '1 K'
353 >>> get_human_readable_count(2e6) # (two million)
354 '2 M'
355 >>> get_human_readable_count(3e9) # (three billion)
356 '3 B'
357 >>> get_human_readable_count(4e12) # (four trillion)
358 '4 T'
359 >>> get_human_readable_count(5e15) # (more than trillion)
360 '5,000 T'
361
362 Args:
363 number: a positive integer number
364
365 Return:
366 A string formatted according to the pattern described above.
367
368 """
369 assert number >= 0
370 labels = PARAMETER_NUM_UNITS
371 num_digits = int(np.floor(np.log10(number)) + 1 if number > 0 else 1)
372 num_groups = int(np.ceil(num_digits / 3))
373 num_groups = min(num_groups, len(labels)) # don't abbreviate beyond trillions
374 shift = -3 * (num_groups - 1)
375 number = number * (10 ** shift)
376 index = num_groups - 1
377 return f"{int(number):,d} {labels[index]}"
378
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pytorch_lightning/core/memory.py b/pytorch_lightning/core/memory.py
--- a/pytorch_lightning/core/memory.py
+++ b/pytorch_lightning/core/memory.py
@@ -208,7 +208,6 @@
input_ = model.example_input_array
input_ = model.transfer_batch_to_device(input_, model.device)
- input_ = apply_to_collection(input_, torch.Tensor, lambda x: x.type(model.dtype))
if trainer is not None and trainer.use_amp:
if NATIVE_AMP_AVALAIBLE:
| {"golden_diff": "diff --git a/pytorch_lightning/core/memory.py b/pytorch_lightning/core/memory.py\n--- a/pytorch_lightning/core/memory.py\n+++ b/pytorch_lightning/core/memory.py\n@@ -208,7 +208,6 @@\n \n input_ = model.example_input_array\n input_ = model.transfer_batch_to_device(input_, model.device)\n- input_ = apply_to_collection(input_, torch.Tensor, lambda x: x.type(model.dtype))\n \n if trainer is not None and trainer.use_amp:\n if NATIVE_AMP_AVALAIBLE:\n", "issue": "example_input_array dtype\nCurrently assumed that example_input_array dtype to be equal to model dtype. This is not necessarily correct - e.g. if input is a vector of INT.\r\n\r\nhttps://github.com/PyTorchLightning/pytorch-lightning/blob/7dc58bd286b1e81ca4d293f05bddff5e93361020/pytorch_lightning/core/memory.py#L192\n", "before_files": [{"content": "import os\nimport subprocess\nfrom collections import OrderedDict\nfrom subprocess import PIPE\nfrom typing import Tuple, Dict, Union, List, Any\n\nimport numpy as np\nimport torch\nimport torch.nn as nn\nfrom torch.utils.hooks import RemovableHandle\n\n\nfrom pytorch_lightning.utilities import NATIVE_AMP_AVALAIBLE\nfrom pytorch_lightning.utilities.apply_func import apply_to_collection\n\nPARAMETER_NUM_UNITS = [\" \", \"K\", \"M\", \"B\", \"T\"]\nUNKNOWN_SIZE = \"?\"\n\n\nclass LayerSummary(object):\n \"\"\"\n Summary class for a single layer in a :class:`~pytorch_lightning.core.lightning.LightningModule`.\n It collects the following information:\n\n - Type of the layer (e.g. Linear, BatchNorm1d, ...)\n - Input shape\n - Output shape\n - Number of parameters\n\n The input and output shapes are only known after the example input array was\n passed through the model.\n\n Example::\n\n >>> model = torch.nn.Conv2d(3, 8, 3)\n >>> summary = LayerSummary(model)\n >>> summary.num_parameters\n 224\n >>> summary.layer_type\n 'Conv2d'\n >>> output = model(torch.rand(1, 3, 5, 5))\n >>> summary.in_size\n [1, 3, 5, 5]\n >>> summary.out_size\n [1, 8, 3, 3]\n\n Args:\n module: A module to summarize\n\n \"\"\"\n\n def __init__(self, module: nn.Module):\n super().__init__()\n self._module = module\n self._hook_handle = self._register_hook()\n self._in_size = None\n self._out_size = None\n\n def __del__(self):\n self.detach_hook()\n\n def _register_hook(self) -> RemovableHandle:\n \"\"\"\n Registers a hook on the module that computes the input- and output size(s) on the first forward pass.\n If the hook is called, it will remove itself from the from the module, meaning that\n recursive models will only record their input- and output shapes once.\n\n Return:\n A handle for the installed hook.\n \"\"\"\n\n def hook(module, inp, out):\n if len(inp) == 1:\n inp = inp[0]\n self._in_size = parse_batch_shape(inp)\n self._out_size = parse_batch_shape(out)\n self._hook_handle.remove()\n\n return self._module.register_forward_hook(hook)\n\n def detach_hook(self):\n \"\"\"\n Removes the forward hook if it was not already removed in the forward pass.\n Will be called after the summary is created.\n \"\"\"\n if self._hook_handle is not None:\n self._hook_handle.remove()\n\n @property\n def in_size(self) -> Union[str, List]:\n return self._in_size or UNKNOWN_SIZE\n\n @property\n def out_size(self) -> Union[str, List]:\n return self._out_size or UNKNOWN_SIZE\n\n @property\n def layer_type(self) -> str:\n \"\"\" Returns the class name of the module. \"\"\"\n return str(self._module.__class__.__name__)\n\n @property\n def num_parameters(self) -> int:\n \"\"\" Returns the number of parameters in this module. 
\"\"\"\n return sum(np.prod(p.shape) for p in self._module.parameters())\n\n\nclass ModelSummary(object):\n \"\"\"\n Generates a summary of all layers in a :class:`~pytorch_lightning.core.lightning.LightningModule`.\n\n Args:\n model: The model to summarize (also referred to as the root module)\n mode: Can be one of\n\n - `top` (default): only the top-level modules will be recorded (the children of the root module)\n - `full`: summarizes all layers and their submodules in the root module\n\n The string representation of this summary prints a table with columns containing\n the name, type and number of parameters for each layer.\n\n The root module may also have an attribute ``example_input_array`` as shown in the example below.\n If present, the root module will be called with it as input to determine the\n intermediate input- and output shapes of all layers. Supported are tensors and\n nested lists and tuples of tensors. All other types of inputs will be skipped and show as `?`\n in the summary table. The summary will also display `?` for layers not used in the forward pass.\n\n Example::\n\n >>> import pytorch_lightning as pl\n >>> class LitModel(pl.LightningModule):\n ...\n ... def __init__(self):\n ... super().__init__()\n ... self.net = nn.Sequential(nn.Linear(256, 512), nn.BatchNorm1d(512))\n ... self.example_input_array = torch.zeros(10, 256) # optional\n ...\n ... def forward(self, x):\n ... return self.net(x)\n ...\n >>> model = LitModel()\n >>> ModelSummary(model, mode='top') # doctest: +NORMALIZE_WHITESPACE\n | Name | Type | Params | In sizes | Out sizes\n ------------------------------------------------------------\n 0 | net | Sequential | 132 K | [10, 256] | [10, 512]\n >>> ModelSummary(model, mode='full') # doctest: +NORMALIZE_WHITESPACE\n | Name | Type | Params | In sizes | Out sizes\n --------------------------------------------------------------\n 0 | net | Sequential | 132 K | [10, 256] | [10, 512]\n 1 | net.0 | Linear | 131 K | [10, 256] | [10, 512]\n 2 | net.1 | BatchNorm1d | 1 K | [10, 512] | [10, 512]\n \"\"\"\n\n MODE_TOP = \"top\"\n MODE_FULL = \"full\"\n MODE_DEFAULT = MODE_TOP\n MODES = [MODE_FULL, MODE_TOP]\n\n def __init__(self, model, mode: str = MODE_DEFAULT):\n self._model = model\n self._mode = mode\n self._layer_summary = self.summarize()\n\n @property\n def named_modules(self) -> List[Tuple[str, nn.Module]]:\n if self._mode == ModelSummary.MODE_FULL:\n mods = self._model.named_modules()\n mods = list(mods)[1:] # do not include root module (LightningModule)\n elif self._mode == ModelSummary.MODE_TOP:\n # the children are the top-level modules\n mods = self._model.named_children()\n else:\n mods = []\n return list(mods)\n\n @property\n def layer_names(self) -> List[str]:\n return list(self._layer_summary.keys())\n\n @property\n def layer_types(self) -> List[str]:\n return [layer.layer_type for layer in self._layer_summary.values()]\n\n @property\n def in_sizes(self) -> List:\n return [layer.in_size for layer in self._layer_summary.values()]\n\n @property\n def out_sizes(self) -> List:\n return [layer.out_size for layer in self._layer_summary.values()]\n\n @property\n def param_nums(self) -> List[int]:\n return [layer.num_parameters for layer in self._layer_summary.values()]\n\n def summarize(self) -> Dict[str, LayerSummary]:\n summary = OrderedDict((name, LayerSummary(module)) for name, module in self.named_modules)\n if self._model.example_input_array is not None:\n self._forward_example_input()\n for layer in summary.values():\n layer.detach_hook()\n 
return summary\n\n def _forward_example_input(self) -> None:\n \"\"\" Run the example input through each layer to get input- and output sizes. \"\"\"\n model = self._model\n trainer = self._model.trainer\n\n input_ = model.example_input_array\n input_ = model.transfer_batch_to_device(input_, model.device)\n input_ = apply_to_collection(input_, torch.Tensor, lambda x: x.type(model.dtype))\n\n if trainer is not None and trainer.use_amp:\n if NATIVE_AMP_AVALAIBLE:\n model.forward = torch.cuda.amp.autocast()(model.forward)\n\n mode = model.training\n model.eval()\n with torch.no_grad():\n # let the model hooks collect the input- and output shapes\n if isinstance(input_, (list, tuple)):\n model(*input_)\n elif isinstance(input_, dict):\n model(**input_)\n else:\n model(input_)\n model.train(mode) # restore mode of module\n\n def __str__(self):\n \"\"\"\n Makes a summary listing with:\n\n Layer Name, Layer Type, Number of Parameters, Input Sizes, Output Sizes\n \"\"\"\n arrays = [\n [\" \", list(map(str, range(len(self._layer_summary))))],\n [\"Name\", self.layer_names],\n [\"Type\", self.layer_types],\n [\"Params\", list(map(get_human_readable_count, self.param_nums))],\n ]\n if self._model.example_input_array is not None:\n arrays.append([\"In sizes\", self.in_sizes])\n arrays.append([\"Out sizes\", self.out_sizes])\n\n return _format_summary_table(*arrays)\n\n def __repr__(self):\n return str(self)\n\n\ndef parse_batch_shape(batch: Any) -> Union[str, List]:\n if hasattr(batch, \"shape\"):\n return list(batch.shape)\n\n if isinstance(batch, (list, tuple)):\n shape = [parse_batch_shape(el) for el in batch]\n return shape\n\n return UNKNOWN_SIZE\n\n\ndef _format_summary_table(*cols) -> str:\n \"\"\"\n Takes in a number of arrays, each specifying a column in\n the summary table, and combines them all into one big\n string defining the summary table that are nicely formatted.\n \"\"\"\n n_rows = len(cols[0][1])\n n_cols = 1 + len(cols)\n\n # Get formatting width of each column\n col_widths = []\n for c in cols:\n col_width = max(len(str(a)) for a in c[1]) if n_rows else 0\n col_width = max(col_width, len(c[0])) # minimum length is header length\n col_widths.append(col_width)\n\n # Formatting\n s = \"{:<{}}\"\n total_width = sum(col_widths) + 3 * n_cols\n header = [s.format(c[0], l) for c, l in zip(cols, col_widths)]\n\n # Summary = header + divider + Rest of table\n summary = \" | \".join(header) + \"\\n\" + \"-\" * total_width\n for i in range(n_rows):\n line = []\n for c, l in zip(cols, col_widths):\n line.append(s.format(str(c[1][i]), l))\n summary += \"\\n\" + \" | \".join(line)\n\n return summary\n\n\ndef get_memory_profile(mode: str) -> Union[Dict[str, int], Dict[int, int]]:\n \"\"\" Get a profile of the current memory usage.\n\n Args:\n mode: There are two modes:\n\n - 'all' means return memory for all gpus\n - 'min_max' means return memory for max and min\n\n Return:\n A dictionary in which the keys are device ids as integers and\n values are memory usage as integers in MB.\n If mode is 'min_max', the dictionary will also contain two additional keys:\n\n - 'min_gpu_mem': the minimum memory usage in MB\n - 'max_gpu_mem': the maximum memory usage in MB\n \"\"\"\n memory_map = get_gpu_memory_map()\n\n if mode == \"min_max\":\n min_index, min_memory = min(memory_map.items(), key=lambda item: item[1])\n max_index, max_memory = max(memory_map.items(), key=lambda item: item[1])\n\n memory_map = {\"min_gpu_mem\": min_memory, \"max_gpu_mem\": max_memory}\n\n return memory_map\n\n\ndef 
get_gpu_memory_map() -> Dict[str, int]:\n \"\"\"Get the current gpu usage.\n\n Return:\n A dictionary in which the keys are device ids as integers and\n values are memory usage as integers in MB.\n \"\"\"\n result = subprocess.run(\n [\"nvidia-smi\", \"--query-gpu=memory.used\", \"--format=csv,nounits,noheader\",],\n encoding=\"utf-8\",\n # capture_output=True, # valid for python version >=3.7\n stdout=PIPE,\n stderr=PIPE, # for backward compatibility with python version 3.6\n check=True,\n )\n # Convert lines into a dictionary\n gpu_memory = [int(x) for x in result.stdout.strip().split(os.linesep)]\n gpu_memory_map = {f\"gpu_{index}\": memory for index, memory in enumerate(gpu_memory)}\n return gpu_memory_map\n\n\ndef get_human_readable_count(number: int) -> str:\n \"\"\"\n Abbreviates an integer number with K, M, B, T for thousands, millions,\n billions and trillions, respectively.\n\n Examples:\n >>> get_human_readable_count(123)\n '123 '\n >>> get_human_readable_count(1234) # (one thousand)\n '1 K'\n >>> get_human_readable_count(2e6) # (two million)\n '2 M'\n >>> get_human_readable_count(3e9) # (three billion)\n '3 B'\n >>> get_human_readable_count(4e12) # (four trillion)\n '4 T'\n >>> get_human_readable_count(5e15) # (more than trillion)\n '5,000 T'\n\n Args:\n number: a positive integer number\n\n Return:\n A string formatted according to the pattern described above.\n\n \"\"\"\n assert number >= 0\n labels = PARAMETER_NUM_UNITS\n num_digits = int(np.floor(np.log10(number)) + 1 if number > 0 else 1)\n num_groups = int(np.ceil(num_digits / 3))\n num_groups = min(num_groups, len(labels)) # don't abbreviate beyond trillions\n shift = -3 * (num_groups - 1)\n number = number * (10 ** shift)\n index = num_groups - 1\n return f\"{int(number):,d} {labels[index]}\"\n", "path": "pytorch_lightning/core/memory.py"}], "after_files": [{"content": "import os\nimport subprocess\nfrom collections import OrderedDict\nfrom subprocess import PIPE\nfrom typing import Tuple, Dict, Union, List, Any\n\nimport numpy as np\nimport torch\nimport torch.nn as nn\nfrom torch.utils.hooks import RemovableHandle\n\n\nfrom pytorch_lightning.utilities import NATIVE_AMP_AVALAIBLE\nfrom pytorch_lightning.utilities.apply_func import apply_to_collection\n\nPARAMETER_NUM_UNITS = [\" \", \"K\", \"M\", \"B\", \"T\"]\nUNKNOWN_SIZE = \"?\"\n\n\nclass LayerSummary(object):\n \"\"\"\n Summary class for a single layer in a :class:`~pytorch_lightning.core.lightning.LightningModule`.\n It collects the following information:\n\n - Type of the layer (e.g. 
Linear, BatchNorm1d, ...)\n - Input shape\n - Output shape\n - Number of parameters\n\n The input and output shapes are only known after the example input array was\n passed through the model.\n\n Example::\n\n >>> model = torch.nn.Conv2d(3, 8, 3)\n >>> summary = LayerSummary(model)\n >>> summary.num_parameters\n 224\n >>> summary.layer_type\n 'Conv2d'\n >>> output = model(torch.rand(1, 3, 5, 5))\n >>> summary.in_size\n [1, 3, 5, 5]\n >>> summary.out_size\n [1, 8, 3, 3]\n\n Args:\n module: A module to summarize\n\n \"\"\"\n\n def __init__(self, module: nn.Module):\n super().__init__()\n self._module = module\n self._hook_handle = self._register_hook()\n self._in_size = None\n self._out_size = None\n\n def __del__(self):\n self.detach_hook()\n\n def _register_hook(self) -> RemovableHandle:\n \"\"\"\n Registers a hook on the module that computes the input- and output size(s) on the first forward pass.\n If the hook is called, it will remove itself from the from the module, meaning that\n recursive models will only record their input- and output shapes once.\n\n Return:\n A handle for the installed hook.\n \"\"\"\n\n def hook(module, inp, out):\n if len(inp) == 1:\n inp = inp[0]\n self._in_size = parse_batch_shape(inp)\n self._out_size = parse_batch_shape(out)\n self._hook_handle.remove()\n\n return self._module.register_forward_hook(hook)\n\n def detach_hook(self):\n \"\"\"\n Removes the forward hook if it was not already removed in the forward pass.\n Will be called after the summary is created.\n \"\"\"\n if self._hook_handle is not None:\n self._hook_handle.remove()\n\n @property\n def in_size(self) -> Union[str, List]:\n return self._in_size or UNKNOWN_SIZE\n\n @property\n def out_size(self) -> Union[str, List]:\n return self._out_size or UNKNOWN_SIZE\n\n @property\n def layer_type(self) -> str:\n \"\"\" Returns the class name of the module. \"\"\"\n return str(self._module.__class__.__name__)\n\n @property\n def num_parameters(self) -> int:\n \"\"\" Returns the number of parameters in this module. \"\"\"\n return sum(np.prod(p.shape) for p in self._module.parameters())\n\n\nclass ModelSummary(object):\n \"\"\"\n Generates a summary of all layers in a :class:`~pytorch_lightning.core.lightning.LightningModule`.\n\n Args:\n model: The model to summarize (also referred to as the root module)\n mode: Can be one of\n\n - `top` (default): only the top-level modules will be recorded (the children of the root module)\n - `full`: summarizes all layers and their submodules in the root module\n\n The string representation of this summary prints a table with columns containing\n the name, type and number of parameters for each layer.\n\n The root module may also have an attribute ``example_input_array`` as shown in the example below.\n If present, the root module will be called with it as input to determine the\n intermediate input- and output shapes of all layers. Supported are tensors and\n nested lists and tuples of tensors. All other types of inputs will be skipped and show as `?`\n in the summary table. The summary will also display `?` for layers not used in the forward pass.\n\n Example::\n\n >>> import pytorch_lightning as pl\n >>> class LitModel(pl.LightningModule):\n ...\n ... def __init__(self):\n ... super().__init__()\n ... self.net = nn.Sequential(nn.Linear(256, 512), nn.BatchNorm1d(512))\n ... self.example_input_array = torch.zeros(10, 256) # optional\n ...\n ... def forward(self, x):\n ... 
return self.net(x)\n ...\n >>> model = LitModel()\n >>> ModelSummary(model, mode='top') # doctest: +NORMALIZE_WHITESPACE\n | Name | Type | Params | In sizes | Out sizes\n ------------------------------------------------------------\n 0 | net | Sequential | 132 K | [10, 256] | [10, 512]\n >>> ModelSummary(model, mode='full') # doctest: +NORMALIZE_WHITESPACE\n | Name | Type | Params | In sizes | Out sizes\n --------------------------------------------------------------\n 0 | net | Sequential | 132 K | [10, 256] | [10, 512]\n 1 | net.0 | Linear | 131 K | [10, 256] | [10, 512]\n 2 | net.1 | BatchNorm1d | 1 K | [10, 512] | [10, 512]\n \"\"\"\n\n MODE_TOP = \"top\"\n MODE_FULL = \"full\"\n MODE_DEFAULT = MODE_TOP\n MODES = [MODE_FULL, MODE_TOP]\n\n def __init__(self, model, mode: str = MODE_DEFAULT):\n self._model = model\n self._mode = mode\n self._layer_summary = self.summarize()\n\n @property\n def named_modules(self) -> List[Tuple[str, nn.Module]]:\n if self._mode == ModelSummary.MODE_FULL:\n mods = self._model.named_modules()\n mods = list(mods)[1:] # do not include root module (LightningModule)\n elif self._mode == ModelSummary.MODE_TOP:\n # the children are the top-level modules\n mods = self._model.named_children()\n else:\n mods = []\n return list(mods)\n\n @property\n def layer_names(self) -> List[str]:\n return list(self._layer_summary.keys())\n\n @property\n def layer_types(self) -> List[str]:\n return [layer.layer_type for layer in self._layer_summary.values()]\n\n @property\n def in_sizes(self) -> List:\n return [layer.in_size for layer in self._layer_summary.values()]\n\n @property\n def out_sizes(self) -> List:\n return [layer.out_size for layer in self._layer_summary.values()]\n\n @property\n def param_nums(self) -> List[int]:\n return [layer.num_parameters for layer in self._layer_summary.values()]\n\n def summarize(self) -> Dict[str, LayerSummary]:\n summary = OrderedDict((name, LayerSummary(module)) for name, module in self.named_modules)\n if self._model.example_input_array is not None:\n self._forward_example_input()\n for layer in summary.values():\n layer.detach_hook()\n return summary\n\n def _forward_example_input(self) -> None:\n \"\"\" Run the example input through each layer to get input- and output sizes. 
\"\"\"\n model = self._model\n trainer = self._model.trainer\n\n input_ = model.example_input_array\n input_ = model.transfer_batch_to_device(input_, model.device)\n\n if trainer is not None and trainer.use_amp:\n if NATIVE_AMP_AVALAIBLE:\n model.forward = torch.cuda.amp.autocast()(model.forward)\n\n mode = model.training\n model.eval()\n with torch.no_grad():\n # let the model hooks collect the input- and output shapes\n if isinstance(input_, (list, tuple)):\n model(*input_)\n elif isinstance(input_, dict):\n model(**input_)\n else:\n model(input_)\n model.train(mode) # restore mode of module\n\n def __str__(self):\n \"\"\"\n Makes a summary listing with:\n\n Layer Name, Layer Type, Number of Parameters, Input Sizes, Output Sizes\n \"\"\"\n arrays = [\n [\" \", list(map(str, range(len(self._layer_summary))))],\n [\"Name\", self.layer_names],\n [\"Type\", self.layer_types],\n [\"Params\", list(map(get_human_readable_count, self.param_nums))],\n ]\n if self._model.example_input_array is not None:\n arrays.append([\"In sizes\", self.in_sizes])\n arrays.append([\"Out sizes\", self.out_sizes])\n\n return _format_summary_table(*arrays)\n\n def __repr__(self):\n return str(self)\n\n\ndef parse_batch_shape(batch: Any) -> Union[str, List]:\n if hasattr(batch, \"shape\"):\n return list(batch.shape)\n\n if isinstance(batch, (list, tuple)):\n shape = [parse_batch_shape(el) for el in batch]\n return shape\n\n return UNKNOWN_SIZE\n\n\ndef _format_summary_table(*cols) -> str:\n \"\"\"\n Takes in a number of arrays, each specifying a column in\n the summary table, and combines them all into one big\n string defining the summary table that are nicely formatted.\n \"\"\"\n n_rows = len(cols[0][1])\n n_cols = 1 + len(cols)\n\n # Get formatting width of each column\n col_widths = []\n for c in cols:\n col_width = max(len(str(a)) for a in c[1]) if n_rows else 0\n col_width = max(col_width, len(c[0])) # minimum length is header length\n col_widths.append(col_width)\n\n # Formatting\n s = \"{:<{}}\"\n total_width = sum(col_widths) + 3 * n_cols\n header = [s.format(c[0], l) for c, l in zip(cols, col_widths)]\n\n # Summary = header + divider + Rest of table\n summary = \" | \".join(header) + \"\\n\" + \"-\" * total_width\n for i in range(n_rows):\n line = []\n for c, l in zip(cols, col_widths):\n line.append(s.format(str(c[1][i]), l))\n summary += \"\\n\" + \" | \".join(line)\n\n return summary\n\n\ndef get_memory_profile(mode: str) -> Union[Dict[str, int], Dict[int, int]]:\n \"\"\" Get a profile of the current memory usage.\n\n Args:\n mode: There are two modes:\n\n - 'all' means return memory for all gpus\n - 'min_max' means return memory for max and min\n\n Return:\n A dictionary in which the keys are device ids as integers and\n values are memory usage as integers in MB.\n If mode is 'min_max', the dictionary will also contain two additional keys:\n\n - 'min_gpu_mem': the minimum memory usage in MB\n - 'max_gpu_mem': the maximum memory usage in MB\n \"\"\"\n memory_map = get_gpu_memory_map()\n\n if mode == \"min_max\":\n min_index, min_memory = min(memory_map.items(), key=lambda item: item[1])\n max_index, max_memory = max(memory_map.items(), key=lambda item: item[1])\n\n memory_map = {\"min_gpu_mem\": min_memory, \"max_gpu_mem\": max_memory}\n\n return memory_map\n\n\ndef get_gpu_memory_map() -> Dict[str, int]:\n \"\"\"Get the current gpu usage.\n\n Return:\n A dictionary in which the keys are device ids as integers and\n values are memory usage as integers in MB.\n \"\"\"\n result = subprocess.run(\n 
[\"nvidia-smi\", \"--query-gpu=memory.used\", \"--format=csv,nounits,noheader\",],\n encoding=\"utf-8\",\n # capture_output=True, # valid for python version >=3.7\n stdout=PIPE,\n stderr=PIPE, # for backward compatibility with python version 3.6\n check=True,\n )\n # Convert lines into a dictionary\n gpu_memory = [int(x) for x in result.stdout.strip().split(os.linesep)]\n gpu_memory_map = {f\"gpu_{index}\": memory for index, memory in enumerate(gpu_memory)}\n return gpu_memory_map\n\n\ndef get_human_readable_count(number: int) -> str:\n \"\"\"\n Abbreviates an integer number with K, M, B, T for thousands, millions,\n billions and trillions, respectively.\n\n Examples:\n >>> get_human_readable_count(123)\n '123 '\n >>> get_human_readable_count(1234) # (one thousand)\n '1 K'\n >>> get_human_readable_count(2e6) # (two million)\n '2 M'\n >>> get_human_readable_count(3e9) # (three billion)\n '3 B'\n >>> get_human_readable_count(4e12) # (four trillion)\n '4 T'\n >>> get_human_readable_count(5e15) # (more than trillion)\n '5,000 T'\n\n Args:\n number: a positive integer number\n\n Return:\n A string formatted according to the pattern described above.\n\n \"\"\"\n assert number >= 0\n labels = PARAMETER_NUM_UNITS\n num_digits = int(np.floor(np.log10(number)) + 1 if number > 0 else 1)\n num_groups = int(np.ceil(num_digits / 3))\n num_groups = min(num_groups, len(labels)) # don't abbreviate beyond trillions\n shift = -3 * (num_groups - 1)\n number = number * (10 ** shift)\n index = num_groups - 1\n return f\"{int(number):,d} {labels[index]}\"\n", "path": "pytorch_lightning/core/memory.py"}]} |
gh_patches_debug_1172 | rasdani/github-patches | git_diff | cookiecutter__cookiecutter-django-4028 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Docker Django 4.1 template reload behavior
Using Django 4.1 under Docker causes trouble in development because of the changed default template-loading behavior: templates are cached until the server restarts, since the cached loader is now enabled by default (https://docs.djangoproject.com/en/4.1/releases/4.1/#templates).
My current workaround is to add the following snippet to local.py:
```
TEMPLATES[0]["OPTIONS"]["loaders"] = [
(
"django.template.loaders.filesystem.Loader",
[APPS_DIR / "templates"],
),
"django.template.loaders.app_directories.Loader",
]
```
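A hedged variant of the same workaround (illustrative only — `TEMPLATES` and `APPS_DIR` are assumed to be defined in the project's base settings, as in a typical cookiecutter-django layout). One caveat worth noting: Django raises `ImproperlyConfigured` when `APP_DIRS` is enabled together with an explicit `loaders` list, so that flag has to be cleared first if the base settings set it.
```python
# Sketch for config/settings/local.py -- adjust names/paths to the actual project.
TEMPLATES[0]["APP_DIRS"] = False  # APP_DIRS and explicit "loaders" cannot be combined
TEMPLATES[0]["OPTIONS"]["loaders"] = [
    # plain (non-cached) loaders so template edits show up without a container restart
    ("django.template.loaders.filesystem.Loader", [APPS_DIR / "templates"]),
    "django.template.loaders.app_directories.Loader",
]
```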
[Update Django] Django 4.1
4.1 requirements tables
## base.txt
| Name | Version in Master | 4.1 Compatible Version | OK |
| ---- | :---------------: | :-----------------------------: | :-: |
| [pytz](http://pythonhosted.org/pytz) | 2023.3 | n/a | ✅ |
| [python-slugify](https://github.com/un33k/python-slugify) | 8.0.1 | n/a | ✅ |
| [Pillow](https://python-pillow.org) | 9.5.0 | n/a | ✅ |
| [rcssmin](http://opensource.perlig.de/rcssmin/) | 1.1.1 | n/a | ✅ |
| [argon2-cffi](https://pypi.org/project/argon2-cffi/) | 21.3.0 | n/a | ✅ |
| [whitenoise](https://github.com/evansd/whitenoise) | 6.4.0 | 6.2.0 | ✅ |
| [redis](https://github.com/redis/redis-py) | 4.5.4 | n/a | ✅ |
| [hiredis](https://github.com/redis/hiredis-py) | 2.2.2 | n/a | ✅ |
| [celery](http://celeryproject.org) | 5.2.7 | n/a | ✅ |
| [django-celery-beat](https://github.com/celery/django-celery-beat) | 2.5.0 | 2.4.0 | ✅ |
| [flower](https://github.com/mher/flower) | 1.2.0 | n/a | ✅ |
| [uvicorn](https://pypi.org/project/uvicorn/) | 0.21.1 | n/a | ✅ |
| [django](https://www.djangoproject.com/) | 4.0.10 | 2.4.0 | ✅ |
| [django-environ](https://django-environ.readthedocs.org) | 0.10.0 | | ❓ |
| [django-model-utils](https://github.com/jazzband/django-model-utils) | 4.3.1 | 4.3.1 | ✅ |
| [django-allauth](http://www.intenct.nl/projects/django-allauth/) | 0.54.0 | 0.52.0 | ✅ |
| [django-crispy-forms](https://pypi.org/project/django-crispy-forms/) | 2.0 | 2.0 | ✅ |
| [crispy-bootstrap5](https://github.com/django-crispy-forms/crispy-bootstrap5) | 0.7 | 0.7 | ✅ |
| [django-compressor](https://django-compressor.readthedocs.io/en/latest/) | 4.3.1 | 4.1 | ✅ |
| [django-redis](https://github.com/jazzband/django-redis) | 5.2.0 | | ❌ |
| [djangorestframework](https://www.django-rest-framework.org/) | 3.14.0 | 3.14.0 | ✅ |
| [django-cors-headers](https://github.com/adamchainz/django-cors-headers) | 3.14.0 | 3.13.0 | ✅ |
| [drf-spectacular](https://github.com/tfranzel/drf-spectacular) | 0.26.1 | 0.24.0 | ✅ |
| [django-webpack-loader](https://github.com/django-webpack/django-webpack-loader) | 1.8.1 | 1.8.0 | ✅ |
## local.txt
| Name | Version in Master | 4.1 Compatible Version | OK |
| ---- | :---------------: | :-----------------------------: | :-: |
| [Werkzeug](https://palletsprojects.com/p/werkzeug/) | 2.2.3 | n/a | ✅ |
| [ipdb](https://github.com/gotcha/ipdb) | 0.13.13 | n/a | ✅ |
| [psycopg2](https://psycopg.org/) | 2.9.6 | n/a | ✅ |
| [psycopg2-binary](https://psycopg.org/) | 2.9.6 | n/a | ✅ |
| [watchfiles](https://github.com/samuelcolvin/watchfiles/watchfiles) | 0.19.0 | n/a | ✅ |
| [mypy](http://www.mypy-lang.org/) | 1.1.1 | n/a | ✅ |
| [django-stubs](https://github.com/typeddjango/django-stubs) | 1.16.0 | 1.13.0 | ✅ |
| [pytest](https://docs.pytest.org/en/latest/) | 7.2.2 | n/a | ✅ |
| [pytest-sugar](https://pivotfinland.com/pytest-sugar/) | 0.9.6 | n/a | ✅ |
| [djangorestframework-stubs](https://github.com/typeddjango/djangorestframework-stubs) | 1.10.0 | n/a | ✅ |
| [sphinx](https://pypi.org/project/Sphinx/) | 6.1.3 | n/a | ✅ |
| [sphinx-autobuild](https://github.com/executablebooks/sphinx-autobuild) | 2021.3.14 | n/a | ✅ |
| [flake8](https://github.com/pycqa/flake8) | 6.0.0 | n/a | ✅ |
| [flake8-isort](https://github.com/gforcada/flake8-isort) | 6.0.0 | n/a | ✅ |
| [coverage](https://github.com/nedbat/coveragepy) | 7.2.2 | n/a | ✅ |
| [black](https://pypi.org/project/black/) | 23.3.0 | n/a | ✅ |
| [pylint-django](https://github.com/PyCQA/pylint-django) | 2.5.3 | | ❌ |
| [pylint-celery](https://github.com/landscapeio/pylint-celery) | 0.3 | n/a | ✅ |
| [pre-commit](https://github.com/pre-commit/pre-commit) | 3.2.1 | n/a | ✅ |
| [factory-boy](https://github.com/FactoryBoy/factory_boy) | 3.2.1 | | ❌ |
| [django-debug-toolbar](https://pypi.org/project/django-debug-toolbar/) | 3.8.1 | 3.6.0 | ✅ |
| [django-extensions](http://github.com/django-extensions/django-extensions) | 3.2.1 | 3.2.1 | ✅ |
| [django-coverage-plugin](https://github.com/nedbat/django_coverage_plugin) | 3.0.0 | 2.0.4 | ✅ |
| [pytest-django](https://pytest-django.readthedocs.io/) | 4.5.2 | | ❌ |
## production.txt
| Name | Version in Master | 4.1 Compatible Version | OK |
| ---- | :---------------: | :-----------------------------: | :-: |
| [gunicorn](https://gunicorn.org) | 20.1.0 | n/a | ✅ |
| [psycopg2](https://psycopg.org/) | 2.9.6 | n/a | ✅ |
| [Collectfast](https://github.com/antonagestam/collectfast/) | 2.2.0 | n/a | ✅ |
| [sentry-sdk](https://github.com/getsentry/sentry-python) | 1.17.0 | n/a | ✅ |
| [hiredis](https://github.com/redis/hiredis-py) | 2.2.2 | n/a | ✅ |
| [django-storages](https://github.com/jschneier/django-storages) | 1.13.2 | 1.13 | ✅ |
| [django-anymail](https://github.com/anymail/django-anymail) | 9.1 | 9.0 | ✅ |
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 try:
3 from setuptools import setup
4 except ImportError:
5 from distutils.core import setup
6
7 # We use calendar versioning
8 version = "2023.04.03"
9
10 with open("README.rst") as readme_file:
11 long_description = readme_file.read()
12
13 setup(
14 name="cookiecutter-django",
15 version=version,
16 description=(
17 "A Cookiecutter template for creating production-ready "
18 "Django projects quickly."
19 ),
20 long_description=long_description,
21 author="Daniel Roy Greenfeld",
22 author_email="[email protected]",
23 url="https://github.com/cookiecutter/cookiecutter-django",
24 packages=[],
25 license="BSD",
26 zip_safe=False,
27 classifiers=[
28 "Development Status :: 4 - Beta",
29 "Environment :: Console",
30 "Framework :: Django :: 4.0",
31 "Intended Audience :: Developers",
32 "Natural Language :: English",
33 "License :: OSI Approved :: BSD License",
34 "Programming Language :: Python",
35 "Programming Language :: Python :: 3",
36 "Programming Language :: Python :: 3.10",
37 "Programming Language :: Python :: Implementation :: CPython",
38 "Topic :: Software Development",
39 ],
40 keywords=(
41 "cookiecutter, Python, projects, project templates, django, "
42 "skeleton, scaffolding, project directory, setup.py"
43 ),
44 )
45
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -27,7 +27,7 @@
classifiers=[
"Development Status :: 4 - Beta",
"Environment :: Console",
- "Framework :: Django :: 4.0",
+ "Framework :: Django :: 4.1",
"Intended Audience :: Developers",
"Natural Language :: English",
"License :: OSI Approved :: BSD License",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -27,7 +27,7 @@\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Environment :: Console\",\n- \"Framework :: Django :: 4.0\",\n+ \"Framework :: Django :: 4.1\",\n \"Intended Audience :: Developers\",\n \"Natural Language :: English\",\n \"License :: OSI Approved :: BSD License\",\n", "issue": "Docker Django 4.1 template reload behavior\nUsing Django 4.1 under Docker gives trouble in development with the changed default behavior of the template loading. Templates are cached until restart since the cached loader is added as default: https://docs.djangoproject.com/en/4.1/releases/4.1/#templates\r\n\r\nMy current workaround is adding the next snippet to local.py\r\n\r\n```\r\nTEMPLATES[0][\"OPTIONS\"][\"loaders\"] = [\r\n (\r\n \"django.template.loaders.filesystem.Loader\",\r\n [APPS_DIR / \"templates\"],\r\n ),\r\n \"django.template.loaders.app_directories.Loader\",\r\n]\r\n```\n[Update Django] Django 4.1\n4.1 requirements tables\n\n\n\n## base.txt\n\n| Name | Version in Master | 4.1 Compatible Version | OK |\n| ---- | :---------------: | :-----------------------------: | :-: |\n| [pytz](http://pythonhosted.org/pytz) | 2023.3 | n/a | \u2705 |\n| [python-slugify](https://github.com/un33k/python-slugify) | 8.0.1 | n/a | \u2705 |\n| [Pillow](https://python-pillow.org) | 9.5.0 | n/a | \u2705 |\n| [rcssmin](http://opensource.perlig.de/rcssmin/) | 1.1.1 | n/a | \u2705 |\n| [argon2-cffi](https://pypi.org/project/argon2-cffi/) | 21.3.0 | n/a | \u2705 |\n| [whitenoise](https://github.com/evansd/whitenoise) | 6.4.0 | 6.2.0 | \u2705 |\n| [redis](https://github.com/redis/redis-py) | 4.5.4 | n/a | \u2705 |\n| [hiredis](https://github.com/redis/hiredis-py) | 2.2.2 | n/a | \u2705 |\n| [celery](http://celeryproject.org) | 5.2.7 | n/a | \u2705 |\n| [django-celery-beat](https://github.com/celery/django-celery-beat) | 2.5.0 | 2.4.0 | \u2705 |\n| [flower](https://github.com/mher/flower) | 1.2.0 | n/a | \u2705 |\n| [uvicorn](https://pypi.org/project/uvicorn/) | 0.21.1 | n/a | \u2705 |\n| [django](https://www.djangoproject.com/) | 4.0.10 | 2.4.0 | \u2705 |\n| [django-environ](https://django-environ.readthedocs.org) | 0.10.0 | | \u2753 |\n| [django-model-utils](https://github.com/jazzband/django-model-utils) | 4.3.1 | 4.3.1 | \u2705 |\n| [django-allauth](http://www.intenct.nl/projects/django-allauth/) | 0.54.0 | 0.52.0 | \u2705 |\n| [django-crispy-forms](https://pypi.org/project/django-crispy-forms/) | 2.0 | 2.0 | \u2705 |\n| [crispy-bootstrap5](https://github.com/django-crispy-forms/crispy-bootstrap5) | 0.7 | 0.7 | \u2705 |\n| [django-compressor](https://django-compressor.readthedocs.io/en/latest/) | 4.3.1 | 4.1 | \u2705 |\n| [django-redis](https://github.com/jazzband/django-redis) | 5.2.0 | | \u274c |\n| [djangorestframework](https://www.django-rest-framework.org/) | 3.14.0 | 3.14.0 | \u2705 |\n| [django-cors-headers](https://github.com/adamchainz/django-cors-headers) | 3.14.0 | 3.13.0 | \u2705 |\n| [drf-spectacular](https://github.com/tfranzel/drf-spectacular) | 0.26.1 | 0.24.0 | \u2705 |\n| [django-webpack-loader](https://github.com/django-webpack/django-webpack-loader) | 1.8.1 | 1.8.0 | \u2705 |\n\n\n## local.txt\n\n| Name | Version in Master | 4.1 Compatible Version | OK |\n| ---- | :---------------: | :-----------------------------: | :-: |\n| [Werkzeug](https://palletsprojects.com/p/werkzeug/) | 2.2.3 | n/a | \u2705 |\n| [ipdb](https://github.com/gotcha/ipdb) | 0.13.13 | n/a | \u2705 |\n| 
[psycopg2](https://psycopg.org/) | 2.9.6 | n/a | \u2705 |\n| [psycopg2-binary](https://psycopg.org/) | 2.9.6 | n/a | \u2705 |\n| [watchfiles](https://github.com/samuelcolvin/watchfiles/watchfiles) | 0.19.0 | n/a | \u2705 |\n| [mypy](http://www.mypy-lang.org/) | 1.1.1 | n/a | \u2705 |\n| [django-stubs](https://github.com/typeddjango/django-stubs) | 1.16.0 | 1.13.0 | \u2705 |\n| [pytest](https://docs.pytest.org/en/latest/) | 7.2.2 | n/a | \u2705 |\n| [pytest-sugar](https://pivotfinland.com/pytest-sugar/) | 0.9.6 | n/a | \u2705 |\n| [djangorestframework-stubs](https://github.com/typeddjango/djangorestframework-stubs) | 1.10.0 | n/a | \u2705 |\n| [sphinx](https://pypi.org/project/Sphinx/) | 6.1.3 | n/a | \u2705 |\n| [sphinx-autobuild](https://github.com/executablebooks/sphinx-autobuild) | 2021.3.14 | n/a | \u2705 |\n| [flake8](https://github.com/pycqa/flake8) | 6.0.0 | n/a | \u2705 |\n| [flake8-isort](https://github.com/gforcada/flake8-isort) | 6.0.0 | n/a | \u2705 |\n| [coverage](https://github.com/nedbat/coveragepy) | 7.2.2 | n/a | \u2705 |\n| [black](https://pypi.org/project/black/) | 23.3.0 | n/a | \u2705 |\n| [pylint-django](https://github.com/PyCQA/pylint-django) | 2.5.3 | | \u274c |\n| [pylint-celery](https://github.com/landscapeio/pylint-celery) | 0.3 | n/a | \u2705 |\n| [pre-commit](https://github.com/pre-commit/pre-commit) | 3.2.1 | n/a | \u2705 |\n| [factory-boy](https://github.com/FactoryBoy/factory_boy) | 3.2.1 | | \u274c |\n| [django-debug-toolbar](https://pypi.org/project/django-debug-toolbar/) | 3.8.1 | 3.6.0 | \u2705 |\n| [django-extensions](http://github.com/django-extensions/django-extensions) | 3.2.1 | 3.2.1 | \u2705 |\n| [django-coverage-plugin](https://github.com/nedbat/django_coverage_plugin) | 3.0.0 | 2.0.4 | \u2705 |\n| [pytest-django](https://pytest-django.readthedocs.io/) | 4.5.2 | | \u274c |\n\n\n## production.txt\n\n| Name | Version in Master | 4.1 Compatible Version | OK |\n| ---- | :---------------: | :-----------------------------: | :-: |\n| [gunicorn](https://gunicorn.org) | 20.1.0 | n/a | \u2705 |\n| [psycopg2](https://psycopg.org/) | 2.9.6 | n/a | \u2705 |\n| [Collectfast](https://github.com/antonagestam/collectfast/) | 2.2.0 | n/a | \u2705 |\n| [sentry-sdk](https://github.com/getsentry/sentry-python) | 1.17.0 | n/a | \u2705 |\n| [hiredis](https://github.com/redis/hiredis-py) | 2.2.2 | n/a | \u2705 |\n| [django-storages](https://github.com/jschneier/django-storages) | 1.13.2 | 1.13 | \u2705 |\n| [django-anymail](https://github.com/anymail/django-anymail) | 9.1 | 9.0 | \u2705 |\n\n", "before_files": [{"content": "#!/usr/bin/env python\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\n# We use calendar versioning\nversion = \"2023.04.03\"\n\nwith open(\"README.rst\") as readme_file:\n long_description = readme_file.read()\n\nsetup(\n name=\"cookiecutter-django\",\n version=version,\n description=(\n \"A Cookiecutter template for creating production-ready \"\n \"Django projects quickly.\"\n ),\n long_description=long_description,\n author=\"Daniel Roy Greenfeld\",\n author_email=\"[email protected]\",\n url=\"https://github.com/cookiecutter/cookiecutter-django\",\n packages=[],\n license=\"BSD\",\n zip_safe=False,\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Environment :: Console\",\n \"Framework :: Django :: 4.0\",\n \"Intended Audience :: Developers\",\n \"Natural Language :: English\",\n \"License :: OSI Approved :: BSD License\",\n \"Programming Language :: Python\",\n \"Programming Language :: 
Python :: 3\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Topic :: Software Development\",\n ],\n keywords=(\n \"cookiecutter, Python, projects, project templates, django, \"\n \"skeleton, scaffolding, project directory, setup.py\"\n ),\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\n# We use calendar versioning\nversion = \"2023.04.03\"\n\nwith open(\"README.rst\") as readme_file:\n long_description = readme_file.read()\n\nsetup(\n name=\"cookiecutter-django\",\n version=version,\n description=(\n \"A Cookiecutter template for creating production-ready \"\n \"Django projects quickly.\"\n ),\n long_description=long_description,\n author=\"Daniel Roy Greenfeld\",\n author_email=\"[email protected]\",\n url=\"https://github.com/cookiecutter/cookiecutter-django\",\n packages=[],\n license=\"BSD\",\n zip_safe=False,\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Environment :: Console\",\n \"Framework :: Django :: 4.1\",\n \"Intended Audience :: Developers\",\n \"Natural Language :: English\",\n \"License :: OSI Approved :: BSD License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Topic :: Software Development\",\n ],\n keywords=(\n \"cookiecutter, Python, projects, project templates, django, \"\n \"skeleton, scaffolding, project directory, setup.py\"\n ),\n)\n", "path": "setup.py"}]} |
gh_patches_debug_1173 | rasdani/github-patches | git_diff | python-pillow__Pillow-3478 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Using seek to skip more than one frame with FliImageFile only shows the pixels changed that frame
### What did you do?
I opened a FLI file and used .seek(50) on the image before creating a PhotoImage to display it on a tix label.
### What did you expect to happen?
I expected to see the complete image.
### What actually happened?
I only saw the part of the image that had changed for that particular frame. The rest of the image is black.
### What versions of Pillow and Python are you using?
Python 3.6.2 on Windows 7 x64
Pillow: 4.2.1
I did find that if I hack in a call to self.load() in FliImageFile's _seek() method, the frame displays fully. I don't know if this is the best way to fix the issue.
```python
import PIL as pil
from PIL import Image,ImageTk,FliImagePlugin
import tkinter.tix as tix
class FliImageFile(FliImagePlugin.FliImageFile):
def _seek(self, frame):
FliImagePlugin.FliImageFile._seek(self, frame)
# ensure that this frame is loaded
self.load()
def createlabel(root, filename):
label = tix.Label(root)
label.original = Image.open(filename)
label.original.seek(50) # Go to frame 50.
label.photoimage = ImageTk.PhotoImage(label.original) # keep a reference!
label.config(image=label.photoimage)
return label
def main():
root = tix.Tk()
label1 = createlabel(root, 'a.fli')
label1.pack()
# Hack to replace PIL's FliImageFile with one that loads image data at
# the end of each internal _seek() call.
Image.OPEN[FliImagePlugin.FliImageFile.format] = (FliImageFile, Image.OPEN[FliImagePlugin.FliImageFile.format][1])
label2 = createlabel(root, 'a.fli')
label2.pack()
root.mainloop()
main()
```
Using a.fli found at https://samples.libav.org/fli-flc/
The top image is what Pillow displays as-is; the bottom image uses my hack that loads the image at the end of _seek.

--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `src/PIL/FliImagePlugin.py`
Content:
```
1 #
2 # The Python Imaging Library.
3 # $Id$
4 #
5 # FLI/FLC file handling.
6 #
7 # History:
8 # 95-09-01 fl Created
9 # 97-01-03 fl Fixed parser, setup decoder tile
10 # 98-07-15 fl Renamed offset attribute to avoid name clash
11 #
12 # Copyright (c) Secret Labs AB 1997-98.
13 # Copyright (c) Fredrik Lundh 1995-97.
14 #
15 # See the README file for information on usage and redistribution.
16 #
17
18
19 from . import Image, ImageFile, ImagePalette
20 from ._binary import i8, i16le as i16, i32le as i32, o8
21
22 __version__ = "0.2"
23
24
25 #
26 # decoder
27
28 def _accept(prefix):
29 return len(prefix) >= 6 and i16(prefix[4:6]) in [0xAF11, 0xAF12]
30
31
32 ##
33 # Image plugin for the FLI/FLC animation format. Use the <b>seek</b>
34 # method to load individual frames.
35
36 class FliImageFile(ImageFile.ImageFile):
37
38 format = "FLI"
39 format_description = "Autodesk FLI/FLC Animation"
40 _close_exclusive_fp_after_loading = False
41
42 def _open(self):
43
44 # HEAD
45 s = self.fp.read(128)
46 magic = i16(s[4:6])
47 if not (magic in [0xAF11, 0xAF12] and
48 i16(s[14:16]) in [0, 3] and # flags
49 s[20:22] == b"\x00\x00"): # reserved
50 raise SyntaxError("not an FLI/FLC file")
51
52 # frames
53 self.__framecount = i16(s[6:8])
54
55 # image characteristics
56 self.mode = "P"
57 self._size = i16(s[8:10]), i16(s[10:12])
58
59 # animation speed
60 duration = i32(s[16:20])
61 if magic == 0xAF11:
62 duration = (duration * 1000) // 70
63 self.info["duration"] = duration
64
65 # look for palette
66 palette = [(a, a, a) for a in range(256)]
67
68 s = self.fp.read(16)
69
70 self.__offset = 128
71
72 if i16(s[4:6]) == 0xF100:
73 # prefix chunk; ignore it
74 self.__offset = self.__offset + i32(s)
75 s = self.fp.read(16)
76
77 if i16(s[4:6]) == 0xF1FA:
78 # look for palette chunk
79 s = self.fp.read(6)
80 if i16(s[4:6]) == 11:
81 self._palette(palette, 2)
82 elif i16(s[4:6]) == 4:
83 self._palette(palette, 0)
84
85 palette = [o8(r)+o8(g)+o8(b) for (r, g, b) in palette]
86 self.palette = ImagePalette.raw("RGB", b"".join(palette))
87
88 # set things up to decode first frame
89 self.__frame = -1
90 self.__fp = self.fp
91 self.__rewind = self.fp.tell()
92 self.seek(0)
93
94 def _palette(self, palette, shift):
95 # load palette
96
97 i = 0
98 for e in range(i16(self.fp.read(2))):
99 s = self.fp.read(2)
100 i = i + i8(s[0])
101 n = i8(s[1])
102 if n == 0:
103 n = 256
104 s = self.fp.read(n * 3)
105 for n in range(0, len(s), 3):
106 r = i8(s[n]) << shift
107 g = i8(s[n+1]) << shift
108 b = i8(s[n+2]) << shift
109 palette[i] = (r, g, b)
110 i += 1
111
112 @property
113 def n_frames(self):
114 return self.__framecount
115
116 @property
117 def is_animated(self):
118 return self.__framecount > 1
119
120 def seek(self, frame):
121 if not self._seek_check(frame):
122 return
123 if frame < self.__frame:
124 self._seek(0)
125
126 for f in range(self.__frame + 1, frame + 1):
127 self._seek(f)
128
129 def _seek(self, frame):
130 if frame == 0:
131 self.__frame = -1
132 self.__fp.seek(self.__rewind)
133 self.__offset = 128
134
135 if frame != self.__frame + 1:
136 raise ValueError("cannot seek to frame %d" % frame)
137 self.__frame = frame
138
139 # move to next frame
140 self.fp = self.__fp
141 self.fp.seek(self.__offset)
142
143 s = self.fp.read(4)
144 if not s:
145 raise EOFError
146
147 framesize = i32(s)
148
149 self.decodermaxblock = framesize
150 self.tile = [("fli", (0, 0)+self.size, self.__offset, None)]
151
152 self.__offset += framesize
153
154 def tell(self):
155 return self.__frame
156
157
158 #
159 # registry
160
161 Image.register_open(FliImageFile.format, FliImageFile, _accept)
162
163 Image.register_extensions(FliImageFile.format, [".fli", ".flc"])
164
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/src/PIL/FliImagePlugin.py b/src/PIL/FliImagePlugin.py
--- a/src/PIL/FliImagePlugin.py
+++ b/src/PIL/FliImagePlugin.py
@@ -131,6 +131,9 @@
self.__frame = -1
self.__fp.seek(self.__rewind)
self.__offset = 128
+ else:
+ # ensure that the previous frame was loaded
+ self.load()
if frame != self.__frame + 1:
raise ValueError("cannot seek to frame %d" % frame)
| {"golden_diff": "diff --git a/src/PIL/FliImagePlugin.py b/src/PIL/FliImagePlugin.py\n--- a/src/PIL/FliImagePlugin.py\n+++ b/src/PIL/FliImagePlugin.py\n@@ -131,6 +131,9 @@\n self.__frame = -1\n self.__fp.seek(self.__rewind)\n self.__offset = 128\n+ else:\n+ # ensure that the previous frame was loaded\n+ self.load()\n \n if frame != self.__frame + 1:\n raise ValueError(\"cannot seek to frame %d\" % frame)\n", "issue": "Using seek to skip more than one frame with FliImageFile only shows the pixels changed that frame\n### What did you do?\r\nI opened a FLI file and used .seek(50) on the image before creating a PhotoImage to display it on a tix label.\r\n\r\n### What did you expect to happen?\r\nI expected to see the complete image.\r\n\r\n### What actually happened?\r\nI only saw the part of the image that had changed for that particular frame. The rest of the image is black.\r\n\r\n### What versions of Pillow and Python are you using?\r\nPython 3.6.2 on Windows 7 x64\r\nPillow: 4.2.1\r\n\r\nI did find that if I hack in a call to self.load() in FliImageFile's _seek() method, the frame displays fully. I don't know if this is the best way to fix the issue.\r\n\r\n```python\r\nimport PIL as pil\r\nfrom PIL import Image,ImageTk,FliImagePlugin\r\nimport tkinter.tix as tix\r\n\r\nclass FliImageFile(FliImagePlugin.FliImageFile):\r\n def _seek(self, frame):\r\n FliImagePlugin.FliImageFile._seek(self, frame)\r\n # ensure that this frame is loaded\r\n self.load()\r\n\r\ndef createlabel(root, filename):\r\n label = tix.Label(root)\r\n label.original = Image.open(filename)\r\n label.original.seek(50) # Go to frame 50.\r\n label.photoimage = ImageTk.PhotoImage(label.original) # keep a reference!\r\n label.config(image=label.photoimage)\r\n return label\r\n\r\ndef main():\r\n root = tix.Tk()\r\n label1 = createlabel(root, 'a.fli')\r\n label1.pack()\r\n # Hack to replace PIL's FliImageFile with one that loads image data at\r\n # the end of each internal _seek() call.\r\n Image.OPEN[FliImagePlugin.FliImageFile.format] = (FliImageFile, Image.OPEN[FliImagePlugin.FliImageFile.format][1])\r\n label2 = createlabel(root, 'a.fli')\r\n label2.pack()\r\n root.mainloop()\r\n\r\nmain()\r\n```\r\nUsing a.fli found at https://samples.libav.org/fli-flc/\r\n\r\nTop image is what Pillow displays as-is. The bottom image uses my hack that loads the image at the end of _seek.\r\n\r\n\n", "before_files": [{"content": "#\n# The Python Imaging Library.\n# $Id$\n#\n# FLI/FLC file handling.\n#\n# History:\n# 95-09-01 fl Created\n# 97-01-03 fl Fixed parser, setup decoder tile\n# 98-07-15 fl Renamed offset attribute to avoid name clash\n#\n# Copyright (c) Secret Labs AB 1997-98.\n# Copyright (c) Fredrik Lundh 1995-97.\n#\n# See the README file for information on usage and redistribution.\n#\n\n\nfrom . import Image, ImageFile, ImagePalette\nfrom ._binary import i8, i16le as i16, i32le as i32, o8\n\n__version__ = \"0.2\"\n\n\n#\n# decoder\n\ndef _accept(prefix):\n return len(prefix) >= 6 and i16(prefix[4:6]) in [0xAF11, 0xAF12]\n\n\n##\n# Image plugin for the FLI/FLC animation format. 
Use the <b>seek</b>\n# method to load individual frames.\n\nclass FliImageFile(ImageFile.ImageFile):\n\n format = \"FLI\"\n format_description = \"Autodesk FLI/FLC Animation\"\n _close_exclusive_fp_after_loading = False\n\n def _open(self):\n\n # HEAD\n s = self.fp.read(128)\n magic = i16(s[4:6])\n if not (magic in [0xAF11, 0xAF12] and\n i16(s[14:16]) in [0, 3] and # flags\n s[20:22] == b\"\\x00\\x00\"): # reserved\n raise SyntaxError(\"not an FLI/FLC file\")\n\n # frames\n self.__framecount = i16(s[6:8])\n\n # image characteristics\n self.mode = \"P\"\n self._size = i16(s[8:10]), i16(s[10:12])\n\n # animation speed\n duration = i32(s[16:20])\n if magic == 0xAF11:\n duration = (duration * 1000) // 70\n self.info[\"duration\"] = duration\n\n # look for palette\n palette = [(a, a, a) for a in range(256)]\n\n s = self.fp.read(16)\n\n self.__offset = 128\n\n if i16(s[4:6]) == 0xF100:\n # prefix chunk; ignore it\n self.__offset = self.__offset + i32(s)\n s = self.fp.read(16)\n\n if i16(s[4:6]) == 0xF1FA:\n # look for palette chunk\n s = self.fp.read(6)\n if i16(s[4:6]) == 11:\n self._palette(palette, 2)\n elif i16(s[4:6]) == 4:\n self._palette(palette, 0)\n\n palette = [o8(r)+o8(g)+o8(b) for (r, g, b) in palette]\n self.palette = ImagePalette.raw(\"RGB\", b\"\".join(palette))\n\n # set things up to decode first frame\n self.__frame = -1\n self.__fp = self.fp\n self.__rewind = self.fp.tell()\n self.seek(0)\n\n def _palette(self, palette, shift):\n # load palette\n\n i = 0\n for e in range(i16(self.fp.read(2))):\n s = self.fp.read(2)\n i = i + i8(s[0])\n n = i8(s[1])\n if n == 0:\n n = 256\n s = self.fp.read(n * 3)\n for n in range(0, len(s), 3):\n r = i8(s[n]) << shift\n g = i8(s[n+1]) << shift\n b = i8(s[n+2]) << shift\n palette[i] = (r, g, b)\n i += 1\n\n @property\n def n_frames(self):\n return self.__framecount\n\n @property\n def is_animated(self):\n return self.__framecount > 1\n\n def seek(self, frame):\n if not self._seek_check(frame):\n return\n if frame < self.__frame:\n self._seek(0)\n\n for f in range(self.__frame + 1, frame + 1):\n self._seek(f)\n\n def _seek(self, frame):\n if frame == 0:\n self.__frame = -1\n self.__fp.seek(self.__rewind)\n self.__offset = 128\n\n if frame != self.__frame + 1:\n raise ValueError(\"cannot seek to frame %d\" % frame)\n self.__frame = frame\n\n # move to next frame\n self.fp = self.__fp\n self.fp.seek(self.__offset)\n\n s = self.fp.read(4)\n if not s:\n raise EOFError\n\n framesize = i32(s)\n\n self.decodermaxblock = framesize\n self.tile = [(\"fli\", (0, 0)+self.size, self.__offset, None)]\n\n self.__offset += framesize\n\n def tell(self):\n return self.__frame\n\n\n#\n# registry\n\nImage.register_open(FliImageFile.format, FliImageFile, _accept)\n\nImage.register_extensions(FliImageFile.format, [\".fli\", \".flc\"])\n", "path": "src/PIL/FliImagePlugin.py"}], "after_files": [{"content": "#\n# The Python Imaging Library.\n# $Id$\n#\n# FLI/FLC file handling.\n#\n# History:\n# 95-09-01 fl Created\n# 97-01-03 fl Fixed parser, setup decoder tile\n# 98-07-15 fl Renamed offset attribute to avoid name clash\n#\n# Copyright (c) Secret Labs AB 1997-98.\n# Copyright (c) Fredrik Lundh 1995-97.\n#\n# See the README file for information on usage and redistribution.\n#\n\n\nfrom . import Image, ImageFile, ImagePalette\nfrom ._binary import i8, i16le as i16, i32le as i32, o8\n\n__version__ = \"0.2\"\n\n\n#\n# decoder\n\ndef _accept(prefix):\n return len(prefix) >= 6 and i16(prefix[4:6]) in [0xAF11, 0xAF12]\n\n\n##\n# Image plugin for the FLI/FLC animation format. 
Use the <b>seek</b>\n# method to load individual frames.\n\nclass FliImageFile(ImageFile.ImageFile):\n\n format = \"FLI\"\n format_description = \"Autodesk FLI/FLC Animation\"\n _close_exclusive_fp_after_loading = False\n\n def _open(self):\n\n # HEAD\n s = self.fp.read(128)\n magic = i16(s[4:6])\n if not (magic in [0xAF11, 0xAF12] and\n i16(s[14:16]) in [0, 3] and # flags\n s[20:22] == b\"\\x00\\x00\"): # reserved\n raise SyntaxError(\"not an FLI/FLC file\")\n\n # frames\n self.__framecount = i16(s[6:8])\n\n # image characteristics\n self.mode = \"P\"\n self._size = i16(s[8:10]), i16(s[10:12])\n\n # animation speed\n duration = i32(s[16:20])\n if magic == 0xAF11:\n duration = (duration * 1000) // 70\n self.info[\"duration\"] = duration\n\n # look for palette\n palette = [(a, a, a) for a in range(256)]\n\n s = self.fp.read(16)\n\n self.__offset = 128\n\n if i16(s[4:6]) == 0xF100:\n # prefix chunk; ignore it\n self.__offset = self.__offset + i32(s)\n s = self.fp.read(16)\n\n if i16(s[4:6]) == 0xF1FA:\n # look for palette chunk\n s = self.fp.read(6)\n if i16(s[4:6]) == 11:\n self._palette(palette, 2)\n elif i16(s[4:6]) == 4:\n self._palette(palette, 0)\n\n palette = [o8(r)+o8(g)+o8(b) for (r, g, b) in palette]\n self.palette = ImagePalette.raw(\"RGB\", b\"\".join(palette))\n\n # set things up to decode first frame\n self.__frame = -1\n self.__fp = self.fp\n self.__rewind = self.fp.tell()\n self.seek(0)\n\n def _palette(self, palette, shift):\n # load palette\n\n i = 0\n for e in range(i16(self.fp.read(2))):\n s = self.fp.read(2)\n i = i + i8(s[0])\n n = i8(s[1])\n if n == 0:\n n = 256\n s = self.fp.read(n * 3)\n for n in range(0, len(s), 3):\n r = i8(s[n]) << shift\n g = i8(s[n+1]) << shift\n b = i8(s[n+2]) << shift\n palette[i] = (r, g, b)\n i += 1\n\n @property\n def n_frames(self):\n return self.__framecount\n\n @property\n def is_animated(self):\n return self.__framecount > 1\n\n def seek(self, frame):\n if not self._seek_check(frame):\n return\n if frame < self.__frame:\n self._seek(0)\n\n for f in range(self.__frame + 1, frame + 1):\n self._seek(f)\n\n def _seek(self, frame):\n if frame == 0:\n self.__frame = -1\n self.__fp.seek(self.__rewind)\n self.__offset = 128\n else:\n # ensure that the previous frame was loaded\n self.load()\n\n if frame != self.__frame + 1:\n raise ValueError(\"cannot seek to frame %d\" % frame)\n self.__frame = frame\n\n # move to next frame\n self.fp = self.__fp\n self.fp.seek(self.__offset)\n\n s = self.fp.read(4)\n if not s:\n raise EOFError\n\n framesize = i32(s)\n\n self.decodermaxblock = framesize\n self.tile = [(\"fli\", (0, 0)+self.size, self.__offset, None)]\n\n self.__offset += framesize\n\n def tell(self):\n return self.__frame\n\n\n#\n# registry\n\nImage.register_open(FliImageFile.format, FliImageFile, _accept)\n\nImage.register_extensions(FliImageFile.format, [\".fli\", \".flc\"])\n", "path": "src/PIL/FliImagePlugin.py"}]} |
gh_patches_debug_1174 | rasdani/github-patches | git_diff | ray-project__ray-8886 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[CLI] version option
### Describe your feature request
`ray --version` should output the version. Judging from the output of `ray --help`, it looks like Click is used. Version flags should be easy in click; see https://click.palletsprojects.com/en/7.x/api/#click.version_option.
--- END ISSUE ---
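For reference, Click exposes this as a built-in decorator. The following is a minimal, self-contained sketch of how a `--version` flag behaves on a Click group; it is not taken from the Ray codebase, and the hard-coded version string, the `prog_name`, and the placeholder subcommand are illustrative assumptions only:

```python
import click

# Hard-coded for the sketch; a real integration would look up the installed
# package's version (e.g. ray.__version__ or pkg_resources) instead.
SKETCH_VERSION = "0.9.0.dev0"


@click.group()
@click.version_option(version=SKETCH_VERSION, prog_name="ray")
def cli():
    """Top-level command group; `--version` prints the version and exits."""


@cli.command()
def hello():
    """Placeholder subcommand so the group has something to dispatch to."""
    click.echo("hello from the sketch")


if __name__ == "__main__":
    cli()
```

With Click 7.x, invoking the script with `--version` prints `ray, version 0.9.0.dev0` and exits; the flag is eager, so it short-circuits before the group callback or any subcommand runs, which matches the behavior the feature request asks for.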
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/ray/scripts/scripts.py`
Content:
```
1 import click
2 import copy
3 from datetime import datetime
4 import json
5 import logging
6 import os
7 import subprocess
8 import sys
9 import time
10 import urllib
11 import urllib.parse
12
13 import ray
14 import psutil
15 import ray.services as services
16 from ray.autoscaler.commands import (
17 attach_cluster, exec_cluster, create_or_update_cluster, monitor_cluster,
18 rsync, teardown_cluster, get_head_node_ip, kill_node, get_worker_node_ips)
19 import ray.ray_constants as ray_constants
20 import ray.utils
21 from ray.projects.scripts import project_cli, session_cli
22
23 logger = logging.getLogger(__name__)
24
25
26 def check_no_existing_redis_clients(node_ip_address, redis_client):
27 # The client table prefix must be kept in sync with the file
28 # "src/ray/gcs/redis_module/ray_redis_module.cc" where it is defined.
29 REDIS_CLIENT_TABLE_PREFIX = "CL:"
30 client_keys = redis_client.keys("{}*".format(REDIS_CLIENT_TABLE_PREFIX))
31 # Filter to clients on the same node and do some basic checking.
32 for key in client_keys:
33 info = redis_client.hgetall(key)
34 assert b"ray_client_id" in info
35 assert b"node_ip_address" in info
36 assert b"client_type" in info
37 assert b"deleted" in info
38 # Clients that ran on the same node but that are marked dead can be
39 # ignored.
40 deleted = info[b"deleted"]
41 deleted = bool(int(deleted))
42 if deleted:
43 continue
44
45 if ray.utils.decode(info[b"node_ip_address"]) == node_ip_address:
46 raise Exception("This Redis instance is already connected to "
47 "clients with this IP address.")
48
49
50 @click.group()
51 @click.option(
52 "--logging-level",
53 required=False,
54 default=ray_constants.LOGGER_LEVEL,
55 type=str,
56 help=ray_constants.LOGGER_LEVEL_HELP)
57 @click.option(
58 "--logging-format",
59 required=False,
60 default=ray_constants.LOGGER_FORMAT,
61 type=str,
62 help=ray_constants.LOGGER_FORMAT_HELP)
63 def cli(logging_level, logging_format):
64 level = logging.getLevelName(logging_level.upper())
65 ray.utils.setup_logger(level, logging_format)
66
67
68 @click.command()
69 @click.argument("cluster_config_file", required=True, type=str)
70 @click.option(
71 "--cluster-name",
72 "-n",
73 required=False,
74 type=str,
75 help="Override the configured cluster name.")
76 @click.option(
77 "--port",
78 "-p",
79 required=False,
80 type=int,
81 default=8265,
82 help="The local port to forward to the dashboard")
83 def dashboard(cluster_config_file, cluster_name, port):
84 """Port-forward a Ray cluster's dashboard to the local machine."""
85 # Sleeping in a loop is preferable to `sleep infinity` because the latter
86 # only works on linux.
87 remote_port = 8265
88 if port:
89 dashboard_port = port
90 else:
91 dashboard_port = remote_port
92
93 port_taken = True
94
95 # Find the first open port sequentially from `remote_port`.
96 while port_taken:
97 try:
98 port_forward = [
99 (dashboard_port, remote_port),
100 ]
101 click.echo(
102 "Attempting to establish dashboard at localhost:{}".format(
103 port_forward[0][0]))
104 # We want to probe with a no-op that returns quickly to avoid
105 # exceptions caused by network errors.
106 exec_cluster(
107 cluster_config_file,
108 override_cluster_name=cluster_name,
109 port_forward=port_forward)
110 port_taken = False
111 except Exception:
112 click.echo("Failed to forward dashboard, trying a new port...")
113 port_taken = True
114 dashboard_port += 1
115 pass
116
117
118 @cli.command()
119 @click.option(
120 "--node-ip-address",
121 required=False,
122 type=str,
123 help="the IP address of this node")
124 @click.option(
125 "--redis-address", required=False, type=str, help="same as --address")
126 @click.option(
127 "--address", required=False, type=str, help="the address to use for Ray")
128 @click.option(
129 "--redis-port",
130 required=False,
131 type=str,
132 help="(DEPRECATED) the port to use for starting redis. "
133 "Please use --port instead now.")
134 @click.option(
135 "--port",
136 required=False,
137 type=str,
138 help="the port of the head ray process. If not provided, tries to use "
139 "{0}, falling back to a random port if {0} is "
140 "not available".format(ray_constants.DEFAULT_PORT))
141 @click.option(
142 "--num-redis-shards",
143 required=False,
144 type=int,
145 help=("the number of additional Redis shards to use in "
146 "addition to the primary Redis shard"))
147 @click.option(
148 "--redis-max-clients",
149 required=False,
150 type=int,
151 help=("If provided, attempt to configure Redis with this "
152 "maximum number of clients."))
153 @click.option(
154 "--redis-password",
155 required=False,
156 type=str,
157 default=ray_constants.REDIS_DEFAULT_PASSWORD,
158 help="If provided, secure Redis ports with this password")
159 @click.option(
160 "--redis-shard-ports",
161 required=False,
162 type=str,
163 help="the port to use for the Redis shards other than the "
164 "primary Redis shard")
165 @click.option(
166 "--object-manager-port",
167 required=False,
168 type=int,
169 help="the port to use for starting the object manager")
170 @click.option(
171 "--node-manager-port",
172 required=False,
173 type=int,
174 help="the port to use for starting the node manager")
175 @click.option(
176 "--min-worker-port",
177 required=False,
178 type=int,
179 default=10000,
180 help="the lowest port number that workers will bind on. If not set, "
181 "random ports will be chosen.")
182 @click.option(
183 "--max-worker-port",
184 required=False,
185 type=int,
186 default=10999,
187 help="the highest port number that workers will bind on. If set, "
188 "'--min-worker-port' must also be set.")
189 @click.option(
190 "--memory",
191 required=False,
192 type=int,
193 help="The amount of memory (in bytes) to make available to workers. "
194 "By default, this is set to the available memory on the node.")
195 @click.option(
196 "--object-store-memory",
197 required=False,
198 type=int,
199 help="The amount of memory (in bytes) to start the object store with. "
200 "By default, this is capped at 20GB but can be set higher.")
201 @click.option(
202 "--redis-max-memory",
203 required=False,
204 type=int,
205 help="The max amount of memory (in bytes) to allow redis to use. Once the "
206 "limit is exceeded, redis will start LRU eviction of entries. This only "
207 "applies to the sharded redis tables (task, object, and profile tables). "
208 "By default this is capped at 10GB but can be set higher.")
209 @click.option(
210 "--num-cpus",
211 required=False,
212 type=int,
213 help="the number of CPUs on this node")
214 @click.option(
215 "--num-gpus",
216 required=False,
217 type=int,
218 help="the number of GPUs on this node")
219 @click.option(
220 "--resources",
221 required=False,
222 default="{}",
223 type=str,
224 help="a JSON serialized dictionary mapping resource name to "
225 "resource quantity")
226 @click.option(
227 "--head",
228 is_flag=True,
229 default=False,
230 help="provide this argument for the head node")
231 @click.option(
232 "--include-webui",
233 default=None,
234 type=bool,
235 help="provide this argument if the UI should be started")
236 @click.option(
237 "--webui-host",
238 required=False,
239 default="localhost",
240 help="The host to bind the web UI server to. Can either be localhost "
241 "(127.0.0.1) or 0.0.0.0 (available from all interfaces). By default, this "
242 "is set to localhost to prevent access from external machines.")
243 @click.option(
244 "--block",
245 is_flag=True,
246 default=False,
247 help="provide this argument to block forever in this command")
248 @click.option(
249 "--plasma-directory",
250 required=False,
251 type=str,
252 help="object store directory for memory mapped files")
253 @click.option(
254 "--huge-pages",
255 is_flag=True,
256 default=False,
257 help="enable support for huge pages in the object store")
258 @click.option(
259 "--autoscaling-config",
260 required=False,
261 type=str,
262 help="the file that contains the autoscaling config")
263 @click.option(
264 "--no-redirect-worker-output",
265 is_flag=True,
266 default=False,
267 help="do not redirect worker stdout and stderr to files")
268 @click.option(
269 "--no-redirect-output",
270 is_flag=True,
271 default=False,
272 help="do not redirect non-worker stdout and stderr to files")
273 @click.option(
274 "--plasma-store-socket-name",
275 default=None,
276 help="manually specify the socket name of the plasma store")
277 @click.option(
278 "--raylet-socket-name",
279 default=None,
280 help="manually specify the socket path of the raylet process")
281 @click.option(
282 "--temp-dir",
283 default=None,
284 help="manually specify the root temporary dir of the Ray process")
285 @click.option(
286 "--include-java",
287 is_flag=True,
288 default=None,
289 help="Enable Java worker support.")
290 @click.option(
291 "--java-worker-options",
292 required=False,
293 default=None,
294 type=str,
295 help="Overwrite the options to start Java workers.")
296 @click.option(
297 "--internal-config",
298 default=None,
299 type=json.loads,
300 help="Do NOT use this. This is for debugging/development purposes ONLY.")
301 @click.option(
302 "--load-code-from-local",
303 is_flag=True,
304 default=False,
305 help="Specify whether load code from local file or GCS serialization.")
306 def start(node_ip_address, redis_address, address, redis_port, port,
307 num_redis_shards, redis_max_clients, redis_password,
308 redis_shard_ports, object_manager_port, node_manager_port,
309 min_worker_port, max_worker_port, memory, object_store_memory,
310 redis_max_memory, num_cpus, num_gpus, resources, head, include_webui,
311 webui_host, block, plasma_directory, huge_pages, autoscaling_config,
312 no_redirect_worker_output, no_redirect_output,
313 plasma_store_socket_name, raylet_socket_name, temp_dir, include_java,
314 java_worker_options, load_code_from_local, internal_config):
315 """Start Ray processes manually on the local machine."""
316 if redis_address is not None:
317 raise DeprecationWarning("The --redis-address argument is "
318 "deprecated. Please use --address instead.")
319 if redis_port is not None:
320 logger.warn("The --redis-port argument will be deprecated soon. "
321 "Please use --port instead.")
322 if port is not None and port != redis_port:
323 raise ValueError("Cannot specify both --port and --redis-port "
324 "as port is a rename of deprecated redis-port")
325
326 # Convert hostnames to numerical IP address.
327 if node_ip_address is not None:
328 node_ip_address = services.address_to_ip(node_ip_address)
329
330 if redis_address is not None or address is not None:
331 (redis_address, redis_address_ip,
332 redis_address_port) = services.validate_redis_address(
333 address, redis_address)
334
335 try:
336 resources = json.loads(resources)
337 except Exception:
338 raise Exception("Unable to parse the --resources argument using "
339 "json.loads. Try using a format like\n\n"
340 " --resources='{\"CustomResource1\": 3, "
341 "\"CustomReseource2\": 2}'")
342
343 redirect_worker_output = None if not no_redirect_worker_output else True
344 redirect_output = None if not no_redirect_output else True
345 ray_params = ray.parameter.RayParams(
346 node_ip_address=node_ip_address,
347 min_worker_port=min_worker_port,
348 max_worker_port=max_worker_port,
349 object_manager_port=object_manager_port,
350 node_manager_port=node_manager_port,
351 memory=memory,
352 object_store_memory=object_store_memory,
353 redis_password=redis_password,
354 redirect_worker_output=redirect_worker_output,
355 redirect_output=redirect_output,
356 num_cpus=num_cpus,
357 num_gpus=num_gpus,
358 resources=resources,
359 plasma_directory=plasma_directory,
360 huge_pages=huge_pages,
361 plasma_store_socket_name=plasma_store_socket_name,
362 raylet_socket_name=raylet_socket_name,
363 temp_dir=temp_dir,
364 include_java=include_java,
365 include_webui=include_webui,
366 webui_host=webui_host,
367 java_worker_options=java_worker_options,
368 load_code_from_local=load_code_from_local,
369 _internal_config=internal_config)
370 if head:
371 # Start Ray on the head node.
372 if redis_shard_ports is not None:
373 redis_shard_ports = redis_shard_ports.split(",")
374 # Infer the number of Redis shards from the ports if the number is
375 # not provided.
376 if num_redis_shards is None:
377 num_redis_shards = len(redis_shard_ports)
378 # Check that the arguments match.
379 if len(redis_shard_ports) != num_redis_shards:
380 raise Exception("If --redis-shard-ports is provided, it must "
381 "have the form '6380,6381,6382', and the "
382 "number of ports provided must equal "
383 "--num-redis-shards (which is 1 if not "
384 "provided)")
385
386 if redis_address is not None:
387 raise Exception("If --head is passed in, a Redis server will be "
388 "started, so a Redis address should not be "
389 "provided.")
390
391 # Get the node IP address if one is not provided.
392 ray_params.update_if_absent(
393 node_ip_address=services.get_node_ip_address())
394 logger.info("Using IP address {} for this node.".format(
395 ray_params.node_ip_address))
396 ray_params.update_if_absent(
397 redis_port=port or redis_port,
398 redis_shard_ports=redis_shard_ports,
399 redis_max_memory=redis_max_memory,
400 num_redis_shards=num_redis_shards,
401 redis_max_clients=redis_max_clients,
402 autoscaling_config=autoscaling_config,
403 include_java=False,
404 )
405
406 node = ray.node.Node(
407 ray_params, head=True, shutdown_at_exit=block, spawn_reaper=block)
408 redis_address = node.redis_address
409
410 logger.info(
411 "\nStarted Ray on this node. You can add additional nodes to "
412 "the cluster by calling\n\n"
413 " ray start --address='{}'{}\n\n"
414 "from the node you wish to add. You can connect a driver to the "
415 "cluster from Python by running\n\n"
416 " import ray\n"
417 " ray.init(address='auto'{})\n\n"
418 "If you have trouble connecting from a different machine, check "
419 "that your firewall is configured properly. If you wish to "
420 "terminate the processes that have been started, run\n\n"
421 " ray stop".format(
422 redis_address, " --redis-password='" + redis_password + "'"
423 if redis_password else "",
424 ", redis_password='" + redis_password + "'"
425 if redis_password else ""))
426 else:
427 # Start Ray on a non-head node.
428 if not (redis_port is None and port is None):
429 raise Exception(
430 "If --head is not passed in, --port and --redis-port are not "
431 "allowed.")
432 if redis_shard_ports is not None:
433 raise Exception("If --head is not passed in, --redis-shard-ports "
434 "is not allowed.")
435 if redis_address is None:
436 raise Exception("If --head is not passed in, --address must "
437 "be provided.")
438 if num_redis_shards is not None:
439 raise Exception("If --head is not passed in, --num-redis-shards "
440 "must not be provided.")
441 if redis_max_clients is not None:
442 raise Exception("If --head is not passed in, --redis-max-clients "
443 "must not be provided.")
444 if include_webui:
445 raise Exception("If --head is not passed in, the --include-webui "
446 "flag is not relevant.")
447 if include_java is not None:
448 raise ValueError("--include-java should only be set for the head "
449 "node.")
450
451 # Wait for the Redis server to be started. And throw an exception if we
452 # can't connect to it.
453 services.wait_for_redis_to_start(
454 redis_address_ip, redis_address_port, password=redis_password)
455
456 # Create a Redis client.
457 redis_client = services.create_redis_client(
458 redis_address, password=redis_password)
459
460 # Check that the version information on this node matches the version
461 # information that the cluster was started with.
462 services.check_version_info(redis_client)
463
464 # Get the node IP address if one is not provided.
465 ray_params.update_if_absent(
466 node_ip_address=services.get_node_ip_address(redis_address))
467 logger.info("Using IP address {} for this node.".format(
468 ray_params.node_ip_address))
469 # Check that there aren't already Redis clients with the same IP
470 # address connected with this Redis instance. This raises an exception
471 # if the Redis server already has clients on this node.
472 check_no_existing_redis_clients(ray_params.node_ip_address,
473 redis_client)
474 ray_params.update(redis_address=redis_address)
475 node = ray.node.Node(
476 ray_params, head=False, shutdown_at_exit=block, spawn_reaper=block)
477 logger.info("\nStarted Ray on this node. If you wish to terminate the "
478 "processes that have been started, run\n\n"
479 " ray stop")
480
481 if block:
482 while True:
483 time.sleep(1)
484 deceased = node.dead_processes()
485 if len(deceased) > 0:
486 logger.error("Ray processes died unexpectedly:")
487 for process_type, process in deceased:
488 logger.error("\t{} died with exit code {}".format(
489 process_type, process.returncode))
490 # shutdown_at_exit will handle cleanup.
491 logger.error("Killing remaining processes and exiting...")
492 sys.exit(1)
493
494
495 @cli.command()
496 @click.option(
497 "-f",
498 "--force",
499 is_flag=True,
500 help="If set, ray will send SIGKILL instead of SIGTERM.")
501 @click.option(
502 "-v",
503 "--verbose",
504 is_flag=True,
505 help="If set, ray prints out more information about processes to kill.")
506 def stop(force, verbose):
507 """Stop Ray processes manually on the local machine."""
508 # Note that raylet needs to exit before object store, otherwise
509 # it cannot exit gracefully.
510 is_linux = sys.platform.startswith("linux")
511 processes_to_kill = [
512 # The first element is the substring to filter.
513 # The second element, if True, is to filter ps results by command name
514 # (only the first 15 charactors of the executable name on Linux);
515 # if False, is to filter ps results by command with all its arguments.
516 # See STANDARD FORMAT SPECIFIERS section of
517 # http://man7.org/linux/man-pages/man1/ps.1.html
518 # about comm and args. This can help avoid killing non-ray processes.
519 # Format:
520 # Keyword to filter, filter by command (True)/filter by args (False)
521 ["raylet", True],
522 ["plasma_store", True],
523 ["raylet_monitor", True],
524 ["gcs_server", True],
525 ["monitor.py", False],
526 ["redis-server", False],
527 ["default_worker.py", False], # Python worker.
528 ["ray::", True], # Python worker. TODO(mehrdadn): Fix for Windows
529 ["io.ray.runtime.runner.worker.DefaultWorker", False], # Java worker.
530 ["log_monitor.py", False],
531 ["reporter.py", False],
532 ["dashboard.py", False],
533 ["ray_process_reaper.py", False],
534 ]
535
536 process_infos = []
537 for proc in psutil.process_iter(["name", "cmdline"]):
538 try:
539 process_infos.append((proc, proc.name(), proc.cmdline()))
540 except psutil.Error:
541 pass
542 for keyword, filter_by_cmd in processes_to_kill:
543 if filter_by_cmd and is_linux and len(keyword) > 15:
544 msg = ("The filter string should not be more than {} "
545 "characters. Actual length: {}. Filter: {}").format(
546 15, len(keyword), keyword)
547 raise ValueError(msg)
548 found = []
549 for candidate in process_infos:
550 proc, proc_cmd, proc_args = candidate
551 corpus = (proc_cmd
552 if filter_by_cmd else subprocess.list2cmdline(proc_args))
553 if keyword in corpus:
554 found.append(candidate)
555 for proc, proc_cmd, proc_args in found:
556 if verbose:
557 operation = "Terminating" if force else "Killing"
558 logger.info("%s process %s: %s", operation, proc.pid,
559 subprocess.list2cmdline(proc_args))
560 try:
561 if force:
562 proc.kill()
563 else:
564 # TODO(mehrdadn): On Windows, this is forceful termination.
565 # We don't want CTRL_BREAK_EVENT, because that would
566 # terminate the entire process group. What to do?
567 proc.terminate()
568 except psutil.NoSuchProcess:
569 pass
570 except (psutil.Error, OSError) as ex:
571 logger.error("Error: %s", ex)
572
573
574 @cli.command(hidden=True)
575 @click.argument("cluster_config_file", required=True, type=str)
576 @click.option(
577 "--no-restart",
578 is_flag=True,
579 default=False,
580 help=("Whether to skip restarting Ray services during the update. "
581 "This avoids interrupting running jobs."))
582 @click.option(
583 "--restart-only",
584 is_flag=True,
585 default=False,
586 help=("Whether to skip running setup commands and only restart Ray. "
587 "This cannot be used with 'no-restart'."))
588 @click.option(
589 "--min-workers",
590 required=False,
591 type=int,
592 help="Override the configured min worker node count for the cluster.")
593 @click.option(
594 "--max-workers",
595 required=False,
596 type=int,
597 help="Override the configured max worker node count for the cluster.")
598 @click.option(
599 "--cluster-name",
600 "-n",
601 required=False,
602 type=str,
603 help="Override the configured cluster name.")
604 @click.option(
605 "--yes",
606 "-y",
607 is_flag=True,
608 default=False,
609 help="Don't ask for confirmation.")
610 def create_or_update(cluster_config_file, min_workers, max_workers, no_restart,
611 restart_only, yes, cluster_name):
612 """Create or update a Ray cluster."""
613 if restart_only or no_restart:
614 assert restart_only != no_restart, "Cannot set both 'restart_only' " \
615 "and 'no_restart' at the same time!"
616 if urllib.parse.urlparse(cluster_config_file).scheme in ("http", "https"):
617 try:
618 response = urllib.request.urlopen(cluster_config_file, timeout=5)
619 content = response.read()
620 file_name = cluster_config_file.split("/")[-1]
621 with open(file_name, "wb") as f:
622 f.write(content)
623 cluster_config_file = file_name
624 except urllib.error.HTTPError as e:
625 logger.info("Error downloading file: ", e)
626 create_or_update_cluster(cluster_config_file, min_workers, max_workers,
627 no_restart, restart_only, yes, cluster_name)
628
629
630 @cli.command(hidden=True)
631 @click.argument("cluster_config_file", required=True, type=str)
632 @click.option(
633 "--workers-only",
634 is_flag=True,
635 default=False,
636 help="Only destroy the workers.")
637 @click.option(
638 "--keep-min-workers",
639 is_flag=True,
640 default=False,
641 help="Retain the minimal amount of workers specified in the config.")
642 @click.option(
643 "--yes",
644 "-y",
645 is_flag=True,
646 default=False,
647 help="Don't ask for confirmation.")
648 @click.option(
649 "--cluster-name",
650 "-n",
651 required=False,
652 type=str,
653 help="Override the configured cluster name.")
654 def teardown(cluster_config_file, yes, workers_only, cluster_name,
655 keep_min_workers):
656 """Tear down a Ray cluster."""
657 teardown_cluster(cluster_config_file, yes, workers_only, cluster_name,
658 keep_min_workers)
659
660
661 @cli.command()
662 @click.argument("cluster_config_file", required=True, type=str)
663 @click.option(
664 "--yes",
665 "-y",
666 is_flag=True,
667 default=False,
668 help="Don't ask for confirmation.")
669 @click.option(
670 "--hard",
671 is_flag=True,
672 default=False,
673 help="Terminates the node via node provider (defaults to a 'soft kill'"
674 " which terminates Ray but does not actually delete the instances).")
675 @click.option(
676 "--cluster-name",
677 "-n",
678 required=False,
679 type=str,
680 help="Override the configured cluster name.")
681 def kill_random_node(cluster_config_file, yes, hard, cluster_name):
682 """Kills a random Ray node. For testing purposes only."""
683 click.echo("Killed node with IP " +
684 kill_node(cluster_config_file, yes, hard, cluster_name))
685
686
687 @cli.command()
688 @click.argument("cluster_config_file", required=True, type=str)
689 @click.option(
690 "--lines",
691 required=False,
692 default=100,
693 type=int,
694 help="Number of lines to tail.")
695 @click.option(
696 "--cluster-name",
697 "-n",
698 required=False,
699 type=str,
700 help="Override the configured cluster name.")
701 def monitor(cluster_config_file, lines, cluster_name):
702 """Tails the autoscaler logs of a Ray cluster."""
703 monitor_cluster(cluster_config_file, lines, cluster_name)
704
705
706 @cli.command()
707 @click.argument("cluster_config_file", required=True, type=str)
708 @click.option(
709 "--start",
710 is_flag=True,
711 default=False,
712 help="Start the cluster if needed.")
713 @click.option(
714 "--screen", is_flag=True, default=False, help="Run the command in screen.")
715 @click.option(
716 "--tmux", is_flag=True, default=False, help="Run the command in tmux.")
717 @click.option(
718 "--cluster-name",
719 "-n",
720 required=False,
721 type=str,
722 help="Override the configured cluster name.")
723 @click.option(
724 "--new", "-N", is_flag=True, help="Force creation of a new screen.")
725 @click.option(
726 "--port-forward",
727 "-p",
728 required=False,
729 multiple=True,
730 type=int,
731 help="Port to forward. Use this multiple times to forward multiple ports.")
732 def attach(cluster_config_file, start, screen, tmux, cluster_name, new,
733 port_forward):
734 """Create or attach to a SSH session to a Ray cluster."""
735 port_forward = [(port, port) for port in list(port_forward)]
736 attach_cluster(cluster_config_file, start, screen, tmux, cluster_name, new,
737 port_forward)
738
739
740 @cli.command()
741 @click.argument("cluster_config_file", required=True, type=str)
742 @click.argument("source", required=False, type=str)
743 @click.argument("target", required=False, type=str)
744 @click.option(
745 "--cluster-name",
746 "-n",
747 required=False,
748 type=str,
749 help="Override the configured cluster name.")
750 def rsync_down(cluster_config_file, source, target, cluster_name):
751 """Download specific files from a Ray cluster."""
752 rsync(cluster_config_file, source, target, cluster_name, down=True)
753
754
755 @cli.command()
756 @click.argument("cluster_config_file", required=True, type=str)
757 @click.argument("source", required=False, type=str)
758 @click.argument("target", required=False, type=str)
759 @click.option(
760 "--cluster-name",
761 "-n",
762 required=False,
763 type=str,
764 help="Override the configured cluster name.")
765 @click.option(
766 "--all-nodes",
767 "-A",
768 is_flag=True,
769 required=False,
770 help="Upload to all nodes (workers and head).")
771 def rsync_up(cluster_config_file, source, target, cluster_name, all_nodes):
772 """Upload specific files to a Ray cluster."""
773 rsync(
774 cluster_config_file,
775 source,
776 target,
777 cluster_name,
778 down=False,
779 all_nodes=all_nodes)
780
781
782 @cli.command(context_settings={"ignore_unknown_options": True})
783 @click.argument("cluster_config_file", required=True, type=str)
784 @click.option(
785 "--docker",
786 is_flag=True,
787 default=False,
788 help="Runs command in the docker container specified in cluster_config.")
789 @click.option(
790 "--stop",
791 is_flag=True,
792 default=False,
793 help="Stop the cluster after the command finishes running.")
794 @click.option(
795 "--start",
796 is_flag=True,
797 default=False,
798 help="Start the cluster if needed.")
799 @click.option(
800 "--screen",
801 is_flag=True,
802 default=False,
803 help="Run the command in a screen.")
804 @click.option(
805 "--tmux", is_flag=True, default=False, help="Run the command in tmux.")
806 @click.option(
807 "--cluster-name",
808 "-n",
809 required=False,
810 type=str,
811 help="Override the configured cluster name.")
812 @click.option(
813 "--port-forward",
814 "-p",
815 required=False,
816 multiple=True,
817 type=int,
818 help="Port to forward. Use this multiple times to forward multiple ports.")
819 @click.argument("script", required=True, type=str)
820 @click.option(
821 "--args",
822 required=False,
823 type=str,
824 help="(deprecated) Use '-- --arg1 --arg2' for script args.")
825 @click.argument("script_args", nargs=-1)
826 def submit(cluster_config_file, docker, screen, tmux, stop, start,
827 cluster_name, port_forward, script, args, script_args):
828 """Uploads and runs a script on the specified cluster.
829
830 The script is automatically synced to the following location:
831
832 os.path.join("~", os.path.basename(script))
833
834 Example:
835 >>> ray submit [CLUSTER.YAML] experiment.py -- --smoke-test
836 """
837 assert not (screen and tmux), "Can specify only one of `screen` or `tmux`."
838 assert not (script_args and args), "Use -- --arg1 --arg2 for script args."
839
840 if args:
841 logger.warning(
842 "ray submit [yaml] [script.py] --args=... is deprecated and "
843 "will be removed in a future version of Ray. Use "
844 "`ray submit [yaml] script.py -- --arg1 --arg2` instead.")
845
846 if start:
847 create_or_update_cluster(cluster_config_file, None, None, False, False,
848 True, cluster_name)
849
850 target = os.path.join("~", os.path.basename(script))
851 rsync(cluster_config_file, script, target, cluster_name, down=False)
852
853 command_parts = ["python", target]
854 if script_args:
855 command_parts += list(script_args)
856 elif args is not None:
857 command_parts += [args]
858
859 port_forward = [(port, port) for port in list(port_forward)]
860 cmd = " ".join(command_parts)
861 exec_cluster(
862 cluster_config_file,
863 cmd,
864 docker,
865 screen,
866 tmux,
867 stop,
868 start=False,
869 override_cluster_name=cluster_name,
870 port_forward=port_forward)
871
872
873 @cli.command(hidden=True)
874 @click.argument("cluster_config_file", required=True, type=str)
875 @click.argument("cmd", required=True, type=str)
876 @click.option(
877 "--docker",
878 is_flag=True,
879 default=False,
880 help="Runs command in the docker container specified in cluster_config.")
881 @click.option(
882 "--stop",
883 is_flag=True,
884 default=False,
885 help="Stop the cluster after the command finishes running.")
886 @click.option(
887 "--start",
888 is_flag=True,
889 default=False,
890 help="Start the cluster if needed.")
891 @click.option(
892 "--screen",
893 is_flag=True,
894 default=False,
895 help="Run the command in a screen.")
896 @click.option(
897 "--tmux", is_flag=True, default=False, help="Run the command in tmux.")
898 @click.option(
899 "--cluster-name",
900 "-n",
901 required=False,
902 type=str,
903 help="Override the configured cluster name.")
904 @click.option(
905 "--port-forward",
906 "-p",
907 required=False,
908 multiple=True,
909 type=int,
910 help="Port to forward. Use this multiple times to forward multiple ports.")
911 def exec_cmd(cluster_config_file, cmd, docker, screen, tmux, stop, start,
912 cluster_name, port_forward):
913 """Execute a command via SSH on a Ray cluster."""
914 port_forward = [(port, port) for port in list(port_forward)]
915 exec_cluster(cluster_config_file, cmd, docker, screen, tmux, stop, start,
916 cluster_name, port_forward)
917
918
919 @cli.command()
920 @click.argument("cluster_config_file", required=True, type=str)
921 @click.option(
922 "--cluster-name",
923 "-n",
924 required=False,
925 type=str,
926 help="Override the configured cluster name.")
927 def get_head_ip(cluster_config_file, cluster_name):
928 """Return the head node IP of a Ray cluster."""
929 click.echo(get_head_node_ip(cluster_config_file, cluster_name))
930
931
932 @cli.command()
933 @click.argument("cluster_config_file", required=True, type=str)
934 @click.option(
935 "--cluster-name",
936 "-n",
937 required=False,
938 type=str,
939 help="Override the configured cluster name.")
940 def get_worker_ips(cluster_config_file, cluster_name):
941 """Return the list of worker IPs of a Ray cluster."""
942 worker_ips = get_worker_node_ips(cluster_config_file, cluster_name)
943 click.echo("\n".join(worker_ips))
944
945
946 @cli.command()
947 def stack():
948 """Take a stack dump of all Python workers on the local machine."""
949 COMMAND = """
950 pyspy=`which py-spy`
951 if [ ! -e "$pyspy" ]; then
952 echo "ERROR: Please 'pip install py-spy' (or ray[debug]) first"
953 exit 1
954 fi
955 # Set IFS to iterate over lines instead of over words.
956 export IFS="
957 "
958 # Call sudo to prompt for password before anything has been printed.
959 sudo true
960 workers=$(
961 ps aux | grep -E ' ray_|default_worker.py' | grep -v grep
962 )
963 for worker in $workers; do
964 echo "Stack dump for $worker";
965 pid=`echo $worker | awk '{print $2}'`;
966 sudo $pyspy dump --pid $pid;
967 echo;
968 done
969 """
970 subprocess.call(COMMAND, shell=True)
971
972
973 @cli.command()
974 def microbenchmark():
975 """Run a local Ray microbenchmark on the current machine."""
976 from ray.ray_perf import main
977 main()
978
979
980 @cli.command()
981 @click.option(
982 "--address",
983 required=False,
984 type=str,
985 help="Override the redis address to connect to.")
986 def timeline(address):
987 """Take a Chrome tracing timeline for a Ray cluster."""
988 if not address:
989 address = services.find_redis_address_or_die()
990 logger.info("Connecting to Ray instance at {}.".format(address))
991 ray.init(address=address)
992 time = datetime.today().strftime("%Y-%m-%d_%H-%M-%S")
993 filename = os.path.join(ray.utils.get_user_temp_dir(),
994 "ray-timeline-{}.json".format(time))
995 ray.timeline(filename=filename)
996 size = os.path.getsize(filename)
997 logger.info("Trace file written to {} ({} bytes).".format(filename, size))
998 logger.info(
999 "You can open this with chrome://tracing in the Chrome browser.")
1000
1001
1002 @cli.command()
1003 @click.option(
1004 "--address",
1005 required=False,
1006 type=str,
1007 help="Override the address to connect to.")
1008 def stat(address):
1009 """Get the current metrics protobuf from a Ray cluster (developer tool)."""
1010 if not address:
1011 address = services.find_redis_address_or_die()
1012 logger.info("Connecting to Ray instance at {}.".format(address))
1013 ray.init(address=address)
1014
1015 import grpc
1016 from ray.core.generated import node_manager_pb2
1017 from ray.core.generated import node_manager_pb2_grpc
1018
1019 for raylet in ray.nodes():
1020 raylet_address = "{}:{}".format(raylet["NodeManagerAddress"],
1021 ray.nodes()[0]["NodeManagerPort"])
1022 logger.info("Querying raylet {}".format(raylet_address))
1023
1024 channel = grpc.insecure_channel(raylet_address)
1025 stub = node_manager_pb2_grpc.NodeManagerServiceStub(channel)
1026 reply = stub.GetNodeStats(
1027 node_manager_pb2.GetNodeStatsRequest(include_memory_info=False),
1028 timeout=2.0)
1029 print(reply)
1030
1031
1032 @cli.command()
1033 @click.option(
1034 "--address",
1035 required=False,
1036 type=str,
1037 help="Override the address to connect to.")
1038 def memory(address):
1039 """Print object references held in a Ray cluster."""
1040 if not address:
1041 address = services.find_redis_address_or_die()
1042 logger.info("Connecting to Ray instance at {}.".format(address))
1043 ray.init(address=address)
1044 print(ray.internal.internal_api.memory_summary())
1045
1046
1047 @cli.command()
1048 @click.option(
1049 "--address",
1050 required=False,
1051 type=str,
1052 help="Override the address to connect to.")
1053 def globalgc(address):
1054 """Trigger Python garbage collection on all cluster workers."""
1055 if not address:
1056 address = services.find_redis_address_or_die()
1057 logger.info("Connecting to Ray instance at {}.".format(address))
1058 ray.init(address=address)
1059 ray.internal.internal_api.global_gc()
1060 print("Triggered gc.collect() on all workers.")
1061
1062
1063 def add_command_alias(command, name, hidden):
1064 new_command = copy.deepcopy(command)
1065 new_command.hidden = hidden
1066 cli.add_command(new_command, name=name)
1067
1068
1069 cli.add_command(dashboard)
1070 cli.add_command(start)
1071 cli.add_command(stop)
1072 add_command_alias(create_or_update, name="up", hidden=False)
1073 cli.add_command(attach)
1074 add_command_alias(exec_cmd, name="exec", hidden=False)
1075 add_command_alias(rsync_down, name="rsync_down", hidden=True)
1076 add_command_alias(rsync_up, name="rsync_up", hidden=True)
1077 cli.add_command(submit)
1078 cli.add_command(teardown)
1079 add_command_alias(teardown, name="down", hidden=False)
1080 cli.add_command(kill_random_node)
1081 add_command_alias(get_head_ip, name="get_head_ip", hidden=True)
1082 cli.add_command(get_worker_ips)
1083 cli.add_command(microbenchmark)
1084 cli.add_command(stack)
1085 cli.add_command(stat)
1086 cli.add_command(memory)
1087 cli.add_command(globalgc)
1088 cli.add_command(timeline)
1089 cli.add_command(project_cli)
1090 cli.add_command(session_cli)
1091
1092 try:
1093 from ray.serve.scripts import serve_cli
1094 cli.add_command(serve_cli)
1095 except Exception as e:
1096 logger.debug(
1097 "Integrating ray serve command line tool failed with {}".format(e))
1098
1099
1100 def main():
1101 return cli()
1102
1103
1104 if __name__ == "__main__":
1105 main()
1106
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/python/ray/scripts/scripts.py b/python/ray/scripts/scripts.py
--- a/python/ray/scripts/scripts.py
+++ b/python/ray/scripts/scripts.py
@@ -60,6 +60,7 @@
default=ray_constants.LOGGER_FORMAT,
type=str,
help=ray_constants.LOGGER_FORMAT_HELP)
+@click.version_option()
def cli(logging_level, logging_format):
level = logging.getLevelName(logging_level.upper())
ray.utils.setup_logger(level, logging_format)
| {"golden_diff": "diff --git a/python/ray/scripts/scripts.py b/python/ray/scripts/scripts.py\n--- a/python/ray/scripts/scripts.py\n+++ b/python/ray/scripts/scripts.py\n@@ -60,6 +60,7 @@\n default=ray_constants.LOGGER_FORMAT,\n type=str,\n help=ray_constants.LOGGER_FORMAT_HELP)\[email protected]_option()\n def cli(logging_level, logging_format):\n level = logging.getLevelName(logging_level.upper())\n ray.utils.setup_logger(level, logging_format)\n", "issue": "[CLI] version option\n### Describe your feature request\r\n\r\n`ray --version` should output the version. Judging from the output of `ray --help`, it looks like Click is used. Version flags should be easy in click; see https://click.palletsprojects.com/en/7.x/api/#click.version_option. \n", "before_files": [{"content": "import click\nimport copy\nfrom datetime import datetime\nimport json\nimport logging\nimport os\nimport subprocess\nimport sys\nimport time\nimport urllib\nimport urllib.parse\n\nimport ray\nimport psutil\nimport ray.services as services\nfrom ray.autoscaler.commands import (\n attach_cluster, exec_cluster, create_or_update_cluster, monitor_cluster,\n rsync, teardown_cluster, get_head_node_ip, kill_node, get_worker_node_ips)\nimport ray.ray_constants as ray_constants\nimport ray.utils\nfrom ray.projects.scripts import project_cli, session_cli\n\nlogger = logging.getLogger(__name__)\n\n\ndef check_no_existing_redis_clients(node_ip_address, redis_client):\n # The client table prefix must be kept in sync with the file\n # \"src/ray/gcs/redis_module/ray_redis_module.cc\" where it is defined.\n REDIS_CLIENT_TABLE_PREFIX = \"CL:\"\n client_keys = redis_client.keys(\"{}*\".format(REDIS_CLIENT_TABLE_PREFIX))\n # Filter to clients on the same node and do some basic checking.\n for key in client_keys:\n info = redis_client.hgetall(key)\n assert b\"ray_client_id\" in info\n assert b\"node_ip_address\" in info\n assert b\"client_type\" in info\n assert b\"deleted\" in info\n # Clients that ran on the same node but that are marked dead can be\n # ignored.\n deleted = info[b\"deleted\"]\n deleted = bool(int(deleted))\n if deleted:\n continue\n\n if ray.utils.decode(info[b\"node_ip_address\"]) == node_ip_address:\n raise Exception(\"This Redis instance is already connected to \"\n \"clients with this IP address.\")\n\n\[email protected]()\[email protected](\n \"--logging-level\",\n required=False,\n default=ray_constants.LOGGER_LEVEL,\n type=str,\n help=ray_constants.LOGGER_LEVEL_HELP)\[email protected](\n \"--logging-format\",\n required=False,\n default=ray_constants.LOGGER_FORMAT,\n type=str,\n help=ray_constants.LOGGER_FORMAT_HELP)\ndef cli(logging_level, logging_format):\n level = logging.getLevelName(logging_level.upper())\n ray.utils.setup_logger(level, logging_format)\n\n\[email protected]()\[email protected](\"cluster_config_file\", required=True, type=str)\[email protected](\n \"--cluster-name\",\n \"-n\",\n required=False,\n type=str,\n help=\"Override the configured cluster name.\")\[email protected](\n \"--port\",\n \"-p\",\n required=False,\n type=int,\n default=8265,\n help=\"The local port to forward to the dashboard\")\ndef dashboard(cluster_config_file, cluster_name, port):\n \"\"\"Port-forward a Ray cluster's dashboard to the local machine.\"\"\"\n # Sleeping in a loop is preferable to `sleep infinity` because the latter\n # only works on linux.\n remote_port = 8265\n if port:\n dashboard_port = port\n else:\n dashboard_port = remote_port\n\n port_taken = True\n\n # Find the first open port sequentially from 
`remote_port`.\n while port_taken:\n try:\n port_forward = [\n (dashboard_port, remote_port),\n ]\n click.echo(\n \"Attempting to establish dashboard at localhost:{}\".format(\n port_forward[0][0]))\n # We want to probe with a no-op that returns quickly to avoid\n # exceptions caused by network errors.\n exec_cluster(\n cluster_config_file,\n override_cluster_name=cluster_name,\n port_forward=port_forward)\n port_taken = False\n except Exception:\n click.echo(\"Failed to forward dashboard, trying a new port...\")\n port_taken = True\n dashboard_port += 1\n pass\n\n\[email protected]()\[email protected](\n \"--node-ip-address\",\n required=False,\n type=str,\n help=\"the IP address of this node\")\[email protected](\n \"--redis-address\", required=False, type=str, help=\"same as --address\")\[email protected](\n \"--address\", required=False, type=str, help=\"the address to use for Ray\")\[email protected](\n \"--redis-port\",\n required=False,\n type=str,\n help=\"(DEPRECATED) the port to use for starting redis. \"\n \"Please use --port instead now.\")\[email protected](\n \"--port\",\n required=False,\n type=str,\n help=\"the port of the head ray process. If not provided, tries to use \"\n \"{0}, falling back to a random port if {0} is \"\n \"not available\".format(ray_constants.DEFAULT_PORT))\[email protected](\n \"--num-redis-shards\",\n required=False,\n type=int,\n help=(\"the number of additional Redis shards to use in \"\n \"addition to the primary Redis shard\"))\[email protected](\n \"--redis-max-clients\",\n required=False,\n type=int,\n help=(\"If provided, attempt to configure Redis with this \"\n \"maximum number of clients.\"))\[email protected](\n \"--redis-password\",\n required=False,\n type=str,\n default=ray_constants.REDIS_DEFAULT_PASSWORD,\n help=\"If provided, secure Redis ports with this password\")\[email protected](\n \"--redis-shard-ports\",\n required=False,\n type=str,\n help=\"the port to use for the Redis shards other than the \"\n \"primary Redis shard\")\[email protected](\n \"--object-manager-port\",\n required=False,\n type=int,\n help=\"the port to use for starting the object manager\")\[email protected](\n \"--node-manager-port\",\n required=False,\n type=int,\n help=\"the port to use for starting the node manager\")\[email protected](\n \"--min-worker-port\",\n required=False,\n type=int,\n default=10000,\n help=\"the lowest port number that workers will bind on. If not set, \"\n \"random ports will be chosen.\")\[email protected](\n \"--max-worker-port\",\n required=False,\n type=int,\n default=10999,\n help=\"the highest port number that workers will bind on. If set, \"\n \"'--min-worker-port' must also be set.\")\[email protected](\n \"--memory\",\n required=False,\n type=int,\n help=\"The amount of memory (in bytes) to make available to workers. \"\n \"By default, this is set to the available memory on the node.\")\[email protected](\n \"--object-store-memory\",\n required=False,\n type=int,\n help=\"The amount of memory (in bytes) to start the object store with. \"\n \"By default, this is capped at 20GB but can be set higher.\")\[email protected](\n \"--redis-max-memory\",\n required=False,\n type=int,\n help=\"The max amount of memory (in bytes) to allow redis to use. Once the \"\n \"limit is exceeded, redis will start LRU eviction of entries. This only \"\n \"applies to the sharded redis tables (task, object, and profile tables). 
\"\n \"By default this is capped at 10GB but can be set higher.\")\[email protected](\n \"--num-cpus\",\n required=False,\n type=int,\n help=\"the number of CPUs on this node\")\[email protected](\n \"--num-gpus\",\n required=False,\n type=int,\n help=\"the number of GPUs on this node\")\[email protected](\n \"--resources\",\n required=False,\n default=\"{}\",\n type=str,\n help=\"a JSON serialized dictionary mapping resource name to \"\n \"resource quantity\")\[email protected](\n \"--head\",\n is_flag=True,\n default=False,\n help=\"provide this argument for the head node\")\[email protected](\n \"--include-webui\",\n default=None,\n type=bool,\n help=\"provide this argument if the UI should be started\")\[email protected](\n \"--webui-host\",\n required=False,\n default=\"localhost\",\n help=\"The host to bind the web UI server to. Can either be localhost \"\n \"(127.0.0.1) or 0.0.0.0 (available from all interfaces). By default, this \"\n \"is set to localhost to prevent access from external machines.\")\[email protected](\n \"--block\",\n is_flag=True,\n default=False,\n help=\"provide this argument to block forever in this command\")\[email protected](\n \"--plasma-directory\",\n required=False,\n type=str,\n help=\"object store directory for memory mapped files\")\[email protected](\n \"--huge-pages\",\n is_flag=True,\n default=False,\n help=\"enable support for huge pages in the object store\")\[email protected](\n \"--autoscaling-config\",\n required=False,\n type=str,\n help=\"the file that contains the autoscaling config\")\[email protected](\n \"--no-redirect-worker-output\",\n is_flag=True,\n default=False,\n help=\"do not redirect worker stdout and stderr to files\")\[email protected](\n \"--no-redirect-output\",\n is_flag=True,\n default=False,\n help=\"do not redirect non-worker stdout and stderr to files\")\[email protected](\n \"--plasma-store-socket-name\",\n default=None,\n help=\"manually specify the socket name of the plasma store\")\[email protected](\n \"--raylet-socket-name\",\n default=None,\n help=\"manually specify the socket path of the raylet process\")\[email protected](\n \"--temp-dir\",\n default=None,\n help=\"manually specify the root temporary dir of the Ray process\")\[email protected](\n \"--include-java\",\n is_flag=True,\n default=None,\n help=\"Enable Java worker support.\")\[email protected](\n \"--java-worker-options\",\n required=False,\n default=None,\n type=str,\n help=\"Overwrite the options to start Java workers.\")\[email protected](\n \"--internal-config\",\n default=None,\n type=json.loads,\n help=\"Do NOT use this. 
This is for debugging/development purposes ONLY.\")\[email protected](\n \"--load-code-from-local\",\n is_flag=True,\n default=False,\n help=\"Specify whether load code from local file or GCS serialization.\")\ndef start(node_ip_address, redis_address, address, redis_port, port,\n num_redis_shards, redis_max_clients, redis_password,\n redis_shard_ports, object_manager_port, node_manager_port,\n min_worker_port, max_worker_port, memory, object_store_memory,\n redis_max_memory, num_cpus, num_gpus, resources, head, include_webui,\n webui_host, block, plasma_directory, huge_pages, autoscaling_config,\n no_redirect_worker_output, no_redirect_output,\n plasma_store_socket_name, raylet_socket_name, temp_dir, include_java,\n java_worker_options, load_code_from_local, internal_config):\n \"\"\"Start Ray processes manually on the local machine.\"\"\"\n if redis_address is not None:\n raise DeprecationWarning(\"The --redis-address argument is \"\n \"deprecated. Please use --address instead.\")\n if redis_port is not None:\n logger.warn(\"The --redis-port argument will be deprecated soon. \"\n \"Please use --port instead.\")\n if port is not None and port != redis_port:\n raise ValueError(\"Cannot specify both --port and --redis-port \"\n \"as port is a rename of deprecated redis-port\")\n\n # Convert hostnames to numerical IP address.\n if node_ip_address is not None:\n node_ip_address = services.address_to_ip(node_ip_address)\n\n if redis_address is not None or address is not None:\n (redis_address, redis_address_ip,\n redis_address_port) = services.validate_redis_address(\n address, redis_address)\n\n try:\n resources = json.loads(resources)\n except Exception:\n raise Exception(\"Unable to parse the --resources argument using \"\n \"json.loads. Try using a format like\\n\\n\"\n \" --resources='{\\\"CustomResource1\\\": 3, \"\n \"\\\"CustomReseource2\\\": 2}'\")\n\n redirect_worker_output = None if not no_redirect_worker_output else True\n redirect_output = None if not no_redirect_output else True\n ray_params = ray.parameter.RayParams(\n node_ip_address=node_ip_address,\n min_worker_port=min_worker_port,\n max_worker_port=max_worker_port,\n object_manager_port=object_manager_port,\n node_manager_port=node_manager_port,\n memory=memory,\n object_store_memory=object_store_memory,\n redis_password=redis_password,\n redirect_worker_output=redirect_worker_output,\n redirect_output=redirect_output,\n num_cpus=num_cpus,\n num_gpus=num_gpus,\n resources=resources,\n plasma_directory=plasma_directory,\n huge_pages=huge_pages,\n plasma_store_socket_name=plasma_store_socket_name,\n raylet_socket_name=raylet_socket_name,\n temp_dir=temp_dir,\n include_java=include_java,\n include_webui=include_webui,\n webui_host=webui_host,\n java_worker_options=java_worker_options,\n load_code_from_local=load_code_from_local,\n _internal_config=internal_config)\n if head:\n # Start Ray on the head node.\n if redis_shard_ports is not None:\n redis_shard_ports = redis_shard_ports.split(\",\")\n # Infer the number of Redis shards from the ports if the number is\n # not provided.\n if num_redis_shards is None:\n num_redis_shards = len(redis_shard_ports)\n # Check that the arguments match.\n if len(redis_shard_ports) != num_redis_shards:\n raise Exception(\"If --redis-shard-ports is provided, it must \"\n \"have the form '6380,6381,6382', and the \"\n \"number of ports provided must equal \"\n \"--num-redis-shards (which is 1 if not \"\n \"provided)\")\n\n if redis_address is not None:\n raise Exception(\"If --head is passed in, 
a Redis server will be \"\n \"started, so a Redis address should not be \"\n \"provided.\")\n\n # Get the node IP address if one is not provided.\n ray_params.update_if_absent(\n node_ip_address=services.get_node_ip_address())\n logger.info(\"Using IP address {} for this node.\".format(\n ray_params.node_ip_address))\n ray_params.update_if_absent(\n redis_port=port or redis_port,\n redis_shard_ports=redis_shard_ports,\n redis_max_memory=redis_max_memory,\n num_redis_shards=num_redis_shards,\n redis_max_clients=redis_max_clients,\n autoscaling_config=autoscaling_config,\n include_java=False,\n )\n\n node = ray.node.Node(\n ray_params, head=True, shutdown_at_exit=block, spawn_reaper=block)\n redis_address = node.redis_address\n\n logger.info(\n \"\\nStarted Ray on this node. You can add additional nodes to \"\n \"the cluster by calling\\n\\n\"\n \" ray start --address='{}'{}\\n\\n\"\n \"from the node you wish to add. You can connect a driver to the \"\n \"cluster from Python by running\\n\\n\"\n \" import ray\\n\"\n \" ray.init(address='auto'{})\\n\\n\"\n \"If you have trouble connecting from a different machine, check \"\n \"that your firewall is configured properly. If you wish to \"\n \"terminate the processes that have been started, run\\n\\n\"\n \" ray stop\".format(\n redis_address, \" --redis-password='\" + redis_password + \"'\"\n if redis_password else \"\",\n \", redis_password='\" + redis_password + \"'\"\n if redis_password else \"\"))\n else:\n # Start Ray on a non-head node.\n if not (redis_port is None and port is None):\n raise Exception(\n \"If --head is not passed in, --port and --redis-port are not \"\n \"allowed.\")\n if redis_shard_ports is not None:\n raise Exception(\"If --head is not passed in, --redis-shard-ports \"\n \"is not allowed.\")\n if redis_address is None:\n raise Exception(\"If --head is not passed in, --address must \"\n \"be provided.\")\n if num_redis_shards is not None:\n raise Exception(\"If --head is not passed in, --num-redis-shards \"\n \"must not be provided.\")\n if redis_max_clients is not None:\n raise Exception(\"If --head is not passed in, --redis-max-clients \"\n \"must not be provided.\")\n if include_webui:\n raise Exception(\"If --head is not passed in, the --include-webui \"\n \"flag is not relevant.\")\n if include_java is not None:\n raise ValueError(\"--include-java should only be set for the head \"\n \"node.\")\n\n # Wait for the Redis server to be started. And throw an exception if we\n # can't connect to it.\n services.wait_for_redis_to_start(\n redis_address_ip, redis_address_port, password=redis_password)\n\n # Create a Redis client.\n redis_client = services.create_redis_client(\n redis_address, password=redis_password)\n\n # Check that the version information on this node matches the version\n # information that the cluster was started with.\n services.check_version_info(redis_client)\n\n # Get the node IP address if one is not provided.\n ray_params.update_if_absent(\n node_ip_address=services.get_node_ip_address(redis_address))\n logger.info(\"Using IP address {} for this node.\".format(\n ray_params.node_ip_address))\n # Check that there aren't already Redis clients with the same IP\n # address connected with this Redis instance. 
This raises an exception\n # if the Redis server already has clients on this node.\n check_no_existing_redis_clients(ray_params.node_ip_address,\n redis_client)\n ray_params.update(redis_address=redis_address)\n node = ray.node.Node(\n ray_params, head=False, shutdown_at_exit=block, spawn_reaper=block)\n logger.info(\"\\nStarted Ray on this node. If you wish to terminate the \"\n \"processes that have been started, run\\n\\n\"\n \" ray stop\")\n\n if block:\n while True:\n time.sleep(1)\n deceased = node.dead_processes()\n if len(deceased) > 0:\n logger.error(\"Ray processes died unexpectedly:\")\n for process_type, process in deceased:\n logger.error(\"\\t{} died with exit code {}\".format(\n process_type, process.returncode))\n # shutdown_at_exit will handle cleanup.\n logger.error(\"Killing remaining processes and exiting...\")\n sys.exit(1)\n\n\[email protected]()\[email protected](\n \"-f\",\n \"--force\",\n is_flag=True,\n help=\"If set, ray will send SIGKILL instead of SIGTERM.\")\[email protected](\n \"-v\",\n \"--verbose\",\n is_flag=True,\n help=\"If set, ray prints out more information about processes to kill.\")\ndef stop(force, verbose):\n \"\"\"Stop Ray processes manually on the local machine.\"\"\"\n # Note that raylet needs to exit before object store, otherwise\n # it cannot exit gracefully.\n is_linux = sys.platform.startswith(\"linux\")\n processes_to_kill = [\n # The first element is the substring to filter.\n # The second element, if True, is to filter ps results by command name\n # (only the first 15 charactors of the executable name on Linux);\n # if False, is to filter ps results by command with all its arguments.\n # See STANDARD FORMAT SPECIFIERS section of\n # http://man7.org/linux/man-pages/man1/ps.1.html\n # about comm and args. This can help avoid killing non-ray processes.\n # Format:\n # Keyword to filter, filter by command (True)/filter by args (False)\n [\"raylet\", True],\n [\"plasma_store\", True],\n [\"raylet_monitor\", True],\n [\"gcs_server\", True],\n [\"monitor.py\", False],\n [\"redis-server\", False],\n [\"default_worker.py\", False], # Python worker.\n [\"ray::\", True], # Python worker. TODO(mehrdadn): Fix for Windows\n [\"io.ray.runtime.runner.worker.DefaultWorker\", False], # Java worker.\n [\"log_monitor.py\", False],\n [\"reporter.py\", False],\n [\"dashboard.py\", False],\n [\"ray_process_reaper.py\", False],\n ]\n\n process_infos = []\n for proc in psutil.process_iter([\"name\", \"cmdline\"]):\n try:\n process_infos.append((proc, proc.name(), proc.cmdline()))\n except psutil.Error:\n pass\n for keyword, filter_by_cmd in processes_to_kill:\n if filter_by_cmd and is_linux and len(keyword) > 15:\n msg = (\"The filter string should not be more than {} \"\n \"characters. Actual length: {}. Filter: {}\").format(\n 15, len(keyword), keyword)\n raise ValueError(msg)\n found = []\n for candidate in process_infos:\n proc, proc_cmd, proc_args = candidate\n corpus = (proc_cmd\n if filter_by_cmd else subprocess.list2cmdline(proc_args))\n if keyword in corpus:\n found.append(candidate)\n for proc, proc_cmd, proc_args in found:\n if verbose:\n operation = \"Terminating\" if force else \"Killing\"\n logger.info(\"%s process %s: %s\", operation, proc.pid,\n subprocess.list2cmdline(proc_args))\n try:\n if force:\n proc.kill()\n else:\n # TODO(mehrdadn): On Windows, this is forceful termination.\n # We don't want CTRL_BREAK_EVENT, because that would\n # terminate the entire process group. 
What to do?\n proc.terminate()\n except psutil.NoSuchProcess:\n pass\n except (psutil.Error, OSError) as ex:\n logger.error(\"Error: %s\", ex)\n\n\[email protected](hidden=True)\[email protected](\"cluster_config_file\", required=True, type=str)\[email protected](\n \"--no-restart\",\n is_flag=True,\n default=False,\n help=(\"Whether to skip restarting Ray services during the update. \"\n \"This avoids interrupting running jobs.\"))\[email protected](\n \"--restart-only\",\n is_flag=True,\n default=False,\n help=(\"Whether to skip running setup commands and only restart Ray. \"\n \"This cannot be used with 'no-restart'.\"))\[email protected](\n \"--min-workers\",\n required=False,\n type=int,\n help=\"Override the configured min worker node count for the cluster.\")\[email protected](\n \"--max-workers\",\n required=False,\n type=int,\n help=\"Override the configured max worker node count for the cluster.\")\[email protected](\n \"--cluster-name\",\n \"-n\",\n required=False,\n type=str,\n help=\"Override the configured cluster name.\")\[email protected](\n \"--yes\",\n \"-y\",\n is_flag=True,\n default=False,\n help=\"Don't ask for confirmation.\")\ndef create_or_update(cluster_config_file, min_workers, max_workers, no_restart,\n restart_only, yes, cluster_name):\n \"\"\"Create or update a Ray cluster.\"\"\"\n if restart_only or no_restart:\n assert restart_only != no_restart, \"Cannot set both 'restart_only' \" \\\n \"and 'no_restart' at the same time!\"\n if urllib.parse.urlparse(cluster_config_file).scheme in (\"http\", \"https\"):\n try:\n response = urllib.request.urlopen(cluster_config_file, timeout=5)\n content = response.read()\n file_name = cluster_config_file.split(\"/\")[-1]\n with open(file_name, \"wb\") as f:\n f.write(content)\n cluster_config_file = file_name\n except urllib.error.HTTPError as e:\n logger.info(\"Error downloading file: \", e)\n create_or_update_cluster(cluster_config_file, min_workers, max_workers,\n no_restart, restart_only, yes, cluster_name)\n\n\[email protected](hidden=True)\[email protected](\"cluster_config_file\", required=True, type=str)\[email protected](\n \"--workers-only\",\n is_flag=True,\n default=False,\n help=\"Only destroy the workers.\")\[email protected](\n \"--keep-min-workers\",\n is_flag=True,\n default=False,\n help=\"Retain the minimal amount of workers specified in the config.\")\[email protected](\n \"--yes\",\n \"-y\",\n is_flag=True,\n default=False,\n help=\"Don't ask for confirmation.\")\[email protected](\n \"--cluster-name\",\n \"-n\",\n required=False,\n type=str,\n help=\"Override the configured cluster name.\")\ndef teardown(cluster_config_file, yes, workers_only, cluster_name,\n keep_min_workers):\n \"\"\"Tear down a Ray cluster.\"\"\"\n teardown_cluster(cluster_config_file, yes, workers_only, cluster_name,\n keep_min_workers)\n\n\[email protected]()\[email protected](\"cluster_config_file\", required=True, type=str)\[email protected](\n \"--yes\",\n \"-y\",\n is_flag=True,\n default=False,\n help=\"Don't ask for confirmation.\")\[email protected](\n \"--hard\",\n is_flag=True,\n default=False,\n help=\"Terminates the node via node provider (defaults to a 'soft kill'\"\n \" which terminates Ray but does not actually delete the instances).\")\[email protected](\n \"--cluster-name\",\n \"-n\",\n required=False,\n type=str,\n help=\"Override the configured cluster name.\")\ndef kill_random_node(cluster_config_file, yes, hard, cluster_name):\n \"\"\"Kills a random Ray node. 
For testing purposes only.\"\"\"\n click.echo(\"Killed node with IP \" +\n kill_node(cluster_config_file, yes, hard, cluster_name))\n\n\[email protected]()\[email protected](\"cluster_config_file\", required=True, type=str)\[email protected](\n \"--lines\",\n required=False,\n default=100,\n type=int,\n help=\"Number of lines to tail.\")\[email protected](\n \"--cluster-name\",\n \"-n\",\n required=False,\n type=str,\n help=\"Override the configured cluster name.\")\ndef monitor(cluster_config_file, lines, cluster_name):\n \"\"\"Tails the autoscaler logs of a Ray cluster.\"\"\"\n monitor_cluster(cluster_config_file, lines, cluster_name)\n\n\[email protected]()\[email protected](\"cluster_config_file\", required=True, type=str)\[email protected](\n \"--start\",\n is_flag=True,\n default=False,\n help=\"Start the cluster if needed.\")\[email protected](\n \"--screen\", is_flag=True, default=False, help=\"Run the command in screen.\")\[email protected](\n \"--tmux\", is_flag=True, default=False, help=\"Run the command in tmux.\")\[email protected](\n \"--cluster-name\",\n \"-n\",\n required=False,\n type=str,\n help=\"Override the configured cluster name.\")\[email protected](\n \"--new\", \"-N\", is_flag=True, help=\"Force creation of a new screen.\")\[email protected](\n \"--port-forward\",\n \"-p\",\n required=False,\n multiple=True,\n type=int,\n help=\"Port to forward. Use this multiple times to forward multiple ports.\")\ndef attach(cluster_config_file, start, screen, tmux, cluster_name, new,\n port_forward):\n \"\"\"Create or attach to a SSH session to a Ray cluster.\"\"\"\n port_forward = [(port, port) for port in list(port_forward)]\n attach_cluster(cluster_config_file, start, screen, tmux, cluster_name, new,\n port_forward)\n\n\[email protected]()\[email protected](\"cluster_config_file\", required=True, type=str)\[email protected](\"source\", required=False, type=str)\[email protected](\"target\", required=False, type=str)\[email protected](\n \"--cluster-name\",\n \"-n\",\n required=False,\n type=str,\n help=\"Override the configured cluster name.\")\ndef rsync_down(cluster_config_file, source, target, cluster_name):\n \"\"\"Download specific files from a Ray cluster.\"\"\"\n rsync(cluster_config_file, source, target, cluster_name, down=True)\n\n\[email protected]()\[email protected](\"cluster_config_file\", required=True, type=str)\[email protected](\"source\", required=False, type=str)\[email protected](\"target\", required=False, type=str)\[email protected](\n \"--cluster-name\",\n \"-n\",\n required=False,\n type=str,\n help=\"Override the configured cluster name.\")\[email protected](\n \"--all-nodes\",\n \"-A\",\n is_flag=True,\n required=False,\n help=\"Upload to all nodes (workers and head).\")\ndef rsync_up(cluster_config_file, source, target, cluster_name, all_nodes):\n \"\"\"Upload specific files to a Ray cluster.\"\"\"\n rsync(\n cluster_config_file,\n source,\n target,\n cluster_name,\n down=False,\n all_nodes=all_nodes)\n\n\[email protected](context_settings={\"ignore_unknown_options\": True})\[email protected](\"cluster_config_file\", required=True, type=str)\[email protected](\n \"--docker\",\n is_flag=True,\n default=False,\n help=\"Runs command in the docker container specified in cluster_config.\")\[email protected](\n \"--stop\",\n is_flag=True,\n default=False,\n help=\"Stop the cluster after the command finishes running.\")\[email protected](\n \"--start\",\n is_flag=True,\n default=False,\n help=\"Start the cluster if needed.\")\[email protected](\n 
\"--screen\",\n is_flag=True,\n default=False,\n help=\"Run the command in a screen.\")\[email protected](\n \"--tmux\", is_flag=True, default=False, help=\"Run the command in tmux.\")\[email protected](\n \"--cluster-name\",\n \"-n\",\n required=False,\n type=str,\n help=\"Override the configured cluster name.\")\[email protected](\n \"--port-forward\",\n \"-p\",\n required=False,\n multiple=True,\n type=int,\n help=\"Port to forward. Use this multiple times to forward multiple ports.\")\[email protected](\"script\", required=True, type=str)\[email protected](\n \"--args\",\n required=False,\n type=str,\n help=\"(deprecated) Use '-- --arg1 --arg2' for script args.\")\[email protected](\"script_args\", nargs=-1)\ndef submit(cluster_config_file, docker, screen, tmux, stop, start,\n cluster_name, port_forward, script, args, script_args):\n \"\"\"Uploads and runs a script on the specified cluster.\n\n The script is automatically synced to the following location:\n\n os.path.join(\"~\", os.path.basename(script))\n\n Example:\n >>> ray submit [CLUSTER.YAML] experiment.py -- --smoke-test\n \"\"\"\n assert not (screen and tmux), \"Can specify only one of `screen` or `tmux`.\"\n assert not (script_args and args), \"Use -- --arg1 --arg2 for script args.\"\n\n if args:\n logger.warning(\n \"ray submit [yaml] [script.py] --args=... is deprecated and \"\n \"will be removed in a future version of Ray. Use \"\n \"`ray submit [yaml] script.py -- --arg1 --arg2` instead.\")\n\n if start:\n create_or_update_cluster(cluster_config_file, None, None, False, False,\n True, cluster_name)\n\n target = os.path.join(\"~\", os.path.basename(script))\n rsync(cluster_config_file, script, target, cluster_name, down=False)\n\n command_parts = [\"python\", target]\n if script_args:\n command_parts += list(script_args)\n elif args is not None:\n command_parts += [args]\n\n port_forward = [(port, port) for port in list(port_forward)]\n cmd = \" \".join(command_parts)\n exec_cluster(\n cluster_config_file,\n cmd,\n docker,\n screen,\n tmux,\n stop,\n start=False,\n override_cluster_name=cluster_name,\n port_forward=port_forward)\n\n\[email protected](hidden=True)\[email protected](\"cluster_config_file\", required=True, type=str)\[email protected](\"cmd\", required=True, type=str)\[email protected](\n \"--docker\",\n is_flag=True,\n default=False,\n help=\"Runs command in the docker container specified in cluster_config.\")\[email protected](\n \"--stop\",\n is_flag=True,\n default=False,\n help=\"Stop the cluster after the command finishes running.\")\[email protected](\n \"--start\",\n is_flag=True,\n default=False,\n help=\"Start the cluster if needed.\")\[email protected](\n \"--screen\",\n is_flag=True,\n default=False,\n help=\"Run the command in a screen.\")\[email protected](\n \"--tmux\", is_flag=True, default=False, help=\"Run the command in tmux.\")\[email protected](\n \"--cluster-name\",\n \"-n\",\n required=False,\n type=str,\n help=\"Override the configured cluster name.\")\[email protected](\n \"--port-forward\",\n \"-p\",\n required=False,\n multiple=True,\n type=int,\n help=\"Port to forward. 
Use this multiple times to forward multiple ports.\")\ndef exec_cmd(cluster_config_file, cmd, docker, screen, tmux, stop, start,\n cluster_name, port_forward):\n \"\"\"Execute a command via SSH on a Ray cluster.\"\"\"\n port_forward = [(port, port) for port in list(port_forward)]\n exec_cluster(cluster_config_file, cmd, docker, screen, tmux, stop, start,\n cluster_name, port_forward)\n\n\[email protected]()\[email protected](\"cluster_config_file\", required=True, type=str)\[email protected](\n \"--cluster-name\",\n \"-n\",\n required=False,\n type=str,\n help=\"Override the configured cluster name.\")\ndef get_head_ip(cluster_config_file, cluster_name):\n \"\"\"Return the head node IP of a Ray cluster.\"\"\"\n click.echo(get_head_node_ip(cluster_config_file, cluster_name))\n\n\[email protected]()\[email protected](\"cluster_config_file\", required=True, type=str)\[email protected](\n \"--cluster-name\",\n \"-n\",\n required=False,\n type=str,\n help=\"Override the configured cluster name.\")\ndef get_worker_ips(cluster_config_file, cluster_name):\n \"\"\"Return the list of worker IPs of a Ray cluster.\"\"\"\n worker_ips = get_worker_node_ips(cluster_config_file, cluster_name)\n click.echo(\"\\n\".join(worker_ips))\n\n\[email protected]()\ndef stack():\n \"\"\"Take a stack dump of all Python workers on the local machine.\"\"\"\n COMMAND = \"\"\"\npyspy=`which py-spy`\nif [ ! -e \"$pyspy\" ]; then\n echo \"ERROR: Please 'pip install py-spy' (or ray[debug]) first\"\n exit 1\nfi\n# Set IFS to iterate over lines instead of over words.\nexport IFS=\"\n\"\n# Call sudo to prompt for password before anything has been printed.\nsudo true\nworkers=$(\n ps aux | grep -E ' ray_|default_worker.py' | grep -v grep\n)\nfor worker in $workers; do\n echo \"Stack dump for $worker\";\n pid=`echo $worker | awk '{print $2}'`;\n sudo $pyspy dump --pid $pid;\n echo;\ndone\n \"\"\"\n subprocess.call(COMMAND, shell=True)\n\n\[email protected]()\ndef microbenchmark():\n \"\"\"Run a local Ray microbenchmark on the current machine.\"\"\"\n from ray.ray_perf import main\n main()\n\n\[email protected]()\[email protected](\n \"--address\",\n required=False,\n type=str,\n help=\"Override the redis address to connect to.\")\ndef timeline(address):\n \"\"\"Take a Chrome tracing timeline for a Ray cluster.\"\"\"\n if not address:\n address = services.find_redis_address_or_die()\n logger.info(\"Connecting to Ray instance at {}.\".format(address))\n ray.init(address=address)\n time = datetime.today().strftime(\"%Y-%m-%d_%H-%M-%S\")\n filename = os.path.join(ray.utils.get_user_temp_dir(),\n \"ray-timeline-{}.json\".format(time))\n ray.timeline(filename=filename)\n size = os.path.getsize(filename)\n logger.info(\"Trace file written to {} ({} bytes).\".format(filename, size))\n logger.info(\n \"You can open this with chrome://tracing in the Chrome browser.\")\n\n\[email protected]()\[email protected](\n \"--address\",\n required=False,\n type=str,\n help=\"Override the address to connect to.\")\ndef stat(address):\n \"\"\"Get the current metrics protobuf from a Ray cluster (developer tool).\"\"\"\n if not address:\n address = services.find_redis_address_or_die()\n logger.info(\"Connecting to Ray instance at {}.\".format(address))\n ray.init(address=address)\n\n import grpc\n from ray.core.generated import node_manager_pb2\n from ray.core.generated import node_manager_pb2_grpc\n\n for raylet in ray.nodes():\n raylet_address = \"{}:{}\".format(raylet[\"NodeManagerAddress\"],\n ray.nodes()[0][\"NodeManagerPort\"])\n 
logger.info(\"Querying raylet {}\".format(raylet_address))\n\n channel = grpc.insecure_channel(raylet_address)\n stub = node_manager_pb2_grpc.NodeManagerServiceStub(channel)\n reply = stub.GetNodeStats(\n node_manager_pb2.GetNodeStatsRequest(include_memory_info=False),\n timeout=2.0)\n print(reply)\n\n\[email protected]()\[email protected](\n \"--address\",\n required=False,\n type=str,\n help=\"Override the address to connect to.\")\ndef memory(address):\n \"\"\"Print object references held in a Ray cluster.\"\"\"\n if not address:\n address = services.find_redis_address_or_die()\n logger.info(\"Connecting to Ray instance at {}.\".format(address))\n ray.init(address=address)\n print(ray.internal.internal_api.memory_summary())\n\n\[email protected]()\[email protected](\n \"--address\",\n required=False,\n type=str,\n help=\"Override the address to connect to.\")\ndef globalgc(address):\n \"\"\"Trigger Python garbage collection on all cluster workers.\"\"\"\n if not address:\n address = services.find_redis_address_or_die()\n logger.info(\"Connecting to Ray instance at {}.\".format(address))\n ray.init(address=address)\n ray.internal.internal_api.global_gc()\n print(\"Triggered gc.collect() on all workers.\")\n\n\ndef add_command_alias(command, name, hidden):\n new_command = copy.deepcopy(command)\n new_command.hidden = hidden\n cli.add_command(new_command, name=name)\n\n\ncli.add_command(dashboard)\ncli.add_command(start)\ncli.add_command(stop)\nadd_command_alias(create_or_update, name=\"up\", hidden=False)\ncli.add_command(attach)\nadd_command_alias(exec_cmd, name=\"exec\", hidden=False)\nadd_command_alias(rsync_down, name=\"rsync_down\", hidden=True)\nadd_command_alias(rsync_up, name=\"rsync_up\", hidden=True)\ncli.add_command(submit)\ncli.add_command(teardown)\nadd_command_alias(teardown, name=\"down\", hidden=False)\ncli.add_command(kill_random_node)\nadd_command_alias(get_head_ip, name=\"get_head_ip\", hidden=True)\ncli.add_command(get_worker_ips)\ncli.add_command(microbenchmark)\ncli.add_command(stack)\ncli.add_command(stat)\ncli.add_command(memory)\ncli.add_command(globalgc)\ncli.add_command(timeline)\ncli.add_command(project_cli)\ncli.add_command(session_cli)\n\ntry:\n from ray.serve.scripts import serve_cli\n cli.add_command(serve_cli)\nexcept Exception as e:\n logger.debug(\n \"Integrating ray serve command line tool failed with {}\".format(e))\n\n\ndef main():\n return cli()\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "python/ray/scripts/scripts.py"}], "after_files": [{"content": "import click\nimport copy\nfrom datetime import datetime\nimport json\nimport logging\nimport os\nimport subprocess\nimport sys\nimport time\nimport urllib\nimport urllib.parse\n\nimport ray\nimport psutil\nimport ray.services as services\nfrom ray.autoscaler.commands import (\n attach_cluster, exec_cluster, create_or_update_cluster, monitor_cluster,\n rsync, teardown_cluster, get_head_node_ip, kill_node, get_worker_node_ips)\nimport ray.ray_constants as ray_constants\nimport ray.utils\nfrom ray.projects.scripts import project_cli, session_cli\n\nlogger = logging.getLogger(__name__)\n\n\ndef check_no_existing_redis_clients(node_ip_address, redis_client):\n # The client table prefix must be kept in sync with the file\n # \"src/ray/gcs/redis_module/ray_redis_module.cc\" where it is defined.\n REDIS_CLIENT_TABLE_PREFIX = \"CL:\"\n client_keys = redis_client.keys(\"{}*\".format(REDIS_CLIENT_TABLE_PREFIX))\n # Filter to clients on the same node and do some basic checking.\n for key in 
client_keys:\n info = redis_client.hgetall(key)\n assert b\"ray_client_id\" in info\n assert b\"node_ip_address\" in info\n assert b\"client_type\" in info\n assert b\"deleted\" in info\n # Clients that ran on the same node but that are marked dead can be\n # ignored.\n deleted = info[b\"deleted\"]\n deleted = bool(int(deleted))\n if deleted:\n continue\n\n if ray.utils.decode(info[b\"node_ip_address\"]) == node_ip_address:\n raise Exception(\"This Redis instance is already connected to \"\n \"clients with this IP address.\")\n\n\[email protected]()\[email protected](\n \"--logging-level\",\n required=False,\n default=ray_constants.LOGGER_LEVEL,\n type=str,\n help=ray_constants.LOGGER_LEVEL_HELP)\[email protected](\n \"--logging-format\",\n required=False,\n default=ray_constants.LOGGER_FORMAT,\n type=str,\n help=ray_constants.LOGGER_FORMAT_HELP)\[email protected]_option()\ndef cli(logging_level, logging_format):\n level = logging.getLevelName(logging_level.upper())\n ray.utils.setup_logger(level, logging_format)\n\n\[email protected]()\[email protected](\"cluster_config_file\", required=True, type=str)\[email protected](\n \"--cluster-name\",\n \"-n\",\n required=False,\n type=str,\n help=\"Override the configured cluster name.\")\[email protected](\n \"--port\",\n \"-p\",\n required=False,\n type=int,\n default=8265,\n help=\"The local port to forward to the dashboard\")\ndef dashboard(cluster_config_file, cluster_name, port):\n \"\"\"Port-forward a Ray cluster's dashboard to the local machine.\"\"\"\n # Sleeping in a loop is preferable to `sleep infinity` because the latter\n # only works on linux.\n remote_port = 8265\n if port:\n dashboard_port = port\n else:\n dashboard_port = remote_port\n\n port_taken = True\n\n # Find the first open port sequentially from `remote_port`.\n while port_taken:\n try:\n port_forward = [\n (dashboard_port, remote_port),\n ]\n click.echo(\n \"Attempting to establish dashboard at localhost:{}\".format(\n port_forward[0][0]))\n # We want to probe with a no-op that returns quickly to avoid\n # exceptions caused by network errors.\n exec_cluster(\n cluster_config_file,\n override_cluster_name=cluster_name,\n port_forward=port_forward)\n port_taken = False\n except Exception:\n click.echo(\"Failed to forward dashboard, trying a new port...\")\n port_taken = True\n dashboard_port += 1\n pass\n\n\[email protected]()\[email protected](\n \"--node-ip-address\",\n required=False,\n type=str,\n help=\"the IP address of this node\")\[email protected](\n \"--redis-address\", required=False, type=str, help=\"same as --address\")\[email protected](\n \"--address\", required=False, type=str, help=\"the address to use for Ray\")\[email protected](\n \"--redis-port\",\n required=False,\n type=str,\n help=\"(DEPRECATED) the port to use for starting redis. \"\n \"Please use --port instead now.\")\[email protected](\n \"--port\",\n required=False,\n type=str,\n help=\"the port of the head ray process. 
If not provided, tries to use \"\n \"{0}, falling back to a random port if {0} is \"\n \"not available\".format(ray_constants.DEFAULT_PORT))\[email protected](\n \"--num-redis-shards\",\n required=False,\n type=int,\n help=(\"the number of additional Redis shards to use in \"\n \"addition to the primary Redis shard\"))\[email protected](\n \"--redis-max-clients\",\n required=False,\n type=int,\n help=(\"If provided, attempt to configure Redis with this \"\n \"maximum number of clients.\"))\[email protected](\n \"--redis-password\",\n required=False,\n type=str,\n default=ray_constants.REDIS_DEFAULT_PASSWORD,\n help=\"If provided, secure Redis ports with this password\")\[email protected](\n \"--redis-shard-ports\",\n required=False,\n type=str,\n help=\"the port to use for the Redis shards other than the \"\n \"primary Redis shard\")\[email protected](\n \"--object-manager-port\",\n required=False,\n type=int,\n help=\"the port to use for starting the object manager\")\[email protected](\n \"--node-manager-port\",\n required=False,\n type=int,\n help=\"the port to use for starting the node manager\")\[email protected](\n \"--min-worker-port\",\n required=False,\n type=int,\n default=10000,\n help=\"the lowest port number that workers will bind on. If not set, \"\n \"random ports will be chosen.\")\[email protected](\n \"--max-worker-port\",\n required=False,\n type=int,\n default=10999,\n help=\"the highest port number that workers will bind on. If set, \"\n \"'--min-worker-port' must also be set.\")\[email protected](\n \"--memory\",\n required=False,\n type=int,\n help=\"The amount of memory (in bytes) to make available to workers. \"\n \"By default, this is set to the available memory on the node.\")\[email protected](\n \"--object-store-memory\",\n required=False,\n type=int,\n help=\"The amount of memory (in bytes) to start the object store with. \"\n \"By default, this is capped at 20GB but can be set higher.\")\[email protected](\n \"--redis-max-memory\",\n required=False,\n type=int,\n help=\"The max amount of memory (in bytes) to allow redis to use. Once the \"\n \"limit is exceeded, redis will start LRU eviction of entries. This only \"\n \"applies to the sharded redis tables (task, object, and profile tables). \"\n \"By default this is capped at 10GB but can be set higher.\")\[email protected](\n \"--num-cpus\",\n required=False,\n type=int,\n help=\"the number of CPUs on this node\")\[email protected](\n \"--num-gpus\",\n required=False,\n type=int,\n help=\"the number of GPUs on this node\")\[email protected](\n \"--resources\",\n required=False,\n default=\"{}\",\n type=str,\n help=\"a JSON serialized dictionary mapping resource name to \"\n \"resource quantity\")\[email protected](\n \"--head\",\n is_flag=True,\n default=False,\n help=\"provide this argument for the head node\")\[email protected](\n \"--include-webui\",\n default=None,\n type=bool,\n help=\"provide this argument if the UI should be started\")\[email protected](\n \"--webui-host\",\n required=False,\n default=\"localhost\",\n help=\"The host to bind the web UI server to. Can either be localhost \"\n \"(127.0.0.1) or 0.0.0.0 (available from all interfaces). 
By default, this \"\n \"is set to localhost to prevent access from external machines.\")\[email protected](\n \"--block\",\n is_flag=True,\n default=False,\n help=\"provide this argument to block forever in this command\")\[email protected](\n \"--plasma-directory\",\n required=False,\n type=str,\n help=\"object store directory for memory mapped files\")\[email protected](\n \"--huge-pages\",\n is_flag=True,\n default=False,\n help=\"enable support for huge pages in the object store\")\[email protected](\n \"--autoscaling-config\",\n required=False,\n type=str,\n help=\"the file that contains the autoscaling config\")\[email protected](\n \"--no-redirect-worker-output\",\n is_flag=True,\n default=False,\n help=\"do not redirect worker stdout and stderr to files\")\[email protected](\n \"--no-redirect-output\",\n is_flag=True,\n default=False,\n help=\"do not redirect non-worker stdout and stderr to files\")\[email protected](\n \"--plasma-store-socket-name\",\n default=None,\n help=\"manually specify the socket name of the plasma store\")\[email protected](\n \"--raylet-socket-name\",\n default=None,\n help=\"manually specify the socket path of the raylet process\")\[email protected](\n \"--temp-dir\",\n default=None,\n help=\"manually specify the root temporary dir of the Ray process\")\[email protected](\n \"--include-java\",\n is_flag=True,\n default=None,\n help=\"Enable Java worker support.\")\[email protected](\n \"--java-worker-options\",\n required=False,\n default=None,\n type=str,\n help=\"Overwrite the options to start Java workers.\")\[email protected](\n \"--internal-config\",\n default=None,\n type=json.loads,\n help=\"Do NOT use this. This is for debugging/development purposes ONLY.\")\[email protected](\n \"--load-code-from-local\",\n is_flag=True,\n default=False,\n help=\"Specify whether load code from local file or GCS serialization.\")\ndef start(node_ip_address, redis_address, address, redis_port, port,\n num_redis_shards, redis_max_clients, redis_password,\n redis_shard_ports, object_manager_port, node_manager_port,\n min_worker_port, max_worker_port, memory, object_store_memory,\n redis_max_memory, num_cpus, num_gpus, resources, head, include_webui,\n webui_host, block, plasma_directory, huge_pages, autoscaling_config,\n no_redirect_worker_output, no_redirect_output,\n plasma_store_socket_name, raylet_socket_name, temp_dir, include_java,\n java_worker_options, load_code_from_local, internal_config):\n \"\"\"Start Ray processes manually on the local machine.\"\"\"\n if redis_address is not None:\n raise DeprecationWarning(\"The --redis-address argument is \"\n \"deprecated. Please use --address instead.\")\n if redis_port is not None:\n logger.warn(\"The --redis-port argument will be deprecated soon. \"\n \"Please use --port instead.\")\n if port is not None and port != redis_port:\n raise ValueError(\"Cannot specify both --port and --redis-port \"\n \"as port is a rename of deprecated redis-port\")\n\n # Convert hostnames to numerical IP address.\n if node_ip_address is not None:\n node_ip_address = services.address_to_ip(node_ip_address)\n\n if redis_address is not None or address is not None:\n (redis_address, redis_address_ip,\n redis_address_port) = services.validate_redis_address(\n address, redis_address)\n\n try:\n resources = json.loads(resources)\n except Exception:\n raise Exception(\"Unable to parse the --resources argument using \"\n \"json.loads. 
Try using a format like\\n\\n\"\n \" --resources='{\\\"CustomResource1\\\": 3, \"\n \"\\\"CustomReseource2\\\": 2}'\")\n\n redirect_worker_output = None if not no_redirect_worker_output else True\n redirect_output = None if not no_redirect_output else True\n ray_params = ray.parameter.RayParams(\n node_ip_address=node_ip_address,\n min_worker_port=min_worker_port,\n max_worker_port=max_worker_port,\n object_manager_port=object_manager_port,\n node_manager_port=node_manager_port,\n memory=memory,\n object_store_memory=object_store_memory,\n redis_password=redis_password,\n redirect_worker_output=redirect_worker_output,\n redirect_output=redirect_output,\n num_cpus=num_cpus,\n num_gpus=num_gpus,\n resources=resources,\n plasma_directory=plasma_directory,\n huge_pages=huge_pages,\n plasma_store_socket_name=plasma_store_socket_name,\n raylet_socket_name=raylet_socket_name,\n temp_dir=temp_dir,\n include_java=include_java,\n include_webui=include_webui,\n webui_host=webui_host,\n java_worker_options=java_worker_options,\n load_code_from_local=load_code_from_local,\n _internal_config=internal_config)\n if head:\n # Start Ray on the head node.\n if redis_shard_ports is not None:\n redis_shard_ports = redis_shard_ports.split(\",\")\n # Infer the number of Redis shards from the ports if the number is\n # not provided.\n if num_redis_shards is None:\n num_redis_shards = len(redis_shard_ports)\n # Check that the arguments match.\n if len(redis_shard_ports) != num_redis_shards:\n raise Exception(\"If --redis-shard-ports is provided, it must \"\n \"have the form '6380,6381,6382', and the \"\n \"number of ports provided must equal \"\n \"--num-redis-shards (which is 1 if not \"\n \"provided)\")\n\n if redis_address is not None:\n raise Exception(\"If --head is passed in, a Redis server will be \"\n \"started, so a Redis address should not be \"\n \"provided.\")\n\n # Get the node IP address if one is not provided.\n ray_params.update_if_absent(\n node_ip_address=services.get_node_ip_address())\n logger.info(\"Using IP address {} for this node.\".format(\n ray_params.node_ip_address))\n ray_params.update_if_absent(\n redis_port=port or redis_port,\n redis_shard_ports=redis_shard_ports,\n redis_max_memory=redis_max_memory,\n num_redis_shards=num_redis_shards,\n redis_max_clients=redis_max_clients,\n autoscaling_config=autoscaling_config,\n include_java=False,\n )\n\n node = ray.node.Node(\n ray_params, head=True, shutdown_at_exit=block, spawn_reaper=block)\n redis_address = node.redis_address\n\n logger.info(\n \"\\nStarted Ray on this node. You can add additional nodes to \"\n \"the cluster by calling\\n\\n\"\n \" ray start --address='{}'{}\\n\\n\"\n \"from the node you wish to add. You can connect a driver to the \"\n \"cluster from Python by running\\n\\n\"\n \" import ray\\n\"\n \" ray.init(address='auto'{})\\n\\n\"\n \"If you have trouble connecting from a different machine, check \"\n \"that your firewall is configured properly. 
If you wish to \"\n \"terminate the processes that have been started, run\\n\\n\"\n \" ray stop\".format(\n redis_address, \" --redis-password='\" + redis_password + \"'\"\n if redis_password else \"\",\n \", redis_password='\" + redis_password + \"'\"\n if redis_password else \"\"))\n else:\n # Start Ray on a non-head node.\n if not (redis_port is None and port is None):\n raise Exception(\n \"If --head is not passed in, --port and --redis-port are not \"\n \"allowed.\")\n if redis_shard_ports is not None:\n raise Exception(\"If --head is not passed in, --redis-shard-ports \"\n \"is not allowed.\")\n if redis_address is None:\n raise Exception(\"If --head is not passed in, --address must \"\n \"be provided.\")\n if num_redis_shards is not None:\n raise Exception(\"If --head is not passed in, --num-redis-shards \"\n \"must not be provided.\")\n if redis_max_clients is not None:\n raise Exception(\"If --head is not passed in, --redis-max-clients \"\n \"must not be provided.\")\n if include_webui:\n raise Exception(\"If --head is not passed in, the --include-webui \"\n \"flag is not relevant.\")\n if include_java is not None:\n raise ValueError(\"--include-java should only be set for the head \"\n \"node.\")\n\n # Wait for the Redis server to be started. And throw an exception if we\n # can't connect to it.\n services.wait_for_redis_to_start(\n redis_address_ip, redis_address_port, password=redis_password)\n\n # Create a Redis client.\n redis_client = services.create_redis_client(\n redis_address, password=redis_password)\n\n # Check that the version information on this node matches the version\n # information that the cluster was started with.\n services.check_version_info(redis_client)\n\n # Get the node IP address if one is not provided.\n ray_params.update_if_absent(\n node_ip_address=services.get_node_ip_address(redis_address))\n logger.info(\"Using IP address {} for this node.\".format(\n ray_params.node_ip_address))\n # Check that there aren't already Redis clients with the same IP\n # address connected with this Redis instance. This raises an exception\n # if the Redis server already has clients on this node.\n check_no_existing_redis_clients(ray_params.node_ip_address,\n redis_client)\n ray_params.update(redis_address=redis_address)\n node = ray.node.Node(\n ray_params, head=False, shutdown_at_exit=block, spawn_reaper=block)\n logger.info(\"\\nStarted Ray on this node. 
If you wish to terminate the \"\n \"processes that have been started, run\\n\\n\"\n \" ray stop\")\n\n if block:\n while True:\n time.sleep(1)\n deceased = node.dead_processes()\n if len(deceased) > 0:\n logger.error(\"Ray processes died unexpectedly:\")\n for process_type, process in deceased:\n logger.error(\"\\t{} died with exit code {}\".format(\n process_type, process.returncode))\n # shutdown_at_exit will handle cleanup.\n logger.error(\"Killing remaining processes and exiting...\")\n sys.exit(1)\n\n\[email protected]()\[email protected](\n \"-f\",\n \"--force\",\n is_flag=True,\n help=\"If set, ray will send SIGKILL instead of SIGTERM.\")\[email protected](\n \"-v\",\n \"--verbose\",\n is_flag=True,\n help=\"If set, ray prints out more information about processes to kill.\")\ndef stop(force, verbose):\n \"\"\"Stop Ray processes manually on the local machine.\"\"\"\n # Note that raylet needs to exit before object store, otherwise\n # it cannot exit gracefully.\n is_linux = sys.platform.startswith(\"linux\")\n processes_to_kill = [\n # The first element is the substring to filter.\n # The second element, if True, is to filter ps results by command name\n # (only the first 15 charactors of the executable name on Linux);\n # if False, is to filter ps results by command with all its arguments.\n # See STANDARD FORMAT SPECIFIERS section of\n # http://man7.org/linux/man-pages/man1/ps.1.html\n # about comm and args. This can help avoid killing non-ray processes.\n # Format:\n # Keyword to filter, filter by command (True)/filter by args (False)\n [\"raylet\", True],\n [\"plasma_store\", True],\n [\"raylet_monitor\", True],\n [\"gcs_server\", True],\n [\"monitor.py\", False],\n [\"redis-server\", False],\n [\"default_worker.py\", False], # Python worker.\n [\"ray::\", True], # Python worker. TODO(mehrdadn): Fix for Windows\n [\"io.ray.runtime.runner.worker.DefaultWorker\", False], # Java worker.\n [\"log_monitor.py\", False],\n [\"reporter.py\", False],\n [\"dashboard.py\", False],\n [\"ray_process_reaper.py\", False],\n ]\n\n process_infos = []\n for proc in psutil.process_iter([\"name\", \"cmdline\"]):\n try:\n process_infos.append((proc, proc.name(), proc.cmdline()))\n except psutil.Error:\n pass\n for keyword, filter_by_cmd in processes_to_kill:\n if filter_by_cmd and is_linux and len(keyword) > 15:\n msg = (\"The filter string should not be more than {} \"\n \"characters. Actual length: {}. Filter: {}\").format(\n 15, len(keyword), keyword)\n raise ValueError(msg)\n found = []\n for candidate in process_infos:\n proc, proc_cmd, proc_args = candidate\n corpus = (proc_cmd\n if filter_by_cmd else subprocess.list2cmdline(proc_args))\n if keyword in corpus:\n found.append(candidate)\n for proc, proc_cmd, proc_args in found:\n if verbose:\n operation = \"Terminating\" if force else \"Killing\"\n logger.info(\"%s process %s: %s\", operation, proc.pid,\n subprocess.list2cmdline(proc_args))\n try:\n if force:\n proc.kill()\n else:\n # TODO(mehrdadn): On Windows, this is forceful termination.\n # We don't want CTRL_BREAK_EVENT, because that would\n # terminate the entire process group. What to do?\n proc.terminate()\n except psutil.NoSuchProcess:\n pass\n except (psutil.Error, OSError) as ex:\n logger.error(\"Error: %s\", ex)\n\n\[email protected](hidden=True)\[email protected](\"cluster_config_file\", required=True, type=str)\[email protected](\n \"--no-restart\",\n is_flag=True,\n default=False,\n help=(\"Whether to skip restarting Ray services during the update. 
\"\n \"This avoids interrupting running jobs.\"))\[email protected](\n \"--restart-only\",\n is_flag=True,\n default=False,\n help=(\"Whether to skip running setup commands and only restart Ray. \"\n \"This cannot be used with 'no-restart'.\"))\[email protected](\n \"--min-workers\",\n required=False,\n type=int,\n help=\"Override the configured min worker node count for the cluster.\")\[email protected](\n \"--max-workers\",\n required=False,\n type=int,\n help=\"Override the configured max worker node count for the cluster.\")\[email protected](\n \"--cluster-name\",\n \"-n\",\n required=False,\n type=str,\n help=\"Override the configured cluster name.\")\[email protected](\n \"--yes\",\n \"-y\",\n is_flag=True,\n default=False,\n help=\"Don't ask for confirmation.\")\ndef create_or_update(cluster_config_file, min_workers, max_workers, no_restart,\n restart_only, yes, cluster_name):\n \"\"\"Create or update a Ray cluster.\"\"\"\n if restart_only or no_restart:\n assert restart_only != no_restart, \"Cannot set both 'restart_only' \" \\\n \"and 'no_restart' at the same time!\"\n if urllib.parse.urlparse(cluster_config_file).scheme in (\"http\", \"https\"):\n try:\n response = urllib.request.urlopen(cluster_config_file, timeout=5)\n content = response.read()\n file_name = cluster_config_file.split(\"/\")[-1]\n with open(file_name, \"wb\") as f:\n f.write(content)\n cluster_config_file = file_name\n except urllib.error.HTTPError as e:\n logger.info(\"Error downloading file: \", e)\n create_or_update_cluster(cluster_config_file, min_workers, max_workers,\n no_restart, restart_only, yes, cluster_name)\n\n\[email protected](hidden=True)\[email protected](\"cluster_config_file\", required=True, type=str)\[email protected](\n \"--workers-only\",\n is_flag=True,\n default=False,\n help=\"Only destroy the workers.\")\[email protected](\n \"--keep-min-workers\",\n is_flag=True,\n default=False,\n help=\"Retain the minimal amount of workers specified in the config.\")\[email protected](\n \"--yes\",\n \"-y\",\n is_flag=True,\n default=False,\n help=\"Don't ask for confirmation.\")\[email protected](\n \"--cluster-name\",\n \"-n\",\n required=False,\n type=str,\n help=\"Override the configured cluster name.\")\ndef teardown(cluster_config_file, yes, workers_only, cluster_name,\n keep_min_workers):\n \"\"\"Tear down a Ray cluster.\"\"\"\n teardown_cluster(cluster_config_file, yes, workers_only, cluster_name,\n keep_min_workers)\n\n\[email protected]()\[email protected](\"cluster_config_file\", required=True, type=str)\[email protected](\n \"--yes\",\n \"-y\",\n is_flag=True,\n default=False,\n help=\"Don't ask for confirmation.\")\[email protected](\n \"--hard\",\n is_flag=True,\n default=False,\n help=\"Terminates the node via node provider (defaults to a 'soft kill'\"\n \" which terminates Ray but does not actually delete the instances).\")\[email protected](\n \"--cluster-name\",\n \"-n\",\n required=False,\n type=str,\n help=\"Override the configured cluster name.\")\ndef kill_random_node(cluster_config_file, yes, hard, cluster_name):\n \"\"\"Kills a random Ray node. 
For testing purposes only.\"\"\"\n click.echo(\"Killed node with IP \" +\n kill_node(cluster_config_file, yes, hard, cluster_name))\n\n\[email protected]()\[email protected](\"cluster_config_file\", required=True, type=str)\[email protected](\n \"--lines\",\n required=False,\n default=100,\n type=int,\n help=\"Number of lines to tail.\")\[email protected](\n \"--cluster-name\",\n \"-n\",\n required=False,\n type=str,\n help=\"Override the configured cluster name.\")\ndef monitor(cluster_config_file, lines, cluster_name):\n \"\"\"Tails the autoscaler logs of a Ray cluster.\"\"\"\n monitor_cluster(cluster_config_file, lines, cluster_name)\n\n\[email protected]()\[email protected](\"cluster_config_file\", required=True, type=str)\[email protected](\n \"--start\",\n is_flag=True,\n default=False,\n help=\"Start the cluster if needed.\")\[email protected](\n \"--screen\", is_flag=True, default=False, help=\"Run the command in screen.\")\[email protected](\n \"--tmux\", is_flag=True, default=False, help=\"Run the command in tmux.\")\[email protected](\n \"--cluster-name\",\n \"-n\",\n required=False,\n type=str,\n help=\"Override the configured cluster name.\")\[email protected](\n \"--new\", \"-N\", is_flag=True, help=\"Force creation of a new screen.\")\[email protected](\n \"--port-forward\",\n \"-p\",\n required=False,\n multiple=True,\n type=int,\n help=\"Port to forward. Use this multiple times to forward multiple ports.\")\ndef attach(cluster_config_file, start, screen, tmux, cluster_name, new,\n port_forward):\n \"\"\"Create or attach to a SSH session to a Ray cluster.\"\"\"\n port_forward = [(port, port) for port in list(port_forward)]\n attach_cluster(cluster_config_file, start, screen, tmux, cluster_name, new,\n port_forward)\n\n\[email protected]()\[email protected](\"cluster_config_file\", required=True, type=str)\[email protected](\"source\", required=False, type=str)\[email protected](\"target\", required=False, type=str)\[email protected](\n \"--cluster-name\",\n \"-n\",\n required=False,\n type=str,\n help=\"Override the configured cluster name.\")\ndef rsync_down(cluster_config_file, source, target, cluster_name):\n \"\"\"Download specific files from a Ray cluster.\"\"\"\n rsync(cluster_config_file, source, target, cluster_name, down=True)\n\n\[email protected]()\[email protected](\"cluster_config_file\", required=True, type=str)\[email protected](\"source\", required=False, type=str)\[email protected](\"target\", required=False, type=str)\[email protected](\n \"--cluster-name\",\n \"-n\",\n required=False,\n type=str,\n help=\"Override the configured cluster name.\")\[email protected](\n \"--all-nodes\",\n \"-A\",\n is_flag=True,\n required=False,\n help=\"Upload to all nodes (workers and head).\")\ndef rsync_up(cluster_config_file, source, target, cluster_name, all_nodes):\n \"\"\"Upload specific files to a Ray cluster.\"\"\"\n rsync(\n cluster_config_file,\n source,\n target,\n cluster_name,\n down=False,\n all_nodes=all_nodes)\n\n\[email protected](context_settings={\"ignore_unknown_options\": True})\[email protected](\"cluster_config_file\", required=True, type=str)\[email protected](\n \"--docker\",\n is_flag=True,\n default=False,\n help=\"Runs command in the docker container specified in cluster_config.\")\[email protected](\n \"--stop\",\n is_flag=True,\n default=False,\n help=\"Stop the cluster after the command finishes running.\")\[email protected](\n \"--start\",\n is_flag=True,\n default=False,\n help=\"Start the cluster if needed.\")\[email protected](\n 
\"--screen\",\n is_flag=True,\n default=False,\n help=\"Run the command in a screen.\")\[email protected](\n \"--tmux\", is_flag=True, default=False, help=\"Run the command in tmux.\")\[email protected](\n \"--cluster-name\",\n \"-n\",\n required=False,\n type=str,\n help=\"Override the configured cluster name.\")\[email protected](\n \"--port-forward\",\n \"-p\",\n required=False,\n multiple=True,\n type=int,\n help=\"Port to forward. Use this multiple times to forward multiple ports.\")\[email protected](\"script\", required=True, type=str)\[email protected](\n \"--args\",\n required=False,\n type=str,\n help=\"(deprecated) Use '-- --arg1 --arg2' for script args.\")\[email protected](\"script_args\", nargs=-1)\ndef submit(cluster_config_file, docker, screen, tmux, stop, start,\n cluster_name, port_forward, script, args, script_args):\n \"\"\"Uploads and runs a script on the specified cluster.\n\n The script is automatically synced to the following location:\n\n os.path.join(\"~\", os.path.basename(script))\n\n Example:\n >>> ray submit [CLUSTER.YAML] experiment.py -- --smoke-test\n \"\"\"\n assert not (screen and tmux), \"Can specify only one of `screen` or `tmux`.\"\n assert not (script_args and args), \"Use -- --arg1 --arg2 for script args.\"\n\n if args:\n logger.warning(\n \"ray submit [yaml] [script.py] --args=... is deprecated and \"\n \"will be removed in a future version of Ray. Use \"\n \"`ray submit [yaml] script.py -- --arg1 --arg2` instead.\")\n\n if start:\n create_or_update_cluster(cluster_config_file, None, None, False, False,\n True, cluster_name)\n\n target = os.path.join(\"~\", os.path.basename(script))\n rsync(cluster_config_file, script, target, cluster_name, down=False)\n\n command_parts = [\"python\", target]\n if script_args:\n command_parts += list(script_args)\n elif args is not None:\n command_parts += [args]\n\n port_forward = [(port, port) for port in list(port_forward)]\n cmd = \" \".join(command_parts)\n exec_cluster(\n cluster_config_file,\n cmd,\n docker,\n screen,\n tmux,\n stop,\n start=False,\n override_cluster_name=cluster_name,\n port_forward=port_forward)\n\n\[email protected](hidden=True)\[email protected](\"cluster_config_file\", required=True, type=str)\[email protected](\"cmd\", required=True, type=str)\[email protected](\n \"--docker\",\n is_flag=True,\n default=False,\n help=\"Runs command in the docker container specified in cluster_config.\")\[email protected](\n \"--stop\",\n is_flag=True,\n default=False,\n help=\"Stop the cluster after the command finishes running.\")\[email protected](\n \"--start\",\n is_flag=True,\n default=False,\n help=\"Start the cluster if needed.\")\[email protected](\n \"--screen\",\n is_flag=True,\n default=False,\n help=\"Run the command in a screen.\")\[email protected](\n \"--tmux\", is_flag=True, default=False, help=\"Run the command in tmux.\")\[email protected](\n \"--cluster-name\",\n \"-n\",\n required=False,\n type=str,\n help=\"Override the configured cluster name.\")\[email protected](\n \"--port-forward\",\n \"-p\",\n required=False,\n multiple=True,\n type=int,\n help=\"Port to forward. 
Use this multiple times to forward multiple ports.\")\ndef exec_cmd(cluster_config_file, cmd, docker, screen, tmux, stop, start,\n cluster_name, port_forward):\n \"\"\"Execute a command via SSH on a Ray cluster.\"\"\"\n port_forward = [(port, port) for port in list(port_forward)]\n exec_cluster(cluster_config_file, cmd, docker, screen, tmux, stop, start,\n cluster_name, port_forward)\n\n\[email protected]()\[email protected](\"cluster_config_file\", required=True, type=str)\[email protected](\n \"--cluster-name\",\n \"-n\",\n required=False,\n type=str,\n help=\"Override the configured cluster name.\")\ndef get_head_ip(cluster_config_file, cluster_name):\n \"\"\"Return the head node IP of a Ray cluster.\"\"\"\n click.echo(get_head_node_ip(cluster_config_file, cluster_name))\n\n\[email protected]()\[email protected](\"cluster_config_file\", required=True, type=str)\[email protected](\n \"--cluster-name\",\n \"-n\",\n required=False,\n type=str,\n help=\"Override the configured cluster name.\")\ndef get_worker_ips(cluster_config_file, cluster_name):\n \"\"\"Return the list of worker IPs of a Ray cluster.\"\"\"\n worker_ips = get_worker_node_ips(cluster_config_file, cluster_name)\n click.echo(\"\\n\".join(worker_ips))\n\n\[email protected]()\ndef stack():\n \"\"\"Take a stack dump of all Python workers on the local machine.\"\"\"\n COMMAND = \"\"\"\npyspy=`which py-spy`\nif [ ! -e \"$pyspy\" ]; then\n echo \"ERROR: Please 'pip install py-spy' (or ray[debug]) first\"\n exit 1\nfi\n# Set IFS to iterate over lines instead of over words.\nexport IFS=\"\n\"\n# Call sudo to prompt for password before anything has been printed.\nsudo true\nworkers=$(\n ps aux | grep -E ' ray_|default_worker.py' | grep -v grep\n)\nfor worker in $workers; do\n echo \"Stack dump for $worker\";\n pid=`echo $worker | awk '{print $2}'`;\n sudo $pyspy dump --pid $pid;\n echo;\ndone\n \"\"\"\n subprocess.call(COMMAND, shell=True)\n\n\[email protected]()\ndef microbenchmark():\n \"\"\"Run a local Ray microbenchmark on the current machine.\"\"\"\n from ray.ray_perf import main\n main()\n\n\[email protected]()\[email protected](\n \"--address\",\n required=False,\n type=str,\n help=\"Override the redis address to connect to.\")\ndef timeline(address):\n \"\"\"Take a Chrome tracing timeline for a Ray cluster.\"\"\"\n if not address:\n address = services.find_redis_address_or_die()\n logger.info(\"Connecting to Ray instance at {}.\".format(address))\n ray.init(address=address)\n time = datetime.today().strftime(\"%Y-%m-%d_%H-%M-%S\")\n filename = os.path.join(ray.utils.get_user_temp_dir(),\n \"ray-timeline-{}.json\".format(time))\n ray.timeline(filename=filename)\n size = os.path.getsize(filename)\n logger.info(\"Trace file written to {} ({} bytes).\".format(filename, size))\n logger.info(\n \"You can open this with chrome://tracing in the Chrome browser.\")\n\n\[email protected]()\[email protected](\n \"--address\",\n required=False,\n type=str,\n help=\"Override the address to connect to.\")\ndef stat(address):\n \"\"\"Get the current metrics protobuf from a Ray cluster (developer tool).\"\"\"\n if not address:\n address = services.find_redis_address_or_die()\n logger.info(\"Connecting to Ray instance at {}.\".format(address))\n ray.init(address=address)\n\n import grpc\n from ray.core.generated import node_manager_pb2\n from ray.core.generated import node_manager_pb2_grpc\n\n for raylet in ray.nodes():\n raylet_address = \"{}:{}\".format(raylet[\"NodeManagerAddress\"],\n ray.nodes()[0][\"NodeManagerPort\"])\n 
logger.info(\"Querying raylet {}\".format(raylet_address))\n\n channel = grpc.insecure_channel(raylet_address)\n stub = node_manager_pb2_grpc.NodeManagerServiceStub(channel)\n reply = stub.GetNodeStats(\n node_manager_pb2.GetNodeStatsRequest(include_memory_info=False),\n timeout=2.0)\n print(reply)\n\n\[email protected]()\[email protected](\n \"--address\",\n required=False,\n type=str,\n help=\"Override the address to connect to.\")\ndef memory(address):\n \"\"\"Print object references held in a Ray cluster.\"\"\"\n if not address:\n address = services.find_redis_address_or_die()\n logger.info(\"Connecting to Ray instance at {}.\".format(address))\n ray.init(address=address)\n print(ray.internal.internal_api.memory_summary())\n\n\[email protected]()\[email protected](\n \"--address\",\n required=False,\n type=str,\n help=\"Override the address to connect to.\")\ndef globalgc(address):\n \"\"\"Trigger Python garbage collection on all cluster workers.\"\"\"\n if not address:\n address = services.find_redis_address_or_die()\n logger.info(\"Connecting to Ray instance at {}.\".format(address))\n ray.init(address=address)\n ray.internal.internal_api.global_gc()\n print(\"Triggered gc.collect() on all workers.\")\n\n\ndef add_command_alias(command, name, hidden):\n new_command = copy.deepcopy(command)\n new_command.hidden = hidden\n cli.add_command(new_command, name=name)\n\n\ncli.add_command(dashboard)\ncli.add_command(start)\ncli.add_command(stop)\nadd_command_alias(create_or_update, name=\"up\", hidden=False)\ncli.add_command(attach)\nadd_command_alias(exec_cmd, name=\"exec\", hidden=False)\nadd_command_alias(rsync_down, name=\"rsync_down\", hidden=True)\nadd_command_alias(rsync_up, name=\"rsync_up\", hidden=True)\ncli.add_command(submit)\ncli.add_command(teardown)\nadd_command_alias(teardown, name=\"down\", hidden=False)\ncli.add_command(kill_random_node)\nadd_command_alias(get_head_ip, name=\"get_head_ip\", hidden=True)\ncli.add_command(get_worker_ips)\ncli.add_command(microbenchmark)\ncli.add_command(stack)\ncli.add_command(stat)\ncli.add_command(memory)\ncli.add_command(globalgc)\ncli.add_command(timeline)\ncli.add_command(project_cli)\ncli.add_command(session_cli)\n\ntry:\n from ray.serve.scripts import serve_cli\n cli.add_command(serve_cli)\nexcept Exception as e:\n logger.debug(\n \"Integrating ray serve command line tool failed with {}\".format(e))\n\n\ndef main():\n return cli()\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "python/ray/scripts/scripts.py"}]} |
gh_patches_debug_1175 | rasdani/github-patches | git_diff | privacyidea__privacyidea-1978 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Missing requirement in setup.py
The `flask-versioned` package is missing from `setup.py`'s `install_requires` list. When installing privacyIDEA via `setup.py` or `pip`, this will break.
--- END ISSUE ---
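The report above describes the classic shape of an undeclared runtime dependency: installation from source succeeds, and the failure only surfaces when the missing package is first imported. A minimal, self-contained sketch of that failure mode follows; the module names in the list are illustrative assumptions, not taken from the report.

```python
# Sketch: after `pip install .`, check that each assumed runtime dependency
# is importable. With flask-versioned absent from install_requires, the
# second entry is where a fresh install would break.
import importlib

assumed_runtime_deps = ["flask", "flask_versioned"]  # illustrative list only

for module_name in assumed_runtime_deps:
    try:
        importlib.import_module(module_name)
        print(f"ok: {module_name}")
    except ImportError as exc:
        print(f"missing: {module_name} ({exc})")
```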
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 from __future__ import print_function
3 from setuptools import setup, find_packages
4 import os
5 import stat
6 import sys
7
8 #VERSION="2.1dev4"
9 VERSION="3.2"
10
11 # Taken from kennethreitz/requests/setup.py
12 package_directory = os.path.realpath(os.path.dirname(__file__))
13
14
15 def get_file_contents(file_path):
16 """Get the context of the file using full path name."""
17 content = ""
18 try:
19 full_path = os.path.join(package_directory, file_path)
20 content = open(full_path, 'r').read()
21 except:
22 print("### could not open file {0!r}".format(file_path), file=sys.stderr)
23 return content
24
25
26 def get_file_list(file_path):
27 full_path = os.path.join(package_directory, file_path)
28 file_list = os.listdir(full_path)
29 # now we need to add the path to the files
30 return [file_path + f for f in file_list]
31
32
33 install_requires = ["Flask>=0.10.1",
34 "Flask-Migrate>=1.2.0",
35 "Flask-SQLAlchemy>=2.0",
36 "Flask-Script>=2.0.5",
37 "Jinja2>=2.10.1",
38 "Mako>=0.9.1",
39 "PyMySQL>=0.6.6",
40 "Pillow>=6.2.1",
41 "PyJWT>=1.3.0",
42 "PyYAML>=5.1",
43 "SQLAlchemy>=1.3.0",
44 "Werkzeug>=0.10.4",
45 "alembic>=0.6.7",
46 "bcrypt>=1.1.0",
47 "beautifulsoup4>=4.3.2",
48 "ldap3>=2.6",
49 "netaddr>=0.7.12",
50 "passlib>=1.6.2",
51 "pyOpenSSL>=17.5",
52 "pyrad>=2.0",
53 "qrcode>=6.1",
54 "requests>=2.7.0",
55 "sqlsoup>=0.9.0",
56 "ecdsa>=0.13.3",
57 "lxml>=4.2.5",
58 "python-gnupg>=0.4.4",
59 "defusedxml>=0.4.1",
60 "flask-babel>=0.9",
61 "croniter>=0.3.8",
62 "oauth2client>=2.0.1",
63 "configobj>=5.0.6"
64 ]
65
66 # For python 2.6 we need additional dependency importlib
67 try:
68 import importlib
69 except ImportError:
70 install_requires.append('importlib')
71
72
73 def get_man_pages(dir):
74 """
75 Get man pages in a directory.
76 :param dir:
77 :return: list of file names
78 """
79 files = os.listdir(dir)
80 r_files = []
81 for file in files:
82 if file.endswith(".1"):
83 r_files.append(dir + "/" + file)
84 return r_files
85
86
87 def get_scripts(dir):
88 """
89 Get files that are executable
90 :param dir:
91 :return: list of file names
92 """
93 files = os.listdir(dir)
94 r_files = []
95 for file in files:
96 if os.stat(dir + "/" + file)[stat.ST_MODE] & stat.S_IEXEC:
97 r_files.append(dir + "/" + file)
98 return r_files
99
100
101 setup(
102 name='privacyIDEA',
103 version=VERSION,
104 description='privacyIDEA: identity, multifactor authentication (OTP), '
105 'authorization, audit',
106 author='privacyidea.org',
107 license='AGPLv3',
108 author_email='[email protected]',
109 url='http://www.privacyidea.org',
110 keywords='OTP, two factor authentication, management, security',
111 python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*',
112 packages=find_packages(),
113 scripts=["pi-manage"] + get_scripts("tools"),
114 extras_require={
115 'dev': ["Sphinx>=1.3.1",
116 "sphinxcontrib-httpdomain>=1.3.0"],
117 'test': ["coverage>=3.7.1",
118 "mock>=1.0.1",
119 "pyparsing>=2.0.3",
120 "nose>=1.3.4",
121 "responses>=0.4.0",
122 "six>=1.8.0"],
123 },
124 install_requires=install_requires,
125 include_package_data=True,
126 data_files=[('etc/privacyidea/',
127 ['deploy/apache/privacyideaapp.wsgi',
128 'deploy/privacyidea/dictionary']),
129 ('share/man/man1', get_man_pages("tools")),
130 ('lib/privacyidea/migrations',
131 ["migrations/alembic.ini",
132 "migrations/env.py",
133 "migrations/README",
134 "migrations/script.py.mako"]),
135 ('lib/privacyidea/migrations/versions',
136 get_file_list("migrations/versions/")),
137 ('lib/privacyidea/', ['requirements.txt'])
138 ],
139 classifiers=["Framework :: Flask",
140 "License :: OSI Approved :: "
141 "GNU Affero General Public License v3",
142 "Programming Language :: Python",
143 "Development Status :: 5 - Production/Stable",
144 "Topic :: Internet",
145 "Topic :: Security",
146 "Topic :: System ::"
147 " Systems Administration :: Authentication/Directory",
148 'Programming Language :: Python',
149 'Programming Language :: Python :: 2',
150 'Programming Language :: Python :: 2.7',
151 'Programming Language :: Python :: 3',
152 'Programming Language :: Python :: 3.5',
153 'Programming Language :: Python :: 3.6',
154 'Programming Language :: Python :: 3.7'
155 ],
156 #message_extractors={'privacyidea': [
157 # ('**.py', 'python', None),
158 # ('static/**.html', 'html', {'input_encoding': 'utf-8'})]},
159 zip_safe=False,
160 long_description=get_file_contents('README.rst')
161 )
162
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -60,7 +60,8 @@
"flask-babel>=0.9",
"croniter>=0.3.8",
"oauth2client>=2.0.1",
- "configobj>=5.0.6"
+ "configobj>=5.0.6",
+ "flask-versioned>=0.9.4"
]
# For python 2.6 we need additional dependency importlib
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -60,7 +60,8 @@\n \"flask-babel>=0.9\",\n \"croniter>=0.3.8\",\n \"oauth2client>=2.0.1\",\n- \"configobj>=5.0.6\"\n+ \"configobj>=5.0.6\",\n+ \"flask-versioned>=0.9.4\"\n ]\n \n # For python 2.6 we need additional dependency importlib\n", "issue": "Missing requirement in setup.py\nThe `flask-versioned` package is missing in `setup.py`s `install_requires` list. When installing privacyIDEA via `setup.py` or `pip` this will break.\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import print_function\nfrom setuptools import setup, find_packages\nimport os\nimport stat\nimport sys\n\n#VERSION=\"2.1dev4\"\nVERSION=\"3.2\"\n\n# Taken from kennethreitz/requests/setup.py\npackage_directory = os.path.realpath(os.path.dirname(__file__))\n\n\ndef get_file_contents(file_path):\n \"\"\"Get the context of the file using full path name.\"\"\"\n content = \"\"\n try:\n full_path = os.path.join(package_directory, file_path)\n content = open(full_path, 'r').read()\n except:\n print(\"### could not open file {0!r}\".format(file_path), file=sys.stderr)\n return content\n\n\ndef get_file_list(file_path):\n full_path = os.path.join(package_directory, file_path)\n file_list = os.listdir(full_path)\n # now we need to add the path to the files\n return [file_path + f for f in file_list]\n\n\ninstall_requires = [\"Flask>=0.10.1\",\n \"Flask-Migrate>=1.2.0\",\n \"Flask-SQLAlchemy>=2.0\",\n \"Flask-Script>=2.0.5\",\n \"Jinja2>=2.10.1\",\n \"Mako>=0.9.1\",\n \"PyMySQL>=0.6.6\",\n \"Pillow>=6.2.1\",\n \"PyJWT>=1.3.0\",\n \"PyYAML>=5.1\",\n \"SQLAlchemy>=1.3.0\",\n \"Werkzeug>=0.10.4\",\n \"alembic>=0.6.7\",\n \"bcrypt>=1.1.0\",\n \"beautifulsoup4>=4.3.2\",\n \"ldap3>=2.6\",\n \"netaddr>=0.7.12\",\n \"passlib>=1.6.2\",\n \"pyOpenSSL>=17.5\",\n \"pyrad>=2.0\",\n \"qrcode>=6.1\",\n \"requests>=2.7.0\",\n \"sqlsoup>=0.9.0\",\n \"ecdsa>=0.13.3\",\n \"lxml>=4.2.5\",\n \"python-gnupg>=0.4.4\",\n \"defusedxml>=0.4.1\",\n \"flask-babel>=0.9\",\n \"croniter>=0.3.8\",\n \"oauth2client>=2.0.1\",\n \"configobj>=5.0.6\"\n ]\n\n# For python 2.6 we need additional dependency importlib\ntry:\n import importlib\nexcept ImportError:\n install_requires.append('importlib')\n\n\ndef get_man_pages(dir):\n \"\"\"\n Get man pages in a directory.\n :param dir: \n :return: list of file names\n \"\"\"\n files = os.listdir(dir)\n r_files = []\n for file in files:\n if file.endswith(\".1\"):\n r_files.append(dir + \"/\" + file)\n return r_files\n\n\ndef get_scripts(dir):\n \"\"\"\n Get files that are executable\n :param dir: \n :return: list of file names\n \"\"\"\n files = os.listdir(dir)\n r_files = []\n for file in files:\n if os.stat(dir + \"/\" + file)[stat.ST_MODE] & stat.S_IEXEC:\n r_files.append(dir + \"/\" + file)\n return r_files\n\n\nsetup(\n name='privacyIDEA',\n version=VERSION,\n description='privacyIDEA: identity, multifactor authentication (OTP), '\n 'authorization, audit',\n author='privacyidea.org',\n license='AGPLv3',\n author_email='[email protected]',\n url='http://www.privacyidea.org',\n keywords='OTP, two factor authentication, management, security',\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*',\n packages=find_packages(),\n scripts=[\"pi-manage\"] + get_scripts(\"tools\"),\n extras_require={\n 'dev': [\"Sphinx>=1.3.1\",\n \"sphinxcontrib-httpdomain>=1.3.0\"],\n 'test': [\"coverage>=3.7.1\",\n \"mock>=1.0.1\",\n \"pyparsing>=2.0.3\",\n \"nose>=1.3.4\",\n \"responses>=0.4.0\",\n 
\"six>=1.8.0\"],\n },\n install_requires=install_requires,\n include_package_data=True,\n data_files=[('etc/privacyidea/',\n ['deploy/apache/privacyideaapp.wsgi',\n 'deploy/privacyidea/dictionary']),\n ('share/man/man1', get_man_pages(\"tools\")),\n ('lib/privacyidea/migrations',\n [\"migrations/alembic.ini\",\n \"migrations/env.py\",\n \"migrations/README\",\n \"migrations/script.py.mako\"]),\n ('lib/privacyidea/migrations/versions',\n get_file_list(\"migrations/versions/\")),\n ('lib/privacyidea/', ['requirements.txt'])\n ],\n classifiers=[\"Framework :: Flask\",\n \"License :: OSI Approved :: \"\n \"GNU Affero General Public License v3\",\n \"Programming Language :: Python\",\n \"Development Status :: 5 - Production/Stable\",\n \"Topic :: Internet\",\n \"Topic :: Security\",\n \"Topic :: System ::\"\n \" Systems Administration :: Authentication/Directory\",\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7'\n ],\n #message_extractors={'privacyidea': [\n # ('**.py', 'python', None),\n # ('static/**.html', 'html', {'input_encoding': 'utf-8'})]},\n zip_safe=False,\n long_description=get_file_contents('README.rst')\n)\n", "path": "setup.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import print_function\nfrom setuptools import setup, find_packages\nimport os\nimport stat\nimport sys\n\n#VERSION=\"2.1dev4\"\nVERSION=\"3.2\"\n\n# Taken from kennethreitz/requests/setup.py\npackage_directory = os.path.realpath(os.path.dirname(__file__))\n\n\ndef get_file_contents(file_path):\n \"\"\"Get the context of the file using full path name.\"\"\"\n content = \"\"\n try:\n full_path = os.path.join(package_directory, file_path)\n content = open(full_path, 'r').read()\n except:\n print(\"### could not open file {0!r}\".format(file_path), file=sys.stderr)\n return content\n\n\ndef get_file_list(file_path):\n full_path = os.path.join(package_directory, file_path)\n file_list = os.listdir(full_path)\n # now we need to add the path to the files\n return [file_path + f for f in file_list]\n\n\ninstall_requires = [\"Flask>=0.10.1\",\n \"Flask-Migrate>=1.2.0\",\n \"Flask-SQLAlchemy>=2.0\",\n \"Flask-Script>=2.0.5\",\n \"Jinja2>=2.10.1\",\n \"Mako>=0.9.1\",\n \"PyMySQL>=0.6.6\",\n \"Pillow>=6.2.1\",\n \"PyJWT>=1.3.0\",\n \"PyYAML>=5.1\",\n \"SQLAlchemy>=1.3.0\",\n \"Werkzeug>=0.10.4\",\n \"alembic>=0.6.7\",\n \"bcrypt>=1.1.0\",\n \"beautifulsoup4>=4.3.2\",\n \"ldap3>=2.6\",\n \"netaddr>=0.7.12\",\n \"passlib>=1.6.2\",\n \"pyOpenSSL>=17.5\",\n \"pyrad>=2.0\",\n \"qrcode>=6.1\",\n \"requests>=2.7.0\",\n \"sqlsoup>=0.9.0\",\n \"ecdsa>=0.13.3\",\n \"lxml>=4.2.5\",\n \"python-gnupg>=0.4.4\",\n \"defusedxml>=0.4.1\",\n \"flask-babel>=0.9\",\n \"croniter>=0.3.8\",\n \"oauth2client>=2.0.1\",\n \"configobj>=5.0.6\",\n \"flask-versioned>=0.9.4\"\n ]\n\n# For python 2.6 we need additional dependency importlib\ntry:\n import importlib\nexcept ImportError:\n install_requires.append('importlib')\n\n\ndef get_man_pages(dir):\n \"\"\"\n Get man pages in a directory.\n :param dir: \n :return: list of file names\n \"\"\"\n files = os.listdir(dir)\n r_files = []\n for file in files:\n if file.endswith(\".1\"):\n r_files.append(dir + \"/\" + file)\n return r_files\n\n\ndef get_scripts(dir):\n \"\"\"\n Get files that are executable\n :param dir: \n :return: list of file 
names\n \"\"\"\n files = os.listdir(dir)\n r_files = []\n for file in files:\n if os.stat(dir + \"/\" + file)[stat.ST_MODE] & stat.S_IEXEC:\n r_files.append(dir + \"/\" + file)\n return r_files\n\n\nsetup(\n name='privacyIDEA',\n version=VERSION,\n description='privacyIDEA: identity, multifactor authentication (OTP), '\n 'authorization, audit',\n author='privacyidea.org',\n license='AGPLv3',\n author_email='[email protected]',\n url='http://www.privacyidea.org',\n keywords='OTP, two factor authentication, management, security',\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*',\n packages=find_packages(),\n scripts=[\"pi-manage\"] + get_scripts(\"tools\"),\n extras_require={\n 'dev': [\"Sphinx>=1.3.1\",\n \"sphinxcontrib-httpdomain>=1.3.0\"],\n 'test': [\"coverage>=3.7.1\",\n \"mock>=1.0.1\",\n \"pyparsing>=2.0.3\",\n \"nose>=1.3.4\",\n \"responses>=0.4.0\",\n \"six>=1.8.0\"],\n },\n install_requires=install_requires,\n include_package_data=True,\n data_files=[('etc/privacyidea/',\n ['deploy/apache/privacyideaapp.wsgi',\n 'deploy/privacyidea/dictionary']),\n ('share/man/man1', get_man_pages(\"tools\")),\n ('lib/privacyidea/migrations',\n [\"migrations/alembic.ini\",\n \"migrations/env.py\",\n \"migrations/README\",\n \"migrations/script.py.mako\"]),\n ('lib/privacyidea/migrations/versions',\n get_file_list(\"migrations/versions/\")),\n ('lib/privacyidea/', ['requirements.txt'])\n ],\n classifiers=[\"Framework :: Flask\",\n \"License :: OSI Approved :: \"\n \"GNU Affero General Public License v3\",\n \"Programming Language :: Python\",\n \"Development Status :: 5 - Production/Stable\",\n \"Topic :: Internet\",\n \"Topic :: Security\",\n \"Topic :: System ::\"\n \" Systems Administration :: Authentication/Directory\",\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7'\n ],\n #message_extractors={'privacyidea': [\n # ('**.py', 'python', None),\n # ('static/**.html', 'html', {'input_encoding': 'utf-8'})]},\n zip_safe=False,\n long_description=get_file_contents('README.rst')\n)\n", "path": "setup.py"}]} |
gh_patches_debug_1176 | rasdani/github-patches | git_diff | python-poetry__poetry-235 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
"poetry remove" case-sensitivity (qol)
```bash
$ poetry add pyyaml
Using version ^3.12 for PyYAML
Updating dependencies
Resolving dependencies...
Package operations: 1 install, 0 updates, 0 removals
Writing lock file
- Installing pyyaml (3.12)
$ poetry remove pyyaml
[KeyError]
remove [-D|--dev] [--dry-run] [--] <packages> (<packages>)...
$ poetry remove PyYAML
Updating dependencies
Resolving dependencies...
Package operations: 0 installs, 0 updates, 1 removal
Writing lock file
- Removing pyyaml (3.12)
```
Not urgent, but a hint such as "Dependencies are case sensitive." would have been really helpful.
--- END ISSUE ---
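The transcript shows a case-folding bug: the dependency table keeps the author's capitalization (`PyYAML`), the lookup loop compares names case-insensitively, but the follow-up index uses the user's spelling. A minimal sketch of the trap and the safe variant, using a plain dict as a stand-in for the `[tool.poetry.dependencies]` section:

```python
# Sketch: match dependency names case-insensitively, but always index the
# table with the key that actually exists in it, never the raw user input.
dependencies = {"PyYAML": "^3.12"}  # stand-in for pyproject.toml content

def remove_dependency(name: str) -> str:
    for key in list(dependencies):
        if key.lower() == name.lower():
            # dependencies[name] would raise KeyError for "pyyaml";
            # the matched key is the correct index.
            return dependencies.pop(key)
    raise ValueError(f"Package {name} not found")

print(remove_dependency("pyyaml"))  # works regardless of the casing typed
```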
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `poetry/console/commands/remove.py`
Content:
```
1 from .venv_command import VenvCommand
2
3
4 class RemoveCommand(VenvCommand):
5 """
6 Removes a package from the project dependencies.
7
8 remove
9 { packages* : Packages that should be removed. }
10 {--D|dev : Removes a package from the development dependencies. }
11 {--dry-run : Outputs the operations but will not execute anything
12 (implicitly enables --verbose). }
13 """
14
15 help = """The <info>remove</info> command removes a package from the current
16 list of installed packages
17
18 <info>poetry remove</info>"""
19
20 _loggers = ["poetry.repositories.pypi_repository"]
21
22 def handle(self):
23 from poetry.installation import Installer
24
25 packages = self.argument("packages")
26 is_dev = self.option("dev")
27
28 original_content = self.poetry.file.read()
29 content = self.poetry.file.read()
30 poetry_content = content["tool"]["poetry"]
31 section = "dependencies"
32 if is_dev:
33 section = "dev-dependencies"
34
35 # Deleting entries
36 requirements = {}
37 for name in packages:
38 found = False
39 for key in poetry_content[section]:
40 if key.lower() == name.lower():
41 found = True
42 requirements[name] = poetry_content[section][name]
43 break
44
45 if not found:
46 raise ValueError("Package {} not found".format(name))
47
48 for key in requirements:
49 del poetry_content[section][key]
50
51 # Write the new content back
52 self.poetry.file.write(content)
53
54 # Update packages
55 self.reset_poetry()
56
57 installer = Installer(
58 self.output,
59 self.venv,
60 self.poetry.package,
61 self.poetry.locker,
62 self.poetry.pool,
63 )
64
65 installer.dry_run(self.option("dry-run"))
66 installer.update(True)
67 installer.whitelist(requirements)
68
69 try:
70 status = installer.run()
71 except Exception:
72 self.poetry.file.write(original_content)
73
74 raise
75
76 if status != 0 or self.option("dry-run"):
77 # Revert changes
78 if not self.option("dry-run"):
79 self.error(
80 "\n"
81 "Removal failed, reverting pyproject.toml "
82 "to its original content."
83 )
84
85 self.poetry.file.write(original_content)
86
87 return status
88
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/poetry/console/commands/remove.py b/poetry/console/commands/remove.py
--- a/poetry/console/commands/remove.py
+++ b/poetry/console/commands/remove.py
@@ -39,7 +39,7 @@
for key in poetry_content[section]:
if key.lower() == name.lower():
found = True
- requirements[name] = poetry_content[section][name]
+ requirements[key] = poetry_content[section][key]
break
if not found:
| {"golden_diff": "diff --git a/poetry/console/commands/remove.py b/poetry/console/commands/remove.py\n--- a/poetry/console/commands/remove.py\n+++ b/poetry/console/commands/remove.py\n@@ -39,7 +39,7 @@\n for key in poetry_content[section]:\n if key.lower() == name.lower():\n found = True\n- requirements[name] = poetry_content[section][name]\n+ requirements[key] = poetry_content[section][key]\n break\n \n if not found:\n", "issue": "\"poetry remove\" case-sensitivity (qol)\n```bash\r\n$ poetry add pyyaml\r\nUsing version ^3.12 for PyYAML\r\n\r\nUpdating dependencies\r\nResolving dependencies...\r\n\r\n\r\nPackage operations: 1 install, 0 updates, 0 removals\r\n\r\nWriting lock file\r\n\r\n - Installing pyyaml (3.12)\r\n$ poetry remove pyyaml\r\n\r\n[KeyError]\r\n\r\nremove [-D|--dev] [--dry-run] [--] <packages> (<packages>)...\r\n\r\n$ poetry remove PyYAML\r\nUpdating dependencies\r\nResolving dependencies...\r\n\r\n\r\nPackage operations: 0 installs, 0 updates, 1 removal\r\n\r\nWriting lock file\r\n\r\n - Removing pyyaml (3.12)\r\n```\r\n\r\nNot urgent but sending a hint such as \"Dependencies are case sensitive.\" would have been really helpful.\n", "before_files": [{"content": "from .venv_command import VenvCommand\n\n\nclass RemoveCommand(VenvCommand):\n \"\"\"\n Removes a package from the project dependencies.\n\n remove\n { packages* : Packages that should be removed. }\n {--D|dev : Removes a package from the development dependencies. }\n {--dry-run : Outputs the operations but will not execute anything\n (implicitly enables --verbose). }\n \"\"\"\n\n help = \"\"\"The <info>remove</info> command removes a package from the current\nlist of installed packages\n\n<info>poetry remove</info>\"\"\"\n\n _loggers = [\"poetry.repositories.pypi_repository\"]\n\n def handle(self):\n from poetry.installation import Installer\n\n packages = self.argument(\"packages\")\n is_dev = self.option(\"dev\")\n\n original_content = self.poetry.file.read()\n content = self.poetry.file.read()\n poetry_content = content[\"tool\"][\"poetry\"]\n section = \"dependencies\"\n if is_dev:\n section = \"dev-dependencies\"\n\n # Deleting entries\n requirements = {}\n for name in packages:\n found = False\n for key in poetry_content[section]:\n if key.lower() == name.lower():\n found = True\n requirements[name] = poetry_content[section][name]\n break\n\n if not found:\n raise ValueError(\"Package {} not found\".format(name))\n\n for key in requirements:\n del poetry_content[section][key]\n\n # Write the new content back\n self.poetry.file.write(content)\n\n # Update packages\n self.reset_poetry()\n\n installer = Installer(\n self.output,\n self.venv,\n self.poetry.package,\n self.poetry.locker,\n self.poetry.pool,\n )\n\n installer.dry_run(self.option(\"dry-run\"))\n installer.update(True)\n installer.whitelist(requirements)\n\n try:\n status = installer.run()\n except Exception:\n self.poetry.file.write(original_content)\n\n raise\n\n if status != 0 or self.option(\"dry-run\"):\n # Revert changes\n if not self.option(\"dry-run\"):\n self.error(\n \"\\n\"\n \"Removal failed, reverting pyproject.toml \"\n \"to its original content.\"\n )\n\n self.poetry.file.write(original_content)\n\n return status\n", "path": "poetry/console/commands/remove.py"}], "after_files": [{"content": "from .venv_command import VenvCommand\n\n\nclass RemoveCommand(VenvCommand):\n \"\"\"\n Removes a package from the project dependencies.\n\n remove\n { packages* : Packages that should be removed. 
}\n {--D|dev : Removes a package from the development dependencies. }\n {--dry-run : Outputs the operations but will not execute anything\n (implicitly enables --verbose). }\n \"\"\"\n\n help = \"\"\"The <info>remove</info> command removes a package from the current\nlist of installed packages\n\n<info>poetry remove</info>\"\"\"\n\n _loggers = [\"poetry.repositories.pypi_repository\"]\n\n def handle(self):\n from poetry.installation import Installer\n\n packages = self.argument(\"packages\")\n is_dev = self.option(\"dev\")\n\n original_content = self.poetry.file.read()\n content = self.poetry.file.read()\n poetry_content = content[\"tool\"][\"poetry\"]\n section = \"dependencies\"\n if is_dev:\n section = \"dev-dependencies\"\n\n # Deleting entries\n requirements = {}\n for name in packages:\n found = False\n for key in poetry_content[section]:\n if key.lower() == name.lower():\n found = True\n requirements[key] = poetry_content[section][key]\n break\n\n if not found:\n raise ValueError(\"Package {} not found\".format(name))\n\n for key in requirements:\n del poetry_content[section][key]\n\n # Write the new content back\n self.poetry.file.write(content)\n\n # Update packages\n self.reset_poetry()\n\n installer = Installer(\n self.output,\n self.venv,\n self.poetry.package,\n self.poetry.locker,\n self.poetry.pool,\n )\n\n installer.dry_run(self.option(\"dry-run\"))\n installer.update(True)\n installer.whitelist(requirements)\n\n try:\n status = installer.run()\n except Exception:\n self.poetry.file.write(original_content)\n\n raise\n\n if status != 0 or self.option(\"dry-run\"):\n # Revert changes\n if not self.option(\"dry-run\"):\n self.error(\n \"\\n\"\n \"Removal failed, reverting pyproject.toml \"\n \"to its original content.\"\n )\n\n self.poetry.file.write(original_content)\n\n return status\n", "path": "poetry/console/commands/remove.py"}]} |
gh_patches_debug_1177 | rasdani/github-patches | git_diff | openvinotoolkit__datumaro-426 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug: concatenation of different types in COCO format
https://github.com/openvinotoolkit/datumaro/blob/935fb0ede3c70a68582c7d13b5f5450e51f81235/datumaro/plugins/coco_format/converter.py#L232-L234
`masks` has type generator, but then we use `+=` between a generator and an `itertools.chain` object; Python doesn't support such concatenation.
--- END ISSUE ---
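The type mismatch can be reproduced in isolation: neither generators nor `itertools.chain` objects define addition, so the in-place `+=` raises `TypeError` before any mask is merged, while plain rebinding works. A standalone sketch (the nested lists stand in for mask arrays):

```python
# Sketch: a generator does not support `+=` with an itertools.chain object;
# rebinding the name to a new chain is the working pattern used in the fix.
from itertools import chain

masks = (m for m in [[0, 1], [1, 0]])  # stand-in for (m.image for m in masks)
extra_mask = [[1, 1]]                  # stand-in for the rasterized polygons

try:
    masks += chain(masks, [extra_mask])  # mirrors the buggy line
except TypeError as exc:
    print("TypeError:", exc)

masks = (m for m in [[0, 1], [1, 0]])   # fresh generator for the fixed form
masks = chain(masks, [extra_mask])      # the fix: plain assignment
print(list(masks))                      # all three masks reach merge_masks
```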
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `datumaro/plugins/coco_format/converter.py`
Content:
```
1 # Copyright (C) 2020-2021 Intel Corporation
2 #
3 # SPDX-License-Identifier: MIT
4
5 from enum import Enum, auto
6 from itertools import chain, groupby
7 import json
8 import logging as log
9 import os
10 import os.path as osp
11
12 import pycocotools.mask as mask_utils
13
14 from datumaro.components.converter import Converter
15 from datumaro.components.dataset import ItemStatus
16 from datumaro.components.extractor import (
17 _COORDINATE_ROUNDING_DIGITS, AnnotationType, DatasetItem, Points,
18 )
19 from datumaro.util import cast, find, str_to_bool
20 from datumaro.util.image import save_image
21 import datumaro.util.annotation_util as anno_tools
22 import datumaro.util.mask_tools as mask_tools
23
24 from .format import CocoPath, CocoTask
25
26
27 class SegmentationMode(Enum):
28 guess = auto()
29 polygons = auto()
30 mask = auto()
31
32 class _TaskConverter:
33 def __init__(self, context):
34 self._min_ann_id = 1
35 self._context = context
36
37 data = {
38 'licenses': [],
39 'info': {},
40 'categories': [],
41 'images': [],
42 'annotations': []
43 }
44
45 data['licenses'].append({
46 'name': '',
47 'id': 0,
48 'url': ''
49 })
50
51 data['info'] = {
52 'contributor': '',
53 'date_created': '',
54 'description': '',
55 'url': '',
56 'version': '',
57 'year': ''
58 }
59 self._data = data
60
61 def is_empty(self):
62 return len(self._data['annotations']) == 0
63
64 def _get_image_id(self, item):
65 return self._context._get_image_id(item)
66
67 def save_image_info(self, item, filename):
68 h = 0
69 w = 0
70 if item.has_image and item.image.size:
71 h, w = item.image.size
72
73 self._data['images'].append({
74 'id': self._get_image_id(item),
75 'width': int(w),
76 'height': int(h),
77 'file_name': cast(filename, str, ''),
78 'license': 0,
79 'flickr_url': '',
80 'coco_url': '',
81 'date_captured': 0,
82 })
83
84 def save_categories(self, dataset):
85 raise NotImplementedError()
86
87 def save_annotations(self, item):
88 raise NotImplementedError()
89
90 def write(self, path):
91 next_id = self._min_ann_id
92 for ann in self.annotations:
93 if not ann['id']:
94 ann['id'] = next_id
95 next_id += 1
96
97 with open(path, 'w', encoding='utf-8') as outfile:
98 json.dump(self._data, outfile, ensure_ascii=False)
99
100 @property
101 def annotations(self):
102 return self._data['annotations']
103
104 @property
105 def categories(self):
106 return self._data['categories']
107
108 def _get_ann_id(self, annotation):
109 ann_id = 0 if self._context._reindex else annotation.id
110 if ann_id:
111 self._min_ann_id = max(ann_id, self._min_ann_id)
112 return ann_id
113
114 @staticmethod
115 def _convert_attributes(ann):
116 return { k: v for k, v in ann.attributes.items()
117 if k not in {'is_crowd', 'score'}
118 }
119
120 class _ImageInfoConverter(_TaskConverter):
121 def is_empty(self):
122 return len(self._data['images']) == 0
123
124 def save_categories(self, dataset):
125 pass
126
127 def save_annotations(self, item):
128 pass
129
130 class _CaptionsConverter(_TaskConverter):
131 def save_categories(self, dataset):
132 pass
133
134 def save_annotations(self, item):
135 for ann_idx, ann in enumerate(item.annotations):
136 if ann.type != AnnotationType.caption:
137 continue
138
139 elem = {
140 'id': self._get_ann_id(ann),
141 'image_id': self._get_image_id(item),
142 'category_id': 0, # NOTE: workaround for a bug in cocoapi
143 'caption': ann.caption,
144 }
145 if 'score' in ann.attributes:
146 try:
147 elem['score'] = float(ann.attributes['score'])
148 except Exception as e:
149 log.warning("Item '%s', ann #%s: failed to convert "
150 "attribute 'score': %e" % (item.id, ann_idx, e))
151 if self._context._allow_attributes:
152 attrs = self._convert_attributes(ann)
153 if attrs:
154 elem['attributes'] = attrs
155
156 self.annotations.append(elem)
157
158 class _InstancesConverter(_TaskConverter):
159 def save_categories(self, dataset):
160 label_categories = dataset.categories().get(AnnotationType.label)
161 if label_categories is None:
162 return
163
164 for idx, cat in enumerate(label_categories.items):
165 self.categories.append({
166 'id': 1 + idx,
167 'name': cast(cat.name, str, ''),
168 'supercategory': cast(cat.parent, str, ''),
169 })
170
171 @classmethod
172 def crop_segments(cls, instances, img_width, img_height):
173 instances = sorted(instances, key=lambda x: x[0].z_order)
174
175 segment_map = []
176 segments = []
177 for inst_idx, (_, polygons, mask, _) in enumerate(instances):
178 if polygons:
179 segment_map.extend(inst_idx for p in polygons)
180 segments.extend(polygons)
181 elif mask is not None:
182 segment_map.append(inst_idx)
183 segments.append(mask)
184
185 segments = mask_tools.crop_covered_segments(
186 segments, img_width, img_height)
187
188 for inst_idx, inst in enumerate(instances):
189 new_segments = [s for si_id, s in zip(segment_map, segments)
190 if si_id == inst_idx]
191
192 if not new_segments:
193 inst[1] = []
194 inst[2] = None
195 continue
196
197 if inst[1]:
198 inst[1] = sum(new_segments, [])
199 else:
200 mask = mask_tools.merge_masks(new_segments)
201 inst[2] = mask_tools.mask_to_rle(mask)
202
203 return instances
204
205 def find_instance_parts(self, group, img_width, img_height):
206 boxes = [a for a in group if a.type == AnnotationType.bbox]
207 polygons = [a for a in group if a.type == AnnotationType.polygon]
208 masks = [a for a in group if a.type == AnnotationType.mask]
209
210 anns = boxes + polygons + masks
211 leader = anno_tools.find_group_leader(anns)
212 bbox = anno_tools.max_bbox(anns)
213 mask = None
214 polygons = [p.points for p in polygons]
215
216 if self._context._segmentation_mode == SegmentationMode.guess:
217 use_masks = True == leader.attributes.get('is_crowd',
218 find(masks, lambda x: x.label == leader.label) is not None)
219 elif self._context._segmentation_mode == SegmentationMode.polygons:
220 use_masks = False
221 elif self._context._segmentation_mode == SegmentationMode.mask:
222 use_masks = True
223 else:
224 raise NotImplementedError("Unexpected segmentation mode '%s'" % \
225 self._context._segmentation_mode)
226
227 if use_masks:
228 if polygons:
229 mask = mask_tools.rles_to_mask(polygons, img_width, img_height)
230
231 if masks:
232 masks = (m.image for m in masks)
233 if mask is not None:
234 masks += chain(masks, [mask])
235 mask = mask_tools.merge_masks(masks)
236
237 if mask is not None:
238 mask = mask_tools.mask_to_rle(mask)
239 polygons = []
240 else:
241 if masks:
242 mask = mask_tools.merge_masks(m.image for m in masks)
243 polygons += mask_tools.mask_to_polygons(mask)
244 mask = None
245
246 return [leader, polygons, mask, bbox]
247
248 @staticmethod
249 def find_instance_anns(annotations):
250 return [a for a in annotations
251 if a.type in { AnnotationType.bbox,
252 AnnotationType.polygon, AnnotationType.mask }
253 ]
254
255 @classmethod
256 def find_instances(cls, annotations):
257 return anno_tools.find_instances(cls.find_instance_anns(annotations))
258
259 def save_annotations(self, item):
260 instances = self.find_instances(item.annotations)
261 if not instances:
262 return
263
264 if not item.has_image or not item.image.size:
265 log.warning("Item '%s': skipping writing instances "
266 "since no image info available" % item.id)
267 return
268 h, w = item.image.size
269 instances = [self.find_instance_parts(i, w, h) for i in instances]
270
271 if self._context._crop_covered:
272 instances = self.crop_segments(instances, w, h)
273
274 for instance in instances:
275 elem = self.convert_instance(instance, item)
276 if elem:
277 self.annotations.append(elem)
278
279 def convert_instance(self, instance, item):
280 ann, polygons, mask, bbox = instance
281
282 is_crowd = mask is not None
283 if is_crowd:
284 segmentation = {
285 'counts': list(int(c) for c in mask['counts']),
286 'size': list(int(c) for c in mask['size'])
287 }
288 else:
289 segmentation = [list(map(float, p)) for p in polygons]
290
291 area = 0
292 if segmentation:
293 if item.has_image and item.image.size:
294 h, w = item.image.size
295 else:
296 # Here we can guess the image size as
297 # it is only needed for the area computation
298 w = bbox[0] + bbox[2]
299 h = bbox[1] + bbox[3]
300
301 rles = mask_utils.frPyObjects(segmentation, h, w)
302 if is_crowd:
303 rles = [rles]
304 else:
305 rles = mask_utils.merge(rles)
306 area = mask_utils.area(rles)
307 else:
308 _, _, w, h = bbox
309 segmentation = []
310 area = w * h
311
312 elem = {
313 'id': self._get_ann_id(ann),
314 'image_id': self._get_image_id(item),
315 'category_id': cast(ann.label, int, -1) + 1,
316 'segmentation': segmentation,
317 'area': float(area),
318 'bbox': [round(float(n), _COORDINATE_ROUNDING_DIGITS) for n in bbox],
319 'iscrowd': int(is_crowd),
320 }
321 if 'score' in ann.attributes:
322 try:
323 elem['score'] = float(ann.attributes['score'])
324 except Exception as e:
325 log.warning("Item '%s': failed to convert attribute "
326 "'score': %e" % (item.id, e))
327 if self._context._allow_attributes:
328 attrs = self._convert_attributes(ann)
329 if attrs:
330 elem['attributes'] = attrs
331
332 return elem
333
334 class _KeypointsConverter(_InstancesConverter):
335 def save_categories(self, dataset):
336 label_categories = dataset.categories().get(AnnotationType.label)
337 if label_categories is None:
338 return
339 point_categories = dataset.categories().get(AnnotationType.points)
340
341 for idx, label_cat in enumerate(label_categories.items):
342 cat = {
343 'id': 1 + idx,
344 'name': cast(label_cat.name, str, ''),
345 'supercategory': cast(label_cat.parent, str, ''),
346 'keypoints': [],
347 'skeleton': [],
348 }
349
350 if point_categories is not None:
351 kp_cat = point_categories.items.get(idx)
352 if kp_cat is not None:
353 cat.update({
354 'keypoints': [str(l) for l in kp_cat.labels],
355 'skeleton': [list(map(int, j)) for j in kp_cat.joints],
356 })
357 self.categories.append(cat)
358
359 def save_annotations(self, item):
360 point_annotations = [a for a in item.annotations
361 if a.type == AnnotationType.points]
362 if not point_annotations:
363 return
364
365 # Create annotations for solitary keypoints annotations
366 for points in self.find_solitary_points(item.annotations):
367 instance = [points, [], None, points.get_bbox()]
368 elem = super().convert_instance(instance, item)
369 elem.update(self.convert_points_object(points))
370 self.annotations.append(elem)
371
372 # Create annotations for complete instance + keypoints annotations
373 super().save_annotations(item)
374
375 @classmethod
376 def find_solitary_points(cls, annotations):
377 annotations = sorted(annotations, key=lambda a: a.group)
378 solitary_points = []
379
380 for g_id, group in groupby(annotations, lambda a: a.group):
381 if not g_id or g_id and not cls.find_instance_anns(group):
382 group = [a for a in group if a.type == AnnotationType.points]
383 solitary_points.extend(group)
384
385 return solitary_points
386
387 @staticmethod
388 def convert_points_object(ann):
389 keypoints = []
390 points = ann.points
391 visibility = ann.visibility
392 for index in range(0, len(points), 2):
393 kp = points[index : index + 2]
394 state = visibility[index // 2].value
395 keypoints.extend([*kp, state])
396
397 num_annotated = len([v for v in visibility \
398 if v != Points.Visibility.absent])
399
400 return {
401 'keypoints': keypoints,
402 'num_keypoints': num_annotated,
403 }
404
405 def convert_instance(self, instance, item):
406 points_ann = find(item.annotations, lambda x: \
407 x.type == AnnotationType.points and \
408 instance[0].group and x.group == instance[0].group)
409 if not points_ann:
410 return None
411
412 elem = super().convert_instance(instance, item)
413 elem.update(self.convert_points_object(points_ann))
414
415 return elem
416
417 class _LabelsConverter(_TaskConverter):
418 def save_categories(self, dataset):
419 label_categories = dataset.categories().get(AnnotationType.label)
420 if label_categories is None:
421 return
422
423 for idx, cat in enumerate(label_categories.items):
424 self.categories.append({
425 'id': 1 + idx,
426 'name': cast(cat.name, str, ''),
427 'supercategory': cast(cat.parent, str, ''),
428 })
429
430 def save_annotations(self, item):
431 for ann in item.annotations:
432 if ann.type != AnnotationType.label:
433 continue
434
435 elem = {
436 'id': self._get_ann_id(ann),
437 'image_id': self._get_image_id(item),
438 'category_id': int(ann.label) + 1,
439 }
440 if 'score' in ann.attributes:
441 try:
442 elem['score'] = float(ann.attributes['score'])
443 except Exception as e:
444 log.warning("Item '%s': failed to convert attribute "
445 "'score': %e" % (item.id, e))
446 if self._context._allow_attributes:
447 attrs = self._convert_attributes(ann)
448 if attrs:
449 elem['attributes'] = attrs
450
451 self.annotations.append(elem)
452
453 class _StuffConverter(_InstancesConverter):
454 pass
455
456 class _PanopticConverter(_TaskConverter):
457 def write(self, path):
458 with open(path, 'w', encoding='utf-8') as outfile:
459 json.dump(self._data, outfile, ensure_ascii=False)
460
461 def save_categories(self, dataset):
462 label_categories = dataset.categories().get(AnnotationType.label)
463 if label_categories is None:
464 return
465
466 for idx, cat in enumerate(label_categories.items):
467 self.categories.append({
468 'id': 1 + idx,
469 'name': cast(cat.name, str, ''),
470 'supercategory': cast(cat.parent, str, ''),
471 'isthing': 0, # TODO: can't represent this information yet
472 })
473
474 def save_annotations(self, item):
475 if not item.has_image:
476 return
477
478 ann_filename = item.id + CocoPath.PANOPTIC_EXT
479
480 segments_info = list()
481 masks = []
482 next_id = self._min_ann_id
483 for ann in item.annotations:
484 if ann.type != AnnotationType.mask:
485 continue
486
487 if not ann.id:
488 ann.id = next_id
489 next_id += 1
490
491 segment_info = {}
492 segment_info['id'] = ann.id
493 segment_info['category_id'] = cast(ann.label, int, -1) + 1
494 segment_info['area'] = float(ann.get_area())
495 segment_info['bbox'] = [float(p) for p in ann.get_bbox()]
496 segment_info['iscrowd'] = cast(ann.attributes.get("is_crowd"), int, 0)
497 segments_info.append(segment_info)
498 masks.append(ann)
499
500 if not masks:
501 return
502
503 pan_format = mask_tools.merge_masks((m.image, m.id) for m in masks)
504 save_image(osp.join(self._context._segmentation_dir, ann_filename),
505 mask_tools.index2bgr(pan_format), create_dir=True)
506
507 elem = {
508 'image_id': self._get_image_id(item),
509 'file_name': ann_filename,
510 'segments_info': segments_info
511 }
512 self.annotations.append(elem)
513
514 class CocoConverter(Converter):
515 @staticmethod
516 def _split_tasks_string(s):
517 return [CocoTask[i.strip()] for i in s.split(',')]
518
519 @classmethod
520 def build_cmdline_parser(cls, **kwargs):
521 parser = super().build_cmdline_parser(**kwargs)
522 parser.add_argument('--segmentation-mode',
523 choices=[m.name for m in SegmentationMode],
524 default=SegmentationMode.guess.name,
525 help="""
526 Save mode for instance segmentation:|n
527 - '{sm.guess.name}': guess the mode for each instance,|n
528 |s|suse 'is_crowd' attribute as hint|n
529 - '{sm.polygons.name}': save polygons,|n
530 |s|smerge and convert masks, prefer polygons|n
531 - '{sm.mask.name}': save masks,|n
532 |s|smerge and convert polygons, prefer masks|n
533 Default: %(default)s.
534 """.format(sm=SegmentationMode))
535 parser.add_argument('--crop-covered', action='store_true',
536 help="Crop covered segments so that background objects' "
537 "segmentation was more accurate (default: %(default)s)")
538 parser.add_argument('--allow-attributes',
539 type=str_to_bool, default=True,
540 help="Allow export of attributes (default: %(default)s)")
541 parser.add_argument('--reindex', type=str_to_bool, default=False,
542 help="Assign new indices to images and annotations, "
543 "useful to avoid merge conflicts (default: %(default)s)")
544 parser.add_argument('--merge-images', type=str_to_bool, default=False,
545 help="Save all images into a single "
546 "directory (default: %(default)s)")
547 parser.add_argument('--tasks', type=cls._split_tasks_string,
548 help="COCO task filter, comma-separated list of {%s} "
549 "(default: all)" % ', '.join(t.name for t in CocoTask))
550 return parser
551
552 DEFAULT_IMAGE_EXT = CocoPath.IMAGE_EXT
553
554 _TASK_CONVERTER = {
555 CocoTask.image_info: _ImageInfoConverter,
556 CocoTask.instances: _InstancesConverter,
557 CocoTask.person_keypoints: _KeypointsConverter,
558 CocoTask.captions: _CaptionsConverter,
559 CocoTask.labels: _LabelsConverter,
560 CocoTask.panoptic: _PanopticConverter,
561 CocoTask.stuff: _StuffConverter,
562 }
563
564 def __init__(self, extractor, save_dir,
565 tasks=None, segmentation_mode=None, crop_covered=False,
566 allow_attributes=True, reindex=False, merge_images=False,
567 **kwargs):
568 super().__init__(extractor, save_dir, **kwargs)
569
570 assert tasks is None or isinstance(tasks, (CocoTask, list, str))
571 if isinstance(tasks, CocoTask):
572 tasks = [tasks]
573 elif isinstance(tasks, str):
574 tasks = [CocoTask[tasks]]
575 elif tasks:
576 for i, t in enumerate(tasks):
577 if isinstance(t, str):
578 tasks[i] = CocoTask[t]
579 else:
580 assert t in CocoTask, t
581 else:
582 tasks = set()
583 self._tasks = tasks
584
585 assert segmentation_mode is None or \
586 isinstance(segmentation_mode, str) or \
587 segmentation_mode in SegmentationMode
588 if segmentation_mode is None:
589 segmentation_mode = SegmentationMode.guess
590 if isinstance(segmentation_mode, str):
591 segmentation_mode = SegmentationMode[segmentation_mode]
592 self._segmentation_mode = segmentation_mode
593
594 self._crop_covered = crop_covered
595 self._allow_attributes = allow_attributes
596 self._reindex = reindex
597 self._merge_images = merge_images
598
599 self._image_ids = {}
600
601 self._patch = None
602
603 def _make_dirs(self):
604 self._images_dir = osp.join(self._save_dir, CocoPath.IMAGES_DIR)
605 os.makedirs(self._images_dir, exist_ok=True)
606
607 self._ann_dir = osp.join(self._save_dir, CocoPath.ANNOTATIONS_DIR)
608 os.makedirs(self._ann_dir, exist_ok=True)
609
610 def _make_segmentation_dir(self, subset_name):
611 self._segmentation_dir = osp.join(self._save_dir,
612 CocoPath.ANNOTATIONS_DIR, 'panoptic_'+ subset_name)
613 os.makedirs(self._segmentation_dir, exist_ok=True)
614
615 def _make_task_converter(self, task):
616 if task not in self._TASK_CONVERTER:
617 raise NotImplementedError()
618 return self._TASK_CONVERTER[task](self)
619
620 def _make_task_converters(self):
621 return { task: self._make_task_converter(task)
622 for task in (self._tasks or self._TASK_CONVERTER) }
623
624 def _get_image_id(self, item):
625 image_id = self._image_ids.get(item.id)
626 if image_id is None:
627 if not self._reindex:
628 image_id = cast(item.attributes.get('id'), int,
629 len(self._image_ids) + 1)
630 else:
631 image_id = len(self._image_ids) + 1
632 self._image_ids[item.id] = image_id
633 return image_id
634
635 def apply(self):
636 self._make_dirs()
637
638 for subset_name, subset in self._extractor.subsets().items():
639 task_converters = self._make_task_converters()
640 for task_conv in task_converters.values():
641 task_conv.save_categories(subset)
642 if CocoTask.panoptic in task_converters:
643 self._make_segmentation_dir(subset_name)
644
645 for item in subset:
646 if self._save_images:
647 if item.has_image:
648 self._save_image(item, subdir=osp.join(self._images_dir,
649 '' if self._merge_images else subset_name))
650 else:
651 log.debug("Item '%s' has no image info", item.id)
652 for task_conv in task_converters.values():
653 task_conv.save_image_info(item,
654 self._make_image_filename(item))
655 task_conv.save_annotations(item)
656
657 for task, task_conv in task_converters.items():
658 ann_file = osp.join(self._ann_dir,
659 '%s_%s.json' % (task.name, subset_name))
660
661 if task_conv.is_empty() and (not self._tasks or self._patch):
662 if task == CocoTask.panoptic:
663 os.rmdir(self._segmentation_dir)
664 if self._patch:
665 if osp.isfile(ann_file):
666 # Remove subsets that became empty
667 os.remove(ann_file)
668 continue
669
670 task_conv.write(ann_file)
671
672 @classmethod
673 def patch(cls, dataset, patch, save_dir, **kwargs):
674 for subset in patch.updated_subsets:
675 conv = cls(dataset.get_subset(subset), save_dir=save_dir, **kwargs)
676 conv._patch = patch
677 conv.apply()
678
679 conv = cls(dataset, save_dir=save_dir, **kwargs)
680 images_dir = osp.join(save_dir, CocoPath.IMAGES_DIR)
681 for (item_id, subset), status in patch.updated_items.items():
682 if status != ItemStatus.removed:
683 item = patch.data.get(item_id, subset)
684 else:
685 item = DatasetItem(item_id, subset=subset)
686
687 if not (status == ItemStatus.removed or not item.has_image):
688 continue
689
690 # Converter supports saving in separate dirs and common image dir
691
692 image_path = osp.join(images_dir, conv._make_image_filename(item))
693 if osp.isfile(image_path):
694 os.unlink(image_path)
695
696 image_path = osp.join(images_dir, subset,
697 conv._make_image_filename(item))
698 if osp.isfile(image_path):
699 os.unlink(image_path)
700
701
702 class CocoInstancesConverter(CocoConverter):
703 def __init__(self, *args, **kwargs):
704 kwargs['tasks'] = CocoTask.instances
705 super().__init__(*args, **kwargs)
706
707 class CocoImageInfoConverter(CocoConverter):
708 def __init__(self, *args, **kwargs):
709 kwargs['tasks'] = CocoTask.image_info
710 super().__init__(*args, **kwargs)
711
712 class CocoPersonKeypointsConverter(CocoConverter):
713 def __init__(self, *args, **kwargs):
714 kwargs['tasks'] = CocoTask.person_keypoints
715 super().__init__(*args, **kwargs)
716
717 class CocoCaptionsConverter(CocoConverter):
718 def __init__(self, *args, **kwargs):
719 kwargs['tasks'] = CocoTask.captions
720 super().__init__(*args, **kwargs)
721
722 class CocoLabelsConverter(CocoConverter):
723 def __init__(self, *args, **kwargs):
724 kwargs['tasks'] = CocoTask.labels
725 super().__init__(*args, **kwargs)
726
727 class CocoPanopticConverter(CocoConverter):
728 def __init__(self, *args, **kwargs):
729 kwargs['tasks'] = CocoTask.panoptic
730 super().__init__(*args, **kwargs)
731
732 class CocoStuffConverter(CocoConverter):
733 def __init__(self, *args, **kwargs):
734 kwargs['tasks'] = CocoTask.stuff
735 kwargs['segmentation_mode'] = SegmentationMode.mask
736 super().__init__(*args, **kwargs)
737
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/datumaro/plugins/coco_format/converter.py b/datumaro/plugins/coco_format/converter.py
--- a/datumaro/plugins/coco_format/converter.py
+++ b/datumaro/plugins/coco_format/converter.py
@@ -231,7 +231,7 @@
if masks:
masks = (m.image for m in masks)
if mask is not None:
- masks += chain(masks, [mask])
+ masks = chain(masks, [mask])
mask = mask_tools.merge_masks(masks)
if mask is not None:
| {"golden_diff": "diff --git a/datumaro/plugins/coco_format/converter.py b/datumaro/plugins/coco_format/converter.py\n--- a/datumaro/plugins/coco_format/converter.py\n+++ b/datumaro/plugins/coco_format/converter.py\n@@ -231,7 +231,7 @@\n if masks:\n masks = (m.image for m in masks)\n if mask is not None:\n- masks += chain(masks, [mask])\n+ masks = chain(masks, [mask])\n mask = mask_tools.merge_masks(masks)\n \n if mask is not None:\n", "issue": "Bug: concatenation for the different types in COCO format \nhttps://github.com/openvinotoolkit/datumaro/blob/935fb0ede3c70a68582c7d13b5f5450e51f81235/datumaro/plugins/coco_format/converter.py#L232-L234\r\nmasks has type generator, but then we use += between generator and itertools.chain Python doesn't support such concatenation.\n", "before_files": [{"content": "# Copyright (C) 2020-2021 Intel Corporation\n#\n# SPDX-License-Identifier: MIT\n\nfrom enum import Enum, auto\nfrom itertools import chain, groupby\nimport json\nimport logging as log\nimport os\nimport os.path as osp\n\nimport pycocotools.mask as mask_utils\n\nfrom datumaro.components.converter import Converter\nfrom datumaro.components.dataset import ItemStatus\nfrom datumaro.components.extractor import (\n _COORDINATE_ROUNDING_DIGITS, AnnotationType, DatasetItem, Points,\n)\nfrom datumaro.util import cast, find, str_to_bool\nfrom datumaro.util.image import save_image\nimport datumaro.util.annotation_util as anno_tools\nimport datumaro.util.mask_tools as mask_tools\n\nfrom .format import CocoPath, CocoTask\n\n\nclass SegmentationMode(Enum):\n guess = auto()\n polygons = auto()\n mask = auto()\n\nclass _TaskConverter:\n def __init__(self, context):\n self._min_ann_id = 1\n self._context = context\n\n data = {\n 'licenses': [],\n 'info': {},\n 'categories': [],\n 'images': [],\n 'annotations': []\n }\n\n data['licenses'].append({\n 'name': '',\n 'id': 0,\n 'url': ''\n })\n\n data['info'] = {\n 'contributor': '',\n 'date_created': '',\n 'description': '',\n 'url': '',\n 'version': '',\n 'year': ''\n }\n self._data = data\n\n def is_empty(self):\n return len(self._data['annotations']) == 0\n\n def _get_image_id(self, item):\n return self._context._get_image_id(item)\n\n def save_image_info(self, item, filename):\n h = 0\n w = 0\n if item.has_image and item.image.size:\n h, w = item.image.size\n\n self._data['images'].append({\n 'id': self._get_image_id(item),\n 'width': int(w),\n 'height': int(h),\n 'file_name': cast(filename, str, ''),\n 'license': 0,\n 'flickr_url': '',\n 'coco_url': '',\n 'date_captured': 0,\n })\n\n def save_categories(self, dataset):\n raise NotImplementedError()\n\n def save_annotations(self, item):\n raise NotImplementedError()\n\n def write(self, path):\n next_id = self._min_ann_id\n for ann in self.annotations:\n if not ann['id']:\n ann['id'] = next_id\n next_id += 1\n\n with open(path, 'w', encoding='utf-8') as outfile:\n json.dump(self._data, outfile, ensure_ascii=False)\n\n @property\n def annotations(self):\n return self._data['annotations']\n\n @property\n def categories(self):\n return self._data['categories']\n\n def _get_ann_id(self, annotation):\n ann_id = 0 if self._context._reindex else annotation.id\n if ann_id:\n self._min_ann_id = max(ann_id, self._min_ann_id)\n return ann_id\n\n @staticmethod\n def _convert_attributes(ann):\n return { k: v for k, v in ann.attributes.items()\n if k not in {'is_crowd', 'score'}\n }\n\nclass _ImageInfoConverter(_TaskConverter):\n def is_empty(self):\n return len(self._data['images']) == 0\n\n def save_categories(self, 
dataset):\n pass\n\n def save_annotations(self, item):\n pass\n\nclass _CaptionsConverter(_TaskConverter):\n def save_categories(self, dataset):\n pass\n\n def save_annotations(self, item):\n for ann_idx, ann in enumerate(item.annotations):\n if ann.type != AnnotationType.caption:\n continue\n\n elem = {\n 'id': self._get_ann_id(ann),\n 'image_id': self._get_image_id(item),\n 'category_id': 0, # NOTE: workaround for a bug in cocoapi\n 'caption': ann.caption,\n }\n if 'score' in ann.attributes:\n try:\n elem['score'] = float(ann.attributes['score'])\n except Exception as e:\n log.warning(\"Item '%s', ann #%s: failed to convert \"\n \"attribute 'score': %e\" % (item.id, ann_idx, e))\n if self._context._allow_attributes:\n attrs = self._convert_attributes(ann)\n if attrs:\n elem['attributes'] = attrs\n\n self.annotations.append(elem)\n\nclass _InstancesConverter(_TaskConverter):\n def save_categories(self, dataset):\n label_categories = dataset.categories().get(AnnotationType.label)\n if label_categories is None:\n return\n\n for idx, cat in enumerate(label_categories.items):\n self.categories.append({\n 'id': 1 + idx,\n 'name': cast(cat.name, str, ''),\n 'supercategory': cast(cat.parent, str, ''),\n })\n\n @classmethod\n def crop_segments(cls, instances, img_width, img_height):\n instances = sorted(instances, key=lambda x: x[0].z_order)\n\n segment_map = []\n segments = []\n for inst_idx, (_, polygons, mask, _) in enumerate(instances):\n if polygons:\n segment_map.extend(inst_idx for p in polygons)\n segments.extend(polygons)\n elif mask is not None:\n segment_map.append(inst_idx)\n segments.append(mask)\n\n segments = mask_tools.crop_covered_segments(\n segments, img_width, img_height)\n\n for inst_idx, inst in enumerate(instances):\n new_segments = [s for si_id, s in zip(segment_map, segments)\n if si_id == inst_idx]\n\n if not new_segments:\n inst[1] = []\n inst[2] = None\n continue\n\n if inst[1]:\n inst[1] = sum(new_segments, [])\n else:\n mask = mask_tools.merge_masks(new_segments)\n inst[2] = mask_tools.mask_to_rle(mask)\n\n return instances\n\n def find_instance_parts(self, group, img_width, img_height):\n boxes = [a for a in group if a.type == AnnotationType.bbox]\n polygons = [a for a in group if a.type == AnnotationType.polygon]\n masks = [a for a in group if a.type == AnnotationType.mask]\n\n anns = boxes + polygons + masks\n leader = anno_tools.find_group_leader(anns)\n bbox = anno_tools.max_bbox(anns)\n mask = None\n polygons = [p.points for p in polygons]\n\n if self._context._segmentation_mode == SegmentationMode.guess:\n use_masks = True == leader.attributes.get('is_crowd',\n find(masks, lambda x: x.label == leader.label) is not None)\n elif self._context._segmentation_mode == SegmentationMode.polygons:\n use_masks = False\n elif self._context._segmentation_mode == SegmentationMode.mask:\n use_masks = True\n else:\n raise NotImplementedError(\"Unexpected segmentation mode '%s'\" % \\\n self._context._segmentation_mode)\n\n if use_masks:\n if polygons:\n mask = mask_tools.rles_to_mask(polygons, img_width, img_height)\n\n if masks:\n masks = (m.image for m in masks)\n if mask is not None:\n masks += chain(masks, [mask])\n mask = mask_tools.merge_masks(masks)\n\n if mask is not None:\n mask = mask_tools.mask_to_rle(mask)\n polygons = []\n else:\n if masks:\n mask = mask_tools.merge_masks(m.image for m in masks)\n polygons += mask_tools.mask_to_polygons(mask)\n mask = None\n\n return [leader, polygons, mask, bbox]\n\n @staticmethod\n def find_instance_anns(annotations):\n return 
[a for a in annotations\n if a.type in { AnnotationType.bbox,\n AnnotationType.polygon, AnnotationType.mask }\n ]\n\n @classmethod\n def find_instances(cls, annotations):\n return anno_tools.find_instances(cls.find_instance_anns(annotations))\n\n def save_annotations(self, item):\n instances = self.find_instances(item.annotations)\n if not instances:\n return\n\n if not item.has_image or not item.image.size:\n log.warning(\"Item '%s': skipping writing instances \"\n \"since no image info available\" % item.id)\n return\n h, w = item.image.size\n instances = [self.find_instance_parts(i, w, h) for i in instances]\n\n if self._context._crop_covered:\n instances = self.crop_segments(instances, w, h)\n\n for instance in instances:\n elem = self.convert_instance(instance, item)\n if elem:\n self.annotations.append(elem)\n\n def convert_instance(self, instance, item):\n ann, polygons, mask, bbox = instance\n\n is_crowd = mask is not None\n if is_crowd:\n segmentation = {\n 'counts': list(int(c) for c in mask['counts']),\n 'size': list(int(c) for c in mask['size'])\n }\n else:\n segmentation = [list(map(float, p)) for p in polygons]\n\n area = 0\n if segmentation:\n if item.has_image and item.image.size:\n h, w = item.image.size\n else:\n # Here we can guess the image size as\n # it is only needed for the area computation\n w = bbox[0] + bbox[2]\n h = bbox[1] + bbox[3]\n\n rles = mask_utils.frPyObjects(segmentation, h, w)\n if is_crowd:\n rles = [rles]\n else:\n rles = mask_utils.merge(rles)\n area = mask_utils.area(rles)\n else:\n _, _, w, h = bbox\n segmentation = []\n area = w * h\n\n elem = {\n 'id': self._get_ann_id(ann),\n 'image_id': self._get_image_id(item),\n 'category_id': cast(ann.label, int, -1) + 1,\n 'segmentation': segmentation,\n 'area': float(area),\n 'bbox': [round(float(n), _COORDINATE_ROUNDING_DIGITS) for n in bbox],\n 'iscrowd': int(is_crowd),\n }\n if 'score' in ann.attributes:\n try:\n elem['score'] = float(ann.attributes['score'])\n except Exception as e:\n log.warning(\"Item '%s': failed to convert attribute \"\n \"'score': %e\" % (item.id, e))\n if self._context._allow_attributes:\n attrs = self._convert_attributes(ann)\n if attrs:\n elem['attributes'] = attrs\n\n return elem\n\nclass _KeypointsConverter(_InstancesConverter):\n def save_categories(self, dataset):\n label_categories = dataset.categories().get(AnnotationType.label)\n if label_categories is None:\n return\n point_categories = dataset.categories().get(AnnotationType.points)\n\n for idx, label_cat in enumerate(label_categories.items):\n cat = {\n 'id': 1 + idx,\n 'name': cast(label_cat.name, str, ''),\n 'supercategory': cast(label_cat.parent, str, ''),\n 'keypoints': [],\n 'skeleton': [],\n }\n\n if point_categories is not None:\n kp_cat = point_categories.items.get(idx)\n if kp_cat is not None:\n cat.update({\n 'keypoints': [str(l) for l in kp_cat.labels],\n 'skeleton': [list(map(int, j)) for j in kp_cat.joints],\n })\n self.categories.append(cat)\n\n def save_annotations(self, item):\n point_annotations = [a for a in item.annotations\n if a.type == AnnotationType.points]\n if not point_annotations:\n return\n\n # Create annotations for solitary keypoints annotations\n for points in self.find_solitary_points(item.annotations):\n instance = [points, [], None, points.get_bbox()]\n elem = super().convert_instance(instance, item)\n elem.update(self.convert_points_object(points))\n self.annotations.append(elem)\n\n # Create annotations for complete instance + keypoints annotations\n 
super().save_annotations(item)\n\n @classmethod\n def find_solitary_points(cls, annotations):\n annotations = sorted(annotations, key=lambda a: a.group)\n solitary_points = []\n\n for g_id, group in groupby(annotations, lambda a: a.group):\n if not g_id or g_id and not cls.find_instance_anns(group):\n group = [a for a in group if a.type == AnnotationType.points]\n solitary_points.extend(group)\n\n return solitary_points\n\n @staticmethod\n def convert_points_object(ann):\n keypoints = []\n points = ann.points\n visibility = ann.visibility\n for index in range(0, len(points), 2):\n kp = points[index : index + 2]\n state = visibility[index // 2].value\n keypoints.extend([*kp, state])\n\n num_annotated = len([v for v in visibility \\\n if v != Points.Visibility.absent])\n\n return {\n 'keypoints': keypoints,\n 'num_keypoints': num_annotated,\n }\n\n def convert_instance(self, instance, item):\n points_ann = find(item.annotations, lambda x: \\\n x.type == AnnotationType.points and \\\n instance[0].group and x.group == instance[0].group)\n if not points_ann:\n return None\n\n elem = super().convert_instance(instance, item)\n elem.update(self.convert_points_object(points_ann))\n\n return elem\n\nclass _LabelsConverter(_TaskConverter):\n def save_categories(self, dataset):\n label_categories = dataset.categories().get(AnnotationType.label)\n if label_categories is None:\n return\n\n for idx, cat in enumerate(label_categories.items):\n self.categories.append({\n 'id': 1 + idx,\n 'name': cast(cat.name, str, ''),\n 'supercategory': cast(cat.parent, str, ''),\n })\n\n def save_annotations(self, item):\n for ann in item.annotations:\n if ann.type != AnnotationType.label:\n continue\n\n elem = {\n 'id': self._get_ann_id(ann),\n 'image_id': self._get_image_id(item),\n 'category_id': int(ann.label) + 1,\n }\n if 'score' in ann.attributes:\n try:\n elem['score'] = float(ann.attributes['score'])\n except Exception as e:\n log.warning(\"Item '%s': failed to convert attribute \"\n \"'score': %e\" % (item.id, e))\n if self._context._allow_attributes:\n attrs = self._convert_attributes(ann)\n if attrs:\n elem['attributes'] = attrs\n\n self.annotations.append(elem)\n\nclass _StuffConverter(_InstancesConverter):\n pass\n\nclass _PanopticConverter(_TaskConverter):\n def write(self, path):\n with open(path, 'w', encoding='utf-8') as outfile:\n json.dump(self._data, outfile, ensure_ascii=False)\n\n def save_categories(self, dataset):\n label_categories = dataset.categories().get(AnnotationType.label)\n if label_categories is None:\n return\n\n for idx, cat in enumerate(label_categories.items):\n self.categories.append({\n 'id': 1 + idx,\n 'name': cast(cat.name, str, ''),\n 'supercategory': cast(cat.parent, str, ''),\n 'isthing': 0, # TODO: can't represent this information yet\n })\n\n def save_annotations(self, item):\n if not item.has_image:\n return\n\n ann_filename = item.id + CocoPath.PANOPTIC_EXT\n\n segments_info = list()\n masks = []\n next_id = self._min_ann_id\n for ann in item.annotations:\n if ann.type != AnnotationType.mask:\n continue\n\n if not ann.id:\n ann.id = next_id\n next_id += 1\n\n segment_info = {}\n segment_info['id'] = ann.id\n segment_info['category_id'] = cast(ann.label, int, -1) + 1\n segment_info['area'] = float(ann.get_area())\n segment_info['bbox'] = [float(p) for p in ann.get_bbox()]\n segment_info['iscrowd'] = cast(ann.attributes.get(\"is_crowd\"), int, 0)\n segments_info.append(segment_info)\n masks.append(ann)\n\n if not masks:\n return\n\n pan_format = 
mask_tools.merge_masks((m.image, m.id) for m in masks)\n save_image(osp.join(self._context._segmentation_dir, ann_filename),\n mask_tools.index2bgr(pan_format), create_dir=True)\n\n elem = {\n 'image_id': self._get_image_id(item),\n 'file_name': ann_filename,\n 'segments_info': segments_info\n }\n self.annotations.append(elem)\n\nclass CocoConverter(Converter):\n @staticmethod\n def _split_tasks_string(s):\n return [CocoTask[i.strip()] for i in s.split(',')]\n\n @classmethod\n def build_cmdline_parser(cls, **kwargs):\n parser = super().build_cmdline_parser(**kwargs)\n parser.add_argument('--segmentation-mode',\n choices=[m.name for m in SegmentationMode],\n default=SegmentationMode.guess.name,\n help=\"\"\"\n Save mode for instance segmentation:|n\n - '{sm.guess.name}': guess the mode for each instance,|n\n |s|suse 'is_crowd' attribute as hint|n\n - '{sm.polygons.name}': save polygons,|n\n |s|smerge and convert masks, prefer polygons|n\n - '{sm.mask.name}': save masks,|n\n |s|smerge and convert polygons, prefer masks|n\n Default: %(default)s.\n \"\"\".format(sm=SegmentationMode))\n parser.add_argument('--crop-covered', action='store_true',\n help=\"Crop covered segments so that background objects' \"\n \"segmentation was more accurate (default: %(default)s)\")\n parser.add_argument('--allow-attributes',\n type=str_to_bool, default=True,\n help=\"Allow export of attributes (default: %(default)s)\")\n parser.add_argument('--reindex', type=str_to_bool, default=False,\n help=\"Assign new indices to images and annotations, \"\n \"useful to avoid merge conflicts (default: %(default)s)\")\n parser.add_argument('--merge-images', type=str_to_bool, default=False,\n help=\"Save all images into a single \"\n \"directory (default: %(default)s)\")\n parser.add_argument('--tasks', type=cls._split_tasks_string,\n help=\"COCO task filter, comma-separated list of {%s} \"\n \"(default: all)\" % ', '.join(t.name for t in CocoTask))\n return parser\n\n DEFAULT_IMAGE_EXT = CocoPath.IMAGE_EXT\n\n _TASK_CONVERTER = {\n CocoTask.image_info: _ImageInfoConverter,\n CocoTask.instances: _InstancesConverter,\n CocoTask.person_keypoints: _KeypointsConverter,\n CocoTask.captions: _CaptionsConverter,\n CocoTask.labels: _LabelsConverter,\n CocoTask.panoptic: _PanopticConverter,\n CocoTask.stuff: _StuffConverter,\n }\n\n def __init__(self, extractor, save_dir,\n tasks=None, segmentation_mode=None, crop_covered=False,\n allow_attributes=True, reindex=False, merge_images=False,\n **kwargs):\n super().__init__(extractor, save_dir, **kwargs)\n\n assert tasks is None or isinstance(tasks, (CocoTask, list, str))\n if isinstance(tasks, CocoTask):\n tasks = [tasks]\n elif isinstance(tasks, str):\n tasks = [CocoTask[tasks]]\n elif tasks:\n for i, t in enumerate(tasks):\n if isinstance(t, str):\n tasks[i] = CocoTask[t]\n else:\n assert t in CocoTask, t\n else:\n tasks = set()\n self._tasks = tasks\n\n assert segmentation_mode is None or \\\n isinstance(segmentation_mode, str) or \\\n segmentation_mode in SegmentationMode\n if segmentation_mode is None:\n segmentation_mode = SegmentationMode.guess\n if isinstance(segmentation_mode, str):\n segmentation_mode = SegmentationMode[segmentation_mode]\n self._segmentation_mode = segmentation_mode\n\n self._crop_covered = crop_covered\n self._allow_attributes = allow_attributes\n self._reindex = reindex\n self._merge_images = merge_images\n\n self._image_ids = {}\n\n self._patch = None\n\n def _make_dirs(self):\n self._images_dir = osp.join(self._save_dir, CocoPath.IMAGES_DIR)\n 
os.makedirs(self._images_dir, exist_ok=True)\n\n self._ann_dir = osp.join(self._save_dir, CocoPath.ANNOTATIONS_DIR)\n os.makedirs(self._ann_dir, exist_ok=True)\n\n def _make_segmentation_dir(self, subset_name):\n self._segmentation_dir = osp.join(self._save_dir,\n CocoPath.ANNOTATIONS_DIR, 'panoptic_'+ subset_name)\n os.makedirs(self._segmentation_dir, exist_ok=True)\n\n def _make_task_converter(self, task):\n if task not in self._TASK_CONVERTER:\n raise NotImplementedError()\n return self._TASK_CONVERTER[task](self)\n\n def _make_task_converters(self):\n return { task: self._make_task_converter(task)\n for task in (self._tasks or self._TASK_CONVERTER) }\n\n def _get_image_id(self, item):\n image_id = self._image_ids.get(item.id)\n if image_id is None:\n if not self._reindex:\n image_id = cast(item.attributes.get('id'), int,\n len(self._image_ids) + 1)\n else:\n image_id = len(self._image_ids) + 1\n self._image_ids[item.id] = image_id\n return image_id\n\n def apply(self):\n self._make_dirs()\n\n for subset_name, subset in self._extractor.subsets().items():\n task_converters = self._make_task_converters()\n for task_conv in task_converters.values():\n task_conv.save_categories(subset)\n if CocoTask.panoptic in task_converters:\n self._make_segmentation_dir(subset_name)\n\n for item in subset:\n if self._save_images:\n if item.has_image:\n self._save_image(item, subdir=osp.join(self._images_dir,\n '' if self._merge_images else subset_name))\n else:\n log.debug(\"Item '%s' has no image info\", item.id)\n for task_conv in task_converters.values():\n task_conv.save_image_info(item,\n self._make_image_filename(item))\n task_conv.save_annotations(item)\n\n for task, task_conv in task_converters.items():\n ann_file = osp.join(self._ann_dir,\n '%s_%s.json' % (task.name, subset_name))\n\n if task_conv.is_empty() and (not self._tasks or self._patch):\n if task == CocoTask.panoptic:\n os.rmdir(self._segmentation_dir)\n if self._patch:\n if osp.isfile(ann_file):\n # Remove subsets that became empty\n os.remove(ann_file)\n continue\n\n task_conv.write(ann_file)\n\n @classmethod\n def patch(cls, dataset, patch, save_dir, **kwargs):\n for subset in patch.updated_subsets:\n conv = cls(dataset.get_subset(subset), save_dir=save_dir, **kwargs)\n conv._patch = patch\n conv.apply()\n\n conv = cls(dataset, save_dir=save_dir, **kwargs)\n images_dir = osp.join(save_dir, CocoPath.IMAGES_DIR)\n for (item_id, subset), status in patch.updated_items.items():\n if status != ItemStatus.removed:\n item = patch.data.get(item_id, subset)\n else:\n item = DatasetItem(item_id, subset=subset)\n\n if not (status == ItemStatus.removed or not item.has_image):\n continue\n\n # Converter supports saving in separate dirs and common image dir\n\n image_path = osp.join(images_dir, conv._make_image_filename(item))\n if osp.isfile(image_path):\n os.unlink(image_path)\n\n image_path = osp.join(images_dir, subset,\n conv._make_image_filename(item))\n if osp.isfile(image_path):\n os.unlink(image_path)\n\n\nclass CocoInstancesConverter(CocoConverter):\n def __init__(self, *args, **kwargs):\n kwargs['tasks'] = CocoTask.instances\n super().__init__(*args, **kwargs)\n\nclass CocoImageInfoConverter(CocoConverter):\n def __init__(self, *args, **kwargs):\n kwargs['tasks'] = CocoTask.image_info\n super().__init__(*args, **kwargs)\n\nclass CocoPersonKeypointsConverter(CocoConverter):\n def __init__(self, *args, **kwargs):\n kwargs['tasks'] = CocoTask.person_keypoints\n super().__init__(*args, **kwargs)\n\nclass 
CocoCaptionsConverter(CocoConverter):\n def __init__(self, *args, **kwargs):\n kwargs['tasks'] = CocoTask.captions\n super().__init__(*args, **kwargs)\n\nclass CocoLabelsConverter(CocoConverter):\n def __init__(self, *args, **kwargs):\n kwargs['tasks'] = CocoTask.labels\n super().__init__(*args, **kwargs)\n\nclass CocoPanopticConverter(CocoConverter):\n def __init__(self, *args, **kwargs):\n kwargs['tasks'] = CocoTask.panoptic\n super().__init__(*args, **kwargs)\n\nclass CocoStuffConverter(CocoConverter):\n def __init__(self, *args, **kwargs):\n kwargs['tasks'] = CocoTask.stuff\n kwargs['segmentation_mode'] = SegmentationMode.mask\n super().__init__(*args, **kwargs)\n", "path": "datumaro/plugins/coco_format/converter.py"}], "after_files": [{"content": "# Copyright (C) 2020-2021 Intel Corporation\n#\n# SPDX-License-Identifier: MIT\n\nfrom enum import Enum, auto\nfrom itertools import chain, groupby\nimport json\nimport logging as log\nimport os\nimport os.path as osp\n\nimport pycocotools.mask as mask_utils\n\nfrom datumaro.components.converter import Converter\nfrom datumaro.components.dataset import ItemStatus\nfrom datumaro.components.extractor import (\n _COORDINATE_ROUNDING_DIGITS, AnnotationType, DatasetItem, Points,\n)\nfrom datumaro.util import cast, find, str_to_bool\nfrom datumaro.util.image import save_image\nimport datumaro.util.annotation_util as anno_tools\nimport datumaro.util.mask_tools as mask_tools\n\nfrom .format import CocoPath, CocoTask\n\n\nclass SegmentationMode(Enum):\n guess = auto()\n polygons = auto()\n mask = auto()\n\nclass _TaskConverter:\n def __init__(self, context):\n self._min_ann_id = 1\n self._context = context\n\n data = {\n 'licenses': [],\n 'info': {},\n 'categories': [],\n 'images': [],\n 'annotations': []\n }\n\n data['licenses'].append({\n 'name': '',\n 'id': 0,\n 'url': ''\n })\n\n data['info'] = {\n 'contributor': '',\n 'date_created': '',\n 'description': '',\n 'url': '',\n 'version': '',\n 'year': ''\n }\n self._data = data\n\n def is_empty(self):\n return len(self._data['annotations']) == 0\n\n def _get_image_id(self, item):\n return self._context._get_image_id(item)\n\n def save_image_info(self, item, filename):\n h = 0\n w = 0\n if item.has_image and item.image.size:\n h, w = item.image.size\n\n self._data['images'].append({\n 'id': self._get_image_id(item),\n 'width': int(w),\n 'height': int(h),\n 'file_name': cast(filename, str, ''),\n 'license': 0,\n 'flickr_url': '',\n 'coco_url': '',\n 'date_captured': 0,\n })\n\n def save_categories(self, dataset):\n raise NotImplementedError()\n\n def save_annotations(self, item):\n raise NotImplementedError()\n\n def write(self, path):\n next_id = self._min_ann_id\n for ann in self.annotations:\n if not ann['id']:\n ann['id'] = next_id\n next_id += 1\n\n with open(path, 'w', encoding='utf-8') as outfile:\n json.dump(self._data, outfile, ensure_ascii=False)\n\n @property\n def annotations(self):\n return self._data['annotations']\n\n @property\n def categories(self):\n return self._data['categories']\n\n def _get_ann_id(self, annotation):\n ann_id = 0 if self._context._reindex else annotation.id\n if ann_id:\n self._min_ann_id = max(ann_id, self._min_ann_id)\n return ann_id\n\n @staticmethod\n def _convert_attributes(ann):\n return { k: v for k, v in ann.attributes.items()\n if k not in {'is_crowd', 'score'}\n }\n\nclass _ImageInfoConverter(_TaskConverter):\n def is_empty(self):\n return len(self._data['images']) == 0\n\n def save_categories(self, dataset):\n pass\n\n def save_annotations(self, 
item):\n pass\n\nclass _CaptionsConverter(_TaskConverter):\n def save_categories(self, dataset):\n pass\n\n def save_annotations(self, item):\n for ann_idx, ann in enumerate(item.annotations):\n if ann.type != AnnotationType.caption:\n continue\n\n elem = {\n 'id': self._get_ann_id(ann),\n 'image_id': self._get_image_id(item),\n 'category_id': 0, # NOTE: workaround for a bug in cocoapi\n 'caption': ann.caption,\n }\n if 'score' in ann.attributes:\n try:\n elem['score'] = float(ann.attributes['score'])\n except Exception as e:\n log.warning(\"Item '%s', ann #%s: failed to convert \"\n \"attribute 'score': %e\" % (item.id, ann_idx, e))\n if self._context._allow_attributes:\n attrs = self._convert_attributes(ann)\n if attrs:\n elem['attributes'] = attrs\n\n self.annotations.append(elem)\n\nclass _InstancesConverter(_TaskConverter):\n def save_categories(self, dataset):\n label_categories = dataset.categories().get(AnnotationType.label)\n if label_categories is None:\n return\n\n for idx, cat in enumerate(label_categories.items):\n self.categories.append({\n 'id': 1 + idx,\n 'name': cast(cat.name, str, ''),\n 'supercategory': cast(cat.parent, str, ''),\n })\n\n @classmethod\n def crop_segments(cls, instances, img_width, img_height):\n instances = sorted(instances, key=lambda x: x[0].z_order)\n\n segment_map = []\n segments = []\n for inst_idx, (_, polygons, mask, _) in enumerate(instances):\n if polygons:\n segment_map.extend(inst_idx for p in polygons)\n segments.extend(polygons)\n elif mask is not None:\n segment_map.append(inst_idx)\n segments.append(mask)\n\n segments = mask_tools.crop_covered_segments(\n segments, img_width, img_height)\n\n for inst_idx, inst in enumerate(instances):\n new_segments = [s for si_id, s in zip(segment_map, segments)\n if si_id == inst_idx]\n\n if not new_segments:\n inst[1] = []\n inst[2] = None\n continue\n\n if inst[1]:\n inst[1] = sum(new_segments, [])\n else:\n mask = mask_tools.merge_masks(new_segments)\n inst[2] = mask_tools.mask_to_rle(mask)\n\n return instances\n\n def find_instance_parts(self, group, img_width, img_height):\n boxes = [a for a in group if a.type == AnnotationType.bbox]\n polygons = [a for a in group if a.type == AnnotationType.polygon]\n masks = [a for a in group if a.type == AnnotationType.mask]\n\n anns = boxes + polygons + masks\n leader = anno_tools.find_group_leader(anns)\n bbox = anno_tools.max_bbox(anns)\n mask = None\n polygons = [p.points for p in polygons]\n\n if self._context._segmentation_mode == SegmentationMode.guess:\n use_masks = True == leader.attributes.get('is_crowd',\n find(masks, lambda x: x.label == leader.label) is not None)\n elif self._context._segmentation_mode == SegmentationMode.polygons:\n use_masks = False\n elif self._context._segmentation_mode == SegmentationMode.mask:\n use_masks = True\n else:\n raise NotImplementedError(\"Unexpected segmentation mode '%s'\" % \\\n self._context._segmentation_mode)\n\n if use_masks:\n if polygons:\n mask = mask_tools.rles_to_mask(polygons, img_width, img_height)\n\n if masks:\n masks = (m.image for m in masks)\n if mask is not None:\n masks = chain(masks, [mask])\n mask = mask_tools.merge_masks(masks)\n\n if mask is not None:\n mask = mask_tools.mask_to_rle(mask)\n polygons = []\n else:\n if masks:\n mask = mask_tools.merge_masks(m.image for m in masks)\n polygons += mask_tools.mask_to_polygons(mask)\n mask = None\n\n return [leader, polygons, mask, bbox]\n\n @staticmethod\n def find_instance_anns(annotations):\n return [a for a in annotations\n if a.type in { 
AnnotationType.bbox,\n AnnotationType.polygon, AnnotationType.mask }\n ]\n\n @classmethod\n def find_instances(cls, annotations):\n return anno_tools.find_instances(cls.find_instance_anns(annotations))\n\n def save_annotations(self, item):\n instances = self.find_instances(item.annotations)\n if not instances:\n return\n\n if not item.has_image or not item.image.size:\n log.warning(\"Item '%s': skipping writing instances \"\n \"since no image info available\" % item.id)\n return\n h, w = item.image.size\n instances = [self.find_instance_parts(i, w, h) for i in instances]\n\n if self._context._crop_covered:\n instances = self.crop_segments(instances, w, h)\n\n for instance in instances:\n elem = self.convert_instance(instance, item)\n if elem:\n self.annotations.append(elem)\n\n def convert_instance(self, instance, item):\n ann, polygons, mask, bbox = instance\n\n is_crowd = mask is not None\n if is_crowd:\n segmentation = {\n 'counts': list(int(c) for c in mask['counts']),\n 'size': list(int(c) for c in mask['size'])\n }\n else:\n segmentation = [list(map(float, p)) for p in polygons]\n\n area = 0\n if segmentation:\n if item.has_image and item.image.size:\n h, w = item.image.size\n else:\n # Here we can guess the image size as\n # it is only needed for the area computation\n w = bbox[0] + bbox[2]\n h = bbox[1] + bbox[3]\n\n rles = mask_utils.frPyObjects(segmentation, h, w)\n if is_crowd:\n rles = [rles]\n else:\n rles = mask_utils.merge(rles)\n area = mask_utils.area(rles)\n else:\n _, _, w, h = bbox\n segmentation = []\n area = w * h\n\n elem = {\n 'id': self._get_ann_id(ann),\n 'image_id': self._get_image_id(item),\n 'category_id': cast(ann.label, int, -1) + 1,\n 'segmentation': segmentation,\n 'area': float(area),\n 'bbox': [round(float(n), _COORDINATE_ROUNDING_DIGITS) for n in bbox],\n 'iscrowd': int(is_crowd),\n }\n if 'score' in ann.attributes:\n try:\n elem['score'] = float(ann.attributes['score'])\n except Exception as e:\n log.warning(\"Item '%s': failed to convert attribute \"\n \"'score': %e\" % (item.id, e))\n if self._context._allow_attributes:\n attrs = self._convert_attributes(ann)\n if attrs:\n elem['attributes'] = attrs\n\n return elem\n\nclass _KeypointsConverter(_InstancesConverter):\n def save_categories(self, dataset):\n label_categories = dataset.categories().get(AnnotationType.label)\n if label_categories is None:\n return\n point_categories = dataset.categories().get(AnnotationType.points)\n\n for idx, label_cat in enumerate(label_categories.items):\n cat = {\n 'id': 1 + idx,\n 'name': cast(label_cat.name, str, ''),\n 'supercategory': cast(label_cat.parent, str, ''),\n 'keypoints': [],\n 'skeleton': [],\n }\n\n if point_categories is not None:\n kp_cat = point_categories.items.get(idx)\n if kp_cat is not None:\n cat.update({\n 'keypoints': [str(l) for l in kp_cat.labels],\n 'skeleton': [list(map(int, j)) for j in kp_cat.joints],\n })\n self.categories.append(cat)\n\n def save_annotations(self, item):\n point_annotations = [a for a in item.annotations\n if a.type == AnnotationType.points]\n if not point_annotations:\n return\n\n # Create annotations for solitary keypoints annotations\n for points in self.find_solitary_points(item.annotations):\n instance = [points, [], None, points.get_bbox()]\n elem = super().convert_instance(instance, item)\n elem.update(self.convert_points_object(points))\n self.annotations.append(elem)\n\n # Create annotations for complete instance + keypoints annotations\n super().save_annotations(item)\n\n @classmethod\n def 
find_solitary_points(cls, annotations):\n annotations = sorted(annotations, key=lambda a: a.group)\n solitary_points = []\n\n for g_id, group in groupby(annotations, lambda a: a.group):\n if not g_id or g_id and not cls.find_instance_anns(group):\n group = [a for a in group if a.type == AnnotationType.points]\n solitary_points.extend(group)\n\n return solitary_points\n\n @staticmethod\n def convert_points_object(ann):\n keypoints = []\n points = ann.points\n visibility = ann.visibility\n for index in range(0, len(points), 2):\n kp = points[index : index + 2]\n state = visibility[index // 2].value\n keypoints.extend([*kp, state])\n\n num_annotated = len([v for v in visibility \\\n if v != Points.Visibility.absent])\n\n return {\n 'keypoints': keypoints,\n 'num_keypoints': num_annotated,\n }\n\n def convert_instance(self, instance, item):\n points_ann = find(item.annotations, lambda x: \\\n x.type == AnnotationType.points and \\\n instance[0].group and x.group == instance[0].group)\n if not points_ann:\n return None\n\n elem = super().convert_instance(instance, item)\n elem.update(self.convert_points_object(points_ann))\n\n return elem\n\nclass _LabelsConverter(_TaskConverter):\n def save_categories(self, dataset):\n label_categories = dataset.categories().get(AnnotationType.label)\n if label_categories is None:\n return\n\n for idx, cat in enumerate(label_categories.items):\n self.categories.append({\n 'id': 1 + idx,\n 'name': cast(cat.name, str, ''),\n 'supercategory': cast(cat.parent, str, ''),\n })\n\n def save_annotations(self, item):\n for ann in item.annotations:\n if ann.type != AnnotationType.label:\n continue\n\n elem = {\n 'id': self._get_ann_id(ann),\n 'image_id': self._get_image_id(item),\n 'category_id': int(ann.label) + 1,\n }\n if 'score' in ann.attributes:\n try:\n elem['score'] = float(ann.attributes['score'])\n except Exception as e:\n log.warning(\"Item '%s': failed to convert attribute \"\n \"'score': %e\" % (item.id, e))\n if self._context._allow_attributes:\n attrs = self._convert_attributes(ann)\n if attrs:\n elem['attributes'] = attrs\n\n self.annotations.append(elem)\n\nclass _StuffConverter(_InstancesConverter):\n pass\n\nclass _PanopticConverter(_TaskConverter):\n def write(self, path):\n with open(path, 'w', encoding='utf-8') as outfile:\n json.dump(self._data, outfile, ensure_ascii=False)\n\n def save_categories(self, dataset):\n label_categories = dataset.categories().get(AnnotationType.label)\n if label_categories is None:\n return\n\n for idx, cat in enumerate(label_categories.items):\n self.categories.append({\n 'id': 1 + idx,\n 'name': cast(cat.name, str, ''),\n 'supercategory': cast(cat.parent, str, ''),\n 'isthing': 0, # TODO: can't represent this information yet\n })\n\n def save_annotations(self, item):\n if not item.has_image:\n return\n\n ann_filename = item.id + CocoPath.PANOPTIC_EXT\n\n segments_info = list()\n masks = []\n next_id = self._min_ann_id\n for ann in item.annotations:\n if ann.type != AnnotationType.mask:\n continue\n\n if not ann.id:\n ann.id = next_id\n next_id += 1\n\n segment_info = {}\n segment_info['id'] = ann.id\n segment_info['category_id'] = cast(ann.label, int, -1) + 1\n segment_info['area'] = float(ann.get_area())\n segment_info['bbox'] = [float(p) for p in ann.get_bbox()]\n segment_info['iscrowd'] = cast(ann.attributes.get(\"is_crowd\"), int, 0)\n segments_info.append(segment_info)\n masks.append(ann)\n\n if not masks:\n return\n\n pan_format = mask_tools.merge_masks((m.image, m.id) for m in masks)\n 
save_image(osp.join(self._context._segmentation_dir, ann_filename),\n mask_tools.index2bgr(pan_format), create_dir=True)\n\n elem = {\n 'image_id': self._get_image_id(item),\n 'file_name': ann_filename,\n 'segments_info': segments_info\n }\n self.annotations.append(elem)\n\nclass CocoConverter(Converter):\n @staticmethod\n def _split_tasks_string(s):\n return [CocoTask[i.strip()] for i in s.split(',')]\n\n @classmethod\n def build_cmdline_parser(cls, **kwargs):\n parser = super().build_cmdline_parser(**kwargs)\n parser.add_argument('--segmentation-mode',\n choices=[m.name for m in SegmentationMode],\n default=SegmentationMode.guess.name,\n help=\"\"\"\n Save mode for instance segmentation:|n\n - '{sm.guess.name}': guess the mode for each instance,|n\n |s|suse 'is_crowd' attribute as hint|n\n - '{sm.polygons.name}': save polygons,|n\n |s|smerge and convert masks, prefer polygons|n\n - '{sm.mask.name}': save masks,|n\n |s|smerge and convert polygons, prefer masks|n\n Default: %(default)s.\n \"\"\".format(sm=SegmentationMode))\n parser.add_argument('--crop-covered', action='store_true',\n help=\"Crop covered segments so that background objects' \"\n \"segmentation was more accurate (default: %(default)s)\")\n parser.add_argument('--allow-attributes',\n type=str_to_bool, default=True,\n help=\"Allow export of attributes (default: %(default)s)\")\n parser.add_argument('--reindex', type=str_to_bool, default=False,\n help=\"Assign new indices to images and annotations, \"\n \"useful to avoid merge conflicts (default: %(default)s)\")\n parser.add_argument('--merge-images', type=str_to_bool, default=False,\n help=\"Save all images into a single \"\n \"directory (default: %(default)s)\")\n parser.add_argument('--tasks', type=cls._split_tasks_string,\n help=\"COCO task filter, comma-separated list of {%s} \"\n \"(default: all)\" % ', '.join(t.name for t in CocoTask))\n return parser\n\n DEFAULT_IMAGE_EXT = CocoPath.IMAGE_EXT\n\n _TASK_CONVERTER = {\n CocoTask.image_info: _ImageInfoConverter,\n CocoTask.instances: _InstancesConverter,\n CocoTask.person_keypoints: _KeypointsConverter,\n CocoTask.captions: _CaptionsConverter,\n CocoTask.labels: _LabelsConverter,\n CocoTask.panoptic: _PanopticConverter,\n CocoTask.stuff: _StuffConverter,\n }\n\n def __init__(self, extractor, save_dir,\n tasks=None, segmentation_mode=None, crop_covered=False,\n allow_attributes=True, reindex=False, merge_images=False,\n **kwargs):\n super().__init__(extractor, save_dir, **kwargs)\n\n assert tasks is None or isinstance(tasks, (CocoTask, list, str))\n if isinstance(tasks, CocoTask):\n tasks = [tasks]\n elif isinstance(tasks, str):\n tasks = [CocoTask[tasks]]\n elif tasks:\n for i, t in enumerate(tasks):\n if isinstance(t, str):\n tasks[i] = CocoTask[t]\n else:\n assert t in CocoTask, t\n else:\n tasks = set()\n self._tasks = tasks\n\n assert segmentation_mode is None or \\\n isinstance(segmentation_mode, str) or \\\n segmentation_mode in SegmentationMode\n if segmentation_mode is None:\n segmentation_mode = SegmentationMode.guess\n if isinstance(segmentation_mode, str):\n segmentation_mode = SegmentationMode[segmentation_mode]\n self._segmentation_mode = segmentation_mode\n\n self._crop_covered = crop_covered\n self._allow_attributes = allow_attributes\n self._reindex = reindex\n self._merge_images = merge_images\n\n self._image_ids = {}\n\n self._patch = None\n\n def _make_dirs(self):\n self._images_dir = osp.join(self._save_dir, CocoPath.IMAGES_DIR)\n os.makedirs(self._images_dir, exist_ok=True)\n\n self._ann_dir = 
osp.join(self._save_dir, CocoPath.ANNOTATIONS_DIR)\n os.makedirs(self._ann_dir, exist_ok=True)\n\n def _make_segmentation_dir(self, subset_name):\n self._segmentation_dir = osp.join(self._save_dir,\n CocoPath.ANNOTATIONS_DIR, 'panoptic_'+ subset_name)\n os.makedirs(self._segmentation_dir, exist_ok=True)\n\n def _make_task_converter(self, task):\n if task not in self._TASK_CONVERTER:\n raise NotImplementedError()\n return self._TASK_CONVERTER[task](self)\n\n def _make_task_converters(self):\n return { task: self._make_task_converter(task)\n for task in (self._tasks or self._TASK_CONVERTER) }\n\n def _get_image_id(self, item):\n image_id = self._image_ids.get(item.id)\n if image_id is None:\n if not self._reindex:\n image_id = cast(item.attributes.get('id'), int,\n len(self._image_ids) + 1)\n else:\n image_id = len(self._image_ids) + 1\n self._image_ids[item.id] = image_id\n return image_id\n\n def apply(self):\n self._make_dirs()\n\n for subset_name, subset in self._extractor.subsets().items():\n task_converters = self._make_task_converters()\n for task_conv in task_converters.values():\n task_conv.save_categories(subset)\n if CocoTask.panoptic in task_converters:\n self._make_segmentation_dir(subset_name)\n\n for item in subset:\n if self._save_images:\n if item.has_image:\n self._save_image(item, subdir=osp.join(self._images_dir,\n '' if self._merge_images else subset_name))\n else:\n log.debug(\"Item '%s' has no image info\", item.id)\n for task_conv in task_converters.values():\n task_conv.save_image_info(item,\n self._make_image_filename(item))\n task_conv.save_annotations(item)\n\n for task, task_conv in task_converters.items():\n ann_file = osp.join(self._ann_dir,\n '%s_%s.json' % (task.name, subset_name))\n\n if task_conv.is_empty() and (not self._tasks or self._patch):\n if task == CocoTask.panoptic:\n os.rmdir(self._segmentation_dir)\n if self._patch:\n if osp.isfile(ann_file):\n # Remove subsets that became empty\n os.remove(ann_file)\n continue\n\n task_conv.write(ann_file)\n\n @classmethod\n def patch(cls, dataset, patch, save_dir, **kwargs):\n for subset in patch.updated_subsets:\n conv = cls(dataset.get_subset(subset), save_dir=save_dir, **kwargs)\n conv._patch = patch\n conv.apply()\n\n conv = cls(dataset, save_dir=save_dir, **kwargs)\n images_dir = osp.join(save_dir, CocoPath.IMAGES_DIR)\n for (item_id, subset), status in patch.updated_items.items():\n if status != ItemStatus.removed:\n item = patch.data.get(item_id, subset)\n else:\n item = DatasetItem(item_id, subset=subset)\n\n if not (status == ItemStatus.removed or not item.has_image):\n continue\n\n # Converter supports saving in separate dirs and common image dir\n\n image_path = osp.join(images_dir, conv._make_image_filename(item))\n if osp.isfile(image_path):\n os.unlink(image_path)\n\n image_path = osp.join(images_dir, subset,\n conv._make_image_filename(item))\n if osp.isfile(image_path):\n os.unlink(image_path)\n\n\nclass CocoInstancesConverter(CocoConverter):\n def __init__(self, *args, **kwargs):\n kwargs['tasks'] = CocoTask.instances\n super().__init__(*args, **kwargs)\n\nclass CocoImageInfoConverter(CocoConverter):\n def __init__(self, *args, **kwargs):\n kwargs['tasks'] = CocoTask.image_info\n super().__init__(*args, **kwargs)\n\nclass CocoPersonKeypointsConverter(CocoConverter):\n def __init__(self, *args, **kwargs):\n kwargs['tasks'] = CocoTask.person_keypoints\n super().__init__(*args, **kwargs)\n\nclass CocoCaptionsConverter(CocoConverter):\n def __init__(self, *args, **kwargs):\n kwargs['tasks'] = 
CocoTask.captions\n super().__init__(*args, **kwargs)\n\nclass CocoLabelsConverter(CocoConverter):\n def __init__(self, *args, **kwargs):\n kwargs['tasks'] = CocoTask.labels\n super().__init__(*args, **kwargs)\n\nclass CocoPanopticConverter(CocoConverter):\n def __init__(self, *args, **kwargs):\n kwargs['tasks'] = CocoTask.panoptic\n super().__init__(*args, **kwargs)\n\nclass CocoStuffConverter(CocoConverter):\n def __init__(self, *args, **kwargs):\n kwargs['tasks'] = CocoTask.stuff\n kwargs['segmentation_mode'] = SegmentationMode.mask\n super().__init__(*args, **kwargs)\n", "path": "datumaro/plugins/coco_format/converter.py"}]} |
gh_patches_debug_1178 | rasdani/github-patches | git_diff | jazzband__django-simple-history-1329 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
m2m historical records are saved even when SIMPLE_HISTORY_ENABLED = False
**Describe the bug**
m2m relationships ignore SIMPLE_HISTORY_ENABLED = False: even when history is disabled, the library still saves the historical record.
This is causing me issues when loading Django fixtures using loaddata.
**To Reproduce**
Steps to reproduce the behavior:
1. set SIMPLE_HISTORY_ENABLED = False
2. save an m2m relationship
3. observe that the record still appears in the historical table (see the sketch below)
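A minimal reproduction sketch (the Poll/Category models, the categories m2m field and the history manager attribute are assumptions for illustration; the setting is toggled at runtime here only for brevity):

```python
# Hypothetical models: any model registered with
# HistoricalRecords(m2m_fields=[...]) should behave the same way.
from django.conf import settings

from polls.models import Category, Poll  # assumed example app

settings.SIMPLE_HISTORY_ENABLED = False  # normally set in settings.py before loaddata

poll = Poll.objects.create(question="what?")     # correctly skips history
category = Category.objects.create(name="news")  # correctly skips history

poll.categories.add(category)  # m2m change fires m2m_changed

# Expected: 0, because history is disabled.
# Observed: 1 -- the m2m_changed handler still created a "~" record
# (plus rows in the m2m historical table).
print(poll.history.count())
```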
**Expected behavior**
If SIMPLE_HISTORY_ENABLED = False, no historical record should be created for m2m relationships.
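For comparison, the post_save/post_delete handlers in simple_history/models.py already short-circuit on this setting; a sketch of the analogous guard one would expect for m2m changes (an illustration, not the actual patch) is:

```python
# Hypothetical guard, mirroring the existing check in post_save/post_delete.
def m2m_changed(self, instance, action, attr, pk_set, reverse, **_):
    if not getattr(settings, "SIMPLE_HISTORY_ENABLED", True):
        return  # history globally disabled, e.g. while running loaddata
    if hasattr(instance, "skip_history_when_saving"):
        return
    if action in ("post_add", "post_remove", "post_clear"):
        self.create_historical_record(instance, "~")
```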
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Environment (please complete the following information):**
- OS: Ubuntu 22
- Browser (if applicable): [e.g. chrome, safari]
- Django Simple History Version: [e.g. 1.9.1]
- Django Version: [e.g. 1.11.11]
- Database Version: PostgreSQL 15
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `simple_history/models.py`
Content:
```
1 import copy
2 import importlib
3 import uuid
4 import warnings
5 from dataclasses import dataclass
6 from functools import partial
7 from typing import TYPE_CHECKING, Any, Dict, Iterable, List, Sequence, Type, Union
8
9 import django
10 from django.apps import apps
11 from django.conf import settings
12 from django.contrib import admin
13 from django.contrib.auth import get_user_model
14 from django.core.exceptions import ImproperlyConfigured, ObjectDoesNotExist
15 from django.db import models
16 from django.db.models import ManyToManyField
17 from django.db.models.fields.proxy import OrderWrt
18 from django.db.models.fields.related import ForeignKey
19 from django.db.models.fields.related_descriptors import (
20 ForwardManyToOneDescriptor,
21 ReverseManyToOneDescriptor,
22 create_reverse_many_to_one_manager,
23 )
24 from django.db.models.query import QuerySet
25 from django.db.models.signals import m2m_changed
26 from django.forms.models import model_to_dict
27 from django.urls import reverse
28 from django.utils import timezone
29 from django.utils.encoding import smart_str
30 from django.utils.functional import cached_property
31 from django.utils.text import format_lazy
32 from django.utils.translation import gettext_lazy as _
33
34 from . import exceptions, utils
35 from .manager import (
36 SIMPLE_HISTORY_REVERSE_ATTR_NAME,
37 HistoricalQuerySet,
38 HistoryDescriptor,
39 HistoryManager,
40 )
41 from .signals import (
42 post_create_historical_m2m_records,
43 post_create_historical_record,
44 pre_create_historical_m2m_records,
45 pre_create_historical_record,
46 )
47
48 try:
49 from asgiref.local import Local as LocalContext
50 except ImportError:
51 from threading import local as LocalContext
52
53 if TYPE_CHECKING:
54 ModelTypeHint = models.Model
55 else:
56 ModelTypeHint = object
57
58 registered_models = {}
59
60
61 def _default_get_user(request, **kwargs):
62 try:
63 return request.user
64 except AttributeError:
65 return None
66
67
68 def _history_user_getter(historical_instance):
69 if historical_instance.history_user_id is None:
70 return None
71 User = get_user_model()
72 try:
73 return User.objects.get(pk=historical_instance.history_user_id)
74 except User.DoesNotExist:
75 return None
76
77
78 def _history_user_setter(historical_instance, user):
79 if user is not None:
80 historical_instance.history_user_id = user.pk
81
82
83 class HistoricalRecords:
84 DEFAULT_MODEL_NAME_PREFIX = "Historical"
85
86 thread = context = LocalContext() # retain thread for backwards compatibility
87 m2m_models = {}
88
89 def __init__(
90 self,
91 verbose_name=None,
92 verbose_name_plural=None,
93 bases=(models.Model,),
94 user_related_name="+",
95 table_name=None,
96 inherit=False,
97 excluded_fields=None,
98 history_id_field=None,
99 history_change_reason_field=None,
100 user_model=None,
101 get_user=_default_get_user,
102 cascade_delete_history=False,
103 custom_model_name=None,
104 app=None,
105 history_user_id_field=None,
106 history_user_getter=_history_user_getter,
107 history_user_setter=_history_user_setter,
108 related_name=None,
109 use_base_model_db=False,
110 user_db_constraint=True,
111 no_db_index=list(),
112 excluded_field_kwargs=None,
113 history_manager=HistoryManager,
114 historical_queryset=HistoricalQuerySet,
115 m2m_fields=(),
116 m2m_fields_model_field_name="_history_m2m_fields",
117 m2m_bases=(models.Model,),
118 ):
119 self.user_set_verbose_name = verbose_name
120 self.user_set_verbose_name_plural = verbose_name_plural
121 self.user_related_name = user_related_name
122 self.user_db_constraint = user_db_constraint
123 self.table_name = table_name
124 self.inherit = inherit
125 self.history_id_field = history_id_field
126 self.history_change_reason_field = history_change_reason_field
127 self.user_model = user_model
128 self.get_user = get_user
129 self.cascade_delete_history = cascade_delete_history
130 self.custom_model_name = custom_model_name
131 self.app = app
132 self.user_id_field = history_user_id_field
133 self.user_getter = history_user_getter
134 self.user_setter = history_user_setter
135 self.related_name = related_name
136 self.use_base_model_db = use_base_model_db
137 self.history_manager = history_manager
138 self.historical_queryset = historical_queryset
139 self.m2m_fields = m2m_fields
140 self.m2m_fields_model_field_name = m2m_fields_model_field_name
141
142 if isinstance(no_db_index, str):
143 no_db_index = [no_db_index]
144 self.no_db_index = no_db_index
145
146 if excluded_fields is None:
147 excluded_fields = []
148 self.excluded_fields = excluded_fields
149
150 if excluded_field_kwargs is None:
151 excluded_field_kwargs = {}
152 self.excluded_field_kwargs = excluded_field_kwargs
153 try:
154 if isinstance(bases, str):
155 raise TypeError
156 self.bases = (HistoricalChanges,) + tuple(bases)
157 except TypeError:
158 raise TypeError("The `bases` option must be a list or a tuple.")
159 try:
160 if isinstance(m2m_bases, str):
161 raise TypeError
162 self.m2m_bases = (HistoricalChanges,) + tuple(m2m_bases)
163 except TypeError:
164 raise TypeError("The `m2m_bases` option must be a list or a tuple.")
165
166 def contribute_to_class(self, cls, name):
167 self.manager_name = name
168 self.module = cls.__module__
169 self.cls = cls
170 models.signals.class_prepared.connect(self.finalize, weak=False)
171 self.add_extra_methods(cls)
172
173 if cls._meta.abstract and not self.inherit:
174 msg = (
175 "HistoricalRecords added to abstract model ({}) without "
176 "inherit=True".format(self.cls.__name__)
177 )
178 warnings.warn(msg, UserWarning)
179
180 def add_extra_methods(self, cls):
181 def save_without_historical_record(self, *args, **kwargs):
182 """
183 Save model without saving a historical record
184
185 Make sure you know what you're doing before you use this method.
186 """
187 self.skip_history_when_saving = True
188 try:
189 ret = self.save(*args, **kwargs)
190 finally:
191 del self.skip_history_when_saving
192 return ret
193
194 setattr(cls, "save_without_historical_record", save_without_historical_record)
195
196 def finalize(self, sender, **kwargs):
197 inherited = False
198 if self.cls is not sender: # set in concrete
199 inherited = self.inherit and issubclass(sender, self.cls)
200 if not inherited:
201 return # set in abstract
202
203 if hasattr(sender._meta, "simple_history_manager_attribute"):
204 raise exceptions.MultipleRegistrationsError(
205 "{}.{} registered multiple times for history tracking.".format(
206 sender._meta.app_label, sender._meta.object_name
207 )
208 )
209 history_model = self.create_history_model(sender, inherited)
210
211 if inherited:
212 # Make sure history model is in same module as concrete model
213 module = importlib.import_module(history_model.__module__)
214 else:
215 module = importlib.import_module(self.module)
216 setattr(module, history_model.__name__, history_model)
217
218 # The HistoricalRecords object will be discarded,
219 # so the signal handlers can't use weak references.
220 models.signals.post_save.connect(self.post_save, sender=sender, weak=False)
221 models.signals.post_delete.connect(self.post_delete, sender=sender, weak=False)
222
223 m2m_fields = self.get_m2m_fields_from_model(sender)
224
225 for field in m2m_fields:
226 m2m_changed.connect(
227 partial(self.m2m_changed, attr=field.name),
228 sender=field.remote_field.through,
229 weak=False,
230 )
231
232 descriptor = HistoryDescriptor(
233 history_model,
234 manager=self.history_manager,
235 queryset=self.historical_queryset,
236 )
237 setattr(sender, self.manager_name, descriptor)
238 sender._meta.simple_history_manager_attribute = self.manager_name
239
240 for field in m2m_fields:
241 m2m_model = self.create_history_m2m_model(
242 history_model, field.remote_field.through
243 )
244 self.m2m_models[field] = m2m_model
245
246 setattr(module, m2m_model.__name__, m2m_model)
247
248 m2m_descriptor = HistoryDescriptor(m2m_model)
249 setattr(history_model, field.name, m2m_descriptor)
250
251 def get_history_model_name(self, model):
252 if not self.custom_model_name:
253 return f"{self.DEFAULT_MODEL_NAME_PREFIX}{model._meta.object_name}"
254 # Must be trying to use a custom history model name
255 if callable(self.custom_model_name):
256 name = self.custom_model_name(model._meta.object_name)
257 else:
258 # simple string
259 name = self.custom_model_name
260 # Desired class name cannot be same as the model it is tracking
261 if not (
262 name.lower() == model._meta.object_name.lower()
263 and model.__module__ == self.module
264 ):
265 return name
266 raise ValueError(
267 "The 'custom_model_name' option '{}' evaluates to a name that is the same "
268 "as the model it is tracking. This is not permitted.".format(
269 self.custom_model_name
270 )
271 )
272
273 def create_history_m2m_model(self, model, through_model):
274 attrs = {}
275
276 fields = self.copy_fields(through_model)
277 attrs.update(fields)
278 attrs.update(self.get_extra_fields_m2m(model, through_model, fields))
279
280 name = self.get_history_model_name(through_model)
281 registered_models[through_model._meta.db_table] = through_model
282
283 attrs.update(Meta=type("Meta", (), self.get_meta_options_m2m(through_model)))
284
285 m2m_history_model = type(str(name), self.m2m_bases, attrs)
286
287 return m2m_history_model
288
289 def create_history_model(self, model, inherited):
290 """
291 Creates a historical model to associate with the model provided.
292 """
293 attrs = {
294 "__module__": self.module,
295 "_history_excluded_fields": self.excluded_fields,
296 "_history_m2m_fields": self.get_m2m_fields_from_model(model),
297 "tracked_fields": self.fields_included(model),
298 }
299
300 app_module = "%s.models" % model._meta.app_label
301
302 if inherited:
303 # inherited use models module
304 attrs["__module__"] = model.__module__
305 elif model.__module__ != self.module:
306 # registered under different app
307 attrs["__module__"] = self.module
308 elif app_module != self.module:
309 # Abuse an internal API because the app registry is loading.
310 app = apps.app_configs[model._meta.app_label]
311 models_module = app.name
312 attrs["__module__"] = models_module
313
314 fields = self.copy_fields(model)
315 attrs.update(fields)
316 attrs.update(self.get_extra_fields(model, fields))
317 # type in python2 wants str as a first argument
318 attrs.update(Meta=type("Meta", (), self.get_meta_options(model)))
319 if not inherited and self.table_name is not None:
320 attrs["Meta"].db_table = self.table_name
321
322 # Set as the default then check for overrides
323 name = self.get_history_model_name(model)
324
325 registered_models[model._meta.db_table] = model
326 history_model = type(str(name), self.bases, attrs)
327 return history_model
328
329 def fields_included(self, model):
330 fields = []
331 for field in model._meta.fields:
332 if field.name not in self.excluded_fields:
333 fields.append(field)
334 return fields
335
336 def field_excluded_kwargs(self, field):
337 """
338 Find the excluded kwargs for a given field.
339 """
340 return self.excluded_field_kwargs.get(field.name, set())
341
342 def copy_fields(self, model):
343 """
344 Creates copies of the model's original fields, returning
345 a dictionary mapping field name to copied field object.
346 """
347 fields = {}
348 for field in self.fields_included(model):
349 field = copy.copy(field)
350 field.remote_field = copy.copy(field.remote_field)
351 if isinstance(field, OrderWrt):
352 # OrderWrt is a proxy field, switch to a plain IntegerField
353 field.__class__ = models.IntegerField
354 if isinstance(field, models.ForeignKey):
355 old_field = field
356 old_swappable = old_field.swappable
357 old_field.swappable = False
358 try:
359 _name, _path, args, field_args = old_field.deconstruct()
360 finally:
361 old_field.swappable = old_swappable
362 if getattr(old_field, "one_to_one", False) or isinstance(
363 old_field, models.OneToOneField
364 ):
365 FieldType = models.ForeignKey
366 else:
367 FieldType = type(old_field)
368
369 # Remove any excluded kwargs for the field.
370 # This is useful when a custom OneToOneField is being used that
371 # has a different set of arguments than ForeignKey
372 for exclude_arg in self.field_excluded_kwargs(old_field):
373 field_args.pop(exclude_arg, None)
374
375 # If field_args['to'] is 'self' then we have a case where the object
376 # has a foreign key to itself. If we pass the historical record's
377 # field to = 'self', the foreign key will point to an historical
378 # record rather than the base record. We can use old_field.model here.
379 if field_args.get("to", None) == "self":
380 field_args["to"] = old_field.model
381
382 # Override certain arguments passed when creating the field
383 # so that they work for the historical field.
384 field_args.update(
385 db_constraint=False,
386 related_name="+",
387 null=True,
388 blank=True,
389 primary_key=False,
390 db_index=True,
391 serialize=True,
392 unique=False,
393 on_delete=models.DO_NOTHING,
394 )
395 field = FieldType(*args, **field_args)
396 field.name = old_field.name
397 else:
398 transform_field(field)
399
400 # drop db index
401 if field.name in self.no_db_index:
402 field.db_index = False
403
404 fields[field.name] = field
405 return fields
406
407 def _get_history_change_reason_field(self):
408 if self.history_change_reason_field:
409 # User specific field from init
410 history_change_reason_field = self.history_change_reason_field
411 elif getattr(
412 settings, "SIMPLE_HISTORY_HISTORY_CHANGE_REASON_USE_TEXT_FIELD", False
413 ):
414 # Use text field with no max length, not enforced by DB anyways
415 history_change_reason_field = models.TextField(null=True)
416 else:
417 # Current default, with max length
418 history_change_reason_field = models.CharField(max_length=100, null=True)
419
420 return history_change_reason_field
421
422 def _get_history_id_field(self):
423 if self.history_id_field:
424 history_id_field = self.history_id_field.clone()
425 history_id_field.primary_key = True
426 history_id_field.editable = False
427 elif getattr(settings, "SIMPLE_HISTORY_HISTORY_ID_USE_UUID", False):
428 history_id_field = models.UUIDField(
429 primary_key=True, default=uuid.uuid4, editable=False
430 )
431 else:
432 history_id_field = models.AutoField(primary_key=True)
433
434 return history_id_field
435
436 def _get_history_user_fields(self):
437 if self.user_id_field is not None:
438 # Tracking user using explicit id rather than Django ForeignKey
439 history_user_fields = {
440 "history_user": property(self.user_getter, self.user_setter),
441 "history_user_id": self.user_id_field,
442 }
443 else:
444 user_model = self.user_model or getattr(
445 settings, "AUTH_USER_MODEL", "auth.User"
446 )
447
448 history_user_fields = {
449 "history_user": models.ForeignKey(
450 user_model,
451 null=True,
452 related_name=self.user_related_name,
453 on_delete=models.SET_NULL,
454 db_constraint=self.user_db_constraint,
455 )
456 }
457
458 return history_user_fields
459
460 def _get_history_related_field(self, model):
461 if self.related_name:
462 if self.manager_name == self.related_name:
463 raise exceptions.RelatedNameConflictError(
464 "The related name must not be called like the history manager."
465 )
466 return {
467 "history_relation": models.ForeignKey(
468 model,
469 on_delete=models.DO_NOTHING,
470 related_name=self.related_name,
471 db_constraint=False,
472 )
473 }
474 else:
475 return {}
476
477 def get_extra_fields_m2m(self, model, through_model, fields):
478 """Return dict of extra fields added to the m2m historical record model"""
479
480 extra_fields = {
481 "__module__": model.__module__,
482 "__str__": lambda self: "{} as of {}".format(
483 self._meta.verbose_name, self.history.history_date
484 ),
485 "history": models.ForeignKey(
486 model,
487 db_constraint=False,
488 on_delete=models.DO_NOTHING,
489 ),
490 "instance_type": through_model,
491 "m2m_history_id": self._get_history_id_field(),
492 }
493
494 return extra_fields
495
496 def get_extra_fields(self, model, fields):
497 """Return dict of extra fields added to the historical record model"""
498
499 def revert_url(self):
500 """URL for this change in the default admin site."""
501 opts = model._meta
502 app_label, model_name = opts.app_label, opts.model_name
503 return reverse(
504 f"{admin.site.name}:{app_label}_{model_name}_simple_history",
505 args=[getattr(self, opts.pk.attname), self.history_id],
506 )
507
508 def get_instance(self):
509 attrs = {
510 field.attname: getattr(self, field.attname) for field in fields.values()
511 }
512 if self._history_excluded_fields:
513 # We don't add ManyToManyFields to this list because they may cause
514 # the subsequent `.get()` call to fail. See #706 for context.
515 excluded_attnames = [
516 model._meta.get_field(field).attname
517 for field in self._history_excluded_fields
518 if not isinstance(model._meta.get_field(field), ManyToManyField)
519 ]
520 try:
521 values = (
522 model.objects.filter(pk=getattr(self, model._meta.pk.attname))
523 .values(*excluded_attnames)
524 .get()
525 )
526 except ObjectDoesNotExist:
527 pass
528 else:
529 attrs.update(values)
530 result = model(**attrs)
531 # this is the only way external code could know an instance is historical
532 setattr(result, SIMPLE_HISTORY_REVERSE_ATTR_NAME, self)
533 return result
534
535 def get_next_record(self):
536 """
537 Get the next history record for the instance. `None` if last.
538 """
539 history = utils.get_history_manager_from_history(self)
540 return (
541 history.filter(history_date__gt=self.history_date)
542 .order_by("history_date")
543 .first()
544 )
545
546 def get_prev_record(self):
547 """
548 Get the previous history record for the instance. `None` if first.
549 """
550 history = utils.get_history_manager_from_history(self)
551 return (
552 history.filter(history_date__lt=self.history_date)
553 .order_by("history_date")
554 .last()
555 )
556
557 def get_default_history_user(instance):
558 """
559 Returns the user specified by `get_user` method for manually creating
560 historical objects
561 """
562 return self.get_history_user(instance)
563
564 extra_fields = {
565 "history_id": self._get_history_id_field(),
566 "history_date": models.DateTimeField(db_index=self._date_indexing is True),
567 "history_change_reason": self._get_history_change_reason_field(),
568 "history_type": models.CharField(
569 max_length=1,
570 choices=(("+", _("Created")), ("~", _("Changed")), ("-", _("Deleted"))),
571 ),
572 "history_object": HistoricalObjectDescriptor(
573 model, self.fields_included(model)
574 ),
575 "instance": property(get_instance),
576 "instance_type": model,
577 "next_record": property(get_next_record),
578 "prev_record": property(get_prev_record),
579 "revert_url": revert_url,
580 "__str__": lambda self: "{} as of {}".format(
581 self.history_object, self.history_date
582 ),
583 "get_default_history_user": staticmethod(get_default_history_user),
584 }
585
586 extra_fields.update(self._get_history_related_field(model))
587 extra_fields.update(self._get_history_user_fields())
588
589 return extra_fields
590
591 @property
592 def _date_indexing(self):
593 """False, True, or 'composite'; default is True"""
594 result = getattr(settings, "SIMPLE_HISTORY_DATE_INDEX", True)
595 valid = True
596 if isinstance(result, str):
597 result = result.lower()
598 if result not in ("composite",):
599 valid = False
600 elif not isinstance(result, bool):
601 valid = False
602 if not valid:
603 raise ImproperlyConfigured(
604 "SIMPLE_HISTORY_DATE_INDEX must be one of (False, True, 'Composite')"
605 )
606 return result
607
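    # Illustrative settings usage (assumed project settings, not part of this
    # module): SIMPLE_HISTORY_DATE_INDEX accepts False, True, or "composite",
    # the last adding a composite (history_date, pk) index via get_meta_options:
    #     SIMPLE_HISTORY_DATE_INDEX = "composite"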
608 def get_meta_options_m2m(self, through_model):
609 """
610 Returns a dictionary of fields that will be added to
611 the Meta inner class of the m2m historical record model.
612 """
613 name = self.get_history_model_name(through_model)
614
615 meta_fields = {"verbose_name": name}
616
617 if self.app:
618 meta_fields["app_label"] = self.app
619
620 return meta_fields
621
622 def get_meta_options(self, model):
623 """
624 Returns a dictionary of fields that will be added to
625 the Meta inner class of the historical record model.
626 """
627 meta_fields = {
628 "ordering": ("-history_date", "-history_id"),
629 "get_latest_by": ("history_date", "history_id"),
630 }
631 if self.user_set_verbose_name:
632 name = self.user_set_verbose_name
633 else:
634 name = format_lazy("historical {}", smart_str(model._meta.verbose_name))
635 if self.user_set_verbose_name_plural:
636 plural_name = self.user_set_verbose_name_plural
637 else:
638 plural_name = format_lazy(
639 "historical {}", smart_str(model._meta.verbose_name_plural)
640 )
641 meta_fields["verbose_name"] = name
642 meta_fields["verbose_name_plural"] = plural_name
643 if self.app:
644 meta_fields["app_label"] = self.app
645 if self._date_indexing == "composite":
646 meta_fields["indexes"] = (
647 models.Index(fields=("history_date", model._meta.pk.attname)),
648 )
649 return meta_fields
650
651 def post_save(self, instance, created, using=None, **kwargs):
652 if not getattr(settings, "SIMPLE_HISTORY_ENABLED", True):
653 return
654 if not created and hasattr(instance, "skip_history_when_saving"):
655 return
656 if not kwargs.get("raw", False):
657 self.create_historical_record(instance, created and "+" or "~", using=using)
658
659 def post_delete(self, instance, using=None, **kwargs):
660 if not getattr(settings, "SIMPLE_HISTORY_ENABLED", True):
661 return
662 if self.cascade_delete_history:
663 manager = getattr(instance, self.manager_name)
664 manager.using(using).all().delete()
665 else:
666 self.create_historical_record(instance, "-", using=using)
667
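    # Note: the post_save and post_delete handlers above first consult the
    # project-level SIMPLE_HISTORY_ENABLED flag, so history writing for saves
    # and deletes can be switched off globally, e.g. while loading fixtures
    # (assumed settings usage):
    #     SIMPLE_HISTORY_ENABLED = False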
668 def get_change_reason_for_object(self, instance, history_type, using):
669 """
670 Get change reason for object.
671 Customize this method to automatically fill change reason from context.
672 """
673 return utils.get_change_reason_from_object(instance)
674
675 def m2m_changed(self, instance, action, attr, pk_set, reverse, **_):
676 if hasattr(instance, "skip_history_when_saving"):
677 return
678
679 if action in ("post_add", "post_remove", "post_clear"):
680 # It should be safe to ~ this since the row must exist to modify m2m on it
681 self.create_historical_record(instance, "~")
682
683 def create_historical_record_m2ms(self, history_instance, instance):
684 for field in history_instance._history_m2m_fields:
685 m2m_history_model = self.m2m_models[field]
686 original_instance = history_instance.instance
687 through_model = getattr(original_instance, field.name).through
688
689 insert_rows = []
690
691 through_field_name = utils.get_m2m_field_name(field)
692 rows = through_model.objects.filter(**{through_field_name: instance})
693 for row in rows:
694 insert_row = {"history": history_instance}
695
696 for through_model_field in through_model._meta.fields:
697 insert_row[through_model_field.name] = getattr(
698 row, through_model_field.name
699 )
700 insert_rows.append(m2m_history_model(**insert_row))
701
702 pre_create_historical_m2m_records.send(
703 sender=m2m_history_model,
704 rows=insert_rows,
705 history_instance=history_instance,
706 instance=instance,
707 field=field,
708 )
709 created_rows = m2m_history_model.objects.bulk_create(insert_rows)
710 post_create_historical_m2m_records.send(
711 sender=m2m_history_model,
712 created_rows=created_rows,
713 history_instance=history_instance,
714 instance=instance,
715 field=field,
716 )
717
718 def create_historical_record(self, instance, history_type, using=None):
719 using = using if self.use_base_model_db else None
720 history_date = getattr(instance, "_history_date", timezone.now())
721 history_user = self.get_history_user(instance)
722 history_change_reason = self.get_change_reason_for_object(
723 instance, history_type, using
724 )
725 manager = getattr(instance, self.manager_name)
726
727 attrs = {}
728 for field in self.fields_included(instance):
729 attrs[field.attname] = getattr(instance, field.attname)
730
731 relation_field = getattr(manager.model, "history_relation", None)
732 if relation_field is not None:
733 attrs["history_relation"] = instance
734
735 history_instance = manager.model(
736 history_date=history_date,
737 history_type=history_type,
738 history_user=history_user,
739 history_change_reason=history_change_reason,
740 **attrs,
741 )
742
743 pre_create_historical_record.send(
744 sender=manager.model,
745 instance=instance,
746 history_date=history_date,
747 history_user=history_user,
748 history_change_reason=history_change_reason,
749 history_instance=history_instance,
750 using=using,
751 )
752
753 history_instance.save(using=using)
754 self.create_historical_record_m2ms(history_instance, instance)
755
756 post_create_historical_record.send(
757 sender=manager.model,
758 instance=instance,
759 history_instance=history_instance,
760 history_date=history_date,
761 history_user=history_user,
762 history_change_reason=history_change_reason,
763 using=using,
764 )
765
766 def get_history_user(self, instance):
767 """Get the modifying user from instance or middleware."""
768 try:
769 return instance._history_user
770 except AttributeError:
771 request = None
772 try:
773 if self.context.request.user.is_authenticated:
774 request = self.context.request
775 except AttributeError:
776 pass
777
778 return self.get_user(instance=instance, request=request)
779
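    # Illustrative override (assumed calling code): setting `_history_user` on
    # the instance before saving attributes the historical record to that user
    # explicitly, bypassing the request/middleware lookup above:
    #     poll._history_user = some_user
    #     poll.save()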
780 def get_m2m_fields_from_model(self, model):
781 m2m_fields = set(self.m2m_fields)
782 try:
783 m2m_fields.update(getattr(model, self.m2m_fields_model_field_name))
784 except AttributeError:
785 pass
786 field_names = [
787 field if isinstance(field, str) else field.name for field in m2m_fields
788 ]
789 return [getattr(model, field_name).field for field_name in field_names]
790
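    # Illustrative registration of m2m tracking (assumed model definition):
    # fields may be passed to the constructor or listed on the model under the
    # attribute named by `m2m_fields_model_field_name`:
    #     class Poll(models.Model):
    #         places = models.ManyToManyField("Place")
    #         history = HistoricalRecords(m2m_fields=[places])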
791
792 def transform_field(field):
793 """Customize field appropriately for use in historical model"""
794 field.name = field.attname
795 if isinstance(field, models.BigAutoField):
796 field.__class__ = models.BigIntegerField
797 elif isinstance(field, models.AutoField):
798 field.__class__ = models.IntegerField
799
800 elif isinstance(field, models.FileField):
801 # Don't copy file, just path.
802 if getattr(settings, "SIMPLE_HISTORY_FILEFIELD_TO_CHARFIELD", False):
803 field.__class__ = models.CharField
804 else:
805 field.__class__ = models.TextField
806
807 # Historical instance shouldn't change create/update timestamps
808 field.auto_now = False
809 field.auto_now_add = False
810 # Just setting db_collation explicitly since we're not using
811 # field.deconstruct() here
812 field.db_collation = None
813
814 if field.primary_key or field.unique:
815 # Unique fields can no longer be guaranteed unique,
816 # but they should still be indexed for faster lookups.
817 field.primary_key = False
818 # DEV: Remove this check (but keep the contents) when the minimum required
819 # Django version is 5.1
820 if django.VERSION >= (5, 1):
821 field.unique = False
822 # (Django < 5.1) Can't set `unique` as it's a property, so set the backing field
823 # (Django >= 5.1) Set the backing field in addition to the cached property
824 # above, to cover all bases
825 field._unique = False
826 field.db_index = True
827 field.serialize = True
828
829
830 class HistoricForwardManyToOneDescriptor(ForwardManyToOneDescriptor):
831 """
832 Overrides get_queryset to provide historic query support, should the
833 instance be historic (and therefore was generated by a timepoint query)
834 and the other side of the relation also uses a history manager.
835 """
836
837 def get_queryset(self, **hints) -> QuerySet:
838 instance = hints.get("instance")
839 if instance:
840 history = getattr(instance, SIMPLE_HISTORY_REVERSE_ATTR_NAME, None)
841 histmgr = getattr(
842 self.field.remote_field.model,
843 getattr(
844 self.field.remote_field.model._meta,
845 "simple_history_manager_attribute",
846 "_notthere",
847 ),
848 None,
849 )
850 if history and histmgr:
851 return histmgr.as_of(getattr(history, "_as_of", history.history_date))
852 return super().get_queryset(**hints)
853
854
855 class HistoricReverseManyToOneDescriptor(ReverseManyToOneDescriptor):
856 """
857 Overrides get_queryset to provide historic query support, should the
858 instance be historic (and therefore was generated by a timepoint query)
859 and the other side of the relation also uses a history manager.
860 """
861
862 @cached_property
863 def related_manager_cls(self):
864 related_model = self.rel.related_model
865
866 class HistoricRelationModelManager(related_model._default_manager.__class__):
867 def get_queryset(self):
868 try:
869 return self.instance._prefetched_objects_cache[
870 self.field.remote_field.get_cache_name()
871 ]
872 except (AttributeError, KeyError):
873 history = getattr(
874 self.instance, SIMPLE_HISTORY_REVERSE_ATTR_NAME, None
875 )
876 histmgr = getattr(
877 self.model,
878 getattr(
879 self.model._meta,
880 "simple_history_manager_attribute",
881 "_notthere",
882 ),
883 None,
884 )
885 if history and histmgr:
886 queryset = histmgr.as_of(
887 getattr(history, "_as_of", history.history_date)
888 )
889 else:
890 queryset = super().get_queryset()
891 return self._apply_rel_filters(queryset)
892
893 return create_reverse_many_to_one_manager(
894 HistoricRelationModelManager, self.rel
895 )
896
897
898 class HistoricForeignKey(ForeignKey):
899 """
900 Allows foreign keys to work properly from a historic instance.
901
902 If you use as_of queries to extract historical instances from
903 a model, and you have other models that are related by foreign
904 key and also historic, changing them to a HistoricForeignKey
905 field type will allow you to naturally cross the relationship
906 boundary at the same point in time as the origin instance.
907
908 A historic instance maintains an attribute ("_historic") when
909 it is historic, holding the historic record instance and the
910 timepoint used to query it ("_as_of"). HistoricForeignKey
911 looks for this and uses an as_of query against the related
912 object so the relationship is assessed at the same timepoint.
913 """
914
915 forward_related_accessor_class = HistoricForwardManyToOneDescriptor
916 related_accessor_class = HistoricReverseManyToOneDescriptor
917
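# Illustrative usage (model names are assumptions, not defined in this module):
#     class Question(models.Model):
#         history = HistoricalRecords()
#
#     class Choice(models.Model):
#         question = HistoricForeignKey(Question, on_delete=models.CASCADE)
#         history = HistoricalRecords()
#
# Accessing `choice.question` on a Choice obtained from an as_of() query then
# resolves the related Question at the same timepoint, as described above.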
918
919 def is_historic(instance):
920 """
921 Returns True if the instance was acquired with an as_of timepoint.
922 """
923 return to_historic(instance) is not None
924
925
926 def to_historic(instance):
927 """
928 Returns a historic model instance if the instance was acquired with
929 an as_of timepoint, or None.
930 """
931 return getattr(instance, SIMPLE_HISTORY_REVERSE_ATTR_NAME, None)
932
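# Illustrative checks (the `Poll` model and `last_week` timepoint are
# assumptions):
#     current = Poll.objects.get(pk=1)
#     past = Poll.history.as_of(last_week).get(pk=1)
#     is_historic(current)   # False
#     is_historic(past)      # True
#     to_historic(past)      # the underlying HistoricalPoll record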
933
934 class HistoricalObjectDescriptor:
935 def __init__(self, model, fields_included):
936 self.model = model
937 self.fields_included = fields_included
938
939 def __get__(self, instance, owner):
940 if instance is None:
941 return self
942 values = {f.attname: getattr(instance, f.attname) for f in self.fields_included}
943 return self.model(**values)
944
945
946 class HistoricalChanges(ModelTypeHint):
947 def diff_against(
948 self,
949 old_history: "HistoricalChanges",
950 excluded_fields: Iterable[str] = None,
951 included_fields: Iterable[str] = None,
952 *,
953 foreign_keys_are_objs=False,
954 ) -> "ModelDelta":
955 """
956 :param old_history:
957 :param excluded_fields: The names of fields to exclude from diffing.
958 This takes precedence over ``included_fields``.
959 :param included_fields: The names of the only fields to include when diffing.
960 If not provided, all history-tracked fields will be included.
961 :param foreign_keys_are_objs: If ``False``, the returned diff will only contain
962 the raw PKs of any ``ForeignKey`` fields.
963 If ``True``, the diff will contain the actual related model objects
964 instead of just the PKs; deleted related objects will be instances of
965 ``DeletedObject``.
966 Note that passing ``True`` will necessarily query the database if the
967 related objects have not been prefetched (using e.g.
968 ``select_related()``).
969 """
970 if not isinstance(old_history, type(self)):
971 raise TypeError(
972 "unsupported type(s) for diffing:"
973 f" '{type(self)}' and '{type(old_history)}'"
974 )
975 if excluded_fields is None:
976 excluded_fields = set()
977
978 included_m2m_fields = {field.name for field in old_history._history_m2m_fields}
979 if included_fields is None:
980 included_fields = {f.name for f in old_history.tracked_fields if f.editable}
981 else:
982 included_m2m_fields = included_m2m_fields.intersection(included_fields)
983
984 fields = (
985 set(included_fields)
986 .difference(included_m2m_fields)
987 .difference(excluded_fields)
988 )
989 m2m_fields = set(included_m2m_fields).difference(excluded_fields)
990
991 changes = [
992 *self._get_field_changes_for_diff(
993 old_history, fields, foreign_keys_are_objs
994 ),
995 *self._get_m2m_field_changes_for_diff(
996 old_history, m2m_fields, foreign_keys_are_objs
997 ),
998 ]
999 # Sort by field (attribute) name, to ensure a consistent order
1000 changes.sort(key=lambda change: change.field)
1001 changed_fields = [change.field for change in changes]
1002 return ModelDelta(changes, changed_fields, old_history, self)
1003
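    # Illustrative usage of diff_against() (the `poll` history records are
    # assumptions):
    #     latest, previous = poll.history.all()[:2]
    #     delta = latest.diff_against(previous)
    #     for change in delta.changes:
    #         print(f"{change.field}: {change.old} -> {change.new}")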
1004 def _get_field_changes_for_diff(
1005 self,
1006 old_history: "HistoricalChanges",
1007 fields: Iterable[str],
1008 foreign_keys_are_objs: bool,
1009 ) -> List["ModelChange"]:
1010 """Helper method for ``diff_against()``."""
1011 changes = []
1012
1013 old_values = model_to_dict(old_history, fields=fields)
1014 new_values = model_to_dict(self, fields=fields)
1015
1016 for field in fields:
1017 old_value = old_values[field]
1018 new_value = new_values[field]
1019
1020 if old_value != new_value:
1021 field_meta = self._meta.get_field(field)
1022 if foreign_keys_are_objs and isinstance(field_meta, ForeignKey):
1023 # Set the fields to their related model objects instead of
1024 # the raw PKs from `model_to_dict()`
1025 def get_value(record, foreign_key):
1026 try:
1027 value = getattr(record, field)
1028 # `value` seems to be None (without raising this exception)
1029 # if the object has not been refreshed from the database
1030 except ObjectDoesNotExist:
1031 value = None
1032
1033 if value is None:
1034 value = DeletedObject(field_meta.related_model, foreign_key)
1035 return value
1036
1037 old_value = get_value(old_history, old_value)
1038 new_value = get_value(self, new_value)
1039
1040 change = ModelChange(field, old_value, new_value)
1041 changes.append(change)
1042
1043 return changes
1044
1045 def _get_m2m_field_changes_for_diff(
1046 self,
1047 old_history: "HistoricalChanges",
1048 m2m_fields: Iterable[str],
1049 foreign_keys_are_objs: bool,
1050 ) -> List["ModelChange"]:
1051 """Helper method for ``diff_against()``."""
1052 changes = []
1053
1054 for field in m2m_fields:
1055 original_field_meta = self.instance_type._meta.get_field(field)
1056 reverse_field_name = utils.get_m2m_reverse_field_name(original_field_meta)
1057 # Sort the M2M rows by the related object, to ensure a consistent order
1058 old_m2m_qs = getattr(old_history, field).order_by(reverse_field_name)
1059 new_m2m_qs = getattr(self, field).order_by(reverse_field_name)
1060 m2m_through_model_opts = new_m2m_qs.model._meta
1061
1062 # Create a list of field names to compare against.
1063 # The list is generated without the PK of the intermediate (through)
1064 # table, the foreign key to the history record, and the actual `history`
1065 # field, to avoid false positives while diffing.
1066 through_model_fields = [
1067 f.name
1068 for f in m2m_through_model_opts.fields
1069 if f.editable and f.name not in ["id", "m2m_history_id", "history"]
1070 ]
1071 old_rows = list(old_m2m_qs.values(*through_model_fields))
1072 new_rows = list(new_m2m_qs.values(*through_model_fields))
1073
1074 if old_rows != new_rows:
1075 if foreign_keys_are_objs:
1076 fk_fields = [
1077 f
1078 for f in through_model_fields
1079 if isinstance(m2m_through_model_opts.get_field(f), ForeignKey)
1080 ]
1081
1082 # Set the through fields to their related model objects instead of
1083 # the raw PKs from `values()`
1084 def rows_with_foreign_key_objs(m2m_qs):
1085 def get_value(obj, through_field):
1086 try:
1087 value = getattr(obj, through_field)
1088 # If the related object has been deleted, `value` seems to
1089 # usually already be None instead of raising this exception
1090 except ObjectDoesNotExist:
1091 value = None
1092
1093 if value is None:
1094 meta = m2m_through_model_opts.get_field(through_field)
1095 foreign_key = getattr(obj, meta.attname)
1096 value = DeletedObject(meta.related_model, foreign_key)
1097 return value
1098
1099 # Replicate the format of the return value of QuerySet.values()
1100 return [
1101 {
1102 through_field: get_value(through_obj, through_field)
1103 for through_field in through_model_fields
1104 }
1105 for through_obj in m2m_qs.select_related(*fk_fields)
1106 ]
1107
1108 old_rows = rows_with_foreign_key_objs(old_m2m_qs)
1109 new_rows = rows_with_foreign_key_objs(new_m2m_qs)
1110
1111 change = ModelChange(field, old_rows, new_rows)
1112 changes.append(change)
1113
1114 return changes
1115
1116
1117 @dataclass(frozen=True)
1118 class DeletedObject:
1119 model: Type[models.Model]
1120 pk: Any
1121
1122 def __str__(self):
1123 deleted_model_str = _("Deleted %(type_name)s") % {
1124 "type_name": self.model._meta.verbose_name,
1125 }
1126 return f"{deleted_model_str} (pk={self.pk})"
1127
1128
1129 # Either:
1130 # - The value of a foreign key field:
1131 # - If ``foreign_keys_are_objs=True`` is passed to ``diff_against()``:
1132 # Either the related object or ``DeletedObject``.
1133 # - Otherwise:
1134 # The PK of the related object.
1135 #
1136 # - The value of a many-to-many field:
1137 # A list of dicts from the through model field names to either:
1138 # - If ``foreign_keys_are_objs=True`` is passed to ``diff_against()``:
1139 # Either the through model's related objects or ``DeletedObject``.
1140 # - Otherwise:
1141 # The PK of the through model's related objects.
1142 #
1143 # - Any of the other possible values of a model field.
1144 ModelChangeValue = Union[Any, DeletedObject, List[Dict[str, Union[Any, DeletedObject]]]]
1145
1146
1147 @dataclass(frozen=True)
1148 class ModelChange:
1149 field: str
1150 old: ModelChangeValue
1151 new: ModelChangeValue
1152
1153
1154 @dataclass(frozen=True)
1155 class ModelDelta:
1156 changes: Sequence[ModelChange]
1157 changed_fields: Sequence[str]
1158 old_record: HistoricalChanges
1159 new_record: HistoricalChanges
1160
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/simple_history/models.py b/simple_history/models.py
--- a/simple_history/models.py
+++ b/simple_history/models.py
@@ -673,6 +673,8 @@
return utils.get_change_reason_from_object(instance)
def m2m_changed(self, instance, action, attr, pk_set, reverse, **_):
+ if not getattr(settings, "SIMPLE_HISTORY_ENABLED", True):
+ return
if hasattr(instance, "skip_history_when_saving"):
return
| {"golden_diff": "diff --git a/simple_history/models.py b/simple_history/models.py\n--- a/simple_history/models.py\n+++ b/simple_history/models.py\n@@ -673,6 +673,8 @@\n return utils.get_change_reason_from_object(instance)\n \n def m2m_changed(self, instance, action, attr, pk_set, reverse, **_):\n+ if not getattr(settings, \"SIMPLE_HISTORY_ENABLED\", True):\n+ return\n if hasattr(instance, \"skip_history_when_saving\"):\n return\n", "issue": "m2m historical records are saved even when SIMPLE_HISTORY_ENABLED = False\n**Describe the bug**\r\nm2m relationships ignores the SIMPLE_HISTORY_ENABLED = False, even when disabled the library is still saving the historical record.\r\nThis is causing me some issue when loading Django fixture using loaddata\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. set SIMPLE_HISTORY_ENABLED = False\r\n2. save a m2m relationship\r\n3. you will see the record in the historical DB\r\n\r\n\r\n**Expected behavior**\r\nIf SIMPLE_HISTORY_ENABLED = False no historical record should be created for m2m relationship\r\n\r\n**Screenshots**\r\nIf applicable, add screenshots to help explain your problem.\r\n\r\n**Environment (please complete the following information):**\r\n - OS: Ubuntu 22\r\n - Browser (if applicable): [e.g. chrome, safari]\r\n - Django Simple History Version: [e.g. 1.9.1]\r\n - Django Version: [e.g. 1.11.11]\r\n - Database Version: PostgreSQL 15\r\n\n", "before_files": [{"content": "import copy\nimport importlib\nimport uuid\nimport warnings\nfrom dataclasses import dataclass\nfrom functools import partial\nfrom typing import TYPE_CHECKING, Any, Dict, Iterable, List, Sequence, Type, Union\n\nimport django\nfrom django.apps import apps\nfrom django.conf import settings\nfrom django.contrib import admin\nfrom django.contrib.auth import get_user_model\nfrom django.core.exceptions import ImproperlyConfigured, ObjectDoesNotExist\nfrom django.db import models\nfrom django.db.models import ManyToManyField\nfrom django.db.models.fields.proxy import OrderWrt\nfrom django.db.models.fields.related import ForeignKey\nfrom django.db.models.fields.related_descriptors import (\n ForwardManyToOneDescriptor,\n ReverseManyToOneDescriptor,\n create_reverse_many_to_one_manager,\n)\nfrom django.db.models.query import QuerySet\nfrom django.db.models.signals import m2m_changed\nfrom django.forms.models import model_to_dict\nfrom django.urls import reverse\nfrom django.utils import timezone\nfrom django.utils.encoding import smart_str\nfrom django.utils.functional import cached_property\nfrom django.utils.text import format_lazy\nfrom django.utils.translation import gettext_lazy as _\n\nfrom . 
import exceptions, utils\nfrom .manager import (\n SIMPLE_HISTORY_REVERSE_ATTR_NAME,\n HistoricalQuerySet,\n HistoryDescriptor,\n HistoryManager,\n)\nfrom .signals import (\n post_create_historical_m2m_records,\n post_create_historical_record,\n pre_create_historical_m2m_records,\n pre_create_historical_record,\n)\n\ntry:\n from asgiref.local import Local as LocalContext\nexcept ImportError:\n from threading import local as LocalContext\n\nif TYPE_CHECKING:\n ModelTypeHint = models.Model\nelse:\n ModelTypeHint = object\n\nregistered_models = {}\n\n\ndef _default_get_user(request, **kwargs):\n try:\n return request.user\n except AttributeError:\n return None\n\n\ndef _history_user_getter(historical_instance):\n if historical_instance.history_user_id is None:\n return None\n User = get_user_model()\n try:\n return User.objects.get(pk=historical_instance.history_user_id)\n except User.DoesNotExist:\n return None\n\n\ndef _history_user_setter(historical_instance, user):\n if user is not None:\n historical_instance.history_user_id = user.pk\n\n\nclass HistoricalRecords:\n DEFAULT_MODEL_NAME_PREFIX = \"Historical\"\n\n thread = context = LocalContext() # retain thread for backwards compatibility\n m2m_models = {}\n\n def __init__(\n self,\n verbose_name=None,\n verbose_name_plural=None,\n bases=(models.Model,),\n user_related_name=\"+\",\n table_name=None,\n inherit=False,\n excluded_fields=None,\n history_id_field=None,\n history_change_reason_field=None,\n user_model=None,\n get_user=_default_get_user,\n cascade_delete_history=False,\n custom_model_name=None,\n app=None,\n history_user_id_field=None,\n history_user_getter=_history_user_getter,\n history_user_setter=_history_user_setter,\n related_name=None,\n use_base_model_db=False,\n user_db_constraint=True,\n no_db_index=list(),\n excluded_field_kwargs=None,\n history_manager=HistoryManager,\n historical_queryset=HistoricalQuerySet,\n m2m_fields=(),\n m2m_fields_model_field_name=\"_history_m2m_fields\",\n m2m_bases=(models.Model,),\n ):\n self.user_set_verbose_name = verbose_name\n self.user_set_verbose_name_plural = verbose_name_plural\n self.user_related_name = user_related_name\n self.user_db_constraint = user_db_constraint\n self.table_name = table_name\n self.inherit = inherit\n self.history_id_field = history_id_field\n self.history_change_reason_field = history_change_reason_field\n self.user_model = user_model\n self.get_user = get_user\n self.cascade_delete_history = cascade_delete_history\n self.custom_model_name = custom_model_name\n self.app = app\n self.user_id_field = history_user_id_field\n self.user_getter = history_user_getter\n self.user_setter = history_user_setter\n self.related_name = related_name\n self.use_base_model_db = use_base_model_db\n self.history_manager = history_manager\n self.historical_queryset = historical_queryset\n self.m2m_fields = m2m_fields\n self.m2m_fields_model_field_name = m2m_fields_model_field_name\n\n if isinstance(no_db_index, str):\n no_db_index = [no_db_index]\n self.no_db_index = no_db_index\n\n if excluded_fields is None:\n excluded_fields = []\n self.excluded_fields = excluded_fields\n\n if excluded_field_kwargs is None:\n excluded_field_kwargs = {}\n self.excluded_field_kwargs = excluded_field_kwargs\n try:\n if isinstance(bases, str):\n raise TypeError\n self.bases = (HistoricalChanges,) + tuple(bases)\n except TypeError:\n raise TypeError(\"The `bases` option must be a list or a tuple.\")\n try:\n if isinstance(m2m_bases, str):\n raise TypeError\n self.m2m_bases = (HistoricalChanges,) 
+ tuple(m2m_bases)\n except TypeError:\n raise TypeError(\"The `m2m_bases` option must be a list or a tuple.\")\n\n def contribute_to_class(self, cls, name):\n self.manager_name = name\n self.module = cls.__module__\n self.cls = cls\n models.signals.class_prepared.connect(self.finalize, weak=False)\n self.add_extra_methods(cls)\n\n if cls._meta.abstract and not self.inherit:\n msg = (\n \"HistoricalRecords added to abstract model ({}) without \"\n \"inherit=True\".format(self.cls.__name__)\n )\n warnings.warn(msg, UserWarning)\n\n def add_extra_methods(self, cls):\n def save_without_historical_record(self, *args, **kwargs):\n \"\"\"\n Save model without saving a historical record\n\n Make sure you know what you're doing before you use this method.\n \"\"\"\n self.skip_history_when_saving = True\n try:\n ret = self.save(*args, **kwargs)\n finally:\n del self.skip_history_when_saving\n return ret\n\n setattr(cls, \"save_without_historical_record\", save_without_historical_record)\n\n def finalize(self, sender, **kwargs):\n inherited = False\n if self.cls is not sender: # set in concrete\n inherited = self.inherit and issubclass(sender, self.cls)\n if not inherited:\n return # set in abstract\n\n if hasattr(sender._meta, \"simple_history_manager_attribute\"):\n raise exceptions.MultipleRegistrationsError(\n \"{}.{} registered multiple times for history tracking.\".format(\n sender._meta.app_label, sender._meta.object_name\n )\n )\n history_model = self.create_history_model(sender, inherited)\n\n if inherited:\n # Make sure history model is in same module as concrete model\n module = importlib.import_module(history_model.__module__)\n else:\n module = importlib.import_module(self.module)\n setattr(module, history_model.__name__, history_model)\n\n # The HistoricalRecords object will be discarded,\n # so the signal handlers can't use weak references.\n models.signals.post_save.connect(self.post_save, sender=sender, weak=False)\n models.signals.post_delete.connect(self.post_delete, sender=sender, weak=False)\n\n m2m_fields = self.get_m2m_fields_from_model(sender)\n\n for field in m2m_fields:\n m2m_changed.connect(\n partial(self.m2m_changed, attr=field.name),\n sender=field.remote_field.through,\n weak=False,\n )\n\n descriptor = HistoryDescriptor(\n history_model,\n manager=self.history_manager,\n queryset=self.historical_queryset,\n )\n setattr(sender, self.manager_name, descriptor)\n sender._meta.simple_history_manager_attribute = self.manager_name\n\n for field in m2m_fields:\n m2m_model = self.create_history_m2m_model(\n history_model, field.remote_field.through\n )\n self.m2m_models[field] = m2m_model\n\n setattr(module, m2m_model.__name__, m2m_model)\n\n m2m_descriptor = HistoryDescriptor(m2m_model)\n setattr(history_model, field.name, m2m_descriptor)\n\n def get_history_model_name(self, model):\n if not self.custom_model_name:\n return f\"{self.DEFAULT_MODEL_NAME_PREFIX}{model._meta.object_name}\"\n # Must be trying to use a custom history model name\n if callable(self.custom_model_name):\n name = self.custom_model_name(model._meta.object_name)\n else:\n # simple string\n name = self.custom_model_name\n # Desired class name cannot be same as the model it is tracking\n if not (\n name.lower() == model._meta.object_name.lower()\n and model.__module__ == self.module\n ):\n return name\n raise ValueError(\n \"The 'custom_model_name' option '{}' evaluates to a name that is the same \"\n \"as the model it is tracking. 
This is not permitted.\".format(\n self.custom_model_name\n )\n )\n\n def create_history_m2m_model(self, model, through_model):\n attrs = {}\n\n fields = self.copy_fields(through_model)\n attrs.update(fields)\n attrs.update(self.get_extra_fields_m2m(model, through_model, fields))\n\n name = self.get_history_model_name(through_model)\n registered_models[through_model._meta.db_table] = through_model\n\n attrs.update(Meta=type(\"Meta\", (), self.get_meta_options_m2m(through_model)))\n\n m2m_history_model = type(str(name), self.m2m_bases, attrs)\n\n return m2m_history_model\n\n def create_history_model(self, model, inherited):\n \"\"\"\n Creates a historical model to associate with the model provided.\n \"\"\"\n attrs = {\n \"__module__\": self.module,\n \"_history_excluded_fields\": self.excluded_fields,\n \"_history_m2m_fields\": self.get_m2m_fields_from_model(model),\n \"tracked_fields\": self.fields_included(model),\n }\n\n app_module = \"%s.models\" % model._meta.app_label\n\n if inherited:\n # inherited use models module\n attrs[\"__module__\"] = model.__module__\n elif model.__module__ != self.module:\n # registered under different app\n attrs[\"__module__\"] = self.module\n elif app_module != self.module:\n # Abuse an internal API because the app registry is loading.\n app = apps.app_configs[model._meta.app_label]\n models_module = app.name\n attrs[\"__module__\"] = models_module\n\n fields = self.copy_fields(model)\n attrs.update(fields)\n attrs.update(self.get_extra_fields(model, fields))\n # type in python2 wants str as a first argument\n attrs.update(Meta=type(\"Meta\", (), self.get_meta_options(model)))\n if not inherited and self.table_name is not None:\n attrs[\"Meta\"].db_table = self.table_name\n\n # Set as the default then check for overrides\n name = self.get_history_model_name(model)\n\n registered_models[model._meta.db_table] = model\n history_model = type(str(name), self.bases, attrs)\n return history_model\n\n def fields_included(self, model):\n fields = []\n for field in model._meta.fields:\n if field.name not in self.excluded_fields:\n fields.append(field)\n return fields\n\n def field_excluded_kwargs(self, field):\n \"\"\"\n Find the excluded kwargs for a given field.\n \"\"\"\n return self.excluded_field_kwargs.get(field.name, set())\n\n def copy_fields(self, model):\n \"\"\"\n Creates copies of the model's original fields, returning\n a dictionary mapping field name to copied field object.\n \"\"\"\n fields = {}\n for field in self.fields_included(model):\n field = copy.copy(field)\n field.remote_field = copy.copy(field.remote_field)\n if isinstance(field, OrderWrt):\n # OrderWrt is a proxy field, switch to a plain IntegerField\n field.__class__ = models.IntegerField\n if isinstance(field, models.ForeignKey):\n old_field = field\n old_swappable = old_field.swappable\n old_field.swappable = False\n try:\n _name, _path, args, field_args = old_field.deconstruct()\n finally:\n old_field.swappable = old_swappable\n if getattr(old_field, \"one_to_one\", False) or isinstance(\n old_field, models.OneToOneField\n ):\n FieldType = models.ForeignKey\n else:\n FieldType = type(old_field)\n\n # Remove any excluded kwargs for the field.\n # This is useful when a custom OneToOneField is being used that\n # has a different set of arguments than ForeignKey\n for exclude_arg in self.field_excluded_kwargs(old_field):\n field_args.pop(exclude_arg, None)\n\n # If field_args['to'] is 'self' then we have a case where the object\n # has a foreign key to itself. 
If we pass the historical record's\n # field to = 'self', the foreign key will point to an historical\n # record rather than the base record. We can use old_field.model here.\n if field_args.get(\"to\", None) == \"self\":\n field_args[\"to\"] = old_field.model\n\n # Override certain arguments passed when creating the field\n # so that they work for the historical field.\n field_args.update(\n db_constraint=False,\n related_name=\"+\",\n null=True,\n blank=True,\n primary_key=False,\n db_index=True,\n serialize=True,\n unique=False,\n on_delete=models.DO_NOTHING,\n )\n field = FieldType(*args, **field_args)\n field.name = old_field.name\n else:\n transform_field(field)\n\n # drop db index\n if field.name in self.no_db_index:\n field.db_index = False\n\n fields[field.name] = field\n return fields\n\n def _get_history_change_reason_field(self):\n if self.history_change_reason_field:\n # User specific field from init\n history_change_reason_field = self.history_change_reason_field\n elif getattr(\n settings, \"SIMPLE_HISTORY_HISTORY_CHANGE_REASON_USE_TEXT_FIELD\", False\n ):\n # Use text field with no max length, not enforced by DB anyways\n history_change_reason_field = models.TextField(null=True)\n else:\n # Current default, with max length\n history_change_reason_field = models.CharField(max_length=100, null=True)\n\n return history_change_reason_field\n\n def _get_history_id_field(self):\n if self.history_id_field:\n history_id_field = self.history_id_field.clone()\n history_id_field.primary_key = True\n history_id_field.editable = False\n elif getattr(settings, \"SIMPLE_HISTORY_HISTORY_ID_USE_UUID\", False):\n history_id_field = models.UUIDField(\n primary_key=True, default=uuid.uuid4, editable=False\n )\n else:\n history_id_field = models.AutoField(primary_key=True)\n\n return history_id_field\n\n def _get_history_user_fields(self):\n if self.user_id_field is not None:\n # Tracking user using explicit id rather than Django ForeignKey\n history_user_fields = {\n \"history_user\": property(self.user_getter, self.user_setter),\n \"history_user_id\": self.user_id_field,\n }\n else:\n user_model = self.user_model or getattr(\n settings, \"AUTH_USER_MODEL\", \"auth.User\"\n )\n\n history_user_fields = {\n \"history_user\": models.ForeignKey(\n user_model,\n null=True,\n related_name=self.user_related_name,\n on_delete=models.SET_NULL,\n db_constraint=self.user_db_constraint,\n )\n }\n\n return history_user_fields\n\n def _get_history_related_field(self, model):\n if self.related_name:\n if self.manager_name == self.related_name:\n raise exceptions.RelatedNameConflictError(\n \"The related name must not be called like the history manager.\"\n )\n return {\n \"history_relation\": models.ForeignKey(\n model,\n on_delete=models.DO_NOTHING,\n related_name=self.related_name,\n db_constraint=False,\n )\n }\n else:\n return {}\n\n def get_extra_fields_m2m(self, model, through_model, fields):\n \"\"\"Return dict of extra fields added to the m2m historical record model\"\"\"\n\n extra_fields = {\n \"__module__\": model.__module__,\n \"__str__\": lambda self: \"{} as of {}\".format(\n self._meta.verbose_name, self.history.history_date\n ),\n \"history\": models.ForeignKey(\n model,\n db_constraint=False,\n on_delete=models.DO_NOTHING,\n ),\n \"instance_type\": through_model,\n \"m2m_history_id\": self._get_history_id_field(),\n }\n\n return extra_fields\n\n def get_extra_fields(self, model, fields):\n \"\"\"Return dict of extra fields added to the historical record model\"\"\"\n\n def revert_url(self):\n 
\"\"\"URL for this change in the default admin site.\"\"\"\n opts = model._meta\n app_label, model_name = opts.app_label, opts.model_name\n return reverse(\n f\"{admin.site.name}:{app_label}_{model_name}_simple_history\",\n args=[getattr(self, opts.pk.attname), self.history_id],\n )\n\n def get_instance(self):\n attrs = {\n field.attname: getattr(self, field.attname) for field in fields.values()\n }\n if self._history_excluded_fields:\n # We don't add ManyToManyFields to this list because they may cause\n # the subsequent `.get()` call to fail. See #706 for context.\n excluded_attnames = [\n model._meta.get_field(field).attname\n for field in self._history_excluded_fields\n if not isinstance(model._meta.get_field(field), ManyToManyField)\n ]\n try:\n values = (\n model.objects.filter(pk=getattr(self, model._meta.pk.attname))\n .values(*excluded_attnames)\n .get()\n )\n except ObjectDoesNotExist:\n pass\n else:\n attrs.update(values)\n result = model(**attrs)\n # this is the only way external code could know an instance is historical\n setattr(result, SIMPLE_HISTORY_REVERSE_ATTR_NAME, self)\n return result\n\n def get_next_record(self):\n \"\"\"\n Get the next history record for the instance. `None` if last.\n \"\"\"\n history = utils.get_history_manager_from_history(self)\n return (\n history.filter(history_date__gt=self.history_date)\n .order_by(\"history_date\")\n .first()\n )\n\n def get_prev_record(self):\n \"\"\"\n Get the previous history record for the instance. `None` if first.\n \"\"\"\n history = utils.get_history_manager_from_history(self)\n return (\n history.filter(history_date__lt=self.history_date)\n .order_by(\"history_date\")\n .last()\n )\n\n def get_default_history_user(instance):\n \"\"\"\n Returns the user specified by `get_user` method for manually creating\n historical objects\n \"\"\"\n return self.get_history_user(instance)\n\n extra_fields = {\n \"history_id\": self._get_history_id_field(),\n \"history_date\": models.DateTimeField(db_index=self._date_indexing is True),\n \"history_change_reason\": self._get_history_change_reason_field(),\n \"history_type\": models.CharField(\n max_length=1,\n choices=((\"+\", _(\"Created\")), (\"~\", _(\"Changed\")), (\"-\", _(\"Deleted\"))),\n ),\n \"history_object\": HistoricalObjectDescriptor(\n model, self.fields_included(model)\n ),\n \"instance\": property(get_instance),\n \"instance_type\": model,\n \"next_record\": property(get_next_record),\n \"prev_record\": property(get_prev_record),\n \"revert_url\": revert_url,\n \"__str__\": lambda self: \"{} as of {}\".format(\n self.history_object, self.history_date\n ),\n \"get_default_history_user\": staticmethod(get_default_history_user),\n }\n\n extra_fields.update(self._get_history_related_field(model))\n extra_fields.update(self._get_history_user_fields())\n\n return extra_fields\n\n @property\n def _date_indexing(self):\n \"\"\"False, True, or 'composite'; default is True\"\"\"\n result = getattr(settings, \"SIMPLE_HISTORY_DATE_INDEX\", True)\n valid = True\n if isinstance(result, str):\n result = result.lower()\n if result not in (\"composite\",):\n valid = False\n elif not isinstance(result, bool):\n valid = False\n if not valid:\n raise ImproperlyConfigured(\n \"SIMPLE_HISTORY_DATE_INDEX must be one of (False, True, 'Composite')\"\n )\n return result\n\n def get_meta_options_m2m(self, through_model):\n \"\"\"\n Returns a dictionary of fields that will be added to\n the Meta inner class of the m2m historical record model.\n \"\"\"\n name = 
self.get_history_model_name(through_model)\n\n meta_fields = {\"verbose_name\": name}\n\n if self.app:\n meta_fields[\"app_label\"] = self.app\n\n return meta_fields\n\n def get_meta_options(self, model):\n \"\"\"\n Returns a dictionary of fields that will be added to\n the Meta inner class of the historical record model.\n \"\"\"\n meta_fields = {\n \"ordering\": (\"-history_date\", \"-history_id\"),\n \"get_latest_by\": (\"history_date\", \"history_id\"),\n }\n if self.user_set_verbose_name:\n name = self.user_set_verbose_name\n else:\n name = format_lazy(\"historical {}\", smart_str(model._meta.verbose_name))\n if self.user_set_verbose_name_plural:\n plural_name = self.user_set_verbose_name_plural\n else:\n plural_name = format_lazy(\n \"historical {}\", smart_str(model._meta.verbose_name_plural)\n )\n meta_fields[\"verbose_name\"] = name\n meta_fields[\"verbose_name_plural\"] = plural_name\n if self.app:\n meta_fields[\"app_label\"] = self.app\n if self._date_indexing == \"composite\":\n meta_fields[\"indexes\"] = (\n models.Index(fields=(\"history_date\", model._meta.pk.attname)),\n )\n return meta_fields\n\n def post_save(self, instance, created, using=None, **kwargs):\n if not getattr(settings, \"SIMPLE_HISTORY_ENABLED\", True):\n return\n if not created and hasattr(instance, \"skip_history_when_saving\"):\n return\n if not kwargs.get(\"raw\", False):\n self.create_historical_record(instance, created and \"+\" or \"~\", using=using)\n\n def post_delete(self, instance, using=None, **kwargs):\n if not getattr(settings, \"SIMPLE_HISTORY_ENABLED\", True):\n return\n if self.cascade_delete_history:\n manager = getattr(instance, self.manager_name)\n manager.using(using).all().delete()\n else:\n self.create_historical_record(instance, \"-\", using=using)\n\n def get_change_reason_for_object(self, instance, history_type, using):\n \"\"\"\n Get change reason for object.\n Customize this method to automatically fill change reason from context.\n \"\"\"\n return utils.get_change_reason_from_object(instance)\n\n def m2m_changed(self, instance, action, attr, pk_set, reverse, **_):\n if hasattr(instance, \"skip_history_when_saving\"):\n return\n\n if action in (\"post_add\", \"post_remove\", \"post_clear\"):\n # It should be safe to ~ this since the row must exist to modify m2m on it\n self.create_historical_record(instance, \"~\")\n\n def create_historical_record_m2ms(self, history_instance, instance):\n for field in history_instance._history_m2m_fields:\n m2m_history_model = self.m2m_models[field]\n original_instance = history_instance.instance\n through_model = getattr(original_instance, field.name).through\n\n insert_rows = []\n\n through_field_name = utils.get_m2m_field_name(field)\n rows = through_model.objects.filter(**{through_field_name: instance})\n for row in rows:\n insert_row = {\"history\": history_instance}\n\n for through_model_field in through_model._meta.fields:\n insert_row[through_model_field.name] = getattr(\n row, through_model_field.name\n )\n insert_rows.append(m2m_history_model(**insert_row))\n\n pre_create_historical_m2m_records.send(\n sender=m2m_history_model,\n rows=insert_rows,\n history_instance=history_instance,\n instance=instance,\n field=field,\n )\n created_rows = m2m_history_model.objects.bulk_create(insert_rows)\n post_create_historical_m2m_records.send(\n sender=m2m_history_model,\n created_rows=created_rows,\n history_instance=history_instance,\n instance=instance,\n field=field,\n )\n\n def create_historical_record(self, instance, history_type, 
using=None):\n using = using if self.use_base_model_db else None\n history_date = getattr(instance, \"_history_date\", timezone.now())\n history_user = self.get_history_user(instance)\n history_change_reason = self.get_change_reason_for_object(\n instance, history_type, using\n )\n manager = getattr(instance, self.manager_name)\n\n attrs = {}\n for field in self.fields_included(instance):\n attrs[field.attname] = getattr(instance, field.attname)\n\n relation_field = getattr(manager.model, \"history_relation\", None)\n if relation_field is not None:\n attrs[\"history_relation\"] = instance\n\n history_instance = manager.model(\n history_date=history_date,\n history_type=history_type,\n history_user=history_user,\n history_change_reason=history_change_reason,\n **attrs,\n )\n\n pre_create_historical_record.send(\n sender=manager.model,\n instance=instance,\n history_date=history_date,\n history_user=history_user,\n history_change_reason=history_change_reason,\n history_instance=history_instance,\n using=using,\n )\n\n history_instance.save(using=using)\n self.create_historical_record_m2ms(history_instance, instance)\n\n post_create_historical_record.send(\n sender=manager.model,\n instance=instance,\n history_instance=history_instance,\n history_date=history_date,\n history_user=history_user,\n history_change_reason=history_change_reason,\n using=using,\n )\n\n def get_history_user(self, instance):\n \"\"\"Get the modifying user from instance or middleware.\"\"\"\n try:\n return instance._history_user\n except AttributeError:\n request = None\n try:\n if self.context.request.user.is_authenticated:\n request = self.context.request\n except AttributeError:\n pass\n\n return self.get_user(instance=instance, request=request)\n\n def get_m2m_fields_from_model(self, model):\n m2m_fields = set(self.m2m_fields)\n try:\n m2m_fields.update(getattr(model, self.m2m_fields_model_field_name))\n except AttributeError:\n pass\n field_names = [\n field if isinstance(field, str) else field.name for field in m2m_fields\n ]\n return [getattr(model, field_name).field for field_name in field_names]\n\n\ndef transform_field(field):\n \"\"\"Customize field appropriately for use in historical model\"\"\"\n field.name = field.attname\n if isinstance(field, models.BigAutoField):\n field.__class__ = models.BigIntegerField\n elif isinstance(field, models.AutoField):\n field.__class__ = models.IntegerField\n\n elif isinstance(field, models.FileField):\n # Don't copy file, just path.\n if getattr(settings, \"SIMPLE_HISTORY_FILEFIELD_TO_CHARFIELD\", False):\n field.__class__ = models.CharField\n else:\n field.__class__ = models.TextField\n\n # Historical instance shouldn't change create/update timestamps\n field.auto_now = False\n field.auto_now_add = False\n # Just setting db_collation explicitly since we're not using\n # field.deconstruct() here\n field.db_collation = None\n\n if field.primary_key or field.unique:\n # Unique fields can no longer be guaranteed unique,\n # but they should still be indexed for faster lookups.\n field.primary_key = False\n # DEV: Remove this check (but keep the contents) when the minimum required\n # Django version is 5.1\n if django.VERSION >= (5, 1):\n field.unique = False\n # (Django < 5.1) Can't set `unique` as it's a property, so set the backing field\n # (Django >= 5.1) Set the backing field in addition to the cached property\n # above, to cover all bases\n field._unique = False\n field.db_index = True\n field.serialize = True\n\n\nclass 
HistoricForwardManyToOneDescriptor(ForwardManyToOneDescriptor):\n \"\"\"\n Overrides get_queryset to provide historic query support, should the\n instance be historic (and therefore was generated by a timepoint query)\n and the other side of the relation also uses a history manager.\n \"\"\"\n\n def get_queryset(self, **hints) -> QuerySet:\n instance = hints.get(\"instance\")\n if instance:\n history = getattr(instance, SIMPLE_HISTORY_REVERSE_ATTR_NAME, None)\n histmgr = getattr(\n self.field.remote_field.model,\n getattr(\n self.field.remote_field.model._meta,\n \"simple_history_manager_attribute\",\n \"_notthere\",\n ),\n None,\n )\n if history and histmgr:\n return histmgr.as_of(getattr(history, \"_as_of\", history.history_date))\n return super().get_queryset(**hints)\n\n\nclass HistoricReverseManyToOneDescriptor(ReverseManyToOneDescriptor):\n \"\"\"\n Overrides get_queryset to provide historic query support, should the\n instance be historic (and therefore was generated by a timepoint query)\n and the other side of the relation also uses a history manager.\n \"\"\"\n\n @cached_property\n def related_manager_cls(self):\n related_model = self.rel.related_model\n\n class HistoricRelationModelManager(related_model._default_manager.__class__):\n def get_queryset(self):\n try:\n return self.instance._prefetched_objects_cache[\n self.field.remote_field.get_cache_name()\n ]\n except (AttributeError, KeyError):\n history = getattr(\n self.instance, SIMPLE_HISTORY_REVERSE_ATTR_NAME, None\n )\n histmgr = getattr(\n self.model,\n getattr(\n self.model._meta,\n \"simple_history_manager_attribute\",\n \"_notthere\",\n ),\n None,\n )\n if history and histmgr:\n queryset = histmgr.as_of(\n getattr(history, \"_as_of\", history.history_date)\n )\n else:\n queryset = super().get_queryset()\n return self._apply_rel_filters(queryset)\n\n return create_reverse_many_to_one_manager(\n HistoricRelationModelManager, self.rel\n )\n\n\nclass HistoricForeignKey(ForeignKey):\n \"\"\"\n Allows foreign keys to work properly from a historic instance.\n\n If you use as_of queries to extract historical instances from\n a model, and you have other models that are related by foreign\n key and also historic, changing them to a HistoricForeignKey\n field type will allow you to naturally cross the relationship\n boundary at the same point in time as the origin instance.\n\n A historic instance maintains an attribute (\"_historic\") when\n it is historic, holding the historic record instance and the\n timepoint used to query it (\"_as_of\"). 
HistoricForeignKey\n looks for this and uses an as_of query against the related\n object so the relationship is assessed at the same timepoint.\n \"\"\"\n\n forward_related_accessor_class = HistoricForwardManyToOneDescriptor\n related_accessor_class = HistoricReverseManyToOneDescriptor\n\n\ndef is_historic(instance):\n \"\"\"\n Returns True if the instance was acquired with an as_of timepoint.\n \"\"\"\n return to_historic(instance) is not None\n\n\ndef to_historic(instance):\n \"\"\"\n Returns a historic model instance if the instance was acquired with\n an as_of timepoint, or None.\n \"\"\"\n return getattr(instance, SIMPLE_HISTORY_REVERSE_ATTR_NAME, None)\n\n\nclass HistoricalObjectDescriptor:\n def __init__(self, model, fields_included):\n self.model = model\n self.fields_included = fields_included\n\n def __get__(self, instance, owner):\n if instance is None:\n return self\n values = {f.attname: getattr(instance, f.attname) for f in self.fields_included}\n return self.model(**values)\n\n\nclass HistoricalChanges(ModelTypeHint):\n def diff_against(\n self,\n old_history: \"HistoricalChanges\",\n excluded_fields: Iterable[str] = None,\n included_fields: Iterable[str] = None,\n *,\n foreign_keys_are_objs=False,\n ) -> \"ModelDelta\":\n \"\"\"\n :param old_history:\n :param excluded_fields: The names of fields to exclude from diffing.\n This takes precedence over ``included_fields``.\n :param included_fields: The names of the only fields to include when diffing.\n If not provided, all history-tracked fields will be included.\n :param foreign_keys_are_objs: If ``False``, the returned diff will only contain\n the raw PKs of any ``ForeignKey`` fields.\n If ``True``, the diff will contain the actual related model objects\n instead of just the PKs; deleted related objects will be instances of\n ``DeletedObject``.\n Note that passing ``True`` will necessarily query the database if the\n related objects have not been prefetched (using e.g.\n ``select_related()``).\n \"\"\"\n if not isinstance(old_history, type(self)):\n raise TypeError(\n \"unsupported type(s) for diffing:\"\n f\" '{type(self)}' and '{type(old_history)}'\"\n )\n if excluded_fields is None:\n excluded_fields = set()\n\n included_m2m_fields = {field.name for field in old_history._history_m2m_fields}\n if included_fields is None:\n included_fields = {f.name for f in old_history.tracked_fields if f.editable}\n else:\n included_m2m_fields = included_m2m_fields.intersection(included_fields)\n\n fields = (\n set(included_fields)\n .difference(included_m2m_fields)\n .difference(excluded_fields)\n )\n m2m_fields = set(included_m2m_fields).difference(excluded_fields)\n\n changes = [\n *self._get_field_changes_for_diff(\n old_history, fields, foreign_keys_are_objs\n ),\n *self._get_m2m_field_changes_for_diff(\n old_history, m2m_fields, foreign_keys_are_objs\n ),\n ]\n # Sort by field (attribute) name, to ensure a consistent order\n changes.sort(key=lambda change: change.field)\n changed_fields = [change.field for change in changes]\n return ModelDelta(changes, changed_fields, old_history, self)\n\n def _get_field_changes_for_diff(\n self,\n old_history: \"HistoricalChanges\",\n fields: Iterable[str],\n foreign_keys_are_objs: bool,\n ) -> List[\"ModelChange\"]:\n \"\"\"Helper method for ``diff_against()``.\"\"\"\n changes = []\n\n old_values = model_to_dict(old_history, fields=fields)\n new_values = model_to_dict(self, fields=fields)\n\n for field in fields:\n old_value = old_values[field]\n new_value = new_values[field]\n\n if old_value != 
new_value:\n field_meta = self._meta.get_field(field)\n if foreign_keys_are_objs and isinstance(field_meta, ForeignKey):\n # Set the fields to their related model objects instead of\n # the raw PKs from `model_to_dict()`\n def get_value(record, foreign_key):\n try:\n value = getattr(record, field)\n # `value` seems to be None (without raising this exception)\n # if the object has not been refreshed from the database\n except ObjectDoesNotExist:\n value = None\n\n if value is None:\n value = DeletedObject(field_meta.related_model, foreign_key)\n return value\n\n old_value = get_value(old_history, old_value)\n new_value = get_value(self, new_value)\n\n change = ModelChange(field, old_value, new_value)\n changes.append(change)\n\n return changes\n\n def _get_m2m_field_changes_for_diff(\n self,\n old_history: \"HistoricalChanges\",\n m2m_fields: Iterable[str],\n foreign_keys_are_objs: bool,\n ) -> List[\"ModelChange\"]:\n \"\"\"Helper method for ``diff_against()``.\"\"\"\n changes = []\n\n for field in m2m_fields:\n original_field_meta = self.instance_type._meta.get_field(field)\n reverse_field_name = utils.get_m2m_reverse_field_name(original_field_meta)\n # Sort the M2M rows by the related object, to ensure a consistent order\n old_m2m_qs = getattr(old_history, field).order_by(reverse_field_name)\n new_m2m_qs = getattr(self, field).order_by(reverse_field_name)\n m2m_through_model_opts = new_m2m_qs.model._meta\n\n # Create a list of field names to compare against.\n # The list is generated without the PK of the intermediate (through)\n # table, the foreign key to the history record, and the actual `history`\n # field, to avoid false positives while diffing.\n through_model_fields = [\n f.name\n for f in m2m_through_model_opts.fields\n if f.editable and f.name not in [\"id\", \"m2m_history_id\", \"history\"]\n ]\n old_rows = list(old_m2m_qs.values(*through_model_fields))\n new_rows = list(new_m2m_qs.values(*through_model_fields))\n\n if old_rows != new_rows:\n if foreign_keys_are_objs:\n fk_fields = [\n f\n for f in through_model_fields\n if isinstance(m2m_through_model_opts.get_field(f), ForeignKey)\n ]\n\n # Set the through fields to their related model objects instead of\n # the raw PKs from `values()`\n def rows_with_foreign_key_objs(m2m_qs):\n def get_value(obj, through_field):\n try:\n value = getattr(obj, through_field)\n # If the related object has been deleted, `value` seems to\n # usually already be None instead of raising this exception\n except ObjectDoesNotExist:\n value = None\n\n if value is None:\n meta = m2m_through_model_opts.get_field(through_field)\n foreign_key = getattr(obj, meta.attname)\n value = DeletedObject(meta.related_model, foreign_key)\n return value\n\n # Replicate the format of the return value of QuerySet.values()\n return [\n {\n through_field: get_value(through_obj, through_field)\n for through_field in through_model_fields\n }\n for through_obj in m2m_qs.select_related(*fk_fields)\n ]\n\n old_rows = rows_with_foreign_key_objs(old_m2m_qs)\n new_rows = rows_with_foreign_key_objs(new_m2m_qs)\n\n change = ModelChange(field, old_rows, new_rows)\n changes.append(change)\n\n return changes\n\n\n@dataclass(frozen=True)\nclass DeletedObject:\n model: Type[models.Model]\n pk: Any\n\n def __str__(self):\n deleted_model_str = _(\"Deleted %(type_name)s\") % {\n \"type_name\": self.model._meta.verbose_name,\n }\n return f\"{deleted_model_str} (pk={self.pk})\"\n\n\n# Either:\n# - The value of a foreign key field:\n# - If ``foreign_keys_are_objs=True`` is passed to 
``diff_against()``:\n# Either the related object or ``DeletedObject``.\n# - Otherwise:\n# The PK of the related object.\n#\n# - The value of a many-to-many field:\n# A list of dicts from the through model field names to either:\n# - If ``foreign_keys_are_objs=True`` is passed to ``diff_against()``:\n# Either the through model's related objects or ``DeletedObject``.\n# - Otherwise:\n# The PK of the through model's related objects.\n#\n# - Any of the other possible values of a model field.\nModelChangeValue = Union[Any, DeletedObject, List[Dict[str, Union[Any, DeletedObject]]]]\n\n\n@dataclass(frozen=True)\nclass ModelChange:\n field: str\n old: ModelChangeValue\n new: ModelChangeValue\n\n\n@dataclass(frozen=True)\nclass ModelDelta:\n changes: Sequence[ModelChange]\n changed_fields: Sequence[str]\n old_record: HistoricalChanges\n new_record: HistoricalChanges\n", "path": "simple_history/models.py"}], "after_files": [{"content": "import copy\nimport importlib\nimport uuid\nimport warnings\nfrom dataclasses import dataclass\nfrom functools import partial\nfrom typing import TYPE_CHECKING, Any, Dict, Iterable, List, Sequence, Type, Union\n\nimport django\nfrom django.apps import apps\nfrom django.conf import settings\nfrom django.contrib import admin\nfrom django.contrib.auth import get_user_model\nfrom django.core.exceptions import ImproperlyConfigured, ObjectDoesNotExist\nfrom django.db import models\nfrom django.db.models import ManyToManyField\nfrom django.db.models.fields.proxy import OrderWrt\nfrom django.db.models.fields.related import ForeignKey\nfrom django.db.models.fields.related_descriptors import (\n ForwardManyToOneDescriptor,\n ReverseManyToOneDescriptor,\n create_reverse_many_to_one_manager,\n)\nfrom django.db.models.query import QuerySet\nfrom django.db.models.signals import m2m_changed\nfrom django.forms.models import model_to_dict\nfrom django.urls import reverse\nfrom django.utils import timezone\nfrom django.utils.encoding import smart_str\nfrom django.utils.functional import cached_property\nfrom django.utils.text import format_lazy\nfrom django.utils.translation import gettext_lazy as _\n\nfrom . 
import exceptions, utils\nfrom .manager import (\n SIMPLE_HISTORY_REVERSE_ATTR_NAME,\n HistoricalQuerySet,\n HistoryDescriptor,\n HistoryManager,\n)\nfrom .signals import (\n post_create_historical_m2m_records,\n post_create_historical_record,\n pre_create_historical_m2m_records,\n pre_create_historical_record,\n)\n\ntry:\n from asgiref.local import Local as LocalContext\nexcept ImportError:\n from threading import local as LocalContext\n\nif TYPE_CHECKING:\n ModelTypeHint = models.Model\nelse:\n ModelTypeHint = object\n\nregistered_models = {}\n\n\ndef _default_get_user(request, **kwargs):\n try:\n return request.user\n except AttributeError:\n return None\n\n\ndef _history_user_getter(historical_instance):\n if historical_instance.history_user_id is None:\n return None\n User = get_user_model()\n try:\n return User.objects.get(pk=historical_instance.history_user_id)\n except User.DoesNotExist:\n return None\n\n\ndef _history_user_setter(historical_instance, user):\n if user is not None:\n historical_instance.history_user_id = user.pk\n\n\nclass HistoricalRecords:\n DEFAULT_MODEL_NAME_PREFIX = \"Historical\"\n\n thread = context = LocalContext() # retain thread for backwards compatibility\n m2m_models = {}\n\n def __init__(\n self,\n verbose_name=None,\n verbose_name_plural=None,\n bases=(models.Model,),\n user_related_name=\"+\",\n table_name=None,\n inherit=False,\n excluded_fields=None,\n history_id_field=None,\n history_change_reason_field=None,\n user_model=None,\n get_user=_default_get_user,\n cascade_delete_history=False,\n custom_model_name=None,\n app=None,\n history_user_id_field=None,\n history_user_getter=_history_user_getter,\n history_user_setter=_history_user_setter,\n related_name=None,\n use_base_model_db=False,\n user_db_constraint=True,\n no_db_index=list(),\n excluded_field_kwargs=None,\n history_manager=HistoryManager,\n historical_queryset=HistoricalQuerySet,\n m2m_fields=(),\n m2m_fields_model_field_name=\"_history_m2m_fields\",\n m2m_bases=(models.Model,),\n ):\n self.user_set_verbose_name = verbose_name\n self.user_set_verbose_name_plural = verbose_name_plural\n self.user_related_name = user_related_name\n self.user_db_constraint = user_db_constraint\n self.table_name = table_name\n self.inherit = inherit\n self.history_id_field = history_id_field\n self.history_change_reason_field = history_change_reason_field\n self.user_model = user_model\n self.get_user = get_user\n self.cascade_delete_history = cascade_delete_history\n self.custom_model_name = custom_model_name\n self.app = app\n self.user_id_field = history_user_id_field\n self.user_getter = history_user_getter\n self.user_setter = history_user_setter\n self.related_name = related_name\n self.use_base_model_db = use_base_model_db\n self.history_manager = history_manager\n self.historical_queryset = historical_queryset\n self.m2m_fields = m2m_fields\n self.m2m_fields_model_field_name = m2m_fields_model_field_name\n\n if isinstance(no_db_index, str):\n no_db_index = [no_db_index]\n self.no_db_index = no_db_index\n\n if excluded_fields is None:\n excluded_fields = []\n self.excluded_fields = excluded_fields\n\n if excluded_field_kwargs is None:\n excluded_field_kwargs = {}\n self.excluded_field_kwargs = excluded_field_kwargs\n try:\n if isinstance(bases, str):\n raise TypeError\n self.bases = (HistoricalChanges,) + tuple(bases)\n except TypeError:\n raise TypeError(\"The `bases` option must be a list or a tuple.\")\n try:\n if isinstance(m2m_bases, str):\n raise TypeError\n self.m2m_bases = (HistoricalChanges,) 
+ tuple(m2m_bases)\n except TypeError:\n raise TypeError(\"The `m2m_bases` option must be a list or a tuple.\")\n\n def contribute_to_class(self, cls, name):\n self.manager_name = name\n self.module = cls.__module__\n self.cls = cls\n models.signals.class_prepared.connect(self.finalize, weak=False)\n self.add_extra_methods(cls)\n\n if cls._meta.abstract and not self.inherit:\n msg = (\n \"HistoricalRecords added to abstract model ({}) without \"\n \"inherit=True\".format(self.cls.__name__)\n )\n warnings.warn(msg, UserWarning)\n\n def add_extra_methods(self, cls):\n def save_without_historical_record(self, *args, **kwargs):\n \"\"\"\n Save model without saving a historical record\n\n Make sure you know what you're doing before you use this method.\n \"\"\"\n self.skip_history_when_saving = True\n try:\n ret = self.save(*args, **kwargs)\n finally:\n del self.skip_history_when_saving\n return ret\n\n setattr(cls, \"save_without_historical_record\", save_without_historical_record)\n\n def finalize(self, sender, **kwargs):\n inherited = False\n if self.cls is not sender: # set in concrete\n inherited = self.inherit and issubclass(sender, self.cls)\n if not inherited:\n return # set in abstract\n\n if hasattr(sender._meta, \"simple_history_manager_attribute\"):\n raise exceptions.MultipleRegistrationsError(\n \"{}.{} registered multiple times for history tracking.\".format(\n sender._meta.app_label, sender._meta.object_name\n )\n )\n history_model = self.create_history_model(sender, inherited)\n\n if inherited:\n # Make sure history model is in same module as concrete model\n module = importlib.import_module(history_model.__module__)\n else:\n module = importlib.import_module(self.module)\n setattr(module, history_model.__name__, history_model)\n\n # The HistoricalRecords object will be discarded,\n # so the signal handlers can't use weak references.\n models.signals.post_save.connect(self.post_save, sender=sender, weak=False)\n models.signals.post_delete.connect(self.post_delete, sender=sender, weak=False)\n\n m2m_fields = self.get_m2m_fields_from_model(sender)\n\n for field in m2m_fields:\n m2m_changed.connect(\n partial(self.m2m_changed, attr=field.name),\n sender=field.remote_field.through,\n weak=False,\n )\n\n descriptor = HistoryDescriptor(\n history_model,\n manager=self.history_manager,\n queryset=self.historical_queryset,\n )\n setattr(sender, self.manager_name, descriptor)\n sender._meta.simple_history_manager_attribute = self.manager_name\n\n for field in m2m_fields:\n m2m_model = self.create_history_m2m_model(\n history_model, field.remote_field.through\n )\n self.m2m_models[field] = m2m_model\n\n setattr(module, m2m_model.__name__, m2m_model)\n\n m2m_descriptor = HistoryDescriptor(m2m_model)\n setattr(history_model, field.name, m2m_descriptor)\n\n def get_history_model_name(self, model):\n if not self.custom_model_name:\n return f\"{self.DEFAULT_MODEL_NAME_PREFIX}{model._meta.object_name}\"\n # Must be trying to use a custom history model name\n if callable(self.custom_model_name):\n name = self.custom_model_name(model._meta.object_name)\n else:\n # simple string\n name = self.custom_model_name\n # Desired class name cannot be same as the model it is tracking\n if not (\n name.lower() == model._meta.object_name.lower()\n and model.__module__ == self.module\n ):\n return name\n raise ValueError(\n \"The 'custom_model_name' option '{}' evaluates to a name that is the same \"\n \"as the model it is tracking. 
This is not permitted.\".format(\n self.custom_model_name\n )\n )\n\n def create_history_m2m_model(self, model, through_model):\n attrs = {}\n\n fields = self.copy_fields(through_model)\n attrs.update(fields)\n attrs.update(self.get_extra_fields_m2m(model, through_model, fields))\n\n name = self.get_history_model_name(through_model)\n registered_models[through_model._meta.db_table] = through_model\n\n attrs.update(Meta=type(\"Meta\", (), self.get_meta_options_m2m(through_model)))\n\n m2m_history_model = type(str(name), self.m2m_bases, attrs)\n\n return m2m_history_model\n\n def create_history_model(self, model, inherited):\n \"\"\"\n Creates a historical model to associate with the model provided.\n \"\"\"\n attrs = {\n \"__module__\": self.module,\n \"_history_excluded_fields\": self.excluded_fields,\n \"_history_m2m_fields\": self.get_m2m_fields_from_model(model),\n \"tracked_fields\": self.fields_included(model),\n }\n\n app_module = \"%s.models\" % model._meta.app_label\n\n if inherited:\n # inherited use models module\n attrs[\"__module__\"] = model.__module__\n elif model.__module__ != self.module:\n # registered under different app\n attrs[\"__module__\"] = self.module\n elif app_module != self.module:\n # Abuse an internal API because the app registry is loading.\n app = apps.app_configs[model._meta.app_label]\n models_module = app.name\n attrs[\"__module__\"] = models_module\n\n fields = self.copy_fields(model)\n attrs.update(fields)\n attrs.update(self.get_extra_fields(model, fields))\n # type in python2 wants str as a first argument\n attrs.update(Meta=type(\"Meta\", (), self.get_meta_options(model)))\n if not inherited and self.table_name is not None:\n attrs[\"Meta\"].db_table = self.table_name\n\n # Set as the default then check for overrides\n name = self.get_history_model_name(model)\n\n registered_models[model._meta.db_table] = model\n history_model = type(str(name), self.bases, attrs)\n return history_model\n\n def fields_included(self, model):\n fields = []\n for field in model._meta.fields:\n if field.name not in self.excluded_fields:\n fields.append(field)\n return fields\n\n def field_excluded_kwargs(self, field):\n \"\"\"\n Find the excluded kwargs for a given field.\n \"\"\"\n return self.excluded_field_kwargs.get(field.name, set())\n\n def copy_fields(self, model):\n \"\"\"\n Creates copies of the model's original fields, returning\n a dictionary mapping field name to copied field object.\n \"\"\"\n fields = {}\n for field in self.fields_included(model):\n field = copy.copy(field)\n field.remote_field = copy.copy(field.remote_field)\n if isinstance(field, OrderWrt):\n # OrderWrt is a proxy field, switch to a plain IntegerField\n field.__class__ = models.IntegerField\n if isinstance(field, models.ForeignKey):\n old_field = field\n old_swappable = old_field.swappable\n old_field.swappable = False\n try:\n _name, _path, args, field_args = old_field.deconstruct()\n finally:\n old_field.swappable = old_swappable\n if getattr(old_field, \"one_to_one\", False) or isinstance(\n old_field, models.OneToOneField\n ):\n FieldType = models.ForeignKey\n else:\n FieldType = type(old_field)\n\n # Remove any excluded kwargs for the field.\n # This is useful when a custom OneToOneField is being used that\n # has a different set of arguments than ForeignKey\n for exclude_arg in self.field_excluded_kwargs(old_field):\n field_args.pop(exclude_arg, None)\n\n # If field_args['to'] is 'self' then we have a case where the object\n # has a foreign key to itself. 
If we pass the historical record's\n # field to = 'self', the foreign key will point to an historical\n # record rather than the base record. We can use old_field.model here.\n if field_args.get(\"to\", None) == \"self\":\n field_args[\"to\"] = old_field.model\n\n # Override certain arguments passed when creating the field\n # so that they work for the historical field.\n field_args.update(\n db_constraint=False,\n related_name=\"+\",\n null=True,\n blank=True,\n primary_key=False,\n db_index=True,\n serialize=True,\n unique=False,\n on_delete=models.DO_NOTHING,\n )\n field = FieldType(*args, **field_args)\n field.name = old_field.name\n else:\n transform_field(field)\n\n # drop db index\n if field.name in self.no_db_index:\n field.db_index = False\n\n fields[field.name] = field\n return fields\n\n def _get_history_change_reason_field(self):\n if self.history_change_reason_field:\n # User specific field from init\n history_change_reason_field = self.history_change_reason_field\n elif getattr(\n settings, \"SIMPLE_HISTORY_HISTORY_CHANGE_REASON_USE_TEXT_FIELD\", False\n ):\n # Use text field with no max length, not enforced by DB anyways\n history_change_reason_field = models.TextField(null=True)\n else:\n # Current default, with max length\n history_change_reason_field = models.CharField(max_length=100, null=True)\n\n return history_change_reason_field\n\n def _get_history_id_field(self):\n if self.history_id_field:\n history_id_field = self.history_id_field.clone()\n history_id_field.primary_key = True\n history_id_field.editable = False\n elif getattr(settings, \"SIMPLE_HISTORY_HISTORY_ID_USE_UUID\", False):\n history_id_field = models.UUIDField(\n primary_key=True, default=uuid.uuid4, editable=False\n )\n else:\n history_id_field = models.AutoField(primary_key=True)\n\n return history_id_field\n\n def _get_history_user_fields(self):\n if self.user_id_field is not None:\n # Tracking user using explicit id rather than Django ForeignKey\n history_user_fields = {\n \"history_user\": property(self.user_getter, self.user_setter),\n \"history_user_id\": self.user_id_field,\n }\n else:\n user_model = self.user_model or getattr(\n settings, \"AUTH_USER_MODEL\", \"auth.User\"\n )\n\n history_user_fields = {\n \"history_user\": models.ForeignKey(\n user_model,\n null=True,\n related_name=self.user_related_name,\n on_delete=models.SET_NULL,\n db_constraint=self.user_db_constraint,\n )\n }\n\n return history_user_fields\n\n def _get_history_related_field(self, model):\n if self.related_name:\n if self.manager_name == self.related_name:\n raise exceptions.RelatedNameConflictError(\n \"The related name must not be called like the history manager.\"\n )\n return {\n \"history_relation\": models.ForeignKey(\n model,\n on_delete=models.DO_NOTHING,\n related_name=self.related_name,\n db_constraint=False,\n )\n }\n else:\n return {}\n\n def get_extra_fields_m2m(self, model, through_model, fields):\n \"\"\"Return dict of extra fields added to the m2m historical record model\"\"\"\n\n extra_fields = {\n \"__module__\": model.__module__,\n \"__str__\": lambda self: \"{} as of {}\".format(\n self._meta.verbose_name, self.history.history_date\n ),\n \"history\": models.ForeignKey(\n model,\n db_constraint=False,\n on_delete=models.DO_NOTHING,\n ),\n \"instance_type\": through_model,\n \"m2m_history_id\": self._get_history_id_field(),\n }\n\n return extra_fields\n\n def get_extra_fields(self, model, fields):\n \"\"\"Return dict of extra fields added to the historical record model\"\"\"\n\n def revert_url(self):\n 
\"\"\"URL for this change in the default admin site.\"\"\"\n opts = model._meta\n app_label, model_name = opts.app_label, opts.model_name\n return reverse(\n f\"{admin.site.name}:{app_label}_{model_name}_simple_history\",\n args=[getattr(self, opts.pk.attname), self.history_id],\n )\n\n def get_instance(self):\n attrs = {\n field.attname: getattr(self, field.attname) for field in fields.values()\n }\n if self._history_excluded_fields:\n # We don't add ManyToManyFields to this list because they may cause\n # the subsequent `.get()` call to fail. See #706 for context.\n excluded_attnames = [\n model._meta.get_field(field).attname\n for field in self._history_excluded_fields\n if not isinstance(model._meta.get_field(field), ManyToManyField)\n ]\n try:\n values = (\n model.objects.filter(pk=getattr(self, model._meta.pk.attname))\n .values(*excluded_attnames)\n .get()\n )\n except ObjectDoesNotExist:\n pass\n else:\n attrs.update(values)\n result = model(**attrs)\n # this is the only way external code could know an instance is historical\n setattr(result, SIMPLE_HISTORY_REVERSE_ATTR_NAME, self)\n return result\n\n def get_next_record(self):\n \"\"\"\n Get the next history record for the instance. `None` if last.\n \"\"\"\n history = utils.get_history_manager_from_history(self)\n return (\n history.filter(history_date__gt=self.history_date)\n .order_by(\"history_date\")\n .first()\n )\n\n def get_prev_record(self):\n \"\"\"\n Get the previous history record for the instance. `None` if first.\n \"\"\"\n history = utils.get_history_manager_from_history(self)\n return (\n history.filter(history_date__lt=self.history_date)\n .order_by(\"history_date\")\n .last()\n )\n\n def get_default_history_user(instance):\n \"\"\"\n Returns the user specified by `get_user` method for manually creating\n historical objects\n \"\"\"\n return self.get_history_user(instance)\n\n extra_fields = {\n \"history_id\": self._get_history_id_field(),\n \"history_date\": models.DateTimeField(db_index=self._date_indexing is True),\n \"history_change_reason\": self._get_history_change_reason_field(),\n \"history_type\": models.CharField(\n max_length=1,\n choices=((\"+\", _(\"Created\")), (\"~\", _(\"Changed\")), (\"-\", _(\"Deleted\"))),\n ),\n \"history_object\": HistoricalObjectDescriptor(\n model, self.fields_included(model)\n ),\n \"instance\": property(get_instance),\n \"instance_type\": model,\n \"next_record\": property(get_next_record),\n \"prev_record\": property(get_prev_record),\n \"revert_url\": revert_url,\n \"__str__\": lambda self: \"{} as of {}\".format(\n self.history_object, self.history_date\n ),\n \"get_default_history_user\": staticmethod(get_default_history_user),\n }\n\n extra_fields.update(self._get_history_related_field(model))\n extra_fields.update(self._get_history_user_fields())\n\n return extra_fields\n\n @property\n def _date_indexing(self):\n \"\"\"False, True, or 'composite'; default is True\"\"\"\n result = getattr(settings, \"SIMPLE_HISTORY_DATE_INDEX\", True)\n valid = True\n if isinstance(result, str):\n result = result.lower()\n if result not in (\"composite\",):\n valid = False\n elif not isinstance(result, bool):\n valid = False\n if not valid:\n raise ImproperlyConfigured(\n \"SIMPLE_HISTORY_DATE_INDEX must be one of (False, True, 'Composite')\"\n )\n return result\n\n def get_meta_options_m2m(self, through_model):\n \"\"\"\n Returns a dictionary of fields that will be added to\n the Meta inner class of the m2m historical record model.\n \"\"\"\n name = 
self.get_history_model_name(through_model)\n\n meta_fields = {\"verbose_name\": name}\n\n if self.app:\n meta_fields[\"app_label\"] = self.app\n\n return meta_fields\n\n def get_meta_options(self, model):\n \"\"\"\n Returns a dictionary of fields that will be added to\n the Meta inner class of the historical record model.\n \"\"\"\n meta_fields = {\n \"ordering\": (\"-history_date\", \"-history_id\"),\n \"get_latest_by\": (\"history_date\", \"history_id\"),\n }\n if self.user_set_verbose_name:\n name = self.user_set_verbose_name\n else:\n name = format_lazy(\"historical {}\", smart_str(model._meta.verbose_name))\n if self.user_set_verbose_name_plural:\n plural_name = self.user_set_verbose_name_plural\n else:\n plural_name = format_lazy(\n \"historical {}\", smart_str(model._meta.verbose_name_plural)\n )\n meta_fields[\"verbose_name\"] = name\n meta_fields[\"verbose_name_plural\"] = plural_name\n if self.app:\n meta_fields[\"app_label\"] = self.app\n if self._date_indexing == \"composite\":\n meta_fields[\"indexes\"] = (\n models.Index(fields=(\"history_date\", model._meta.pk.attname)),\n )\n return meta_fields\n\n def post_save(self, instance, created, using=None, **kwargs):\n if not getattr(settings, \"SIMPLE_HISTORY_ENABLED\", True):\n return\n if not created and hasattr(instance, \"skip_history_when_saving\"):\n return\n if not kwargs.get(\"raw\", False):\n self.create_historical_record(instance, created and \"+\" or \"~\", using=using)\n\n def post_delete(self, instance, using=None, **kwargs):\n if not getattr(settings, \"SIMPLE_HISTORY_ENABLED\", True):\n return\n if self.cascade_delete_history:\n manager = getattr(instance, self.manager_name)\n manager.using(using).all().delete()\n else:\n self.create_historical_record(instance, \"-\", using=using)\n\n def get_change_reason_for_object(self, instance, history_type, using):\n \"\"\"\n Get change reason for object.\n Customize this method to automatically fill change reason from context.\n \"\"\"\n return utils.get_change_reason_from_object(instance)\n\n def m2m_changed(self, instance, action, attr, pk_set, reverse, **_):\n if not getattr(settings, \"SIMPLE_HISTORY_ENABLED\", True):\n return\n if hasattr(instance, \"skip_history_when_saving\"):\n return\n\n if action in (\"post_add\", \"post_remove\", \"post_clear\"):\n # It should be safe to ~ this since the row must exist to modify m2m on it\n self.create_historical_record(instance, \"~\")\n\n def create_historical_record_m2ms(self, history_instance, instance):\n for field in history_instance._history_m2m_fields:\n m2m_history_model = self.m2m_models[field]\n original_instance = history_instance.instance\n through_model = getattr(original_instance, field.name).through\n\n insert_rows = []\n\n through_field_name = utils.get_m2m_field_name(field)\n rows = through_model.objects.filter(**{through_field_name: instance})\n for row in rows:\n insert_row = {\"history\": history_instance}\n\n for through_model_field in through_model._meta.fields:\n insert_row[through_model_field.name] = getattr(\n row, through_model_field.name\n )\n insert_rows.append(m2m_history_model(**insert_row))\n\n pre_create_historical_m2m_records.send(\n sender=m2m_history_model,\n rows=insert_rows,\n history_instance=history_instance,\n instance=instance,\n field=field,\n )\n created_rows = m2m_history_model.objects.bulk_create(insert_rows)\n post_create_historical_m2m_records.send(\n sender=m2m_history_model,\n created_rows=created_rows,\n history_instance=history_instance,\n instance=instance,\n field=field,\n )\n\n 
def create_historical_record(self, instance, history_type, using=None):\n using = using if self.use_base_model_db else None\n history_date = getattr(instance, \"_history_date\", timezone.now())\n history_user = self.get_history_user(instance)\n history_change_reason = self.get_change_reason_for_object(\n instance, history_type, using\n )\n manager = getattr(instance, self.manager_name)\n\n attrs = {}\n for field in self.fields_included(instance):\n attrs[field.attname] = getattr(instance, field.attname)\n\n relation_field = getattr(manager.model, \"history_relation\", None)\n if relation_field is not None:\n attrs[\"history_relation\"] = instance\n\n history_instance = manager.model(\n history_date=history_date,\n history_type=history_type,\n history_user=history_user,\n history_change_reason=history_change_reason,\n **attrs,\n )\n\n pre_create_historical_record.send(\n sender=manager.model,\n instance=instance,\n history_date=history_date,\n history_user=history_user,\n history_change_reason=history_change_reason,\n history_instance=history_instance,\n using=using,\n )\n\n history_instance.save(using=using)\n self.create_historical_record_m2ms(history_instance, instance)\n\n post_create_historical_record.send(\n sender=manager.model,\n instance=instance,\n history_instance=history_instance,\n history_date=history_date,\n history_user=history_user,\n history_change_reason=history_change_reason,\n using=using,\n )\n\n def get_history_user(self, instance):\n \"\"\"Get the modifying user from instance or middleware.\"\"\"\n try:\n return instance._history_user\n except AttributeError:\n request = None\n try:\n if self.context.request.user.is_authenticated:\n request = self.context.request\n except AttributeError:\n pass\n\n return self.get_user(instance=instance, request=request)\n\n def get_m2m_fields_from_model(self, model):\n m2m_fields = set(self.m2m_fields)\n try:\n m2m_fields.update(getattr(model, self.m2m_fields_model_field_name))\n except AttributeError:\n pass\n field_names = [\n field if isinstance(field, str) else field.name for field in m2m_fields\n ]\n return [getattr(model, field_name).field for field_name in field_names]\n\n\ndef transform_field(field):\n \"\"\"Customize field appropriately for use in historical model\"\"\"\n field.name = field.attname\n if isinstance(field, models.BigAutoField):\n field.__class__ = models.BigIntegerField\n elif isinstance(field, models.AutoField):\n field.__class__ = models.IntegerField\n\n elif isinstance(field, models.FileField):\n # Don't copy file, just path.\n if getattr(settings, \"SIMPLE_HISTORY_FILEFIELD_TO_CHARFIELD\", False):\n field.__class__ = models.CharField\n else:\n field.__class__ = models.TextField\n\n # Historical instance shouldn't change create/update timestamps\n field.auto_now = False\n field.auto_now_add = False\n # Just setting db_collation explicitly since we're not using\n # field.deconstruct() here\n field.db_collation = None\n\n if field.primary_key or field.unique:\n # Unique fields can no longer be guaranteed unique,\n # but they should still be indexed for faster lookups.\n field.primary_key = False\n # DEV: Remove this check (but keep the contents) when the minimum required\n # Django version is 5.1\n if django.VERSION >= (5, 1):\n field.unique = False\n # (Django < 5.1) Can't set `unique` as it's a property, so set the backing field\n # (Django >= 5.1) Set the backing field in addition to the cached property\n # above, to cover all bases\n field._unique = False\n field.db_index = True\n field.serialize = 
True\n\n\nclass HistoricForwardManyToOneDescriptor(ForwardManyToOneDescriptor):\n \"\"\"\n Overrides get_queryset to provide historic query support, should the\n instance be historic (and therefore was generated by a timepoint query)\n and the other side of the relation also uses a history manager.\n \"\"\"\n\n def get_queryset(self, **hints) -> QuerySet:\n instance = hints.get(\"instance\")\n if instance:\n history = getattr(instance, SIMPLE_HISTORY_REVERSE_ATTR_NAME, None)\n histmgr = getattr(\n self.field.remote_field.model,\n getattr(\n self.field.remote_field.model._meta,\n \"simple_history_manager_attribute\",\n \"_notthere\",\n ),\n None,\n )\n if history and histmgr:\n return histmgr.as_of(getattr(history, \"_as_of\", history.history_date))\n return super().get_queryset(**hints)\n\n\nclass HistoricReverseManyToOneDescriptor(ReverseManyToOneDescriptor):\n \"\"\"\n Overrides get_queryset to provide historic query support, should the\n instance be historic (and therefore was generated by a timepoint query)\n and the other side of the relation also uses a history manager.\n \"\"\"\n\n @cached_property\n def related_manager_cls(self):\n related_model = self.rel.related_model\n\n class HistoricRelationModelManager(related_model._default_manager.__class__):\n def get_queryset(self):\n try:\n return self.instance._prefetched_objects_cache[\n self.field.remote_field.get_cache_name()\n ]\n except (AttributeError, KeyError):\n history = getattr(\n self.instance, SIMPLE_HISTORY_REVERSE_ATTR_NAME, None\n )\n histmgr = getattr(\n self.model,\n getattr(\n self.model._meta,\n \"simple_history_manager_attribute\",\n \"_notthere\",\n ),\n None,\n )\n if history and histmgr:\n queryset = histmgr.as_of(\n getattr(history, \"_as_of\", history.history_date)\n )\n else:\n queryset = super().get_queryset()\n return self._apply_rel_filters(queryset)\n\n return create_reverse_many_to_one_manager(\n HistoricRelationModelManager, self.rel\n )\n\n\nclass HistoricForeignKey(ForeignKey):\n \"\"\"\n Allows foreign keys to work properly from a historic instance.\n\n If you use as_of queries to extract historical instances from\n a model, and you have other models that are related by foreign\n key and also historic, changing them to a HistoricForeignKey\n field type will allow you to naturally cross the relationship\n boundary at the same point in time as the origin instance.\n\n A historic instance maintains an attribute (\"_historic\") when\n it is historic, holding the historic record instance and the\n timepoint used to query it (\"_as_of\"). 
HistoricForeignKey\n looks for this and uses an as_of query against the related\n object so the relationship is assessed at the same timepoint.\n \"\"\"\n\n forward_related_accessor_class = HistoricForwardManyToOneDescriptor\n related_accessor_class = HistoricReverseManyToOneDescriptor\n\n\ndef is_historic(instance):\n \"\"\"\n Returns True if the instance was acquired with an as_of timepoint.\n \"\"\"\n return to_historic(instance) is not None\n\n\ndef to_historic(instance):\n \"\"\"\n Returns a historic model instance if the instance was acquired with\n an as_of timepoint, or None.\n \"\"\"\n return getattr(instance, SIMPLE_HISTORY_REVERSE_ATTR_NAME, None)\n\n\nclass HistoricalObjectDescriptor:\n def __init__(self, model, fields_included):\n self.model = model\n self.fields_included = fields_included\n\n def __get__(self, instance, owner):\n if instance is None:\n return self\n values = {f.attname: getattr(instance, f.attname) for f in self.fields_included}\n return self.model(**values)\n\n\nclass HistoricalChanges(ModelTypeHint):\n def diff_against(\n self,\n old_history: \"HistoricalChanges\",\n excluded_fields: Iterable[str] = None,\n included_fields: Iterable[str] = None,\n *,\n foreign_keys_are_objs=False,\n ) -> \"ModelDelta\":\n \"\"\"\n :param old_history:\n :param excluded_fields: The names of fields to exclude from diffing.\n This takes precedence over ``included_fields``.\n :param included_fields: The names of the only fields to include when diffing.\n If not provided, all history-tracked fields will be included.\n :param foreign_keys_are_objs: If ``False``, the returned diff will only contain\n the raw PKs of any ``ForeignKey`` fields.\n If ``True``, the diff will contain the actual related model objects\n instead of just the PKs; deleted related objects will be instances of\n ``DeletedObject``.\n Note that passing ``True`` will necessarily query the database if the\n related objects have not been prefetched (using e.g.\n ``select_related()``).\n \"\"\"\n if not isinstance(old_history, type(self)):\n raise TypeError(\n \"unsupported type(s) for diffing:\"\n f\" '{type(self)}' and '{type(old_history)}'\"\n )\n if excluded_fields is None:\n excluded_fields = set()\n\n included_m2m_fields = {field.name for field in old_history._history_m2m_fields}\n if included_fields is None:\n included_fields = {f.name for f in old_history.tracked_fields if f.editable}\n else:\n included_m2m_fields = included_m2m_fields.intersection(included_fields)\n\n fields = (\n set(included_fields)\n .difference(included_m2m_fields)\n .difference(excluded_fields)\n )\n m2m_fields = set(included_m2m_fields).difference(excluded_fields)\n\n changes = [\n *self._get_field_changes_for_diff(\n old_history, fields, foreign_keys_are_objs\n ),\n *self._get_m2m_field_changes_for_diff(\n old_history, m2m_fields, foreign_keys_are_objs\n ),\n ]\n # Sort by field (attribute) name, to ensure a consistent order\n changes.sort(key=lambda change: change.field)\n changed_fields = [change.field for change in changes]\n return ModelDelta(changes, changed_fields, old_history, self)\n\n def _get_field_changes_for_diff(\n self,\n old_history: \"HistoricalChanges\",\n fields: Iterable[str],\n foreign_keys_are_objs: bool,\n ) -> List[\"ModelChange\"]:\n \"\"\"Helper method for ``diff_against()``.\"\"\"\n changes = []\n\n old_values = model_to_dict(old_history, fields=fields)\n new_values = model_to_dict(self, fields=fields)\n\n for field in fields:\n old_value = old_values[field]\n new_value = new_values[field]\n\n if old_value != 
new_value:\n field_meta = self._meta.get_field(field)\n if foreign_keys_are_objs and isinstance(field_meta, ForeignKey):\n # Set the fields to their related model objects instead of\n # the raw PKs from `model_to_dict()`\n def get_value(record, foreign_key):\n try:\n value = getattr(record, field)\n # `value` seems to be None (without raising this exception)\n # if the object has not been refreshed from the database\n except ObjectDoesNotExist:\n value = None\n\n if value is None:\n value = DeletedObject(field_meta.related_model, foreign_key)\n return value\n\n old_value = get_value(old_history, old_value)\n new_value = get_value(self, new_value)\n\n change = ModelChange(field, old_value, new_value)\n changes.append(change)\n\n return changes\n\n def _get_m2m_field_changes_for_diff(\n self,\n old_history: \"HistoricalChanges\",\n m2m_fields: Iterable[str],\n foreign_keys_are_objs: bool,\n ) -> List[\"ModelChange\"]:\n \"\"\"Helper method for ``diff_against()``.\"\"\"\n changes = []\n\n for field in m2m_fields:\n original_field_meta = self.instance_type._meta.get_field(field)\n reverse_field_name = utils.get_m2m_reverse_field_name(original_field_meta)\n # Sort the M2M rows by the related object, to ensure a consistent order\n old_m2m_qs = getattr(old_history, field).order_by(reverse_field_name)\n new_m2m_qs = getattr(self, field).order_by(reverse_field_name)\n m2m_through_model_opts = new_m2m_qs.model._meta\n\n # Create a list of field names to compare against.\n # The list is generated without the PK of the intermediate (through)\n # table, the foreign key to the history record, and the actual `history`\n # field, to avoid false positives while diffing.\n through_model_fields = [\n f.name\n for f in m2m_through_model_opts.fields\n if f.editable and f.name not in [\"id\", \"m2m_history_id\", \"history\"]\n ]\n old_rows = list(old_m2m_qs.values(*through_model_fields))\n new_rows = list(new_m2m_qs.values(*through_model_fields))\n\n if old_rows != new_rows:\n if foreign_keys_are_objs:\n fk_fields = [\n f\n for f in through_model_fields\n if isinstance(m2m_through_model_opts.get_field(f), ForeignKey)\n ]\n\n # Set the through fields to their related model objects instead of\n # the raw PKs from `values()`\n def rows_with_foreign_key_objs(m2m_qs):\n def get_value(obj, through_field):\n try:\n value = getattr(obj, through_field)\n # If the related object has been deleted, `value` seems to\n # usually already be None instead of raising this exception\n except ObjectDoesNotExist:\n value = None\n\n if value is None:\n meta = m2m_through_model_opts.get_field(through_field)\n foreign_key = getattr(obj, meta.attname)\n value = DeletedObject(meta.related_model, foreign_key)\n return value\n\n # Replicate the format of the return value of QuerySet.values()\n return [\n {\n through_field: get_value(through_obj, through_field)\n for through_field in through_model_fields\n }\n for through_obj in m2m_qs.select_related(*fk_fields)\n ]\n\n old_rows = rows_with_foreign_key_objs(old_m2m_qs)\n new_rows = rows_with_foreign_key_objs(new_m2m_qs)\n\n change = ModelChange(field, old_rows, new_rows)\n changes.append(change)\n\n return changes\n\n\n@dataclass(frozen=True)\nclass DeletedObject:\n model: Type[models.Model]\n pk: Any\n\n def __str__(self):\n deleted_model_str = _(\"Deleted %(type_name)s\") % {\n \"type_name\": self.model._meta.verbose_name,\n }\n return f\"{deleted_model_str} (pk={self.pk})\"\n\n\n# Either:\n# - The value of a foreign key field:\n# - If ``foreign_keys_are_objs=True`` is passed to 
``diff_against()``:\n# Either the related object or ``DeletedObject``.\n# - Otherwise:\n# The PK of the related object.\n#\n# - The value of a many-to-many field:\n# A list of dicts from the through model field names to either:\n# - If ``foreign_keys_are_objs=True`` is passed to ``diff_against()``:\n# Either the through model's related objects or ``DeletedObject``.\n# - Otherwise:\n# The PK of the through model's related objects.\n#\n# - Any of the other possible values of a model field.\nModelChangeValue = Union[Any, DeletedObject, List[Dict[str, Union[Any, DeletedObject]]]]\n\n\n@dataclass(frozen=True)\nclass ModelChange:\n field: str\n old: ModelChangeValue\n new: ModelChangeValue\n\n\n@dataclass(frozen=True)\nclass ModelDelta:\n changes: Sequence[ModelChange]\n changed_fields: Sequence[str]\n old_record: HistoricalChanges\n new_record: HistoricalChanges\n", "path": "simple_history/models.py"}]} |
gh_patches_debug_1179 | rasdani/github-patches | git_diff | vispy__vispy-1784 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix simple typo: withour -> without
There is a small typo in vispy/app/backends/_ipynb_vnc.py.
Should read without rather than withour.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `vispy/app/backends/_ipynb_vnc.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 # Copyright (c) Vispy Development Team. All Rights Reserved.
3 # Distributed under the (new) BSD License. See LICENSE.txt for more info.
4
5 """
6 vispy backend for the IPython notebook (vnc approach).
7
8 We aim to have:
9 * ipynb_static - export visualization to a static notebook
10 * ipynb_vnc - vnc-approach: render in Python, send result to JS as png
11 * ipynb_webgl - send gl commands to JS and execute in webgl context
12
13 """
14
15 from __future__ import division
16
17 from ..base import (BaseApplicationBackend, BaseCanvasBackend,
18 BaseTimerBackend)
19 from .. import Application, Canvas
20 from ...util import logger
21 #from ...util.event import Event # For timer
22
23 # Imports for screenshot
24 # Perhaps we should refactor these to have just one import
25 from ...gloo.util import _screenshot
26 from ...io import _make_png
27 from base64 import b64encode
28
29 # Import for displaying Javascript on notebook
30 import os.path as op
31
32 # -------------------------------------------------------------------- init ---
33
34 capability = dict( # things that can be set by the backend
35 title=True, # But it only applies to the dummy window :P
36 size=True, # We cannot possibly say we dont, because Canvas always sets it
37 position=True, # Dito
38 show=True, # Note: we don't alow this, but all scripts call show ...
39 vsync=False,
40 resizable=True, # Yes, you can set to not be resizable (it always is)
41 decorate=False,
42 fullscreen=False,
43 context=True,
44 multi_window=True,
45 scroll=True,
46 parent=False,
47 always_on_top=False,
48 )
49
50
51 def _set_config(c):
52 _app.backend_module._set_config(c)
53
54
55 # Init dummy objects needed to import this module withour errors.
56 # These are all overwritten with imports from IPython (on success)
57 DOMWidget = object
58 Unicode = Int = Float = Bool = lambda *args, **kwargs: None
59
60 # Create our "backend" backend; The toolkit that is going to provide a
61 # canvas (e.g. OpenGL context) so we can render images.
62 # Note that if IPython has already loaded a GUI backend, vispy is
63 # probably going to use that as well, because it prefers loaded backends.
64 try:
65 # Explicitly use default (avoid using test-app)
66 _app = Application('default')
67 except Exception:
68 _msg = 'ipynb_vnc backend relies on a core backend'
69 available, testable, why_not, which = False, False, _msg, None
70 else:
71 # Try importing IPython
72 try:
73 import IPython
74 if IPython.version_info < (2,):
75 raise RuntimeError('ipynb_vnc backend need IPython version >= 2.0')
76 from IPython.html.widgets import DOMWidget
77 from IPython.utils.traitlets import Unicode, Int, Float, Bool
78 from IPython.display import display, Javascript
79 from IPython.html.nbextensions import install_nbextension
80 except Exception as exp:
81 available, testable, why_not, which = False, False, str(exp), None
82 else:
83 available, testable, why_not = True, False, None
84 which = _app.backend_module.which
85 print(' NOTE: this backend requires the Chromium browser')
86 # Use that backend's shared context
87 KEYMAP = _app.backend_module.KEYMAP
88
89
90 # ------------------------------------------------------------- application ---
91
92 # todo: maybe trigger something in JS on any of these methods?
93 class ApplicationBackend(BaseApplicationBackend):
94
95 def __init__(self):
96 BaseApplicationBackend.__init__(self)
97 self._backend2 = _app._backend
98
99 def _vispy_get_backend_name(self):
100 realname = self._backend2._vispy_get_backend_name()
101 return 'ipynb_vnc (via %s)' % realname
102
103 def _vispy_process_events(self):
104 return self._backend2._vispy_process_events()
105
106 def _vispy_run(self):
107 pass # We run in IPython, so we don't run!
108 #return self._backend2._vispy_run()
109
110 def _vispy_quit(self):
111 return self._backend2._vispy_quit()
112
113 def _vispy_get_native_app(self):
114 return self._backend2._vispy_get_native_app()
115
116
117 # ------------------------------------------------------------------ canvas ---
118
119 class CanvasBackend(BaseCanvasBackend):
120
121 # args are for BaseCanvasBackend, kwargs are for us.
122 def __init__(self, *args, **kwargs):
123 BaseCanvasBackend.__init__(self, *args)
124 self._initialized = False
125
126 # Test kwargs
127 # if kwargs['size']:
128 # raise RuntimeError('ipynb_vnc Canvas is not resizable')
129 # if kwargs['position']:
130 # raise RuntimeError('ipynb_vnc Canvas is not positionable')
131 if not kwargs['decorate']:
132 raise RuntimeError('ipynb_vnc Canvas is not decoratable (or not)')
133 if kwargs['vsync']:
134 raise RuntimeError('ipynb_vnc Canvas does not support vsync')
135 if kwargs['fullscreen']:
136 raise RuntimeError('ipynb_vnc Canvas does not support fullscreen')
137
138 # Create real canvas. It is a backend to this backend
139 kwargs.pop('vispy_canvas', None)
140 kwargs['autoswap'] = False
141 canvas = Canvas(app=_app, **kwargs) # Pass kwargs to underlying canvas
142 self._backend2 = canvas.native
143
144 # Connect to events of canvas to keep up to date with size and draws
145 canvas.events.draw.connect(self._on_draw)
146 canvas.events.resize.connect(self._on_resize)
147
148 # Show the widget, we will hide it after the first time it's drawn
149 self._backend2._vispy_set_visible(True)
150 self._need_draw = False
151
152 # Prepare Javascript code by displaying on notebook
153 self._prepare_js()
154 # Create IPython Widget
155 self._widget = Widget(self._gen_event, size=canvas.size)
156
157 def _vispy_warmup(self):
158 return self._backend2._vispy_warmup()
159
160 def _vispy_set_current(self):
161 return self._backend2._vispy_set_current()
162
163 def _vispy_swap_buffers(self):
164 return self._backend2._vispy_swap_buffers()
165
166 def _vispy_set_title(self, title):
167 return self._backend2._vispy_set_title(title)
168 #logger.warning('IPython notebook canvas has not title.')
169
170 def _vispy_set_size(self, w, h):
171 #logger.warn('IPython notebook canvas cannot be resized.')
172 res = self._backend2._vispy_set_size(w, h)
173 self._backend2._vispy_set_visible(True)
174 return res
175
176 def _vispy_set_position(self, x, y):
177 logger.warning('IPython notebook canvas cannot be repositioned.')
178
179 def _vispy_set_visible(self, visible):
180 #self._backend2._vispy_set_visible(visible)
181 if not visible:
182 logger.warning('IPython notebook canvas cannot be hidden.')
183 else:
184 display(self._widget)
185
186 def _vispy_update(self):
187 self._need_draw = True
188 return self._backend2._vispy_update()
189
190 def _vispy_close(self):
191 self._need_draw = False
192 self._widget.quit()
193 return self._backend2._vispy_close()
194
195 def _vispy_get_position(self):
196 return 0, 0
197
198 def _vispy_get_size(self):
199 return self._backend2._vispy_get_size()
200
201 def _on_resize(self, event=None):
202 # Event handler that is called by the underlying canvas
203 if self._vispy_canvas is None:
204 return
205 size = self._backend2._vispy_get_size()
206 self._widget.size = size
207 self._vispy_canvas.events.resize(size=size)
208
209 def _on_draw(self, event=None):
210 # Event handler that is called by the underlying canvas
211 if self._vispy_canvas is None:
212 return
213 # Handle initialization
214 if not self._initialized:
215 self._initialized = True
216 #self._vispy_canvas.events.add(timer=Event)
217 self._vispy_canvas.events.initialize()
218 self._on_resize()
219
220 # We are drawn, so no need for a redraw
221 self._need_draw = False
222
223 # We hide the widget once it has received a paint event. So at
224 # initialization and after a resize the widget is briefly visible.
225 # Now that it is hidden the widget is unlikely to receive paint
226 # events anymore, so we need to force repaints from now on, via
227 # a trigger from JS.
228 self._backend2._vispy_set_visible(False)
229
230 # Normal behavior
231 self._vispy_canvas.set_current()
232 self._vispy_canvas.events.draw(region=None)
233 # Save the encoded screenshot image to widget
234 self._save_screenshot()
235
236 def _save_screenshot(self):
237 # Take the screenshot
238 img = _screenshot()
239 # Convert to PNG and encode
240 self._widget.value = b64encode(_make_png(img))
241
242 # Generate vispy events according to upcoming JS events
243 def _gen_event(self, ev):
244 if self._vispy_canvas is None:
245 return
246
247 ev = ev.get("event")
248 # Parse and generate event
249 if ev.get("name") == "MouseEvent":
250 mouse = ev.get("properties")
251 # Generate
252 if mouse.get("type") == "mouse_move":
253 self._vispy_mouse_move(native=mouse,
254 pos=mouse.get("pos"),
255 modifiers=mouse.get("modifiers"),
256 )
257 elif mouse.get("type") == "mouse_press":
258 self._vispy_mouse_press(native=mouse,
259 pos=mouse.get("pos"),
260 button=mouse.get("button"),
261 modifiers=mouse.get("modifiers"),
262 )
263 elif mouse.get("type") == "mouse_release":
264 self._vispy_mouse_release(native=mouse,
265 pos=mouse.get("pos"),
266 button=mouse.get("button"),
267 modifiers=mouse.get("modifiers"),
268 )
269 elif mouse.get("type") == "mouse_wheel":
270 self._vispy_canvas.events.mouse_wheel(native=mouse,
271 delta=mouse.get("delta"),
272 pos=mouse.get("pos"),
273 modifiers=mouse.get
274 ("modifiers"),
275 )
276 elif ev.get("name") == "KeyEvent":
277 key = ev.get("properties")
278 if key.get("type") == "key_press":
279 self._vispy_canvas.events.key_press(native=key,
280 key=key.get("key"),
281 text=key.get("text"),
282 modifiers=key.get
283 ("modifiers"),
284 )
285 elif key.get("type") == "key_release":
286 self._vispy_canvas.events.key_release(native=key,
287 key=key.get("key"),
288 text=key.get("text"),
289 modifiers=key.get
290 ("modifiers"),
291 )
292 elif ev.get("name") == "PollEvent": # Ticking from front-end (JS)
293 # Allthough the event originates from JS, this is basically
294 # a poll event from IPyhon's event loop, which we use to
295 # update the backend app and draw stuff if necessary. If we
296 # succeed to make IPython process GUI app events directly,
297 # this "JS timer" should not be necessary.
298 self._vispy_canvas.app.process_events()
299 if self._need_draw:
300 self._on_draw()
301 # Generate a timer event on every poll from JS
302 # AK: no, just use app.Timer as usual!
303 #self._vispy_canvas.events.timer(type="timer")
304
305 def _prepare_js(self):
306 pkgdir = op.dirname(__file__)
307 install_nbextension([op.join(pkgdir, '../../html/static/js')])
308 script = 'IPython.load_extensions("js/vispy");'
309 display(Javascript(script))
310
311
312 # ------------------------------------------------------------------- timer ---
313
314 class TimerBackend(BaseTimerBackend):
315
316 def __init__(self, vispy_timer):
317 self._backend2 = _app.backend_module.TimerBackend(vispy_timer)
318
319 def _vispy_start(self, interval):
320 return self._backend2._vispy_start(interval)
321
322 def _vispy_stop(self):
323 return self._backend2._vispy_stop()
324
325 def _vispy_timeout(self):
326 return self._backend2._vispy_timeout()
327
328
329 # ---------------------------------------------------------- IPython Widget ---
330
331 class Widget(DOMWidget):
332 _view_name = Unicode("Widget", sync=True)
333
334 # Define the custom state properties to sync with the front-end
335 format = Unicode('png', sync=True)
336 width = Int(sync=True)
337 height = Int(sync=True)
338 interval = Float(sync=True)
339 is_closing = Bool(sync=True)
340 value = Unicode(sync=True)
341
342 def __init__(self, gen_event, **kwargs):
343 super(Widget, self).__init__(**kwargs)
344 self.size = kwargs["size"]
345 self.interval = 50.0
346 self.gen_event = gen_event
347 self.on_msg(self._handle_event_msg)
348
349 def _handle_event_msg(self, _, content):
350 # If closing, don't bother generating the event
351 if not self.is_closing:
352 self.gen_event(content)
353
354 @property
355 def size(self):
356 return self.width, self.height
357
358 @size.setter
359 def size(self, size):
360 self.width, self.height = size
361
362 def quit(self):
363 self.is_closing = True
364 self.close()
365
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/vispy/app/backends/_ipynb_vnc.py b/vispy/app/backends/_ipynb_vnc.py
--- a/vispy/app/backends/_ipynb_vnc.py
+++ b/vispy/app/backends/_ipynb_vnc.py
@@ -52,7 +52,7 @@
_app.backend_module._set_config(c)
-# Init dummy objects needed to import this module withour errors.
+# Init dummy objects needed to import this module without errors.
# These are all overwritten with imports from IPython (on success)
DOMWidget = object
Unicode = Int = Float = Bool = lambda *args, **kwargs: None
| {"golden_diff": "diff --git a/vispy/app/backends/_ipynb_vnc.py b/vispy/app/backends/_ipynb_vnc.py\n--- a/vispy/app/backends/_ipynb_vnc.py\n+++ b/vispy/app/backends/_ipynb_vnc.py\n@@ -52,7 +52,7 @@\n _app.backend_module._set_config(c)\n \n \n-# Init dummy objects needed to import this module withour errors.\n+# Init dummy objects needed to import this module without errors.\n # These are all overwritten with imports from IPython (on success)\n DOMWidget = object\n Unicode = Int = Float = Bool = lambda *args, **kwargs: None\n", "issue": "Fix simple typo: withour -> without\nThere is a small typo in vispy/app/backends/_ipynb_vnc.py.\nShould read without rather than withour.\n\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) Vispy Development Team. All Rights Reserved.\n# Distributed under the (new) BSD License. See LICENSE.txt for more info.\n\n\"\"\"\nvispy backend for the IPython notebook (vnc approach).\n\nWe aim to have:\n* ipynb_static - export visualization to a static notebook\n* ipynb_vnc - vnc-approach: render in Python, send result to JS as png\n* ipynb_webgl - send gl commands to JS and execute in webgl context\n\n\"\"\"\n\nfrom __future__ import division\n\nfrom ..base import (BaseApplicationBackend, BaseCanvasBackend,\n BaseTimerBackend)\nfrom .. import Application, Canvas\nfrom ...util import logger\n#from ...util.event import Event # For timer\n\n# Imports for screenshot\n# Perhaps we should refactor these to have just one import\nfrom ...gloo.util import _screenshot\nfrom ...io import _make_png\nfrom base64 import b64encode\n\n# Import for displaying Javascript on notebook\nimport os.path as op\n\n# -------------------------------------------------------------------- init ---\n\ncapability = dict( # things that can be set by the backend\n title=True, # But it only applies to the dummy window :P\n size=True, # We cannot possibly say we dont, because Canvas always sets it\n position=True, # Dito\n show=True, # Note: we don't alow this, but all scripts call show ...\n vsync=False,\n resizable=True, # Yes, you can set to not be resizable (it always is)\n decorate=False,\n fullscreen=False,\n context=True,\n multi_window=True,\n scroll=True,\n parent=False,\n always_on_top=False,\n)\n\n\ndef _set_config(c):\n _app.backend_module._set_config(c)\n\n\n# Init dummy objects needed to import this module withour errors.\n# These are all overwritten with imports from IPython (on success)\nDOMWidget = object\nUnicode = Int = Float = Bool = lambda *args, **kwargs: None\n\n# Create our \"backend\" backend; The toolkit that is going to provide a\n# canvas (e.g. 
OpenGL context) so we can render images.\n# Note that if IPython has already loaded a GUI backend, vispy is\n# probably going to use that as well, because it prefers loaded backends.\ntry:\n # Explicitly use default (avoid using test-app)\n _app = Application('default')\nexcept Exception:\n _msg = 'ipynb_vnc backend relies on a core backend'\n available, testable, why_not, which = False, False, _msg, None\nelse:\n # Try importing IPython\n try:\n import IPython\n if IPython.version_info < (2,):\n raise RuntimeError('ipynb_vnc backend need IPython version >= 2.0')\n from IPython.html.widgets import DOMWidget\n from IPython.utils.traitlets import Unicode, Int, Float, Bool\n from IPython.display import display, Javascript\n from IPython.html.nbextensions import install_nbextension\n except Exception as exp:\n available, testable, why_not, which = False, False, str(exp), None\n else:\n available, testable, why_not = True, False, None\n which = _app.backend_module.which\n print(' NOTE: this backend requires the Chromium browser')\n # Use that backend's shared context\n KEYMAP = _app.backend_module.KEYMAP\n\n\n# ------------------------------------------------------------- application ---\n\n# todo: maybe trigger something in JS on any of these methods?\nclass ApplicationBackend(BaseApplicationBackend):\n\n def __init__(self):\n BaseApplicationBackend.__init__(self)\n self._backend2 = _app._backend\n\n def _vispy_get_backend_name(self):\n realname = self._backend2._vispy_get_backend_name()\n return 'ipynb_vnc (via %s)' % realname\n\n def _vispy_process_events(self):\n return self._backend2._vispy_process_events()\n\n def _vispy_run(self):\n pass # We run in IPython, so we don't run!\n #return self._backend2._vispy_run()\n\n def _vispy_quit(self):\n return self._backend2._vispy_quit()\n\n def _vispy_get_native_app(self):\n return self._backend2._vispy_get_native_app()\n\n\n# ------------------------------------------------------------------ canvas ---\n\nclass CanvasBackend(BaseCanvasBackend):\n\n # args are for BaseCanvasBackend, kwargs are for us.\n def __init__(self, *args, **kwargs):\n BaseCanvasBackend.__init__(self, *args)\n self._initialized = False\n\n # Test kwargs\n# if kwargs['size']:\n# raise RuntimeError('ipynb_vnc Canvas is not resizable')\n# if kwargs['position']:\n# raise RuntimeError('ipynb_vnc Canvas is not positionable')\n if not kwargs['decorate']:\n raise RuntimeError('ipynb_vnc Canvas is not decoratable (or not)')\n if kwargs['vsync']:\n raise RuntimeError('ipynb_vnc Canvas does not support vsync')\n if kwargs['fullscreen']:\n raise RuntimeError('ipynb_vnc Canvas does not support fullscreen')\n\n # Create real canvas. 
It is a backend to this backend\n kwargs.pop('vispy_canvas', None)\n kwargs['autoswap'] = False\n canvas = Canvas(app=_app, **kwargs) # Pass kwargs to underlying canvas\n self._backend2 = canvas.native\n\n # Connect to events of canvas to keep up to date with size and draws\n canvas.events.draw.connect(self._on_draw)\n canvas.events.resize.connect(self._on_resize)\n\n # Show the widget, we will hide it after the first time it's drawn\n self._backend2._vispy_set_visible(True)\n self._need_draw = False\n\n # Prepare Javascript code by displaying on notebook\n self._prepare_js()\n # Create IPython Widget\n self._widget = Widget(self._gen_event, size=canvas.size)\n\n def _vispy_warmup(self):\n return self._backend2._vispy_warmup()\n\n def _vispy_set_current(self):\n return self._backend2._vispy_set_current()\n\n def _vispy_swap_buffers(self):\n return self._backend2._vispy_swap_buffers()\n\n def _vispy_set_title(self, title):\n return self._backend2._vispy_set_title(title)\n #logger.warning('IPython notebook canvas has not title.')\n\n def _vispy_set_size(self, w, h):\n #logger.warn('IPython notebook canvas cannot be resized.')\n res = self._backend2._vispy_set_size(w, h)\n self._backend2._vispy_set_visible(True)\n return res\n\n def _vispy_set_position(self, x, y):\n logger.warning('IPython notebook canvas cannot be repositioned.')\n\n def _vispy_set_visible(self, visible):\n #self._backend2._vispy_set_visible(visible)\n if not visible:\n logger.warning('IPython notebook canvas cannot be hidden.')\n else:\n display(self._widget)\n\n def _vispy_update(self):\n self._need_draw = True\n return self._backend2._vispy_update()\n\n def _vispy_close(self):\n self._need_draw = False\n self._widget.quit()\n return self._backend2._vispy_close()\n\n def _vispy_get_position(self):\n return 0, 0\n\n def _vispy_get_size(self):\n return self._backend2._vispy_get_size()\n\n def _on_resize(self, event=None):\n # Event handler that is called by the underlying canvas\n if self._vispy_canvas is None:\n return\n size = self._backend2._vispy_get_size()\n self._widget.size = size\n self._vispy_canvas.events.resize(size=size)\n\n def _on_draw(self, event=None):\n # Event handler that is called by the underlying canvas\n if self._vispy_canvas is None:\n return\n # Handle initialization\n if not self._initialized:\n self._initialized = True\n #self._vispy_canvas.events.add(timer=Event)\n self._vispy_canvas.events.initialize()\n self._on_resize()\n\n # We are drawn, so no need for a redraw\n self._need_draw = False\n\n # We hide the widget once it has received a paint event. 
So at\n # initialization and after a resize the widget is briefly visible.\n # Now that it is hidden the widget is unlikely to receive paint\n # events anymore, so we need to force repaints from now on, via\n # a trigger from JS.\n self._backend2._vispy_set_visible(False)\n\n # Normal behavior\n self._vispy_canvas.set_current()\n self._vispy_canvas.events.draw(region=None)\n # Save the encoded screenshot image to widget\n self._save_screenshot()\n\n def _save_screenshot(self):\n # Take the screenshot\n img = _screenshot()\n # Convert to PNG and encode\n self._widget.value = b64encode(_make_png(img))\n\n # Generate vispy events according to upcoming JS events\n def _gen_event(self, ev):\n if self._vispy_canvas is None:\n return\n\n ev = ev.get(\"event\")\n # Parse and generate event\n if ev.get(\"name\") == \"MouseEvent\":\n mouse = ev.get(\"properties\")\n # Generate\n if mouse.get(\"type\") == \"mouse_move\":\n self._vispy_mouse_move(native=mouse,\n pos=mouse.get(\"pos\"),\n modifiers=mouse.get(\"modifiers\"),\n )\n elif mouse.get(\"type\") == \"mouse_press\":\n self._vispy_mouse_press(native=mouse,\n pos=mouse.get(\"pos\"),\n button=mouse.get(\"button\"),\n modifiers=mouse.get(\"modifiers\"),\n )\n elif mouse.get(\"type\") == \"mouse_release\":\n self._vispy_mouse_release(native=mouse,\n pos=mouse.get(\"pos\"),\n button=mouse.get(\"button\"),\n modifiers=mouse.get(\"modifiers\"),\n )\n elif mouse.get(\"type\") == \"mouse_wheel\":\n self._vispy_canvas.events.mouse_wheel(native=mouse,\n delta=mouse.get(\"delta\"),\n pos=mouse.get(\"pos\"),\n modifiers=mouse.get\n (\"modifiers\"),\n )\n elif ev.get(\"name\") == \"KeyEvent\":\n key = ev.get(\"properties\")\n if key.get(\"type\") == \"key_press\":\n self._vispy_canvas.events.key_press(native=key,\n key=key.get(\"key\"),\n text=key.get(\"text\"),\n modifiers=key.get\n (\"modifiers\"),\n )\n elif key.get(\"type\") == \"key_release\":\n self._vispy_canvas.events.key_release(native=key,\n key=key.get(\"key\"),\n text=key.get(\"text\"),\n modifiers=key.get\n (\"modifiers\"),\n )\n elif ev.get(\"name\") == \"PollEvent\": # Ticking from front-end (JS)\n # Allthough the event originates from JS, this is basically\n # a poll event from IPyhon's event loop, which we use to\n # update the backend app and draw stuff if necessary. 
If we\n # succeed to make IPython process GUI app events directly,\n # this \"JS timer\" should not be necessary.\n self._vispy_canvas.app.process_events()\n if self._need_draw:\n self._on_draw()\n # Generate a timer event on every poll from JS\n # AK: no, just use app.Timer as usual!\n #self._vispy_canvas.events.timer(type=\"timer\")\n\n def _prepare_js(self):\n pkgdir = op.dirname(__file__)\n install_nbextension([op.join(pkgdir, '../../html/static/js')])\n script = 'IPython.load_extensions(\"js/vispy\");'\n display(Javascript(script))\n\n\n# ------------------------------------------------------------------- timer ---\n\nclass TimerBackend(BaseTimerBackend):\n\n def __init__(self, vispy_timer):\n self._backend2 = _app.backend_module.TimerBackend(vispy_timer)\n\n def _vispy_start(self, interval):\n return self._backend2._vispy_start(interval)\n\n def _vispy_stop(self):\n return self._backend2._vispy_stop()\n\n def _vispy_timeout(self):\n return self._backend2._vispy_timeout()\n\n\n# ---------------------------------------------------------- IPython Widget ---\n\nclass Widget(DOMWidget):\n _view_name = Unicode(\"Widget\", sync=True)\n\n # Define the custom state properties to sync with the front-end\n format = Unicode('png', sync=True)\n width = Int(sync=True)\n height = Int(sync=True)\n interval = Float(sync=True)\n is_closing = Bool(sync=True)\n value = Unicode(sync=True)\n\n def __init__(self, gen_event, **kwargs):\n super(Widget, self).__init__(**kwargs)\n self.size = kwargs[\"size\"]\n self.interval = 50.0\n self.gen_event = gen_event\n self.on_msg(self._handle_event_msg)\n\n def _handle_event_msg(self, _, content):\n # If closing, don't bother generating the event\n if not self.is_closing:\n self.gen_event(content)\n\n @property\n def size(self):\n return self.width, self.height\n\n @size.setter\n def size(self, size):\n self.width, self.height = size\n\n def quit(self):\n self.is_closing = True\n self.close()\n", "path": "vispy/app/backends/_ipynb_vnc.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) Vispy Development Team. All Rights Reserved.\n# Distributed under the (new) BSD License. See LICENSE.txt for more info.\n\n\"\"\"\nvispy backend for the IPython notebook (vnc approach).\n\nWe aim to have:\n* ipynb_static - export visualization to a static notebook\n* ipynb_vnc - vnc-approach: render in Python, send result to JS as png\n* ipynb_webgl - send gl commands to JS and execute in webgl context\n\n\"\"\"\n\nfrom __future__ import division\n\nfrom ..base import (BaseApplicationBackend, BaseCanvasBackend,\n BaseTimerBackend)\nfrom .. 
import Application, Canvas\nfrom ...util import logger\n#from ...util.event import Event # For timer\n\n# Imports for screenshot\n# Perhaps we should refactor these to have just one import\nfrom ...gloo.util import _screenshot\nfrom ...io import _make_png\nfrom base64 import b64encode\n\n# Import for displaying Javascript on notebook\nimport os.path as op\n\n# -------------------------------------------------------------------- init ---\n\ncapability = dict( # things that can be set by the backend\n title=True, # But it only applies to the dummy window :P\n size=True, # We cannot possibly say we dont, because Canvas always sets it\n position=True, # Dito\n show=True, # Note: we don't alow this, but all scripts call show ...\n vsync=False,\n resizable=True, # Yes, you can set to not be resizable (it always is)\n decorate=False,\n fullscreen=False,\n context=True,\n multi_window=True,\n scroll=True,\n parent=False,\n always_on_top=False,\n)\n\n\ndef _set_config(c):\n _app.backend_module._set_config(c)\n\n\n# Init dummy objects needed to import this module without errors.\n# These are all overwritten with imports from IPython (on success)\nDOMWidget = object\nUnicode = Int = Float = Bool = lambda *args, **kwargs: None\n\n# Create our \"backend\" backend; The toolkit that is going to provide a\n# canvas (e.g. OpenGL context) so we can render images.\n# Note that if IPython has already loaded a GUI backend, vispy is\n# probably going to use that as well, because it prefers loaded backends.\ntry:\n # Explicitly use default (avoid using test-app)\n _app = Application('default')\nexcept Exception:\n _msg = 'ipynb_vnc backend relies on a core backend'\n available, testable, why_not, which = False, False, _msg, None\nelse:\n # Try importing IPython\n try:\n import IPython\n if IPython.version_info < (2,):\n raise RuntimeError('ipynb_vnc backend need IPython version >= 2.0')\n from IPython.html.widgets import DOMWidget\n from IPython.utils.traitlets import Unicode, Int, Float, Bool\n from IPython.display import display, Javascript\n from IPython.html.nbextensions import install_nbextension\n except Exception as exp:\n available, testable, why_not, which = False, False, str(exp), None\n else:\n available, testable, why_not = True, False, None\n which = _app.backend_module.which\n print(' NOTE: this backend requires the Chromium browser')\n # Use that backend's shared context\n KEYMAP = _app.backend_module.KEYMAP\n\n\n# ------------------------------------------------------------- application ---\n\n# todo: maybe trigger something in JS on any of these methods?\nclass ApplicationBackend(BaseApplicationBackend):\n\n def __init__(self):\n BaseApplicationBackend.__init__(self)\n self._backend2 = _app._backend\n\n def _vispy_get_backend_name(self):\n realname = self._backend2._vispy_get_backend_name()\n return 'ipynb_vnc (via %s)' % realname\n\n def _vispy_process_events(self):\n return self._backend2._vispy_process_events()\n\n def _vispy_run(self):\n pass # We run in IPython, so we don't run!\n #return self._backend2._vispy_run()\n\n def _vispy_quit(self):\n return self._backend2._vispy_quit()\n\n def _vispy_get_native_app(self):\n return self._backend2._vispy_get_native_app()\n\n\n# ------------------------------------------------------------------ canvas ---\n\nclass CanvasBackend(BaseCanvasBackend):\n\n # args are for BaseCanvasBackend, kwargs are for us.\n def __init__(self, *args, **kwargs):\n BaseCanvasBackend.__init__(self, *args)\n self._initialized = False\n\n # Test kwargs\n# if 
kwargs['size']:\n# raise RuntimeError('ipynb_vnc Canvas is not resizable')\n# if kwargs['position']:\n# raise RuntimeError('ipynb_vnc Canvas is not positionable')\n if not kwargs['decorate']:\n raise RuntimeError('ipynb_vnc Canvas is not decoratable (or not)')\n if kwargs['vsync']:\n raise RuntimeError('ipynb_vnc Canvas does not support vsync')\n if kwargs['fullscreen']:\n raise RuntimeError('ipynb_vnc Canvas does not support fullscreen')\n\n # Create real canvas. It is a backend to this backend\n kwargs.pop('vispy_canvas', None)\n kwargs['autoswap'] = False\n canvas = Canvas(app=_app, **kwargs) # Pass kwargs to underlying canvas\n self._backend2 = canvas.native\n\n # Connect to events of canvas to keep up to date with size and draws\n canvas.events.draw.connect(self._on_draw)\n canvas.events.resize.connect(self._on_resize)\n\n # Show the widget, we will hide it after the first time it's drawn\n self._backend2._vispy_set_visible(True)\n self._need_draw = False\n\n # Prepare Javascript code by displaying on notebook\n self._prepare_js()\n # Create IPython Widget\n self._widget = Widget(self._gen_event, size=canvas.size)\n\n def _vispy_warmup(self):\n return self._backend2._vispy_warmup()\n\n def _vispy_set_current(self):\n return self._backend2._vispy_set_current()\n\n def _vispy_swap_buffers(self):\n return self._backend2._vispy_swap_buffers()\n\n def _vispy_set_title(self, title):\n return self._backend2._vispy_set_title(title)\n #logger.warning('IPython notebook canvas has not title.')\n\n def _vispy_set_size(self, w, h):\n #logger.warn('IPython notebook canvas cannot be resized.')\n res = self._backend2._vispy_set_size(w, h)\n self._backend2._vispy_set_visible(True)\n return res\n\n def _vispy_set_position(self, x, y):\n logger.warning('IPython notebook canvas cannot be repositioned.')\n\n def _vispy_set_visible(self, visible):\n #self._backend2._vispy_set_visible(visible)\n if not visible:\n logger.warning('IPython notebook canvas cannot be hidden.')\n else:\n display(self._widget)\n\n def _vispy_update(self):\n self._need_draw = True\n return self._backend2._vispy_update()\n\n def _vispy_close(self):\n self._need_draw = False\n self._widget.quit()\n return self._backend2._vispy_close()\n\n def _vispy_get_position(self):\n return 0, 0\n\n def _vispy_get_size(self):\n return self._backend2._vispy_get_size()\n\n def _on_resize(self, event=None):\n # Event handler that is called by the underlying canvas\n if self._vispy_canvas is None:\n return\n size = self._backend2._vispy_get_size()\n self._widget.size = size\n self._vispy_canvas.events.resize(size=size)\n\n def _on_draw(self, event=None):\n # Event handler that is called by the underlying canvas\n if self._vispy_canvas is None:\n return\n # Handle initialization\n if not self._initialized:\n self._initialized = True\n #self._vispy_canvas.events.add(timer=Event)\n self._vispy_canvas.events.initialize()\n self._on_resize()\n\n # We are drawn, so no need for a redraw\n self._need_draw = False\n\n # We hide the widget once it has received a paint event. 
So at\n # initialization and after a resize the widget is briefly visible.\n # Now that it is hidden the widget is unlikely to receive paint\n # events anymore, so we need to force repaints from now on, via\n # a trigger from JS.\n self._backend2._vispy_set_visible(False)\n\n # Normal behavior\n self._vispy_canvas.set_current()\n self._vispy_canvas.events.draw(region=None)\n # Save the encoded screenshot image to widget\n self._save_screenshot()\n\n def _save_screenshot(self):\n # Take the screenshot\n img = _screenshot()\n # Convert to PNG and encode\n self._widget.value = b64encode(_make_png(img))\n\n # Generate vispy events according to upcoming JS events\n def _gen_event(self, ev):\n if self._vispy_canvas is None:\n return\n\n ev = ev.get(\"event\")\n # Parse and generate event\n if ev.get(\"name\") == \"MouseEvent\":\n mouse = ev.get(\"properties\")\n # Generate\n if mouse.get(\"type\") == \"mouse_move\":\n self._vispy_mouse_move(native=mouse,\n pos=mouse.get(\"pos\"),\n modifiers=mouse.get(\"modifiers\"),\n )\n elif mouse.get(\"type\") == \"mouse_press\":\n self._vispy_mouse_press(native=mouse,\n pos=mouse.get(\"pos\"),\n button=mouse.get(\"button\"),\n modifiers=mouse.get(\"modifiers\"),\n )\n elif mouse.get(\"type\") == \"mouse_release\":\n self._vispy_mouse_release(native=mouse,\n pos=mouse.get(\"pos\"),\n button=mouse.get(\"button\"),\n modifiers=mouse.get(\"modifiers\"),\n )\n elif mouse.get(\"type\") == \"mouse_wheel\":\n self._vispy_canvas.events.mouse_wheel(native=mouse,\n delta=mouse.get(\"delta\"),\n pos=mouse.get(\"pos\"),\n modifiers=mouse.get\n (\"modifiers\"),\n )\n elif ev.get(\"name\") == \"KeyEvent\":\n key = ev.get(\"properties\")\n if key.get(\"type\") == \"key_press\":\n self._vispy_canvas.events.key_press(native=key,\n key=key.get(\"key\"),\n text=key.get(\"text\"),\n modifiers=key.get\n (\"modifiers\"),\n )\n elif key.get(\"type\") == \"key_release\":\n self._vispy_canvas.events.key_release(native=key,\n key=key.get(\"key\"),\n text=key.get(\"text\"),\n modifiers=key.get\n (\"modifiers\"),\n )\n elif ev.get(\"name\") == \"PollEvent\": # Ticking from front-end (JS)\n # Allthough the event originates from JS, this is basically\n # a poll event from IPyhon's event loop, which we use to\n # update the backend app and draw stuff if necessary. 
If we\n # succeed to make IPython process GUI app events directly,\n # this \"JS timer\" should not be necessary.\n self._vispy_canvas.app.process_events()\n if self._need_draw:\n self._on_draw()\n # Generate a timer event on every poll from JS\n # AK: no, just use app.Timer as usual!\n #self._vispy_canvas.events.timer(type=\"timer\")\n\n def _prepare_js(self):\n pkgdir = op.dirname(__file__)\n install_nbextension([op.join(pkgdir, '../../html/static/js')])\n script = 'IPython.load_extensions(\"js/vispy\");'\n display(Javascript(script))\n\n\n# ------------------------------------------------------------------- timer ---\n\nclass TimerBackend(BaseTimerBackend):\n\n def __init__(self, vispy_timer):\n self._backend2 = _app.backend_module.TimerBackend(vispy_timer)\n\n def _vispy_start(self, interval):\n return self._backend2._vispy_start(interval)\n\n def _vispy_stop(self):\n return self._backend2._vispy_stop()\n\n def _vispy_timeout(self):\n return self._backend2._vispy_timeout()\n\n\n# ---------------------------------------------------------- IPython Widget ---\n\nclass Widget(DOMWidget):\n _view_name = Unicode(\"Widget\", sync=True)\n\n # Define the custom state properties to sync with the front-end\n format = Unicode('png', sync=True)\n width = Int(sync=True)\n height = Int(sync=True)\n interval = Float(sync=True)\n is_closing = Bool(sync=True)\n value = Unicode(sync=True)\n\n def __init__(self, gen_event, **kwargs):\n super(Widget, self).__init__(**kwargs)\n self.size = kwargs[\"size\"]\n self.interval = 50.0\n self.gen_event = gen_event\n self.on_msg(self._handle_event_msg)\n\n def _handle_event_msg(self, _, content):\n # If closing, don't bother generating the event\n if not self.is_closing:\n self.gen_event(content)\n\n @property\n def size(self):\n return self.width, self.height\n\n @size.setter\n def size(self, size):\n self.width, self.height = size\n\n def quit(self):\n self.is_closing = True\n self.close()\n", "path": "vispy/app/backends/_ipynb_vnc.py"}]} |
gh_patches_debug_1180 | rasdani/github-patches | git_diff | oppia__oppia-10093 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Get rid of unnecessary optional properties
There are a lot of `?` in the types that are not required. They should be removed.
--- END ISSUE ---
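For context, the `?` markers the issue refers to are optional properties in the frontend type definitions. A rough, hypothetical illustration in Python (the language of the files below, not the actual Oppia frontend code) is a response type whose key is marked optional only because the backend sometimes omits it:

```python
from typing import List, TypedDict


class TopicSummaryDict(TypedDict):
    """Hypothetical stand-in for a topic summary payload."""

    id: str
    name: str


class RequiredDashboardData(TypedDict):
    explorations_list: List[dict]
    collections_list: List[dict]


class CreatorDashboardData(RequiredDashboardData, total=False):
    # This key is "optional" only because the backend sometimes omits it.
    # Once the backend always includes it (even as an empty list), the key
    # can be declared as required and the optional marker disappears.
    topic_summary_dicts: List[TopicSummaryDict]
```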
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `core/controllers/creator_dashboard.py`
Content:
```
1 # Copyright 2014 The Oppia Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS-IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Controllers for the creator dashboard, notifications, and creating new
16 activities.
17 """
18
19 from __future__ import absolute_import # pylint: disable=import-only-modules
20 from __future__ import unicode_literals # pylint: disable=import-only-modules
21
22 import logging
23
24 from constants import constants
25 from core.controllers import acl_decorators
26 from core.controllers import base
27 from core.domain import collection_domain
28 from core.domain import collection_services
29 from core.domain import exp_domain
30 from core.domain import exp_fetchers
31 from core.domain import exp_services
32 from core.domain import feedback_services
33 from core.domain import role_services
34 from core.domain import subscription_services
35 from core.domain import suggestion_services
36 from core.domain import summary_services
37 from core.domain import topic_services
38 from core.domain import user_jobs_continuous
39 from core.domain import user_services
40 from core.platform import models
41 import feconf
42 import python_utils
43 import utils
44
45 (feedback_models, suggestion_models) = models.Registry.import_models(
46 [models.NAMES.feedback, models.NAMES.suggestion])
47
48 EXPLORATION_ID_KEY = 'exploration_id'
49 COLLECTION_ID_KEY = 'collection_id'
50
51
52 class OldNotificationsDashboardRedirectPage(base.BaseHandler):
53 """Redirects the old notifications dashboard URL to the new one."""
54
55 @acl_decorators.open_access
56 def get(self):
57 """Handles GET requests."""
58 self.redirect(feconf.NOTIFICATIONS_DASHBOARD_URL, permanent=True)
59
60
61 class OldCommunityDashboardRedirectPage(base.BaseHandler):
62 """Redirects the old community dashboard URL to the new one."""
63
64 @acl_decorators.open_access
65 def get(self):
66 """Handles GET requests."""
67 self.redirect('/community-dashboard', permanent=True)
68
69
70 class NotificationsDashboardPage(base.BaseHandler):
71 """Page with notifications for the user."""
72
73 @acl_decorators.can_access_creator_dashboard
74 def get(self):
75 self.render_template(
76 'notifications-dashboard-page.mainpage.html')
77
78
79 class NotificationsDashboardHandler(base.BaseHandler):
80 """Provides data for the user notifications dashboard."""
81
82 GET_HANDLER_ERROR_RETURN_TYPE = feconf.HANDLER_TYPE_JSON
83
84 @acl_decorators.can_access_creator_dashboard
85 def get(self):
86 """Handles GET requests."""
87 job_queued_msec, recent_notifications = (
88 user_jobs_continuous.DashboardRecentUpdatesAggregator
89 .get_recent_user_changes(self.user_id))
90
91 last_seen_msec = (
92 subscription_services.get_last_seen_notifications_msec(
93 self.user_id))
94
95 # Replace author_ids with their usernames.
96 author_ids = [
97 notification['author_id'] for notification in recent_notifications
98 if notification['author_id']]
99 author_usernames = user_services.get_usernames(author_ids)
100
101 author_id_to_username = {
102 None: '',
103 }
104 for ind, author_id in enumerate(author_ids):
105 author_id_to_username[author_id] = author_usernames[ind]
106 for notification in recent_notifications:
107 notification['author_username'] = (
108 author_id_to_username[notification['author_id']])
109 del notification['author_id']
110
111 subscription_services.record_user_has_seen_notifications(
112 self.user_id, job_queued_msec if job_queued_msec else 0.0)
113
114 self.values.update({
115 # This may be None if no job has ever run for this user.
116 'job_queued_msec': job_queued_msec,
117 # This may be None if this is the first time the user has seen
118 # the dashboard.
119 'last_seen_msec': last_seen_msec,
120 'recent_notifications': recent_notifications,
121 })
122 self.render_json(self.values)
123
124
125 class OldCreatorDashboardRedirectPage(base.BaseHandler):
126 """Redirects the old creator dashboard URL to the new one."""
127
128 @acl_decorators.open_access
129 def get(self):
130 """Handles GET requests."""
131 self.redirect(feconf.CREATOR_DASHBOARD_URL, permanent=True)
132
133
134 class CreatorDashboardPage(base.BaseHandler):
135 """Page showing the user's creator dashboard."""
136
137 ADDITIONAL_DEPENDENCY_IDS = ['codemirror']
138
139 @acl_decorators.can_access_creator_dashboard
140 def get(self):
141
142 self.render_template('creator-dashboard-page.mainpage.html')
143
144
145 class CreatorDashboardHandler(base.BaseHandler):
146 """Provides data for the user's creator dashboard page."""
147
148 GET_HANDLER_ERROR_RETURN_TYPE = feconf.HANDLER_TYPE_JSON
149
150 @acl_decorators.can_access_creator_dashboard
151 def get(self):
152 """Handles GET requests."""
153
154 def _round_average_ratings(rating):
155 """Returns the rounded average rating to display on the creator
156 dashboard.
157
158 Args:
159 rating: float. The rating of the lesson.
160
161 Returns:
162 float. The rounded average value of rating.
163 """
164 return python_utils.ROUND(
165 rating, feconf.AVERAGE_RATINGS_DASHBOARD_PRECISION)
166
167 subscribed_exploration_summaries = (
168 exp_fetchers.get_exploration_summaries_subscribed_to(
169 self.user_id))
170 subscribed_collection_summaries = (
171 collection_services.get_collection_summaries_subscribed_to(
172 self.user_id))
173
174 exploration_ids_subscribed_to = [
175 summary.id for summary in subscribed_exploration_summaries]
176
177 exp_summary_dicts = summary_services.get_displayable_exp_summary_dicts(
178 subscribed_exploration_summaries)
179 collection_summary_dicts = []
180
181 feedback_thread_analytics = (
182 feedback_services.get_thread_analytics_multi(
183 exploration_ids_subscribed_to))
184
185 # TODO(bhenning): Update this to use unresolved answers from
186 # stats_services once the training interface is enabled and it's cheaper
187 # to retrieve top answers from stats_services.
188 for ind, exploration in enumerate(exp_summary_dicts):
189 exploration.update(feedback_thread_analytics[ind].to_dict())
190
191 exp_summary_dicts = sorted(
192 exp_summary_dicts,
193 key=lambda x: (x['num_open_threads'], x['last_updated_msec']),
194 reverse=True)
195
196 if constants.ENABLE_NEW_STRUCTURE_PLAYERS:
197 topic_summaries = topic_services.get_all_topic_summaries()
198 topic_summary_dicts = [
199 summary.to_dict() for summary in topic_summaries]
200
201 if role_services.ACTION_CREATE_COLLECTION in self.user.actions:
202 for collection_summary in subscribed_collection_summaries:
203 # TODO(sll): Reuse _get_displayable_collection_summary_dicts()
204 # in summary_services, instead of replicating it like this.
205 collection_summary_dicts.append({
206 'id': collection_summary.id,
207 'title': collection_summary.title,
208 'category': collection_summary.category,
209 'objective': collection_summary.objective,
210 'language_code': collection_summary.language_code,
211 'last_updated_msec': utils.get_time_in_millisecs(
212 collection_summary.collection_model_last_updated),
213 'created_on': utils.get_time_in_millisecs(
214 collection_summary.collection_model_created_on),
215 'status': collection_summary.status,
216 'node_count': collection_summary.node_count,
217 'community_owned': collection_summary.community_owned,
218 'thumbnail_icon_url': (
219 utils.get_thumbnail_icon_url_for_category(
220 collection_summary.category)),
221 'thumbnail_bg_color': utils.get_hex_color_for_category(
222 collection_summary.category),
223 })
224
225 dashboard_stats = (
226 user_jobs_continuous.UserStatsAggregator.get_dashboard_stats(
227 self.user_id))
228 dashboard_stats.update({
229 'total_open_feedback': feedback_services.get_total_open_threads(
230 feedback_thread_analytics)
231 })
232 if dashboard_stats and dashboard_stats.get('average_ratings'):
233 dashboard_stats['average_ratings'] = (
234 _round_average_ratings(dashboard_stats['average_ratings']))
235
236 last_week_stats = (
237 user_services.get_last_week_dashboard_stats(self.user_id))
238
239 if last_week_stats and len(list(last_week_stats.keys())) != 1:
240 logging.error(
241 '\'last_week_stats\' should contain only one key-value pair'
242 ' denoting last week dashboard stats of the user keyed by a'
243 ' datetime string.')
244 last_week_stats = None
245
246 if last_week_stats:
247 # 'last_week_stats' is a dict with only one key-value pair denoting
248 # last week dashboard stats of the user keyed by a datetime string.
249 datetime_of_stats = list(last_week_stats.keys())[0]
250 last_week_stats_average_ratings = (
251 list(last_week_stats.values())[0].get('average_ratings'))
252 if last_week_stats_average_ratings:
253 last_week_stats[datetime_of_stats]['average_ratings'] = (
254 _round_average_ratings(last_week_stats_average_ratings))
255
256 subscriber_ids = subscription_services.get_all_subscribers_of_creator(
257 self.user_id)
258 subscribers_settings = user_services.get_users_settings(subscriber_ids)
259 subscribers_list = []
260 for index, subscriber_settings in enumerate(subscribers_settings):
261 subscriber_summary = {
262 'subscriber_picture_data_url': (
263 subscriber_settings.profile_picture_data_url),
264 'subscriber_username': subscriber_settings.username,
265 'subscriber_impact': (
266 user_services.get_user_impact_score(subscriber_ids[index]))
267 }
268
269 subscribers_list.append(subscriber_summary)
270
271 user_settings = user_services.get_user_settings(
272 self.user_id, strict=False)
273 creator_dashboard_display_pref = (
274 user_settings.creator_dashboard_display_pref)
275
276 suggestions_created_by_user = suggestion_services.query_suggestions(
277 [('author_id', self.user_id),
278 (
279 'suggestion_type',
280 suggestion_models.SUGGESTION_TYPE_EDIT_STATE_CONTENT)])
281 suggestions_which_can_be_reviewed = (
282 suggestion_services
283 .get_all_suggestions_that_can_be_reviewed_by_user(self.user_id))
284
285 for s in suggestions_created_by_user:
286 s.populate_old_value_of_change()
287
288 for s in suggestions_which_can_be_reviewed:
289 s.populate_old_value_of_change()
290
291 suggestion_dicts_created_by_user = (
292 [s.to_dict() for s in suggestions_created_by_user])
293 suggestion_dicts_which_can_be_reviewed = (
294 [s.to_dict() for s in suggestions_which_can_be_reviewed])
295
296 ids_of_suggestions_created_by_user = (
297 [s['suggestion_id'] for s in suggestion_dicts_created_by_user])
298 ids_of_suggestions_which_can_be_reviewed = (
299 [s['suggestion_id']
300 for s in suggestion_dicts_which_can_be_reviewed])
301
302 threads_linked_to_suggestions_by_user = (
303 [t.to_dict() for t in feedback_services.get_multiple_threads(
304 ids_of_suggestions_created_by_user)])
305 threads_linked_to_suggestions_which_can_be_reviewed = (
306 [t.to_dict() for t in feedback_services.get_multiple_threads(
307 ids_of_suggestions_which_can_be_reviewed)])
308
309 self.values.update({
310 'explorations_list': exp_summary_dicts,
311 'collections_list': collection_summary_dicts,
312 'dashboard_stats': dashboard_stats,
313 'last_week_stats': last_week_stats,
314 'subscribers_list': subscribers_list,
315 'display_preference': creator_dashboard_display_pref,
316 'threads_for_created_suggestions_list': (
317 threads_linked_to_suggestions_by_user),
318 'threads_for_suggestions_to_review_list': (
319 threads_linked_to_suggestions_which_can_be_reviewed),
320 'created_suggestions_list': suggestion_dicts_created_by_user,
321 'suggestions_to_review_list': suggestion_dicts_which_can_be_reviewed
322 })
323 if constants.ENABLE_NEW_STRUCTURE_PLAYERS:
324 self.values.update({
325 'topic_summary_dicts': topic_summary_dicts
326 })
327 self.render_json(self.values)
328
329 @acl_decorators.can_access_creator_dashboard
330 def post(self):
331 creator_dashboard_display_pref = self.payload.get('display_preference')
332 user_services.update_user_creator_dashboard_display(
333 self.user_id, creator_dashboard_display_pref)
334 self.render_json({})
335
336
337 class NotificationsHandler(base.BaseHandler):
338 """Provides data about unseen notifications."""
339
340 GET_HANDLER_ERROR_RETURN_TYPE = feconf.HANDLER_TYPE_JSON
341
342 @acl_decorators.can_access_creator_dashboard
343 def get(self):
344 """Handles GET requests."""
345 num_unseen_notifications = 0
346 last_seen_msec = (
347 subscription_services.get_last_seen_notifications_msec(
348 self.user_id))
349 _, recent_notifications = (
350 user_jobs_continuous.DashboardRecentUpdatesAggregator
351 .get_recent_user_changes(self.user_id))
352 for notification in recent_notifications:
353 if (notification['last_updated_ms'] > last_seen_msec and
354 notification['author_id'] != self.user_id):
355 num_unseen_notifications += 1
356
357 self.render_json({
358 'num_unseen_notifications': num_unseen_notifications,
359 })
360
361
362 class NewExplorationHandler(base.BaseHandler):
363 """Creates a new exploration."""
364
365 @acl_decorators.can_create_exploration
366 def post(self):
367 """Handles POST requests."""
368 title = self.payload.get('title', feconf.DEFAULT_EXPLORATION_TITLE)
369
370 new_exploration_id = exp_fetchers.get_new_exploration_id()
371 exploration = exp_domain.Exploration.create_default_exploration(
372 new_exploration_id, title=title)
373 exp_services.save_new_exploration(self.user_id, exploration)
374
375 self.render_json({
376 EXPLORATION_ID_KEY: new_exploration_id
377 })
378
379
380 class NewCollectionHandler(base.BaseHandler):
381 """Creates a new collection."""
382
383 @acl_decorators.can_create_collection
384 def post(self):
385 """Handles POST requests."""
386 new_collection_id = collection_services.get_new_collection_id()
387 collection = collection_domain.Collection.create_default_collection(
388 new_collection_id)
389 collection_services.save_new_collection(self.user_id, collection)
390
391 self.render_json({
392 COLLECTION_ID_KEY: new_collection_id
393 })
394
395
396 class UploadExplorationHandler(base.BaseHandler):
397 """Uploads a new exploration."""
398
399 @acl_decorators.can_upload_exploration
400 def post(self):
401 """Handles POST requests."""
402 yaml_content = self.request.get('yaml_file')
403
404 new_exploration_id = exp_fetchers.get_new_exploration_id()
405 if constants.ALLOW_YAML_FILE_UPLOAD:
406 exp_services.save_new_exploration_from_yaml_and_assets(
407 self.user_id, yaml_content, new_exploration_id, [],
408 strip_voiceovers=True)
409 self.render_json({
410 EXPLORATION_ID_KEY: new_exploration_id
411 })
412 else:
413 raise self.InvalidInputException(
414 'This server does not allow file uploads.')
415
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/core/controllers/creator_dashboard.py b/core/controllers/creator_dashboard.py
--- a/core/controllers/creator_dashboard.py
+++ b/core/controllers/creator_dashboard.py
@@ -324,6 +324,11 @@
self.values.update({
'topic_summary_dicts': topic_summary_dicts
})
+ else:
+ self.values.update({
+ 'topic_summary_dicts': []
+ })
+
self.render_json(self.values)
@acl_decorators.can_access_creator_dashboard
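The golden diff boils down to one pattern: the response dict always carries the `topic_summary_dicts` key, falling back to an empty list when `ENABLE_NEW_STRUCTURE_PLAYERS` is off, so downstream consumers never have to treat the key as optional. A minimal sketch of that pattern (plain functions and dicts, not the actual Oppia handler) is:

```python
def build_dashboard_values(enable_new_structure_players, topic_summary_dicts):
    """Builds a response dict whose 'topic_summary_dicts' key is always set.

    Mirrors the fix above: emit the key unconditionally, defaulting to an
    empty list when the feature flag is off.
    """
    values = {
        'explorations_list': [],
        'collections_list': [],
    }
    if enable_new_structure_players:
        values['topic_summary_dicts'] = list(topic_summary_dicts)
    else:
        values['topic_summary_dicts'] = []
    return values


# The key exists in both cases, so callers never need a missing-key check.
assert 'topic_summary_dicts' in build_dashboard_values(True, [{'id': 'topic1'}])
assert build_dashboard_values(False, [{'id': 'topic1'}])['topic_summary_dicts'] == []
```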
| {"golden_diff": "diff --git a/core/controllers/creator_dashboard.py b/core/controllers/creator_dashboard.py\n--- a/core/controllers/creator_dashboard.py\n+++ b/core/controllers/creator_dashboard.py\n@@ -324,6 +324,11 @@\n self.values.update({\n 'topic_summary_dicts': topic_summary_dicts\n })\n+ else:\n+ self.values.update({\n+ 'topic_summary_dicts': []\n+ })\n+\n self.render_json(self.values)\n \n @acl_decorators.can_access_creator_dashboard\n", "issue": "Get rid of unnecessary optional properties\nThere are a lot of `?` in the types that are not required. They should be removed.\nGet rid of unnecessary optional properties\nThere are a lot of `?` in the types that are not required. They should be removed.\n", "before_files": [{"content": "# Copyright 2014 The Oppia Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS-IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Controllers for the creator dashboard, notifications, and creating new\nactivities.\n\"\"\"\n\nfrom __future__ import absolute_import # pylint: disable=import-only-modules\nfrom __future__ import unicode_literals # pylint: disable=import-only-modules\n\nimport logging\n\nfrom constants import constants\nfrom core.controllers import acl_decorators\nfrom core.controllers import base\nfrom core.domain import collection_domain\nfrom core.domain import collection_services\nfrom core.domain import exp_domain\nfrom core.domain import exp_fetchers\nfrom core.domain import exp_services\nfrom core.domain import feedback_services\nfrom core.domain import role_services\nfrom core.domain import subscription_services\nfrom core.domain import suggestion_services\nfrom core.domain import summary_services\nfrom core.domain import topic_services\nfrom core.domain import user_jobs_continuous\nfrom core.domain import user_services\nfrom core.platform import models\nimport feconf\nimport python_utils\nimport utils\n\n(feedback_models, suggestion_models) = models.Registry.import_models(\n [models.NAMES.feedback, models.NAMES.suggestion])\n\nEXPLORATION_ID_KEY = 'exploration_id'\nCOLLECTION_ID_KEY = 'collection_id'\n\n\nclass OldNotificationsDashboardRedirectPage(base.BaseHandler):\n \"\"\"Redirects the old notifications dashboard URL to the new one.\"\"\"\n\n @acl_decorators.open_access\n def get(self):\n \"\"\"Handles GET requests.\"\"\"\n self.redirect(feconf.NOTIFICATIONS_DASHBOARD_URL, permanent=True)\n\n\nclass OldCommunityDashboardRedirectPage(base.BaseHandler):\n \"\"\"Redirects the old community dashboard URL to the new one.\"\"\"\n\n @acl_decorators.open_access\n def get(self):\n \"\"\"Handles GET requests.\"\"\"\n self.redirect('/community-dashboard', permanent=True)\n\n\nclass NotificationsDashboardPage(base.BaseHandler):\n \"\"\"Page with notifications for the user.\"\"\"\n\n @acl_decorators.can_access_creator_dashboard\n def get(self):\n self.render_template(\n 'notifications-dashboard-page.mainpage.html')\n\n\nclass NotificationsDashboardHandler(base.BaseHandler):\n \"\"\"Provides data for the user notifications dashboard.\"\"\"\n\n 
GET_HANDLER_ERROR_RETURN_TYPE = feconf.HANDLER_TYPE_JSON\n\n @acl_decorators.can_access_creator_dashboard\n def get(self):\n \"\"\"Handles GET requests.\"\"\"\n job_queued_msec, recent_notifications = (\n user_jobs_continuous.DashboardRecentUpdatesAggregator\n .get_recent_user_changes(self.user_id))\n\n last_seen_msec = (\n subscription_services.get_last_seen_notifications_msec(\n self.user_id))\n\n # Replace author_ids with their usernames.\n author_ids = [\n notification['author_id'] for notification in recent_notifications\n if notification['author_id']]\n author_usernames = user_services.get_usernames(author_ids)\n\n author_id_to_username = {\n None: '',\n }\n for ind, author_id in enumerate(author_ids):\n author_id_to_username[author_id] = author_usernames[ind]\n for notification in recent_notifications:\n notification['author_username'] = (\n author_id_to_username[notification['author_id']])\n del notification['author_id']\n\n subscription_services.record_user_has_seen_notifications(\n self.user_id, job_queued_msec if job_queued_msec else 0.0)\n\n self.values.update({\n # This may be None if no job has ever run for this user.\n 'job_queued_msec': job_queued_msec,\n # This may be None if this is the first time the user has seen\n # the dashboard.\n 'last_seen_msec': last_seen_msec,\n 'recent_notifications': recent_notifications,\n })\n self.render_json(self.values)\n\n\nclass OldCreatorDashboardRedirectPage(base.BaseHandler):\n \"\"\"Redirects the old creator dashboard URL to the new one.\"\"\"\n\n @acl_decorators.open_access\n def get(self):\n \"\"\"Handles GET requests.\"\"\"\n self.redirect(feconf.CREATOR_DASHBOARD_URL, permanent=True)\n\n\nclass CreatorDashboardPage(base.BaseHandler):\n \"\"\"Page showing the user's creator dashboard.\"\"\"\n\n ADDITIONAL_DEPENDENCY_IDS = ['codemirror']\n\n @acl_decorators.can_access_creator_dashboard\n def get(self):\n\n self.render_template('creator-dashboard-page.mainpage.html')\n\n\nclass CreatorDashboardHandler(base.BaseHandler):\n \"\"\"Provides data for the user's creator dashboard page.\"\"\"\n\n GET_HANDLER_ERROR_RETURN_TYPE = feconf.HANDLER_TYPE_JSON\n\n @acl_decorators.can_access_creator_dashboard\n def get(self):\n \"\"\"Handles GET requests.\"\"\"\n\n def _round_average_ratings(rating):\n \"\"\"Returns the rounded average rating to display on the creator\n dashboard.\n\n Args:\n rating: float. The rating of the lesson.\n\n Returns:\n float. 
The rounded average value of rating.\n \"\"\"\n return python_utils.ROUND(\n rating, feconf.AVERAGE_RATINGS_DASHBOARD_PRECISION)\n\n subscribed_exploration_summaries = (\n exp_fetchers.get_exploration_summaries_subscribed_to(\n self.user_id))\n subscribed_collection_summaries = (\n collection_services.get_collection_summaries_subscribed_to(\n self.user_id))\n\n exploration_ids_subscribed_to = [\n summary.id for summary in subscribed_exploration_summaries]\n\n exp_summary_dicts = summary_services.get_displayable_exp_summary_dicts(\n subscribed_exploration_summaries)\n collection_summary_dicts = []\n\n feedback_thread_analytics = (\n feedback_services.get_thread_analytics_multi(\n exploration_ids_subscribed_to))\n\n # TODO(bhenning): Update this to use unresolved answers from\n # stats_services once the training interface is enabled and it's cheaper\n # to retrieve top answers from stats_services.\n for ind, exploration in enumerate(exp_summary_dicts):\n exploration.update(feedback_thread_analytics[ind].to_dict())\n\n exp_summary_dicts = sorted(\n exp_summary_dicts,\n key=lambda x: (x['num_open_threads'], x['last_updated_msec']),\n reverse=True)\n\n if constants.ENABLE_NEW_STRUCTURE_PLAYERS:\n topic_summaries = topic_services.get_all_topic_summaries()\n topic_summary_dicts = [\n summary.to_dict() for summary in topic_summaries]\n\n if role_services.ACTION_CREATE_COLLECTION in self.user.actions:\n for collection_summary in subscribed_collection_summaries:\n # TODO(sll): Reuse _get_displayable_collection_summary_dicts()\n # in summary_services, instead of replicating it like this.\n collection_summary_dicts.append({\n 'id': collection_summary.id,\n 'title': collection_summary.title,\n 'category': collection_summary.category,\n 'objective': collection_summary.objective,\n 'language_code': collection_summary.language_code,\n 'last_updated_msec': utils.get_time_in_millisecs(\n collection_summary.collection_model_last_updated),\n 'created_on': utils.get_time_in_millisecs(\n collection_summary.collection_model_created_on),\n 'status': collection_summary.status,\n 'node_count': collection_summary.node_count,\n 'community_owned': collection_summary.community_owned,\n 'thumbnail_icon_url': (\n utils.get_thumbnail_icon_url_for_category(\n collection_summary.category)),\n 'thumbnail_bg_color': utils.get_hex_color_for_category(\n collection_summary.category),\n })\n\n dashboard_stats = (\n user_jobs_continuous.UserStatsAggregator.get_dashboard_stats(\n self.user_id))\n dashboard_stats.update({\n 'total_open_feedback': feedback_services.get_total_open_threads(\n feedback_thread_analytics)\n })\n if dashboard_stats and dashboard_stats.get('average_ratings'):\n dashboard_stats['average_ratings'] = (\n _round_average_ratings(dashboard_stats['average_ratings']))\n\n last_week_stats = (\n user_services.get_last_week_dashboard_stats(self.user_id))\n\n if last_week_stats and len(list(last_week_stats.keys())) != 1:\n logging.error(\n '\\'last_week_stats\\' should contain only one key-value pair'\n ' denoting last week dashboard stats of the user keyed by a'\n ' datetime string.')\n last_week_stats = None\n\n if last_week_stats:\n # 'last_week_stats' is a dict with only one key-value pair denoting\n # last week dashboard stats of the user keyed by a datetime string.\n datetime_of_stats = list(last_week_stats.keys())[0]\n last_week_stats_average_ratings = (\n list(last_week_stats.values())[0].get('average_ratings'))\n if last_week_stats_average_ratings:\n last_week_stats[datetime_of_stats]['average_ratings'] = (\n 
_round_average_ratings(last_week_stats_average_ratings))\n\n subscriber_ids = subscription_services.get_all_subscribers_of_creator(\n self.user_id)\n subscribers_settings = user_services.get_users_settings(subscriber_ids)\n subscribers_list = []\n for index, subscriber_settings in enumerate(subscribers_settings):\n subscriber_summary = {\n 'subscriber_picture_data_url': (\n subscriber_settings.profile_picture_data_url),\n 'subscriber_username': subscriber_settings.username,\n 'subscriber_impact': (\n user_services.get_user_impact_score(subscriber_ids[index]))\n }\n\n subscribers_list.append(subscriber_summary)\n\n user_settings = user_services.get_user_settings(\n self.user_id, strict=False)\n creator_dashboard_display_pref = (\n user_settings.creator_dashboard_display_pref)\n\n suggestions_created_by_user = suggestion_services.query_suggestions(\n [('author_id', self.user_id),\n (\n 'suggestion_type',\n suggestion_models.SUGGESTION_TYPE_EDIT_STATE_CONTENT)])\n suggestions_which_can_be_reviewed = (\n suggestion_services\n .get_all_suggestions_that_can_be_reviewed_by_user(self.user_id))\n\n for s in suggestions_created_by_user:\n s.populate_old_value_of_change()\n\n for s in suggestions_which_can_be_reviewed:\n s.populate_old_value_of_change()\n\n suggestion_dicts_created_by_user = (\n [s.to_dict() for s in suggestions_created_by_user])\n suggestion_dicts_which_can_be_reviewed = (\n [s.to_dict() for s in suggestions_which_can_be_reviewed])\n\n ids_of_suggestions_created_by_user = (\n [s['suggestion_id'] for s in suggestion_dicts_created_by_user])\n ids_of_suggestions_which_can_be_reviewed = (\n [s['suggestion_id']\n for s in suggestion_dicts_which_can_be_reviewed])\n\n threads_linked_to_suggestions_by_user = (\n [t.to_dict() for t in feedback_services.get_multiple_threads(\n ids_of_suggestions_created_by_user)])\n threads_linked_to_suggestions_which_can_be_reviewed = (\n [t.to_dict() for t in feedback_services.get_multiple_threads(\n ids_of_suggestions_which_can_be_reviewed)])\n\n self.values.update({\n 'explorations_list': exp_summary_dicts,\n 'collections_list': collection_summary_dicts,\n 'dashboard_stats': dashboard_stats,\n 'last_week_stats': last_week_stats,\n 'subscribers_list': subscribers_list,\n 'display_preference': creator_dashboard_display_pref,\n 'threads_for_created_suggestions_list': (\n threads_linked_to_suggestions_by_user),\n 'threads_for_suggestions_to_review_list': (\n threads_linked_to_suggestions_which_can_be_reviewed),\n 'created_suggestions_list': suggestion_dicts_created_by_user,\n 'suggestions_to_review_list': suggestion_dicts_which_can_be_reviewed\n })\n if constants.ENABLE_NEW_STRUCTURE_PLAYERS:\n self.values.update({\n 'topic_summary_dicts': topic_summary_dicts\n })\n self.render_json(self.values)\n\n @acl_decorators.can_access_creator_dashboard\n def post(self):\n creator_dashboard_display_pref = self.payload.get('display_preference')\n user_services.update_user_creator_dashboard_display(\n self.user_id, creator_dashboard_display_pref)\n self.render_json({})\n\n\nclass NotificationsHandler(base.BaseHandler):\n \"\"\"Provides data about unseen notifications.\"\"\"\n\n GET_HANDLER_ERROR_RETURN_TYPE = feconf.HANDLER_TYPE_JSON\n\n @acl_decorators.can_access_creator_dashboard\n def get(self):\n \"\"\"Handles GET requests.\"\"\"\n num_unseen_notifications = 0\n last_seen_msec = (\n subscription_services.get_last_seen_notifications_msec(\n self.user_id))\n _, recent_notifications = (\n user_jobs_continuous.DashboardRecentUpdatesAggregator\n 
.get_recent_user_changes(self.user_id))\n for notification in recent_notifications:\n if (notification['last_updated_ms'] > last_seen_msec and\n notification['author_id'] != self.user_id):\n num_unseen_notifications += 1\n\n self.render_json({\n 'num_unseen_notifications': num_unseen_notifications,\n })\n\n\nclass NewExplorationHandler(base.BaseHandler):\n \"\"\"Creates a new exploration.\"\"\"\n\n @acl_decorators.can_create_exploration\n def post(self):\n \"\"\"Handles POST requests.\"\"\"\n title = self.payload.get('title', feconf.DEFAULT_EXPLORATION_TITLE)\n\n new_exploration_id = exp_fetchers.get_new_exploration_id()\n exploration = exp_domain.Exploration.create_default_exploration(\n new_exploration_id, title=title)\n exp_services.save_new_exploration(self.user_id, exploration)\n\n self.render_json({\n EXPLORATION_ID_KEY: new_exploration_id\n })\n\n\nclass NewCollectionHandler(base.BaseHandler):\n \"\"\"Creates a new collection.\"\"\"\n\n @acl_decorators.can_create_collection\n def post(self):\n \"\"\"Handles POST requests.\"\"\"\n new_collection_id = collection_services.get_new_collection_id()\n collection = collection_domain.Collection.create_default_collection(\n new_collection_id)\n collection_services.save_new_collection(self.user_id, collection)\n\n self.render_json({\n COLLECTION_ID_KEY: new_collection_id\n })\n\n\nclass UploadExplorationHandler(base.BaseHandler):\n \"\"\"Uploads a new exploration.\"\"\"\n\n @acl_decorators.can_upload_exploration\n def post(self):\n \"\"\"Handles POST requests.\"\"\"\n yaml_content = self.request.get('yaml_file')\n\n new_exploration_id = exp_fetchers.get_new_exploration_id()\n if constants.ALLOW_YAML_FILE_UPLOAD:\n exp_services.save_new_exploration_from_yaml_and_assets(\n self.user_id, yaml_content, new_exploration_id, [],\n strip_voiceovers=True)\n self.render_json({\n EXPLORATION_ID_KEY: new_exploration_id\n })\n else:\n raise self.InvalidInputException(\n 'This server does not allow file uploads.')\n", "path": "core/controllers/creator_dashboard.py"}], "after_files": [{"content": "# Copyright 2014 The Oppia Authors. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS-IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Controllers for the creator dashboard, notifications, and creating new\nactivities.\n\"\"\"\n\nfrom __future__ import absolute_import # pylint: disable=import-only-modules\nfrom __future__ import unicode_literals # pylint: disable=import-only-modules\n\nimport logging\n\nfrom constants import constants\nfrom core.controllers import acl_decorators\nfrom core.controllers import base\nfrom core.domain import collection_domain\nfrom core.domain import collection_services\nfrom core.domain import exp_domain\nfrom core.domain import exp_fetchers\nfrom core.domain import exp_services\nfrom core.domain import feedback_services\nfrom core.domain import role_services\nfrom core.domain import subscription_services\nfrom core.domain import suggestion_services\nfrom core.domain import summary_services\nfrom core.domain import topic_services\nfrom core.domain import user_jobs_continuous\nfrom core.domain import user_services\nfrom core.platform import models\nimport feconf\nimport python_utils\nimport utils\n\n(feedback_models, suggestion_models) = models.Registry.import_models(\n [models.NAMES.feedback, models.NAMES.suggestion])\n\nEXPLORATION_ID_KEY = 'exploration_id'\nCOLLECTION_ID_KEY = 'collection_id'\n\n\nclass OldNotificationsDashboardRedirectPage(base.BaseHandler):\n \"\"\"Redirects the old notifications dashboard URL to the new one.\"\"\"\n\n @acl_decorators.open_access\n def get(self):\n \"\"\"Handles GET requests.\"\"\"\n self.redirect(feconf.NOTIFICATIONS_DASHBOARD_URL, permanent=True)\n\n\nclass OldCommunityDashboardRedirectPage(base.BaseHandler):\n \"\"\"Redirects the old community dashboard URL to the new one.\"\"\"\n\n @acl_decorators.open_access\n def get(self):\n \"\"\"Handles GET requests.\"\"\"\n self.redirect('/community-dashboard', permanent=True)\n\n\nclass NotificationsDashboardPage(base.BaseHandler):\n \"\"\"Page with notifications for the user.\"\"\"\n\n @acl_decorators.can_access_creator_dashboard\n def get(self):\n self.render_template(\n 'notifications-dashboard-page.mainpage.html')\n\n\nclass NotificationsDashboardHandler(base.BaseHandler):\n \"\"\"Provides data for the user notifications dashboard.\"\"\"\n\n GET_HANDLER_ERROR_RETURN_TYPE = feconf.HANDLER_TYPE_JSON\n\n @acl_decorators.can_access_creator_dashboard\n def get(self):\n \"\"\"Handles GET requests.\"\"\"\n job_queued_msec, recent_notifications = (\n user_jobs_continuous.DashboardRecentUpdatesAggregator\n .get_recent_user_changes(self.user_id))\n\n last_seen_msec = (\n subscription_services.get_last_seen_notifications_msec(\n self.user_id))\n\n # Replace author_ids with their usernames.\n author_ids = [\n notification['author_id'] for notification in recent_notifications\n if notification['author_id']]\n author_usernames = user_services.get_usernames(author_ids)\n\n author_id_to_username = {\n None: '',\n }\n for ind, author_id in enumerate(author_ids):\n author_id_to_username[author_id] = author_usernames[ind]\n for notification in 
recent_notifications:\n notification['author_username'] = (\n author_id_to_username[notification['author_id']])\n del notification['author_id']\n\n subscription_services.record_user_has_seen_notifications(\n self.user_id, job_queued_msec if job_queued_msec else 0.0)\n\n self.values.update({\n # This may be None if no job has ever run for this user.\n 'job_queued_msec': job_queued_msec,\n # This may be None if this is the first time the user has seen\n # the dashboard.\n 'last_seen_msec': last_seen_msec,\n 'recent_notifications': recent_notifications,\n })\n self.render_json(self.values)\n\n\nclass OldCreatorDashboardRedirectPage(base.BaseHandler):\n \"\"\"Redirects the old creator dashboard URL to the new one.\"\"\"\n\n @acl_decorators.open_access\n def get(self):\n \"\"\"Handles GET requests.\"\"\"\n self.redirect(feconf.CREATOR_DASHBOARD_URL, permanent=True)\n\n\nclass CreatorDashboardPage(base.BaseHandler):\n \"\"\"Page showing the user's creator dashboard.\"\"\"\n\n ADDITIONAL_DEPENDENCY_IDS = ['codemirror']\n\n @acl_decorators.can_access_creator_dashboard\n def get(self):\n\n self.render_template('creator-dashboard-page.mainpage.html')\n\n\nclass CreatorDashboardHandler(base.BaseHandler):\n \"\"\"Provides data for the user's creator dashboard page.\"\"\"\n\n GET_HANDLER_ERROR_RETURN_TYPE = feconf.HANDLER_TYPE_JSON\n\n @acl_decorators.can_access_creator_dashboard\n def get(self):\n \"\"\"Handles GET requests.\"\"\"\n\n def _round_average_ratings(rating):\n \"\"\"Returns the rounded average rating to display on the creator\n dashboard.\n\n Args:\n rating: float. The rating of the lesson.\n\n Returns:\n float. The rounded average value of rating.\n \"\"\"\n return python_utils.ROUND(\n rating, feconf.AVERAGE_RATINGS_DASHBOARD_PRECISION)\n\n subscribed_exploration_summaries = (\n exp_fetchers.get_exploration_summaries_subscribed_to(\n self.user_id))\n subscribed_collection_summaries = (\n collection_services.get_collection_summaries_subscribed_to(\n self.user_id))\n\n exploration_ids_subscribed_to = [\n summary.id for summary in subscribed_exploration_summaries]\n\n exp_summary_dicts = summary_services.get_displayable_exp_summary_dicts(\n subscribed_exploration_summaries)\n collection_summary_dicts = []\n\n feedback_thread_analytics = (\n feedback_services.get_thread_analytics_multi(\n exploration_ids_subscribed_to))\n\n # TODO(bhenning): Update this to use unresolved answers from\n # stats_services once the training interface is enabled and it's cheaper\n # to retrieve top answers from stats_services.\n for ind, exploration in enumerate(exp_summary_dicts):\n exploration.update(feedback_thread_analytics[ind].to_dict())\n\n exp_summary_dicts = sorted(\n exp_summary_dicts,\n key=lambda x: (x['num_open_threads'], x['last_updated_msec']),\n reverse=True)\n\n if constants.ENABLE_NEW_STRUCTURE_PLAYERS:\n topic_summaries = topic_services.get_all_topic_summaries()\n topic_summary_dicts = [\n summary.to_dict() for summary in topic_summaries]\n\n if role_services.ACTION_CREATE_COLLECTION in self.user.actions:\n for collection_summary in subscribed_collection_summaries:\n # TODO(sll): Reuse _get_displayable_collection_summary_dicts()\n # in summary_services, instead of replicating it like this.\n collection_summary_dicts.append({\n 'id': collection_summary.id,\n 'title': collection_summary.title,\n 'category': collection_summary.category,\n 'objective': collection_summary.objective,\n 'language_code': collection_summary.language_code,\n 'last_updated_msec': utils.get_time_in_millisecs(\n 
collection_summary.collection_model_last_updated),\n 'created_on': utils.get_time_in_millisecs(\n collection_summary.collection_model_created_on),\n 'status': collection_summary.status,\n 'node_count': collection_summary.node_count,\n 'community_owned': collection_summary.community_owned,\n 'thumbnail_icon_url': (\n utils.get_thumbnail_icon_url_for_category(\n collection_summary.category)),\n 'thumbnail_bg_color': utils.get_hex_color_for_category(\n collection_summary.category),\n })\n\n dashboard_stats = (\n user_jobs_continuous.UserStatsAggregator.get_dashboard_stats(\n self.user_id))\n dashboard_stats.update({\n 'total_open_feedback': feedback_services.get_total_open_threads(\n feedback_thread_analytics)\n })\n if dashboard_stats and dashboard_stats.get('average_ratings'):\n dashboard_stats['average_ratings'] = (\n _round_average_ratings(dashboard_stats['average_ratings']))\n\n last_week_stats = (\n user_services.get_last_week_dashboard_stats(self.user_id))\n\n if last_week_stats and len(list(last_week_stats.keys())) != 1:\n logging.error(\n '\\'last_week_stats\\' should contain only one key-value pair'\n ' denoting last week dashboard stats of the user keyed by a'\n ' datetime string.')\n last_week_stats = None\n\n if last_week_stats:\n # 'last_week_stats' is a dict with only one key-value pair denoting\n # last week dashboard stats of the user keyed by a datetime string.\n datetime_of_stats = list(last_week_stats.keys())[0]\n last_week_stats_average_ratings = (\n list(last_week_stats.values())[0].get('average_ratings'))\n if last_week_stats_average_ratings:\n last_week_stats[datetime_of_stats]['average_ratings'] = (\n _round_average_ratings(last_week_stats_average_ratings))\n\n subscriber_ids = subscription_services.get_all_subscribers_of_creator(\n self.user_id)\n subscribers_settings = user_services.get_users_settings(subscriber_ids)\n subscribers_list = []\n for index, subscriber_settings in enumerate(subscribers_settings):\n subscriber_summary = {\n 'subscriber_picture_data_url': (\n subscriber_settings.profile_picture_data_url),\n 'subscriber_username': subscriber_settings.username,\n 'subscriber_impact': (\n user_services.get_user_impact_score(subscriber_ids[index]))\n }\n\n subscribers_list.append(subscriber_summary)\n\n user_settings = user_services.get_user_settings(\n self.user_id, strict=False)\n creator_dashboard_display_pref = (\n user_settings.creator_dashboard_display_pref)\n\n suggestions_created_by_user = suggestion_services.query_suggestions(\n [('author_id', self.user_id),\n (\n 'suggestion_type',\n suggestion_models.SUGGESTION_TYPE_EDIT_STATE_CONTENT)])\n suggestions_which_can_be_reviewed = (\n suggestion_services\n .get_all_suggestions_that_can_be_reviewed_by_user(self.user_id))\n\n for s in suggestions_created_by_user:\n s.populate_old_value_of_change()\n\n for s in suggestions_which_can_be_reviewed:\n s.populate_old_value_of_change()\n\n suggestion_dicts_created_by_user = (\n [s.to_dict() for s in suggestions_created_by_user])\n suggestion_dicts_which_can_be_reviewed = (\n [s.to_dict() for s in suggestions_which_can_be_reviewed])\n\n ids_of_suggestions_created_by_user = (\n [s['suggestion_id'] for s in suggestion_dicts_created_by_user])\n ids_of_suggestions_which_can_be_reviewed = (\n [s['suggestion_id']\n for s in suggestion_dicts_which_can_be_reviewed])\n\n threads_linked_to_suggestions_by_user = (\n [t.to_dict() for t in feedback_services.get_multiple_threads(\n ids_of_suggestions_created_by_user)])\n threads_linked_to_suggestions_which_can_be_reviewed = (\n 
[t.to_dict() for t in feedback_services.get_multiple_threads(\n ids_of_suggestions_which_can_be_reviewed)])\n\n self.values.update({\n 'explorations_list': exp_summary_dicts,\n 'collections_list': collection_summary_dicts,\n 'dashboard_stats': dashboard_stats,\n 'last_week_stats': last_week_stats,\n 'subscribers_list': subscribers_list,\n 'display_preference': creator_dashboard_display_pref,\n 'threads_for_created_suggestions_list': (\n threads_linked_to_suggestions_by_user),\n 'threads_for_suggestions_to_review_list': (\n threads_linked_to_suggestions_which_can_be_reviewed),\n 'created_suggestions_list': suggestion_dicts_created_by_user,\n 'suggestions_to_review_list': suggestion_dicts_which_can_be_reviewed\n })\n if constants.ENABLE_NEW_STRUCTURE_PLAYERS:\n self.values.update({\n 'topic_summary_dicts': topic_summary_dicts\n })\n else:\n self.values.update({\n 'topic_summary_dicts': []\n })\n\n self.render_json(self.values)\n\n @acl_decorators.can_access_creator_dashboard\n def post(self):\n creator_dashboard_display_pref = self.payload.get('display_preference')\n user_services.update_user_creator_dashboard_display(\n self.user_id, creator_dashboard_display_pref)\n self.render_json({})\n\n\nclass NotificationsHandler(base.BaseHandler):\n \"\"\"Provides data about unseen notifications.\"\"\"\n\n GET_HANDLER_ERROR_RETURN_TYPE = feconf.HANDLER_TYPE_JSON\n\n @acl_decorators.can_access_creator_dashboard\n def get(self):\n \"\"\"Handles GET requests.\"\"\"\n num_unseen_notifications = 0\n last_seen_msec = (\n subscription_services.get_last_seen_notifications_msec(\n self.user_id))\n _, recent_notifications = (\n user_jobs_continuous.DashboardRecentUpdatesAggregator\n .get_recent_user_changes(self.user_id))\n for notification in recent_notifications:\n if (notification['last_updated_ms'] > last_seen_msec and\n notification['author_id'] != self.user_id):\n num_unseen_notifications += 1\n\n self.render_json({\n 'num_unseen_notifications': num_unseen_notifications,\n })\n\n\nclass NewExplorationHandler(base.BaseHandler):\n \"\"\"Creates a new exploration.\"\"\"\n\n @acl_decorators.can_create_exploration\n def post(self):\n \"\"\"Handles POST requests.\"\"\"\n title = self.payload.get('title', feconf.DEFAULT_EXPLORATION_TITLE)\n\n new_exploration_id = exp_fetchers.get_new_exploration_id()\n exploration = exp_domain.Exploration.create_default_exploration(\n new_exploration_id, title=title)\n exp_services.save_new_exploration(self.user_id, exploration)\n\n self.render_json({\n EXPLORATION_ID_KEY: new_exploration_id\n })\n\n\nclass NewCollectionHandler(base.BaseHandler):\n \"\"\"Creates a new collection.\"\"\"\n\n @acl_decorators.can_create_collection\n def post(self):\n \"\"\"Handles POST requests.\"\"\"\n new_collection_id = collection_services.get_new_collection_id()\n collection = collection_domain.Collection.create_default_collection(\n new_collection_id)\n collection_services.save_new_collection(self.user_id, collection)\n\n self.render_json({\n COLLECTION_ID_KEY: new_collection_id\n })\n\n\nclass UploadExplorationHandler(base.BaseHandler):\n \"\"\"Uploads a new exploration.\"\"\"\n\n @acl_decorators.can_upload_exploration\n def post(self):\n \"\"\"Handles POST requests.\"\"\"\n yaml_content = self.request.get('yaml_file')\n\n new_exploration_id = exp_fetchers.get_new_exploration_id()\n if constants.ALLOW_YAML_FILE_UPLOAD:\n exp_services.save_new_exploration_from_yaml_and_assets(\n self.user_id, yaml_content, new_exploration_id, [],\n strip_voiceovers=True)\n self.render_json({\n 
EXPLORATION_ID_KEY: new_exploration_id\n })\n else:\n raise self.InvalidInputException(\n 'This server does not allow file uploads.')\n", "path": "core/controllers/creator_dashboard.py"}]} |
gh_patches_debug_1181 | rasdani/github-patches | git_diff | agconti__cookiecutter-django-rest-177 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Update `IsOwnerOrReadOnly` permission docstring to be more clear
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `{{cookiecutter.github_repository_name}}/{{cookiecutter.app_name}}/users/permissions.py`
Content:
```
1 from rest_framework import permissions
2
3
4 class IsOwnerOrReadOnly(permissions.BasePermission):
5 """
6 Object-level permission to only allow owners of an object to edit it.
7 Assumes the model instance has an `owner` attribute.
8 """
9
10 def has_object_permission(self, request, view, obj):
11
12 if request.method in permissions.SAFE_METHODS:
13 return True
14
15 return obj == request.user
16
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/{{cookiecutter.github_repository_name}}/{{cookiecutter.app_name}}/users/permissions.py b/{{cookiecutter.github_repository_name}}/{{cookiecutter.app_name}}/users/permissions.py
--- a/{{cookiecutter.github_repository_name}}/{{cookiecutter.app_name}}/users/permissions.py
+++ b/{{cookiecutter.github_repository_name}}/{{cookiecutter.app_name}}/users/permissions.py
@@ -4,7 +4,6 @@
class IsOwnerOrReadOnly(permissions.BasePermission):
"""
Object-level permission to only allow owners of an object to edit it.
- Assumes the model instance has an `owner` attribute.
"""
def has_object_permission(self, request, view, obj):
| {"golden_diff": "diff --git a/{{cookiecutter.github_repository_name}}/{{cookiecutter.app_name}}/users/permissions.py b/{{cookiecutter.github_repository_name}}/{{cookiecutter.app_name}}/users/permissions.py\n--- a/{{cookiecutter.github_repository_name}}/{{cookiecutter.app_name}}/users/permissions.py\n+++ b/{{cookiecutter.github_repository_name}}/{{cookiecutter.app_name}}/users/permissions.py\n@@ -4,7 +4,6 @@\n class IsOwnerOrReadOnly(permissions.BasePermission):\n \"\"\"\n Object-level permission to only allow owners of an object to edit it.\n- Assumes the model instance has an `owner` attribute.\n \"\"\"\n \n def has_object_permission(self, request, view, obj):\n", "issue": "Update `IsOwnerOrReadOnly` permission docstring to be more clear\n\n", "before_files": [{"content": "from rest_framework import permissions\n\n\nclass IsOwnerOrReadOnly(permissions.BasePermission):\n \"\"\"\n Object-level permission to only allow owners of an object to edit it.\n Assumes the model instance has an `owner` attribute.\n \"\"\"\n\n def has_object_permission(self, request, view, obj):\n\n if request.method in permissions.SAFE_METHODS:\n return True\n\n return obj == request.user\n", "path": "{{cookiecutter.github_repository_name}}/{{cookiecutter.app_name}}/users/permissions.py"}], "after_files": [{"content": "from rest_framework import permissions\n\n\nclass IsOwnerOrReadOnly(permissions.BasePermission):\n \"\"\"\n Object-level permission to only allow owners of an object to edit it.\n \"\"\"\n\n def has_object_permission(self, request, view, obj):\n\n if request.method in permissions.SAFE_METHODS:\n return True\n\n return obj == request.user\n", "path": "{{cookiecutter.github_repository_name}}/{{cookiecutter.app_name}}/users/permissions.py"}]} |
gh_patches_debug_1182 | rasdani/github-patches | git_diff | adamchainz__django-mysql-502 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Officially support Django 2.1
Testing passes with 2.1; document and release if any changes are required.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # -*- encoding:utf-8 -*-
2 from __future__ import (
3 absolute_import, division, print_function, unicode_literals,
4 )
5
6 import codecs
7 import os
8 import re
9
10 from setuptools import find_packages, setup
11
12
13 def get_version(filename):
14 with codecs.open(filename, 'r', 'utf-8') as fp:
15 contents = fp.read()
16 return re.search(r"__version__ = ['\"]([^'\"]+)['\"]", contents).group(1)
17
18
19 version = get_version(os.path.join('django_mysql', '__init__.py'))
20
21
22 with codecs.open('README.rst', 'r', 'utf-8') as readme_file:
23 readme = readme_file.read()
24
25 with codecs.open('HISTORY.rst', 'r', 'utf-8') as history_file:
26 history = history_file.read().replace('.. :changelog:', '')
27
28 setup(
29 name='django-mysql',
30 version=version,
31 description="Extensions to Django for use with MySQL/MariaDB",
32 long_description=readme + '\n\n' + history,
33 author="Adam Johnson",
34 author_email='[email protected]',
35 url='https://github.com/adamchainz/django-mysql',
36 packages=find_packages(exclude=['tests', 'tests.*']),
37 include_package_data=True,
38 install_requires=[
39 'Django>=1.8',
40 ],
41 python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',
42 license="BSD",
43 zip_safe=False,
44 keywords=['Django', 'MySQL', 'MariaDB'],
45 classifiers=[
46 'Development Status :: 5 - Production/Stable',
47 'Framework :: Django',
48 'Framework :: Django :: 1.8',
49 'Framework :: Django :: 1.9',
50 'Framework :: Django :: 1.10',
51 'Framework :: Django :: 1.11',
52 'Framework :: Django :: 2.0',
53 'Intended Audience :: Developers',
54 'License :: OSI Approved :: BSD License',
55 'Natural Language :: English',
56 'Operating System :: OS Independent',
57 'Programming Language :: Python',
58 "Programming Language :: Python :: 2",
59 'Programming Language :: Python :: 2.7',
60 'Programming Language :: Python :: 3',
61 'Programming Language :: Python :: 3.5',
62 'Programming Language :: Python :: 3.6',
63 'Topic :: Database',
64 ],
65 )
66
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -50,6 +50,7 @@
'Framework :: Django :: 1.10',
'Framework :: Django :: 1.11',
'Framework :: Django :: 2.0',
+ 'Framework :: Django :: 2.1',
'Intended Audience :: Developers',
'License :: OSI Approved :: BSD License',
'Natural Language :: English',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -50,6 +50,7 @@\n 'Framework :: Django :: 1.10',\n 'Framework :: Django :: 1.11',\n 'Framework :: Django :: 2.0',\n+ 'Framework :: Django :: 2.1',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Natural Language :: English',\n", "issue": "Officially support Django 2.1\nTesting is passing with 2.1, document and release if any changes required.\n", "before_files": [{"content": "# -*- encoding:utf-8 -*-\nfrom __future__ import (\n absolute_import, division, print_function, unicode_literals,\n)\n\nimport codecs\nimport os\nimport re\n\nfrom setuptools import find_packages, setup\n\n\ndef get_version(filename):\n with codecs.open(filename, 'r', 'utf-8') as fp:\n contents = fp.read()\n return re.search(r\"__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", contents).group(1)\n\n\nversion = get_version(os.path.join('django_mysql', '__init__.py'))\n\n\nwith codecs.open('README.rst', 'r', 'utf-8') as readme_file:\n readme = readme_file.read()\n\nwith codecs.open('HISTORY.rst', 'r', 'utf-8') as history_file:\n history = history_file.read().replace('.. :changelog:', '')\n\nsetup(\n name='django-mysql',\n version=version,\n description=\"Extensions to Django for use with MySQL/MariaDB\",\n long_description=readme + '\\n\\n' + history,\n author=\"Adam Johnson\",\n author_email='[email protected]',\n url='https://github.com/adamchainz/django-mysql',\n packages=find_packages(exclude=['tests', 'tests.*']),\n include_package_data=True,\n install_requires=[\n 'Django>=1.8',\n ],\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',\n license=\"BSD\",\n zip_safe=False,\n keywords=['Django', 'MySQL', 'MariaDB'],\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Framework :: Django',\n 'Framework :: Django :: 1.8',\n 'Framework :: Django :: 1.9',\n 'Framework :: Django :: 1.10',\n 'Framework :: Django :: 1.11',\n 'Framework :: Django :: 2.0',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Natural Language :: English',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n \"Programming Language :: Python :: 2\",\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Topic :: Database',\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "# -*- encoding:utf-8 -*-\nfrom __future__ import (\n absolute_import, division, print_function, unicode_literals,\n)\n\nimport codecs\nimport os\nimport re\n\nfrom setuptools import find_packages, setup\n\n\ndef get_version(filename):\n with codecs.open(filename, 'r', 'utf-8') as fp:\n contents = fp.read()\n return re.search(r\"__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", contents).group(1)\n\n\nversion = get_version(os.path.join('django_mysql', '__init__.py'))\n\n\nwith codecs.open('README.rst', 'r', 'utf-8') as readme_file:\n readme = readme_file.read()\n\nwith codecs.open('HISTORY.rst', 'r', 'utf-8') as history_file:\n history = history_file.read().replace('.. 
:changelog:', '')\n\nsetup(\n name='django-mysql',\n version=version,\n description=\"Extensions to Django for use with MySQL/MariaDB\",\n long_description=readme + '\\n\\n' + history,\n author=\"Adam Johnson\",\n author_email='[email protected]',\n url='https://github.com/adamchainz/django-mysql',\n packages=find_packages(exclude=['tests', 'tests.*']),\n include_package_data=True,\n install_requires=[\n 'Django>=1.8',\n ],\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',\n license=\"BSD\",\n zip_safe=False,\n keywords=['Django', 'MySQL', 'MariaDB'],\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Framework :: Django',\n 'Framework :: Django :: 1.8',\n 'Framework :: Django :: 1.9',\n 'Framework :: Django :: 1.10',\n 'Framework :: Django :: 1.11',\n 'Framework :: Django :: 2.0',\n 'Framework :: Django :: 2.1',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Natural Language :: English',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n \"Programming Language :: Python :: 2\",\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Topic :: Database',\n ],\n)\n", "path": "setup.py"}]} |
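The change here is metadata-only: the new trove classifier advertises Django 2.1 support on PyPI, while the real compatibility constraint stays at `Django>=1.8` in `install_requires`. A trimmed sketch of the patched `setup()` call follows, with unrelated arguments omitted and a placeholder version string, just to show where the one-line addition lands.

```python
from setuptools import setup

setup(
    name='django-mysql',
    version='0.0.0.dev0',              # placeholder; the real value is parsed from __init__.py
    install_requires=['Django>=1.8'],  # unchanged: classifiers advertise support, they do not constrain it
    classifiers=[
        'Framework :: Django',
        'Framework :: Django :: 1.8',
        'Framework :: Django :: 1.9',
        'Framework :: Django :: 1.10',
        'Framework :: Django :: 1.11',
        'Framework :: Django :: 2.0',
        'Framework :: Django :: 2.1',  # the single line added by the golden diff
        # ... remaining classifiers unchanged ...
    ],
)
```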
gh_patches_debug_1183 | rasdani/github-patches | git_diff | searx__searx-2256 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Make secret_key default one that will fail if not set to a custom value
Currently, the `secret_key` default value is `ultrasecretkey`, which is a valid value. Would it not be better to let the default value of this setting be one that will make searx fail to start? This will force the user to consciously change this setting to a secure value instead of accidentally forgetting to set it to something random and secure.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `searx/__init__.py`
Content:
```
1 '''
2 searx is free software: you can redistribute it and/or modify
3 it under the terms of the GNU Affero General Public License as published by
4 the Free Software Foundation, either version 3 of the License, or
5 (at your option) any later version.
6
7 searx is distributed in the hope that it will be useful,
8 but WITHOUT ANY WARRANTY; without even the implied warranty of
9 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
10 GNU Affero General Public License for more details.
11
12 You should have received a copy of the GNU Affero General Public License
13 along with searx. If not, see < http://www.gnu.org/licenses/ >.
14
15 (C) 2013- by Adam Tauber, <[email protected]>
16 '''
17
18 import logging
19 from os import environ
20 from os.path import realpath, dirname, join, abspath, isfile
21 from io import open
22 from yaml import safe_load
23
24
25 searx_dir = abspath(dirname(__file__))
26 engine_dir = dirname(realpath(__file__))
27 static_path = abspath(join(dirname(__file__), 'static'))
28
29
30 def check_settings_yml(file_name):
31 if isfile(file_name):
32 return file_name
33 else:
34 return None
35
36
37 # find location of settings.yml
38 if 'SEARX_SETTINGS_PATH' in environ:
39 # if possible set path to settings using the
40 # enviroment variable SEARX_SETTINGS_PATH
41 settings_path = check_settings_yml(environ['SEARX_SETTINGS_PATH'])
42 else:
43 # if not, get it from searx code base or last solution from /etc/searx
44 settings_path = check_settings_yml(join(searx_dir, 'settings.yml')) or check_settings_yml('/etc/searx/settings.yml')
45
46 if not settings_path:
47 raise Exception('settings.yml not found')
48
49 # load settings
50 with open(settings_path, 'r', encoding='utf-8') as settings_yaml:
51 settings = safe_load(settings_yaml)
52
53 if settings['ui']['static_path']:
54 static_path = settings['ui']['static_path']
55
56 '''
57 enable debug if
58 the environnement variable SEARX_DEBUG is 1 or true
59 (whatever the value in settings.yml)
60 or general.debug=True in settings.yml
61
62 disable debug if
63 the environnement variable SEARX_DEBUG is 0 or false
64 (whatever the value in settings.yml)
65 or general.debug=False in settings.yml
66 '''
67 searx_debug_env = environ.get('SEARX_DEBUG', '').lower()
68 if searx_debug_env == 'true' or searx_debug_env == '1':
69 searx_debug = True
70 elif searx_debug_env == 'false' or searx_debug_env == '0':
71 searx_debug = False
72 else:
73 searx_debug = settings.get('general', {}).get('debug')
74
75 if searx_debug:
76 logging.basicConfig(level=logging.DEBUG)
77 else:
78 logging.basicConfig(level=logging.WARNING)
79
80 logger = logging.getLogger('searx')
81 logger.debug('read configuration from %s', settings_path)
82 logger.info('Initialisation done')
83
84 if 'SEARX_SECRET' in environ:
85 settings['server']['secret_key'] = environ['SEARX_SECRET']
86 if 'SEARX_BIND_ADDRESS' in environ:
87 settings['server']['bind_address'] = environ['SEARX_BIND_ADDRESS']
88
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/searx/__init__.py b/searx/__init__.py
--- a/searx/__init__.py
+++ b/searx/__init__.py
@@ -85,3 +85,7 @@
settings['server']['secret_key'] = environ['SEARX_SECRET']
if 'SEARX_BIND_ADDRESS' in environ:
settings['server']['bind_address'] = environ['SEARX_BIND_ADDRESS']
+
+if not searx_debug and settings['server']['secret_key'] == 'ultrasecretkey':
+ logger.error('server.secret_key is not changed. Please use something else instead of ultrasecretkey.')
+ exit(1)
| {"golden_diff": "diff --git a/searx/__init__.py b/searx/__init__.py\n--- a/searx/__init__.py\n+++ b/searx/__init__.py\n@@ -85,3 +85,7 @@\n settings['server']['secret_key'] = environ['SEARX_SECRET']\n if 'SEARX_BIND_ADDRESS' in environ:\n settings['server']['bind_address'] = environ['SEARX_BIND_ADDRESS']\n+\n+if not searx_debug and settings['server']['secret_key'] == 'ultrasecretkey':\n+ logger.error('server.secret_key is not changed. Please use something else instead of ultrasecretkey.')\n+ exit(1)\n", "issue": "Make secret_key default one that will fail if not set to a custom value\nCurrently, the `secret_key` default value is `ultrasecretkey` which is a valid value. Would it not be better to let the default value of this setting be one that will make searx fail to start? This will force the user to conciously change this setting to a secure value instead of accidentally forgetting to set this to something random and secure.\n", "before_files": [{"content": "'''\nsearx is free software: you can redistribute it and/or modify\nit under the terms of the GNU Affero General Public License as published by\nthe Free Software Foundation, either version 3 of the License, or\n(at your option) any later version.\n\nsearx is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\nGNU Affero General Public License for more details.\n\nYou should have received a copy of the GNU Affero General Public License\nalong with searx. If not, see < http://www.gnu.org/licenses/ >.\n\n(C) 2013- by Adam Tauber, <[email protected]>\n'''\n\nimport logging\nfrom os import environ\nfrom os.path import realpath, dirname, join, abspath, isfile\nfrom io import open\nfrom yaml import safe_load\n\n\nsearx_dir = abspath(dirname(__file__))\nengine_dir = dirname(realpath(__file__))\nstatic_path = abspath(join(dirname(__file__), 'static'))\n\n\ndef check_settings_yml(file_name):\n if isfile(file_name):\n return file_name\n else:\n return None\n\n\n# find location of settings.yml\nif 'SEARX_SETTINGS_PATH' in environ:\n # if possible set path to settings using the\n # enviroment variable SEARX_SETTINGS_PATH\n settings_path = check_settings_yml(environ['SEARX_SETTINGS_PATH'])\nelse:\n # if not, get it from searx code base or last solution from /etc/searx\n settings_path = check_settings_yml(join(searx_dir, 'settings.yml')) or check_settings_yml('/etc/searx/settings.yml')\n\nif not settings_path:\n raise Exception('settings.yml not found')\n\n# load settings\nwith open(settings_path, 'r', encoding='utf-8') as settings_yaml:\n settings = safe_load(settings_yaml)\n\nif settings['ui']['static_path']:\n static_path = settings['ui']['static_path']\n\n'''\nenable debug if\nthe environnement variable SEARX_DEBUG is 1 or true\n(whatever the value in settings.yml)\nor general.debug=True in settings.yml\n\ndisable debug if\nthe environnement variable SEARX_DEBUG is 0 or false\n(whatever the value in settings.yml)\nor general.debug=False in settings.yml\n'''\nsearx_debug_env = environ.get('SEARX_DEBUG', '').lower()\nif searx_debug_env == 'true' or searx_debug_env == '1':\n searx_debug = True\nelif searx_debug_env == 'false' or searx_debug_env == '0':\n searx_debug = False\nelse:\n searx_debug = settings.get('general', {}).get('debug')\n\nif searx_debug:\n logging.basicConfig(level=logging.DEBUG)\nelse:\n logging.basicConfig(level=logging.WARNING)\n\nlogger = logging.getLogger('searx')\nlogger.debug('read configuration from 
%s', settings_path)\nlogger.info('Initialisation done')\n\nif 'SEARX_SECRET' in environ:\n settings['server']['secret_key'] = environ['SEARX_SECRET']\nif 'SEARX_BIND_ADDRESS' in environ:\n settings['server']['bind_address'] = environ['SEARX_BIND_ADDRESS']\n", "path": "searx/__init__.py"}], "after_files": [{"content": "'''\nsearx is free software: you can redistribute it and/or modify\nit under the terms of the GNU Affero General Public License as published by\nthe Free Software Foundation, either version 3 of the License, or\n(at your option) any later version.\n\nsearx is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\nGNU Affero General Public License for more details.\n\nYou should have received a copy of the GNU Affero General Public License\nalong with searx. If not, see < http://www.gnu.org/licenses/ >.\n\n(C) 2013- by Adam Tauber, <[email protected]>\n'''\n\nimport logging\nfrom os import environ\nfrom os.path import realpath, dirname, join, abspath, isfile\nfrom io import open\nfrom yaml import safe_load\n\n\nsearx_dir = abspath(dirname(__file__))\nengine_dir = dirname(realpath(__file__))\nstatic_path = abspath(join(dirname(__file__), 'static'))\n\n\ndef check_settings_yml(file_name):\n if isfile(file_name):\n return file_name\n else:\n return None\n\n\n# find location of settings.yml\nif 'SEARX_SETTINGS_PATH' in environ:\n # if possible set path to settings using the\n # enviroment variable SEARX_SETTINGS_PATH\n settings_path = check_settings_yml(environ['SEARX_SETTINGS_PATH'])\nelse:\n # if not, get it from searx code base or last solution from /etc/searx\n settings_path = check_settings_yml(join(searx_dir, 'settings.yml')) or check_settings_yml('/etc/searx/settings.yml')\n\nif not settings_path:\n raise Exception('settings.yml not found')\n\n# load settings\nwith open(settings_path, 'r', encoding='utf-8') as settings_yaml:\n settings = safe_load(settings_yaml)\n\nif settings['ui']['static_path']:\n static_path = settings['ui']['static_path']\n\n'''\nenable debug if\nthe environnement variable SEARX_DEBUG is 1 or true\n(whatever the value in settings.yml)\nor general.debug=True in settings.yml\n\ndisable debug if\nthe environnement variable SEARX_DEBUG is 0 or false\n(whatever the value in settings.yml)\nor general.debug=False in settings.yml\n'''\nsearx_debug_env = environ.get('SEARX_DEBUG', '').lower()\nif searx_debug_env == 'true' or searx_debug_env == '1':\n searx_debug = True\nelif searx_debug_env == 'false' or searx_debug_env == '0':\n searx_debug = False\nelse:\n searx_debug = settings.get('general', {}).get('debug')\n\nif searx_debug:\n logging.basicConfig(level=logging.DEBUG)\nelse:\n logging.basicConfig(level=logging.WARNING)\n\nlogger = logging.getLogger('searx')\nlogger.debug('read configuration from %s', settings_path)\nlogger.info('Initialisation done')\n\nif 'SEARX_SECRET' in environ:\n settings['server']['secret_key'] = environ['SEARX_SECRET']\nif 'SEARX_BIND_ADDRESS' in environ:\n settings['server']['bind_address'] = environ['SEARX_BIND_ADDRESS']\n\nif not searx_debug and settings['server']['secret_key'] == 'ultrasecretkey':\n logger.error('server.secret_key is not changed. Please use something else instead of ultrasecretkey.')\n exit(1)\n", "path": "searx/__init__.py"}]} |
gh_patches_debug_1184 | rasdani/github-patches | git_diff | aws__aws-cli-577 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
typo in s3api list-objects documentation
The documentation for the s3api list-objects --max-items parameter says that a `NextMarker` will be provided, while the --starting-token parameter refers to this as `NextToken`, which is the actual name of the returned token in the JSON.
So in short I think that the `NextMarker` should really say `NextToken` to prevent any confusion.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `awscli/customizations/paginate.py`
Content:
```
1 # Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"). You
4 # may not use this file except in compliance with the License. A copy of
5 # the License is located at
6 #
7 # http://aws.amazon.com/apache2.0/
8 #
9 # or in the "license" file accompanying this file. This file is
10 # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific
12 # language governing permissions and limitations under the License.
13 """This module has customizations to unify paging paramters.
14
15 For any operation that can be paginated, we will:
16
17 * Remove the service specific pagination params. This can vary across
18 services and we're going to replace them with a consistent set of
19 arguments.
20 * Add a ``--starting-token`` and a ``--max-items`` argument.
21
22 """
23 import logging
24
25 from awscli.arguments import BaseCLIArgument
26 from botocore.parameters import StringParameter
27
28 logger = logging.getLogger(__name__)
29
30
31 STARTING_TOKEN_HELP = """
32 <p>A token to specify where to start paginating. This is the
33 <code>NextToken</code> from a previously truncated response.</p>
34 """
35
36 MAX_ITEMS_HELP = """
37 <p>The total number of items to return. If the total number
38 of items available is more than the value specified in
39 max-items then a <code>NextMarker</code> will
40 be provided in the output that you can use to resume pagination.
41 """
42
43
44 def unify_paging_params(argument_table, operation, **kwargs):
45 if not operation.can_paginate:
46 # We only apply these customizations to paginated responses.
47 return
48 logger.debug("Modifying paging parameters for operation: %s", operation)
49 _remove_existing_paging_arguments(argument_table, operation)
50 argument_table['starting-token'] = PageArgument('starting-token',
51 STARTING_TOKEN_HELP,
52 operation,
53 parse_type='string')
54 argument_table['max-items'] = PageArgument('max-items', MAX_ITEMS_HELP,
55 operation, parse_type='integer')
56
57
58 def _remove_existing_paging_arguments(argument_table, operation):
59 tokens = _get_input_tokens(operation)
60 for token_name in tokens:
61 cli_name = _get_cli_name(operation.params, token_name)
62 del argument_table[cli_name]
63 if 'limit_key' in operation.pagination:
64 key_name = operation.pagination['limit_key']
65 cli_name = _get_cli_name(operation.params, key_name)
66 del argument_table[cli_name]
67
68
69 def _get_input_tokens(operation):
70 config = operation.pagination
71 tokens = config['input_token']
72 if not isinstance(tokens, list):
73 return [tokens]
74 return tokens
75
76
77 def _get_cli_name(param_objects, token_name):
78 for param in param_objects:
79 if param.name == token_name:
80 return param.cli_name.lstrip('-')
81
82
83 class PageArgument(BaseCLIArgument):
84 type_map = {
85 'string': str,
86 'integer': int,
87 }
88
89 def __init__(self, name, documentation, operation, parse_type):
90 param = StringParameter(operation, name=name, type=parse_type)
91 self._name = name
92 self.argument_object = param
93 self._name = name
94 self._documentation = documentation
95 self._parse_type = parse_type
96
97 @property
98 def cli_name(self):
99 return '--' + self._name
100
101 @property
102 def cli_type_name(self):
103 return self._parse_type
104
105 @property
106 def required(self):
107 return False
108
109 @property
110 def documentation(self):
111 return self._documentation
112
113 def add_to_parser(self, parser):
114 parser.add_argument(self.cli_name, dest=self.py_name,
115 type=self.type_map[self._parse_type])
116
117 def add_to_params(self, parameters, value):
118 if value is not None:
119 parameters[self.py_name] = value
120
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/awscli/customizations/paginate.py b/awscli/customizations/paginate.py
--- a/awscli/customizations/paginate.py
+++ b/awscli/customizations/paginate.py
@@ -36,7 +36,7 @@
MAX_ITEMS_HELP = """
<p>The total number of items to return. If the total number
of items available is more than the value specified in
-max-items then a <code>NextMarker</code> will
+max-items then a <code>NextToken</code> will
be provided in the output that you can use to resume pagination.
"""
| {"golden_diff": "diff --git a/awscli/customizations/paginate.py b/awscli/customizations/paginate.py\n--- a/awscli/customizations/paginate.py\n+++ b/awscli/customizations/paginate.py\n@@ -36,7 +36,7 @@\n MAX_ITEMS_HELP = \"\"\"\n <p>The total number of items to return. If the total number\n of items available is more than the value specified in\n-max-items then a <code>NextMarker</code> will\n+max-items then a <code>NextToken</code> will\n be provided in the output that you can use to resume pagination.\n \"\"\"\n", "issue": "typo in s3api list-objects documentation\nThe documentation for the s3api list-objects --max-items parameter says that a `NextMarker` will be provided, while the --starting-token parameter refers to this as `NextToken` which is the actual name of the returned token in JSON.\n\nSo in short I think that the `NextMarker` should really say `NextToken` to prevent any confusion.\n\n", "before_files": [{"content": "# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific\n# language governing permissions and limitations under the License.\n\"\"\"This module has customizations to unify paging paramters.\n\nFor any operation that can be paginated, we will:\n\n * Remove the service specific pagination params. This can vary across\n services and we're going to replace them with a consistent set of\n arguments.\n * Add a ``--starting-token`` and a ``--max-items`` argument.\n\n\"\"\"\nimport logging\n\nfrom awscli.arguments import BaseCLIArgument\nfrom botocore.parameters import StringParameter\n\nlogger = logging.getLogger(__name__)\n\n\nSTARTING_TOKEN_HELP = \"\"\"\n<p>A token to specify where to start paginating. This is the\n<code>NextToken</code> from a previously truncated response.</p>\n\"\"\"\n\nMAX_ITEMS_HELP = \"\"\"\n<p>The total number of items to return. 
If the total number\nof items available is more than the value specified in\nmax-items then a <code>NextMarker</code> will\nbe provided in the output that you can use to resume pagination.\n\"\"\"\n\n\ndef unify_paging_params(argument_table, operation, **kwargs):\n if not operation.can_paginate:\n # We only apply these customizations to paginated responses.\n return\n logger.debug(\"Modifying paging parameters for operation: %s\", operation)\n _remove_existing_paging_arguments(argument_table, operation)\n argument_table['starting-token'] = PageArgument('starting-token',\n STARTING_TOKEN_HELP,\n operation,\n parse_type='string')\n argument_table['max-items'] = PageArgument('max-items', MAX_ITEMS_HELP,\n operation, parse_type='integer')\n\n\ndef _remove_existing_paging_arguments(argument_table, operation):\n tokens = _get_input_tokens(operation)\n for token_name in tokens:\n cli_name = _get_cli_name(operation.params, token_name)\n del argument_table[cli_name]\n if 'limit_key' in operation.pagination:\n key_name = operation.pagination['limit_key']\n cli_name = _get_cli_name(operation.params, key_name)\n del argument_table[cli_name]\n\n\ndef _get_input_tokens(operation):\n config = operation.pagination\n tokens = config['input_token']\n if not isinstance(tokens, list):\n return [tokens]\n return tokens\n\n\ndef _get_cli_name(param_objects, token_name):\n for param in param_objects:\n if param.name == token_name:\n return param.cli_name.lstrip('-')\n\n\nclass PageArgument(BaseCLIArgument):\n type_map = {\n 'string': str,\n 'integer': int,\n }\n\n def __init__(self, name, documentation, operation, parse_type):\n param = StringParameter(operation, name=name, type=parse_type)\n self._name = name\n self.argument_object = param\n self._name = name\n self._documentation = documentation\n self._parse_type = parse_type\n\n @property\n def cli_name(self):\n return '--' + self._name\n\n @property\n def cli_type_name(self):\n return self._parse_type\n\n @property\n def required(self):\n return False\n\n @property\n def documentation(self):\n return self._documentation\n\n def add_to_parser(self, parser):\n parser.add_argument(self.cli_name, dest=self.py_name,\n type=self.type_map[self._parse_type])\n\n def add_to_params(self, parameters, value):\n if value is not None:\n parameters[self.py_name] = value\n", "path": "awscli/customizations/paginate.py"}], "after_files": [{"content": "# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific\n# language governing permissions and limitations under the License.\n\"\"\"This module has customizations to unify paging paramters.\n\nFor any operation that can be paginated, we will:\n\n * Remove the service specific pagination params. 
This can vary across\n services and we're going to replace them with a consistent set of\n arguments.\n * Add a ``--starting-token`` and a ``--max-items`` argument.\n\n\"\"\"\nimport logging\n\nfrom awscli.arguments import BaseCLIArgument\nfrom botocore.parameters import StringParameter\n\nlogger = logging.getLogger(__name__)\n\n\nSTARTING_TOKEN_HELP = \"\"\"\n<p>A token to specify where to start paginating. This is the\n<code>NextToken</code> from a previously truncated response.</p>\n\"\"\"\n\nMAX_ITEMS_HELP = \"\"\"\n<p>The total number of items to return. If the total number\nof items available is more than the value specified in\nmax-items then a <code>NextToken</code> will\nbe provided in the output that you can use to resume pagination.\n\"\"\"\n\n\ndef unify_paging_params(argument_table, operation, **kwargs):\n if not operation.can_paginate:\n # We only apply these customizations to paginated responses.\n return\n logger.debug(\"Modifying paging parameters for operation: %s\", operation)\n _remove_existing_paging_arguments(argument_table, operation)\n argument_table['starting-token'] = PageArgument('starting-token',\n STARTING_TOKEN_HELP,\n operation,\n parse_type='string')\n argument_table['max-items'] = PageArgument('max-items', MAX_ITEMS_HELP,\n operation, parse_type='integer')\n\n\ndef _remove_existing_paging_arguments(argument_table, operation):\n tokens = _get_input_tokens(operation)\n for token_name in tokens:\n cli_name = _get_cli_name(operation.params, token_name)\n del argument_table[cli_name]\n if 'limit_key' in operation.pagination:\n key_name = operation.pagination['limit_key']\n cli_name = _get_cli_name(operation.params, key_name)\n del argument_table[cli_name]\n\n\ndef _get_input_tokens(operation):\n config = operation.pagination\n tokens = config['input_token']\n if not isinstance(tokens, list):\n return [tokens]\n return tokens\n\n\ndef _get_cli_name(param_objects, token_name):\n for param in param_objects:\n if param.name == token_name:\n return param.cli_name.lstrip('-')\n\n\nclass PageArgument(BaseCLIArgument):\n type_map = {\n 'string': str,\n 'integer': int,\n }\n\n def __init__(self, name, documentation, operation, parse_type):\n param = StringParameter(operation, name=name, type=parse_type)\n self._name = name\n self.argument_object = param\n self._name = name\n self._documentation = documentation\n self._parse_type = parse_type\n\n @property\n def cli_name(self):\n return '--' + self._name\n\n @property\n def cli_type_name(self):\n return self._parse_type\n\n @property\n def required(self):\n return False\n\n @property\n def documentation(self):\n return self._documentation\n\n def add_to_parser(self, parser):\n parser.add_argument(self.cli_name, dest=self.py_name,\n type=self.type_map[self._parse_type])\n\n def add_to_params(self, parameters, value):\n if value is not None:\n parameters[self.py_name] = value\n", "path": "awscli/customizations/paginate.py"}]} |
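To make the fix concrete: after `unify_paging_params` strips the service-specific marker parameters, every paginated operation exposes the same `--starting-token` and `--max-items` flags, and the resume value in a truncated JSON response is always named `NextToken`, which is why the `NextMarker` wording in the help text was wrong. The two constants as they read after the patch (copied from the patched module):

```python
# Help strings in awscli/customizations/paginate.py after the fix.
STARTING_TOKEN_HELP = """
<p>A token to specify where to start paginating. This is the
<code>NextToken</code> from a previously truncated response.</p>
"""

MAX_ITEMS_HELP = """
<p>The total number of items to return. If the total number
of items available is more than the value specified in
max-items then a <code>NextToken</code> will
be provided in the output that you can use to resume pagination.
"""
```

In practice a truncated `aws s3api list-objects` call is resumed by passing the response's `NextToken` value back through `--starting-token`.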
gh_patches_debug_1185 | rasdani/github-patches | git_diff | dmlc__dgl-413 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fixing random seeds does not yield deterministic results
- [Related discussion thread](https://discuss.dgl.ai/t/how-to-fix-random-seeds-in-dgl-in-training/65/3)
It has been observed by Erutan-pku, @hbsun2113, me, and @yzh119 that after fixing the NumPy random seed, fixing the PyTorch random seed, and setting `cudnn.deterministic=True`, we still get different results across runs. There can be two possibilities:
1. This is a bug.
2. Some system optimization involves randomness, in which case we should provide APIs for users to fix results.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `python/dgl/data/citation_graph.py`
Content:
```
1 """Cora, citeseer, pubmed dataset.
2
3 (lingfan): following dataset loading and preprocessing code from tkipf/gcn
4 https://github.com/tkipf/gcn/blob/master/gcn/utils.py
5 """
6 from __future__ import absolute_import
7
8 import numpy as np
9 import pickle as pkl
10 import networkx as nx
11 import scipy.sparse as sp
12 import os, sys
13
14 import dgl
15 from .utils import download, extract_archive, get_download_dir, _get_dgl_url
16
17 _urls = {
18 'cora' : 'dataset/cora_raw.zip',
19 'citeseer' : 'dataset/citeseer.zip',
20 'pubmed' : 'dataset/pubmed.zip',
21 'cora_binary' : 'dataset/cora_binary.zip',
22 }
23
24 def _pickle_load(pkl_file):
25 if sys.version_info > (3, 0):
26 return pkl.load(pkl_file, encoding='latin1')
27 else:
28 return pkl.load(pkl_file)
29
30 class CitationGraphDataset(object):
31 def __init__(self, name):
32 self.name = name
33 self.dir = get_download_dir()
34 self.zip_file_path='{}/{}.zip'.format(self.dir, name)
35 download(_get_dgl_url(_urls[name]), path=self.zip_file_path)
36 extract_archive(self.zip_file_path, '{}/{}'.format(self.dir, name))
37 self._load()
38
39 def _load(self):
40 """Loads input data from gcn/data directory
41
42 ind.name.x => the feature vectors of the training instances as scipy.sparse.csr.csr_matrix object;
43 ind.name.tx => the feature vectors of the test instances as scipy.sparse.csr.csr_matrix object;
44 ind.name.allx => the feature vectors of both labeled and unlabeled training instances
45 (a superset of ind.name.x) as scipy.sparse.csr.csr_matrix object;
46 ind.name.y => the one-hot labels of the labeled training instances as numpy.ndarray object;
47 ind.name.ty => the one-hot labels of the test instances as numpy.ndarray object;
48 ind.name.ally => the labels for instances in ind.name.allx as numpy.ndarray object;
49 ind.name.graph => a dict in the format {index: [index_of_neighbor_nodes]} as collections.defaultdict
50 object;
51 ind.name.test.index => the indices of test instances in graph, for the inductive setting as list object.
52
53 All objects above must be saved using python pickle module.
54
55 :param name: Dataset name
56 :return: All data input files loaded (as well the training/test data).
57 """
58 root = '{}/{}'.format(self.dir, self.name)
59 objnames = ['x', 'y', 'tx', 'ty', 'allx', 'ally', 'graph']
60 objects = []
61 for i in range(len(objnames)):
62 with open("{}/ind.{}.{}".format(root, self.name, objnames[i]), 'rb') as f:
63 objects.append(_pickle_load(f))
64
65 x, y, tx, ty, allx, ally, graph = tuple(objects)
66 test_idx_reorder = _parse_index_file("{}/ind.{}.test.index".format(root, self.name))
67 test_idx_range = np.sort(test_idx_reorder)
68
69 if self.name == 'citeseer':
70 # Fix citeseer dataset (there are some isolated nodes in the graph)
71 # Find isolated nodes, add them as zero-vecs into the right position
72 test_idx_range_full = range(min(test_idx_reorder), max(test_idx_reorder)+1)
73 tx_extended = sp.lil_matrix((len(test_idx_range_full), x.shape[1]))
74 tx_extended[test_idx_range-min(test_idx_range), :] = tx
75 tx = tx_extended
76 ty_extended = np.zeros((len(test_idx_range_full), y.shape[1]))
77 ty_extended[test_idx_range-min(test_idx_range), :] = ty
78 ty = ty_extended
79
80 features = sp.vstack((allx, tx)).tolil()
81 features[test_idx_reorder, :] = features[test_idx_range, :]
82 graph = nx.DiGraph(nx.from_dict_of_lists(graph))
83
84 onehot_labels = np.vstack((ally, ty))
85 onehot_labels[test_idx_reorder, :] = onehot_labels[test_idx_range, :]
86 labels = np.argmax(onehot_labels, 1)
87
88 idx_test = test_idx_range.tolist()
89 idx_train = range(len(y))
90 idx_val = range(len(y), len(y)+500)
91
92 train_mask = _sample_mask(idx_train, labels.shape[0])
93 val_mask = _sample_mask(idx_val, labels.shape[0])
94 test_mask = _sample_mask(idx_test, labels.shape[0])
95
96 self.graph = graph
97 self.features = _preprocess_features(features)
98 self.labels = labels
99 self.onehot_labels = onehot_labels
100 self.num_labels = onehot_labels.shape[1]
101 self.train_mask = train_mask
102 self.val_mask = val_mask
103 self.test_mask = test_mask
104
105 print('Finished data loading and preprocessing.')
106 print(' NumNodes: {}'.format(self.graph.number_of_nodes()))
107 print(' NumEdges: {}'.format(self.graph.number_of_edges()))
108 print(' NumFeats: {}'.format(self.features.shape[1]))
109 print(' NumClasses: {}'.format(self.num_labels))
110 print(' NumTrainingSamples: {}'.format(len(np.nonzero(self.train_mask)[0])))
111 print(' NumValidationSamples: {}'.format(len(np.nonzero(self.val_mask)[0])))
112 print(' NumTestSamples: {}'.format(len(np.nonzero(self.test_mask)[0])))
113
114 def __getitem__(self, idx):
115 return self
116
117 def __len__(self):
118 return 1
119
120 def _preprocess_features(features):
121 """Row-normalize feature matrix and convert to tuple representation"""
122 rowsum = np.array(features.sum(1))
123 r_inv = np.power(rowsum, -1).flatten()
124 r_inv[np.isinf(r_inv)] = 0.
125 r_mat_inv = sp.diags(r_inv)
126 features = r_mat_inv.dot(features)
127 return np.array(features.todense())
128
129 def _parse_index_file(filename):
130 """Parse index file."""
131 index = []
132 for line in open(filename):
133 index.append(int(line.strip()))
134 return index
135
136 def _sample_mask(idx, l):
137 """Create mask."""
138 mask = np.zeros(l)
139 mask[idx] = 1
140 return mask
141
142 def load_cora():
143 data = CoraDataset()
144 return data
145
146 def load_citeseer():
147 data = CitationGraphDataset('citeseer')
148 return data
149
150 def load_pubmed():
151 data = CitationGraphDataset('pubmed')
152 return data
153
154 class GCNSyntheticDataset(object):
155 def __init__(self,
156 graph_generator,
157 num_feats=500,
158 num_classes=10,
159 train_ratio=1.,
160 val_ratio=0.,
161 test_ratio=0.,
162 seed=None):
163 rng = np.random.RandomState(seed)
164 # generate graph
165 self.graph = graph_generator(seed)
166 num_nodes = self.graph.number_of_nodes()
167
168 # generate features
169 #self.features = rng.randn(num_nodes, num_feats).astype(np.float32)
170 self.features = np.zeros((num_nodes, num_feats), dtype=np.float32)
171
172 # generate labels
173 self.labels = rng.randint(num_classes, size=num_nodes)
174 onehot_labels = np.zeros((num_nodes, num_classes), dtype=np.float32)
175 onehot_labels[np.arange(num_nodes), self.labels] = 1.
176 self.onehot_labels = onehot_labels
177 self.num_labels = num_classes
178
179 # generate masks
180 ntrain = int(num_nodes * train_ratio)
181 nval = int(num_nodes * val_ratio)
182 ntest = int(num_nodes * test_ratio)
183 mask_array = np.zeros((num_nodes,), dtype=np.int32)
184 mask_array[0:ntrain] = 1
185 mask_array[ntrain:ntrain+nval] = 2
186 mask_array[ntrain+nval:ntrain+nval+ntest] = 3
187 rng.shuffle(mask_array)
188 self.train_mask = (mask_array == 1).astype(np.int32)
189 self.val_mask = (mask_array == 2).astype(np.int32)
190 self.test_mask = (mask_array == 3).astype(np.int32)
191
192 print('Finished synthetic dataset generation.')
193 print(' NumNodes: {}'.format(self.graph.number_of_nodes()))
194 print(' NumEdges: {}'.format(self.graph.number_of_edges()))
195 print(' NumFeats: {}'.format(self.features.shape[1]))
196 print(' NumClasses: {}'.format(self.num_labels))
197 print(' NumTrainingSamples: {}'.format(len(np.nonzero(self.train_mask)[0])))
198 print(' NumValidationSamples: {}'.format(len(np.nonzero(self.val_mask)[0])))
199 print(' NumTestSamples: {}'.format(len(np.nonzero(self.test_mask)[0])))
200
201 def __getitem__(self, idx):
202 return self
203
204 def __len__(self):
205 return 1
206
207 def get_gnp_generator(args):
208 n = args.syn_gnp_n
209 p = (2 * np.log(n) / n) if args.syn_gnp_p == 0. else args.syn_gnp_p
210 def _gen(seed):
211 return nx.fast_gnp_random_graph(n, p, seed, True)
212 return _gen
213
214 class ScipyGraph(object):
215 """A simple graph object that uses scipy matrix."""
216 def __init__(self, mat):
217 self._mat = mat
218
219 def get_graph(self):
220 return self._mat
221
222 def number_of_nodes(self):
223 return self._mat.shape[0]
224
225 def number_of_edges(self):
226 return self._mat.getnnz()
227
228 def get_scipy_generator(args):
229 n = args.syn_gnp_n
230 p = (2 * np.log(n) / n) if args.syn_gnp_p == 0. else args.syn_gnp_p
231 def _gen(seed):
232 return ScipyGraph(sp.random(n, n, p, format='coo'))
233 return _gen
234
235 def load_synthetic(args):
236 ty = args.syn_type
237 if ty == 'gnp':
238 gen = get_gnp_generator(args)
239 elif ty == 'scipy':
240 gen = get_scipy_generator(args)
241 else:
242 raise ValueError('Unknown graph generator type: {}'.format(ty))
243 return GCNSyntheticDataset(
244 gen,
245 args.syn_nfeats,
246 args.syn_nclasses,
247 args.syn_train_ratio,
248 args.syn_val_ratio,
249 args.syn_test_ratio,
250 args.syn_seed)
251
252 def register_args(parser):
253 # Args for synthetic graphs.
254 parser.add_argument('--syn-type', type=str, default='gnp',
255 help='Type of the synthetic graph generator')
256 parser.add_argument('--syn-nfeats', type=int, default=500,
257 help='Number of node features')
258 parser.add_argument('--syn-nclasses', type=int, default=10,
259 help='Number of output classes')
260 parser.add_argument('--syn-train-ratio', type=float, default=.1,
261 help='Ratio of training nodes')
262 parser.add_argument('--syn-val-ratio', type=float, default=.2,
263 help='Ratio of validation nodes')
264 parser.add_argument('--syn-test-ratio', type=float, default=.5,
265 help='Ratio of testing nodes')
266 # Args for GNP generator
267 parser.add_argument('--syn-gnp-n', type=int, default=1000,
268 help='n in gnp random graph')
269 parser.add_argument('--syn-gnp-p', type=float, default=0.0,
270 help='p in gnp random graph')
271 parser.add_argument('--syn-seed', type=int, default=42,
272 help='random seed')
273
274 class CoraBinary(object):
275 """A mini-dataset for binary classification task using Cora.
276
277 After loaded, it has following members:
278
279 graphs : list of :class:`~dgl.DGLGraph`
280 pmpds : list of :class:`scipy.sparse.coo_matrix`
281 labels : list of :class:`numpy.ndarray`
282 """
283 def __init__(self):
284 self.dir = get_download_dir()
285 self.name = 'cora_binary'
286 self.zip_file_path='{}/{}.zip'.format(self.dir, self.name)
287 download(_get_dgl_url(_urls[self.name]), path=self.zip_file_path)
288 extract_archive(self.zip_file_path, '{}/{}'.format(self.dir, self.name))
289 self._load()
290
291 def _load(self):
292 root = '{}/{}'.format(self.dir, self.name)
293 # load graphs
294 self.graphs = []
295 with open("{}/graphs.txt".format(root), 'r') as f:
296 elist = []
297 for line in f.readlines():
298 if line.startswith('graph'):
299 if len(elist) != 0:
300 self.graphs.append(dgl.DGLGraph(elist))
301 elist = []
302 else:
303 u, v = line.strip().split(' ')
304 elist.append((int(u), int(v)))
305 if len(elist) != 0:
306 self.graphs.append(dgl.DGLGraph(elist))
307 with open("{}/pmpds.pkl".format(root), 'rb') as f:
308 self.pmpds = _pickle_load(f)
309 self.labels = []
310 with open("{}/labels.txt".format(root), 'r') as f:
311 cur = []
312 for line in f.readlines():
313 if line.startswith('graph'):
314 if len(cur) != 0:
315 self.labels.append(np.array(cur))
316 cur = []
317 else:
318 cur.append(int(line.strip()))
319 if len(cur) != 0:
320 self.labels.append(np.array(cur))
321 # sanity check
322 assert len(self.graphs) == len(self.pmpds)
323 assert len(self.graphs) == len(self.labels)
324
325 def __len__(self):
326 return len(self.graphs)
327
328 def __getitem__(self, i):
329 return (self.graphs[i], self.pmpds[i], self.labels[i])
330
331 @staticmethod
332 def collate_fn(batch):
333 graphs, pmpds, labels = zip(*batch)
334 batched_graphs = dgl.batch(graphs)
335 batched_pmpds = sp.block_diag(pmpds)
336 batched_labels = np.concatenate(labels, axis=0)
337 return batched_graphs, batched_pmpds, batched_labels
338
339 class CoraDataset(object):
340 def __init__(self):
341 self.name = 'cora'
342 self.dir = get_download_dir()
343 self.zip_file_path='{}/{}.zip'.format(self.dir, self.name)
344 download(_get_dgl_url(_urls[self.name]), path=self.zip_file_path)
345 extract_archive(self.zip_file_path,
346 '{}/{}'.format(self.dir, self.name))
347 self._load()
348
349 def _load(self):
350 idx_features_labels = np.genfromtxt("{}/cora/cora.content".
351 format(self.dir),
352 dtype=np.dtype(str))
353 features = sp.csr_matrix(idx_features_labels[:, 1:-1],
354 dtype=np.float32)
355 labels = _encode_onehot(idx_features_labels[:, -1])
356 self.num_labels = labels.shape[1]
357
358 # build graph
359 idx = np.array(idx_features_labels[:, 0], dtype=np.int32)
360 idx_map = {j: i for i, j in enumerate(idx)}
361 edges_unordered = np.genfromtxt("{}/cora/cora.cites".format(self.dir),
362 dtype=np.int32)
363 edges = np.array(list(map(idx_map.get, edges_unordered.flatten())),
364 dtype=np.int32).reshape(edges_unordered.shape)
365 adj = sp.coo_matrix((np.ones(edges.shape[0]),
366 (edges[:, 0], edges[:, 1])),
367 shape=(labels.shape[0], labels.shape[0]),
368 dtype=np.float32)
369
370 # build symmetric adjacency matrix
371 adj = adj + adj.T.multiply(adj.T > adj) - adj.multiply(adj.T > adj)
372 self.graph = nx.from_scipy_sparse_matrix(adj, create_using=nx.DiGraph())
373
374 features = _normalize(features)
375 self.features = np.array(features.todense())
376 self.labels = np.where(labels)[1]
377
378 self.train_mask = _sample_mask(range(140), labels.shape[0])
379 self.val_mask = _sample_mask(range(200, 500), labels.shape[0])
380 self.test_mask = _sample_mask(range(500, 1500), labels.shape[0])
381
382 def _normalize(mx):
383 """Row-normalize sparse matrix"""
384 rowsum = np.array(mx.sum(1))
385 r_inv = np.power(rowsum, -1).flatten()
386 r_inv[np.isinf(r_inv)] = np.inf
387 r_mat_inv = sp.diags(r_inv)
388 mx = r_mat_inv.dot(mx)
389 return mx
390
391 def _encode_onehot(labels):
392 classes = set(labels)
393 classes_dict = {c: np.identity(len(classes))[i, :] for i, c in
394 enumerate(classes)}
395 labels_onehot = np.array(list(map(classes_dict.get, labels)),
396 dtype=np.int32)
397 return labels_onehot
398
399
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/python/dgl/data/citation_graph.py b/python/dgl/data/citation_graph.py
--- a/python/dgl/data/citation_graph.py
+++ b/python/dgl/data/citation_graph.py
@@ -389,7 +389,7 @@
return mx
def _encode_onehot(labels):
- classes = set(labels)
+ classes = list(sorted(set(labels)))
classes_dict = {c: np.identity(len(classes))[i, :] for i, c in
enumerate(classes)}
labels_onehot = np.array(list(map(classes_dict.get, labels)),
| {"golden_diff": "diff --git a/python/dgl/data/citation_graph.py b/python/dgl/data/citation_graph.py\n--- a/python/dgl/data/citation_graph.py\n+++ b/python/dgl/data/citation_graph.py\n@@ -389,7 +389,7 @@\n return mx\n \n def _encode_onehot(labels):\n- classes = set(labels)\n+ classes = list(sorted(set(labels)))\n classes_dict = {c: np.identity(len(classes))[i, :] for i, c in\n enumerate(classes)}\n labels_onehot = np.array(list(map(classes_dict.get, labels)),\n", "issue": "Fixing random seeds does not yield deterministic results\n- [Related discussion thread](https://discuss.dgl.ai/t/how-to-fix-random-seeds-in-dgl-in-training/65/3)\r\n\r\nIt has been observed by Erutan-pku, @hbsun2113 , me and @yzh119 that after fixing NumPy random seed, PyTorch random seed and `cudnn.deterministic=True` we still get different results across runs. There can be two possibilities:\r\n1. This is a bug.\r\n2. Some system optimization involves randomness, in which case we should provide APIs for users to fix results.\n", "before_files": [{"content": "\"\"\"Cora, citeseer, pubmed dataset.\n\n(lingfan): following dataset loading and preprocessing code from tkipf/gcn\nhttps://github.com/tkipf/gcn/blob/master/gcn/utils.py\n\"\"\"\nfrom __future__ import absolute_import\n\nimport numpy as np\nimport pickle as pkl\nimport networkx as nx\nimport scipy.sparse as sp\nimport os, sys\n\nimport dgl\nfrom .utils import download, extract_archive, get_download_dir, _get_dgl_url\n\n_urls = {\n 'cora' : 'dataset/cora_raw.zip',\n 'citeseer' : 'dataset/citeseer.zip',\n 'pubmed' : 'dataset/pubmed.zip',\n 'cora_binary' : 'dataset/cora_binary.zip',\n}\n\ndef _pickle_load(pkl_file):\n if sys.version_info > (3, 0):\n return pkl.load(pkl_file, encoding='latin1')\n else:\n return pkl.load(pkl_file)\n\nclass CitationGraphDataset(object):\n def __init__(self, name):\n self.name = name\n self.dir = get_download_dir()\n self.zip_file_path='{}/{}.zip'.format(self.dir, name)\n download(_get_dgl_url(_urls[name]), path=self.zip_file_path)\n extract_archive(self.zip_file_path, '{}/{}'.format(self.dir, name))\n self._load()\n\n def _load(self):\n \"\"\"Loads input data from gcn/data directory\n\n ind.name.x => the feature vectors of the training instances as scipy.sparse.csr.csr_matrix object;\n ind.name.tx => the feature vectors of the test instances as scipy.sparse.csr.csr_matrix object;\n ind.name.allx => the feature vectors of both labeled and unlabeled training instances\n (a superset of ind.name.x) as scipy.sparse.csr.csr_matrix object;\n ind.name.y => the one-hot labels of the labeled training instances as numpy.ndarray object;\n ind.name.ty => the one-hot labels of the test instances as numpy.ndarray object;\n ind.name.ally => the labels for instances in ind.name.allx as numpy.ndarray object;\n ind.name.graph => a dict in the format {index: [index_of_neighbor_nodes]} as collections.defaultdict\n object;\n ind.name.test.index => the indices of test instances in graph, for the inductive setting as list object.\n\n All objects above must be saved using python pickle module.\n\n :param name: Dataset name\n :return: All data input files loaded (as well the training/test data).\n \"\"\"\n root = '{}/{}'.format(self.dir, self.name)\n objnames = ['x', 'y', 'tx', 'ty', 'allx', 'ally', 'graph']\n objects = []\n for i in range(len(objnames)):\n with open(\"{}/ind.{}.{}\".format(root, self.name, objnames[i]), 'rb') as f:\n objects.append(_pickle_load(f))\n\n x, y, tx, ty, allx, ally, graph = tuple(objects)\n test_idx_reorder = 
_parse_index_file(\"{}/ind.{}.test.index\".format(root, self.name))\n test_idx_range = np.sort(test_idx_reorder)\n\n if self.name == 'citeseer':\n # Fix citeseer dataset (there are some isolated nodes in the graph)\n # Find isolated nodes, add them as zero-vecs into the right position\n test_idx_range_full = range(min(test_idx_reorder), max(test_idx_reorder)+1)\n tx_extended = sp.lil_matrix((len(test_idx_range_full), x.shape[1]))\n tx_extended[test_idx_range-min(test_idx_range), :] = tx\n tx = tx_extended\n ty_extended = np.zeros((len(test_idx_range_full), y.shape[1]))\n ty_extended[test_idx_range-min(test_idx_range), :] = ty\n ty = ty_extended\n\n features = sp.vstack((allx, tx)).tolil()\n features[test_idx_reorder, :] = features[test_idx_range, :]\n graph = nx.DiGraph(nx.from_dict_of_lists(graph))\n\n onehot_labels = np.vstack((ally, ty))\n onehot_labels[test_idx_reorder, :] = onehot_labels[test_idx_range, :]\n labels = np.argmax(onehot_labels, 1)\n\n idx_test = test_idx_range.tolist()\n idx_train = range(len(y))\n idx_val = range(len(y), len(y)+500)\n\n train_mask = _sample_mask(idx_train, labels.shape[0])\n val_mask = _sample_mask(idx_val, labels.shape[0])\n test_mask = _sample_mask(idx_test, labels.shape[0])\n\n self.graph = graph\n self.features = _preprocess_features(features)\n self.labels = labels\n self.onehot_labels = onehot_labels\n self.num_labels = onehot_labels.shape[1]\n self.train_mask = train_mask\n self.val_mask = val_mask\n self.test_mask = test_mask\n\n print('Finished data loading and preprocessing.')\n print(' NumNodes: {}'.format(self.graph.number_of_nodes()))\n print(' NumEdges: {}'.format(self.graph.number_of_edges()))\n print(' NumFeats: {}'.format(self.features.shape[1]))\n print(' NumClasses: {}'.format(self.num_labels))\n print(' NumTrainingSamples: {}'.format(len(np.nonzero(self.train_mask)[0])))\n print(' NumValidationSamples: {}'.format(len(np.nonzero(self.val_mask)[0])))\n print(' NumTestSamples: {}'.format(len(np.nonzero(self.test_mask)[0])))\n\n def __getitem__(self, idx):\n return self\n\n def __len__(self):\n return 1\n\ndef _preprocess_features(features):\n \"\"\"Row-normalize feature matrix and convert to tuple representation\"\"\"\n rowsum = np.array(features.sum(1))\n r_inv = np.power(rowsum, -1).flatten()\n r_inv[np.isinf(r_inv)] = 0.\n r_mat_inv = sp.diags(r_inv)\n features = r_mat_inv.dot(features)\n return np.array(features.todense())\n\ndef _parse_index_file(filename):\n \"\"\"Parse index file.\"\"\"\n index = []\n for line in open(filename):\n index.append(int(line.strip()))\n return index\n\ndef _sample_mask(idx, l):\n \"\"\"Create mask.\"\"\"\n mask = np.zeros(l)\n mask[idx] = 1\n return mask\n\ndef load_cora():\n data = CoraDataset()\n return data\n\ndef load_citeseer():\n data = CitationGraphDataset('citeseer')\n return data\n\ndef load_pubmed():\n data = CitationGraphDataset('pubmed')\n return data\n\nclass GCNSyntheticDataset(object):\n def __init__(self,\n graph_generator,\n num_feats=500,\n num_classes=10,\n train_ratio=1.,\n val_ratio=0.,\n test_ratio=0.,\n seed=None):\n rng = np.random.RandomState(seed)\n # generate graph\n self.graph = graph_generator(seed)\n num_nodes = self.graph.number_of_nodes()\n\n # generate features\n #self.features = rng.randn(num_nodes, num_feats).astype(np.float32)\n self.features = np.zeros((num_nodes, num_feats), dtype=np.float32)\n\n # generate labels\n self.labels = rng.randint(num_classes, size=num_nodes)\n onehot_labels = np.zeros((num_nodes, num_classes), dtype=np.float32)\n 
onehot_labels[np.arange(num_nodes), self.labels] = 1.\n self.onehot_labels = onehot_labels\n self.num_labels = num_classes\n\n # generate masks\n ntrain = int(num_nodes * train_ratio)\n nval = int(num_nodes * val_ratio)\n ntest = int(num_nodes * test_ratio)\n mask_array = np.zeros((num_nodes,), dtype=np.int32)\n mask_array[0:ntrain] = 1\n mask_array[ntrain:ntrain+nval] = 2\n mask_array[ntrain+nval:ntrain+nval+ntest] = 3\n rng.shuffle(mask_array)\n self.train_mask = (mask_array == 1).astype(np.int32)\n self.val_mask = (mask_array == 2).astype(np.int32)\n self.test_mask = (mask_array == 3).astype(np.int32)\n\n print('Finished synthetic dataset generation.')\n print(' NumNodes: {}'.format(self.graph.number_of_nodes()))\n print(' NumEdges: {}'.format(self.graph.number_of_edges()))\n print(' NumFeats: {}'.format(self.features.shape[1]))\n print(' NumClasses: {}'.format(self.num_labels))\n print(' NumTrainingSamples: {}'.format(len(np.nonzero(self.train_mask)[0])))\n print(' NumValidationSamples: {}'.format(len(np.nonzero(self.val_mask)[0])))\n print(' NumTestSamples: {}'.format(len(np.nonzero(self.test_mask)[0])))\n\n def __getitem__(self, idx):\n return self\n\n def __len__(self):\n return 1\n\ndef get_gnp_generator(args):\n n = args.syn_gnp_n\n p = (2 * np.log(n) / n) if args.syn_gnp_p == 0. else args.syn_gnp_p\n def _gen(seed):\n return nx.fast_gnp_random_graph(n, p, seed, True)\n return _gen\n\nclass ScipyGraph(object):\n \"\"\"A simple graph object that uses scipy matrix.\"\"\"\n def __init__(self, mat):\n self._mat = mat\n\n def get_graph(self):\n return self._mat\n\n def number_of_nodes(self):\n return self._mat.shape[0]\n\n def number_of_edges(self):\n return self._mat.getnnz()\n\ndef get_scipy_generator(args):\n n = args.syn_gnp_n\n p = (2 * np.log(n) / n) if args.syn_gnp_p == 0. 
else args.syn_gnp_p\n def _gen(seed):\n return ScipyGraph(sp.random(n, n, p, format='coo'))\n return _gen\n\ndef load_synthetic(args):\n ty = args.syn_type\n if ty == 'gnp':\n gen = get_gnp_generator(args)\n elif ty == 'scipy':\n gen = get_scipy_generator(args)\n else:\n raise ValueError('Unknown graph generator type: {}'.format(ty))\n return GCNSyntheticDataset(\n gen,\n args.syn_nfeats,\n args.syn_nclasses,\n args.syn_train_ratio,\n args.syn_val_ratio,\n args.syn_test_ratio,\n args.syn_seed)\n\ndef register_args(parser):\n # Args for synthetic graphs.\n parser.add_argument('--syn-type', type=str, default='gnp',\n help='Type of the synthetic graph generator')\n parser.add_argument('--syn-nfeats', type=int, default=500,\n help='Number of node features')\n parser.add_argument('--syn-nclasses', type=int, default=10,\n help='Number of output classes')\n parser.add_argument('--syn-train-ratio', type=float, default=.1,\n help='Ratio of training nodes')\n parser.add_argument('--syn-val-ratio', type=float, default=.2,\n help='Ratio of validation nodes')\n parser.add_argument('--syn-test-ratio', type=float, default=.5,\n help='Ratio of testing nodes')\n # Args for GNP generator\n parser.add_argument('--syn-gnp-n', type=int, default=1000,\n help='n in gnp random graph')\n parser.add_argument('--syn-gnp-p', type=float, default=0.0,\n help='p in gnp random graph')\n parser.add_argument('--syn-seed', type=int, default=42,\n help='random seed')\n\nclass CoraBinary(object):\n \"\"\"A mini-dataset for binary classification task using Cora.\n\n After loaded, it has following members:\n\n graphs : list of :class:`~dgl.DGLGraph`\n pmpds : list of :class:`scipy.sparse.coo_matrix`\n labels : list of :class:`numpy.ndarray`\n \"\"\"\n def __init__(self):\n self.dir = get_download_dir()\n self.name = 'cora_binary'\n self.zip_file_path='{}/{}.zip'.format(self.dir, self.name)\n download(_get_dgl_url(_urls[self.name]), path=self.zip_file_path)\n extract_archive(self.zip_file_path, '{}/{}'.format(self.dir, self.name))\n self._load()\n\n def _load(self):\n root = '{}/{}'.format(self.dir, self.name)\n # load graphs\n self.graphs = []\n with open(\"{}/graphs.txt\".format(root), 'r') as f:\n elist = []\n for line in f.readlines():\n if line.startswith('graph'):\n if len(elist) != 0:\n self.graphs.append(dgl.DGLGraph(elist))\n elist = []\n else:\n u, v = line.strip().split(' ')\n elist.append((int(u), int(v)))\n if len(elist) != 0:\n self.graphs.append(dgl.DGLGraph(elist))\n with open(\"{}/pmpds.pkl\".format(root), 'rb') as f:\n self.pmpds = _pickle_load(f)\n self.labels = []\n with open(\"{}/labels.txt\".format(root), 'r') as f:\n cur = []\n for line in f.readlines():\n if line.startswith('graph'):\n if len(cur) != 0:\n self.labels.append(np.array(cur))\n cur = []\n else:\n cur.append(int(line.strip()))\n if len(cur) != 0:\n self.labels.append(np.array(cur))\n # sanity check\n assert len(self.graphs) == len(self.pmpds)\n assert len(self.graphs) == len(self.labels)\n\n def __len__(self):\n return len(self.graphs)\n\n def __getitem__(self, i):\n return (self.graphs[i], self.pmpds[i], self.labels[i])\n\n @staticmethod\n def collate_fn(batch):\n graphs, pmpds, labels = zip(*batch)\n batched_graphs = dgl.batch(graphs)\n batched_pmpds = sp.block_diag(pmpds)\n batched_labels = np.concatenate(labels, axis=0)\n return batched_graphs, batched_pmpds, batched_labels\n\nclass CoraDataset(object):\n def __init__(self):\n self.name = 'cora'\n self.dir = get_download_dir()\n self.zip_file_path='{}/{}.zip'.format(self.dir, self.name)\n 
download(_get_dgl_url(_urls[self.name]), path=self.zip_file_path)\n extract_archive(self.zip_file_path,\n '{}/{}'.format(self.dir, self.name))\n self._load()\n\n def _load(self):\n idx_features_labels = np.genfromtxt(\"{}/cora/cora.content\".\n format(self.dir),\n dtype=np.dtype(str))\n features = sp.csr_matrix(idx_features_labels[:, 1:-1],\n dtype=np.float32)\n labels = _encode_onehot(idx_features_labels[:, -1])\n self.num_labels = labels.shape[1]\n\n # build graph\n idx = np.array(idx_features_labels[:, 0], dtype=np.int32)\n idx_map = {j: i for i, j in enumerate(idx)}\n edges_unordered = np.genfromtxt(\"{}/cora/cora.cites\".format(self.dir),\n dtype=np.int32)\n edges = np.array(list(map(idx_map.get, edges_unordered.flatten())),\n dtype=np.int32).reshape(edges_unordered.shape)\n adj = sp.coo_matrix((np.ones(edges.shape[0]),\n (edges[:, 0], edges[:, 1])),\n shape=(labels.shape[0], labels.shape[0]),\n dtype=np.float32)\n\n # build symmetric adjacency matrix\n adj = adj + adj.T.multiply(adj.T > adj) - adj.multiply(adj.T > adj)\n self.graph = nx.from_scipy_sparse_matrix(adj, create_using=nx.DiGraph())\n\n features = _normalize(features)\n self.features = np.array(features.todense())\n self.labels = np.where(labels)[1]\n\n self.train_mask = _sample_mask(range(140), labels.shape[0])\n self.val_mask = _sample_mask(range(200, 500), labels.shape[0])\n self.test_mask = _sample_mask(range(500, 1500), labels.shape[0])\n\ndef _normalize(mx):\n \"\"\"Row-normalize sparse matrix\"\"\"\n rowsum = np.array(mx.sum(1))\n r_inv = np.power(rowsum, -1).flatten()\n r_inv[np.isinf(r_inv)] = np.inf\n r_mat_inv = sp.diags(r_inv)\n mx = r_mat_inv.dot(mx)\n return mx\n\ndef _encode_onehot(labels):\n classes = set(labels)\n classes_dict = {c: np.identity(len(classes))[i, :] for i, c in\n enumerate(classes)}\n labels_onehot = np.array(list(map(classes_dict.get, labels)),\n dtype=np.int32)\n return labels_onehot\n\n", "path": "python/dgl/data/citation_graph.py"}], "after_files": [{"content": "\"\"\"Cora, citeseer, pubmed dataset.\n\n(lingfan): following dataset loading and preprocessing code from tkipf/gcn\nhttps://github.com/tkipf/gcn/blob/master/gcn/utils.py\n\"\"\"\nfrom __future__ import absolute_import\n\nimport numpy as np\nimport pickle as pkl\nimport networkx as nx\nimport scipy.sparse as sp\nimport os, sys\n\nimport dgl\nfrom .utils import download, extract_archive, get_download_dir, _get_dgl_url\n\n_urls = {\n 'cora' : 'dataset/cora_raw.zip',\n 'citeseer' : 'dataset/citeseer.zip',\n 'pubmed' : 'dataset/pubmed.zip',\n 'cora_binary' : 'dataset/cora_binary.zip',\n}\n\ndef _pickle_load(pkl_file):\n if sys.version_info > (3, 0):\n return pkl.load(pkl_file, encoding='latin1')\n else:\n return pkl.load(pkl_file)\n\nclass CitationGraphDataset(object):\n def __init__(self, name):\n self.name = name\n self.dir = get_download_dir()\n self.zip_file_path='{}/{}.zip'.format(self.dir, name)\n download(_get_dgl_url(_urls[name]), path=self.zip_file_path)\n extract_archive(self.zip_file_path, '{}/{}'.format(self.dir, name))\n self._load()\n\n def _load(self):\n \"\"\"Loads input data from gcn/data directory\n\n ind.name.x => the feature vectors of the training instances as scipy.sparse.csr.csr_matrix object;\n ind.name.tx => the feature vectors of the test instances as scipy.sparse.csr.csr_matrix object;\n ind.name.allx => the feature vectors of both labeled and unlabeled training instances\n (a superset of ind.name.x) as scipy.sparse.csr.csr_matrix object;\n ind.name.y => the one-hot labels of the labeled training instances 
as numpy.ndarray object;\n ind.name.ty => the one-hot labels of the test instances as numpy.ndarray object;\n ind.name.ally => the labels for instances in ind.name.allx as numpy.ndarray object;\n ind.name.graph => a dict in the format {index: [index_of_neighbor_nodes]} as collections.defaultdict\n object;\n ind.name.test.index => the indices of test instances in graph, for the inductive setting as list object.\n\n All objects above must be saved using python pickle module.\n\n :param name: Dataset name\n :return: All data input files loaded (as well the training/test data).\n \"\"\"\n root = '{}/{}'.format(self.dir, self.name)\n objnames = ['x', 'y', 'tx', 'ty', 'allx', 'ally', 'graph']\n objects = []\n for i in range(len(objnames)):\n with open(\"{}/ind.{}.{}\".format(root, self.name, objnames[i]), 'rb') as f:\n objects.append(_pickle_load(f))\n\n x, y, tx, ty, allx, ally, graph = tuple(objects)\n test_idx_reorder = _parse_index_file(\"{}/ind.{}.test.index\".format(root, self.name))\n test_idx_range = np.sort(test_idx_reorder)\n\n if self.name == 'citeseer':\n # Fix citeseer dataset (there are some isolated nodes in the graph)\n # Find isolated nodes, add them as zero-vecs into the right position\n test_idx_range_full = range(min(test_idx_reorder), max(test_idx_reorder)+1)\n tx_extended = sp.lil_matrix((len(test_idx_range_full), x.shape[1]))\n tx_extended[test_idx_range-min(test_idx_range), :] = tx\n tx = tx_extended\n ty_extended = np.zeros((len(test_idx_range_full), y.shape[1]))\n ty_extended[test_idx_range-min(test_idx_range), :] = ty\n ty = ty_extended\n\n features = sp.vstack((allx, tx)).tolil()\n features[test_idx_reorder, :] = features[test_idx_range, :]\n graph = nx.DiGraph(nx.from_dict_of_lists(graph))\n\n onehot_labels = np.vstack((ally, ty))\n onehot_labels[test_idx_reorder, :] = onehot_labels[test_idx_range, :]\n labels = np.argmax(onehot_labels, 1)\n\n idx_test = test_idx_range.tolist()\n idx_train = range(len(y))\n idx_val = range(len(y), len(y)+500)\n\n train_mask = _sample_mask(idx_train, labels.shape[0])\n val_mask = _sample_mask(idx_val, labels.shape[0])\n test_mask = _sample_mask(idx_test, labels.shape[0])\n\n self.graph = graph\n self.features = _preprocess_features(features)\n self.labels = labels\n self.onehot_labels = onehot_labels\n self.num_labels = onehot_labels.shape[1]\n self.train_mask = train_mask\n self.val_mask = val_mask\n self.test_mask = test_mask\n\n print('Finished data loading and preprocessing.')\n print(' NumNodes: {}'.format(self.graph.number_of_nodes()))\n print(' NumEdges: {}'.format(self.graph.number_of_edges()))\n print(' NumFeats: {}'.format(self.features.shape[1]))\n print(' NumClasses: {}'.format(self.num_labels))\n print(' NumTrainingSamples: {}'.format(len(np.nonzero(self.train_mask)[0])))\n print(' NumValidationSamples: {}'.format(len(np.nonzero(self.val_mask)[0])))\n print(' NumTestSamples: {}'.format(len(np.nonzero(self.test_mask)[0])))\n\n def __getitem__(self, idx):\n return self\n\n def __len__(self):\n return 1\n\ndef _preprocess_features(features):\n \"\"\"Row-normalize feature matrix and convert to tuple representation\"\"\"\n rowsum = np.array(features.sum(1))\n r_inv = np.power(rowsum, -1).flatten()\n r_inv[np.isinf(r_inv)] = 0.\n r_mat_inv = sp.diags(r_inv)\n features = r_mat_inv.dot(features)\n return np.array(features.todense())\n\ndef _parse_index_file(filename):\n \"\"\"Parse index file.\"\"\"\n index = []\n for line in open(filename):\n index.append(int(line.strip()))\n return index\n\ndef _sample_mask(idx, l):\n 
\"\"\"Create mask.\"\"\"\n mask = np.zeros(l)\n mask[idx] = 1\n return mask\n\ndef load_cora():\n data = CoraDataset()\n return data\n\ndef load_citeseer():\n data = CitationGraphDataset('citeseer')\n return data\n\ndef load_pubmed():\n data = CitationGraphDataset('pubmed')\n return data\n\nclass GCNSyntheticDataset(object):\n def __init__(self,\n graph_generator,\n num_feats=500,\n num_classes=10,\n train_ratio=1.,\n val_ratio=0.,\n test_ratio=0.,\n seed=None):\n rng = np.random.RandomState(seed)\n # generate graph\n self.graph = graph_generator(seed)\n num_nodes = self.graph.number_of_nodes()\n\n # generate features\n #self.features = rng.randn(num_nodes, num_feats).astype(np.float32)\n self.features = np.zeros((num_nodes, num_feats), dtype=np.float32)\n\n # generate labels\n self.labels = rng.randint(num_classes, size=num_nodes)\n onehot_labels = np.zeros((num_nodes, num_classes), dtype=np.float32)\n onehot_labels[np.arange(num_nodes), self.labels] = 1.\n self.onehot_labels = onehot_labels\n self.num_labels = num_classes\n\n # generate masks\n ntrain = int(num_nodes * train_ratio)\n nval = int(num_nodes * val_ratio)\n ntest = int(num_nodes * test_ratio)\n mask_array = np.zeros((num_nodes,), dtype=np.int32)\n mask_array[0:ntrain] = 1\n mask_array[ntrain:ntrain+nval] = 2\n mask_array[ntrain+nval:ntrain+nval+ntest] = 3\n rng.shuffle(mask_array)\n self.train_mask = (mask_array == 1).astype(np.int32)\n self.val_mask = (mask_array == 2).astype(np.int32)\n self.test_mask = (mask_array == 3).astype(np.int32)\n\n print('Finished synthetic dataset generation.')\n print(' NumNodes: {}'.format(self.graph.number_of_nodes()))\n print(' NumEdges: {}'.format(self.graph.number_of_edges()))\n print(' NumFeats: {}'.format(self.features.shape[1]))\n print(' NumClasses: {}'.format(self.num_labels))\n print(' NumTrainingSamples: {}'.format(len(np.nonzero(self.train_mask)[0])))\n print(' NumValidationSamples: {}'.format(len(np.nonzero(self.val_mask)[0])))\n print(' NumTestSamples: {}'.format(len(np.nonzero(self.test_mask)[0])))\n\n def __getitem__(self, idx):\n return self\n\n def __len__(self):\n return 1\n\ndef get_gnp_generator(args):\n n = args.syn_gnp_n\n p = (2 * np.log(n) / n) if args.syn_gnp_p == 0. else args.syn_gnp_p\n def _gen(seed):\n return nx.fast_gnp_random_graph(n, p, seed, True)\n return _gen\n\nclass ScipyGraph(object):\n \"\"\"A simple graph object that uses scipy matrix.\"\"\"\n def __init__(self, mat):\n self._mat = mat\n\n def get_graph(self):\n return self._mat\n\n def number_of_nodes(self):\n return self._mat.shape[0]\n\n def number_of_edges(self):\n return self._mat.getnnz()\n\ndef get_scipy_generator(args):\n n = args.syn_gnp_n\n p = (2 * np.log(n) / n) if args.syn_gnp_p == 0. 
else args.syn_gnp_p\n def _gen(seed):\n return ScipyGraph(sp.random(n, n, p, format='coo'))\n return _gen\n\ndef load_synthetic(args):\n ty = args.syn_type\n if ty == 'gnp':\n gen = get_gnp_generator(args)\n elif ty == 'scipy':\n gen = get_scipy_generator(args)\n else:\n raise ValueError('Unknown graph generator type: {}'.format(ty))\n return GCNSyntheticDataset(\n gen,\n args.syn_nfeats,\n args.syn_nclasses,\n args.syn_train_ratio,\n args.syn_val_ratio,\n args.syn_test_ratio,\n args.syn_seed)\n\ndef register_args(parser):\n # Args for synthetic graphs.\n parser.add_argument('--syn-type', type=str, default='gnp',\n help='Type of the synthetic graph generator')\n parser.add_argument('--syn-nfeats', type=int, default=500,\n help='Number of node features')\n parser.add_argument('--syn-nclasses', type=int, default=10,\n help='Number of output classes')\n parser.add_argument('--syn-train-ratio', type=float, default=.1,\n help='Ratio of training nodes')\n parser.add_argument('--syn-val-ratio', type=float, default=.2,\n help='Ratio of validation nodes')\n parser.add_argument('--syn-test-ratio', type=float, default=.5,\n help='Ratio of testing nodes')\n # Args for GNP generator\n parser.add_argument('--syn-gnp-n', type=int, default=1000,\n help='n in gnp random graph')\n parser.add_argument('--syn-gnp-p', type=float, default=0.0,\n help='p in gnp random graph')\n parser.add_argument('--syn-seed', type=int, default=42,\n help='random seed')\n\nclass CoraBinary(object):\n \"\"\"A mini-dataset for binary classification task using Cora.\n\n After loaded, it has following members:\n\n graphs : list of :class:`~dgl.DGLGraph`\n pmpds : list of :class:`scipy.sparse.coo_matrix`\n labels : list of :class:`numpy.ndarray`\n \"\"\"\n def __init__(self):\n self.dir = get_download_dir()\n self.name = 'cora_binary'\n self.zip_file_path='{}/{}.zip'.format(self.dir, self.name)\n download(_get_dgl_url(_urls[self.name]), path=self.zip_file_path)\n extract_archive(self.zip_file_path, '{}/{}'.format(self.dir, self.name))\n self._load()\n\n def _load(self):\n root = '{}/{}'.format(self.dir, self.name)\n # load graphs\n self.graphs = []\n with open(\"{}/graphs.txt\".format(root), 'r') as f:\n elist = []\n for line in f.readlines():\n if line.startswith('graph'):\n if len(elist) != 0:\n self.graphs.append(dgl.DGLGraph(elist))\n elist = []\n else:\n u, v = line.strip().split(' ')\n elist.append((int(u), int(v)))\n if len(elist) != 0:\n self.graphs.append(dgl.DGLGraph(elist))\n with open(\"{}/pmpds.pkl\".format(root), 'rb') as f:\n self.pmpds = _pickle_load(f)\n self.labels = []\n with open(\"{}/labels.txt\".format(root), 'r') as f:\n cur = []\n for line in f.readlines():\n if line.startswith('graph'):\n if len(cur) != 0:\n self.labels.append(np.array(cur))\n cur = []\n else:\n cur.append(int(line.strip()))\n if len(cur) != 0:\n self.labels.append(np.array(cur))\n # sanity check\n assert len(self.graphs) == len(self.pmpds)\n assert len(self.graphs) == len(self.labels)\n\n def __len__(self):\n return len(self.graphs)\n\n def __getitem__(self, i):\n return (self.graphs[i], self.pmpds[i], self.labels[i])\n\n @staticmethod\n def collate_fn(batch):\n graphs, pmpds, labels = zip(*batch)\n batched_graphs = dgl.batch(graphs)\n batched_pmpds = sp.block_diag(pmpds)\n batched_labels = np.concatenate(labels, axis=0)\n return batched_graphs, batched_pmpds, batched_labels\n\nclass CoraDataset(object):\n def __init__(self):\n self.name = 'cora'\n self.dir = get_download_dir()\n self.zip_file_path='{}/{}.zip'.format(self.dir, self.name)\n 
download(_get_dgl_url(_urls[self.name]), path=self.zip_file_path)\n extract_archive(self.zip_file_path,\n '{}/{}'.format(self.dir, self.name))\n self._load()\n\n def _load(self):\n idx_features_labels = np.genfromtxt(\"{}/cora/cora.content\".\n format(self.dir),\n dtype=np.dtype(str))\n features = sp.csr_matrix(idx_features_labels[:, 1:-1],\n dtype=np.float32)\n labels = _encode_onehot(idx_features_labels[:, -1])\n self.num_labels = labels.shape[1]\n\n # build graph\n idx = np.array(idx_features_labels[:, 0], dtype=np.int32)\n idx_map = {j: i for i, j in enumerate(idx)}\n edges_unordered = np.genfromtxt(\"{}/cora/cora.cites\".format(self.dir),\n dtype=np.int32)\n edges = np.array(list(map(idx_map.get, edges_unordered.flatten())),\n dtype=np.int32).reshape(edges_unordered.shape)\n adj = sp.coo_matrix((np.ones(edges.shape[0]),\n (edges[:, 0], edges[:, 1])),\n shape=(labels.shape[0], labels.shape[0]),\n dtype=np.float32)\n\n # build symmetric adjacency matrix\n adj = adj + adj.T.multiply(adj.T > adj) - adj.multiply(adj.T > adj)\n self.graph = nx.from_scipy_sparse_matrix(adj, create_using=nx.DiGraph())\n\n features = _normalize(features)\n self.features = np.array(features.todense())\n self.labels = np.where(labels)[1]\n\n self.train_mask = _sample_mask(range(140), labels.shape[0])\n self.val_mask = _sample_mask(range(200, 500), labels.shape[0])\n self.test_mask = _sample_mask(range(500, 1500), labels.shape[0])\n\ndef _normalize(mx):\n \"\"\"Row-normalize sparse matrix\"\"\"\n rowsum = np.array(mx.sum(1))\n r_inv = np.power(rowsum, -1).flatten()\n r_inv[np.isinf(r_inv)] = np.inf\n r_mat_inv = sp.diags(r_inv)\n mx = r_mat_inv.dot(mx)\n return mx\n\ndef _encode_onehot(labels):\n classes = list(sorted(set(labels)))\n classes_dict = {c: np.identity(len(classes))[i, :] for i, c in\n enumerate(classes)}\n labels_onehot = np.array(list(map(classes_dict.get, labels)),\n dtype=np.int32)\n return labels_onehot\n\n", "path": "python/dgl/data/citation_graph.py"}]} |
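Editor's note (not part of the dataset row above): the golden diff replaces `classes = set(labels)` with `classes = list(sorted(set(labels)))` because the iteration order of a Python `set` of strings is not stable across interpreter runs (string hashing is randomized unless `PYTHONHASHSEED` is fixed), so the class-to-column mapping of the one-hot labels can change from run to run even after seeding NumPy and PyTorch. The sketch below is a minimal, hypothetical reproduction of that behaviour; the label strings are made up and nothing here comes from the DGL code base beyond the two encoding variants.

```python
import numpy as np

def encode_onehot_unstable(labels):
    # Set iteration order depends on string hashing, which is randomized
    # per interpreter run, so the column assigned to each class is not reproducible.
    classes = set(labels)
    classes_dict = {c: np.identity(len(classes))[i, :] for i, c in enumerate(classes)}
    return np.array(list(map(classes_dict.get, labels)), dtype=np.int32)

def encode_onehot_stable(labels):
    # Sorting pins a deterministic class order, matching the golden diff.
    classes = list(sorted(set(labels)))
    classes_dict = {c: np.identity(len(classes))[i, :] for i, c in enumerate(classes)}
    return np.array(list(map(classes_dict.get, labels)), dtype=np.int32)

if __name__ == "__main__":
    sample = ["Theory", "Neural_Networks", "Rule_Learning", "Theory"]  # made-up labels
    print(encode_onehot_stable(sample))  # identical columns on every run
```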
gh_patches_debug_1186 | rasdani/github-patches | git_diff | freqtrade__freqtrade-1642 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Unable to perform sell in dry-run mode, broken persistence.py
Hi,
For two days I've been trying to figure out why my freshly installed bot can't perform sell operations in dry-run mode. Even if I perform a force sell, all trades stay open.
I suspected a breaking change in SQLAlchemy, so I rolled back to a previous version, unsuccessfully.
So I checked the persistence.py file and tried a previous version. Surprisingly, the bot performs normally if I go back to commit "cfe00c2f0c118c93e1870567eb75c195bfa91ddd".
I'm investigating to figure out what is happening exactly.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `freqtrade/exchange/exchange.py`
Content:
```
1 # pragma pylint: disable=W0603
2 """ Cryptocurrency Exchanges support """
3 import logging
4 import inspect
5 from random import randint
6 from typing import List, Dict, Tuple, Any, Optional
7 from datetime import datetime
8 from math import floor, ceil
9
10 import arrow
11 import asyncio
12 import ccxt
13 import ccxt.async_support as ccxt_async
14 from pandas import DataFrame
15
16 from freqtrade import constants, OperationalException, DependencyException, TemporaryError
17 from freqtrade.data.converter import parse_ticker_dataframe
18
19 logger = logging.getLogger(__name__)
20
21 API_RETRY_COUNT = 4
22
23
24 # Urls to exchange markets, insert quote and base with .format()
25 _EXCHANGE_URLS = {
26 ccxt.bittrex.__name__: '/Market/Index?MarketName={quote}-{base}',
27 ccxt.binance.__name__: '/tradeDetail.html?symbol={base}_{quote}',
28 }
29
30
31 def retrier_async(f):
32 async def wrapper(*args, **kwargs):
33 count = kwargs.pop('count', API_RETRY_COUNT)
34 try:
35 return await f(*args, **kwargs)
36 except (TemporaryError, DependencyException) as ex:
37 logger.warning('%s() returned exception: "%s"', f.__name__, ex)
38 if count > 0:
39 count -= 1
40 kwargs.update({'count': count})
41 logger.warning('retrying %s() still for %s times', f.__name__, count)
42 return await wrapper(*args, **kwargs)
43 else:
44 logger.warning('Giving up retrying: %s()', f.__name__)
45 raise ex
46 return wrapper
47
48
49 def retrier(f):
50 def wrapper(*args, **kwargs):
51 count = kwargs.pop('count', API_RETRY_COUNT)
52 try:
53 return f(*args, **kwargs)
54 except (TemporaryError, DependencyException) as ex:
55 logger.warning('%s() returned exception: "%s"', f.__name__, ex)
56 if count > 0:
57 count -= 1
58 kwargs.update({'count': count})
59 logger.warning('retrying %s() still for %s times', f.__name__, count)
60 return wrapper(*args, **kwargs)
61 else:
62 logger.warning('Giving up retrying: %s()', f.__name__)
63 raise ex
64 return wrapper
65
66
67 class Exchange(object):
68
69 _conf: Dict = {}
70 _params: Dict = {}
71
72 # Dict to specify which options each exchange implements
73 # TODO: this should be merged with attributes from subclasses
74 # To avoid having to copy/paste this to all subclasses.
75 _ft_has = {
76 "stoploss_on_exchange": False,
77 }
78
79 def __init__(self, config: dict) -> None:
80 """
81 Initializes this module with the given config,
82 it does basic validation whether the specified exchange and pairs are valid.
83 :return: None
84 """
85 self._conf.update(config)
86
87 self._cached_ticker: Dict[str, Any] = {}
88
89 # Holds last candle refreshed time of each pair
90 self._pairs_last_refresh_time: Dict[Tuple[str, str], int] = {}
91
92 # Holds candles
93 self._klines: Dict[Tuple[str, str], DataFrame] = {}
94
95 # Holds all open sell orders for dry_run
96 self._dry_run_open_orders: Dict[str, Any] = {}
97
98 if config['dry_run']:
99 logger.info('Instance is running with dry_run enabled')
100
101 exchange_config = config['exchange']
102 self._api: ccxt.Exchange = self._init_ccxt(
103 exchange_config, ccxt_kwargs=exchange_config.get('ccxt_config'))
104 self._api_async: ccxt_async.Exchange = self._init_ccxt(
105 exchange_config, ccxt_async, ccxt_kwargs=exchange_config.get('ccxt_async_config'))
106
107 logger.info('Using Exchange "%s"', self.name)
108
109 self.markets = self._load_markets()
110 # Check if all pairs are available
111 self.validate_pairs(config['exchange']['pair_whitelist'])
112 self.validate_ordertypes(config.get('order_types', {}))
113 self.validate_order_time_in_force(config.get('order_time_in_force', {}))
114 if config.get('ticker_interval'):
115 # Check if timeframe is available
116 self.validate_timeframes(config['ticker_interval'])
117
118 def __del__(self):
119 """
120 Destructor - clean up async stuff
121 """
122 logger.debug("Exchange object destroyed, closing async loop")
123 if self._api_async and inspect.iscoroutinefunction(self._api_async.close):
124 asyncio.get_event_loop().run_until_complete(self._api_async.close())
125
126 def _init_ccxt(self, exchange_config: dict, ccxt_module=ccxt,
127 ccxt_kwargs: dict = None) -> ccxt.Exchange:
128 """
129 Initialize ccxt with given config and return valid
130 ccxt instance.
131 """
132 # Find matching class for the given exchange name
133 name = exchange_config['name']
134
135 if name not in ccxt_module.exchanges:
136 raise OperationalException(f'Exchange {name} is not supported')
137
138 ex_config = {
139 'apiKey': exchange_config.get('key'),
140 'secret': exchange_config.get('secret'),
141 'password': exchange_config.get('password'),
142 'uid': exchange_config.get('uid', ''),
143 'enableRateLimit': exchange_config.get('ccxt_rate_limit', True)
144 }
145 if ccxt_kwargs:
146 logger.info('Applying additional ccxt config: %s', ccxt_kwargs)
147 ex_config.update(ccxt_kwargs)
148 try:
149
150 api = getattr(ccxt_module, name.lower())(ex_config)
151 except (KeyError, AttributeError):
152 raise OperationalException(f'Exchange {name} is not supported')
153
154 self.set_sandbox(api, exchange_config, name)
155
156 return api
157
158 @property
159 def name(self) -> str:
160 """exchange Name (from ccxt)"""
161 return self._api.name
162
163 @property
164 def id(self) -> str:
165 """exchange ccxt id"""
166 return self._api.id
167
168 def klines(self, pair_interval: Tuple[str, str], copy=True) -> DataFrame:
169 if pair_interval in self._klines:
170 return self._klines[pair_interval].copy() if copy else self._klines[pair_interval]
171 else:
172 return DataFrame()
173
174 def set_sandbox(self, api, exchange_config: dict, name: str):
175 if exchange_config.get('sandbox'):
176 if api.urls.get('test'):
177 api.urls['api'] = api.urls['test']
178 logger.info("Enabled Sandbox API on %s", name)
179 else:
180 logger.warning(name, "No Sandbox URL in CCXT, exiting. "
181 "Please check your config.json")
182 raise OperationalException(f'Exchange {name} does not provide a sandbox api')
183
184 def _load_async_markets(self) -> None:
185 try:
186 if self._api_async:
187 asyncio.get_event_loop().run_until_complete(self._api_async.load_markets())
188
189 except ccxt.BaseError as e:
190 logger.warning('Could not load async markets. Reason: %s', e)
191 return
192
193 def _load_markets(self) -> Dict[str, Any]:
194 """ Initialize markets both sync and async """
195 try:
196 markets = self._api.load_markets()
197 self._load_async_markets()
198 return markets
199 except ccxt.BaseError as e:
200 logger.warning('Unable to initialize markets. Reason: %s', e)
201 return {}
202
203 def validate_pairs(self, pairs: List[str]) -> None:
204 """
205 Checks if all given pairs are tradable on the current exchange.
206 Raises OperationalException if one pair is not available.
207 :param pairs: list of pairs
208 :return: None
209 """
210
211 if not self.markets:
212 logger.warning('Unable to validate pairs (assuming they are correct).')
213 # return
214
215 stake_cur = self._conf['stake_currency']
216 for pair in pairs:
217 # Note: ccxt has BaseCurrency/QuoteCurrency format for pairs
218 # TODO: add a support for having coins in BTC/USDT format
219 if not pair.endswith(stake_cur):
220 raise OperationalException(
221 f'Pair {pair} not compatible with stake_currency: {stake_cur}')
222 if self.markets and pair not in self.markets:
223 raise OperationalException(
224 f'Pair {pair} is not available at {self.name}'
225 f'Please remove {pair} from your whitelist.')
226
227 def validate_timeframes(self, timeframe: List[str]) -> None:
228 """
229 Checks if ticker interval from config is a supported timeframe on the exchange
230 """
231 timeframes = self._api.timeframes
232 if timeframe not in timeframes:
233 raise OperationalException(
234 f'Invalid ticker {timeframe}, this Exchange supports {timeframes}')
235
236 def validate_ordertypes(self, order_types: Dict) -> None:
237 """
238 Checks if order-types configured in strategy/config are supported
239 """
240 if any(v == 'market' for k, v in order_types.items()):
241 if not self.exchange_has('createMarketOrder'):
242 raise OperationalException(
243 f'Exchange {self.name} does not support market orders.')
244
245 if (order_types.get("stoploss_on_exchange")
246 and not self._ft_has.get("stoploss_on_exchange", False)):
247 raise OperationalException(
248 'On exchange stoploss is not supported for %s.' % self.name
249 )
250
251 def validate_order_time_in_force(self, order_time_in_force: Dict) -> None:
252 """
253 Checks if order time in force configured in strategy/config are supported
254 """
255 if any(v != 'gtc' for k, v in order_time_in_force.items()):
256 if self.name != 'Binance':
257 raise OperationalException(
258 f'Time in force policies are not supporetd for {self.name} yet.')
259
260 def exchange_has(self, endpoint: str) -> bool:
261 """
262 Checks if exchange implements a specific API endpoint.
263 Wrapper around ccxt 'has' attribute
264 :param endpoint: Name of endpoint (e.g. 'fetchOHLCV', 'fetchTickers')
265 :return: bool
266 """
267 return endpoint in self._api.has and self._api.has[endpoint]
268
269 def symbol_amount_prec(self, pair, amount: float):
270 '''
271 Returns the amount to buy or sell to a precision the Exchange accepts
272 Rounded down
273 '''
274 if self._api.markets[pair]['precision']['amount']:
275 symbol_prec = self._api.markets[pair]['precision']['amount']
276 big_amount = amount * pow(10, symbol_prec)
277 amount = floor(big_amount) / pow(10, symbol_prec)
278 return amount
279
280 def symbol_price_prec(self, pair, price: float):
281 '''
282 Returns the price buying or selling with to the precision the Exchange accepts
283 Rounds up
284 '''
285 if self._api.markets[pair]['precision']['price']:
286 symbol_prec = self._api.markets[pair]['precision']['price']
287 big_price = price * pow(10, symbol_prec)
288 price = ceil(big_price) / pow(10, symbol_prec)
289 return price
290
291 def dry_run_order(self, pair: str, ordertype: str, side: str, amount: float,
292 rate: float, params: Dict = {}) -> Dict[str, Any]:
293 order_id = f'dry_run_{side}_{randint(0, 10**6)}'
294 dry_order = { # TODO: additional entry should be added for stoploss limit
295 "id": order_id,
296 'pair': pair,
297 'price': rate,
298 'amount': amount,
299 "cost": amount * rate,
300 'type': ordertype,
301 'side': 'buy',
302 'remaining': amount,
303 'datetime': arrow.utcnow().isoformat(),
304 'status': "open",
305 'fee': None,
306 "info": {}
307 }
308 self._store_dry_order(dry_order)
309 return dry_order
310
311 def _store_dry_order(self, dry_order: Dict) -> None:
312 closed_order = dry_order.copy()
313 if closed_order["type"] in ["market", "limit"]:
314 closed_order.update({
315 "status": "closed",
316 "filled": closed_order["amount"],
317 "remaining": 0
318 })
319 self._dry_run_open_orders[closed_order["id"]] = closed_order
320
321 def create_order(self, pair: str, ordertype: str, side: str, amount: float,
322 rate: float, params: Dict = {}) -> Dict:
323 try:
324 # Set the precision for amount and price(rate) as accepted by the exchange
325 amount = self.symbol_amount_prec(pair, amount)
326 rate = self.symbol_price_prec(pair, rate) if ordertype != 'market' else None
327
328 return self._api.create_order(pair, ordertype, side,
329 amount, rate, params)
330
331 except ccxt.InsufficientFunds as e:
332 raise DependencyException(
333 f'Insufficient funds to create {ordertype} {side} order on market {pair}.'
334 f'Tried to {side} amount {amount} at rate {rate} (total {rate*amount}).'
335 f'Message: {e}')
336 except ccxt.InvalidOrder as e:
337 raise DependencyException(
338 f'Could not create {ordertype} {side} order on market {pair}.'
339 f'Tried to {side} amount {amount} at rate {rate} (total {rate*amount}).'
340 f'Message: {e}')
341 except (ccxt.NetworkError, ccxt.ExchangeError) as e:
342 raise TemporaryError(
343 f'Could not place {side} order due to {e.__class__.__name__}. Message: {e}')
344 except ccxt.BaseError as e:
345 raise OperationalException(e)
346
347 def buy(self, pair: str, ordertype: str, amount: float,
348 rate: float, time_in_force) -> Dict:
349
350 if self._conf['dry_run']:
351 dry_order = self.dry_run_order(pair, ordertype, "buy", amount, rate)
352 return dry_order
353
354 params = self._params.copy()
355 if time_in_force != 'gtc':
356 params.update({'timeInForce': time_in_force})
357
358 return self.create_order(pair, ordertype, 'buy', amount, rate, params)
359
360 def sell(self, pair: str, ordertype: str, amount: float,
361 rate: float, time_in_force='gtc') -> Dict:
362
363 if self._conf['dry_run']:
364 dry_order = self.dry_run_order(pair, ordertype, "sell", amount, rate)
365 return dry_order
366
367 params = self._params.copy()
368 if time_in_force != 'gtc':
369 params.update({'timeInForce': time_in_force})
370
371 return self.create_order(pair, ordertype, 'sell', amount, rate, params)
372
373 def stoploss_limit(self, pair: str, amount: float, stop_price: float, rate: float) -> Dict:
374 """
375 creates a stoploss limit order.
376 NOTICE: it is not supported by all exchanges. only binance is tested for now.
377 TODO: implementation maybe needs to be moved to the binance subclass
378 """
379 ordertype = "stop_loss_limit"
380
381 stop_price = self.symbol_price_prec(pair, stop_price)
382
383 # Ensure rate is less than stop price
384 if stop_price <= rate:
385 raise OperationalException(
386 'In stoploss limit order, stop price should be more than limit price')
387
388 if self._conf['dry_run']:
389 dry_order = self.dry_run_order(
390 pair, ordertype, "sell", amount, stop_price)
391 return dry_order
392
393 params = self._params.copy()
394 params.update({'stopPrice': stop_price})
395
396 order = self.create_order(pair, ordertype, 'sell', amount, rate, params)
397 logger.info('stoploss limit order added for %s. '
398 'stop price: %s. limit: %s' % (pair, stop_price, rate))
399 return order
400
401 @retrier
402 def get_balance(self, currency: str) -> float:
403 if self._conf['dry_run']:
404 return 999.9
405
406 # ccxt exception is already handled by get_balances
407 balances = self.get_balances()
408 balance = balances.get(currency)
409 if balance is None:
410 raise TemporaryError(
411 f'Could not get {currency} balance due to malformed exchange response: {balances}')
412 return balance['free']
413
414 @retrier
415 def get_balances(self) -> dict:
416 if self._conf['dry_run']:
417 return {}
418
419 try:
420 balances = self._api.fetch_balance()
421 # Remove additional info from ccxt results
422 balances.pop("info", None)
423 balances.pop("free", None)
424 balances.pop("total", None)
425 balances.pop("used", None)
426
427 return balances
428 except (ccxt.NetworkError, ccxt.ExchangeError) as e:
429 raise TemporaryError(
430 f'Could not get balance due to {e.__class__.__name__}. Message: {e}')
431 except ccxt.BaseError as e:
432 raise OperationalException(e)
433
434 @retrier
435 def get_tickers(self) -> Dict:
436 try:
437 return self._api.fetch_tickers()
438 except ccxt.NotSupported as e:
439 raise OperationalException(
440 f'Exchange {self._api.name} does not support fetching tickers in batch.'
441 f'Message: {e}')
442 except (ccxt.NetworkError, ccxt.ExchangeError) as e:
443 raise TemporaryError(
444 f'Could not load tickers due to {e.__class__.__name__}. Message: {e}')
445 except ccxt.BaseError as e:
446 raise OperationalException(e)
447
448 @retrier
449 def get_ticker(self, pair: str, refresh: Optional[bool] = True) -> dict:
450 if refresh or pair not in self._cached_ticker.keys():
451 try:
452 if pair not in self._api.markets:
453 raise DependencyException(f"Pair {pair} not available")
454 data = self._api.fetch_ticker(pair)
455 try:
456 self._cached_ticker[pair] = {
457 'bid': float(data['bid']),
458 'ask': float(data['ask']),
459 }
460 except KeyError:
461 logger.debug("Could not cache ticker data for %s", pair)
462 return data
463 except (ccxt.NetworkError, ccxt.ExchangeError) as e:
464 raise TemporaryError(
465 f'Could not load ticker due to {e.__class__.__name__}. Message: {e}')
466 except ccxt.BaseError as e:
467 raise OperationalException(e)
468 else:
469 logger.info("returning cached ticker-data for %s", pair)
470 return self._cached_ticker[pair]
471
472 def get_history(self, pair: str, tick_interval: str,
473 since_ms: int) -> List:
474 """
475 Gets candle history using asyncio and returns the list of candles.
476 Handles all async doing.
477 """
478 return asyncio.get_event_loop().run_until_complete(
479 self._async_get_history(pair=pair, tick_interval=tick_interval,
480 since_ms=since_ms))
481
482 async def _async_get_history(self, pair: str,
483 tick_interval: str,
484 since_ms: int) -> List:
485 # Assume exchange returns 500 candles
486 _LIMIT = 500
487
488 one_call = constants.TICKER_INTERVAL_MINUTES[tick_interval] * 60 * _LIMIT * 1000
489 logger.debug("one_call: %s", one_call)
490 input_coroutines = [self._async_get_candle_history(
491 pair, tick_interval, since) for since in
492 range(since_ms, arrow.utcnow().timestamp * 1000, one_call)]
493
494 tickers = await asyncio.gather(*input_coroutines, return_exceptions=True)
495
496 # Combine tickers
497 data: List = []
498 for p, ticker_interval, ticker in tickers:
499 if p == pair:
500 data.extend(ticker)
501 # Sort data again after extending the result - above calls return in "async order"
502 data = sorted(data, key=lambda x: x[0])
503 logger.info("downloaded %s with length %s.", pair, len(data))
504 return data
505
506 def refresh_latest_ohlcv(self, pair_list: List[Tuple[str, str]]) -> List[Tuple[str, List]]:
507 """
508 Refresh in-memory ohlcv asyncronously and set `_klines` with the result
509 """
510 logger.debug("Refreshing ohlcv data for %d pairs", len(pair_list))
511
512 input_coroutines = []
513
514 # Gather coroutines to run
515 for pair, ticker_interval in set(pair_list):
516 if (not ((pair, ticker_interval) in self._klines)
517 or self._now_is_time_to_refresh(pair, ticker_interval)):
518 input_coroutines.append(self._async_get_candle_history(pair, ticker_interval))
519 else:
520 logger.debug("Using cached ohlcv data for %s, %s ...", pair, ticker_interval)
521
522 tickers = asyncio.get_event_loop().run_until_complete(
523 asyncio.gather(*input_coroutines, return_exceptions=True))
524
525 # handle caching
526 for res in tickers:
527 if isinstance(res, Exception):
528 logger.warning("Async code raised an exception: %s", res.__class__.__name__)
529 continue
530 pair = res[0]
531 tick_interval = res[1]
532 ticks = res[2]
533 # keeping last candle time as last refreshed time of the pair
534 if ticks:
535 self._pairs_last_refresh_time[(pair, tick_interval)] = ticks[-1][0] // 1000
536 # keeping parsed dataframe in cache
537 self._klines[(pair, tick_interval)] = parse_ticker_dataframe(
538 ticks, tick_interval, fill_missing=True)
539 return tickers
540
541 def _now_is_time_to_refresh(self, pair: str, ticker_interval: str) -> bool:
542 # Calculating ticker interval in seconds
543 interval_in_sec = constants.TICKER_INTERVAL_MINUTES[ticker_interval] * 60
544
545 return not ((self._pairs_last_refresh_time.get((pair, ticker_interval), 0)
546 + interval_in_sec) >= arrow.utcnow().timestamp)
547
548 @retrier_async
549 async def _async_get_candle_history(self, pair: str, tick_interval: str,
550 since_ms: Optional[int] = None) -> Tuple[str, str, List]:
551 """
552 Asyncronously gets candle histories using fetch_ohlcv
553 returns tuple: (pair, tick_interval, ohlcv_list)
554 """
555 try:
556 # fetch ohlcv asynchronously
557 logger.debug("fetching %s, %s since %s ...", pair, tick_interval, since_ms)
558
559 data = await self._api_async.fetch_ohlcv(pair, timeframe=tick_interval,
560 since=since_ms)
561
562 # Because some exchange sort Tickers ASC and other DESC.
563 # Ex: Bittrex returns a list of tickers ASC (oldest first, newest last)
564 # when GDAX returns a list of tickers DESC (newest first, oldest last)
565 # Only sort if necessary to save computing time
566 try:
567 if data and data[0][0] > data[-1][0]:
568 data = sorted(data, key=lambda x: x[0])
569 except IndexError:
570 logger.exception("Error loading %s. Result was %s.", pair, data)
571 return pair, tick_interval, []
572 logger.debug("done fetching %s, %s ...", pair, tick_interval)
573 return pair, tick_interval, data
574
575 except ccxt.NotSupported as e:
576 raise OperationalException(
577 f'Exchange {self._api.name} does not support fetching historical candlestick data.'
578 f'Message: {e}')
579 except (ccxt.NetworkError, ccxt.ExchangeError) as e:
580 raise TemporaryError(
581 f'Could not load ticker history due to {e.__class__.__name__}. Message: {e}')
582 except ccxt.BaseError as e:
583 raise OperationalException(f'Could not fetch ticker data. Msg: {e}')
584
585 @retrier
586 def cancel_order(self, order_id: str, pair: str) -> None:
587 if self._conf['dry_run']:
588 return
589
590 try:
591 return self._api.cancel_order(order_id, pair)
592 except ccxt.InvalidOrder as e:
593 raise DependencyException(
594 f'Could not cancel order. Message: {e}')
595 except (ccxt.NetworkError, ccxt.ExchangeError) as e:
596 raise TemporaryError(
597 f'Could not cancel order due to {e.__class__.__name__}. Message: {e}')
598 except ccxt.BaseError as e:
599 raise OperationalException(e)
600
601 @retrier
602 def get_order(self, order_id: str, pair: str) -> Dict:
603 if self._conf['dry_run']:
604 order = self._dry_run_open_orders[order_id]
605 return order
606 try:
607 return self._api.fetch_order(order_id, pair)
608 except ccxt.InvalidOrder as e:
609 raise DependencyException(
610 f'Could not get order. Message: {e}')
611 except (ccxt.NetworkError, ccxt.ExchangeError) as e:
612 raise TemporaryError(
613 f'Could not get order due to {e.__class__.__name__}. Message: {e}')
614 except ccxt.BaseError as e:
615 raise OperationalException(e)
616
617 @retrier
618 def get_order_book(self, pair: str, limit: int = 100) -> dict:
619 """
620 get order book level 2 from exchange
621
622 Notes:
623 20180619: bittrex doesnt support limits -.-
624 """
625 try:
626
627 return self._api.fetch_l2_order_book(pair, limit)
628 except ccxt.NotSupported as e:
629 raise OperationalException(
630 f'Exchange {self._api.name} does not support fetching order book.'
631 f'Message: {e}')
632 except (ccxt.NetworkError, ccxt.ExchangeError) as e:
633 raise TemporaryError(
634 f'Could not get order book due to {e.__class__.__name__}. Message: {e}')
635 except ccxt.BaseError as e:
636 raise OperationalException(e)
637
638 @retrier
639 def get_trades_for_order(self, order_id: str, pair: str, since: datetime) -> List:
640 if self._conf['dry_run']:
641 return []
642 if not self.exchange_has('fetchMyTrades'):
643 return []
644 try:
645 # Allow 5s offset to catch slight time offsets (discovered in #1185)
646 my_trades = self._api.fetch_my_trades(pair, since.timestamp() - 5)
647 matched_trades = [trade for trade in my_trades if trade['order'] == order_id]
648
649 return matched_trades
650
651 except ccxt.NetworkError as e:
652 raise TemporaryError(
653 f'Could not get trades due to networking error. Message: {e}')
654 except ccxt.BaseError as e:
655 raise OperationalException(e)
656
657 @retrier
658 def get_markets(self) -> List[dict]:
659 try:
660 return self._api.fetch_markets()
661 except (ccxt.NetworkError, ccxt.ExchangeError) as e:
662 raise TemporaryError(
663 f'Could not load markets due to {e.__class__.__name__}. Message: {e}')
664 except ccxt.BaseError as e:
665 raise OperationalException(e)
666
667 @retrier
668 def get_fee(self, symbol='ETH/BTC', type='', side='', amount=1,
669 price=1, taker_or_maker='maker') -> float:
670 try:
671 # validate that markets are loaded before trying to get fee
672 if self._api.markets is None or len(self._api.markets) == 0:
673 self._api.load_markets()
674
675 return self._api.calculate_fee(symbol=symbol, type=type, side=side, amount=amount,
676 price=price, takerOrMaker=taker_or_maker)['rate']
677 except (ccxt.NetworkError, ccxt.ExchangeError) as e:
678 raise TemporaryError(
679 f'Could not get fee info due to {e.__class__.__name__}. Message: {e}')
680 except ccxt.BaseError as e:
681 raise OperationalException(e)
682
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/freqtrade/exchange/exchange.py b/freqtrade/exchange/exchange.py
--- a/freqtrade/exchange/exchange.py
+++ b/freqtrade/exchange/exchange.py
@@ -298,7 +298,7 @@
'amount': amount,
"cost": amount * rate,
'type': ordertype,
- 'side': 'buy',
+ 'side': side,
'remaining': amount,
'datetime': arrow.utcnow().isoformat(),
'status': "open",
| {"golden_diff": "diff --git a/freqtrade/exchange/exchange.py b/freqtrade/exchange/exchange.py\n--- a/freqtrade/exchange/exchange.py\n+++ b/freqtrade/exchange/exchange.py\n@@ -298,7 +298,7 @@\n 'amount': amount,\n \"cost\": amount * rate,\n 'type': ordertype,\n- 'side': 'buy',\n+ 'side': side,\n 'remaining': amount,\n 'datetime': arrow.utcnow().isoformat(),\n 'status': \"open\",\n", "issue": "Unable to Perform Sell in dry mode, broken persistance.py\nHi,\r\nSince two days I'm trying to figure out why my fresh installed bot can't perform sell operations on Dry run mode. Even If I perform a force sell all trades stay open.\r\nI suspected a breaking change in SQLAlchemy, so I rollbacked to a previous version, unsucessfully.\r\nSo I checked the persistance.py file and tried previous version. And surprisingly the bot is performing normaly if I go back to this commit \"cfe00c2f0c118c93e1870567eb75c195bfa91ddd\"\r\nI'm investigating to figure out what is hapenning excactly.\r\n\n", "before_files": [{"content": "# pragma pylint: disable=W0603\n\"\"\" Cryptocurrency Exchanges support \"\"\"\nimport logging\nimport inspect\nfrom random import randint\nfrom typing import List, Dict, Tuple, Any, Optional\nfrom datetime import datetime\nfrom math import floor, ceil\n\nimport arrow\nimport asyncio\nimport ccxt\nimport ccxt.async_support as ccxt_async\nfrom pandas import DataFrame\n\nfrom freqtrade import constants, OperationalException, DependencyException, TemporaryError\nfrom freqtrade.data.converter import parse_ticker_dataframe\n\nlogger = logging.getLogger(__name__)\n\nAPI_RETRY_COUNT = 4\n\n\n# Urls to exchange markets, insert quote and base with .format()\n_EXCHANGE_URLS = {\n ccxt.bittrex.__name__: '/Market/Index?MarketName={quote}-{base}',\n ccxt.binance.__name__: '/tradeDetail.html?symbol={base}_{quote}',\n}\n\n\ndef retrier_async(f):\n async def wrapper(*args, **kwargs):\n count = kwargs.pop('count', API_RETRY_COUNT)\n try:\n return await f(*args, **kwargs)\n except (TemporaryError, DependencyException) as ex:\n logger.warning('%s() returned exception: \"%s\"', f.__name__, ex)\n if count > 0:\n count -= 1\n kwargs.update({'count': count})\n logger.warning('retrying %s() still for %s times', f.__name__, count)\n return await wrapper(*args, **kwargs)\n else:\n logger.warning('Giving up retrying: %s()', f.__name__)\n raise ex\n return wrapper\n\n\ndef retrier(f):\n def wrapper(*args, **kwargs):\n count = kwargs.pop('count', API_RETRY_COUNT)\n try:\n return f(*args, **kwargs)\n except (TemporaryError, DependencyException) as ex:\n logger.warning('%s() returned exception: \"%s\"', f.__name__, ex)\n if count > 0:\n count -= 1\n kwargs.update({'count': count})\n logger.warning('retrying %s() still for %s times', f.__name__, count)\n return wrapper(*args, **kwargs)\n else:\n logger.warning('Giving up retrying: %s()', f.__name__)\n raise ex\n return wrapper\n\n\nclass Exchange(object):\n\n _conf: Dict = {}\n _params: Dict = {}\n\n # Dict to specify which options each exchange implements\n # TODO: this should be merged with attributes from subclasses\n # To avoid having to copy/paste this to all subclasses.\n _ft_has = {\n \"stoploss_on_exchange\": False,\n }\n\n def __init__(self, config: dict) -> None:\n \"\"\"\n Initializes this module with the given config,\n it does basic validation whether the specified exchange and pairs are valid.\n :return: None\n \"\"\"\n self._conf.update(config)\n\n self._cached_ticker: Dict[str, Any] = {}\n\n # Holds last candle refreshed time of each pair\n 
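Editor's note (not part of the dataset row): the one-line fix above lands in `exchange.py`, not `persistence.py`, but it explains the reporter's symptom. `dry_run_order()` builds the simulated order for both buys and sells, and with `'side'` hard-coded to `'buy'` every simulated sell returned by `get_order()` is reported as a buy, so the downstream trade-update logic (which presumably keys off the order's side) never closes the trade — hence the impression that persistence was broken. The snippet below is only an illustrative reduction of the dictionary built in `Exchange.dry_run_order()`; it is not freqtrade code, and the assertion merely shows what changes with the patch.

```python
from random import randint

def dry_run_order(pair: str, ordertype: str, side: str, amount: float, rate: float) -> dict:
    # Reduced copy of the dict built in Exchange.dry_run_order() (see the file above).
    return {
        "id": f"dry_run_{side}_{randint(0, 10**6)}",
        "pair": pair,
        "price": rate,
        "amount": amount,
        "cost": amount * rate,
        "type": ordertype,
        "side": side,  # before the golden diff this was hard-coded to 'buy'
        "remaining": amount,
        "status": "open",
    }

order = dry_run_order("ETH/BTC", "limit", "sell", 1.0, 0.05)
assert order["side"] == "sell"  # with the old code this was 'buy', so sells were never recognised
```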
self._pairs_last_refresh_time: Dict[Tuple[str, str], int] = {}\n\n # Holds candles\n self._klines: Dict[Tuple[str, str], DataFrame] = {}\n\n # Holds all open sell orders for dry_run\n self._dry_run_open_orders: Dict[str, Any] = {}\n\n if config['dry_run']:\n logger.info('Instance is running with dry_run enabled')\n\n exchange_config = config['exchange']\n self._api: ccxt.Exchange = self._init_ccxt(\n exchange_config, ccxt_kwargs=exchange_config.get('ccxt_config'))\n self._api_async: ccxt_async.Exchange = self._init_ccxt(\n exchange_config, ccxt_async, ccxt_kwargs=exchange_config.get('ccxt_async_config'))\n\n logger.info('Using Exchange \"%s\"', self.name)\n\n self.markets = self._load_markets()\n # Check if all pairs are available\n self.validate_pairs(config['exchange']['pair_whitelist'])\n self.validate_ordertypes(config.get('order_types', {}))\n self.validate_order_time_in_force(config.get('order_time_in_force', {}))\n if config.get('ticker_interval'):\n # Check if timeframe is available\n self.validate_timeframes(config['ticker_interval'])\n\n def __del__(self):\n \"\"\"\n Destructor - clean up async stuff\n \"\"\"\n logger.debug(\"Exchange object destroyed, closing async loop\")\n if self._api_async and inspect.iscoroutinefunction(self._api_async.close):\n asyncio.get_event_loop().run_until_complete(self._api_async.close())\n\n def _init_ccxt(self, exchange_config: dict, ccxt_module=ccxt,\n ccxt_kwargs: dict = None) -> ccxt.Exchange:\n \"\"\"\n Initialize ccxt with given config and return valid\n ccxt instance.\n \"\"\"\n # Find matching class for the given exchange name\n name = exchange_config['name']\n\n if name not in ccxt_module.exchanges:\n raise OperationalException(f'Exchange {name} is not supported')\n\n ex_config = {\n 'apiKey': exchange_config.get('key'),\n 'secret': exchange_config.get('secret'),\n 'password': exchange_config.get('password'),\n 'uid': exchange_config.get('uid', ''),\n 'enableRateLimit': exchange_config.get('ccxt_rate_limit', True)\n }\n if ccxt_kwargs:\n logger.info('Applying additional ccxt config: %s', ccxt_kwargs)\n ex_config.update(ccxt_kwargs)\n try:\n\n api = getattr(ccxt_module, name.lower())(ex_config)\n except (KeyError, AttributeError):\n raise OperationalException(f'Exchange {name} is not supported')\n\n self.set_sandbox(api, exchange_config, name)\n\n return api\n\n @property\n def name(self) -> str:\n \"\"\"exchange Name (from ccxt)\"\"\"\n return self._api.name\n\n @property\n def id(self) -> str:\n \"\"\"exchange ccxt id\"\"\"\n return self._api.id\n\n def klines(self, pair_interval: Tuple[str, str], copy=True) -> DataFrame:\n if pair_interval in self._klines:\n return self._klines[pair_interval].copy() if copy else self._klines[pair_interval]\n else:\n return DataFrame()\n\n def set_sandbox(self, api, exchange_config: dict, name: str):\n if exchange_config.get('sandbox'):\n if api.urls.get('test'):\n api.urls['api'] = api.urls['test']\n logger.info(\"Enabled Sandbox API on %s\", name)\n else:\n logger.warning(name, \"No Sandbox URL in CCXT, exiting. \"\n \"Please check your config.json\")\n raise OperationalException(f'Exchange {name} does not provide a sandbox api')\n\n def _load_async_markets(self) -> None:\n try:\n if self._api_async:\n asyncio.get_event_loop().run_until_complete(self._api_async.load_markets())\n\n except ccxt.BaseError as e:\n logger.warning('Could not load async markets. 
Reason: %s', e)\n return\n\n def _load_markets(self) -> Dict[str, Any]:\n \"\"\" Initialize markets both sync and async \"\"\"\n try:\n markets = self._api.load_markets()\n self._load_async_markets()\n return markets\n except ccxt.BaseError as e:\n logger.warning('Unable to initialize markets. Reason: %s', e)\n return {}\n\n def validate_pairs(self, pairs: List[str]) -> None:\n \"\"\"\n Checks if all given pairs are tradable on the current exchange.\n Raises OperationalException if one pair is not available.\n :param pairs: list of pairs\n :return: None\n \"\"\"\n\n if not self.markets:\n logger.warning('Unable to validate pairs (assuming they are correct).')\n # return\n\n stake_cur = self._conf['stake_currency']\n for pair in pairs:\n # Note: ccxt has BaseCurrency/QuoteCurrency format for pairs\n # TODO: add a support for having coins in BTC/USDT format\n if not pair.endswith(stake_cur):\n raise OperationalException(\n f'Pair {pair} not compatible with stake_currency: {stake_cur}')\n if self.markets and pair not in self.markets:\n raise OperationalException(\n f'Pair {pair} is not available at {self.name}'\n f'Please remove {pair} from your whitelist.')\n\n def validate_timeframes(self, timeframe: List[str]) -> None:\n \"\"\"\n Checks if ticker interval from config is a supported timeframe on the exchange\n \"\"\"\n timeframes = self._api.timeframes\n if timeframe not in timeframes:\n raise OperationalException(\n f'Invalid ticker {timeframe}, this Exchange supports {timeframes}')\n\n def validate_ordertypes(self, order_types: Dict) -> None:\n \"\"\"\n Checks if order-types configured in strategy/config are supported\n \"\"\"\n if any(v == 'market' for k, v in order_types.items()):\n if not self.exchange_has('createMarketOrder'):\n raise OperationalException(\n f'Exchange {self.name} does not support market orders.')\n\n if (order_types.get(\"stoploss_on_exchange\")\n and not self._ft_has.get(\"stoploss_on_exchange\", False)):\n raise OperationalException(\n 'On exchange stoploss is not supported for %s.' % self.name\n )\n\n def validate_order_time_in_force(self, order_time_in_force: Dict) -> None:\n \"\"\"\n Checks if order time in force configured in strategy/config are supported\n \"\"\"\n if any(v != 'gtc' for k, v in order_time_in_force.items()):\n if self.name != 'Binance':\n raise OperationalException(\n f'Time in force policies are not supporetd for {self.name} yet.')\n\n def exchange_has(self, endpoint: str) -> bool:\n \"\"\"\n Checks if exchange implements a specific API endpoint.\n Wrapper around ccxt 'has' attribute\n :param endpoint: Name of endpoint (e.g. 
'fetchOHLCV', 'fetchTickers')\n :return: bool\n \"\"\"\n return endpoint in self._api.has and self._api.has[endpoint]\n\n def symbol_amount_prec(self, pair, amount: float):\n '''\n Returns the amount to buy or sell to a precision the Exchange accepts\n Rounded down\n '''\n if self._api.markets[pair]['precision']['amount']:\n symbol_prec = self._api.markets[pair]['precision']['amount']\n big_amount = amount * pow(10, symbol_prec)\n amount = floor(big_amount) / pow(10, symbol_prec)\n return amount\n\n def symbol_price_prec(self, pair, price: float):\n '''\n Returns the price buying or selling with to the precision the Exchange accepts\n Rounds up\n '''\n if self._api.markets[pair]['precision']['price']:\n symbol_prec = self._api.markets[pair]['precision']['price']\n big_price = price * pow(10, symbol_prec)\n price = ceil(big_price) / pow(10, symbol_prec)\n return price\n\n def dry_run_order(self, pair: str, ordertype: str, side: str, amount: float,\n rate: float, params: Dict = {}) -> Dict[str, Any]:\n order_id = f'dry_run_{side}_{randint(0, 10**6)}'\n dry_order = { # TODO: additional entry should be added for stoploss limit\n \"id\": order_id,\n 'pair': pair,\n 'price': rate,\n 'amount': amount,\n \"cost\": amount * rate,\n 'type': ordertype,\n 'side': 'buy',\n 'remaining': amount,\n 'datetime': arrow.utcnow().isoformat(),\n 'status': \"open\",\n 'fee': None,\n \"info\": {}\n }\n self._store_dry_order(dry_order)\n return dry_order\n\n def _store_dry_order(self, dry_order: Dict) -> None:\n closed_order = dry_order.copy()\n if closed_order[\"type\"] in [\"market\", \"limit\"]:\n closed_order.update({\n \"status\": \"closed\",\n \"filled\": closed_order[\"amount\"],\n \"remaining\": 0\n })\n self._dry_run_open_orders[closed_order[\"id\"]] = closed_order\n\n def create_order(self, pair: str, ordertype: str, side: str, amount: float,\n rate: float, params: Dict = {}) -> Dict:\n try:\n # Set the precision for amount and price(rate) as accepted by the exchange\n amount = self.symbol_amount_prec(pair, amount)\n rate = self.symbol_price_prec(pair, rate) if ordertype != 'market' else None\n\n return self._api.create_order(pair, ordertype, side,\n amount, rate, params)\n\n except ccxt.InsufficientFunds as e:\n raise DependencyException(\n f'Insufficient funds to create {ordertype} {side} order on market {pair}.'\n f'Tried to {side} amount {amount} at rate {rate} (total {rate*amount}).'\n f'Message: {e}')\n except ccxt.InvalidOrder as e:\n raise DependencyException(\n f'Could not create {ordertype} {side} order on market {pair}.'\n f'Tried to {side} amount {amount} at rate {rate} (total {rate*amount}).'\n f'Message: {e}')\n except (ccxt.NetworkError, ccxt.ExchangeError) as e:\n raise TemporaryError(\n f'Could not place {side} order due to {e.__class__.__name__}. 
Message: {e}')\n except ccxt.BaseError as e:\n raise OperationalException(e)\n\n def buy(self, pair: str, ordertype: str, amount: float,\n rate: float, time_in_force) -> Dict:\n\n if self._conf['dry_run']:\n dry_order = self.dry_run_order(pair, ordertype, \"buy\", amount, rate)\n return dry_order\n\n params = self._params.copy()\n if time_in_force != 'gtc':\n params.update({'timeInForce': time_in_force})\n\n return self.create_order(pair, ordertype, 'buy', amount, rate, params)\n\n def sell(self, pair: str, ordertype: str, amount: float,\n rate: float, time_in_force='gtc') -> Dict:\n\n if self._conf['dry_run']:\n dry_order = self.dry_run_order(pair, ordertype, \"sell\", amount, rate)\n return dry_order\n\n params = self._params.copy()\n if time_in_force != 'gtc':\n params.update({'timeInForce': time_in_force})\n\n return self.create_order(pair, ordertype, 'sell', amount, rate, params)\n\n def stoploss_limit(self, pair: str, amount: float, stop_price: float, rate: float) -> Dict:\n \"\"\"\n creates a stoploss limit order.\n NOTICE: it is not supported by all exchanges. only binance is tested for now.\n TODO: implementation maybe needs to be moved to the binance subclass\n \"\"\"\n ordertype = \"stop_loss_limit\"\n\n stop_price = self.symbol_price_prec(pair, stop_price)\n\n # Ensure rate is less than stop price\n if stop_price <= rate:\n raise OperationalException(\n 'In stoploss limit order, stop price should be more than limit price')\n\n if self._conf['dry_run']:\n dry_order = self.dry_run_order(\n pair, ordertype, \"sell\", amount, stop_price)\n return dry_order\n\n params = self._params.copy()\n params.update({'stopPrice': stop_price})\n\n order = self.create_order(pair, ordertype, 'sell', amount, rate, params)\n logger.info('stoploss limit order added for %s. '\n 'stop price: %s. limit: %s' % (pair, stop_price, rate))\n return order\n\n @retrier\n def get_balance(self, currency: str) -> float:\n if self._conf['dry_run']:\n return 999.9\n\n # ccxt exception is already handled by get_balances\n balances = self.get_balances()\n balance = balances.get(currency)\n if balance is None:\n raise TemporaryError(\n f'Could not get {currency} balance due to malformed exchange response: {balances}')\n return balance['free']\n\n @retrier\n def get_balances(self) -> dict:\n if self._conf['dry_run']:\n return {}\n\n try:\n balances = self._api.fetch_balance()\n # Remove additional info from ccxt results\n balances.pop(\"info\", None)\n balances.pop(\"free\", None)\n balances.pop(\"total\", None)\n balances.pop(\"used\", None)\n\n return balances\n except (ccxt.NetworkError, ccxt.ExchangeError) as e:\n raise TemporaryError(\n f'Could not get balance due to {e.__class__.__name__}. Message: {e}')\n except ccxt.BaseError as e:\n raise OperationalException(e)\n\n @retrier\n def get_tickers(self) -> Dict:\n try:\n return self._api.fetch_tickers()\n except ccxt.NotSupported as e:\n raise OperationalException(\n f'Exchange {self._api.name} does not support fetching tickers in batch.'\n f'Message: {e}')\n except (ccxt.NetworkError, ccxt.ExchangeError) as e:\n raise TemporaryError(\n f'Could not load tickers due to {e.__class__.__name__}. 
Message: {e}')\n except ccxt.BaseError as e:\n raise OperationalException(e)\n\n @retrier\n def get_ticker(self, pair: str, refresh: Optional[bool] = True) -> dict:\n if refresh or pair not in self._cached_ticker.keys():\n try:\n if pair not in self._api.markets:\n raise DependencyException(f\"Pair {pair} not available\")\n data = self._api.fetch_ticker(pair)\n try:\n self._cached_ticker[pair] = {\n 'bid': float(data['bid']),\n 'ask': float(data['ask']),\n }\n except KeyError:\n logger.debug(\"Could not cache ticker data for %s\", pair)\n return data\n except (ccxt.NetworkError, ccxt.ExchangeError) as e:\n raise TemporaryError(\n f'Could not load ticker due to {e.__class__.__name__}. Message: {e}')\n except ccxt.BaseError as e:\n raise OperationalException(e)\n else:\n logger.info(\"returning cached ticker-data for %s\", pair)\n return self._cached_ticker[pair]\n\n def get_history(self, pair: str, tick_interval: str,\n since_ms: int) -> List:\n \"\"\"\n Gets candle history using asyncio and returns the list of candles.\n Handles all async doing.\n \"\"\"\n return asyncio.get_event_loop().run_until_complete(\n self._async_get_history(pair=pair, tick_interval=tick_interval,\n since_ms=since_ms))\n\n async def _async_get_history(self, pair: str,\n tick_interval: str,\n since_ms: int) -> List:\n # Assume exchange returns 500 candles\n _LIMIT = 500\n\n one_call = constants.TICKER_INTERVAL_MINUTES[tick_interval] * 60 * _LIMIT * 1000\n logger.debug(\"one_call: %s\", one_call)\n input_coroutines = [self._async_get_candle_history(\n pair, tick_interval, since) for since in\n range(since_ms, arrow.utcnow().timestamp * 1000, one_call)]\n\n tickers = await asyncio.gather(*input_coroutines, return_exceptions=True)\n\n # Combine tickers\n data: List = []\n for p, ticker_interval, ticker in tickers:\n if p == pair:\n data.extend(ticker)\n # Sort data again after extending the result - above calls return in \"async order\"\n data = sorted(data, key=lambda x: x[0])\n logger.info(\"downloaded %s with length %s.\", pair, len(data))\n return data\n\n def refresh_latest_ohlcv(self, pair_list: List[Tuple[str, str]]) -> List[Tuple[str, List]]:\n \"\"\"\n Refresh in-memory ohlcv asyncronously and set `_klines` with the result\n \"\"\"\n logger.debug(\"Refreshing ohlcv data for %d pairs\", len(pair_list))\n\n input_coroutines = []\n\n # Gather coroutines to run\n for pair, ticker_interval in set(pair_list):\n if (not ((pair, ticker_interval) in self._klines)\n or self._now_is_time_to_refresh(pair, ticker_interval)):\n input_coroutines.append(self._async_get_candle_history(pair, ticker_interval))\n else:\n logger.debug(\"Using cached ohlcv data for %s, %s ...\", pair, ticker_interval)\n\n tickers = asyncio.get_event_loop().run_until_complete(\n asyncio.gather(*input_coroutines, return_exceptions=True))\n\n # handle caching\n for res in tickers:\n if isinstance(res, Exception):\n logger.warning(\"Async code raised an exception: %s\", res.__class__.__name__)\n continue\n pair = res[0]\n tick_interval = res[1]\n ticks = res[2]\n # keeping last candle time as last refreshed time of the pair\n if ticks:\n self._pairs_last_refresh_time[(pair, tick_interval)] = ticks[-1][0] // 1000\n # keeping parsed dataframe in cache\n self._klines[(pair, tick_interval)] = parse_ticker_dataframe(\n ticks, tick_interval, fill_missing=True)\n return tickers\n\n def _now_is_time_to_refresh(self, pair: str, ticker_interval: str) -> bool:\n # Calculating ticker interval in seconds\n interval_in_sec = 
constants.TICKER_INTERVAL_MINUTES[ticker_interval] * 60\n\n return not ((self._pairs_last_refresh_time.get((pair, ticker_interval), 0)\n + interval_in_sec) >= arrow.utcnow().timestamp)\n\n @retrier_async\n async def _async_get_candle_history(self, pair: str, tick_interval: str,\n since_ms: Optional[int] = None) -> Tuple[str, str, List]:\n \"\"\"\n Asyncronously gets candle histories using fetch_ohlcv\n returns tuple: (pair, tick_interval, ohlcv_list)\n \"\"\"\n try:\n # fetch ohlcv asynchronously\n logger.debug(\"fetching %s, %s since %s ...\", pair, tick_interval, since_ms)\n\n data = await self._api_async.fetch_ohlcv(pair, timeframe=tick_interval,\n since=since_ms)\n\n # Because some exchange sort Tickers ASC and other DESC.\n # Ex: Bittrex returns a list of tickers ASC (oldest first, newest last)\n # when GDAX returns a list of tickers DESC (newest first, oldest last)\n # Only sort if necessary to save computing time\n try:\n if data and data[0][0] > data[-1][0]:\n data = sorted(data, key=lambda x: x[0])\n except IndexError:\n logger.exception(\"Error loading %s. Result was %s.\", pair, data)\n return pair, tick_interval, []\n logger.debug(\"done fetching %s, %s ...\", pair, tick_interval)\n return pair, tick_interval, data\n\n except ccxt.NotSupported as e:\n raise OperationalException(\n f'Exchange {self._api.name} does not support fetching historical candlestick data.'\n f'Message: {e}')\n except (ccxt.NetworkError, ccxt.ExchangeError) as e:\n raise TemporaryError(\n f'Could not load ticker history due to {e.__class__.__name__}. Message: {e}')\n except ccxt.BaseError as e:\n raise OperationalException(f'Could not fetch ticker data. Msg: {e}')\n\n @retrier\n def cancel_order(self, order_id: str, pair: str) -> None:\n if self._conf['dry_run']:\n return\n\n try:\n return self._api.cancel_order(order_id, pair)\n except ccxt.InvalidOrder as e:\n raise DependencyException(\n f'Could not cancel order. Message: {e}')\n except (ccxt.NetworkError, ccxt.ExchangeError) as e:\n raise TemporaryError(\n f'Could not cancel order due to {e.__class__.__name__}. Message: {e}')\n except ccxt.BaseError as e:\n raise OperationalException(e)\n\n @retrier\n def get_order(self, order_id: str, pair: str) -> Dict:\n if self._conf['dry_run']:\n order = self._dry_run_open_orders[order_id]\n return order\n try:\n return self._api.fetch_order(order_id, pair)\n except ccxt.InvalidOrder as e:\n raise DependencyException(\n f'Could not get order. Message: {e}')\n except (ccxt.NetworkError, ccxt.ExchangeError) as e:\n raise TemporaryError(\n f'Could not get order due to {e.__class__.__name__}. Message: {e}')\n except ccxt.BaseError as e:\n raise OperationalException(e)\n\n @retrier\n def get_order_book(self, pair: str, limit: int = 100) -> dict:\n \"\"\"\n get order book level 2 from exchange\n\n Notes:\n 20180619: bittrex doesnt support limits -.-\n \"\"\"\n try:\n\n return self._api.fetch_l2_order_book(pair, limit)\n except ccxt.NotSupported as e:\n raise OperationalException(\n f'Exchange {self._api.name} does not support fetching order book.'\n f'Message: {e}')\n except (ccxt.NetworkError, ccxt.ExchangeError) as e:\n raise TemporaryError(\n f'Could not get order book due to {e.__class__.__name__}. 
Message: {e}')\n except ccxt.BaseError as e:\n raise OperationalException(e)\n\n @retrier\n def get_trades_for_order(self, order_id: str, pair: str, since: datetime) -> List:\n if self._conf['dry_run']:\n return []\n if not self.exchange_has('fetchMyTrades'):\n return []\n try:\n # Allow 5s offset to catch slight time offsets (discovered in #1185)\n my_trades = self._api.fetch_my_trades(pair, since.timestamp() - 5)\n matched_trades = [trade for trade in my_trades if trade['order'] == order_id]\n\n return matched_trades\n\n except ccxt.NetworkError as e:\n raise TemporaryError(\n f'Could not get trades due to networking error. Message: {e}')\n except ccxt.BaseError as e:\n raise OperationalException(e)\n\n @retrier\n def get_markets(self) -> List[dict]:\n try:\n return self._api.fetch_markets()\n except (ccxt.NetworkError, ccxt.ExchangeError) as e:\n raise TemporaryError(\n f'Could not load markets due to {e.__class__.__name__}. Message: {e}')\n except ccxt.BaseError as e:\n raise OperationalException(e)\n\n @retrier\n def get_fee(self, symbol='ETH/BTC', type='', side='', amount=1,\n price=1, taker_or_maker='maker') -> float:\n try:\n # validate that markets are loaded before trying to get fee\n if self._api.markets is None or len(self._api.markets) == 0:\n self._api.load_markets()\n\n return self._api.calculate_fee(symbol=symbol, type=type, side=side, amount=amount,\n price=price, takerOrMaker=taker_or_maker)['rate']\n except (ccxt.NetworkError, ccxt.ExchangeError) as e:\n raise TemporaryError(\n f'Could not get fee info due to {e.__class__.__name__}. Message: {e}')\n except ccxt.BaseError as e:\n raise OperationalException(e)\n", "path": "freqtrade/exchange/exchange.py"}], "after_files": [{"content": "# pragma pylint: disable=W0603\n\"\"\" Cryptocurrency Exchanges support \"\"\"\nimport logging\nimport inspect\nfrom random import randint\nfrom typing import List, Dict, Tuple, Any, Optional\nfrom datetime import datetime\nfrom math import floor, ceil\n\nimport arrow\nimport asyncio\nimport ccxt\nimport ccxt.async_support as ccxt_async\nfrom pandas import DataFrame\n\nfrom freqtrade import constants, OperationalException, DependencyException, TemporaryError\nfrom freqtrade.data.converter import parse_ticker_dataframe\n\nlogger = logging.getLogger(__name__)\n\nAPI_RETRY_COUNT = 4\n\n\n# Urls to exchange markets, insert quote and base with .format()\n_EXCHANGE_URLS = {\n ccxt.bittrex.__name__: '/Market/Index?MarketName={quote}-{base}',\n ccxt.binance.__name__: '/tradeDetail.html?symbol={base}_{quote}',\n}\n\n\ndef retrier_async(f):\n async def wrapper(*args, **kwargs):\n count = kwargs.pop('count', API_RETRY_COUNT)\n try:\n return await f(*args, **kwargs)\n except (TemporaryError, DependencyException) as ex:\n logger.warning('%s() returned exception: \"%s\"', f.__name__, ex)\n if count > 0:\n count -= 1\n kwargs.update({'count': count})\n logger.warning('retrying %s() still for %s times', f.__name__, count)\n return await wrapper(*args, **kwargs)\n else:\n logger.warning('Giving up retrying: %s()', f.__name__)\n raise ex\n return wrapper\n\n\ndef retrier(f):\n def wrapper(*args, **kwargs):\n count = kwargs.pop('count', API_RETRY_COUNT)\n try:\n return f(*args, **kwargs)\n except (TemporaryError, DependencyException) as ex:\n logger.warning('%s() returned exception: \"%s\"', f.__name__, ex)\n if count > 0:\n count -= 1\n kwargs.update({'count': count})\n logger.warning('retrying %s() still for %s times', f.__name__, count)\n return wrapper(*args, **kwargs)\n else:\n logger.warning('Giving up 
retrying: %s()', f.__name__)\n raise ex\n return wrapper\n\n\nclass Exchange(object):\n\n _conf: Dict = {}\n _params: Dict = {}\n\n # Dict to specify which options each exchange implements\n # TODO: this should be merged with attributes from subclasses\n # To avoid having to copy/paste this to all subclasses.\n _ft_has = {\n \"stoploss_on_exchange\": False,\n }\n\n def __init__(self, config: dict) -> None:\n \"\"\"\n Initializes this module with the given config,\n it does basic validation whether the specified exchange and pairs are valid.\n :return: None\n \"\"\"\n self._conf.update(config)\n\n self._cached_ticker: Dict[str, Any] = {}\n\n # Holds last candle refreshed time of each pair\n self._pairs_last_refresh_time: Dict[Tuple[str, str], int] = {}\n\n # Holds candles\n self._klines: Dict[Tuple[str, str], DataFrame] = {}\n\n # Holds all open sell orders for dry_run\n self._dry_run_open_orders: Dict[str, Any] = {}\n\n if config['dry_run']:\n logger.info('Instance is running with dry_run enabled')\n\n exchange_config = config['exchange']\n self._api: ccxt.Exchange = self._init_ccxt(\n exchange_config, ccxt_kwargs=exchange_config.get('ccxt_config'))\n self._api_async: ccxt_async.Exchange = self._init_ccxt(\n exchange_config, ccxt_async, ccxt_kwargs=exchange_config.get('ccxt_async_config'))\n\n logger.info('Using Exchange \"%s\"', self.name)\n\n self.markets = self._load_markets()\n # Check if all pairs are available\n self.validate_pairs(config['exchange']['pair_whitelist'])\n self.validate_ordertypes(config.get('order_types', {}))\n self.validate_order_time_in_force(config.get('order_time_in_force', {}))\n if config.get('ticker_interval'):\n # Check if timeframe is available\n self.validate_timeframes(config['ticker_interval'])\n\n def __del__(self):\n \"\"\"\n Destructor - clean up async stuff\n \"\"\"\n logger.debug(\"Exchange object destroyed, closing async loop\")\n if self._api_async and inspect.iscoroutinefunction(self._api_async.close):\n asyncio.get_event_loop().run_until_complete(self._api_async.close())\n\n def _init_ccxt(self, exchange_config: dict, ccxt_module=ccxt,\n ccxt_kwargs: dict = None) -> ccxt.Exchange:\n \"\"\"\n Initialize ccxt with given config and return valid\n ccxt instance.\n \"\"\"\n # Find matching class for the given exchange name\n name = exchange_config['name']\n\n if name not in ccxt_module.exchanges:\n raise OperationalException(f'Exchange {name} is not supported')\n\n ex_config = {\n 'apiKey': exchange_config.get('key'),\n 'secret': exchange_config.get('secret'),\n 'password': exchange_config.get('password'),\n 'uid': exchange_config.get('uid', ''),\n 'enableRateLimit': exchange_config.get('ccxt_rate_limit', True)\n }\n if ccxt_kwargs:\n logger.info('Applying additional ccxt config: %s', ccxt_kwargs)\n ex_config.update(ccxt_kwargs)\n try:\n\n api = getattr(ccxt_module, name.lower())(ex_config)\n except (KeyError, AttributeError):\n raise OperationalException(f'Exchange {name} is not supported')\n\n self.set_sandbox(api, exchange_config, name)\n\n return api\n\n @property\n def name(self) -> str:\n \"\"\"exchange Name (from ccxt)\"\"\"\n return self._api.name\n\n @property\n def id(self) -> str:\n \"\"\"exchange ccxt id\"\"\"\n return self._api.id\n\n def klines(self, pair_interval: Tuple[str, str], copy=True) -> DataFrame:\n if pair_interval in self._klines:\n return self._klines[pair_interval].copy() if copy else self._klines[pair_interval]\n else:\n return DataFrame()\n\n def set_sandbox(self, api, exchange_config: dict, name: str):\n if 
exchange_config.get('sandbox'):\n if api.urls.get('test'):\n api.urls['api'] = api.urls['test']\n logger.info(\"Enabled Sandbox API on %s\", name)\n else:\n logger.warning(name, \"No Sandbox URL in CCXT, exiting. \"\n \"Please check your config.json\")\n raise OperationalException(f'Exchange {name} does not provide a sandbox api')\n\n def _load_async_markets(self) -> None:\n try:\n if self._api_async:\n asyncio.get_event_loop().run_until_complete(self._api_async.load_markets())\n\n except ccxt.BaseError as e:\n logger.warning('Could not load async markets. Reason: %s', e)\n return\n\n def _load_markets(self) -> Dict[str, Any]:\n \"\"\" Initialize markets both sync and async \"\"\"\n try:\n markets = self._api.load_markets()\n self._load_async_markets()\n return markets\n except ccxt.BaseError as e:\n logger.warning('Unable to initialize markets. Reason: %s', e)\n return {}\n\n def validate_pairs(self, pairs: List[str]) -> None:\n \"\"\"\n Checks if all given pairs are tradable on the current exchange.\n Raises OperationalException if one pair is not available.\n :param pairs: list of pairs\n :return: None\n \"\"\"\n\n if not self.markets:\n logger.warning('Unable to validate pairs (assuming they are correct).')\n # return\n\n stake_cur = self._conf['stake_currency']\n for pair in pairs:\n # Note: ccxt has BaseCurrency/QuoteCurrency format for pairs\n # TODO: add a support for having coins in BTC/USDT format\n if not pair.endswith(stake_cur):\n raise OperationalException(\n f'Pair {pair} not compatible with stake_currency: {stake_cur}')\n if self.markets and pair not in self.markets:\n raise OperationalException(\n f'Pair {pair} is not available at {self.name}'\n f'Please remove {pair} from your whitelist.')\n\n def validate_timeframes(self, timeframe: List[str]) -> None:\n \"\"\"\n Checks if ticker interval from config is a supported timeframe on the exchange\n \"\"\"\n timeframes = self._api.timeframes\n if timeframe not in timeframes:\n raise OperationalException(\n f'Invalid ticker {timeframe}, this Exchange supports {timeframes}')\n\n def validate_ordertypes(self, order_types: Dict) -> None:\n \"\"\"\n Checks if order-types configured in strategy/config are supported\n \"\"\"\n if any(v == 'market' for k, v in order_types.items()):\n if not self.exchange_has('createMarketOrder'):\n raise OperationalException(\n f'Exchange {self.name} does not support market orders.')\n\n if (order_types.get(\"stoploss_on_exchange\")\n and not self._ft_has.get(\"stoploss_on_exchange\", False)):\n raise OperationalException(\n 'On exchange stoploss is not supported for %s.' % self.name\n )\n\n def validate_order_time_in_force(self, order_time_in_force: Dict) -> None:\n \"\"\"\n Checks if order time in force configured in strategy/config are supported\n \"\"\"\n if any(v != 'gtc' for k, v in order_time_in_force.items()):\n if self.name != 'Binance':\n raise OperationalException(\n f'Time in force policies are not supporetd for {self.name} yet.')\n\n def exchange_has(self, endpoint: str) -> bool:\n \"\"\"\n Checks if exchange implements a specific API endpoint.\n Wrapper around ccxt 'has' attribute\n :param endpoint: Name of endpoint (e.g. 
'fetchOHLCV', 'fetchTickers')\n :return: bool\n \"\"\"\n return endpoint in self._api.has and self._api.has[endpoint]\n\n def symbol_amount_prec(self, pair, amount: float):\n '''\n Returns the amount to buy or sell to a precision the Exchange accepts\n Rounded down\n '''\n if self._api.markets[pair]['precision']['amount']:\n symbol_prec = self._api.markets[pair]['precision']['amount']\n big_amount = amount * pow(10, symbol_prec)\n amount = floor(big_amount) / pow(10, symbol_prec)\n return amount\n\n def symbol_price_prec(self, pair, price: float):\n '''\n Returns the price buying or selling with to the precision the Exchange accepts\n Rounds up\n '''\n if self._api.markets[pair]['precision']['price']:\n symbol_prec = self._api.markets[pair]['precision']['price']\n big_price = price * pow(10, symbol_prec)\n price = ceil(big_price) / pow(10, symbol_prec)\n return price\n\n def dry_run_order(self, pair: str, ordertype: str, side: str, amount: float,\n rate: float, params: Dict = {}) -> Dict[str, Any]:\n order_id = f'dry_run_{side}_{randint(0, 10**6)}'\n dry_order = { # TODO: additional entry should be added for stoploss limit\n \"id\": order_id,\n 'pair': pair,\n 'price': rate,\n 'amount': amount,\n \"cost\": amount * rate,\n 'type': ordertype,\n 'side': side,\n 'remaining': amount,\n 'datetime': arrow.utcnow().isoformat(),\n 'status': \"open\",\n 'fee': None,\n \"info\": {}\n }\n self._store_dry_order(dry_order)\n return dry_order\n\n def _store_dry_order(self, dry_order: Dict) -> None:\n closed_order = dry_order.copy()\n if closed_order[\"type\"] in [\"market\", \"limit\"]:\n closed_order.update({\n \"status\": \"closed\",\n \"filled\": closed_order[\"amount\"],\n \"remaining\": 0\n })\n self._dry_run_open_orders[closed_order[\"id\"]] = closed_order\n\n def create_order(self, pair: str, ordertype: str, side: str, amount: float,\n rate: float, params: Dict = {}) -> Dict:\n try:\n # Set the precision for amount and price(rate) as accepted by the exchange\n amount = self.symbol_amount_prec(pair, amount)\n rate = self.symbol_price_prec(pair, rate) if ordertype != 'market' else None\n\n return self._api.create_order(pair, ordertype, side,\n amount, rate, params)\n\n except ccxt.InsufficientFunds as e:\n raise DependencyException(\n f'Insufficient funds to create {ordertype} {side} order on market {pair}.'\n f'Tried to {side} amount {amount} at rate {rate} (total {rate*amount}).'\n f'Message: {e}')\n except ccxt.InvalidOrder as e:\n raise DependencyException(\n f'Could not create {ordertype} {side} order on market {pair}.'\n f'Tried to {side} amount {amount} at rate {rate} (total {rate*amount}).'\n f'Message: {e}')\n except (ccxt.NetworkError, ccxt.ExchangeError) as e:\n raise TemporaryError(\n f'Could not place {side} order due to {e.__class__.__name__}. 
Message: {e}')\n except ccxt.BaseError as e:\n raise OperationalException(e)\n\n def buy(self, pair: str, ordertype: str, amount: float,\n rate: float, time_in_force) -> Dict:\n\n if self._conf['dry_run']:\n dry_order = self.dry_run_order(pair, ordertype, \"buy\", amount, rate)\n return dry_order\n\n params = self._params.copy()\n if time_in_force != 'gtc':\n params.update({'timeInForce': time_in_force})\n\n return self.create_order(pair, ordertype, 'buy', amount, rate, params)\n\n def sell(self, pair: str, ordertype: str, amount: float,\n rate: float, time_in_force='gtc') -> Dict:\n\n if self._conf['dry_run']:\n dry_order = self.dry_run_order(pair, ordertype, \"sell\", amount, rate)\n return dry_order\n\n params = self._params.copy()\n if time_in_force != 'gtc':\n params.update({'timeInForce': time_in_force})\n\n return self.create_order(pair, ordertype, 'sell', amount, rate, params)\n\n def stoploss_limit(self, pair: str, amount: float, stop_price: float, rate: float) -> Dict:\n \"\"\"\n creates a stoploss limit order.\n NOTICE: it is not supported by all exchanges. only binance is tested for now.\n TODO: implementation maybe needs to be moved to the binance subclass\n \"\"\"\n ordertype = \"stop_loss_limit\"\n\n stop_price = self.symbol_price_prec(pair, stop_price)\n\n # Ensure rate is less than stop price\n if stop_price <= rate:\n raise OperationalException(\n 'In stoploss limit order, stop price should be more than limit price')\n\n if self._conf['dry_run']:\n dry_order = self.dry_run_order(\n pair, ordertype, \"sell\", amount, stop_price)\n return dry_order\n\n params = self._params.copy()\n params.update({'stopPrice': stop_price})\n\n order = self.create_order(pair, ordertype, 'sell', amount, rate, params)\n logger.info('stoploss limit order added for %s. '\n 'stop price: %s. limit: %s' % (pair, stop_price, rate))\n return order\n\n @retrier\n def get_balance(self, currency: str) -> float:\n if self._conf['dry_run']:\n return 999.9\n\n # ccxt exception is already handled by get_balances\n balances = self.get_balances()\n balance = balances.get(currency)\n if balance is None:\n raise TemporaryError(\n f'Could not get {currency} balance due to malformed exchange response: {balances}')\n return balance['free']\n\n @retrier\n def get_balances(self) -> dict:\n if self._conf['dry_run']:\n return {}\n\n try:\n balances = self._api.fetch_balance()\n # Remove additional info from ccxt results\n balances.pop(\"info\", None)\n balances.pop(\"free\", None)\n balances.pop(\"total\", None)\n balances.pop(\"used\", None)\n\n return balances\n except (ccxt.NetworkError, ccxt.ExchangeError) as e:\n raise TemporaryError(\n f'Could not get balance due to {e.__class__.__name__}. Message: {e}')\n except ccxt.BaseError as e:\n raise OperationalException(e)\n\n @retrier\n def get_tickers(self) -> Dict:\n try:\n return self._api.fetch_tickers()\n except ccxt.NotSupported as e:\n raise OperationalException(\n f'Exchange {self._api.name} does not support fetching tickers in batch.'\n f'Message: {e}')\n except (ccxt.NetworkError, ccxt.ExchangeError) as e:\n raise TemporaryError(\n f'Could not load tickers due to {e.__class__.__name__}. 
Message: {e}')\n except ccxt.BaseError as e:\n raise OperationalException(e)\n\n @retrier\n def get_ticker(self, pair: str, refresh: Optional[bool] = True) -> dict:\n if refresh or pair not in self._cached_ticker.keys():\n try:\n if pair not in self._api.markets:\n raise DependencyException(f\"Pair {pair} not available\")\n data = self._api.fetch_ticker(pair)\n try:\n self._cached_ticker[pair] = {\n 'bid': float(data['bid']),\n 'ask': float(data['ask']),\n }\n except KeyError:\n logger.debug(\"Could not cache ticker data for %s\", pair)\n return data\n except (ccxt.NetworkError, ccxt.ExchangeError) as e:\n raise TemporaryError(\n f'Could not load ticker due to {e.__class__.__name__}. Message: {e}')\n except ccxt.BaseError as e:\n raise OperationalException(e)\n else:\n logger.info(\"returning cached ticker-data for %s\", pair)\n return self._cached_ticker[pair]\n\n def get_history(self, pair: str, tick_interval: str,\n since_ms: int) -> List:\n \"\"\"\n Gets candle history using asyncio and returns the list of candles.\n Handles all async doing.\n \"\"\"\n return asyncio.get_event_loop().run_until_complete(\n self._async_get_history(pair=pair, tick_interval=tick_interval,\n since_ms=since_ms))\n\n async def _async_get_history(self, pair: str,\n tick_interval: str,\n since_ms: int) -> List:\n # Assume exchange returns 500 candles\n _LIMIT = 500\n\n one_call = constants.TICKER_INTERVAL_MINUTES[tick_interval] * 60 * _LIMIT * 1000\n logger.debug(\"one_call: %s\", one_call)\n input_coroutines = [self._async_get_candle_history(\n pair, tick_interval, since) for since in\n range(since_ms, arrow.utcnow().timestamp * 1000, one_call)]\n\n tickers = await asyncio.gather(*input_coroutines, return_exceptions=True)\n\n # Combine tickers\n data: List = []\n for p, ticker_interval, ticker in tickers:\n if p == pair:\n data.extend(ticker)\n # Sort data again after extending the result - above calls return in \"async order\"\n data = sorted(data, key=lambda x: x[0])\n logger.info(\"downloaded %s with length %s.\", pair, len(data))\n return data\n\n def refresh_latest_ohlcv(self, pair_list: List[Tuple[str, str]]) -> List[Tuple[str, List]]:\n \"\"\"\n Refresh in-memory ohlcv asyncronously and set `_klines` with the result\n \"\"\"\n logger.debug(\"Refreshing ohlcv data for %d pairs\", len(pair_list))\n\n input_coroutines = []\n\n # Gather coroutines to run\n for pair, ticker_interval in set(pair_list):\n if (not ((pair, ticker_interval) in self._klines)\n or self._now_is_time_to_refresh(pair, ticker_interval)):\n input_coroutines.append(self._async_get_candle_history(pair, ticker_interval))\n else:\n logger.debug(\"Using cached ohlcv data for %s, %s ...\", pair, ticker_interval)\n\n tickers = asyncio.get_event_loop().run_until_complete(\n asyncio.gather(*input_coroutines, return_exceptions=True))\n\n # handle caching\n for res in tickers:\n if isinstance(res, Exception):\n logger.warning(\"Async code raised an exception: %s\", res.__class__.__name__)\n continue\n pair = res[0]\n tick_interval = res[1]\n ticks = res[2]\n # keeping last candle time as last refreshed time of the pair\n if ticks:\n self._pairs_last_refresh_time[(pair, tick_interval)] = ticks[-1][0] // 1000\n # keeping parsed dataframe in cache\n self._klines[(pair, tick_interval)] = parse_ticker_dataframe(\n ticks, tick_interval, fill_missing=True)\n return tickers\n\n def _now_is_time_to_refresh(self, pair: str, ticker_interval: str) -> bool:\n # Calculating ticker interval in seconds\n interval_in_sec = 
constants.TICKER_INTERVAL_MINUTES[ticker_interval] * 60\n\n return not ((self._pairs_last_refresh_time.get((pair, ticker_interval), 0)\n + interval_in_sec) >= arrow.utcnow().timestamp)\n\n @retrier_async\n async def _async_get_candle_history(self, pair: str, tick_interval: str,\n since_ms: Optional[int] = None) -> Tuple[str, str, List]:\n \"\"\"\n Asyncronously gets candle histories using fetch_ohlcv\n returns tuple: (pair, tick_interval, ohlcv_list)\n \"\"\"\n try:\n # fetch ohlcv asynchronously\n logger.debug(\"fetching %s, %s since %s ...\", pair, tick_interval, since_ms)\n\n data = await self._api_async.fetch_ohlcv(pair, timeframe=tick_interval,\n since=since_ms)\n\n # Because some exchange sort Tickers ASC and other DESC.\n # Ex: Bittrex returns a list of tickers ASC (oldest first, newest last)\n # when GDAX returns a list of tickers DESC (newest first, oldest last)\n # Only sort if necessary to save computing time\n try:\n if data and data[0][0] > data[-1][0]:\n data = sorted(data, key=lambda x: x[0])\n except IndexError:\n logger.exception(\"Error loading %s. Result was %s.\", pair, data)\n return pair, tick_interval, []\n logger.debug(\"done fetching %s, %s ...\", pair, tick_interval)\n return pair, tick_interval, data\n\n except ccxt.NotSupported as e:\n raise OperationalException(\n f'Exchange {self._api.name} does not support fetching historical candlestick data.'\n f'Message: {e}')\n except (ccxt.NetworkError, ccxt.ExchangeError) as e:\n raise TemporaryError(\n f'Could not load ticker history due to {e.__class__.__name__}. Message: {e}')\n except ccxt.BaseError as e:\n raise OperationalException(f'Could not fetch ticker data. Msg: {e}')\n\n @retrier\n def cancel_order(self, order_id: str, pair: str) -> None:\n if self._conf['dry_run']:\n return\n\n try:\n return self._api.cancel_order(order_id, pair)\n except ccxt.InvalidOrder as e:\n raise DependencyException(\n f'Could not cancel order. Message: {e}')\n except (ccxt.NetworkError, ccxt.ExchangeError) as e:\n raise TemporaryError(\n f'Could not cancel order due to {e.__class__.__name__}. Message: {e}')\n except ccxt.BaseError as e:\n raise OperationalException(e)\n\n @retrier\n def get_order(self, order_id: str, pair: str) -> Dict:\n if self._conf['dry_run']:\n order = self._dry_run_open_orders[order_id]\n return order\n try:\n return self._api.fetch_order(order_id, pair)\n except ccxt.InvalidOrder as e:\n raise DependencyException(\n f'Could not get order. Message: {e}')\n except (ccxt.NetworkError, ccxt.ExchangeError) as e:\n raise TemporaryError(\n f'Could not get order due to {e.__class__.__name__}. Message: {e}')\n except ccxt.BaseError as e:\n raise OperationalException(e)\n\n @retrier\n def get_order_book(self, pair: str, limit: int = 100) -> dict:\n \"\"\"\n get order book level 2 from exchange\n\n Notes:\n 20180619: bittrex doesnt support limits -.-\n \"\"\"\n try:\n\n return self._api.fetch_l2_order_book(pair, limit)\n except ccxt.NotSupported as e:\n raise OperationalException(\n f'Exchange {self._api.name} does not support fetching order book.'\n f'Message: {e}')\n except (ccxt.NetworkError, ccxt.ExchangeError) as e:\n raise TemporaryError(\n f'Could not get order book due to {e.__class__.__name__}. 
Message: {e}')\n except ccxt.BaseError as e:\n raise OperationalException(e)\n\n @retrier\n def get_trades_for_order(self, order_id: str, pair: str, since: datetime) -> List:\n if self._conf['dry_run']:\n return []\n if not self.exchange_has('fetchMyTrades'):\n return []\n try:\n # Allow 5s offset to catch slight time offsets (discovered in #1185)\n my_trades = self._api.fetch_my_trades(pair, since.timestamp() - 5)\n matched_trades = [trade for trade in my_trades if trade['order'] == order_id]\n\n return matched_trades\n\n except ccxt.NetworkError as e:\n raise TemporaryError(\n f'Could not get trades due to networking error. Message: {e}')\n except ccxt.BaseError as e:\n raise OperationalException(e)\n\n @retrier\n def get_markets(self) -> List[dict]:\n try:\n return self._api.fetch_markets()\n except (ccxt.NetworkError, ccxt.ExchangeError) as e:\n raise TemporaryError(\n f'Could not load markets due to {e.__class__.__name__}. Message: {e}')\n except ccxt.BaseError as e:\n raise OperationalException(e)\n\n @retrier\n def get_fee(self, symbol='ETH/BTC', type='', side='', amount=1,\n price=1, taker_or_maker='maker') -> float:\n try:\n # validate that markets are loaded before trying to get fee\n if self._api.markets is None or len(self._api.markets) == 0:\n self._api.load_markets()\n\n return self._api.calculate_fee(symbol=symbol, type=type, side=side, amount=amount,\n price=price, takerOrMaker=taker_or_maker)['rate']\n except (ccxt.NetworkError, ccxt.ExchangeError) as e:\n raise TemporaryError(\n f'Could not get fee info due to {e.__class__.__name__}. Message: {e}')\n except ccxt.BaseError as e:\n raise OperationalException(e)\n", "path": "freqtrade/exchange/exchange.py"}]} |
gh_patches_debug_1187 | rasdani/github-patches | git_diff | MongoEngine__mongoengine-2519 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
the second argument type is wrong when invoking isinstance()
code file: /mongoengine/queryset/base.py
line num: 328
argument: self._document
the self._document is an object, NOT a TYPE; I guess you want to pass the class Document instead.
--- END ISSUE ---
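For context, a minimal self-contained sketch of the Python rule the issue is pointing at (hypothetical names only, not the mongoengine code itself): isinstance() accepts a class, or a tuple of classes, as its second argument and raises TypeError when handed an instance.
```
class Document:
    """Stand-in class; not the real mongoengine Document."""


doc = Document()

# Passing a type as the second argument works as expected.
print(isinstance(doc, Document))  # True

# Passing an instance instead of a type raises TypeError.
try:
    isinstance(doc, doc)
except TypeError as exc:
    print(exc)  # e.g. "isinstance() arg 2 must be a type or tuple of types"
```
Whether `self._document` at the cited line actually holds an instance or the document class is what a fix needs to confirm.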
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mongoengine/document.py`
Content:
```
1 import re
2
3 import pymongo
4 from bson.dbref import DBRef
5 from pymongo.read_preferences import ReadPreference
6
7 from mongoengine import signals
8 from mongoengine.base import (
9 BaseDict,
10 BaseDocument,
11 BaseList,
12 DocumentMetaclass,
13 EmbeddedDocumentList,
14 TopLevelDocumentMetaclass,
15 get_document,
16 )
17 from mongoengine.common import _import_class
18 from mongoengine.connection import DEFAULT_CONNECTION_NAME, get_db
19 from mongoengine.context_managers import (
20 set_write_concern,
21 switch_collection,
22 switch_db,
23 )
24 from mongoengine.errors import (
25 InvalidDocumentError,
26 InvalidQueryError,
27 SaveConditionError,
28 )
29 from mongoengine.pymongo_support import list_collection_names
30 from mongoengine.queryset import (
31 NotUniqueError,
32 OperationError,
33 QuerySet,
34 transform,
35 )
36
37 __all__ = (
38 "Document",
39 "EmbeddedDocument",
40 "DynamicDocument",
41 "DynamicEmbeddedDocument",
42 "OperationError",
43 "InvalidCollectionError",
44 "NotUniqueError",
45 "MapReduceDocument",
46 )
47
48
49 def includes_cls(fields):
50 """Helper function used for ensuring and comparing indexes."""
51 first_field = None
52 if len(fields):
53 if isinstance(fields[0], str):
54 first_field = fields[0]
55 elif isinstance(fields[0], (list, tuple)) and len(fields[0]):
56 first_field = fields[0][0]
57 return first_field == "_cls"
58
59
60 class InvalidCollectionError(Exception):
61 pass
62
63
64 class EmbeddedDocument(BaseDocument, metaclass=DocumentMetaclass):
65 r"""A :class:`~mongoengine.Document` that isn't stored in its own
66 collection. :class:`~mongoengine.EmbeddedDocument`\ s should be used as
67 fields on :class:`~mongoengine.Document`\ s through the
68 :class:`~mongoengine.EmbeddedDocumentField` field type.
69
70 A :class:`~mongoengine.EmbeddedDocument` subclass may be itself subclassed,
71 to create a specialised version of the embedded document that will be
72 stored in the same collection. To facilitate this behaviour a `_cls`
73 field is added to documents (hidden though the MongoEngine interface).
74 To enable this behaviour set :attr:`allow_inheritance` to ``True`` in the
75 :attr:`meta` dictionary.
76 """
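# Illustrative sketch only -- not part of the original numbered file. A
# hypothetical use of EmbeddedDocument as described in the docstring above
# (StringField, ListField and EmbeddedDocumentField are assumed to come from
# mongoengine.fields):
#
#     class Comment(EmbeddedDocument):
#         text = StringField()
#
#     class Post(Document):
#         comments = ListField(EmbeddedDocumentField(Comment))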
77
78 __slots__ = ("_instance",)
79
80 # my_metaclass is defined so that metaclass can be queried in Python 2 & 3
81 my_metaclass = DocumentMetaclass
82
83 # A generic embedded document doesn't have any immutable properties
84 # that describe it uniquely, hence it shouldn't be hashable. You can
85 # define your own __hash__ method on a subclass if you need your
86 # embedded documents to be hashable.
87 __hash__ = None
88
89 def __init__(self, *args, **kwargs):
90 super().__init__(*args, **kwargs)
91 self._instance = None
92 self._changed_fields = []
93
94 def __eq__(self, other):
95 if isinstance(other, self.__class__):
96 return self._data == other._data
97 return False
98
99 def __ne__(self, other):
100 return not self.__eq__(other)
101
102 def to_mongo(self, *args, **kwargs):
103 data = super().to_mongo(*args, **kwargs)
104
105 # remove _id from the SON if it's in it and it's None
106 if "_id" in data and data["_id"] is None:
107 del data["_id"]
108
109 return data
110
111
112 class Document(BaseDocument, metaclass=TopLevelDocumentMetaclass):
113 """The base class used for defining the structure and properties of
114 collections of documents stored in MongoDB. Inherit from this class, and
115 add fields as class attributes to define a document's structure.
116 Individual documents may then be created by making instances of the
117 :class:`~mongoengine.Document` subclass.
118
119 By default, the MongoDB collection used to store documents created using a
120 :class:`~mongoengine.Document` subclass will be the name of the subclass
121 converted to snake_case. A different collection may be specified by
122 providing :attr:`collection` to the :attr:`meta` dictionary in the class
123 definition.
124
125 A :class:`~mongoengine.Document` subclass may be itself subclassed, to
126 create a specialised version of the document that will be stored in the
127 same collection. To facilitate this behaviour a `_cls`
128 field is added to documents (hidden though the MongoEngine interface).
129 To enable this behaviourset :attr:`allow_inheritance` to ``True`` in the
130 :attr:`meta` dictionary.
131
132 A :class:`~mongoengine.Document` may use a **Capped Collection** by
133 specifying :attr:`max_documents` and :attr:`max_size` in the :attr:`meta`
134 dictionary. :attr:`max_documents` is the maximum number of documents that
135 is allowed to be stored in the collection, and :attr:`max_size` is the
136 maximum size of the collection in bytes. :attr:`max_size` is rounded up
137 to the next multiple of 256 by MongoDB internally and mongoengine before.
138 Use also a multiple of 256 to avoid confusions. If :attr:`max_size` is not
139 specified and :attr:`max_documents` is, :attr:`max_size` defaults to
140 10485760 bytes (10MB).
141
142 Indexes may be created by specifying :attr:`indexes` in the :attr:`meta`
143 dictionary. The value should be a list of field names or tuples of field
144 names. Index direction may be specified by prefixing the field names with
145 a **+** or **-** sign.
146
147 Automatic index creation can be disabled by specifying
148 :attr:`auto_create_index` in the :attr:`meta` dictionary. If this is set to
149 False then indexes will not be created by MongoEngine. This is useful in
150 production systems where index creation is performed as part of a
151 deployment system.
152
153 By default, _cls will be added to the start of every index (that
154 doesn't contain a list) if allow_inheritance is True. This can be
155 disabled by either setting cls to False on the specific index or
156 by setting index_cls to False on the meta dictionary for the document.
157
158 By default, any extra attribute existing in stored data but not declared
159 in your model will raise a :class:`~mongoengine.FieldDoesNotExist` error.
160 This can be disabled by setting :attr:`strict` to ``False``
161 in the :attr:`meta` dictionary.
162 """
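# Illustrative sketch only -- not part of the original numbered file. A
# hypothetical Document subclass using the meta options described in the
# docstring above (field types assumed from mongoengine.fields):
#
#     class BlogPost(Document):
#         title = StringField()
#         created = DateTimeField()
#         meta = {
#             "collection": "blog_posts",        # explicit collection name
#             "max_documents": 1000,             # capped collection: document limit
#             "max_size": 256 * 4096,            # capped collection: size in bytes, multiple of 256
#             "indexes": ["title", "-created"],  # "-" prefix = descending index
#             "auto_create_index": False,        # create indexes at deploy time instead
#             "strict": False,                   # tolerate undeclared fields in stored data
#         }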
163
164 # my_metaclass is defined so that metaclass can be queried in Python 2 & 3
165 my_metaclass = TopLevelDocumentMetaclass
166
167 __slots__ = ("__objects",)
168
169 @property
170 def pk(self):
171 """Get the primary key."""
172 if "id_field" not in self._meta:
173 return None
174 return getattr(self, self._meta["id_field"])
175
176 @pk.setter
177 def pk(self, value):
178 """Set the primary key."""
179 return setattr(self, self._meta["id_field"], value)
180
181 def __hash__(self):
182 """Return the hash based on the PK of this document. If it's new
183 and doesn't have a PK yet, return the default object hash instead.
184 """
185 if self.pk is None:
186 return super(BaseDocument, self).__hash__()
187
188 return hash(self.pk)
189
190 @classmethod
191 def _get_db(cls):
192 """Some Model using other db_alias"""
193 return get_db(cls._meta.get("db_alias", DEFAULT_CONNECTION_NAME))
194
195 @classmethod
196 def _disconnect(cls):
197 """Detach the Document class from the (cached) database collection"""
198 cls._collection = None
199
200 @classmethod
201 def _get_collection(cls):
202 """Return the PyMongo collection corresponding to this document.
203
204 Upon first call, this method:
205 1. Initializes a :class:`~pymongo.collection.Collection` corresponding
206 to this document.
207 2. Creates indexes defined in this document's :attr:`meta` dictionary.
208 This happens only if `auto_create_index` is True.
209 """
210 if not hasattr(cls, "_collection") or cls._collection is None:
211 # Get the collection, either capped or regular.
212 if cls._meta.get("max_size") or cls._meta.get("max_documents"):
213 cls._collection = cls._get_capped_collection()
214 else:
215 db = cls._get_db()
216 collection_name = cls._get_collection_name()
217 cls._collection = db[collection_name]
218
219 # Ensure indexes on the collection unless auto_create_index was
220 # set to False.
221 # Also there is no need to ensure indexes on slave.
222 db = cls._get_db()
223 if cls._meta.get("auto_create_index", True) and db.client.is_primary:
224 cls.ensure_indexes()
225
226 return cls._collection
227
228 @classmethod
229 def _get_capped_collection(cls):
230 """Create a new or get an existing capped PyMongo collection."""
231 db = cls._get_db()
232 collection_name = cls._get_collection_name()
233
234 # Get max document limit and max byte size from meta.
235 max_size = cls._meta.get("max_size") or 10 * 2 ** 20 # 10MB default
236 max_documents = cls._meta.get("max_documents")
237
238 # MongoDB will automatically raise the size to make it a multiple of
239 # 256 bytes. We raise it here ourselves to be able to reliably compare
240 # the options below.
241 if max_size % 256:
242 max_size = (max_size // 256 + 1) * 256
243
244 # If the collection already exists and has different options
245 # (i.e. isn't capped or has different max/size), raise an error.
246 if collection_name in list_collection_names(
247 db, include_system_collections=True
248 ):
249 collection = db[collection_name]
250 options = collection.options()
251 if options.get("max") != max_documents or options.get("size") != max_size:
252 raise InvalidCollectionError(
253 'Cannot create collection "{}" as a capped '
254 "collection as it already exists".format(cls._collection)
255 )
256
257 return collection
258
259 # Create a new capped collection.
260 opts = {"capped": True, "size": max_size}
261 if max_documents:
262 opts["max"] = max_documents
263
264 return db.create_collection(collection_name, **opts)
265
266 def to_mongo(self, *args, **kwargs):
267 data = super().to_mongo(*args, **kwargs)
268
269 # If '_id' is None, try and set it from self._data. If that
270 # doesn't exist either, remove '_id' from the SON completely.
271 if data["_id"] is None:
272 if self._data.get("id") is None:
273 del data["_id"]
274 else:
275 data["_id"] = self._data["id"]
276
277 return data
278
279 def modify(self, query=None, **update):
280 """Perform an atomic update of the document in the database and reload
281 the document object using updated version.
282
283 Returns True if the document has been updated or False if the document
284 in the database doesn't match the query.
285
286 .. note:: All unsaved changes that have been made to the document are
287 rejected if the method returns True.
288
289 :param query: the update will be performed only if the document in the
290 database matches the query
291 :param update: Django-style update keyword arguments
292 """
293 if query is None:
294 query = {}
295
296 if self.pk is None:
297 raise InvalidDocumentError("The document does not have a primary key.")
298
299 id_field = self._meta["id_field"]
300 query = query.copy() if isinstance(query, dict) else query.to_query(self)
301
302 if id_field not in query:
303 query[id_field] = self.pk
304 elif query[id_field] != self.pk:
305 raise InvalidQueryError(
306 "Invalid document modify query: it must modify only this document."
307 )
308
309 # Need to add shard key to query, or you get an error
310 query.update(self._object_key)
311
312 updated = self._qs(**query).modify(new=True, **update)
313 if updated is None:
314 return False
315
316 for field in self._fields_ordered:
317 setattr(self, field, self._reload(field, updated[field]))
318
319 self._changed_fields = updated._changed_fields
320 self._created = False
321
322 return True
323
324 def save(
325 self,
326 force_insert=False,
327 validate=True,
328 clean=True,
329 write_concern=None,
330 cascade=None,
331 cascade_kwargs=None,
332 _refs=None,
333 save_condition=None,
334 signal_kwargs=None,
335 **kwargs,
336 ):
337 """Save the :class:`~mongoengine.Document` to the database. If the
338 document already exists, it will be updated, otherwise it will be
339 created. Returns the saved object instance.
340
341 :param force_insert: only try to create a new document, don't allow
342 updates of existing documents.
343 :param validate: validates the document; set to ``False`` to skip.
344 :param clean: call the document clean method, requires `validate` to be
345 True.
346 :param write_concern: Extra keyword arguments are passed down to
347 :meth:`~pymongo.collection.Collection.save` OR
348 :meth:`~pymongo.collection.Collection.insert`
349 which will be used as options for the resultant
350 ``getLastError`` command. For example,
351 ``save(..., write_concern={w: 2, fsync: True}, ...)`` will
352 wait until at least two servers have recorded the write and
353 will force an fsync on the primary server.
354 :param cascade: Sets the flag for cascading saves. You can set a
355 default by setting "cascade" in the document __meta__
356 :param cascade_kwargs: (optional) kwargs dictionary to be passed throw
357 to cascading saves. Implies ``cascade=True``.
358 :param _refs: A list of processed references used in cascading saves
359 :param save_condition: only perform save if matching record in db
360 satisfies condition(s) (e.g. version number).
361 Raises :class:`OperationError` if the conditions are not satisfied
362 :param signal_kwargs: (optional) kwargs dictionary to be passed to
363 the signal calls.
364
365 .. versionchanged:: 0.5
366 In existing documents it only saves changed fields using
367 set / unset. Saves are cascaded and any
368 :class:`~bson.dbref.DBRef` objects that have changes are
369 saved as well.
370 .. versionchanged:: 0.6
371 Added cascading saves
372 .. versionchanged:: 0.8
373 Cascade saves are optional and default to False. If you want
374 fine grain control then you can turn off using document
375 meta['cascade'] = True. Also you can pass different kwargs to
376 the cascade save using cascade_kwargs which overwrites the
377 existing kwargs with custom values.
378 """
379 signal_kwargs = signal_kwargs or {}
380
381 if self._meta.get("abstract"):
382 raise InvalidDocumentError("Cannot save an abstract document.")
383
384 signals.pre_save.send(self.__class__, document=self, **signal_kwargs)
385
386 if validate:
387 self.validate(clean=clean)
388
389 if write_concern is None:
390 write_concern = {}
391
392 doc_id = self.to_mongo(fields=[self._meta["id_field"]])
393 created = "_id" not in doc_id or self._created or force_insert
394
395 signals.pre_save_post_validation.send(
396 self.__class__, document=self, created=created, **signal_kwargs
397 )
398 # it might be refreshed by the pre_save_post_validation hook, e.g., for etag generation
399 doc = self.to_mongo()
400
401 if self._meta.get("auto_create_index", True):
402 self.ensure_indexes()
403
404 try:
405 # Save a new document or update an existing one
406 if created:
407 object_id = self._save_create(doc, force_insert, write_concern)
408 else:
409 object_id, created = self._save_update(
410 doc, save_condition, write_concern
411 )
412
413 if cascade is None:
414 cascade = self._meta.get("cascade", False) or cascade_kwargs is not None
415
416 if cascade:
417 kwargs = {
418 "force_insert": force_insert,
419 "validate": validate,
420 "write_concern": write_concern,
421 "cascade": cascade,
422 }
423 if cascade_kwargs: # Allow granular control over cascades
424 kwargs.update(cascade_kwargs)
425 kwargs["_refs"] = _refs
426 self.cascade_save(**kwargs)
427
428 except pymongo.errors.DuplicateKeyError as err:
429 message = "Tried to save duplicate unique keys (%s)"
430 raise NotUniqueError(message % err)
431 except pymongo.errors.OperationFailure as err:
432 message = "Could not save document (%s)"
433 if re.match("^E1100[01] duplicate key", str(err)):
434 # E11000 - duplicate key error index
435 # E11001 - duplicate key on update
436 message = "Tried to save duplicate unique keys (%s)"
437 raise NotUniqueError(message % err)
438 raise OperationError(message % err)
439
440 # Make sure we store the PK on this document now that it's saved
441 id_field = self._meta["id_field"]
442 if created or id_field not in self._meta.get("shard_key", []):
443 self[id_field] = self._fields[id_field].to_python(object_id)
444
445 signals.post_save.send(
446 self.__class__, document=self, created=created, **signal_kwargs
447 )
448
449 self._clear_changed_fields()
450 self._created = False
451
452 return self
453
454 def _save_create(self, doc, force_insert, write_concern):
455 """Save a new document.
456
457 Helper method, should only be used inside save().
458 """
459 collection = self._get_collection()
460 with set_write_concern(collection, write_concern) as wc_collection:
461 if force_insert:
462 return wc_collection.insert_one(doc).inserted_id
463 # insert_one will provoke UniqueError alongside save does not
464 # therefore, it need to catch and call replace_one.
465 if "_id" in doc:
466 select_dict = {"_id": doc["_id"]}
467 select_dict = self._integrate_shard_key(doc, select_dict)
468 raw_object = wc_collection.find_one_and_replace(select_dict, doc)
469 if raw_object:
470 return doc["_id"]
471
472 object_id = wc_collection.insert_one(doc).inserted_id
473
474 return object_id
475
476 def _get_update_doc(self):
477 """Return a dict containing all the $set and $unset operations
478 that should be sent to MongoDB based on the changes made to this
479 Document.
480 """
481 updates, removals = self._delta()
482
483 update_doc = {}
484 if updates:
485 update_doc["$set"] = updates
486 if removals:
487 update_doc["$unset"] = removals
488
489 return update_doc
490
491 def _integrate_shard_key(self, doc, select_dict):
492 """Integrates the collection's shard key to the `select_dict`, which will be used for the query.
493 The value from the shard key is taken from the `doc` and finally the select_dict is returned.
494 """
495
496 # Need to add shard key to query, or you get an error
497 shard_key = self._meta.get("shard_key", tuple())
498 for k in shard_key:
499 path = self._lookup_field(k.split("."))
500 actual_key = [p.db_field for p in path]
501 val = doc
502 for ak in actual_key:
503 val = val[ak]
504 select_dict[".".join(actual_key)] = val
505
506 return select_dict
507
508 def _save_update(self, doc, save_condition, write_concern):
509 """Update an existing document.
510
511 Helper method, should only be used inside save().
512 """
513 collection = self._get_collection()
514 object_id = doc["_id"]
515 created = False
516
517 select_dict = {}
518 if save_condition is not None:
519 select_dict = transform.query(self.__class__, **save_condition)
520
521 select_dict["_id"] = object_id
522
523 select_dict = self._integrate_shard_key(doc, select_dict)
524
525 update_doc = self._get_update_doc()
526 if update_doc:
527 upsert = save_condition is None
528 with set_write_concern(collection, write_concern) as wc_collection:
529 last_error = wc_collection.update_one(
530 select_dict, update_doc, upsert=upsert
531 ).raw_result
532 if not upsert and last_error["n"] == 0:
533 raise SaveConditionError(
534 "Race condition preventing document update detected"
535 )
536 if last_error is not None:
537 updated_existing = last_error.get("updatedExisting")
538 if updated_existing is False:
539 created = True
540 # !!! This is bad, means we accidentally created a new,
541 # potentially corrupted document. See
542 # https://github.com/MongoEngine/mongoengine/issues/564
543
544 return object_id, created
545
546 def cascade_save(self, **kwargs):
547 """Recursively save any references and generic references on the
548 document.
549 """
550 _refs = kwargs.get("_refs") or []
551
552 ReferenceField = _import_class("ReferenceField")
553 GenericReferenceField = _import_class("GenericReferenceField")
554
555 for name, cls in self._fields.items():
556 if not isinstance(cls, (ReferenceField, GenericReferenceField)):
557 continue
558
559 ref = self._data.get(name)
560 if not ref or isinstance(ref, DBRef):
561 continue
562
563 if not getattr(ref, "_changed_fields", True):
564 continue
565
566 ref_id = f"{ref.__class__.__name__},{str(ref._data)}"
567 if ref and ref_id not in _refs:
568 _refs.append(ref_id)
569 kwargs["_refs"] = _refs
570 ref.save(**kwargs)
571 ref._changed_fields = []
572
573 @property
574 def _qs(self):
575 """Return the default queryset corresponding to this document."""
576 if not hasattr(self, "__objects"):
577 self.__objects = QuerySet(self, self._get_collection())
578 return self.__objects
579
580 @property
581 def _object_key(self):
582 """Return a query dict that can be used to fetch this document.
583
584 Most of the time the dict is a simple PK lookup, but in case of
585 a sharded collection with a compound shard key, it can contain a more
586 complex query.
587
588 Note that the dict returned by this method uses MongoEngine field
589 names instead of PyMongo field names (e.g. "pk" instead of "_id",
590 "some__nested__field" instead of "some.nested.field", etc.).
591 """
592 select_dict = {"pk": self.pk}
593 shard_key = self.__class__._meta.get("shard_key", tuple())
594 for k in shard_key:
595 val = self
596 field_parts = k.split(".")
597 for part in field_parts:
598 val = getattr(val, part)
599 select_dict["__".join(field_parts)] = val
600 return select_dict
601
602 def update(self, **kwargs):
603 """Performs an update on the :class:`~mongoengine.Document`
604 A convenience wrapper to :meth:`~mongoengine.QuerySet.update`.
605
606 Raises :class:`OperationError` if called on an object that has not yet
607 been saved.
608 """
609 if self.pk is None:
610 if kwargs.get("upsert", False):
611 query = self.to_mongo()
612 if "_cls" in query:
613 del query["_cls"]
614 return self._qs.filter(**query).update_one(**kwargs)
615 else:
616 raise OperationError("attempt to update a document not yet saved")
617
618 # Need to add shard key to query, or you get an error
619 return self._qs.filter(**self._object_key).update_one(**kwargs)
620
621 def delete(self, signal_kwargs=None, **write_concern):
622 """Delete the :class:`~mongoengine.Document` from the database. This
623 will only take effect if the document has been previously saved.
624
625 :param signal_kwargs: (optional) kwargs dictionary to be passed to
626 the signal calls.
627 :param write_concern: Extra keyword arguments are passed down which
628 will be used as options for the resultant ``getLastError`` command.
629 For example, ``save(..., w: 2, fsync: True)`` will
630 wait until at least two servers have recorded the write and
631 will force an fsync on the primary server.
632 """
633 signal_kwargs = signal_kwargs or {}
634 signals.pre_delete.send(self.__class__, document=self, **signal_kwargs)
635
636 # Delete FileFields separately
637 FileField = _import_class("FileField")
638 for name, field in self._fields.items():
639 if isinstance(field, FileField):
640 getattr(self, name).delete()
641
642 try:
643 self._qs.filter(**self._object_key).delete(
644 write_concern=write_concern, _from_doc_delete=True
645 )
646 except pymongo.errors.OperationFailure as err:
647 message = "Could not delete document (%s)" % err.args
648 raise OperationError(message)
649 signals.post_delete.send(self.__class__, document=self, **signal_kwargs)
650
651 def switch_db(self, db_alias, keep_created=True):
652 """
653 Temporarily switch the database for a document instance.
654
655 Only really useful for archiving off data and calling `save()`::
656
657 user = User.objects.get(id=user_id)
658 user.switch_db('archive-db')
659 user.save()
660
661 :param str db_alias: The database alias to use for saving the document
662
663 :param bool keep_created: keep self._created value after switching db, else is reset to True
664
665
666 .. seealso::
667 Use :class:`~mongoengine.context_managers.switch_collection`
668 if you need to read from another collection
669 """
670 with switch_db(self.__class__, db_alias) as cls:
671 collection = cls._get_collection()
672 db = cls._get_db()
673 self._get_collection = lambda: collection
674 self._get_db = lambda: db
675 self._collection = collection
676 self._created = True if not keep_created else self._created
677 self.__objects = self._qs
678 self.__objects._collection_obj = collection
679 return self
680
681 def switch_collection(self, collection_name, keep_created=True):
682 """
683 Temporarily switch the collection for a document instance.
684
685 Only really useful for archiving off data and calling `save()`::
686
687 user = User.objects.get(id=user_id)
688 user.switch_collection('old-users')
689 user.save()
690
691 :param str collection_name: The database alias to use for saving the
692 document
693
694 :param bool keep_created: keep self._created value after switching collection, else is reset to True
695
696
697 .. seealso::
698 Use :class:`~mongoengine.context_managers.switch_db`
699 if you need to read from another database
700 """
701 with switch_collection(self.__class__, collection_name) as cls:
702 collection = cls._get_collection()
703 self._get_collection = lambda: collection
704 self._collection = collection
705 self._created = True if not keep_created else self._created
706 self.__objects = self._qs
707 self.__objects._collection_obj = collection
708 return self
709
710 def select_related(self, max_depth=1):
711 """Handles dereferencing of :class:`~bson.dbref.DBRef` objects to
712 a maximum depth in order to cut down the number queries to mongodb.
713 """
714 DeReference = _import_class("DeReference")
715 DeReference()([self], max_depth + 1)
716 return self
717
718 def reload(self, *fields, **kwargs):
719 """Reloads all attributes from the database.
720
721 :param fields: (optional) args list of fields to reload
722 :param max_depth: (optional) depth of dereferencing to follow
723 """
724 max_depth = 1
725 if fields and isinstance(fields[0], int):
726 max_depth = fields[0]
727 fields = fields[1:]
728 elif "max_depth" in kwargs:
729 max_depth = kwargs["max_depth"]
730
731 if self.pk is None:
732 raise self.DoesNotExist("Document does not exist")
733
734 obj = (
735 self._qs.read_preference(ReadPreference.PRIMARY)
736 .filter(**self._object_key)
737 .only(*fields)
738 .limit(1)
739 .select_related(max_depth=max_depth)
740 )
741
742 if obj:
743 obj = obj[0]
744 else:
745 raise self.DoesNotExist("Document does not exist")
746 for field in obj._data:
747 if not fields or field in fields:
748 try:
749 setattr(self, field, self._reload(field, obj[field]))
750 except (KeyError, AttributeError):
751 try:
752 # If field is a special field, e.g. items is stored as _reserved_items,
753 # a KeyError is thrown. So try to retrieve the field from _data
754 setattr(self, field, self._reload(field, obj._data.get(field)))
755 except KeyError:
756 # If field is removed from the database while the object
757 # is in memory, a reload would cause a KeyError
758 # i.e. obj.update(unset__field=1) followed by obj.reload()
759 delattr(self, field)
760
761 self._changed_fields = (
762 list(set(self._changed_fields) - set(fields))
763 if fields
764 else obj._changed_fields
765 )
766 self._created = False
767 return self
768
769 def _reload(self, key, value):
770 """Used by :meth:`~mongoengine.Document.reload` to ensure the
771 correct instance is linked to self.
772 """
773 if isinstance(value, BaseDict):
774 value = [(k, self._reload(k, v)) for k, v in value.items()]
775 value = BaseDict(value, self, key)
776 elif isinstance(value, EmbeddedDocumentList):
777 value = [self._reload(key, v) for v in value]
778 value = EmbeddedDocumentList(value, self, key)
779 elif isinstance(value, BaseList):
780 value = [self._reload(key, v) for v in value]
781 value = BaseList(value, self, key)
782 elif isinstance(value, (EmbeddedDocument, DynamicEmbeddedDocument)):
783 value._instance = None
784 value._changed_fields = []
785 return value
786
787 def to_dbref(self):
788 """Returns an instance of :class:`~bson.dbref.DBRef` useful in
789 `__raw__` queries."""
790 if self.pk is None:
791 msg = "Only saved documents can have a valid dbref"
792 raise OperationError(msg)
793 return DBRef(self.__class__._get_collection_name(), self.pk)
794
795 @classmethod
796 def register_delete_rule(cls, document_cls, field_name, rule):
797 """This method registers the delete rules to apply when removing this
798 object.
799 """
800 classes = [
801 get_document(class_name)
802 for class_name in cls._subclasses
803 if class_name != cls.__name__
804 ] + [cls]
805 documents = [
806 get_document(class_name)
807 for class_name in document_cls._subclasses
808 if class_name != document_cls.__name__
809 ] + [document_cls]
810
811 for klass in classes:
812 for document_cls in documents:
813 delete_rules = klass._meta.get("delete_rules") or {}
814 delete_rules[(document_cls, field_name)] = rule
815 klass._meta["delete_rules"] = delete_rules
816
817 @classmethod
818 def drop_collection(cls):
819 """Drops the entire collection associated with this
820 :class:`~mongoengine.Document` type from the database.
821
822 Raises :class:`OperationError` if the document has no collection set
823 (i.g. if it is `abstract`)
824 """
825 coll_name = cls._get_collection_name()
826 if not coll_name:
827 raise OperationError(
828 "Document %s has no collection defined (is it abstract ?)" % cls
829 )
830 cls._collection = None
831 db = cls._get_db()
832 db.drop_collection(coll_name)
833
834 @classmethod
835 def create_index(cls, keys, background=False, **kwargs):
836 """Creates the given indexes if required.
837
838 :param keys: a single index key or a list of index keys (to
839 construct a multi-field index); keys may be prefixed with a **+**
840 or a **-** to determine the index ordering
841 :param background: Allows index creation in the background
842 """
843 index_spec = cls._build_index_spec(keys)
844 index_spec = index_spec.copy()
845 fields = index_spec.pop("fields")
846 index_spec["background"] = background
847 index_spec.update(kwargs)
848
849 return cls._get_collection().create_index(fields, **index_spec)
850
851 @classmethod
852 def ensure_index(cls, key_or_list, background=False, **kwargs):
853 """Ensure that the given indexes are in place. Deprecated in favour
854 of create_index.
855
856 :param key_or_list: a single index key or a list of index keys (to
857 construct a multi-field index); keys may be prefixed with a **+**
858 or a **-** to determine the index ordering
859 :param background: Allows index creation in the background
860 """
861 return cls.create_index(key_or_list, background=background, **kwargs)
862
863 @classmethod
864 def ensure_indexes(cls):
865 """Checks the document meta data and ensures all the indexes exist.
866
867 Global defaults can be set in the meta - see :doc:`guide/defining-documents`
868
869 .. note:: You can disable automatic index creation by setting
870 `auto_create_index` to False in the documents meta data
871 """
872 background = cls._meta.get("index_background", False)
873 index_opts = cls._meta.get("index_opts") or {}
874 index_cls = cls._meta.get("index_cls", True)
875
876 collection = cls._get_collection()
877 # 746: when connection is via mongos, the read preference is not necessarily an indication that
878 # this code runs on a secondary
879 if not collection.is_mongos and collection.read_preference > 1:
880 return
881
882 # determine if an index which we are creating includes
883 # _cls as its first field; if so, we can avoid creating
884 # an extra index on _cls, as mongodb will use the existing
885 # index to service queries against _cls
886 cls_indexed = False
887
888 # Ensure document-defined indexes are created
889 if cls._meta["index_specs"]:
890 index_spec = cls._meta["index_specs"]
891 for spec in index_spec:
892 spec = spec.copy()
893 fields = spec.pop("fields")
894 cls_indexed = cls_indexed or includes_cls(fields)
895 opts = index_opts.copy()
896 opts.update(spec)
897
898 # we shouldn't pass 'cls' to the collection.ensureIndex options
899 # because of https://jira.mongodb.org/browse/SERVER-769
900 if "cls" in opts:
901 del opts["cls"]
902
903 collection.create_index(fields, background=background, **opts)
904
905 # If _cls is being used (for polymorphism), it needs an index,
906 # only if another index doesn't begin with _cls
907 if index_cls and not cls_indexed and cls._meta.get("allow_inheritance"):
908
909 # we shouldn't pass 'cls' to the collection.ensureIndex options
910 # because of https://jira.mongodb.org/browse/SERVER-769
911 if "cls" in index_opts:
912 del index_opts["cls"]
913
914 collection.create_index("_cls", background=background, **index_opts)
915
916 @classmethod
917 def list_indexes(cls):
918 """Lists all of the indexes that should be created for given
919 collection. It includes all the indexes from super- and sub-classes.
920 """
921 if cls._meta.get("abstract"):
922 return []
923
924 # get all the base classes, subclasses and siblings
925 classes = []
926
927 def get_classes(cls):
928
929 if cls not in classes and isinstance(cls, TopLevelDocumentMetaclass):
930 classes.append(cls)
931
932 for base_cls in cls.__bases__:
933 if (
934 isinstance(base_cls, TopLevelDocumentMetaclass)
935 and base_cls != Document
936 and not base_cls._meta.get("abstract")
937 and base_cls._get_collection().full_name
938 == cls._get_collection().full_name
939 and base_cls not in classes
940 ):
941 classes.append(base_cls)
942 get_classes(base_cls)
943 for subclass in cls.__subclasses__():
944 if (
945 isinstance(base_cls, TopLevelDocumentMetaclass)
946 and subclass._get_collection().full_name
947 == cls._get_collection().full_name
948 and subclass not in classes
949 ):
950 classes.append(subclass)
951 get_classes(subclass)
952
953 get_classes(cls)
954
955 # get the indexes spec for all of the gathered classes
956 def get_indexes_spec(cls):
957 indexes = []
958
959 if cls._meta["index_specs"]:
960 index_spec = cls._meta["index_specs"]
961 for spec in index_spec:
962 spec = spec.copy()
963 fields = spec.pop("fields")
964 indexes.append(fields)
965 return indexes
966
967 indexes = []
968 for klass in classes:
969 for index in get_indexes_spec(klass):
970 if index not in indexes:
971 indexes.append(index)
972
973 # finish up by appending { '_id': 1 } and { '_cls': 1 }, if needed
974 if [("_id", 1)] not in indexes:
975 indexes.append([("_id", 1)])
976 if cls._meta.get("index_cls", True) and cls._meta.get("allow_inheritance"):
977 indexes.append([("_cls", 1)])
978
979 return indexes
980
981 @classmethod
982 def compare_indexes(cls):
983 """Compares the indexes defined in MongoEngine with the ones
984 existing in the database. Returns any missing/extra indexes.
985 """
986
987 required = cls.list_indexes()
988
989 existing = []
990 for info in cls._get_collection().index_information().values():
991 if "_fts" in info["key"][0]:
992 index_type = info["key"][0][1]
993 text_index_fields = info.get("weights").keys()
994 existing.append([(key, index_type) for key in text_index_fields])
995 else:
996 existing.append(info["key"])
997 missing = [index for index in required if index not in existing]
998 extra = [index for index in existing if index not in required]
999
1000 # if { _cls: 1 } is missing, make sure it's *really* necessary
1001 if [("_cls", 1)] in missing:
1002 cls_obsolete = False
1003 for index in existing:
1004 if includes_cls(index) and index not in extra:
1005 cls_obsolete = True
1006 break
1007 if cls_obsolete:
1008 missing.remove([("_cls", 1)])
1009
1010 return {"missing": missing, "extra": extra}
1011
1012
1013 class DynamicDocument(Document, metaclass=TopLevelDocumentMetaclass):
1014 """A Dynamic Document class allowing flexible, expandable and uncontrolled
1015 schemas. As a :class:`~mongoengine.Document` subclass, acts in the same
1016 way as an ordinary document but has expanded style properties. Any data
1017 passed or set against the :class:`~mongoengine.DynamicDocument` that is
1018 not a field is automatically converted into a
1019 :class:`~mongoengine.fields.DynamicField` and data can be attributed to that
1020 field.
1021
1022 .. note::
1023
1024 There is one caveat on Dynamic Documents: undeclared fields cannot start with `_`
1025 """
1026
1027 # my_metaclass is defined so that metaclass can be queried in Python 2 & 3
1028 my_metaclass = TopLevelDocumentMetaclass
1029
1030 _dynamic = True
1031
1032 def __delattr__(self, *args, **kwargs):
1033 """Delete the attribute by setting to None and allowing _delta
1034 to unset it.
1035 """
1036 field_name = args[0]
1037 if field_name in self._dynamic_fields:
1038 setattr(self, field_name, None)
1039 self._dynamic_fields[field_name].null = False
1040 else:
1041 super().__delattr__(*args, **kwargs)
1042
1043
1044 class DynamicEmbeddedDocument(EmbeddedDocument, metaclass=DocumentMetaclass):
1045 """A Dynamic Embedded Document class allowing flexible, expandable and
1046 uncontrolled schemas. See :class:`~mongoengine.DynamicDocument` for more
1047 information about dynamic documents.
1048 """
1049
1050 # my_metaclass is defined so that metaclass can be queried in Python 2 & 3
1051 my_metaclass = DocumentMetaclass
1052
1053 _dynamic = True
1054
1055 def __delattr__(self, *args, **kwargs):
1056 """Delete the attribute by setting to None and allowing _delta
1057 to unset it.
1058 """
1059 field_name = args[0]
1060 if field_name in self._fields:
1061 default = self._fields[field_name].default
1062 if callable(default):
1063 default = default()
1064 setattr(self, field_name, default)
1065 else:
1066 setattr(self, field_name, None)
1067
1068
1069 class MapReduceDocument:
1070 """A document returned from a map/reduce query.
1071
1072 :param collection: An instance of :class:`~pymongo.Collection`
1073 :param key: Document/result key, often an instance of
1074 :class:`~bson.objectid.ObjectId`. If supplied as
1075 an ``ObjectId`` found in the given ``collection``,
1076 the object can be accessed via the ``object`` property.
1077 :param value: The result(s) for this key.
1078 """
1079
1080 def __init__(self, document, collection, key, value):
1081 self._document = document
1082 self._collection = collection
1083 self.key = key
1084 self.value = value
1085
1086 @property
1087 def object(self):
1088 """Lazy-load the object referenced by ``self.key``. ``self.key``
1089 should be the ``primary_key``.
1090 """
1091 id_field = self._document()._meta["id_field"]
1092 id_field_type = type(id_field)
1093
1094 if not isinstance(self.key, id_field_type):
1095 try:
1096 self.key = id_field_type(self.key)
1097 except Exception:
1098 raise Exception("Could not cast key as %s" % id_field_type.__name__)
1099
1100 if not hasattr(self, "_key_object"):
1101 self._key_object = self._document.objects.with_id(self.key)
1102 return self._key_object
1103 return self._key_object
1104
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mongoengine/document.py b/mongoengine/document.py
--- a/mongoengine/document.py
+++ b/mongoengine/document.py
@@ -574,7 +574,7 @@
def _qs(self):
"""Return the default queryset corresponding to this document."""
if not hasattr(self, "__objects"):
- self.__objects = QuerySet(self, self._get_collection())
+ self.__objects = QuerySet(self.__class__, self._get_collection())
return self.__objects
 
     @property
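
As a quick gloss on why the one-line change above addresses the reported `isinstance()` failure: `Document._qs` was handing the document *instance* (`self`) to `QuerySet`, whose first argument ends up stored as `_document` and, according to the issue, is later used as the second argument of `isinstance()` in `mongoengine/queryset/base.py`. The sketch below is a minimal illustration under that assumption; `FakeQuerySet`, `FakeDocument`, and `accepts()` are invented names, not mongoengine's actual API.

```python
# A minimal, self-contained sketch (FakeQuerySet/FakeDocument are made-up
# names, not mongoengine's real classes) showing why the object stored as
# `_document` must be a class, not an instance, when it is later used as
# the second argument of isinstance().

class FakeQuerySet:
    def __init__(self, document, collection):
        # mongoengine's QuerySet keeps its first argument as self._document
        # and (per the issue) later evaluates isinstance(obj, self._document),
        # which requires a type.
        self._document = document
        self._collection = collection

    def accepts(self, obj):
        return isinstance(obj, self._document)


class FakeDocument:
    pass


doc = FakeDocument()

# Passing the class (what the patched `QuerySet(self.__class__, ...)` does):
print(FakeQuerySet(FakeDocument, collection=None).accepts(doc))  # True

# Passing the instance (what the old `QuerySet(self, ...)` effectively did):
try:
    FakeQuerySet(doc, collection=None).accepts(doc)
except TypeError as exc:
    print(exc)  # e.g. "isinstance() arg 2 must be a type ..."
```

Using `self.__class__` keeps `_qs` usable from a document instance while still handing `QuerySet` the class that `isinstance()` needs.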
| {"golden_diff": "diff --git a/mongoengine/document.py b/mongoengine/document.py\n--- a/mongoengine/document.py\n+++ b/mongoengine/document.py\n@@ -574,7 +574,7 @@\n def _qs(self):\n \"\"\"Return the default queryset corresponding to this document.\"\"\"\n if not hasattr(self, \"__objects\"):\n- self.__objects = QuerySet(self, self._get_collection())\n+ self.__objects = QuerySet(self.__class__, self._get_collection())\n return self.__objects\n \n @property\n", "issue": "the second argument type is wrong when invoking isinstance()\ncode file: /mongoengine/queryset/base.py\r\nline num: 328\r\nargument: self._document\r\n\r\nthe self ._document is an object NOT a TYPE, i guess, you want to pass the variable Document.\n", "before_files": [{"content": "import re\n\nimport pymongo\nfrom bson.dbref import DBRef\nfrom pymongo.read_preferences import ReadPreference\n\nfrom mongoengine import signals\nfrom mongoengine.base import (\n BaseDict,\n BaseDocument,\n BaseList,\n DocumentMetaclass,\n EmbeddedDocumentList,\n TopLevelDocumentMetaclass,\n get_document,\n)\nfrom mongoengine.common import _import_class\nfrom mongoengine.connection import DEFAULT_CONNECTION_NAME, get_db\nfrom mongoengine.context_managers import (\n set_write_concern,\n switch_collection,\n switch_db,\n)\nfrom mongoengine.errors import (\n InvalidDocumentError,\n InvalidQueryError,\n SaveConditionError,\n)\nfrom mongoengine.pymongo_support import list_collection_names\nfrom mongoengine.queryset import (\n NotUniqueError,\n OperationError,\n QuerySet,\n transform,\n)\n\n__all__ = (\n \"Document\",\n \"EmbeddedDocument\",\n \"DynamicDocument\",\n \"DynamicEmbeddedDocument\",\n \"OperationError\",\n \"InvalidCollectionError\",\n \"NotUniqueError\",\n \"MapReduceDocument\",\n)\n\n\ndef includes_cls(fields):\n \"\"\"Helper function used for ensuring and comparing indexes.\"\"\"\n first_field = None\n if len(fields):\n if isinstance(fields[0], str):\n first_field = fields[0]\n elif isinstance(fields[0], (list, tuple)) and len(fields[0]):\n first_field = fields[0][0]\n return first_field == \"_cls\"\n\n\nclass InvalidCollectionError(Exception):\n pass\n\n\nclass EmbeddedDocument(BaseDocument, metaclass=DocumentMetaclass):\n r\"\"\"A :class:`~mongoengine.Document` that isn't stored in its own\n collection. :class:`~mongoengine.EmbeddedDocument`\\ s should be used as\n fields on :class:`~mongoengine.Document`\\ s through the\n :class:`~mongoengine.EmbeddedDocumentField` field type.\n\n A :class:`~mongoengine.EmbeddedDocument` subclass may be itself subclassed,\n to create a specialised version of the embedded document that will be\n stored in the same collection. To facilitate this behaviour a `_cls`\n field is added to documents (hidden though the MongoEngine interface).\n To enable this behaviour set :attr:`allow_inheritance` to ``True`` in the\n :attr:`meta` dictionary.\n \"\"\"\n\n __slots__ = (\"_instance\",)\n\n # my_metaclass is defined so that metaclass can be queried in Python 2 & 3\n my_metaclass = DocumentMetaclass\n\n # A generic embedded document doesn't have any immutable properties\n # that describe it uniquely, hence it shouldn't be hashable. 
You can\n # define your own __hash__ method on a subclass if you need your\n # embedded documents to be hashable.\n __hash__ = None\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self._instance = None\n self._changed_fields = []\n\n def __eq__(self, other):\n if isinstance(other, self.__class__):\n return self._data == other._data\n return False\n\n def __ne__(self, other):\n return not self.__eq__(other)\n\n def to_mongo(self, *args, **kwargs):\n data = super().to_mongo(*args, **kwargs)\n\n # remove _id from the SON if it's in it and it's None\n if \"_id\" in data and data[\"_id\"] is None:\n del data[\"_id\"]\n\n return data\n\n\nclass Document(BaseDocument, metaclass=TopLevelDocumentMetaclass):\n \"\"\"The base class used for defining the structure and properties of\n collections of documents stored in MongoDB. Inherit from this class, and\n add fields as class attributes to define a document's structure.\n Individual documents may then be created by making instances of the\n :class:`~mongoengine.Document` subclass.\n\n By default, the MongoDB collection used to store documents created using a\n :class:`~mongoengine.Document` subclass will be the name of the subclass\n converted to snake_case. A different collection may be specified by\n providing :attr:`collection` to the :attr:`meta` dictionary in the class\n definition.\n\n A :class:`~mongoengine.Document` subclass may be itself subclassed, to\n create a specialised version of the document that will be stored in the\n same collection. To facilitate this behaviour a `_cls`\n field is added to documents (hidden though the MongoEngine interface).\n To enable this behaviourset :attr:`allow_inheritance` to ``True`` in the\n :attr:`meta` dictionary.\n\n A :class:`~mongoengine.Document` may use a **Capped Collection** by\n specifying :attr:`max_documents` and :attr:`max_size` in the :attr:`meta`\n dictionary. :attr:`max_documents` is the maximum number of documents that\n is allowed to be stored in the collection, and :attr:`max_size` is the\n maximum size of the collection in bytes. :attr:`max_size` is rounded up\n to the next multiple of 256 by MongoDB internally and mongoengine before.\n Use also a multiple of 256 to avoid confusions. If :attr:`max_size` is not\n specified and :attr:`max_documents` is, :attr:`max_size` defaults to\n 10485760 bytes (10MB).\n\n Indexes may be created by specifying :attr:`indexes` in the :attr:`meta`\n dictionary. The value should be a list of field names or tuples of field\n names. Index direction may be specified by prefixing the field names with\n a **+** or **-** sign.\n\n Automatic index creation can be disabled by specifying\n :attr:`auto_create_index` in the :attr:`meta` dictionary. If this is set to\n False then indexes will not be created by MongoEngine. This is useful in\n production systems where index creation is performed as part of a\n deployment system.\n\n By default, _cls will be added to the start of every index (that\n doesn't contain a list) if allow_inheritance is True. 
This can be\n disabled by either setting cls to False on the specific index or\n by setting index_cls to False on the meta dictionary for the document.\n\n By default, any extra attribute existing in stored data but not declared\n in your model will raise a :class:`~mongoengine.FieldDoesNotExist` error.\n This can be disabled by setting :attr:`strict` to ``False``\n in the :attr:`meta` dictionary.\n \"\"\"\n\n # my_metaclass is defined so that metaclass can be queried in Python 2 & 3\n my_metaclass = TopLevelDocumentMetaclass\n\n __slots__ = (\"__objects\",)\n\n @property\n def pk(self):\n \"\"\"Get the primary key.\"\"\"\n if \"id_field\" not in self._meta:\n return None\n return getattr(self, self._meta[\"id_field\"])\n\n @pk.setter\n def pk(self, value):\n \"\"\"Set the primary key.\"\"\"\n return setattr(self, self._meta[\"id_field\"], value)\n\n def __hash__(self):\n \"\"\"Return the hash based on the PK of this document. If it's new\n and doesn't have a PK yet, return the default object hash instead.\n \"\"\"\n if self.pk is None:\n return super(BaseDocument, self).__hash__()\n\n return hash(self.pk)\n\n @classmethod\n def _get_db(cls):\n \"\"\"Some Model using other db_alias\"\"\"\n return get_db(cls._meta.get(\"db_alias\", DEFAULT_CONNECTION_NAME))\n\n @classmethod\n def _disconnect(cls):\n \"\"\"Detach the Document class from the (cached) database collection\"\"\"\n cls._collection = None\n\n @classmethod\n def _get_collection(cls):\n \"\"\"Return the PyMongo collection corresponding to this document.\n\n Upon first call, this method:\n 1. Initializes a :class:`~pymongo.collection.Collection` corresponding\n to this document.\n 2. Creates indexes defined in this document's :attr:`meta` dictionary.\n This happens only if `auto_create_index` is True.\n \"\"\"\n if not hasattr(cls, \"_collection\") or cls._collection is None:\n # Get the collection, either capped or regular.\n if cls._meta.get(\"max_size\") or cls._meta.get(\"max_documents\"):\n cls._collection = cls._get_capped_collection()\n else:\n db = cls._get_db()\n collection_name = cls._get_collection_name()\n cls._collection = db[collection_name]\n\n # Ensure indexes on the collection unless auto_create_index was\n # set to False.\n # Also there is no need to ensure indexes on slave.\n db = cls._get_db()\n if cls._meta.get(\"auto_create_index\", True) and db.client.is_primary:\n cls.ensure_indexes()\n\n return cls._collection\n\n @classmethod\n def _get_capped_collection(cls):\n \"\"\"Create a new or get an existing capped PyMongo collection.\"\"\"\n db = cls._get_db()\n collection_name = cls._get_collection_name()\n\n # Get max document limit and max byte size from meta.\n max_size = cls._meta.get(\"max_size\") or 10 * 2 ** 20 # 10MB default\n max_documents = cls._meta.get(\"max_documents\")\n\n # MongoDB will automatically raise the size to make it a multiple of\n # 256 bytes. We raise it here ourselves to be able to reliably compare\n # the options below.\n if max_size % 256:\n max_size = (max_size // 256 + 1) * 256\n\n # If the collection already exists and has different options\n # (i.e. 
isn't capped or has different max/size), raise an error.\n if collection_name in list_collection_names(\n db, include_system_collections=True\n ):\n collection = db[collection_name]\n options = collection.options()\n if options.get(\"max\") != max_documents or options.get(\"size\") != max_size:\n raise InvalidCollectionError(\n 'Cannot create collection \"{}\" as a capped '\n \"collection as it already exists\".format(cls._collection)\n )\n\n return collection\n\n # Create a new capped collection.\n opts = {\"capped\": True, \"size\": max_size}\n if max_documents:\n opts[\"max\"] = max_documents\n\n return db.create_collection(collection_name, **opts)\n\n def to_mongo(self, *args, **kwargs):\n data = super().to_mongo(*args, **kwargs)\n\n # If '_id' is None, try and set it from self._data. If that\n # doesn't exist either, remove '_id' from the SON completely.\n if data[\"_id\"] is None:\n if self._data.get(\"id\") is None:\n del data[\"_id\"]\n else:\n data[\"_id\"] = self._data[\"id\"]\n\n return data\n\n def modify(self, query=None, **update):\n \"\"\"Perform an atomic update of the document in the database and reload\n the document object using updated version.\n\n Returns True if the document has been updated or False if the document\n in the database doesn't match the query.\n\n .. note:: All unsaved changes that have been made to the document are\n rejected if the method returns True.\n\n :param query: the update will be performed only if the document in the\n database matches the query\n :param update: Django-style update keyword arguments\n \"\"\"\n if query is None:\n query = {}\n\n if self.pk is None:\n raise InvalidDocumentError(\"The document does not have a primary key.\")\n\n id_field = self._meta[\"id_field\"]\n query = query.copy() if isinstance(query, dict) else query.to_query(self)\n\n if id_field not in query:\n query[id_field] = self.pk\n elif query[id_field] != self.pk:\n raise InvalidQueryError(\n \"Invalid document modify query: it must modify only this document.\"\n )\n\n # Need to add shard key to query, or you get an error\n query.update(self._object_key)\n\n updated = self._qs(**query).modify(new=True, **update)\n if updated is None:\n return False\n\n for field in self._fields_ordered:\n setattr(self, field, self._reload(field, updated[field]))\n\n self._changed_fields = updated._changed_fields\n self._created = False\n\n return True\n\n def save(\n self,\n force_insert=False,\n validate=True,\n clean=True,\n write_concern=None,\n cascade=None,\n cascade_kwargs=None,\n _refs=None,\n save_condition=None,\n signal_kwargs=None,\n **kwargs,\n ):\n \"\"\"Save the :class:`~mongoengine.Document` to the database. If the\n document already exists, it will be updated, otherwise it will be\n created. Returns the saved object instance.\n\n :param force_insert: only try to create a new document, don't allow\n updates of existing documents.\n :param validate: validates the document; set to ``False`` to skip.\n :param clean: call the document clean method, requires `validate` to be\n True.\n :param write_concern: Extra keyword arguments are passed down to\n :meth:`~pymongo.collection.Collection.save` OR\n :meth:`~pymongo.collection.Collection.insert`\n which will be used as options for the resultant\n ``getLastError`` command. For example,\n ``save(..., write_concern={w: 2, fsync: True}, ...)`` will\n wait until at least two servers have recorded the write and\n will force an fsync on the primary server.\n :param cascade: Sets the flag for cascading saves. 
You can set a\n default by setting \"cascade\" in the document __meta__\n :param cascade_kwargs: (optional) kwargs dictionary to be passed throw\n to cascading saves. Implies ``cascade=True``.\n :param _refs: A list of processed references used in cascading saves\n :param save_condition: only perform save if matching record in db\n satisfies condition(s) (e.g. version number).\n Raises :class:`OperationError` if the conditions are not satisfied\n :param signal_kwargs: (optional) kwargs dictionary to be passed to\n the signal calls.\n\n .. versionchanged:: 0.5\n In existing documents it only saves changed fields using\n set / unset. Saves are cascaded and any\n :class:`~bson.dbref.DBRef` objects that have changes are\n saved as well.\n .. versionchanged:: 0.6\n Added cascading saves\n .. versionchanged:: 0.8\n Cascade saves are optional and default to False. If you want\n fine grain control then you can turn off using document\n meta['cascade'] = True. Also you can pass different kwargs to\n the cascade save using cascade_kwargs which overwrites the\n existing kwargs with custom values.\n \"\"\"\n signal_kwargs = signal_kwargs or {}\n\n if self._meta.get(\"abstract\"):\n raise InvalidDocumentError(\"Cannot save an abstract document.\")\n\n signals.pre_save.send(self.__class__, document=self, **signal_kwargs)\n\n if validate:\n self.validate(clean=clean)\n\n if write_concern is None:\n write_concern = {}\n\n doc_id = self.to_mongo(fields=[self._meta[\"id_field\"]])\n created = \"_id\" not in doc_id or self._created or force_insert\n\n signals.pre_save_post_validation.send(\n self.__class__, document=self, created=created, **signal_kwargs\n )\n # it might be refreshed by the pre_save_post_validation hook, e.g., for etag generation\n doc = self.to_mongo()\n\n if self._meta.get(\"auto_create_index\", True):\n self.ensure_indexes()\n\n try:\n # Save a new document or update an existing one\n if created:\n object_id = self._save_create(doc, force_insert, write_concern)\n else:\n object_id, created = self._save_update(\n doc, save_condition, write_concern\n )\n\n if cascade is None:\n cascade = self._meta.get(\"cascade\", False) or cascade_kwargs is not None\n\n if cascade:\n kwargs = {\n \"force_insert\": force_insert,\n \"validate\": validate,\n \"write_concern\": write_concern,\n \"cascade\": cascade,\n }\n if cascade_kwargs: # Allow granular control over cascades\n kwargs.update(cascade_kwargs)\n kwargs[\"_refs\"] = _refs\n self.cascade_save(**kwargs)\n\n except pymongo.errors.DuplicateKeyError as err:\n message = \"Tried to save duplicate unique keys (%s)\"\n raise NotUniqueError(message % err)\n except pymongo.errors.OperationFailure as err:\n message = \"Could not save document (%s)\"\n if re.match(\"^E1100[01] duplicate key\", str(err)):\n # E11000 - duplicate key error index\n # E11001 - duplicate key on update\n message = \"Tried to save duplicate unique keys (%s)\"\n raise NotUniqueError(message % err)\n raise OperationError(message % err)\n\n # Make sure we store the PK on this document now that it's saved\n id_field = self._meta[\"id_field\"]\n if created or id_field not in self._meta.get(\"shard_key\", []):\n self[id_field] = self._fields[id_field].to_python(object_id)\n\n signals.post_save.send(\n self.__class__, document=self, created=created, **signal_kwargs\n )\n\n self._clear_changed_fields()\n self._created = False\n\n return self\n\n def _save_create(self, doc, force_insert, write_concern):\n \"\"\"Save a new document.\n\n Helper method, should only be used inside save().\n 
\"\"\"\n collection = self._get_collection()\n with set_write_concern(collection, write_concern) as wc_collection:\n if force_insert:\n return wc_collection.insert_one(doc).inserted_id\n # insert_one will provoke UniqueError alongside save does not\n # therefore, it need to catch and call replace_one.\n if \"_id\" in doc:\n select_dict = {\"_id\": doc[\"_id\"]}\n select_dict = self._integrate_shard_key(doc, select_dict)\n raw_object = wc_collection.find_one_and_replace(select_dict, doc)\n if raw_object:\n return doc[\"_id\"]\n\n object_id = wc_collection.insert_one(doc).inserted_id\n\n return object_id\n\n def _get_update_doc(self):\n \"\"\"Return a dict containing all the $set and $unset operations\n that should be sent to MongoDB based on the changes made to this\n Document.\n \"\"\"\n updates, removals = self._delta()\n\n update_doc = {}\n if updates:\n update_doc[\"$set\"] = updates\n if removals:\n update_doc[\"$unset\"] = removals\n\n return update_doc\n\n def _integrate_shard_key(self, doc, select_dict):\n \"\"\"Integrates the collection's shard key to the `select_dict`, which will be used for the query.\n The value from the shard key is taken from the `doc` and finally the select_dict is returned.\n \"\"\"\n\n # Need to add shard key to query, or you get an error\n shard_key = self._meta.get(\"shard_key\", tuple())\n for k in shard_key:\n path = self._lookup_field(k.split(\".\"))\n actual_key = [p.db_field for p in path]\n val = doc\n for ak in actual_key:\n val = val[ak]\n select_dict[\".\".join(actual_key)] = val\n\n return select_dict\n\n def _save_update(self, doc, save_condition, write_concern):\n \"\"\"Update an existing document.\n\n Helper method, should only be used inside save().\n \"\"\"\n collection = self._get_collection()\n object_id = doc[\"_id\"]\n created = False\n\n select_dict = {}\n if save_condition is not None:\n select_dict = transform.query(self.__class__, **save_condition)\n\n select_dict[\"_id\"] = object_id\n\n select_dict = self._integrate_shard_key(doc, select_dict)\n\n update_doc = self._get_update_doc()\n if update_doc:\n upsert = save_condition is None\n with set_write_concern(collection, write_concern) as wc_collection:\n last_error = wc_collection.update_one(\n select_dict, update_doc, upsert=upsert\n ).raw_result\n if not upsert and last_error[\"n\"] == 0:\n raise SaveConditionError(\n \"Race condition preventing document update detected\"\n )\n if last_error is not None:\n updated_existing = last_error.get(\"updatedExisting\")\n if updated_existing is False:\n created = True\n # !!! This is bad, means we accidentally created a new,\n # potentially corrupted document. 
See\n # https://github.com/MongoEngine/mongoengine/issues/564\n\n return object_id, created\n\n def cascade_save(self, **kwargs):\n \"\"\"Recursively save any references and generic references on the\n document.\n \"\"\"\n _refs = kwargs.get(\"_refs\") or []\n\n ReferenceField = _import_class(\"ReferenceField\")\n GenericReferenceField = _import_class(\"GenericReferenceField\")\n\n for name, cls in self._fields.items():\n if not isinstance(cls, (ReferenceField, GenericReferenceField)):\n continue\n\n ref = self._data.get(name)\n if not ref or isinstance(ref, DBRef):\n continue\n\n if not getattr(ref, \"_changed_fields\", True):\n continue\n\n ref_id = f\"{ref.__class__.__name__},{str(ref._data)}\"\n if ref and ref_id not in _refs:\n _refs.append(ref_id)\n kwargs[\"_refs\"] = _refs\n ref.save(**kwargs)\n ref._changed_fields = []\n\n @property\n def _qs(self):\n \"\"\"Return the default queryset corresponding to this document.\"\"\"\n if not hasattr(self, \"__objects\"):\n self.__objects = QuerySet(self, self._get_collection())\n return self.__objects\n\n @property\n def _object_key(self):\n \"\"\"Return a query dict that can be used to fetch this document.\n\n Most of the time the dict is a simple PK lookup, but in case of\n a sharded collection with a compound shard key, it can contain a more\n complex query.\n\n Note that the dict returned by this method uses MongoEngine field\n names instead of PyMongo field names (e.g. \"pk\" instead of \"_id\",\n \"some__nested__field\" instead of \"some.nested.field\", etc.).\n \"\"\"\n select_dict = {\"pk\": self.pk}\n shard_key = self.__class__._meta.get(\"shard_key\", tuple())\n for k in shard_key:\n val = self\n field_parts = k.split(\".\")\n for part in field_parts:\n val = getattr(val, part)\n select_dict[\"__\".join(field_parts)] = val\n return select_dict\n\n def update(self, **kwargs):\n \"\"\"Performs an update on the :class:`~mongoengine.Document`\n A convenience wrapper to :meth:`~mongoengine.QuerySet.update`.\n\n Raises :class:`OperationError` if called on an object that has not yet\n been saved.\n \"\"\"\n if self.pk is None:\n if kwargs.get(\"upsert\", False):\n query = self.to_mongo()\n if \"_cls\" in query:\n del query[\"_cls\"]\n return self._qs.filter(**query).update_one(**kwargs)\n else:\n raise OperationError(\"attempt to update a document not yet saved\")\n\n # Need to add shard key to query, or you get an error\n return self._qs.filter(**self._object_key).update_one(**kwargs)\n\n def delete(self, signal_kwargs=None, **write_concern):\n \"\"\"Delete the :class:`~mongoengine.Document` from the database. 
This\n will only take effect if the document has been previously saved.\n\n :param signal_kwargs: (optional) kwargs dictionary to be passed to\n the signal calls.\n :param write_concern: Extra keyword arguments are passed down which\n will be used as options for the resultant ``getLastError`` command.\n For example, ``save(..., w: 2, fsync: True)`` will\n wait until at least two servers have recorded the write and\n will force an fsync on the primary server.\n \"\"\"\n signal_kwargs = signal_kwargs or {}\n signals.pre_delete.send(self.__class__, document=self, **signal_kwargs)\n\n # Delete FileFields separately\n FileField = _import_class(\"FileField\")\n for name, field in self._fields.items():\n if isinstance(field, FileField):\n getattr(self, name).delete()\n\n try:\n self._qs.filter(**self._object_key).delete(\n write_concern=write_concern, _from_doc_delete=True\n )\n except pymongo.errors.OperationFailure as err:\n message = \"Could not delete document (%s)\" % err.args\n raise OperationError(message)\n signals.post_delete.send(self.__class__, document=self, **signal_kwargs)\n\n def switch_db(self, db_alias, keep_created=True):\n \"\"\"\n Temporarily switch the database for a document instance.\n\n Only really useful for archiving off data and calling `save()`::\n\n user = User.objects.get(id=user_id)\n user.switch_db('archive-db')\n user.save()\n\n :param str db_alias: The database alias to use for saving the document\n\n :param bool keep_created: keep self._created value after switching db, else is reset to True\n\n\n .. seealso::\n Use :class:`~mongoengine.context_managers.switch_collection`\n if you need to read from another collection\n \"\"\"\n with switch_db(self.__class__, db_alias) as cls:\n collection = cls._get_collection()\n db = cls._get_db()\n self._get_collection = lambda: collection\n self._get_db = lambda: db\n self._collection = collection\n self._created = True if not keep_created else self._created\n self.__objects = self._qs\n self.__objects._collection_obj = collection\n return self\n\n def switch_collection(self, collection_name, keep_created=True):\n \"\"\"\n Temporarily switch the collection for a document instance.\n\n Only really useful for archiving off data and calling `save()`::\n\n user = User.objects.get(id=user_id)\n user.switch_collection('old-users')\n user.save()\n\n :param str collection_name: The database alias to use for saving the\n document\n\n :param bool keep_created: keep self._created value after switching collection, else is reset to True\n\n\n .. 
seealso::\n Use :class:`~mongoengine.context_managers.switch_db`\n if you need to read from another database\n \"\"\"\n with switch_collection(self.__class__, collection_name) as cls:\n collection = cls._get_collection()\n self._get_collection = lambda: collection\n self._collection = collection\n self._created = True if not keep_created else self._created\n self.__objects = self._qs\n self.__objects._collection_obj = collection\n return self\n\n def select_related(self, max_depth=1):\n \"\"\"Handles dereferencing of :class:`~bson.dbref.DBRef` objects to\n a maximum depth in order to cut down the number queries to mongodb.\n \"\"\"\n DeReference = _import_class(\"DeReference\")\n DeReference()([self], max_depth + 1)\n return self\n\n def reload(self, *fields, **kwargs):\n \"\"\"Reloads all attributes from the database.\n\n :param fields: (optional) args list of fields to reload\n :param max_depth: (optional) depth of dereferencing to follow\n \"\"\"\n max_depth = 1\n if fields and isinstance(fields[0], int):\n max_depth = fields[0]\n fields = fields[1:]\n elif \"max_depth\" in kwargs:\n max_depth = kwargs[\"max_depth\"]\n\n if self.pk is None:\n raise self.DoesNotExist(\"Document does not exist\")\n\n obj = (\n self._qs.read_preference(ReadPreference.PRIMARY)\n .filter(**self._object_key)\n .only(*fields)\n .limit(1)\n .select_related(max_depth=max_depth)\n )\n\n if obj:\n obj = obj[0]\n else:\n raise self.DoesNotExist(\"Document does not exist\")\n for field in obj._data:\n if not fields or field in fields:\n try:\n setattr(self, field, self._reload(field, obj[field]))\n except (KeyError, AttributeError):\n try:\n # If field is a special field, e.g. items is stored as _reserved_items,\n # a KeyError is thrown. So try to retrieve the field from _data\n setattr(self, field, self._reload(field, obj._data.get(field)))\n except KeyError:\n # If field is removed from the database while the object\n # is in memory, a reload would cause a KeyError\n # i.e. 
obj.update(unset__field=1) followed by obj.reload()\n delattr(self, field)\n\n self._changed_fields = (\n list(set(self._changed_fields) - set(fields))\n if fields\n else obj._changed_fields\n )\n self._created = False\n return self\n\n def _reload(self, key, value):\n \"\"\"Used by :meth:`~mongoengine.Document.reload` to ensure the\n correct instance is linked to self.\n \"\"\"\n if isinstance(value, BaseDict):\n value = [(k, self._reload(k, v)) for k, v in value.items()]\n value = BaseDict(value, self, key)\n elif isinstance(value, EmbeddedDocumentList):\n value = [self._reload(key, v) for v in value]\n value = EmbeddedDocumentList(value, self, key)\n elif isinstance(value, BaseList):\n value = [self._reload(key, v) for v in value]\n value = BaseList(value, self, key)\n elif isinstance(value, (EmbeddedDocument, DynamicEmbeddedDocument)):\n value._instance = None\n value._changed_fields = []\n return value\n\n def to_dbref(self):\n \"\"\"Returns an instance of :class:`~bson.dbref.DBRef` useful in\n `__raw__` queries.\"\"\"\n if self.pk is None:\n msg = \"Only saved documents can have a valid dbref\"\n raise OperationError(msg)\n return DBRef(self.__class__._get_collection_name(), self.pk)\n\n @classmethod\n def register_delete_rule(cls, document_cls, field_name, rule):\n \"\"\"This method registers the delete rules to apply when removing this\n object.\n \"\"\"\n classes = [\n get_document(class_name)\n for class_name in cls._subclasses\n if class_name != cls.__name__\n ] + [cls]\n documents = [\n get_document(class_name)\n for class_name in document_cls._subclasses\n if class_name != document_cls.__name__\n ] + [document_cls]\n\n for klass in classes:\n for document_cls in documents:\n delete_rules = klass._meta.get(\"delete_rules\") or {}\n delete_rules[(document_cls, field_name)] = rule\n klass._meta[\"delete_rules\"] = delete_rules\n\n @classmethod\n def drop_collection(cls):\n \"\"\"Drops the entire collection associated with this\n :class:`~mongoengine.Document` type from the database.\n\n Raises :class:`OperationError` if the document has no collection set\n (i.g. if it is `abstract`)\n \"\"\"\n coll_name = cls._get_collection_name()\n if not coll_name:\n raise OperationError(\n \"Document %s has no collection defined (is it abstract ?)\" % cls\n )\n cls._collection = None\n db = cls._get_db()\n db.drop_collection(coll_name)\n\n @classmethod\n def create_index(cls, keys, background=False, **kwargs):\n \"\"\"Creates the given indexes if required.\n\n :param keys: a single index key or a list of index keys (to\n construct a multi-field index); keys may be prefixed with a **+**\n or a **-** to determine the index ordering\n :param background: Allows index creation in the background\n \"\"\"\n index_spec = cls._build_index_spec(keys)\n index_spec = index_spec.copy()\n fields = index_spec.pop(\"fields\")\n index_spec[\"background\"] = background\n index_spec.update(kwargs)\n\n return cls._get_collection().create_index(fields, **index_spec)\n\n @classmethod\n def ensure_index(cls, key_or_list, background=False, **kwargs):\n \"\"\"Ensure that the given indexes are in place. 
Deprecated in favour\n of create_index.\n\n :param key_or_list: a single index key or a list of index keys (to\n construct a multi-field index); keys may be prefixed with a **+**\n or a **-** to determine the index ordering\n :param background: Allows index creation in the background\n \"\"\"\n return cls.create_index(key_or_list, background=background, **kwargs)\n\n @classmethod\n def ensure_indexes(cls):\n \"\"\"Checks the document meta data and ensures all the indexes exist.\n\n Global defaults can be set in the meta - see :doc:`guide/defining-documents`\n\n .. note:: You can disable automatic index creation by setting\n `auto_create_index` to False in the documents meta data\n \"\"\"\n background = cls._meta.get(\"index_background\", False)\n index_opts = cls._meta.get(\"index_opts\") or {}\n index_cls = cls._meta.get(\"index_cls\", True)\n\n collection = cls._get_collection()\n # 746: when connection is via mongos, the read preference is not necessarily an indication that\n # this code runs on a secondary\n if not collection.is_mongos and collection.read_preference > 1:\n return\n\n # determine if an index which we are creating includes\n # _cls as its first field; if so, we can avoid creating\n # an extra index on _cls, as mongodb will use the existing\n # index to service queries against _cls\n cls_indexed = False\n\n # Ensure document-defined indexes are created\n if cls._meta[\"index_specs\"]:\n index_spec = cls._meta[\"index_specs\"]\n for spec in index_spec:\n spec = spec.copy()\n fields = spec.pop(\"fields\")\n cls_indexed = cls_indexed or includes_cls(fields)\n opts = index_opts.copy()\n opts.update(spec)\n\n # we shouldn't pass 'cls' to the collection.ensureIndex options\n # because of https://jira.mongodb.org/browse/SERVER-769\n if \"cls\" in opts:\n del opts[\"cls\"]\n\n collection.create_index(fields, background=background, **opts)\n\n # If _cls is being used (for polymorphism), it needs an index,\n # only if another index doesn't begin with _cls\n if index_cls and not cls_indexed and cls._meta.get(\"allow_inheritance\"):\n\n # we shouldn't pass 'cls' to the collection.ensureIndex options\n # because of https://jira.mongodb.org/browse/SERVER-769\n if \"cls\" in index_opts:\n del index_opts[\"cls\"]\n\n collection.create_index(\"_cls\", background=background, **index_opts)\n\n @classmethod\n def list_indexes(cls):\n \"\"\"Lists all of the indexes that should be created for given\n collection. 
It includes all the indexes from super- and sub-classes.\n \"\"\"\n if cls._meta.get(\"abstract\"):\n return []\n\n # get all the base classes, subclasses and siblings\n classes = []\n\n def get_classes(cls):\n\n if cls not in classes and isinstance(cls, TopLevelDocumentMetaclass):\n classes.append(cls)\n\n for base_cls in cls.__bases__:\n if (\n isinstance(base_cls, TopLevelDocumentMetaclass)\n and base_cls != Document\n and not base_cls._meta.get(\"abstract\")\n and base_cls._get_collection().full_name\n == cls._get_collection().full_name\n and base_cls not in classes\n ):\n classes.append(base_cls)\n get_classes(base_cls)\n for subclass in cls.__subclasses__():\n if (\n isinstance(base_cls, TopLevelDocumentMetaclass)\n and subclass._get_collection().full_name\n == cls._get_collection().full_name\n and subclass not in classes\n ):\n classes.append(subclass)\n get_classes(subclass)\n\n get_classes(cls)\n\n # get the indexes spec for all of the gathered classes\n def get_indexes_spec(cls):\n indexes = []\n\n if cls._meta[\"index_specs\"]:\n index_spec = cls._meta[\"index_specs\"]\n for spec in index_spec:\n spec = spec.copy()\n fields = spec.pop(\"fields\")\n indexes.append(fields)\n return indexes\n\n indexes = []\n for klass in classes:\n for index in get_indexes_spec(klass):\n if index not in indexes:\n indexes.append(index)\n\n # finish up by appending { '_id': 1 } and { '_cls': 1 }, if needed\n if [(\"_id\", 1)] not in indexes:\n indexes.append([(\"_id\", 1)])\n if cls._meta.get(\"index_cls\", True) and cls._meta.get(\"allow_inheritance\"):\n indexes.append([(\"_cls\", 1)])\n\n return indexes\n\n @classmethod\n def compare_indexes(cls):\n \"\"\"Compares the indexes defined in MongoEngine with the ones\n existing in the database. Returns any missing/extra indexes.\n \"\"\"\n\n required = cls.list_indexes()\n\n existing = []\n for info in cls._get_collection().index_information().values():\n if \"_fts\" in info[\"key\"][0]:\n index_type = info[\"key\"][0][1]\n text_index_fields = info.get(\"weights\").keys()\n existing.append([(key, index_type) for key in text_index_fields])\n else:\n existing.append(info[\"key\"])\n missing = [index for index in required if index not in existing]\n extra = [index for index in existing if index not in required]\n\n # if { _cls: 1 } is missing, make sure it's *really* necessary\n if [(\"_cls\", 1)] in missing:\n cls_obsolete = False\n for index in existing:\n if includes_cls(index) and index not in extra:\n cls_obsolete = True\n break\n if cls_obsolete:\n missing.remove([(\"_cls\", 1)])\n\n return {\"missing\": missing, \"extra\": extra}\n\n\nclass DynamicDocument(Document, metaclass=TopLevelDocumentMetaclass):\n \"\"\"A Dynamic Document class allowing flexible, expandable and uncontrolled\n schemas. As a :class:`~mongoengine.Document` subclass, acts in the same\n way as an ordinary document but has expanded style properties. Any data\n passed or set against the :class:`~mongoengine.DynamicDocument` that is\n not a field is automatically converted into a\n :class:`~mongoengine.fields.DynamicField` and data can be attributed to that\n field.\n\n .. 
note::\n\n There is one caveat on Dynamic Documents: undeclared fields cannot start with `_`\n \"\"\"\n\n # my_metaclass is defined so that metaclass can be queried in Python 2 & 3\n my_metaclass = TopLevelDocumentMetaclass\n\n _dynamic = True\n\n def __delattr__(self, *args, **kwargs):\n \"\"\"Delete the attribute by setting to None and allowing _delta\n to unset it.\n \"\"\"\n field_name = args[0]\n if field_name in self._dynamic_fields:\n setattr(self, field_name, None)\n self._dynamic_fields[field_name].null = False\n else:\n super().__delattr__(*args, **kwargs)\n\n\nclass DynamicEmbeddedDocument(EmbeddedDocument, metaclass=DocumentMetaclass):\n \"\"\"A Dynamic Embedded Document class allowing flexible, expandable and\n uncontrolled schemas. See :class:`~mongoengine.DynamicDocument` for more\n information about dynamic documents.\n \"\"\"\n\n # my_metaclass is defined so that metaclass can be queried in Python 2 & 3\n my_metaclass = DocumentMetaclass\n\n _dynamic = True\n\n def __delattr__(self, *args, **kwargs):\n \"\"\"Delete the attribute by setting to None and allowing _delta\n to unset it.\n \"\"\"\n field_name = args[0]\n if field_name in self._fields:\n default = self._fields[field_name].default\n if callable(default):\n default = default()\n setattr(self, field_name, default)\n else:\n setattr(self, field_name, None)\n\n\nclass MapReduceDocument:\n \"\"\"A document returned from a map/reduce query.\n\n :param collection: An instance of :class:`~pymongo.Collection`\n :param key: Document/result key, often an instance of\n :class:`~bson.objectid.ObjectId`. If supplied as\n an ``ObjectId`` found in the given ``collection``,\n the object can be accessed via the ``object`` property.\n :param value: The result(s) for this key.\n \"\"\"\n\n def __init__(self, document, collection, key, value):\n self._document = document\n self._collection = collection\n self.key = key\n self.value = value\n\n @property\n def object(self):\n \"\"\"Lazy-load the object referenced by ``self.key``. 
``self.key``\n should be the ``primary_key``.\n \"\"\"\n id_field = self._document()._meta[\"id_field\"]\n id_field_type = type(id_field)\n\n if not isinstance(self.key, id_field_type):\n try:\n self.key = id_field_type(self.key)\n except Exception:\n raise Exception(\"Could not cast key as %s\" % id_field_type.__name__)\n\n if not hasattr(self, \"_key_object\"):\n self._key_object = self._document.objects.with_id(self.key)\n return self._key_object\n return self._key_object\n", "path": "mongoengine/document.py"}], "after_files": [{"content": "import re\n\nimport pymongo\nfrom bson.dbref import DBRef\nfrom pymongo.read_preferences import ReadPreference\n\nfrom mongoengine import signals\nfrom mongoengine.base import (\n BaseDict,\n BaseDocument,\n BaseList,\n DocumentMetaclass,\n EmbeddedDocumentList,\n TopLevelDocumentMetaclass,\n get_document,\n)\nfrom mongoengine.common import _import_class\nfrom mongoengine.connection import DEFAULT_CONNECTION_NAME, get_db\nfrom mongoengine.context_managers import (\n set_write_concern,\n switch_collection,\n switch_db,\n)\nfrom mongoengine.errors import (\n InvalidDocumentError,\n InvalidQueryError,\n SaveConditionError,\n)\nfrom mongoengine.pymongo_support import list_collection_names\nfrom mongoengine.queryset import (\n NotUniqueError,\n OperationError,\n QuerySet,\n transform,\n)\n\n__all__ = (\n \"Document\",\n \"EmbeddedDocument\",\n \"DynamicDocument\",\n \"DynamicEmbeddedDocument\",\n \"OperationError\",\n \"InvalidCollectionError\",\n \"NotUniqueError\",\n \"MapReduceDocument\",\n)\n\n\ndef includes_cls(fields):\n \"\"\"Helper function used for ensuring and comparing indexes.\"\"\"\n first_field = None\n if len(fields):\n if isinstance(fields[0], str):\n first_field = fields[0]\n elif isinstance(fields[0], (list, tuple)) and len(fields[0]):\n first_field = fields[0][0]\n return first_field == \"_cls\"\n\n\nclass InvalidCollectionError(Exception):\n pass\n\n\nclass EmbeddedDocument(BaseDocument, metaclass=DocumentMetaclass):\n r\"\"\"A :class:`~mongoengine.Document` that isn't stored in its own\n collection. :class:`~mongoengine.EmbeddedDocument`\\ s should be used as\n fields on :class:`~mongoengine.Document`\\ s through the\n :class:`~mongoengine.EmbeddedDocumentField` field type.\n\n A :class:`~mongoengine.EmbeddedDocument` subclass may be itself subclassed,\n to create a specialised version of the embedded document that will be\n stored in the same collection. To facilitate this behaviour a `_cls`\n field is added to documents (hidden though the MongoEngine interface).\n To enable this behaviour set :attr:`allow_inheritance` to ``True`` in the\n :attr:`meta` dictionary.\n \"\"\"\n\n __slots__ = (\"_instance\",)\n\n # my_metaclass is defined so that metaclass can be queried in Python 2 & 3\n my_metaclass = DocumentMetaclass\n\n # A generic embedded document doesn't have any immutable properties\n # that describe it uniquely, hence it shouldn't be hashable. 
You can\n # define your own __hash__ method on a subclass if you need your\n # embedded documents to be hashable.\n __hash__ = None\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self._instance = None\n self._changed_fields = []\n\n def __eq__(self, other):\n if isinstance(other, self.__class__):\n return self._data == other._data\n return False\n\n def __ne__(self, other):\n return not self.__eq__(other)\n\n def to_mongo(self, *args, **kwargs):\n data = super().to_mongo(*args, **kwargs)\n\n # remove _id from the SON if it's in it and it's None\n if \"_id\" in data and data[\"_id\"] is None:\n del data[\"_id\"]\n\n return data\n\n\nclass Document(BaseDocument, metaclass=TopLevelDocumentMetaclass):\n \"\"\"The base class used for defining the structure and properties of\n collections of documents stored in MongoDB. Inherit from this class, and\n add fields as class attributes to define a document's structure.\n Individual documents may then be created by making instances of the\n :class:`~mongoengine.Document` subclass.\n\n By default, the MongoDB collection used to store documents created using a\n :class:`~mongoengine.Document` subclass will be the name of the subclass\n converted to snake_case. A different collection may be specified by\n providing :attr:`collection` to the :attr:`meta` dictionary in the class\n definition.\n\n A :class:`~mongoengine.Document` subclass may be itself subclassed, to\n create a specialised version of the document that will be stored in the\n same collection. To facilitate this behaviour a `_cls`\n field is added to documents (hidden though the MongoEngine interface).\n To enable this behaviourset :attr:`allow_inheritance` to ``True`` in the\n :attr:`meta` dictionary.\n\n A :class:`~mongoengine.Document` may use a **Capped Collection** by\n specifying :attr:`max_documents` and :attr:`max_size` in the :attr:`meta`\n dictionary. :attr:`max_documents` is the maximum number of documents that\n is allowed to be stored in the collection, and :attr:`max_size` is the\n maximum size of the collection in bytes. :attr:`max_size` is rounded up\n to the next multiple of 256 by MongoDB internally and mongoengine before.\n Use also a multiple of 256 to avoid confusions. If :attr:`max_size` is not\n specified and :attr:`max_documents` is, :attr:`max_size` defaults to\n 10485760 bytes (10MB).\n\n Indexes may be created by specifying :attr:`indexes` in the :attr:`meta`\n dictionary. The value should be a list of field names or tuples of field\n names. Index direction may be specified by prefixing the field names with\n a **+** or **-** sign.\n\n Automatic index creation can be disabled by specifying\n :attr:`auto_create_index` in the :attr:`meta` dictionary. If this is set to\n False then indexes will not be created by MongoEngine. This is useful in\n production systems where index creation is performed as part of a\n deployment system.\n\n By default, _cls will be added to the start of every index (that\n doesn't contain a list) if allow_inheritance is True. 
This can be\n disabled by either setting cls to False on the specific index or\n by setting index_cls to False on the meta dictionary for the document.\n\n By default, any extra attribute existing in stored data but not declared\n in your model will raise a :class:`~mongoengine.FieldDoesNotExist` error.\n This can be disabled by setting :attr:`strict` to ``False``\n in the :attr:`meta` dictionary.\n \"\"\"\n\n # my_metaclass is defined so that metaclass can be queried in Python 2 & 3\n my_metaclass = TopLevelDocumentMetaclass\n\n __slots__ = (\"__objects\",)\n\n @property\n def pk(self):\n \"\"\"Get the primary key.\"\"\"\n if \"id_field\" not in self._meta:\n return None\n return getattr(self, self._meta[\"id_field\"])\n\n @pk.setter\n def pk(self, value):\n \"\"\"Set the primary key.\"\"\"\n return setattr(self, self._meta[\"id_field\"], value)\n\n def __hash__(self):\n \"\"\"Return the hash based on the PK of this document. If it's new\n and doesn't have a PK yet, return the default object hash instead.\n \"\"\"\n if self.pk is None:\n return super(BaseDocument, self).__hash__()\n\n return hash(self.pk)\n\n @classmethod\n def _get_db(cls):\n \"\"\"Some Model using other db_alias\"\"\"\n return get_db(cls._meta.get(\"db_alias\", DEFAULT_CONNECTION_NAME))\n\n @classmethod\n def _disconnect(cls):\n \"\"\"Detach the Document class from the (cached) database collection\"\"\"\n cls._collection = None\n\n @classmethod\n def _get_collection(cls):\n \"\"\"Return the PyMongo collection corresponding to this document.\n\n Upon first call, this method:\n 1. Initializes a :class:`~pymongo.collection.Collection` corresponding\n to this document.\n 2. Creates indexes defined in this document's :attr:`meta` dictionary.\n This happens only if `auto_create_index` is True.\n \"\"\"\n if not hasattr(cls, \"_collection\") or cls._collection is None:\n # Get the collection, either capped or regular.\n if cls._meta.get(\"max_size\") or cls._meta.get(\"max_documents\"):\n cls._collection = cls._get_capped_collection()\n else:\n db = cls._get_db()\n collection_name = cls._get_collection_name()\n cls._collection = db[collection_name]\n\n # Ensure indexes on the collection unless auto_create_index was\n # set to False.\n # Also there is no need to ensure indexes on slave.\n db = cls._get_db()\n if cls._meta.get(\"auto_create_index\", True) and db.client.is_primary:\n cls.ensure_indexes()\n\n return cls._collection\n\n @classmethod\n def _get_capped_collection(cls):\n \"\"\"Create a new or get an existing capped PyMongo collection.\"\"\"\n db = cls._get_db()\n collection_name = cls._get_collection_name()\n\n # Get max document limit and max byte size from meta.\n max_size = cls._meta.get(\"max_size\") or 10 * 2 ** 20 # 10MB default\n max_documents = cls._meta.get(\"max_documents\")\n\n # MongoDB will automatically raise the size to make it a multiple of\n # 256 bytes. We raise it here ourselves to be able to reliably compare\n # the options below.\n if max_size % 256:\n max_size = (max_size // 256 + 1) * 256\n\n # If the collection already exists and has different options\n # (i.e. 
isn't capped or has different max/size), raise an error.\n if collection_name in list_collection_names(\n db, include_system_collections=True\n ):\n collection = db[collection_name]\n options = collection.options()\n if options.get(\"max\") != max_documents or options.get(\"size\") != max_size:\n raise InvalidCollectionError(\n 'Cannot create collection \"{}\" as a capped '\n \"collection as it already exists\".format(cls._collection)\n )\n\n return collection\n\n # Create a new capped collection.\n opts = {\"capped\": True, \"size\": max_size}\n if max_documents:\n opts[\"max\"] = max_documents\n\n return db.create_collection(collection_name, **opts)\n\n def to_mongo(self, *args, **kwargs):\n data = super().to_mongo(*args, **kwargs)\n\n # If '_id' is None, try and set it from self._data. If that\n # doesn't exist either, remove '_id' from the SON completely.\n if data[\"_id\"] is None:\n if self._data.get(\"id\") is None:\n del data[\"_id\"]\n else:\n data[\"_id\"] = self._data[\"id\"]\n\n return data\n\n def modify(self, query=None, **update):\n \"\"\"Perform an atomic update of the document in the database and reload\n the document object using updated version.\n\n Returns True if the document has been updated or False if the document\n in the database doesn't match the query.\n\n .. note:: All unsaved changes that have been made to the document are\n rejected if the method returns True.\n\n :param query: the update will be performed only if the document in the\n database matches the query\n :param update: Django-style update keyword arguments\n \"\"\"\n if query is None:\n query = {}\n\n if self.pk is None:\n raise InvalidDocumentError(\"The document does not have a primary key.\")\n\n id_field = self._meta[\"id_field\"]\n query = query.copy() if isinstance(query, dict) else query.to_query(self)\n\n if id_field not in query:\n query[id_field] = self.pk\n elif query[id_field] != self.pk:\n raise InvalidQueryError(\n \"Invalid document modify query: it must modify only this document.\"\n )\n\n # Need to add shard key to query, or you get an error\n query.update(self._object_key)\n\n updated = self._qs(**query).modify(new=True, **update)\n if updated is None:\n return False\n\n for field in self._fields_ordered:\n setattr(self, field, self._reload(field, updated[field]))\n\n self._changed_fields = updated._changed_fields\n self._created = False\n\n return True\n\n def save(\n self,\n force_insert=False,\n validate=True,\n clean=True,\n write_concern=None,\n cascade=None,\n cascade_kwargs=None,\n _refs=None,\n save_condition=None,\n signal_kwargs=None,\n **kwargs,\n ):\n \"\"\"Save the :class:`~mongoengine.Document` to the database. If the\n document already exists, it will be updated, otherwise it will be\n created. Returns the saved object instance.\n\n :param force_insert: only try to create a new document, don't allow\n updates of existing documents.\n :param validate: validates the document; set to ``False`` to skip.\n :param clean: call the document clean method, requires `validate` to be\n True.\n :param write_concern: Extra keyword arguments are passed down to\n :meth:`~pymongo.collection.Collection.save` OR\n :meth:`~pymongo.collection.Collection.insert`\n which will be used as options for the resultant\n ``getLastError`` command. For example,\n ``save(..., write_concern={w: 2, fsync: True}, ...)`` will\n wait until at least two servers have recorded the write and\n will force an fsync on the primary server.\n :param cascade: Sets the flag for cascading saves. 
You can set a\n default by setting \"cascade\" in the document __meta__\n :param cascade_kwargs: (optional) kwargs dictionary to be passed throw\n to cascading saves. Implies ``cascade=True``.\n :param _refs: A list of processed references used in cascading saves\n :param save_condition: only perform save if matching record in db\n satisfies condition(s) (e.g. version number).\n Raises :class:`OperationError` if the conditions are not satisfied\n :param signal_kwargs: (optional) kwargs dictionary to be passed to\n the signal calls.\n\n .. versionchanged:: 0.5\n In existing documents it only saves changed fields using\n set / unset. Saves are cascaded and any\n :class:`~bson.dbref.DBRef` objects that have changes are\n saved as well.\n .. versionchanged:: 0.6\n Added cascading saves\n .. versionchanged:: 0.8\n Cascade saves are optional and default to False. If you want\n fine grain control then you can turn off using document\n meta['cascade'] = True. Also you can pass different kwargs to\n the cascade save using cascade_kwargs which overwrites the\n existing kwargs with custom values.\n \"\"\"\n signal_kwargs = signal_kwargs or {}\n\n if self._meta.get(\"abstract\"):\n raise InvalidDocumentError(\"Cannot save an abstract document.\")\n\n signals.pre_save.send(self.__class__, document=self, **signal_kwargs)\n\n if validate:\n self.validate(clean=clean)\n\n if write_concern is None:\n write_concern = {}\n\n doc_id = self.to_mongo(fields=[self._meta[\"id_field\"]])\n created = \"_id\" not in doc_id or self._created or force_insert\n\n signals.pre_save_post_validation.send(\n self.__class__, document=self, created=created, **signal_kwargs\n )\n # it might be refreshed by the pre_save_post_validation hook, e.g., for etag generation\n doc = self.to_mongo()\n\n if self._meta.get(\"auto_create_index\", True):\n self.ensure_indexes()\n\n try:\n # Save a new document or update an existing one\n if created:\n object_id = self._save_create(doc, force_insert, write_concern)\n else:\n object_id, created = self._save_update(\n doc, save_condition, write_concern\n )\n\n if cascade is None:\n cascade = self._meta.get(\"cascade\", False) or cascade_kwargs is not None\n\n if cascade:\n kwargs = {\n \"force_insert\": force_insert,\n \"validate\": validate,\n \"write_concern\": write_concern,\n \"cascade\": cascade,\n }\n if cascade_kwargs: # Allow granular control over cascades\n kwargs.update(cascade_kwargs)\n kwargs[\"_refs\"] = _refs\n self.cascade_save(**kwargs)\n\n except pymongo.errors.DuplicateKeyError as err:\n message = \"Tried to save duplicate unique keys (%s)\"\n raise NotUniqueError(message % err)\n except pymongo.errors.OperationFailure as err:\n message = \"Could not save document (%s)\"\n if re.match(\"^E1100[01] duplicate key\", str(err)):\n # E11000 - duplicate key error index\n # E11001 - duplicate key on update\n message = \"Tried to save duplicate unique keys (%s)\"\n raise NotUniqueError(message % err)\n raise OperationError(message % err)\n\n # Make sure we store the PK on this document now that it's saved\n id_field = self._meta[\"id_field\"]\n if created or id_field not in self._meta.get(\"shard_key\", []):\n self[id_field] = self._fields[id_field].to_python(object_id)\n\n signals.post_save.send(\n self.__class__, document=self, created=created, **signal_kwargs\n )\n\n self._clear_changed_fields()\n self._created = False\n\n return self\n\n def _save_create(self, doc, force_insert, write_concern):\n \"\"\"Save a new document.\n\n Helper method, should only be used inside save().\n 
\"\"\"\n collection = self._get_collection()\n with set_write_concern(collection, write_concern) as wc_collection:\n if force_insert:\n return wc_collection.insert_one(doc).inserted_id\n # insert_one will provoke UniqueError alongside save does not\n # therefore, it need to catch and call replace_one.\n if \"_id\" in doc:\n select_dict = {\"_id\": doc[\"_id\"]}\n select_dict = self._integrate_shard_key(doc, select_dict)\n raw_object = wc_collection.find_one_and_replace(select_dict, doc)\n if raw_object:\n return doc[\"_id\"]\n\n object_id = wc_collection.insert_one(doc).inserted_id\n\n return object_id\n\n def _get_update_doc(self):\n \"\"\"Return a dict containing all the $set and $unset operations\n that should be sent to MongoDB based on the changes made to this\n Document.\n \"\"\"\n updates, removals = self._delta()\n\n update_doc = {}\n if updates:\n update_doc[\"$set\"] = updates\n if removals:\n update_doc[\"$unset\"] = removals\n\n return update_doc\n\n def _integrate_shard_key(self, doc, select_dict):\n \"\"\"Integrates the collection's shard key to the `select_dict`, which will be used for the query.\n The value from the shard key is taken from the `doc` and finally the select_dict is returned.\n \"\"\"\n\n # Need to add shard key to query, or you get an error\n shard_key = self._meta.get(\"shard_key\", tuple())\n for k in shard_key:\n path = self._lookup_field(k.split(\".\"))\n actual_key = [p.db_field for p in path]\n val = doc\n for ak in actual_key:\n val = val[ak]\n select_dict[\".\".join(actual_key)] = val\n\n return select_dict\n\n def _save_update(self, doc, save_condition, write_concern):\n \"\"\"Update an existing document.\n\n Helper method, should only be used inside save().\n \"\"\"\n collection = self._get_collection()\n object_id = doc[\"_id\"]\n created = False\n\n select_dict = {}\n if save_condition is not None:\n select_dict = transform.query(self.__class__, **save_condition)\n\n select_dict[\"_id\"] = object_id\n\n select_dict = self._integrate_shard_key(doc, select_dict)\n\n update_doc = self._get_update_doc()\n if update_doc:\n upsert = save_condition is None\n with set_write_concern(collection, write_concern) as wc_collection:\n last_error = wc_collection.update_one(\n select_dict, update_doc, upsert=upsert\n ).raw_result\n if not upsert and last_error[\"n\"] == 0:\n raise SaveConditionError(\n \"Race condition preventing document update detected\"\n )\n if last_error is not None:\n updated_existing = last_error.get(\"updatedExisting\")\n if updated_existing is False:\n created = True\n # !!! This is bad, means we accidentally created a new,\n # potentially corrupted document. 
See\n # https://github.com/MongoEngine/mongoengine/issues/564\n\n return object_id, created\n\n def cascade_save(self, **kwargs):\n \"\"\"Recursively save any references and generic references on the\n document.\n \"\"\"\n _refs = kwargs.get(\"_refs\") or []\n\n ReferenceField = _import_class(\"ReferenceField\")\n GenericReferenceField = _import_class(\"GenericReferenceField\")\n\n for name, cls in self._fields.items():\n if not isinstance(cls, (ReferenceField, GenericReferenceField)):\n continue\n\n ref = self._data.get(name)\n if not ref or isinstance(ref, DBRef):\n continue\n\n if not getattr(ref, \"_changed_fields\", True):\n continue\n\n ref_id = f\"{ref.__class__.__name__},{str(ref._data)}\"\n if ref and ref_id not in _refs:\n _refs.append(ref_id)\n kwargs[\"_refs\"] = _refs\n ref.save(**kwargs)\n ref._changed_fields = []\n\n @property\n def _qs(self):\n \"\"\"Return the default queryset corresponding to this document.\"\"\"\n if not hasattr(self, \"__objects\"):\n self.__objects = QuerySet(self.__class__, self._get_collection())\n return self.__objects\n\n @property\n def _object_key(self):\n \"\"\"Return a query dict that can be used to fetch this document.\n\n Most of the time the dict is a simple PK lookup, but in case of\n a sharded collection with a compound shard key, it can contain a more\n complex query.\n\n Note that the dict returned by this method uses MongoEngine field\n names instead of PyMongo field names (e.g. \"pk\" instead of \"_id\",\n \"some__nested__field\" instead of \"some.nested.field\", etc.).\n \"\"\"\n select_dict = {\"pk\": self.pk}\n shard_key = self.__class__._meta.get(\"shard_key\", tuple())\n for k in shard_key:\n val = self\n field_parts = k.split(\".\")\n for part in field_parts:\n val = getattr(val, part)\n select_dict[\"__\".join(field_parts)] = val\n return select_dict\n\n def update(self, **kwargs):\n \"\"\"Performs an update on the :class:`~mongoengine.Document`\n A convenience wrapper to :meth:`~mongoengine.QuerySet.update`.\n\n Raises :class:`OperationError` if called on an object that has not yet\n been saved.\n \"\"\"\n if self.pk is None:\n if kwargs.get(\"upsert\", False):\n query = self.to_mongo()\n if \"_cls\" in query:\n del query[\"_cls\"]\n return self._qs.filter(**query).update_one(**kwargs)\n else:\n raise OperationError(\"attempt to update a document not yet saved\")\n\n # Need to add shard key to query, or you get an error\n return self._qs.filter(**self._object_key).update_one(**kwargs)\n\n def delete(self, signal_kwargs=None, **write_concern):\n \"\"\"Delete the :class:`~mongoengine.Document` from the database. 
This\n will only take effect if the document has been previously saved.\n\n :param signal_kwargs: (optional) kwargs dictionary to be passed to\n the signal calls.\n :param write_concern: Extra keyword arguments are passed down which\n will be used as options for the resultant ``getLastError`` command.\n For example, ``save(..., w: 2, fsync: True)`` will\n wait until at least two servers have recorded the write and\n will force an fsync on the primary server.\n \"\"\"\n signal_kwargs = signal_kwargs or {}\n signals.pre_delete.send(self.__class__, document=self, **signal_kwargs)\n\n # Delete FileFields separately\n FileField = _import_class(\"FileField\")\n for name, field in self._fields.items():\n if isinstance(field, FileField):\n getattr(self, name).delete()\n\n try:\n self._qs.filter(**self._object_key).delete(\n write_concern=write_concern, _from_doc_delete=True\n )\n except pymongo.errors.OperationFailure as err:\n message = \"Could not delete document (%s)\" % err.args\n raise OperationError(message)\n signals.post_delete.send(self.__class__, document=self, **signal_kwargs)\n\n def switch_db(self, db_alias, keep_created=True):\n \"\"\"\n Temporarily switch the database for a document instance.\n\n Only really useful for archiving off data and calling `save()`::\n\n user = User.objects.get(id=user_id)\n user.switch_db('archive-db')\n user.save()\n\n :param str db_alias: The database alias to use for saving the document\n\n :param bool keep_created: keep self._created value after switching db, else is reset to True\n\n\n .. seealso::\n Use :class:`~mongoengine.context_managers.switch_collection`\n if you need to read from another collection\n \"\"\"\n with switch_db(self.__class__, db_alias) as cls:\n collection = cls._get_collection()\n db = cls._get_db()\n self._get_collection = lambda: collection\n self._get_db = lambda: db\n self._collection = collection\n self._created = True if not keep_created else self._created\n self.__objects = self._qs\n self.__objects._collection_obj = collection\n return self\n\n def switch_collection(self, collection_name, keep_created=True):\n \"\"\"\n Temporarily switch the collection for a document instance.\n\n Only really useful for archiving off data and calling `save()`::\n\n user = User.objects.get(id=user_id)\n user.switch_collection('old-users')\n user.save()\n\n :param str collection_name: The database alias to use for saving the\n document\n\n :param bool keep_created: keep self._created value after switching collection, else is reset to True\n\n\n .. 
seealso::\n Use :class:`~mongoengine.context_managers.switch_db`\n if you need to read from another database\n \"\"\"\n with switch_collection(self.__class__, collection_name) as cls:\n collection = cls._get_collection()\n self._get_collection = lambda: collection\n self._collection = collection\n self._created = True if not keep_created else self._created\n self.__objects = self._qs\n self.__objects._collection_obj = collection\n return self\n\n def select_related(self, max_depth=1):\n \"\"\"Handles dereferencing of :class:`~bson.dbref.DBRef` objects to\n a maximum depth in order to cut down the number queries to mongodb.\n \"\"\"\n DeReference = _import_class(\"DeReference\")\n DeReference()([self], max_depth + 1)\n return self\n\n def reload(self, *fields, **kwargs):\n \"\"\"Reloads all attributes from the database.\n\n :param fields: (optional) args list of fields to reload\n :param max_depth: (optional) depth of dereferencing to follow\n \"\"\"\n max_depth = 1\n if fields and isinstance(fields[0], int):\n max_depth = fields[0]\n fields = fields[1:]\n elif \"max_depth\" in kwargs:\n max_depth = kwargs[\"max_depth\"]\n\n if self.pk is None:\n raise self.DoesNotExist(\"Document does not exist\")\n\n obj = (\n self._qs.read_preference(ReadPreference.PRIMARY)\n .filter(**self._object_key)\n .only(*fields)\n .limit(1)\n .select_related(max_depth=max_depth)\n )\n\n if obj:\n obj = obj[0]\n else:\n raise self.DoesNotExist(\"Document does not exist\")\n for field in obj._data:\n if not fields or field in fields:\n try:\n setattr(self, field, self._reload(field, obj[field]))\n except (KeyError, AttributeError):\n try:\n # If field is a special field, e.g. items is stored as _reserved_items,\n # a KeyError is thrown. So try to retrieve the field from _data\n setattr(self, field, self._reload(field, obj._data.get(field)))\n except KeyError:\n # If field is removed from the database while the object\n # is in memory, a reload would cause a KeyError\n # i.e. 
obj.update(unset__field=1) followed by obj.reload()\n delattr(self, field)\n\n self._changed_fields = (\n list(set(self._changed_fields) - set(fields))\n if fields\n else obj._changed_fields\n )\n self._created = False\n return self\n\n def _reload(self, key, value):\n \"\"\"Used by :meth:`~mongoengine.Document.reload` to ensure the\n correct instance is linked to self.\n \"\"\"\n if isinstance(value, BaseDict):\n value = [(k, self._reload(k, v)) for k, v in value.items()]\n value = BaseDict(value, self, key)\n elif isinstance(value, EmbeddedDocumentList):\n value = [self._reload(key, v) for v in value]\n value = EmbeddedDocumentList(value, self, key)\n elif isinstance(value, BaseList):\n value = [self._reload(key, v) for v in value]\n value = BaseList(value, self, key)\n elif isinstance(value, (EmbeddedDocument, DynamicEmbeddedDocument)):\n value._instance = None\n value._changed_fields = []\n return value\n\n def to_dbref(self):\n \"\"\"Returns an instance of :class:`~bson.dbref.DBRef` useful in\n `__raw__` queries.\"\"\"\n if self.pk is None:\n msg = \"Only saved documents can have a valid dbref\"\n raise OperationError(msg)\n return DBRef(self.__class__._get_collection_name(), self.pk)\n\n @classmethod\n def register_delete_rule(cls, document_cls, field_name, rule):\n \"\"\"This method registers the delete rules to apply when removing this\n object.\n \"\"\"\n classes = [\n get_document(class_name)\n for class_name in cls._subclasses\n if class_name != cls.__name__\n ] + [cls]\n documents = [\n get_document(class_name)\n for class_name in document_cls._subclasses\n if class_name != document_cls.__name__\n ] + [document_cls]\n\n for klass in classes:\n for document_cls in documents:\n delete_rules = klass._meta.get(\"delete_rules\") or {}\n delete_rules[(document_cls, field_name)] = rule\n klass._meta[\"delete_rules\"] = delete_rules\n\n @classmethod\n def drop_collection(cls):\n \"\"\"Drops the entire collection associated with this\n :class:`~mongoengine.Document` type from the database.\n\n Raises :class:`OperationError` if the document has no collection set\n (i.g. if it is `abstract`)\n \"\"\"\n coll_name = cls._get_collection_name()\n if not coll_name:\n raise OperationError(\n \"Document %s has no collection defined (is it abstract ?)\" % cls\n )\n cls._collection = None\n db = cls._get_db()\n db.drop_collection(coll_name)\n\n @classmethod\n def create_index(cls, keys, background=False, **kwargs):\n \"\"\"Creates the given indexes if required.\n\n :param keys: a single index key or a list of index keys (to\n construct a multi-field index); keys may be prefixed with a **+**\n or a **-** to determine the index ordering\n :param background: Allows index creation in the background\n \"\"\"\n index_spec = cls._build_index_spec(keys)\n index_spec = index_spec.copy()\n fields = index_spec.pop(\"fields\")\n index_spec[\"background\"] = background\n index_spec.update(kwargs)\n\n return cls._get_collection().create_index(fields, **index_spec)\n\n @classmethod\n def ensure_index(cls, key_or_list, background=False, **kwargs):\n \"\"\"Ensure that the given indexes are in place. 
Deprecated in favour\n of create_index.\n\n :param key_or_list: a single index key or a list of index keys (to\n construct a multi-field index); keys may be prefixed with a **+**\n or a **-** to determine the index ordering\n :param background: Allows index creation in the background\n \"\"\"\n return cls.create_index(key_or_list, background=background, **kwargs)\n\n @classmethod\n def ensure_indexes(cls):\n \"\"\"Checks the document meta data and ensures all the indexes exist.\n\n Global defaults can be set in the meta - see :doc:`guide/defining-documents`\n\n .. note:: You can disable automatic index creation by setting\n `auto_create_index` to False in the documents meta data\n \"\"\"\n background = cls._meta.get(\"index_background\", False)\n index_opts = cls._meta.get(\"index_opts\") or {}\n index_cls = cls._meta.get(\"index_cls\", True)\n\n collection = cls._get_collection()\n # 746: when connection is via mongos, the read preference is not necessarily an indication that\n # this code runs on a secondary\n if not collection.is_mongos and collection.read_preference > 1:\n return\n\n # determine if an index which we are creating includes\n # _cls as its first field; if so, we can avoid creating\n # an extra index on _cls, as mongodb will use the existing\n # index to service queries against _cls\n cls_indexed = False\n\n # Ensure document-defined indexes are created\n if cls._meta[\"index_specs\"]:\n index_spec = cls._meta[\"index_specs\"]\n for spec in index_spec:\n spec = spec.copy()\n fields = spec.pop(\"fields\")\n cls_indexed = cls_indexed or includes_cls(fields)\n opts = index_opts.copy()\n opts.update(spec)\n\n # we shouldn't pass 'cls' to the collection.ensureIndex options\n # because of https://jira.mongodb.org/browse/SERVER-769\n if \"cls\" in opts:\n del opts[\"cls\"]\n\n collection.create_index(fields, background=background, **opts)\n\n # If _cls is being used (for polymorphism), it needs an index,\n # only if another index doesn't begin with _cls\n if index_cls and not cls_indexed and cls._meta.get(\"allow_inheritance\"):\n\n # we shouldn't pass 'cls' to the collection.ensureIndex options\n # because of https://jira.mongodb.org/browse/SERVER-769\n if \"cls\" in index_opts:\n del index_opts[\"cls\"]\n\n collection.create_index(\"_cls\", background=background, **index_opts)\n\n @classmethod\n def list_indexes(cls):\n \"\"\"Lists all of the indexes that should be created for given\n collection. 
It includes all the indexes from super- and sub-classes.\n \"\"\"\n if cls._meta.get(\"abstract\"):\n return []\n\n # get all the base classes, subclasses and siblings\n classes = []\n\n def get_classes(cls):\n\n if cls not in classes and isinstance(cls, TopLevelDocumentMetaclass):\n classes.append(cls)\n\n for base_cls in cls.__bases__:\n if (\n isinstance(base_cls, TopLevelDocumentMetaclass)\n and base_cls != Document\n and not base_cls._meta.get(\"abstract\")\n and base_cls._get_collection().full_name\n == cls._get_collection().full_name\n and base_cls not in classes\n ):\n classes.append(base_cls)\n get_classes(base_cls)\n for subclass in cls.__subclasses__():\n if (\n isinstance(base_cls, TopLevelDocumentMetaclass)\n and subclass._get_collection().full_name\n == cls._get_collection().full_name\n and subclass not in classes\n ):\n classes.append(subclass)\n get_classes(subclass)\n\n get_classes(cls)\n\n # get the indexes spec for all of the gathered classes\n def get_indexes_spec(cls):\n indexes = []\n\n if cls._meta[\"index_specs\"]:\n index_spec = cls._meta[\"index_specs\"]\n for spec in index_spec:\n spec = spec.copy()\n fields = spec.pop(\"fields\")\n indexes.append(fields)\n return indexes\n\n indexes = []\n for klass in classes:\n for index in get_indexes_spec(klass):\n if index not in indexes:\n indexes.append(index)\n\n # finish up by appending { '_id': 1 } and { '_cls': 1 }, if needed\n if [(\"_id\", 1)] not in indexes:\n indexes.append([(\"_id\", 1)])\n if cls._meta.get(\"index_cls\", True) and cls._meta.get(\"allow_inheritance\"):\n indexes.append([(\"_cls\", 1)])\n\n return indexes\n\n @classmethod\n def compare_indexes(cls):\n \"\"\"Compares the indexes defined in MongoEngine with the ones\n existing in the database. Returns any missing/extra indexes.\n \"\"\"\n\n required = cls.list_indexes()\n\n existing = []\n for info in cls._get_collection().index_information().values():\n if \"_fts\" in info[\"key\"][0]:\n index_type = info[\"key\"][0][1]\n text_index_fields = info.get(\"weights\").keys()\n existing.append([(key, index_type) for key in text_index_fields])\n else:\n existing.append(info[\"key\"])\n missing = [index for index in required if index not in existing]\n extra = [index for index in existing if index not in required]\n\n # if { _cls: 1 } is missing, make sure it's *really* necessary\n if [(\"_cls\", 1)] in missing:\n cls_obsolete = False\n for index in existing:\n if includes_cls(index) and index not in extra:\n cls_obsolete = True\n break\n if cls_obsolete:\n missing.remove([(\"_cls\", 1)])\n\n return {\"missing\": missing, \"extra\": extra}\n\n\nclass DynamicDocument(Document, metaclass=TopLevelDocumentMetaclass):\n \"\"\"A Dynamic Document class allowing flexible, expandable and uncontrolled\n schemas. As a :class:`~mongoengine.Document` subclass, acts in the same\n way as an ordinary document but has expanded style properties. Any data\n passed or set against the :class:`~mongoengine.DynamicDocument` that is\n not a field is automatically converted into a\n :class:`~mongoengine.fields.DynamicField` and data can be attributed to that\n field.\n\n .. 
note::\n\n There is one caveat on Dynamic Documents: undeclared fields cannot start with `_`\n \"\"\"\n\n # my_metaclass is defined so that metaclass can be queried in Python 2 & 3\n my_metaclass = TopLevelDocumentMetaclass\n\n _dynamic = True\n\n def __delattr__(self, *args, **kwargs):\n \"\"\"Delete the attribute by setting to None and allowing _delta\n to unset it.\n \"\"\"\n field_name = args[0]\n if field_name in self._dynamic_fields:\n setattr(self, field_name, None)\n self._dynamic_fields[field_name].null = False\n else:\n super().__delattr__(*args, **kwargs)\n\n\nclass DynamicEmbeddedDocument(EmbeddedDocument, metaclass=DocumentMetaclass):\n \"\"\"A Dynamic Embedded Document class allowing flexible, expandable and\n uncontrolled schemas. See :class:`~mongoengine.DynamicDocument` for more\n information about dynamic documents.\n \"\"\"\n\n # my_metaclass is defined so that metaclass can be queried in Python 2 & 3\n my_metaclass = DocumentMetaclass\n\n _dynamic = True\n\n def __delattr__(self, *args, **kwargs):\n \"\"\"Delete the attribute by setting to None and allowing _delta\n to unset it.\n \"\"\"\n field_name = args[0]\n if field_name in self._fields:\n default = self._fields[field_name].default\n if callable(default):\n default = default()\n setattr(self, field_name, default)\n else:\n setattr(self, field_name, None)\n\n\nclass MapReduceDocument:\n \"\"\"A document returned from a map/reduce query.\n\n :param collection: An instance of :class:`~pymongo.Collection`\n :param key: Document/result key, often an instance of\n :class:`~bson.objectid.ObjectId`. If supplied as\n an ``ObjectId`` found in the given ``collection``,\n the object can be accessed via the ``object`` property.\n :param value: The result(s) for this key.\n \"\"\"\n\n def __init__(self, document, collection, key, value):\n self._document = document\n self._collection = collection\n self.key = key\n self.value = value\n\n @property\n def object(self):\n \"\"\"Lazy-load the object referenced by ``self.key``. ``self.key``\n should be the ``primary_key``.\n \"\"\"\n id_field = self._document()._meta[\"id_field\"]\n id_field_type = type(id_field)\n\n if not isinstance(self.key, id_field_type):\n try:\n self.key = id_field_type(self.key)\n except Exception:\n raise Exception(\"Could not cast key as %s\" % id_field_type.__name__)\n\n if not hasattr(self, \"_key_object\"):\n self._key_object = self._document.objects.with_id(self.key)\n return self._key_object\n return self._key_object\n", "path": "mongoengine/document.py"}]} |
gh_patches_debug_1188 | rasdani/github-patches | git_diff | liqd__a4-opin-1835 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
The "Sort by" selection changes automatically to "most recent" on the production site
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `euth/ideas/templatetags/idea_tags.py`
Content:
```
1 from django import template
2
3 from euth.ideas.models import Idea
4
5 register = template.Library()
6
7
8 @register.simple_tag
9 def get_range(number, listcount):
10 if number < 3:
11 return range(1, 6)
12 elif number > listcount - 2:
13 return range(listcount - 4, listcount + 1)
14 else:
15 return range(number - 2, number + 3)
16
17
18 @register.simple_tag
19 def is_idea_list(module):
20 return Idea.objects.filter(module=module).count() > 0
21
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/euth/ideas/templatetags/idea_tags.py b/euth/ideas/templatetags/idea_tags.py
--- a/euth/ideas/templatetags/idea_tags.py
+++ b/euth/ideas/templatetags/idea_tags.py
@@ -18,3 +18,12 @@
@register.simple_tag
def is_idea_list(module):
return Idea.objects.filter(module=module).count() > 0
+
+
+@register.simple_tag
+def combined_url_parameter(request_query_dict, **kwargs):
+ combined_query_dict = request_query_dict.copy()
+ for key in kwargs:
+ combined_query_dict.setlist(key, [kwargs[key]])
+ encoded_parameter = '?' + combined_query_dict.urlencode()
+ return encoded_parameter
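The new `combined_url_parameter` tag lets templates rebuild links from the current query string instead of dropping it, which is presumably why the sort selection stopped resetting to its default. A rough, standalone sketch of what the tag does, using Django's `QueryDict` directly — the parameter names below are illustrative and not taken from the project:

```python
from django.http import QueryDict

# What request.GET might hold while a non-default ordering is active
# (illustrative parameter names).
current = QueryDict("ordering=comments&mode=list", encoding="utf-8")

# combined_url_parameter(current, page="2") copies the dict, overrides only the
# requested key(s) and re-encodes everything else, so the active ordering survives.
combined = current.copy()          # copy() returns a mutable QueryDict
combined.setlist("page", ["2"])    # override/add just this parameter
print("?" + combined.urlencode())  # e.g. "?ordering=comments&mode=list&page=2"
```

In a template the tag would then be used along the lines of `{% combined_url_parameter request.GET ordering='newest' %}` when rendering sort or pagination links; the exact parameter names and templates involved are not part of the excerpt above.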
| {"golden_diff": "diff --git a/euth/ideas/templatetags/idea_tags.py b/euth/ideas/templatetags/idea_tags.py\n--- a/euth/ideas/templatetags/idea_tags.py\n+++ b/euth/ideas/templatetags/idea_tags.py\n@@ -18,3 +18,12 @@\n @register.simple_tag\n def is_idea_list(module):\n return Idea.objects.filter(module=module).count() > 0\n+\n+\[email protected]_tag\n+def combined_url_parameter(request_query_dict, **kwargs):\n+ combined_query_dict = request_query_dict.copy()\n+ for key in kwargs:\n+ combined_query_dict.setlist(key, [kwargs[key]])\n+ encoded_parameter = '?' + combined_query_dict.urlencode()\n+ return encoded_parameter\n", "issue": "Sort by section changes automatically to \"most recent\" on productive\n\n", "before_files": [{"content": "from django import template\n\nfrom euth.ideas.models import Idea\n\nregister = template.Library()\n\n\[email protected]_tag\ndef get_range(number, listcount):\n if number < 3:\n return range(1, 6)\n elif number > listcount - 2:\n return range(listcount - 4, listcount + 1)\n else:\n return range(number - 2, number + 3)\n\n\[email protected]_tag\ndef is_idea_list(module):\n return Idea.objects.filter(module=module).count() > 0\n", "path": "euth/ideas/templatetags/idea_tags.py"}], "after_files": [{"content": "from django import template\n\nfrom euth.ideas.models import Idea\n\nregister = template.Library()\n\n\[email protected]_tag\ndef get_range(number, listcount):\n if number < 3:\n return range(1, 6)\n elif number > listcount - 2:\n return range(listcount - 4, listcount + 1)\n else:\n return range(number - 2, number + 3)\n\n\[email protected]_tag\ndef is_idea_list(module):\n return Idea.objects.filter(module=module).count() > 0\n\n\[email protected]_tag\ndef combined_url_parameter(request_query_dict, **kwargs):\n combined_query_dict = request_query_dict.copy()\n for key in kwargs:\n combined_query_dict.setlist(key, [kwargs[key]])\n encoded_parameter = '?' + combined_query_dict.urlencode()\n return encoded_parameter\n", "path": "euth/ideas/templatetags/idea_tags.py"}]} |
gh_patches_debug_1189 | rasdani/github-patches | git_diff | biolab__orange3-5469 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
PCA widget not adding the last component
**What's wrong?**
The PCA widget does not return the last component even though it is selected in the widget. More specifically, the selection is not applied when dragging the vertical line, but it works via the components input field.
**How can we reproduce the problem?**
File with iris dataset -> PCA -> Data Table
Move the vertical line in PCA and observe the output in the Data Table.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `Orange/widgets/unsupervised/owpca.py`
Content:
```
1 import numbers
2
3 import numpy
4 from AnyQt.QtWidgets import QFormLayout
5 from AnyQt.QtCore import Qt
6
7 from Orange.data import Table, Domain, StringVariable, ContinuousVariable
8 from Orange.data.util import get_unique_names
9 from Orange.data.sql.table import SqlTable, AUTO_DL_LIMIT
10 from Orange.preprocess import preprocess
11 from Orange.projection import PCA
12 from Orange.widgets import widget, gui, settings
13 from Orange.widgets.utils.slidergraph import SliderGraph
14 from Orange.widgets.utils.widgetpreview import WidgetPreview
15 from Orange.widgets.widget import Input, Output
16
17
18 # Maximum number of PCA components that we can set in the widget
19 MAX_COMPONENTS = 100
20 LINE_NAMES = ["component variance", "cumulative variance"]
21
22
23 class OWPCA(widget.OWWidget):
24 name = "PCA"
25 description = "Principal component analysis with a scree-diagram."
26 icon = "icons/PCA.svg"
27 priority = 3050
28 keywords = ["principal component analysis", "linear transformation"]
29
30 class Inputs:
31 data = Input("Data", Table)
32
33 class Outputs:
34 transformed_data = Output("Transformed Data", Table, replaces=["Transformed data"])
35 data = Output("Data", Table, default=True)
36 components = Output("Components", Table)
37 pca = Output("PCA", PCA, dynamic=False)
38
39 ncomponents = settings.Setting(2)
40 variance_covered = settings.Setting(100)
41 auto_commit = settings.Setting(True)
42 normalize = settings.Setting(True)
43 maxp = settings.Setting(20)
44 axis_labels = settings.Setting(10)
45
46 graph_name = "plot.plotItem"
47
48 class Warning(widget.OWWidget.Warning):
49 trivial_components = widget.Msg(
50 "All components of the PCA are trivial (explain 0 variance). "
51 "Input data is constant (or near constant).")
52
53 class Error(widget.OWWidget.Error):
54 no_features = widget.Msg("At least 1 feature is required")
55 no_instances = widget.Msg("At least 1 data instance is required")
56
57 def __init__(self):
58 super().__init__()
59 self.data = None
60
61 self._pca = None
62 self._transformed = None
63 self._variance_ratio = None
64 self._cumulative = None
65 self._init_projector()
66
67 # Components Selection
68 form = QFormLayout()
69 box = gui.widgetBox(self.controlArea, "Components Selection",
70 orientation=form)
71
72 self.components_spin = gui.spin(
73 box, self, "ncomponents", 1, MAX_COMPONENTS,
74 callback=self._update_selection_component_spin,
75 keyboardTracking=False, addToLayout=False
76 )
77 self.components_spin.setSpecialValueText("All")
78
79 self.variance_spin = gui.spin(
80 box, self, "variance_covered", 1, 100,
81 callback=self._update_selection_variance_spin,
82 keyboardTracking=False, addToLayout=False
83 )
84 self.variance_spin.setSuffix("%")
85
86 form.addRow("Components:", self.components_spin)
87 form.addRow("Explained variance:", self.variance_spin)
88
89 # Options
90 self.options_box = gui.vBox(self.controlArea, "Options")
91 self.normalize_box = gui.checkBox(
92 self.options_box, self, "normalize",
93 "Normalize variables", callback=self._update_normalize,
94 attribute=Qt.WA_LayoutUsesWidgetRect
95 )
96
97 self.maxp_spin = gui.spin(
98 self.options_box, self, "maxp", 1, MAX_COMPONENTS,
99 label="Show only first", callback=self._setup_plot,
100 keyboardTracking=False
101 )
102
103 gui.rubber(self.controlArea)
104
105 gui.auto_apply(self.buttonsArea, self, "auto_commit")
106
107 self.plot = SliderGraph(
108 "Principal Components", "Proportion of variance",
109 self._on_cut_changed)
110
111 self.mainArea.layout().addWidget(self.plot)
112 self._update_normalize()
113
114 @Inputs.data
115 def set_data(self, data):
116 self.clear_messages()
117 self.clear()
118 self.information()
119 self.data = None
120 if not data:
121 self.clear_outputs()
122 if isinstance(data, SqlTable):
123 if data.approx_len() < AUTO_DL_LIMIT:
124 data = Table(data)
125 else:
126 self.information("Data has been sampled")
127 data_sample = data.sample_time(1, no_cache=True)
128 data_sample.download_data(2000, partial=True)
129 data = Table(data_sample)
130 if isinstance(data, Table):
131 if not data.domain.attributes:
132 self.Error.no_features()
133 self.clear_outputs()
134 return
135 if not data:
136 self.Error.no_instances()
137 self.clear_outputs()
138 return
139
140 self._init_projector()
141
142 self.data = data
143 self.fit()
144
145 def fit(self):
146 self.clear()
147 self.Warning.trivial_components.clear()
148 if self.data is None:
149 return
150
151 data = self.data
152
153 if self.normalize:
154 self._pca_projector.preprocessors = \
155 self._pca_preprocessors + [preprocess.Normalize(center=False)]
156 else:
157 self._pca_projector.preprocessors = self._pca_preprocessors
158
159 if not isinstance(data, SqlTable):
160 pca = self._pca_projector(data)
161 variance_ratio = pca.explained_variance_ratio_
162 cumulative = numpy.cumsum(variance_ratio)
163
164 if numpy.isfinite(cumulative[-1]):
165 self.components_spin.setRange(0, len(cumulative))
166 self._pca = pca
167 self._variance_ratio = variance_ratio
168 self._cumulative = cumulative
169 self._setup_plot()
170 else:
171 self.Warning.trivial_components()
172
173 self.unconditional_commit()
174
175 def clear(self):
176 self._pca = None
177 self._transformed = None
178 self._variance_ratio = None
179 self._cumulative = None
180 self.plot.clear_plot()
181
182 def clear_outputs(self):
183 self.Outputs.transformed_data.send(None)
184 self.Outputs.data.send(None)
185 self.Outputs.components.send(None)
186 self.Outputs.pca.send(self._pca_projector)
187
188 def _setup_plot(self):
189 if self._pca is None:
190 self.plot.clear_plot()
191 return
192
193 explained_ratio = self._variance_ratio
194 explained = self._cumulative
195 cutpos = self._nselected_components()
196 p = min(len(self._variance_ratio), self.maxp)
197
198 self.plot.update(
199 numpy.arange(1, p+1), [explained_ratio[:p], explained[:p]],
200 [Qt.red, Qt.darkYellow], cutpoint_x=cutpos, names=LINE_NAMES)
201
202 self._update_axis()
203
204 def _on_cut_changed(self, components):
205 if components == self.ncomponents \
206 or self.ncomponents == 0 \
207 or self._pca is not None \
208 and components == len(self._variance_ratio):
209 return
210
211 self.ncomponents = components
212 if self._pca is not None:
213 var = self._cumulative[components - 1]
214 if numpy.isfinite(var):
215 self.variance_covered = int(var * 100)
216
217 self._invalidate_selection()
218
219 def _update_selection_component_spin(self):
220 # cut changed by "ncomponents" spin.
221 if self._pca is None:
222 self._invalidate_selection()
223 return
224
225 if self.ncomponents == 0:
226 # Special "All" value
227 cut = len(self._variance_ratio)
228 else:
229 cut = self.ncomponents
230
231 var = self._cumulative[cut - 1]
232 if numpy.isfinite(var):
233 self.variance_covered = int(var * 100)
234
235 self.plot.set_cut_point(cut)
236 self._invalidate_selection()
237
238 def _update_selection_variance_spin(self):
239 # cut changed by "max variance" spin.
240 if self._pca is None:
241 return
242
243 cut = numpy.searchsorted(self._cumulative,
244 self.variance_covered / 100.0) + 1
245 cut = min(cut, len(self._cumulative))
246 self.ncomponents = cut
247 self.plot.set_cut_point(cut)
248 self._invalidate_selection()
249
250 def _update_normalize(self):
251 self.fit()
252 if self.data is None:
253 self._invalidate_selection()
254
255 def _init_projector(self):
256 self._pca_projector = PCA(n_components=MAX_COMPONENTS, random_state=0)
257 self._pca_projector.component = self.ncomponents
258 self._pca_preprocessors = PCA.preprocessors
259
260 def _nselected_components(self):
261 """Return the number of selected components."""
262 if self._pca is None:
263 return 0
264
265 if self.ncomponents == 0:
266 # Special "All" value
267 max_comp = len(self._variance_ratio)
268 else:
269 max_comp = self.ncomponents
270
271 var_max = self._cumulative[max_comp - 1]
272 if var_max != numpy.floor(self.variance_covered / 100.0):
273 cut = max_comp
274 assert numpy.isfinite(var_max)
275 self.variance_covered = int(var_max * 100)
276 else:
277 self.ncomponents = cut = numpy.searchsorted(
278 self._cumulative, self.variance_covered / 100.0) + 1
279 return cut
280
281 def _invalidate_selection(self):
282 self.commit()
283
284 def _update_axis(self):
285 p = min(len(self._variance_ratio), self.maxp)
286 axis = self.plot.getAxis("bottom")
287 d = max((p-1)//(self.axis_labels-1), 1)
288 axis.setTicks([[(i, str(i)) for i in range(1, p + 1, d)]])
289
290 def commit(self):
291 transformed = data = components = None
292 if self._pca is not None:
293 if self._transformed is None:
294 # Compute the full transform (MAX_COMPONENTS components) once.
295 self._transformed = self._pca(self.data)
296 transformed = self._transformed
297
298 domain = Domain(
299 transformed.domain.attributes[:self.ncomponents],
300 self.data.domain.class_vars,
301 self.data.domain.metas
302 )
303 transformed = transformed.from_table(domain, transformed)
304 # prevent caching new features by defining compute_value
305 proposed = [a.name for a in self._pca.orig_domain.attributes]
306 meta_name = get_unique_names(proposed, 'components')
307 dom = Domain(
308 [ContinuousVariable(name, compute_value=lambda _: None)
309 for name in proposed],
310 metas=[StringVariable(name=meta_name)])
311 metas = numpy.array([['PC{}'.format(i + 1)
312 for i in range(self.ncomponents)]],
313 dtype=object).T
314 components = Table(dom, self._pca.components_[:self.ncomponents],
315 metas=metas)
316 components.name = 'components'
317
318 data_dom = Domain(
319 self.data.domain.attributes,
320 self.data.domain.class_vars,
321 self.data.domain.metas + domain.attributes)
322 data = Table.from_numpy(
323 data_dom, self.data.X, self.data.Y,
324 numpy.hstack((self.data.metas, transformed.X)),
325 ids=self.data.ids)
326
327 self._pca_projector.component = self.ncomponents
328 self.Outputs.transformed_data.send(transformed)
329 self.Outputs.components.send(components)
330 self.Outputs.data.send(data)
331 self.Outputs.pca.send(self._pca_projector)
332
333 def send_report(self):
334 if self.data is None:
335 return
336 self.report_items((
337 ("Normalize data", str(self.normalize)),
338 ("Selected components", self.ncomponents),
339 ("Explained variance", "{:.3f} %".format(self.variance_covered))
340 ))
341 self.report_plot()
342
343 @classmethod
344 def migrate_settings(cls, settings, version):
345 if "variance_covered" in settings:
346 # Due to the error in gh-1896 the variance_covered was persisted
347 # as a NaN value, causing a TypeError in the widgets `__init__`.
348 vc = settings["variance_covered"]
349 if isinstance(vc, numbers.Real):
350 if numpy.isfinite(vc):
351 vc = int(vc)
352 else:
353 vc = 100
354 settings["variance_covered"] = vc
355 if settings.get("ncomponents", 0) > MAX_COMPONENTS:
356 settings["ncomponents"] = MAX_COMPONENTS
357
358 # Remove old `decomposition_idx` when SVD was still included
359 settings.pop("decomposition_idx", None)
360
361 # Remove RemotePCA settings
362 settings.pop("batch_size", None)
363 settings.pop("address", None)
364 settings.pop("auto_update", None)
365
366
367 if __name__ == "__main__": # pragma: no cover
368 WidgetPreview(OWPCA).run(Table("housing"))
369
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
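The guard rewritten in the patch that follows mixes `or` and `and`; since `and` binds more tightly than `or` in Python, the original condition groups as `components == self.ncomponents or self.ncomponents == 0 or (self._pca is not None and components == len(self._variance_ratio))`. A quick illustration of that grouping (plain Python precedence; the variable names are just stand-ins for the clauses in `_on_cut_changed`):

```python
# "A or B or C and D" parses as "A or B or (C and D)"
A = B = False  # neither of the first two early-return clauses applies
C = D = True   # a PCA exists and the dragged cut sits on the last component
print(A or B or C and D)  # True -> the handler returns before updating ncomponents
```

That third group is what made the handler bail out whenever a drag landed exactly on the last component, which lines up with the reported symptom that selecting the last component by dragging the cut line had no effect while entering it in the components input field worked.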
| diff --git a/Orange/widgets/unsupervised/owpca.py b/Orange/widgets/unsupervised/owpca.py
--- a/Orange/widgets/unsupervised/owpca.py
+++ b/Orange/widgets/unsupervised/owpca.py
@@ -203,9 +203,7 @@
def _on_cut_changed(self, components):
if components == self.ncomponents \
- or self.ncomponents == 0 \
- or self._pca is not None \
- and components == len(self._variance_ratio):
+ or self.ncomponents == 0:
return
self.ncomponents = components
| {"golden_diff": "diff --git a/Orange/widgets/unsupervised/owpca.py b/Orange/widgets/unsupervised/owpca.py\n--- a/Orange/widgets/unsupervised/owpca.py\n+++ b/Orange/widgets/unsupervised/owpca.py\n@@ -203,9 +203,7 @@\n \n def _on_cut_changed(self, components):\n if components == self.ncomponents \\\n- or self.ncomponents == 0 \\\n- or self._pca is not None \\\n- and components == len(self._variance_ratio):\n+ or self.ncomponents == 0:\n return\n \n self.ncomponents = components\n", "issue": "PCA widget not adding the last component\n**What's wrong?**\r\nPCA widget does not return the last component even though selected in the widget. More specifically it doesn't work for drag and dropping the vertical line but it does work in the components input field.\r\n\r\n**How can we reproduce the problem?** \r\nFile with iris dataset -> PCA -> Data Table\r\nMove the vertical line in PCA and observed the output in Data Table.\n", "before_files": [{"content": "import numbers\n\nimport numpy\nfrom AnyQt.QtWidgets import QFormLayout\nfrom AnyQt.QtCore import Qt\n\nfrom Orange.data import Table, Domain, StringVariable, ContinuousVariable\nfrom Orange.data.util import get_unique_names\nfrom Orange.data.sql.table import SqlTable, AUTO_DL_LIMIT\nfrom Orange.preprocess import preprocess\nfrom Orange.projection import PCA\nfrom Orange.widgets import widget, gui, settings\nfrom Orange.widgets.utils.slidergraph import SliderGraph\nfrom Orange.widgets.utils.widgetpreview import WidgetPreview\nfrom Orange.widgets.widget import Input, Output\n\n\n# Maximum number of PCA components that we can set in the widget\nMAX_COMPONENTS = 100\nLINE_NAMES = [\"component variance\", \"cumulative variance\"]\n\n\nclass OWPCA(widget.OWWidget):\n name = \"PCA\"\n description = \"Principal component analysis with a scree-diagram.\"\n icon = \"icons/PCA.svg\"\n priority = 3050\n keywords = [\"principal component analysis\", \"linear transformation\"]\n\n class Inputs:\n data = Input(\"Data\", Table)\n\n class Outputs:\n transformed_data = Output(\"Transformed Data\", Table, replaces=[\"Transformed data\"])\n data = Output(\"Data\", Table, default=True)\n components = Output(\"Components\", Table)\n pca = Output(\"PCA\", PCA, dynamic=False)\n\n ncomponents = settings.Setting(2)\n variance_covered = settings.Setting(100)\n auto_commit = settings.Setting(True)\n normalize = settings.Setting(True)\n maxp = settings.Setting(20)\n axis_labels = settings.Setting(10)\n\n graph_name = \"plot.plotItem\"\n\n class Warning(widget.OWWidget.Warning):\n trivial_components = widget.Msg(\n \"All components of the PCA are trivial (explain 0 variance). 
\"\n \"Input data is constant (or near constant).\")\n\n class Error(widget.OWWidget.Error):\n no_features = widget.Msg(\"At least 1 feature is required\")\n no_instances = widget.Msg(\"At least 1 data instance is required\")\n\n def __init__(self):\n super().__init__()\n self.data = None\n\n self._pca = None\n self._transformed = None\n self._variance_ratio = None\n self._cumulative = None\n self._init_projector()\n\n # Components Selection\n form = QFormLayout()\n box = gui.widgetBox(self.controlArea, \"Components Selection\",\n orientation=form)\n\n self.components_spin = gui.spin(\n box, self, \"ncomponents\", 1, MAX_COMPONENTS,\n callback=self._update_selection_component_spin,\n keyboardTracking=False, addToLayout=False\n )\n self.components_spin.setSpecialValueText(\"All\")\n\n self.variance_spin = gui.spin(\n box, self, \"variance_covered\", 1, 100,\n callback=self._update_selection_variance_spin,\n keyboardTracking=False, addToLayout=False\n )\n self.variance_spin.setSuffix(\"%\")\n\n form.addRow(\"Components:\", self.components_spin)\n form.addRow(\"Explained variance:\", self.variance_spin)\n\n # Options\n self.options_box = gui.vBox(self.controlArea, \"Options\")\n self.normalize_box = gui.checkBox(\n self.options_box, self, \"normalize\",\n \"Normalize variables\", callback=self._update_normalize,\n attribute=Qt.WA_LayoutUsesWidgetRect\n )\n\n self.maxp_spin = gui.spin(\n self.options_box, self, \"maxp\", 1, MAX_COMPONENTS,\n label=\"Show only first\", callback=self._setup_plot,\n keyboardTracking=False\n )\n\n gui.rubber(self.controlArea)\n\n gui.auto_apply(self.buttonsArea, self, \"auto_commit\")\n\n self.plot = SliderGraph(\n \"Principal Components\", \"Proportion of variance\",\n self._on_cut_changed)\n\n self.mainArea.layout().addWidget(self.plot)\n self._update_normalize()\n\n @Inputs.data\n def set_data(self, data):\n self.clear_messages()\n self.clear()\n self.information()\n self.data = None\n if not data:\n self.clear_outputs()\n if isinstance(data, SqlTable):\n if data.approx_len() < AUTO_DL_LIMIT:\n data = Table(data)\n else:\n self.information(\"Data has been sampled\")\n data_sample = data.sample_time(1, no_cache=True)\n data_sample.download_data(2000, partial=True)\n data = Table(data_sample)\n if isinstance(data, Table):\n if not data.domain.attributes:\n self.Error.no_features()\n self.clear_outputs()\n return\n if not data:\n self.Error.no_instances()\n self.clear_outputs()\n return\n\n self._init_projector()\n\n self.data = data\n self.fit()\n\n def fit(self):\n self.clear()\n self.Warning.trivial_components.clear()\n if self.data is None:\n return\n\n data = self.data\n\n if self.normalize:\n self._pca_projector.preprocessors = \\\n self._pca_preprocessors + [preprocess.Normalize(center=False)]\n else:\n self._pca_projector.preprocessors = self._pca_preprocessors\n\n if not isinstance(data, SqlTable):\n pca = self._pca_projector(data)\n variance_ratio = pca.explained_variance_ratio_\n cumulative = numpy.cumsum(variance_ratio)\n\n if numpy.isfinite(cumulative[-1]):\n self.components_spin.setRange(0, len(cumulative))\n self._pca = pca\n self._variance_ratio = variance_ratio\n self._cumulative = cumulative\n self._setup_plot()\n else:\n self.Warning.trivial_components()\n\n self.unconditional_commit()\n\n def clear(self):\n self._pca = None\n self._transformed = None\n self._variance_ratio = None\n self._cumulative = None\n self.plot.clear_plot()\n\n def clear_outputs(self):\n self.Outputs.transformed_data.send(None)\n self.Outputs.data.send(None)\n 
self.Outputs.components.send(None)\n self.Outputs.pca.send(self._pca_projector)\n\n def _setup_plot(self):\n if self._pca is None:\n self.plot.clear_plot()\n return\n\n explained_ratio = self._variance_ratio\n explained = self._cumulative\n cutpos = self._nselected_components()\n p = min(len(self._variance_ratio), self.maxp)\n\n self.plot.update(\n numpy.arange(1, p+1), [explained_ratio[:p], explained[:p]],\n [Qt.red, Qt.darkYellow], cutpoint_x=cutpos, names=LINE_NAMES)\n\n self._update_axis()\n\n def _on_cut_changed(self, components):\n if components == self.ncomponents \\\n or self.ncomponents == 0 \\\n or self._pca is not None \\\n and components == len(self._variance_ratio):\n return\n\n self.ncomponents = components\n if self._pca is not None:\n var = self._cumulative[components - 1]\n if numpy.isfinite(var):\n self.variance_covered = int(var * 100)\n\n self._invalidate_selection()\n\n def _update_selection_component_spin(self):\n # cut changed by \"ncomponents\" spin.\n if self._pca is None:\n self._invalidate_selection()\n return\n\n if self.ncomponents == 0:\n # Special \"All\" value\n cut = len(self._variance_ratio)\n else:\n cut = self.ncomponents\n\n var = self._cumulative[cut - 1]\n if numpy.isfinite(var):\n self.variance_covered = int(var * 100)\n\n self.plot.set_cut_point(cut)\n self._invalidate_selection()\n\n def _update_selection_variance_spin(self):\n # cut changed by \"max variance\" spin.\n if self._pca is None:\n return\n\n cut = numpy.searchsorted(self._cumulative,\n self.variance_covered / 100.0) + 1\n cut = min(cut, len(self._cumulative))\n self.ncomponents = cut\n self.plot.set_cut_point(cut)\n self._invalidate_selection()\n\n def _update_normalize(self):\n self.fit()\n if self.data is None:\n self._invalidate_selection()\n\n def _init_projector(self):\n self._pca_projector = PCA(n_components=MAX_COMPONENTS, random_state=0)\n self._pca_projector.component = self.ncomponents\n self._pca_preprocessors = PCA.preprocessors\n\n def _nselected_components(self):\n \"\"\"Return the number of selected components.\"\"\"\n if self._pca is None:\n return 0\n\n if self.ncomponents == 0:\n # Special \"All\" value\n max_comp = len(self._variance_ratio)\n else:\n max_comp = self.ncomponents\n\n var_max = self._cumulative[max_comp - 1]\n if var_max != numpy.floor(self.variance_covered / 100.0):\n cut = max_comp\n assert numpy.isfinite(var_max)\n self.variance_covered = int(var_max * 100)\n else:\n self.ncomponents = cut = numpy.searchsorted(\n self._cumulative, self.variance_covered / 100.0) + 1\n return cut\n\n def _invalidate_selection(self):\n self.commit()\n\n def _update_axis(self):\n p = min(len(self._variance_ratio), self.maxp)\n axis = self.plot.getAxis(\"bottom\")\n d = max((p-1)//(self.axis_labels-1), 1)\n axis.setTicks([[(i, str(i)) for i in range(1, p + 1, d)]])\n\n def commit(self):\n transformed = data = components = None\n if self._pca is not None:\n if self._transformed is None:\n # Compute the full transform (MAX_COMPONENTS components) once.\n self._transformed = self._pca(self.data)\n transformed = self._transformed\n\n domain = Domain(\n transformed.domain.attributes[:self.ncomponents],\n self.data.domain.class_vars,\n self.data.domain.metas\n )\n transformed = transformed.from_table(domain, transformed)\n # prevent caching new features by defining compute_value\n proposed = [a.name for a in self._pca.orig_domain.attributes]\n meta_name = get_unique_names(proposed, 'components')\n dom = Domain(\n [ContinuousVariable(name, compute_value=lambda _: None)\n for name 
in proposed],\n metas=[StringVariable(name=meta_name)])\n metas = numpy.array([['PC{}'.format(i + 1)\n for i in range(self.ncomponents)]],\n dtype=object).T\n components = Table(dom, self._pca.components_[:self.ncomponents],\n metas=metas)\n components.name = 'components'\n\n data_dom = Domain(\n self.data.domain.attributes,\n self.data.domain.class_vars,\n self.data.domain.metas + domain.attributes)\n data = Table.from_numpy(\n data_dom, self.data.X, self.data.Y,\n numpy.hstack((self.data.metas, transformed.X)),\n ids=self.data.ids)\n\n self._pca_projector.component = self.ncomponents\n self.Outputs.transformed_data.send(transformed)\n self.Outputs.components.send(components)\n self.Outputs.data.send(data)\n self.Outputs.pca.send(self._pca_projector)\n\n def send_report(self):\n if self.data is None:\n return\n self.report_items((\n (\"Normalize data\", str(self.normalize)),\n (\"Selected components\", self.ncomponents),\n (\"Explained variance\", \"{:.3f} %\".format(self.variance_covered))\n ))\n self.report_plot()\n\n @classmethod\n def migrate_settings(cls, settings, version):\n if \"variance_covered\" in settings:\n # Due to the error in gh-1896 the variance_covered was persisted\n # as a NaN value, causing a TypeError in the widgets `__init__`.\n vc = settings[\"variance_covered\"]\n if isinstance(vc, numbers.Real):\n if numpy.isfinite(vc):\n vc = int(vc)\n else:\n vc = 100\n settings[\"variance_covered\"] = vc\n if settings.get(\"ncomponents\", 0) > MAX_COMPONENTS:\n settings[\"ncomponents\"] = MAX_COMPONENTS\n\n # Remove old `decomposition_idx` when SVD was still included\n settings.pop(\"decomposition_idx\", None)\n\n # Remove RemotePCA settings\n settings.pop(\"batch_size\", None)\n settings.pop(\"address\", None)\n settings.pop(\"auto_update\", None)\n\n\nif __name__ == \"__main__\": # pragma: no cover\n WidgetPreview(OWPCA).run(Table(\"housing\"))\n", "path": "Orange/widgets/unsupervised/owpca.py"}], "after_files": [{"content": "import numbers\n\nimport numpy\nfrom AnyQt.QtWidgets import QFormLayout\nfrom AnyQt.QtCore import Qt\n\nfrom Orange.data import Table, Domain, StringVariable, ContinuousVariable\nfrom Orange.data.util import get_unique_names\nfrom Orange.data.sql.table import SqlTable, AUTO_DL_LIMIT\nfrom Orange.preprocess import preprocess\nfrom Orange.projection import PCA\nfrom Orange.widgets import widget, gui, settings\nfrom Orange.widgets.utils.slidergraph import SliderGraph\nfrom Orange.widgets.utils.widgetpreview import WidgetPreview\nfrom Orange.widgets.widget import Input, Output\n\n\n# Maximum number of PCA components that we can set in the widget\nMAX_COMPONENTS = 100\nLINE_NAMES = [\"component variance\", \"cumulative variance\"]\n\n\nclass OWPCA(widget.OWWidget):\n name = \"PCA\"\n description = \"Principal component analysis with a scree-diagram.\"\n icon = \"icons/PCA.svg\"\n priority = 3050\n keywords = [\"principal component analysis\", \"linear transformation\"]\n\n class Inputs:\n data = Input(\"Data\", Table)\n\n class Outputs:\n transformed_data = Output(\"Transformed Data\", Table, replaces=[\"Transformed data\"])\n data = Output(\"Data\", Table, default=True)\n components = Output(\"Components\", Table)\n pca = Output(\"PCA\", PCA, dynamic=False)\n\n ncomponents = settings.Setting(2)\n variance_covered = settings.Setting(100)\n auto_commit = settings.Setting(True)\n normalize = settings.Setting(True)\n maxp = settings.Setting(20)\n axis_labels = settings.Setting(10)\n\n graph_name = \"plot.plotItem\"\n\n class Warning(widget.OWWidget.Warning):\n 
trivial_components = widget.Msg(\n \"All components of the PCA are trivial (explain 0 variance). \"\n \"Input data is constant (or near constant).\")\n\n class Error(widget.OWWidget.Error):\n no_features = widget.Msg(\"At least 1 feature is required\")\n no_instances = widget.Msg(\"At least 1 data instance is required\")\n\n def __init__(self):\n super().__init__()\n self.data = None\n\n self._pca = None\n self._transformed = None\n self._variance_ratio = None\n self._cumulative = None\n self._init_projector()\n\n # Components Selection\n form = QFormLayout()\n box = gui.widgetBox(self.controlArea, \"Components Selection\",\n orientation=form)\n\n self.components_spin = gui.spin(\n box, self, \"ncomponents\", 1, MAX_COMPONENTS,\n callback=self._update_selection_component_spin,\n keyboardTracking=False, addToLayout=False\n )\n self.components_spin.setSpecialValueText(\"All\")\n\n self.variance_spin = gui.spin(\n box, self, \"variance_covered\", 1, 100,\n callback=self._update_selection_variance_spin,\n keyboardTracking=False, addToLayout=False\n )\n self.variance_spin.setSuffix(\"%\")\n\n form.addRow(\"Components:\", self.components_spin)\n form.addRow(\"Explained variance:\", self.variance_spin)\n\n # Options\n self.options_box = gui.vBox(self.controlArea, \"Options\")\n self.normalize_box = gui.checkBox(\n self.options_box, self, \"normalize\",\n \"Normalize variables\", callback=self._update_normalize,\n attribute=Qt.WA_LayoutUsesWidgetRect\n )\n\n self.maxp_spin = gui.spin(\n self.options_box, self, \"maxp\", 1, MAX_COMPONENTS,\n label=\"Show only first\", callback=self._setup_plot,\n keyboardTracking=False\n )\n\n gui.rubber(self.controlArea)\n\n gui.auto_apply(self.buttonsArea, self, \"auto_commit\")\n\n self.plot = SliderGraph(\n \"Principal Components\", \"Proportion of variance\",\n self._on_cut_changed)\n\n self.mainArea.layout().addWidget(self.plot)\n self._update_normalize()\n\n @Inputs.data\n def set_data(self, data):\n self.clear_messages()\n self.clear()\n self.information()\n self.data = None\n if not data:\n self.clear_outputs()\n if isinstance(data, SqlTable):\n if data.approx_len() < AUTO_DL_LIMIT:\n data = Table(data)\n else:\n self.information(\"Data has been sampled\")\n data_sample = data.sample_time(1, no_cache=True)\n data_sample.download_data(2000, partial=True)\n data = Table(data_sample)\n if isinstance(data, Table):\n if not data.domain.attributes:\n self.Error.no_features()\n self.clear_outputs()\n return\n if not data:\n self.Error.no_instances()\n self.clear_outputs()\n return\n\n self._init_projector()\n\n self.data = data\n self.fit()\n\n def fit(self):\n self.clear()\n self.Warning.trivial_components.clear()\n if self.data is None:\n return\n\n data = self.data\n\n if self.normalize:\n self._pca_projector.preprocessors = \\\n self._pca_preprocessors + [preprocess.Normalize(center=False)]\n else:\n self._pca_projector.preprocessors = self._pca_preprocessors\n\n if not isinstance(data, SqlTable):\n pca = self._pca_projector(data)\n variance_ratio = pca.explained_variance_ratio_\n cumulative = numpy.cumsum(variance_ratio)\n\n if numpy.isfinite(cumulative[-1]):\n self.components_spin.setRange(0, len(cumulative))\n self._pca = pca\n self._variance_ratio = variance_ratio\n self._cumulative = cumulative\n self._setup_plot()\n else:\n self.Warning.trivial_components()\n\n self.unconditional_commit()\n\n def clear(self):\n self._pca = None\n self._transformed = None\n self._variance_ratio = None\n self._cumulative = None\n self.plot.clear_plot()\n\n def 
clear_outputs(self):\n self.Outputs.transformed_data.send(None)\n self.Outputs.data.send(None)\n self.Outputs.components.send(None)\n self.Outputs.pca.send(self._pca_projector)\n\n def _setup_plot(self):\n if self._pca is None:\n self.plot.clear_plot()\n return\n\n explained_ratio = self._variance_ratio\n explained = self._cumulative\n cutpos = self._nselected_components()\n p = min(len(self._variance_ratio), self.maxp)\n\n self.plot.update(\n numpy.arange(1, p+1), [explained_ratio[:p], explained[:p]],\n [Qt.red, Qt.darkYellow], cutpoint_x=cutpos, names=LINE_NAMES)\n\n self._update_axis()\n\n def _on_cut_changed(self, components):\n if components == self.ncomponents \\\n or self.ncomponents == 0:\n return\n\n self.ncomponents = components\n if self._pca is not None:\n var = self._cumulative[components - 1]\n if numpy.isfinite(var):\n self.variance_covered = int(var * 100)\n\n self._invalidate_selection()\n\n def _update_selection_component_spin(self):\n # cut changed by \"ncomponents\" spin.\n if self._pca is None:\n self._invalidate_selection()\n return\n\n if self.ncomponents == 0:\n # Special \"All\" value\n cut = len(self._variance_ratio)\n else:\n cut = self.ncomponents\n\n var = self._cumulative[cut - 1]\n if numpy.isfinite(var):\n self.variance_covered = int(var * 100)\n\n self.plot.set_cut_point(cut)\n self._invalidate_selection()\n\n def _update_selection_variance_spin(self):\n # cut changed by \"max variance\" spin.\n if self._pca is None:\n return\n\n cut = numpy.searchsorted(self._cumulative,\n self.variance_covered / 100.0) + 1\n cut = min(cut, len(self._cumulative))\n self.ncomponents = cut\n self.plot.set_cut_point(cut)\n self._invalidate_selection()\n\n def _update_normalize(self):\n self.fit()\n if self.data is None:\n self._invalidate_selection()\n\n def _init_projector(self):\n self._pca_projector = PCA(n_components=MAX_COMPONENTS, random_state=0)\n self._pca_projector.component = self.ncomponents\n self._pca_preprocessors = PCA.preprocessors\n\n def _nselected_components(self):\n \"\"\"Return the number of selected components.\"\"\"\n if self._pca is None:\n return 0\n\n if self.ncomponents == 0:\n # Special \"All\" value\n max_comp = len(self._variance_ratio)\n else:\n max_comp = self.ncomponents\n\n var_max = self._cumulative[max_comp - 1]\n if var_max != numpy.floor(self.variance_covered / 100.0):\n cut = max_comp\n assert numpy.isfinite(var_max)\n self.variance_covered = int(var_max * 100)\n else:\n self.ncomponents = cut = numpy.searchsorted(\n self._cumulative, self.variance_covered / 100.0) + 1\n return cut\n\n def _invalidate_selection(self):\n self.commit()\n\n def _update_axis(self):\n p = min(len(self._variance_ratio), self.maxp)\n axis = self.plot.getAxis(\"bottom\")\n d = max((p-1)//(self.axis_labels-1), 1)\n axis.setTicks([[(i, str(i)) for i in range(1, p + 1, d)]])\n\n def commit(self):\n transformed = data = components = None\n if self._pca is not None:\n if self._transformed is None:\n # Compute the full transform (MAX_COMPONENTS components) once.\n self._transformed = self._pca(self.data)\n transformed = self._transformed\n\n domain = Domain(\n transformed.domain.attributes[:self.ncomponents],\n self.data.domain.class_vars,\n self.data.domain.metas\n )\n transformed = transformed.from_table(domain, transformed)\n # prevent caching new features by defining compute_value\n proposed = [a.name for a in self._pca.orig_domain.attributes]\n meta_name = get_unique_names(proposed, 'components')\n dom = Domain(\n [ContinuousVariable(name, compute_value=lambda _: 
None)\n for name in proposed],\n metas=[StringVariable(name=meta_name)])\n metas = numpy.array([['PC{}'.format(i + 1)\n for i in range(self.ncomponents)]],\n dtype=object).T\n components = Table(dom, self._pca.components_[:self.ncomponents],\n metas=metas)\n components.name = 'components'\n\n data_dom = Domain(\n self.data.domain.attributes,\n self.data.domain.class_vars,\n self.data.domain.metas + domain.attributes)\n data = Table.from_numpy(\n data_dom, self.data.X, self.data.Y,\n numpy.hstack((self.data.metas, transformed.X)),\n ids=self.data.ids)\n\n self._pca_projector.component = self.ncomponents\n self.Outputs.transformed_data.send(transformed)\n self.Outputs.components.send(components)\n self.Outputs.data.send(data)\n self.Outputs.pca.send(self._pca_projector)\n\n def send_report(self):\n if self.data is None:\n return\n self.report_items((\n (\"Normalize data\", str(self.normalize)),\n (\"Selected components\", self.ncomponents),\n (\"Explained variance\", \"{:.3f} %\".format(self.variance_covered))\n ))\n self.report_plot()\n\n @classmethod\n def migrate_settings(cls, settings, version):\n if \"variance_covered\" in settings:\n # Due to the error in gh-1896 the variance_covered was persisted\n # as a NaN value, causing a TypeError in the widgets `__init__`.\n vc = settings[\"variance_covered\"]\n if isinstance(vc, numbers.Real):\n if numpy.isfinite(vc):\n vc = int(vc)\n else:\n vc = 100\n settings[\"variance_covered\"] = vc\n if settings.get(\"ncomponents\", 0) > MAX_COMPONENTS:\n settings[\"ncomponents\"] = MAX_COMPONENTS\n\n # Remove old `decomposition_idx` when SVD was still included\n settings.pop(\"decomposition_idx\", None)\n\n # Remove RemotePCA settings\n settings.pop(\"batch_size\", None)\n settings.pop(\"address\", None)\n settings.pop(\"auto_update\", None)\n\n\nif __name__ == \"__main__\": # pragma: no cover\n WidgetPreview(OWPCA).run(Table(\"housing\"))\n", "path": "Orange/widgets/unsupervised/owpca.py"}]} |
gh_patches_debug_1190 | rasdani/github-patches | git_diff | CiviWiki__OpenCiviWiki-980 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Restore SessionAuthenticationMiddleware
We aim to move away from having a heavy JavaScript front-end, preferring instead to use Django templates (and sprinkles of JS where needed). This means we can use SessionAuthenticationMiddleware.
This will also require restoring the default authentication classes in `settings.py`.
--- END ISSUE ---
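Before wading into the full settings module below, the requested change appears to boil down to pointing Django REST Framework back at its session-based authenticator. A minimal sketch of the relevant block (an illustration, not the project's exact patch; it assumes DRF's stock `SessionAuthentication` and `IsAuthenticated` classes and leaves the surrounding settings untouched):

```python
# Authenticate API calls against the Django session instead of HTTP Basic auth.
DEFAULT_AUTHENTICATION_CLASSES = (
    "rest_framework.authentication.SessionAuthentication",
)

REST_FRAMEWORK = {
    "DEFAULT_PERMISSION_CLASSES": ("rest_framework.permissions.IsAuthenticated",),
    "DEFAULT_AUTHENTICATION_CLASSES": DEFAULT_AUTHENTICATION_CLASSES,
}
```

DRF's `SessionAuthentication` also enforces CSRF checks on unsafe methods, which is presumably why the settings below keep a CSRF-exempt variant for debug mode.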
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `project/core/settings.py`
Content:
```
1 """
2 Django settings for civiwiki project.
3 Darius Calliet May 12, 2016
4
5 Production settings file to select proper environment variables.
6 """
7 import os
8
9 # False if not in os.environ
10 DEBUG = os.getenv("DEBUG", False)
11
12 # defaults to second value if not found in os.environ
13 DJANGO_HOST = os.getenv("DJANGO_HOST", "LOCALHOST")
14
15 BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
16 SECRET_KEY = os.getenv("DJANGO_SECRET_KEY", "TEST_KEY_FOR_DEVELOPMENT")
17 ALLOWED_HOSTS = [".herokuapp.com", ".civiwiki.org", "127.0.0.1", "localhost", "0.0.0.0"]
18
19 INSTALLED_APPS = (
20 "django.contrib.admin",
21 "django.contrib.auth",
22 "django.contrib.contenttypes",
23 "django.contrib.sessions",
24 "django.contrib.messages",
25 "django.contrib.staticfiles",
26 "django_extensions",
27 "storages",
28 "core", # TODO: consider removing this, if we can move the decorators, etc. to an actual app
29 "api",
30 "rest_framework",
31 "accounts",
32 "threads",
33 "frontend_views",
34 "notifications",
35 "corsheaders",
36 "taggit",
37 )
38
39 MIDDLEWARE = [
40 "corsheaders.middleware.CorsMiddleware",
41 "django.middleware.security.SecurityMiddleware",
42 "whitenoise.middleware.WhiteNoiseMiddleware",
43 "django.contrib.sessions.middleware.SessionMiddleware",
44 "django.middleware.common.CommonMiddleware",
45 "django.middleware.csrf.CsrfViewMiddleware",
46 "django.contrib.auth.middleware.AuthenticationMiddleware",
47 # 'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
48 "django.contrib.messages.middleware.MessageMiddleware",
49 "django.middleware.clickjacking.XFrameOptionsMiddleware",
50 ]
51
52 CSRF_USE_SESSIONS = (
53 True # Store the CSRF token in the users session instead of in a cookie
54 )
55
56 CORS_ORIGIN_ALLOW_ALL = True
57 ROOT_URLCONF = "core.urls"
58 LOGIN_URL = "/login"
59
60 # SSL Setup
61 if DJANGO_HOST != "LOCALHOST":
62 SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
63 SECURE_SSL_REDIRECT = True
64 SESSION_COOKIE_SECURE = True
65 CSRF_COOKIE_SECURE = True
66
67 # Internationalization & Localization
68 LANGUAGE_CODE = "en-us"
69 TIME_ZONE = "UTC"
70 USE_I18N = True
71 USE_L10N = True
72 USE_TZ = True
73
74 TEMPLATES = [
75 {
76 "BACKEND": "django.template.backends.django.DjangoTemplates",
77 "DIRS": [
78 os.path.join(BASE_DIR, "threads/templates/threads"), os.path.join(BASE_DIR, "accounts/templates/accounts")
79 ], # TODO: Add non-webapp template directory
80 "APP_DIRS": True,
81 "OPTIONS": {
82 "context_processors": [
83 "django.template.context_processors.debug",
84 "django.template.context_processors.request",
85 "django.contrib.auth.context_processors.auth",
86 "django.contrib.messages.context_processors.messages",
87 ],
88 },
89 },
90 ]
91
92 WSGI_APPLICATION = "core.wsgi.application"
93
94 # Apex Contact for Production Errors
95 ADMINS = [("Development Team", "[email protected]")]
96
97 # AWS S3 Setup
98 if "AWS_STORAGE_BUCKET_NAME" not in os.environ:
99 MEDIA_URL = "/media/"
100 MEDIA_ROOT = os.path.join(BASE_DIR, "media")
101 else:
102 AWS_STORAGE_BUCKET_NAME = os.getenv("AWS_STORAGE_BUCKET_NAME")
103 AWS_S3_ACCESS_KEY_ID = os.getenv("AWS_S3_ACCESS_KEY_ID")
104 AWS_S3_SECRET_ACCESS_KEY = os.getenv("AWS_S3_SECRET_ACCESS_KEY")
105 DEFAULT_FILE_STORAGE = "storages.backends.s3boto.S3BotoStorage"
106 AWS_S3_SECURE_URLS = False
107 AWS_QUERYSTRING_AUTH = False
108
109 STATIC_URL = "/static/"
110 STATICFILES_DIRS = (os.path.join(BASE_DIR, "threads/templates/static"),)
111 STATIC_ROOT = os.path.join(BASE_DIR, "staticfiles")
112
113 # TODO: re-organize and simplify staticfiles settings
114 if "CIVIWIKI_LOCAL_NAME" not in os.environ:
115 STATICFILES_STORAGE = "whitenoise.storage.CompressedManifestStaticFilesStorage"
116
117 # Use DATABASE_URL in production
118 DATABASE_URL = os.getenv("DATABASE_URL")
119
120 if DATABASE_URL is not None:
121 DATABASES = {"default": DATABASE_URL}
122 else:
123 # Default to sqlite for simplicity in development
124 DATABASES = {
125 "default": {
126 "ENGINE": "django.db.backends.sqlite3",
127 "NAME": BASE_DIR + "/" + "db.sqlite3",
128 }
129 }
130
131 # Email Backend Setup
132 if "EMAIL_HOST" not in os.environ:
133 EMAIL_BACKEND = "django.core.mail.backends.console.EmailBackend"
134 EMAIL_HOST_USER = "[email protected]"
135 else:
136 EMAIL_BACKEND = "django.core.mail.backends.smtp.EmailBackend"
137 EMAIL_HOST = os.getenv("EMAIL_HOST")
138 EMAIL_PORT = os.getenv("EMAIL_PORT")
139 EMAIL_HOST_USER = os.getenv("EMAIL_HOST_USER")
140 EMAIL_HOST_PASSWORD = os.getenv("EMAIL_HOST_PASSWORD")
141 EMAIL_USE_SSL = True
142 DEFAULT_FROM_EMAIL = EMAIL_HOST
143
144 # Notification API Settings
145 NOTIFICATIONS_SOFT_DELETE = True
146 NOTIFICATIONS_USE_JSONFIELD = True
147
148 # Django REST API Settings
149 DEFAULT_RENDERER_CLASSES = ("rest_framework.renderers.JSONRenderer",)
150
151 DEFAULT_AUTHENTICATION_CLASSES = ("rest_framework.authentication.BasicAuthentication",)
152
153 if DEBUG:
154 # Browsable HTML - Enabled only in Debug mode (dev)
155 DEFAULT_RENDERER_CLASSES = DEFAULT_RENDERER_CLASSES + (
156 "rest_framework.renderers.BrowsableAPIRenderer",
157 )
158
159 DEFAULT_AUTHENTICATION_CLASSES = (
160 "api.authentication.CsrfExemptSessionAuthentication",
161 ) + DEFAULT_AUTHENTICATION_CLASSES
162
163 REST_FRAMEWORK = {
164 "DEFAULT_PERMISSION_CLASSES": ("rest_framework.permissions.IsAuthenticated",),
165 "DEFAULT_RENDERER_CLASSES": DEFAULT_RENDERER_CLASSES,
166 "DEFAULT_AUTHENTICATION_CLASSES": DEFAULT_AUTHENTICATION_CLASSES,
167 }
168
169 # CORS Settings
170 CORS_ORIGIN_ALLOW_ALL = True
171
172 # Custom User model
173 AUTH_USER_MODEL = 'accounts.User'
174
175 APPEND_SLASH = False
176
177 DEFAULT_AUTO_FIELD = 'django.db.models.AutoField'
178
179 LOGIN_REDIRECT_URL = '/'
180
181 AUTH_PASSWORD_VALIDATORS = [
182 {
183 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
184 },
185 {
186 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
187 'OPTIONS': {
188 'min_length': 8,
189 }
190 },
191 {
192 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
193 },
194 {
195 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
196 },
197 ]
198
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/project/core/settings.py b/project/core/settings.py
--- a/project/core/settings.py
+++ b/project/core/settings.py
@@ -148,7 +148,7 @@
# Django REST API Settings
DEFAULT_RENDERER_CLASSES = ("rest_framework.renderers.JSONRenderer",)
-DEFAULT_AUTHENTICATION_CLASSES = ("rest_framework.authentication.BasicAuthentication",)
+DEFAULT_AUTHENTICATION_CLASSES = ("rest_framework.authentication.SessionAuthentication",)
if DEBUG:
# Browsable HTML - Enabled only in Debug mode (dev)
| {"golden_diff": "diff --git a/project/core/settings.py b/project/core/settings.py\n--- a/project/core/settings.py\n+++ b/project/core/settings.py\n@@ -148,7 +148,7 @@\n # Django REST API Settings\n DEFAULT_RENDERER_CLASSES = (\"rest_framework.renderers.JSONRenderer\",)\n \n-DEFAULT_AUTHENTICATION_CLASSES = (\"rest_framework.authentication.BasicAuthentication\",)\n+DEFAULT_AUTHENTICATION_CLASSES = (\"rest_framework.authentication.SessionAuthentication\",)\n \n if DEBUG:\n # Browsable HTML - Enabled only in Debug mode (dev)\n", "issue": "Restore SessionAuthenticationMiddleware\nWe aim to move away from having a heavy JavaScript front-end, preferring instead to use Django templates (and sprinkles of JS where needed). This means we can use SessionAuthenticationMiddleware.\r\n\r\nThis will also require restoring the default authentication classes in `settings.py`\n", "before_files": [{"content": "\"\"\"\nDjango settings for civiwiki project.\nDarius Calliet May 12, 2016\n\nProduction settings file to select proper environment variables.\n\"\"\"\nimport os\n\n# False if not in os.environ\nDEBUG = os.getenv(\"DEBUG\", False)\n\n# defaults to second value if not found in os.environ\nDJANGO_HOST = os.getenv(\"DJANGO_HOST\", \"LOCALHOST\")\n\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\nSECRET_KEY = os.getenv(\"DJANGO_SECRET_KEY\", \"TEST_KEY_FOR_DEVELOPMENT\")\nALLOWED_HOSTS = [\".herokuapp.com\", \".civiwiki.org\", \"127.0.0.1\", \"localhost\", \"0.0.0.0\"]\n\nINSTALLED_APPS = (\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"django_extensions\",\n \"storages\",\n \"core\", # TODO: consider removing this, if we can move the decorators, etc. 
to an actual app\n \"api\",\n \"rest_framework\",\n \"accounts\",\n \"threads\",\n \"frontend_views\",\n \"notifications\",\n \"corsheaders\",\n \"taggit\",\n)\n\nMIDDLEWARE = [\n \"corsheaders.middleware.CorsMiddleware\",\n \"django.middleware.security.SecurityMiddleware\",\n \"whitenoise.middleware.WhiteNoiseMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n # 'django.contrib.auth.middleware.SessionAuthenticationMiddleware',\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n]\n\nCSRF_USE_SESSIONS = (\n True # Store the CSRF token in the users session instead of in a cookie\n)\n\nCORS_ORIGIN_ALLOW_ALL = True\nROOT_URLCONF = \"core.urls\"\nLOGIN_URL = \"/login\"\n\n# SSL Setup\nif DJANGO_HOST != \"LOCALHOST\":\n SECURE_PROXY_SSL_HEADER = (\"HTTP_X_FORWARDED_PROTO\", \"https\")\n SECURE_SSL_REDIRECT = True\n SESSION_COOKIE_SECURE = True\n CSRF_COOKIE_SECURE = True\n\n# Internationalization & Localization\nLANGUAGE_CODE = \"en-us\"\nTIME_ZONE = \"UTC\"\nUSE_I18N = True\nUSE_L10N = True\nUSE_TZ = True\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [\n os.path.join(BASE_DIR, \"threads/templates/threads\"), os.path.join(BASE_DIR, \"accounts/templates/accounts\")\n ], # TODO: Add non-webapp template directory\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n ],\n },\n },\n]\n\nWSGI_APPLICATION = \"core.wsgi.application\"\n\n# Apex Contact for Production Errors\nADMINS = [(\"Development Team\", \"[email protected]\")]\n\n# AWS S3 Setup\nif \"AWS_STORAGE_BUCKET_NAME\" not in os.environ:\n MEDIA_URL = \"/media/\"\n MEDIA_ROOT = os.path.join(BASE_DIR, \"media\")\nelse:\n AWS_STORAGE_BUCKET_NAME = os.getenv(\"AWS_STORAGE_BUCKET_NAME\")\n AWS_S3_ACCESS_KEY_ID = os.getenv(\"AWS_S3_ACCESS_KEY_ID\")\n AWS_S3_SECRET_ACCESS_KEY = os.getenv(\"AWS_S3_SECRET_ACCESS_KEY\")\n DEFAULT_FILE_STORAGE = \"storages.backends.s3boto.S3BotoStorage\"\n AWS_S3_SECURE_URLS = False\n AWS_QUERYSTRING_AUTH = False\n\nSTATIC_URL = \"/static/\"\nSTATICFILES_DIRS = (os.path.join(BASE_DIR, \"threads/templates/static\"),)\nSTATIC_ROOT = os.path.join(BASE_DIR, \"staticfiles\")\n\n# TODO: re-organize and simplify staticfiles settings\nif \"CIVIWIKI_LOCAL_NAME\" not in os.environ:\n STATICFILES_STORAGE = \"whitenoise.storage.CompressedManifestStaticFilesStorage\"\n\n# Use DATABASE_URL in production\nDATABASE_URL = os.getenv(\"DATABASE_URL\")\n\nif DATABASE_URL is not None:\n DATABASES = {\"default\": DATABASE_URL}\nelse:\n # Default to sqlite for simplicity in development\n DATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.sqlite3\",\n \"NAME\": BASE_DIR + \"/\" + \"db.sqlite3\",\n }\n }\n\n# Email Backend Setup\nif \"EMAIL_HOST\" not in os.environ:\n EMAIL_BACKEND = \"django.core.mail.backends.console.EmailBackend\"\n EMAIL_HOST_USER = \"[email protected]\"\nelse:\n EMAIL_BACKEND = \"django.core.mail.backends.smtp.EmailBackend\"\n EMAIL_HOST = os.getenv(\"EMAIL_HOST\")\n EMAIL_PORT = os.getenv(\"EMAIL_PORT\")\n EMAIL_HOST_USER = os.getenv(\"EMAIL_HOST_USER\")\n EMAIL_HOST_PASSWORD = 
os.getenv(\"EMAIL_HOST_PASSWORD\")\n EMAIL_USE_SSL = True\n DEFAULT_FROM_EMAIL = EMAIL_HOST\n\n# Notification API Settings\nNOTIFICATIONS_SOFT_DELETE = True\nNOTIFICATIONS_USE_JSONFIELD = True\n\n# Django REST API Settings\nDEFAULT_RENDERER_CLASSES = (\"rest_framework.renderers.JSONRenderer\",)\n\nDEFAULT_AUTHENTICATION_CLASSES = (\"rest_framework.authentication.BasicAuthentication\",)\n\nif DEBUG:\n # Browsable HTML - Enabled only in Debug mode (dev)\n DEFAULT_RENDERER_CLASSES = DEFAULT_RENDERER_CLASSES + (\n \"rest_framework.renderers.BrowsableAPIRenderer\",\n )\n\n DEFAULT_AUTHENTICATION_CLASSES = (\n \"api.authentication.CsrfExemptSessionAuthentication\",\n ) + DEFAULT_AUTHENTICATION_CLASSES\n\nREST_FRAMEWORK = {\n \"DEFAULT_PERMISSION_CLASSES\": (\"rest_framework.permissions.IsAuthenticated\",),\n \"DEFAULT_RENDERER_CLASSES\": DEFAULT_RENDERER_CLASSES,\n \"DEFAULT_AUTHENTICATION_CLASSES\": DEFAULT_AUTHENTICATION_CLASSES,\n}\n\n# CORS Settings\nCORS_ORIGIN_ALLOW_ALL = True\n\n# Custom User model\nAUTH_USER_MODEL = 'accounts.User'\n\nAPPEND_SLASH = False\n\nDEFAULT_AUTO_FIELD = 'django.db.models.AutoField'\n\nLOGIN_REDIRECT_URL = '/'\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',\n 'OPTIONS': {\n 'min_length': 8,\n }\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',\n },\n]\n", "path": "project/core/settings.py"}], "after_files": [{"content": "\"\"\"\nDjango settings for civiwiki project.\nDarius Calliet May 12, 2016\n\nProduction settings file to select proper environment variables.\n\"\"\"\nimport os\n\n# False if not in os.environ\nDEBUG = os.getenv(\"DEBUG\", False)\n\n# defaults to second value if not found in os.environ\nDJANGO_HOST = os.getenv(\"DJANGO_HOST\", \"LOCALHOST\")\n\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\nSECRET_KEY = os.getenv(\"DJANGO_SECRET_KEY\", \"TEST_KEY_FOR_DEVELOPMENT\")\nALLOWED_HOSTS = [\".herokuapp.com\", \".civiwiki.org\", \"127.0.0.1\", \"localhost\", \"0.0.0.0\"]\n\nINSTALLED_APPS = (\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"django_extensions\",\n \"storages\",\n \"core\", # TODO: consider removing this, if we can move the decorators, etc. 
to an actual app\n \"api\",\n \"rest_framework\",\n \"accounts\",\n \"threads\",\n \"frontend_views\",\n \"notifications\",\n \"corsheaders\",\n \"taggit\",\n)\n\nMIDDLEWARE = [\n \"corsheaders.middleware.CorsMiddleware\",\n \"django.middleware.security.SecurityMiddleware\",\n \"whitenoise.middleware.WhiteNoiseMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n # 'django.contrib.auth.middleware.SessionAuthenticationMiddleware',\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n]\n\nCSRF_USE_SESSIONS = (\n True # Store the CSRF token in the users session instead of in a cookie\n)\n\nCORS_ORIGIN_ALLOW_ALL = True\nROOT_URLCONF = \"core.urls\"\nLOGIN_URL = \"/login\"\n\n# SSL Setup\nif DJANGO_HOST != \"LOCALHOST\":\n SECURE_PROXY_SSL_HEADER = (\"HTTP_X_FORWARDED_PROTO\", \"https\")\n SECURE_SSL_REDIRECT = True\n SESSION_COOKIE_SECURE = True\n CSRF_COOKIE_SECURE = True\n\n# Internationalization & Localization\nLANGUAGE_CODE = \"en-us\"\nTIME_ZONE = \"UTC\"\nUSE_I18N = True\nUSE_L10N = True\nUSE_TZ = True\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [\n os.path.join(BASE_DIR, \"threads/templates/threads\"), os.path.join(BASE_DIR, \"accounts/templates/accounts\")\n ], # TODO: Add non-webapp template directory\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n ],\n },\n },\n]\n\nWSGI_APPLICATION = \"core.wsgi.application\"\n\n# Apex Contact for Production Errors\nADMINS = [(\"Development Team\", \"[email protected]\")]\n\n# AWS S3 Setup\nif \"AWS_STORAGE_BUCKET_NAME\" not in os.environ:\n MEDIA_URL = \"/media/\"\n MEDIA_ROOT = os.path.join(BASE_DIR, \"media\")\nelse:\n AWS_STORAGE_BUCKET_NAME = os.getenv(\"AWS_STORAGE_BUCKET_NAME\")\n AWS_S3_ACCESS_KEY_ID = os.getenv(\"AWS_S3_ACCESS_KEY_ID\")\n AWS_S3_SECRET_ACCESS_KEY = os.getenv(\"AWS_S3_SECRET_ACCESS_KEY\")\n DEFAULT_FILE_STORAGE = \"storages.backends.s3boto.S3BotoStorage\"\n AWS_S3_SECURE_URLS = False\n AWS_QUERYSTRING_AUTH = False\n\nSTATIC_URL = \"/static/\"\nSTATICFILES_DIRS = (os.path.join(BASE_DIR, \"threads/templates/static\"),)\nSTATIC_ROOT = os.path.join(BASE_DIR, \"staticfiles\")\n\n# TODO: re-organize and simplify staticfiles settings\nif \"CIVIWIKI_LOCAL_NAME\" not in os.environ:\n STATICFILES_STORAGE = \"whitenoise.storage.CompressedManifestStaticFilesStorage\"\n\n# Use DATABASE_URL in production\nDATABASE_URL = os.getenv(\"DATABASE_URL\")\n\nif DATABASE_URL is not None:\n DATABASES = {\"default\": DATABASE_URL}\nelse:\n # Default to sqlite for simplicity in development\n DATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.sqlite3\",\n \"NAME\": BASE_DIR + \"/\" + \"db.sqlite3\",\n }\n }\n\n# Email Backend Setup\nif \"EMAIL_HOST\" not in os.environ:\n EMAIL_BACKEND = \"django.core.mail.backends.console.EmailBackend\"\n EMAIL_HOST_USER = \"[email protected]\"\nelse:\n EMAIL_BACKEND = \"django.core.mail.backends.smtp.EmailBackend\"\n EMAIL_HOST = os.getenv(\"EMAIL_HOST\")\n EMAIL_PORT = os.getenv(\"EMAIL_PORT\")\n EMAIL_HOST_USER = os.getenv(\"EMAIL_HOST_USER\")\n EMAIL_HOST_PASSWORD = 
os.getenv(\"EMAIL_HOST_PASSWORD\")\n EMAIL_USE_SSL = True\n DEFAULT_FROM_EMAIL = EMAIL_HOST\n\n# Notification API Settings\nNOTIFICATIONS_SOFT_DELETE = True\nNOTIFICATIONS_USE_JSONFIELD = True\n\n# Django REST API Settings\nDEFAULT_RENDERER_CLASSES = (\"rest_framework.renderers.JSONRenderer\",)\n\nDEFAULT_AUTHENTICATION_CLASSES = (\"rest_framework.authentication.SessionAuthentication\",)\n\nif DEBUG:\n # Browsable HTML - Enabled only in Debug mode (dev)\n DEFAULT_RENDERER_CLASSES = DEFAULT_RENDERER_CLASSES + (\n \"rest_framework.renderers.BrowsableAPIRenderer\",\n )\n\n DEFAULT_AUTHENTICATION_CLASSES = (\n \"api.authentication.CsrfExemptSessionAuthentication\",\n ) + DEFAULT_AUTHENTICATION_CLASSES\n\nREST_FRAMEWORK = {\n \"DEFAULT_PERMISSION_CLASSES\": (\"rest_framework.permissions.IsAuthenticated\",),\n \"DEFAULT_RENDERER_CLASSES\": DEFAULT_RENDERER_CLASSES,\n \"DEFAULT_AUTHENTICATION_CLASSES\": DEFAULT_AUTHENTICATION_CLASSES,\n}\n\n# CORS Settings\nCORS_ORIGIN_ALLOW_ALL = True\n\n# Custom User model\nAUTH_USER_MODEL = 'accounts.User'\n\nAPPEND_SLASH = False\n\nDEFAULT_AUTO_FIELD = 'django.db.models.AutoField'\n\nLOGIN_REDIRECT_URL = '/'\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',\n 'OPTIONS': {\n 'min_length': 8,\n }\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',\n },\n]\n", "path": "project/core/settings.py"}]} |
gh_patches_debug_1191 | rasdani/github-patches | git_diff | facebookresearch__xformers-175 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Dropout uses the *memory address* of seeds instead of reading seeds from memory
# 🐛 Bug
From reading the code for `k_dropout_fw` and `k_dropout_bw`, it seems to me that the seeds are never read from memory and the code simply uses the *memory address* of the seed.
For [example](https://github.com/facebookresearch/xformers/blob/main/xformers/triton/k_dropout.py):
```
def _get_4_bin_masks(seed, rand_offsets, p):
rand1, rand2, rand3, rand4 = tl.randint4x(seed.to(tl.int32), rand_offsets)
```
Here `seed` is still a memory address, not an integer.
As a result, when `k_dropout_fw` is passed in two identical seed tensors with different memory addresses, it produces different results.
## To Reproduce
Setting the PyTorch seed should produce the same `seed` used in dropout, and should therefore produce the same dropout mask.
However, that's not the case:
```
import torch
from xformers.triton.dropout import dropout
x = torch.randn(3, 5, device='cuda')
print(x)
torch.manual_seed(0)
torch.cuda.manual_seed(0)
print(dropout(x, 0.5))
torch.manual_seed(0)
torch.cuda.manual_seed(0)
print(dropout(x, 0.5))
```
```
tensor([[ 0.4821, -1.6949, 0.8196, 1.9093, -1.0018],
[ 0.4030, -1.5175, -0.3187, -0.0959, 2.7204],
[ 1.0645, -0.1254, 0.3978, -2.9882, 0.2232]], device='cuda:0')
tensor([[ 0.9642, 0.0000, 0.0000, 0.0000, 0.0000],
[ 0.8059, -3.0350, 0.0000, -0.1918, 5.4409],
[ 0.0000, -0.2507, 0.7955, 0.0000, 0.0000]], device='cuda:0')
tensor([[ 0.9642, -3.3897, 0.0000, 3.8186, -2.0037],
[ 0.0000, 0.0000, -0.6374, 0.0000, 5.4409],
[ 2.1290, -0.2507, 0.7955, 0.0000, 0.4464]], device='cuda:0')
```
- PyTorch Version (e.g., 1.0): 1.10.1
- OS (e.g., Linux): Ubuntu 20.04
- How you installed PyTorch (`conda`, `pip`, source): conda
- Build command you used (if compiling from source): pip install -e . (from master)
- Python version: 3.8
- CUDA/cuDNN version: 11.3
- GPU models and configuration: V100
--- END ISSUE ---
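In Triton terms, `SEEDS + col_id` is pointer arithmetic and `tl.load` is what turns that address into the integer stored there; handing the un-loaded pointer to the RNG is what makes the mask depend on where the seed tensor happens to sit in memory. The reproduction above can be restated as a single check (a sketch assuming the reporter's environment: a CUDA device and `xformers` installed from source):

```python
import torch
from xformers.triton.dropout import dropout

x = torch.randn(3, 5, device="cuda")

def drop_with_seed(t, seed):
    # Re-seed both RNGs, as in the reproduction, then apply the fused dropout.
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    return dropout(t, 0.5)

# The reporter's expectation: with the seed values actually read by the kernel,
# re-seeding torch's RNG should reproduce the exact same dropout mask.
print(torch.equal(drop_with_seed(x, 0), drop_with_seed(x, 0)))
```

Given the outputs quoted above, this currently prints `False`; the reporter's expectation is `True`.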
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `xformers/triton/k_dropout.py`
Content:
```
1 # Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.
2 #
3 # This source code is licensed under the BSD license found in the
4 # LICENSE file in the root directory of this source tree.
5
6
7 # CREDITS: This comes almost as-is from the Triton dropout tutorial
8 # https://raw.githubusercontent.com/openai/triton/master/python/tutorials/04-low-memory-dropout.py
9
10 import triton
11 import triton.language as tl
12
13 _configs = [
14 triton.Config({}, num_warps=1),
15 triton.Config({}, num_warps=2),
16 triton.Config({}, num_warps=4),
17 triton.Config({}, num_warps=8),
18 triton.Config({}, num_warps=16),
19 ]
20
21
22 @triton.jit
23 def _get_4_bin_masks(seed, rand_offsets, p):
24 rand1, rand2, rand3, rand4 = tl.randint4x(seed.to(tl.int32), rand_offsets)
25
26 # binarize masks, save registers
27 # NOTE: We keep the random numbers as is there (integers over int32),
28 # and convert the threshold instead, for speed
29
30 # The initial distribution is -2**31 / 2**31 -1
31 # and our float threshold is in between [0, 1]
32 # The full computation is: `start_point + full range * p`
33 threshold = (-2147483648.0 + 4294967295.0 * p).to(tl.int32)
34 rand_mask1 = rand1 > threshold
35 rand_mask2 = rand2 > threshold
36 rand_mask3 = rand3 > threshold
37 rand_mask4 = rand4 > threshold
38
39 return rand_mask1, rand_mask2, rand_mask3, rand_mask4
40
41
42 @triton.jit
43 def _random_prune_and_scale(x, rand_mask, p, p_scale):
44 zero = 0.0
45
46 if p > 0.0:
47 # generate all the random numbers for the block at once, then reshape
48 keep = tl.reshape(rand_mask, x.shape)
49
50 # prune and normalize in one go
51 x = tl.where(keep, (x * p_scale).to(x.dtype), zero.to(x.dtype))
52
53 return x
54
55
56 # fmt: off
57 @triton.heuristics({"SIZE_RAND_BLOCK": lambda *_, **meta: meta["BLOCK_N"] * meta["BLOCK_M"]})
58 @triton.autotune(
59 configs=_configs,
60 key=["M", "N", "is_fp16"],
61 )
62 @triton.jit
63 def k_dropout_fw(
64 Y, X, BIAS, SEEDS,
65 stride,
66 M, N,
67 p,
68 is_fp16, # autotune
69 **meta,
70 ):
71 """
72 Apply dropout on an input tensor
73 Y : Output (M, N)
74 X : Input (M, N)
75 BIAS (N,)
76 SEEDS (M,)
77 p : dropout probability
78 """
79 # fmt: on
80
81 BLOCK_M = meta["BLOCK_M"]
82 BLOCK_N = meta["BLOCK_N"]
83 SIZE_RAND_BLOCK = meta["SIZE_RAND_BLOCK"]
84
85 row_id = tl.program_id(axis=0)
86 rows = row_id * BLOCK_M * 4 + tl.arange(0, BLOCK_M)
87
88 col_id = tl.program_id(axis=1)
89 cols = col_id * BLOCK_N + tl.arange(0, BLOCK_N)
90 seed = SEEDS + col_id
91
92 # pointers starting point
93 x_ptrs = X + rows[:, None] * stride + cols[None, :]
94 y_ptrs = Y + rows[:, None] * stride + cols[None, :]
95
96 # go over all the tiles, one by one
97 rand_offsets = tl.arange(0, SIZE_RAND_BLOCK) + row_id * BLOCK_M * 4
98 rand_mask1, rand_mask2, rand_mask3, rand_mask4 = _get_4_bin_masks(seed, rand_offsets, p)
99
100 col_mask = cols[None, :] < N
101 p_scale = 1 / (1 - p) if p < 1. else 1.
102
103 if meta["USE_BIAS"]:
104 b_ptrs = BIAS + cols[None, :]
105 bias = tl.load(b_ptrs, mask=cols[None, :] < N, other=0.)
106
107 for i in range(4):
108 # cycle through the binary masks (workaround / no indexing)
109 if i == 0:
110 rand_mask = rand_mask1
111 elif i == 1:
112 rand_mask = rand_mask2
113 elif i == 2:
114 rand_mask = rand_mask3
115 else:
116 rand_mask = rand_mask4
117
118 block_mask = (rows[:, None] < M) & col_mask
119 x = tl.load(x_ptrs, mask=block_mask, other=0.)
120
121 # optionally apply a fused bias
122 if meta["USE_BIAS"]:
123 x += bias
124
125 # optional: fused activation (while the data is in shared memory)
126 if meta["ACTIVATION"]:
127 x = meta["ACTIVATION"](x)
128
129 # randomly prune (and scale) the resulting buffer, possibly a no-op
130 output = _random_prune_and_scale(x, rand_mask, p, p_scale)
131
132 tl.store(y_ptrs, output, mask=block_mask)
133
134 # Update the pointers
135 rows += BLOCK_M # needs to be updated for the mask to be correct
136 x_ptrs += BLOCK_M * stride
137 y_ptrs += BLOCK_M * stride
138
139
140 # fmt: off
141 @triton.heuristics({"SIZE_RAND_BLOCK": lambda *_, **meta: meta["BLOCK_N"] * meta["BLOCK_M"]})
142 @triton.autotune(
143 configs=_configs,
144 key=["M", "N", "is_fp16"],
145 )
146 @triton.jit
147 def k_dropout_bw(
148 GRAD_IN, GRAD_BIAS, GRAD_OUT,
149 INPUTS, BIAS, SEEDS,
150 stride_grad, stride_inputs,
151 M, N,
152 p,
153 is_fp16, # autotune
154 **meta,
155 ):
156 """
157 Apply dropout on an input tensor
158 GRAD_OUT (M, N)
159 GRAD_BIAS (N,)
160 GRAD_IN (M, N)
161 BIAS (N,)
162 SEEDS (N,)
163 p : dropout probability
164 """
165 # fmt: on
166
167 BLOCK_M = meta["BLOCK_M"]
168 BLOCK_N = meta["BLOCK_N"]
169 SIZE_RAND_BLOCK = meta["SIZE_RAND_BLOCK"]
170 TRAINABLE_BIAS = meta["TRAINABLE_BIAS"]
171
172 rows = tl.arange(0, BLOCK_M)
173 row_id = tl.program_id(axis=0)
174 rows = row_id * BLOCK_M * 4 + tl.arange(0, BLOCK_M)
175
176 col_id = tl.program_id(axis=1)
177 cols = col_id * BLOCK_N + tl.arange(0, BLOCK_N)
178 seed = SEEDS + col_id # FIXME index the seed properly
179
180 # pointers starting point
181 grad_out_ptrs = GRAD_OUT + rows[:, None] * stride_grad + cols[None, :]
182 grad_in_ptrs = GRAD_IN + rows[:, None] * stride_grad + cols[None, :]
183 input_ptrs = INPUTS + rows[:, None] * stride_inputs + cols[None, :]
184
185 # random binary masks, save registers
186 rand_offsets = tl.arange(0, SIZE_RAND_BLOCK) + row_id * BLOCK_M * 4
187 rand_mask1, rand_mask2, rand_mask3, rand_mask4 = _get_4_bin_masks(seed, rand_offsets, p)
188
189 # now go over the tiles
190 grad_bias = tl.zeros((BLOCK_N,), dtype=tl.float32)
191 col_mask = cols[None, :] < N
192 p_scale = 1 / (1 - p) if p < 1. else 1.
193
194 if meta["USE_BIAS"]:
195 b_ptrs = BIAS + cols[None, :]
196 bias = tl.load(b_ptrs, mask=col_mask, other=0.)
197
198 for i in range(4):
199 # cycle through the binary masks (workaround / no indexing)
200 if i == 0:
201 rand_mask = rand_mask1
202 elif i == 1:
203 rand_mask = rand_mask2
204 elif i == 2:
205 rand_mask = rand_mask3
206 else:
207 rand_mask = rand_mask4
208
209 block_mask = (rows[:, None] < M) & col_mask
210 grad_out = tl.load(grad_out_ptrs, mask=block_mask, other=0.)
211
212 # optional: fused activation (while the data is in shared memory)
213 if meta["ACTIVATION_GRAD"]:
214 inputs = tl.load(input_ptrs, mask=block_mask, other=0.)
215
216 # optionally apply a fused bias
217 if meta["USE_BIAS"]:
218 inputs += bias
219
220 act_grad = meta["ACTIVATION_GRAD"](inputs).to(grad_out.dtype)
221 grad_out *= act_grad
222
223 # randomly prune (and scale) the resulting buffer, possibly a no-op
224 # note that even if we did not save the mask from the FW pass, it is generated
225 # from the same seeds, so the same drop mask is applied here
226 output = _random_prune_and_scale(grad_out, rand_mask, p, p_scale)
227
228 # write-back
229 tl.store(grad_in_ptrs, output, mask=block_mask)
230
231 # optionally accumulate the bias gradient
232 if TRAINABLE_BIAS:
233 grad_bias += tl.sum(output, axis=0)
234
235 # Update the pointers
236 rows += BLOCK_M # needs to be updated for the mask to be correct
237 grad_out_ptrs += BLOCK_M * stride_grad
238 input_ptrs += BLOCK_M * stride_inputs
239 grad_in_ptrs += BLOCK_M * stride_grad
240
241 if TRAINABLE_BIAS:
242 grad_bias_ptr = GRAD_BIAS + row_id * N + cols
243 tl.store(grad_bias_ptr, grad_bias, mask=cols < N)
244
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/xformers/triton/k_dropout.py b/xformers/triton/k_dropout.py
--- a/xformers/triton/k_dropout.py
+++ b/xformers/triton/k_dropout.py
@@ -21,7 +21,8 @@
@triton.jit
def _get_4_bin_masks(seed, rand_offsets, p):
- rand1, rand2, rand3, rand4 = tl.randint4x(seed.to(tl.int32), rand_offsets)
+ seed = tl.load(seed)
+ rand1, rand2, rand3, rand4 = tl.randint4x(seed, rand_offsets)
# binarize masks, save registers
# NOTE: We keep the random numbers as is there (integers over int32),
| {"golden_diff": "diff --git a/xformers/triton/k_dropout.py b/xformers/triton/k_dropout.py\n--- a/xformers/triton/k_dropout.py\n+++ b/xformers/triton/k_dropout.py\n@@ -21,7 +21,8 @@\n \n @triton.jit\n def _get_4_bin_masks(seed, rand_offsets, p):\n- rand1, rand2, rand3, rand4 = tl.randint4x(seed.to(tl.int32), rand_offsets)\n+ seed = tl.load(seed)\n+ rand1, rand2, rand3, rand4 = tl.randint4x(seed, rand_offsets)\n \n # binarize masks, save registers\n # NOTE: We keep the random numbers as is there (integers over int32),\n", "issue": "Dropout uses the *memory address* of seeds instead of reading seeds from memory\n# \ud83d\udc1b Bug\r\n\r\nFrom reading the code for `k_dropout_fw` and `k_dropout_bw`, it seems to me that the seeds are never read from memory and the code simply uses the *memory address* of the seed.\r\nFor [example](https://github.com/facebookresearch/xformers/blob/main/xformers/triton/k_dropout.py):\r\n```def _get_4_bin_masks(seed, rand_offsets, p):\r\n rand1, rand2, rand3, rand4 = tl.randint4x(seed.to(tl.int32), rand_offsets)\r\n```\r\nHere seed is still a memory address and not an integer.\r\n\r\nAs a result, when `k_dropout_fw` is passed in two identical seed tensors with different memory addresses, it produces different results. \r\n\r\n## To Reproduce\r\n\r\nSetting the Pytorch seed should produce the same `seed` used in dropout, and should produce the same dropout mask.\r\nHowever, that's not the case\r\n\r\n```\r\nimport torch\r\nfrom xformers.triton.dropout import dropout\r\n\r\nx = torch.randn(3, 5, device='cuda')\r\nprint(x)\r\n\r\ntorch.manual_seed(0)\r\ntorch.cuda.manual_seed(0)\r\nprint(dropout(x, 0.5))\r\n\r\ntorch.manual_seed(0)\r\ntorch.cuda.manual_seed(0)\r\nprint(dropout(x, 0.5))\r\n```\r\n\r\n```\r\n tensor([[ 0.4821, -1.6949, 0.8196, 1.9093, -1.0018],\r\n [ 0.4030, -1.5175, -0.3187, -0.0959, 2.7204],\r\n [ 1.0645, -0.1254, 0.3978, -2.9882, 0.2232]], device='cuda:0')\r\n\r\ntensor([[ 0.9642, 0.0000, 0.0000, 0.0000, 0.0000],\r\n [ 0.8059, -3.0350, 0.0000, -0.1918, 5.4409],\r\n [ 0.0000, -0.2507, 0.7955, 0.0000, 0.0000]], device='cuda:0')\r\n\r\ntensor([[ 0.9642, -3.3897, 0.0000, 3.8186, -2.0037],\r\n [ 0.0000, 0.0000, -0.6374, 0.0000, 5.4409],\r\n [ 2.1290, -0.2507, 0.7955, 0.0000, 0.4464]], device='cuda:0')\r\n```\r\n\r\n\r\n- PyTorch Version (e.g., 1.0): 1.10.1\r\n- OS (e.g., Linux): Ubuntu 20.04\r\n- How you installed PyTorch (`conda`, `pip`, source): conda\r\n- Build command you used (if compiling from source): pip install -e . (from master)\r\n- Python version: 3.8\r\n- CUDA/cuDNN version: 11.3\r\n- GPU models and configuration: V100\r\n\nDropout uses the *memory address* of seeds instead of reading seeds from memory\n# \ud83d\udc1b Bug\r\n\r\nFrom reading the code for `k_dropout_fw` and `k_dropout_bw`, it seems to me that the seeds are never read from memory and the code simply uses the *memory address* of the seed.\r\nFor [example](https://github.com/facebookresearch/xformers/blob/main/xformers/triton/k_dropout.py):\r\n```def _get_4_bin_masks(seed, rand_offsets, p):\r\n rand1, rand2, rand3, rand4 = tl.randint4x(seed.to(tl.int32), rand_offsets)\r\n```\r\nHere seed is still a memory address and not an integer.\r\n\r\nAs a result, when `k_dropout_fw` is passed in two identical seed tensors with different memory addresses, it produces different results. 
\r\n\r\n## To Reproduce\r\n\r\nSetting the Pytorch seed should produce the same `seed` used in dropout, and should produce the same dropout mask.\r\nHowever, that's not the case\r\n\r\n```\r\nimport torch\r\nfrom xformers.triton.dropout import dropout\r\n\r\nx = torch.randn(3, 5, device='cuda')\r\nprint(x)\r\n\r\ntorch.manual_seed(0)\r\ntorch.cuda.manual_seed(0)\r\nprint(dropout(x, 0.5))\r\n\r\ntorch.manual_seed(0)\r\ntorch.cuda.manual_seed(0)\r\nprint(dropout(x, 0.5))\r\n```\r\n\r\n```\r\n tensor([[ 0.4821, -1.6949, 0.8196, 1.9093, -1.0018],\r\n [ 0.4030, -1.5175, -0.3187, -0.0959, 2.7204],\r\n [ 1.0645, -0.1254, 0.3978, -2.9882, 0.2232]], device='cuda:0')\r\n\r\ntensor([[ 0.9642, 0.0000, 0.0000, 0.0000, 0.0000],\r\n [ 0.8059, -3.0350, 0.0000, -0.1918, 5.4409],\r\n [ 0.0000, -0.2507, 0.7955, 0.0000, 0.0000]], device='cuda:0')\r\n\r\ntensor([[ 0.9642, -3.3897, 0.0000, 3.8186, -2.0037],\r\n [ 0.0000, 0.0000, -0.6374, 0.0000, 5.4409],\r\n [ 2.1290, -0.2507, 0.7955, 0.0000, 0.4464]], device='cuda:0')\r\n```\r\n\r\n\r\n- PyTorch Version (e.g., 1.0): 1.10.1\r\n- OS (e.g., Linux): Ubuntu 20.04\r\n- How you installed PyTorch (`conda`, `pip`, source): conda\r\n- Build command you used (if compiling from source): pip install -e . (from master)\r\n- Python version: 3.8\r\n- CUDA/cuDNN version: 11.3\r\n- GPU models and configuration: V100\r\n\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.\n#\n# This source code is licensed under the BSD license found in the\n# LICENSE file in the root directory of this source tree.\n\n\n# CREDITS: This comes almost as-is from the Triton dropout tutorial\n# https://raw.githubusercontent.com/openai/triton/master/python/tutorials/04-low-memory-dropout.py\n\nimport triton\nimport triton.language as tl\n\n_configs = [\n triton.Config({}, num_warps=1),\n triton.Config({}, num_warps=2),\n triton.Config({}, num_warps=4),\n triton.Config({}, num_warps=8),\n triton.Config({}, num_warps=16),\n]\n\n\[email protected]\ndef _get_4_bin_masks(seed, rand_offsets, p):\n rand1, rand2, rand3, rand4 = tl.randint4x(seed.to(tl.int32), rand_offsets)\n\n # binarize masks, save registers\n # NOTE: We keep the random numbers as is there (integers over int32),\n # and convert the threshold instead, for speed\n\n # The initial distribution is -2**31 / 2**31 -1\n # and our float threshold is in between [0, 1]\n # The full computation is: `start_point + full range * p`\n threshold = (-2147483648.0 + 4294967295.0 * p).to(tl.int32)\n rand_mask1 = rand1 > threshold\n rand_mask2 = rand2 > threshold\n rand_mask3 = rand3 > threshold\n rand_mask4 = rand4 > threshold\n\n return rand_mask1, rand_mask2, rand_mask3, rand_mask4\n\n\[email protected]\ndef _random_prune_and_scale(x, rand_mask, p, p_scale):\n zero = 0.0\n\n if p > 0.0:\n # generate all the random numbers for the block at once, then reshape\n keep = tl.reshape(rand_mask, x.shape)\n\n # prune and normalize in one go\n x = tl.where(keep, (x * p_scale).to(x.dtype), zero.to(x.dtype))\n\n return x\n\n\n# fmt: off\[email protected]({\"SIZE_RAND_BLOCK\": lambda *_, **meta: meta[\"BLOCK_N\"] * meta[\"BLOCK_M\"]})\[email protected](\n configs=_configs,\n key=[\"M\", \"N\", \"is_fp16\"],\n)\[email protected]\ndef k_dropout_fw(\n Y, X, BIAS, SEEDS,\n stride,\n M, N,\n p,\n is_fp16, # autotune\n **meta,\n):\n \"\"\"\n Apply dropout on an input tensor\n Y : Output (M, N)\n X : Input (M, N)\n BIAS (N,)\n SEEDS (M,)\n p : dropout probability\n \"\"\"\n # fmt: on\n\n BLOCK_M = meta[\"BLOCK_M\"]\n BLOCK_N 
= meta[\"BLOCK_N\"]\n SIZE_RAND_BLOCK = meta[\"SIZE_RAND_BLOCK\"]\n\n row_id = tl.program_id(axis=0)\n rows = row_id * BLOCK_M * 4 + tl.arange(0, BLOCK_M)\n\n col_id = tl.program_id(axis=1)\n cols = col_id * BLOCK_N + tl.arange(0, BLOCK_N)\n seed = SEEDS + col_id\n\n # pointers starting point\n x_ptrs = X + rows[:, None] * stride + cols[None, :]\n y_ptrs = Y + rows[:, None] * stride + cols[None, :]\n\n # go over all the tiles, one by one\n rand_offsets = tl.arange(0, SIZE_RAND_BLOCK) + row_id * BLOCK_M * 4\n rand_mask1, rand_mask2, rand_mask3, rand_mask4 = _get_4_bin_masks(seed, rand_offsets, p)\n\n col_mask = cols[None, :] < N\n p_scale = 1 / (1 - p) if p < 1. else 1.\n\n if meta[\"USE_BIAS\"]:\n b_ptrs = BIAS + cols[None, :]\n bias = tl.load(b_ptrs, mask=cols[None, :] < N, other=0.)\n\n for i in range(4):\n # cycle through the binary masks (workaround / no indexing)\n if i == 0:\n rand_mask = rand_mask1\n elif i == 1:\n rand_mask = rand_mask2\n elif i == 2:\n rand_mask = rand_mask3\n else:\n rand_mask = rand_mask4\n\n block_mask = (rows[:, None] < M) & col_mask\n x = tl.load(x_ptrs, mask=block_mask, other=0.)\n\n # optionally apply a fused bias\n if meta[\"USE_BIAS\"]:\n x += bias\n\n # optional: fused activation (while the data is in shared memory)\n if meta[\"ACTIVATION\"]:\n x = meta[\"ACTIVATION\"](x)\n\n # randomly prune (and scale) the resulting buffer, possibly a no-op\n output = _random_prune_and_scale(x, rand_mask, p, p_scale)\n\n tl.store(y_ptrs, output, mask=block_mask)\n\n # Update the pointers\n rows += BLOCK_M # needs to be updated for the mask to be correct\n x_ptrs += BLOCK_M * stride\n y_ptrs += BLOCK_M * stride\n\n\n# fmt: off\[email protected]({\"SIZE_RAND_BLOCK\": lambda *_, **meta: meta[\"BLOCK_N\"] * meta[\"BLOCK_M\"]})\[email protected](\n configs=_configs,\n key=[\"M\", \"N\", \"is_fp16\"],\n)\[email protected]\ndef k_dropout_bw(\n GRAD_IN, GRAD_BIAS, GRAD_OUT,\n INPUTS, BIAS, SEEDS,\n stride_grad, stride_inputs,\n M, N,\n p,\n is_fp16, # autotune\n **meta,\n):\n \"\"\"\n Apply dropout on an input tensor\n GRAD_OUT (M, N)\n GRAD_BIAS (N,)\n GRAD_IN (M, N)\n BIAS (N,)\n SEEDS (N,)\n p : dropout probability\n \"\"\"\n # fmt: on\n\n BLOCK_M = meta[\"BLOCK_M\"]\n BLOCK_N = meta[\"BLOCK_N\"]\n SIZE_RAND_BLOCK = meta[\"SIZE_RAND_BLOCK\"]\n TRAINABLE_BIAS = meta[\"TRAINABLE_BIAS\"]\n\n rows = tl.arange(0, BLOCK_M)\n row_id = tl.program_id(axis=0)\n rows = row_id * BLOCK_M * 4 + tl.arange(0, BLOCK_M)\n\n col_id = tl.program_id(axis=1)\n cols = col_id * BLOCK_N + tl.arange(0, BLOCK_N)\n seed = SEEDS + col_id # FIXME index the seed properly\n\n # pointers starting point\n grad_out_ptrs = GRAD_OUT + rows[:, None] * stride_grad + cols[None, :]\n grad_in_ptrs = GRAD_IN + rows[:, None] * stride_grad + cols[None, :]\n input_ptrs = INPUTS + rows[:, None] * stride_inputs + cols[None, :]\n\n # random binary masks, save registers\n rand_offsets = tl.arange(0, SIZE_RAND_BLOCK) + row_id * BLOCK_M * 4\n rand_mask1, rand_mask2, rand_mask3, rand_mask4 = _get_4_bin_masks(seed, rand_offsets, p)\n\n # now go over the tiles\n grad_bias = tl.zeros((BLOCK_N,), dtype=tl.float32)\n col_mask = cols[None, :] < N\n p_scale = 1 / (1 - p) if p < 1. 
else 1.\n\n if meta[\"USE_BIAS\"]:\n b_ptrs = BIAS + cols[None, :]\n bias = tl.load(b_ptrs, mask=col_mask, other=0.)\n\n for i in range(4):\n # cycle through the binary masks (workaround / no indexing)\n if i == 0:\n rand_mask = rand_mask1\n elif i == 1:\n rand_mask = rand_mask2\n elif i == 2:\n rand_mask = rand_mask3\n else:\n rand_mask = rand_mask4\n\n block_mask = (rows[:, None] < M) & col_mask\n grad_out = tl.load(grad_out_ptrs, mask=block_mask, other=0.)\n\n # optional: fused activation (while the data is in shared memory)\n if meta[\"ACTIVATION_GRAD\"]:\n inputs = tl.load(input_ptrs, mask=block_mask, other=0.)\n\n # optionally apply a fused bias\n if meta[\"USE_BIAS\"]:\n inputs += bias\n\n act_grad = meta[\"ACTIVATION_GRAD\"](inputs).to(grad_out.dtype)\n grad_out *= act_grad\n\n # randomly prune (and scale) the resulting buffer, possibly a no-op\n # note that even if we did not save the mask from the FW pass, it is generated\n # from the same seeds, so the same drop mask is applied here\n output = _random_prune_and_scale(grad_out, rand_mask, p, p_scale)\n\n # write-back\n tl.store(grad_in_ptrs, output, mask=block_mask)\n\n # optionally accumulate the bias gradient\n if TRAINABLE_BIAS:\n grad_bias += tl.sum(output, axis=0)\n\n # Update the pointers\n rows += BLOCK_M # needs to be updated for the mask to be correct\n grad_out_ptrs += BLOCK_M * stride_grad\n input_ptrs += BLOCK_M * stride_inputs\n grad_in_ptrs += BLOCK_M * stride_grad\n\n if TRAINABLE_BIAS:\n grad_bias_ptr = GRAD_BIAS + row_id * N + cols\n tl.store(grad_bias_ptr, grad_bias, mask=cols < N)\n", "path": "xformers/triton/k_dropout.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.\n#\n# This source code is licensed under the BSD license found in the\n# LICENSE file in the root directory of this source tree.\n\n\n# CREDITS: This comes almost as-is from the Triton dropout tutorial\n# https://raw.githubusercontent.com/openai/triton/master/python/tutorials/04-low-memory-dropout.py\n\nimport triton\nimport triton.language as tl\n\n_configs = [\n triton.Config({}, num_warps=1),\n triton.Config({}, num_warps=2),\n triton.Config({}, num_warps=4),\n triton.Config({}, num_warps=8),\n triton.Config({}, num_warps=16),\n]\n\n\[email protected]\ndef _get_4_bin_masks(seed, rand_offsets, p):\n seed = tl.load(seed)\n rand1, rand2, rand3, rand4 = tl.randint4x(seed, rand_offsets)\n\n # binarize masks, save registers\n # NOTE: We keep the random numbers as is there (integers over int32),\n # and convert the threshold instead, for speed\n\n # The initial distribution is -2**31 / 2**31 -1\n # and our float threshold is in between [0, 1]\n # The full computation is: `start_point + full range * p`\n threshold = (-2147483648.0 + 4294967295.0 * p).to(tl.int32)\n rand_mask1 = rand1 > threshold\n rand_mask2 = rand2 > threshold\n rand_mask3 = rand3 > threshold\n rand_mask4 = rand4 > threshold\n\n return rand_mask1, rand_mask2, rand_mask3, rand_mask4\n\n\[email protected]\ndef _random_prune_and_scale(x, rand_mask, p, p_scale):\n zero = 0.0\n\n if p > 0.0:\n # generate all the random numbers for the block at once, then reshape\n keep = tl.reshape(rand_mask, x.shape)\n\n # prune and normalize in one go\n x = tl.where(keep, (x * p_scale).to(x.dtype), zero.to(x.dtype))\n\n return x\n\n\n# fmt: off\[email protected]({\"SIZE_RAND_BLOCK\": lambda *_, **meta: meta[\"BLOCK_N\"] * meta[\"BLOCK_M\"]})\[email protected](\n configs=_configs,\n key=[\"M\", \"N\", \"is_fp16\"],\n)\[email protected]\ndef 
k_dropout_fw(\n Y, X, BIAS, SEEDS,\n stride,\n M, N,\n p,\n is_fp16, # autotune\n **meta,\n):\n \"\"\"\n Apply dropout on an input tensor\n Y : Output (M, N)\n X : Input (M, N)\n BIAS (N,)\n SEEDS (M,)\n p : dropout probability\n \"\"\"\n # fmt: on\n\n BLOCK_M = meta[\"BLOCK_M\"]\n BLOCK_N = meta[\"BLOCK_N\"]\n SIZE_RAND_BLOCK = meta[\"SIZE_RAND_BLOCK\"]\n\n row_id = tl.program_id(axis=0)\n rows = row_id * BLOCK_M * 4 + tl.arange(0, BLOCK_M)\n\n col_id = tl.program_id(axis=1)\n cols = col_id * BLOCK_N + tl.arange(0, BLOCK_N)\n seed = SEEDS + col_id\n\n # pointers starting point\n x_ptrs = X + rows[:, None] * stride + cols[None, :]\n y_ptrs = Y + rows[:, None] * stride + cols[None, :]\n\n # go over all the tiles, one by one\n rand_offsets = tl.arange(0, SIZE_RAND_BLOCK) + row_id * BLOCK_M * 4\n rand_mask1, rand_mask2, rand_mask3, rand_mask4 = _get_4_bin_masks(seed, rand_offsets, p)\n\n col_mask = cols[None, :] < N\n p_scale = 1 / (1 - p) if p < 1. else 1.\n\n if meta[\"USE_BIAS\"]:\n b_ptrs = BIAS + cols[None, :]\n bias = tl.load(b_ptrs, mask=cols[None, :] < N, other=0.)\n\n for i in range(4):\n # cycle through the binary masks (workaround / no indexing)\n if i == 0:\n rand_mask = rand_mask1\n elif i == 1:\n rand_mask = rand_mask2\n elif i == 2:\n rand_mask = rand_mask3\n else:\n rand_mask = rand_mask4\n\n block_mask = (rows[:, None] < M) & col_mask\n x = tl.load(x_ptrs, mask=block_mask, other=0.)\n\n # optionally apply a fused bias\n if meta[\"USE_BIAS\"]:\n x += bias\n\n # optional: fused activation (while the data is in shared memory)\n if meta[\"ACTIVATION\"]:\n x = meta[\"ACTIVATION\"](x)\n\n # randomly prune (and scale) the resulting buffer, possibly a no-op\n output = _random_prune_and_scale(x, rand_mask, p, p_scale)\n\n tl.store(y_ptrs, output, mask=block_mask)\n\n # Update the pointers\n rows += BLOCK_M # needs to be updated for the mask to be correct\n x_ptrs += BLOCK_M * stride\n y_ptrs += BLOCK_M * stride\n\n\n# fmt: off\[email protected]({\"SIZE_RAND_BLOCK\": lambda *_, **meta: meta[\"BLOCK_N\"] * meta[\"BLOCK_M\"]})\[email protected](\n configs=_configs,\n key=[\"M\", \"N\", \"is_fp16\"],\n)\[email protected]\ndef k_dropout_bw(\n GRAD_IN, GRAD_BIAS, GRAD_OUT,\n INPUTS, BIAS, SEEDS,\n stride_grad, stride_inputs,\n M, N,\n p,\n is_fp16, # autotune\n **meta,\n):\n \"\"\"\n Apply dropout on an input tensor\n GRAD_OUT (M, N)\n GRAD_BIAS (N,)\n GRAD_IN (M, N)\n BIAS (N,)\n SEEDS (N,)\n p : dropout probability\n \"\"\"\n # fmt: on\n\n BLOCK_M = meta[\"BLOCK_M\"]\n BLOCK_N = meta[\"BLOCK_N\"]\n SIZE_RAND_BLOCK = meta[\"SIZE_RAND_BLOCK\"]\n TRAINABLE_BIAS = meta[\"TRAINABLE_BIAS\"]\n\n rows = tl.arange(0, BLOCK_M)\n row_id = tl.program_id(axis=0)\n rows = row_id * BLOCK_M * 4 + tl.arange(0, BLOCK_M)\n\n col_id = tl.program_id(axis=1)\n cols = col_id * BLOCK_N + tl.arange(0, BLOCK_N)\n seed = SEEDS + col_id # FIXME index the seed properly\n\n # pointers starting point\n grad_out_ptrs = GRAD_OUT + rows[:, None] * stride_grad + cols[None, :]\n grad_in_ptrs = GRAD_IN + rows[:, None] * stride_grad + cols[None, :]\n input_ptrs = INPUTS + rows[:, None] * stride_inputs + cols[None, :]\n\n # random binary masks, save registers\n rand_offsets = tl.arange(0, SIZE_RAND_BLOCK) + row_id * BLOCK_M * 4\n rand_mask1, rand_mask2, rand_mask3, rand_mask4 = _get_4_bin_masks(seed, rand_offsets, p)\n\n # now go over the tiles\n grad_bias = tl.zeros((BLOCK_N,), dtype=tl.float32)\n col_mask = cols[None, :] < N\n p_scale = 1 / (1 - p) if p < 1. 
else 1.\n\n if meta[\"USE_BIAS\"]:\n b_ptrs = BIAS + cols[None, :]\n bias = tl.load(b_ptrs, mask=col_mask, other=0.)\n\n for i in range(4):\n # cycle through the binary masks (workaround / no indexing)\n if i == 0:\n rand_mask = rand_mask1\n elif i == 1:\n rand_mask = rand_mask2\n elif i == 2:\n rand_mask = rand_mask3\n else:\n rand_mask = rand_mask4\n\n block_mask = (rows[:, None] < M) & col_mask\n grad_out = tl.load(grad_out_ptrs, mask=block_mask, other=0.)\n\n # optional: fused activation (while the data is in shared memory)\n if meta[\"ACTIVATION_GRAD\"]:\n inputs = tl.load(input_ptrs, mask=block_mask, other=0.)\n\n # optionally apply a fused bias\n if meta[\"USE_BIAS\"]:\n inputs += bias\n\n act_grad = meta[\"ACTIVATION_GRAD\"](inputs).to(grad_out.dtype)\n grad_out *= act_grad\n\n # randomly prune (and scale) the resulting buffer, possibly a no-op\n # note that even if we did not save the mask from the FW pass, it is generated\n # from the same seeds, so the same drop mask is applied here\n output = _random_prune_and_scale(grad_out, rand_mask, p, p_scale)\n\n # write-back\n tl.store(grad_in_ptrs, output, mask=block_mask)\n\n # optionally accumulate the bias gradient\n if TRAINABLE_BIAS:\n grad_bias += tl.sum(output, axis=0)\n\n # Update the pointers\n rows += BLOCK_M # needs to be updated for the mask to be correct\n grad_out_ptrs += BLOCK_M * stride_grad\n input_ptrs += BLOCK_M * stride_inputs\n grad_in_ptrs += BLOCK_M * stride_grad\n\n if TRAINABLE_BIAS:\n grad_bias_ptr = GRAD_BIAS + row_id * N + cols\n tl.store(grad_bias_ptr, grad_bias, mask=cols < N)\n", "path": "xformers/triton/k_dropout.py"}]} |
gh_patches_debug_1192 | rasdani/github-patches | git_diff | googleapis__google-cloud-python-6134 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
PubSub protobuf dependency requirements incorrect
I think the minimum version for the `protobuf` package dependency is not correct. google-cloud-python has the version requirements as protobuf>=3.0.0, but it fails to import when using version up to and including protobuf==3.3.0. I'm not sure what the exact correct version is, but the last version of google-cloud-pubsub to work with protobuf==3.3.0 is google-cloud-pubsub==0.35.4. I believe after this commit (https://github.com/GoogleCloudPlatform/google-cloud-python/commit/371333a51165e99d4d02876b1ef133618485b6fc#diff-29280288794caf553b0b008084a0e854), a protobuf version >3.3.0 is required:
Python version
```
$ python --version
Python 2.7.15rc1
```
Package versions:
```
$ pip list | grep -E '(cloud|protobuf)'
google-cloud-core 0.28.1
google-cloud-datastore 1.7.0
google-cloud-pubsub 0.38.0
google-cloud-storage 1.12.0
protobuf 3.3.0
```
Getting a stack track just importing pubsub (in ipython here)
```
In [1]: from google.cloud import pubsub
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-1-8fba37b708ad> in <module>()
----> 1 from google.cloud import pubsub
/home/aaronpeterson/.local/share/virtualenvs/turbinia-docs-oOHvuNoj/local/lib/python2.7/site-packages/google/cloud/pubsub.py in <module>()
17 from __future__ import absolute_import
18
---> 19 from google.cloud.pubsub_v1 import PublisherClient
20 from google.cloud.pubsub_v1 import SubscriberClient
21 from google.cloud.pubsub_v1 import types
/home/aaronpeterson/.local/share/virtualenvs/turbinia-docs-oOHvuNoj/local/lib/python2.7/site-packages/google/cloud/pubsub_v1/__init__.py in <module>()
15 from __future__ import absolute_import
16
---> 17 from google.cloud.pubsub_v1 import types
18 from google.cloud.pubsub_v1 import publisher
19 from google.cloud.pubsub_v1 import subscriber
/home/aaronpeterson/.local/share/virtualenvs/turbinia-docs-oOHvuNoj/local/lib/python2.7/site-packages/google/cloud/pubsub_v1/types.py in <module>()
28
29 from google.api_core.protobuf_helpers import get_messages
---> 30 from google.cloud.pubsub_v1.proto import pubsub_pb2
31
32
/home/aaronpeterson/.local/share/virtualenvs/turbinia-docs-oOHvuNoj/local/lib/python2.7/site-packages/google/cloud/pubsub_v1/proto/pubsub_pb2.py in <module>()
45 message_type=None, enum_type=None, containing_type=None,
46 is_extension=False, extension_scope=None,
---> 47 options=None, file=DESCRIPTOR),
48 ],
49 extensions=[
TypeError: __new__() got an unexpected keyword argument 'file'
```
Snipped the pubsub section from pipdeptree output showing the protobuf requirement is >=3.0.0:
```
- google-cloud-pubsub [required: Any, installed: 0.38.0]
- enum34 [required: Any, installed: 1.1.6]
- google-api-core [required: >=1.1.0,<2.0.0dev, installed: 1.4.0]
- futures [required: >=3.2.0, installed: 3.2.0]
- google-auth [required: >=0.4.0,<2.0.0dev, installed: 1.5.1]
- cachetools [required: >=2.0.0, installed: 2.1.0]
- pyasn1-modules [required: >=0.2.1, installed: 0.2.2]
- pyasn1 [required: >=0.4.1,<0.5.0, installed: 0.4.4]
- rsa [required: >=3.1.4, installed: 4.0]
- pyasn1 [required: >=0.1.3, installed: 0.4.4]
- six [required: >=1.9.0, installed: 1.11.0]
- googleapis-common-protos [required: >=1.5.3,<2.0dev, installed: 1.5.3]
- protobuf [required: >=3.0.0, installed: 3.3.0]
- setuptools [required: Any, installed: 40.4.3]
- six [required: >=1.9, installed: 1.11.0]
- protobuf [required: >=3.0.0, installed: 3.3.0]
- setuptools [required: Any, installed: 40.4.3]
- six [required: >=1.9, installed: 1.11.0]
```
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pubsub/setup.py`
Content:
```
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import io
16 import os
17
18 import setuptools
19
20
21 # Package metadata.
22
23 name = 'google-cloud-pubsub'
24 description = 'Google Cloud Pub/Sub API client library'
25 version = '0.38.0'
26 # Should be one of:
27 # 'Development Status :: 3 - Alpha'
28 # 'Development Status :: 4 - Beta'
29 # 'Development Status :: 5 - Production/Stable'
30 release_status = 'Development Status :: 4 - Beta'
31 dependencies = [
32 'google-api-core[grpc] >= 1.1.0, < 2.0.0dev',
33 'grpc-google-iam-v1 >= 0.11.1, < 0.12dev',
34 'enum34; python_version < "3.4"',
35 ]
36 extras = {
37 }
38
39
40 # Setup boilerplate below this line.
41
42 package_root = os.path.abspath(os.path.dirname(__file__))
43
44 readme_filename = os.path.join(package_root, 'README.rst')
45 with io.open(readme_filename, encoding='utf-8') as readme_file:
46 readme = readme_file.read()
47
48 # Only include packages under the 'google' namespace. Do not include tests,
49 # benchmarks, etc.
50 packages = [
51 package for package in setuptools.find_packages()
52 if package.startswith('google')]
53
54 # Determine which namespaces are needed.
55 namespaces = ['google']
56 if 'google.cloud' in packages:
57 namespaces.append('google.cloud')
58
59
60 setuptools.setup(
61 name=name,
62 version=version,
63 description=description,
64 long_description=readme,
65 author='Google LLC',
66 author_email='[email protected]',
67 license='Apache 2.0',
68 url='https://github.com/GoogleCloudPlatform/google-cloud-python',
69 classifiers=[
70 release_status,
71 'Intended Audience :: Developers',
72 'License :: OSI Approved :: Apache Software License',
73 'Programming Language :: Python',
74 'Programming Language :: Python :: 2',
75 'Programming Language :: Python :: 2.7',
76 'Programming Language :: Python :: 3',
77 'Programming Language :: Python :: 3.4',
78 'Programming Language :: Python :: 3.5',
79 'Programming Language :: Python :: 3.6',
80 'Operating System :: OS Independent',
81 'Topic :: Internet',
82 ],
83 platforms='Posix; MacOS X; Windows',
84 packages=packages,
85 namespace_packages=namespaces,
86 install_requires=dependencies,
87 extras_require=extras,
88 include_package_data=True,
89 zip_safe=False,
90 )
91
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pubsub/setup.py b/pubsub/setup.py
--- a/pubsub/setup.py
+++ b/pubsub/setup.py
@@ -29,7 +29,7 @@
# 'Development Status :: 5 - Production/Stable'
release_status = 'Development Status :: 4 - Beta'
dependencies = [
- 'google-api-core[grpc] >= 1.1.0, < 2.0.0dev',
+ 'google-api-core[grpc] >= 1.4.1, < 2.0.0dev',
'grpc-google-iam-v1 >= 0.11.1, < 0.12dev',
'enum34; python_version < "3.4"',
]
| {"golden_diff": "diff --git a/pubsub/setup.py b/pubsub/setup.py\n--- a/pubsub/setup.py\n+++ b/pubsub/setup.py\n@@ -29,7 +29,7 @@\n # 'Development Status :: 5 - Production/Stable'\n release_status = 'Development Status :: 4 - Beta'\n dependencies = [\n- 'google-api-core[grpc] >= 1.1.0, < 2.0.0dev',\n+ 'google-api-core[grpc] >= 1.4.1, < 2.0.0dev',\n 'grpc-google-iam-v1 >= 0.11.1, < 0.12dev',\n 'enum34; python_version < \"3.4\"',\n ]\n", "issue": "PubSub protobuf dependency requirements incorrect\nI think the minimum version for the `protobuf` package dependency is not correct. google-cloud-python has the version requirements as protobuf>=3.0.0, but it fails to import when using version up to and including protobuf==3.3.0. I'm not sure what the exact correct version is, but the last version of google-cloud-pubsub to work with protobuf==3.3.0 is google-cloud-pubsub==0.35.4. I believe after this commit (https://github.com/GoogleCloudPlatform/google-cloud-python/commit/371333a51165e99d4d02876b1ef133618485b6fc#diff-29280288794caf553b0b008084a0e854), a protobuf version >3.3.0 is required:\r\n\r\nPython version\r\n```\r\n$ python --version\r\nPython 2.7.15rc1\r\n```\r\n\r\nPackage versions:\r\n```\r\n$ pip list | grep -E '(cloud|protobuf)'\r\ngoogle-cloud-core 0.28.1 \r\ngoogle-cloud-datastore 1.7.0 \r\ngoogle-cloud-pubsub 0.38.0 \r\ngoogle-cloud-storage 1.12.0 \r\nprotobuf 3.3.0 \r\n```\r\n\r\nGetting a stack track just importing pubsub (in ipython here)\r\n```\r\nIn [1]: from google.cloud import pubsub\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-1-8fba37b708ad> in <module>()\r\n----> 1 from google.cloud import pubsub\r\n\r\n/home/aaronpeterson/.local/share/virtualenvs/turbinia-docs-oOHvuNoj/local/lib/python2.7/site-packages/google/cloud/pubsub.py in <module>()\r\n 17 from __future__ import absolute_import\r\n 18 \r\n---> 19 from google.cloud.pubsub_v1 import PublisherClient\r\n 20 from google.cloud.pubsub_v1 import SubscriberClient\r\n 21 from google.cloud.pubsub_v1 import types\r\n\r\n/home/aaronpeterson/.local/share/virtualenvs/turbinia-docs-oOHvuNoj/local/lib/python2.7/site-packages/google/cloud/pubsub_v1/__init__.py in <module>()\r\n 15 from __future__ import absolute_import\r\n 16 \r\n---> 17 from google.cloud.pubsub_v1 import types\r\n 18 from google.cloud.pubsub_v1 import publisher\r\n 19 from google.cloud.pubsub_v1 import subscriber\r\n\r\n/home/aaronpeterson/.local/share/virtualenvs/turbinia-docs-oOHvuNoj/local/lib/python2.7/site-packages/google/cloud/pubsub_v1/types.py in <module>()\r\n 28 \r\n 29 from google.api_core.protobuf_helpers import get_messages\r\n---> 30 from google.cloud.pubsub_v1.proto import pubsub_pb2\r\n 31 \r\n 32 \r\n\r\n/home/aaronpeterson/.local/share/virtualenvs/turbinia-docs-oOHvuNoj/local/lib/python2.7/site-packages/google/cloud/pubsub_v1/proto/pubsub_pb2.py in <module>()\r\n 45 message_type=None, enum_type=None, containing_type=None,\r\n 46 is_extension=False, extension_scope=None,\r\n---> 47 options=None, file=DESCRIPTOR),\r\n 48 ],\r\n 49 extensions=[\r\n\r\nTypeError: __new__() got an unexpected keyword argument 'file'\r\n```\r\n\r\nSnipped the pubsub section from pipdeptree output showing the protobuf requirement is >=3.0.0:\r\n```\r\n - google-cloud-pubsub [required: Any, installed: 0.38.0]\r\n - enum34 [required: Any, installed: 1.1.6]\r\n - google-api-core [required: >=1.1.0,<2.0.0dev, installed: 1.4.0]\r\n - futures [required: >=3.2.0, installed: 
3.2.0]\r\n - google-auth [required: >=0.4.0,<2.0.0dev, installed: 1.5.1]\r\n - cachetools [required: >=2.0.0, installed: 2.1.0]\r\n - pyasn1-modules [required: >=0.2.1, installed: 0.2.2]\r\n - pyasn1 [required: >=0.4.1,<0.5.0, installed: 0.4.4]\r\n - rsa [required: >=3.1.4, installed: 4.0]\r\n - pyasn1 [required: >=0.1.3, installed: 0.4.4]\r\n - six [required: >=1.9.0, installed: 1.11.0]\r\n - googleapis-common-protos [required: >=1.5.3,<2.0dev, installed: 1.5.3]\r\n - protobuf [required: >=3.0.0, installed: 3.3.0]\r\n - setuptools [required: Any, installed: 40.4.3]\r\n - six [required: >=1.9, installed: 1.11.0]\r\n - protobuf [required: >=3.0.0, installed: 3.3.0]\r\n - setuptools [required: Any, installed: 40.4.3]\r\n - six [required: >=1.9, installed: 1.11.0]\r\n```\n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\n\nimport setuptools\n\n\n# Package metadata.\n\nname = 'google-cloud-pubsub'\ndescription = 'Google Cloud Pub/Sub API client library'\nversion = '0.38.0'\n# Should be one of:\n# 'Development Status :: 3 - Alpha'\n# 'Development Status :: 4 - Beta'\n# 'Development Status :: 5 - Production/Stable'\nrelease_status = 'Development Status :: 4 - Beta'\ndependencies = [\n 'google-api-core[grpc] >= 1.1.0, < 2.0.0dev',\n 'grpc-google-iam-v1 >= 0.11.1, < 0.12dev',\n 'enum34; python_version < \"3.4\"',\n]\nextras = {\n}\n\n\n# Setup boilerplate below this line.\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, 'README.rst')\nwith io.open(readme_filename, encoding='utf-8') as readme_file:\n readme = readme_file.read()\n\n# Only include packages under the 'google' namespace. 
Do not include tests,\n# benchmarks, etc.\npackages = [\n package for package in setuptools.find_packages()\n if package.startswith('google')]\n\n# Determine which namespaces are needed.\nnamespaces = ['google']\nif 'google.cloud' in packages:\n namespaces.append('google.cloud')\n\n\nsetuptools.setup(\n name=name,\n version=version,\n description=description,\n long_description=readme,\n author='Google LLC',\n author_email='[email protected]',\n license='Apache 2.0',\n url='https://github.com/GoogleCloudPlatform/google-cloud-python',\n classifiers=[\n release_status,\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Operating System :: OS Independent',\n 'Topic :: Internet',\n ],\n platforms='Posix; MacOS X; Windows',\n packages=packages,\n namespace_packages=namespaces,\n install_requires=dependencies,\n extras_require=extras,\n include_package_data=True,\n zip_safe=False,\n)\n", "path": "pubsub/setup.py"}], "after_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\n\nimport setuptools\n\n\n# Package metadata.\n\nname = 'google-cloud-pubsub'\ndescription = 'Google Cloud Pub/Sub API client library'\nversion = '0.38.0'\n# Should be one of:\n# 'Development Status :: 3 - Alpha'\n# 'Development Status :: 4 - Beta'\n# 'Development Status :: 5 - Production/Stable'\nrelease_status = 'Development Status :: 4 - Beta'\ndependencies = [\n 'google-api-core[grpc] >= 1.4.1, < 2.0.0dev',\n 'grpc-google-iam-v1 >= 0.11.1, < 0.12dev',\n 'enum34; python_version < \"3.4\"',\n]\nextras = {\n}\n\n\n# Setup boilerplate below this line.\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, 'README.rst')\nwith io.open(readme_filename, encoding='utf-8') as readme_file:\n readme = readme_file.read()\n\n# Only include packages under the 'google' namespace. 
Do not include tests,\n# benchmarks, etc.\npackages = [\n package for package in setuptools.find_packages()\n if package.startswith('google')]\n\n# Determine which namespaces are needed.\nnamespaces = ['google']\nif 'google.cloud' in packages:\n namespaces.append('google.cloud')\n\n\nsetuptools.setup(\n name=name,\n version=version,\n description=description,\n long_description=readme,\n author='Google LLC',\n author_email='[email protected]',\n license='Apache 2.0',\n url='https://github.com/GoogleCloudPlatform/google-cloud-python',\n classifiers=[\n release_status,\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Operating System :: OS Independent',\n 'Topic :: Internet',\n ],\n platforms='Posix; MacOS X; Windows',\n packages=packages,\n namespace_packages=namespaces,\n install_requires=dependencies,\n extras_require=extras,\n include_package_data=True,\n zip_safe=False,\n)\n", "path": "pubsub/setup.py"}]} |
gh_patches_debug_1193 | rasdani/github-patches | git_diff | Miserlou__Zappa-918 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bug using slim_handler when project name begins with an 's'
<!--- Provide a general summary of the issue in the Title above -->
## Context
<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->
When a project name starts with a lowercase 's' and the project uses Zappa's slim_handler, Zappa tries to fetch a non-existent file from S3
<!--- Also, please make sure that you are running Zappa _from a virtual environment_ and are using Python 2.7 -->
## Expected Behavior
<!--- Tell us what should happen -->
Zappa's slim_handler should download zip files that begin with 's' (ex: saffron_current_project.zip)
## Actual Behavior
<!--- Tell us what happens instead -->
Zappa strips the leading 's' and attempts to pull 'affron_current_project.zip' from S3
## Possible Fix
<!--- Not obligatory, but suggest a fix or reason for the bug -->
At handler.py:161, instead of using lstrip (which will strip based on the individual characters within 's3://') we could try using a regular expression
`remote_bucket, remote_file = re.sub('^s3://', '', project_zip_path).split('/', 1)`
## Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug include code to reproduce, if relevant -->
1. Enable slim_handler
2. Make first character in project name a lowercase 's'
3. Attempt to load the remote zip by calling the lambda function
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used: 0.42.0
* Operating System and Python version: Linux/Python3.6
* The output of `pip freeze`:
appdirs==1.4.3
argcomplete==1.8.2
awscli==1.11.91
base58==0.2.4
boto==2.47.0
boto3==1.4.4
botocore==1.5.40
click==6.7
colorama==0.3.7
defusedxml==0.5.0
Django==1.11.1
django-allauth==0.32.0
django-filter==1.0.4
django-redis==4.8.0
django-rest-auth==0.9.1
django-storages==1.5.2
djangorestframework==3.6.3
docutils==0.13.1
durationpy==0.4
factory-boy==2.8.1
Faker==0.7.12
future==0.16.0
futures==3.1.1
hjson==2.0.2
jmespath==0.9.2
kappa==0.6.0
lambda-packages==0.15.0
Markdown==2.6.8
oauthlib==2.0.2
packaging==16.8
placebo==0.8.1
psycopg2==2.7.1
pyasn1==0.2.3
pyparsing==2.2.0
python-dateutil==2.6.0
python-slugify==1.2.4
python3-openid==3.1.0
pytz==2017.2
PyYAML==3.12
redis==2.10.5
requests==2.14.2
requests-oauthlib==0.8.0
rsa==3.4.2
s3transfer==0.1.10
six==1.10.0
toml==0.9.2
tqdm==4.11.2
troposphere==1.9.3
Unidecode==0.4.20
Werkzeug==0.12
wsgi-request-logger==0.4.6
zappa==0.42.0
* Link to your project (optional):
* Your `zappa_settings.py`:
{
"production": {
"slim_handler": true,
"exclude": ["*.gz", "*.rar", "deploy*", "lib64*"],
"django_settings": "saffron.settings",
"s3_bucket": "mybucket",
"aws_region": "us-east-1",
"project_name": "saffron",
"debug": "true",
"runtime": "python3.6",
}
}
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `zappa/handler.py`
Content:
```
1 from __future__ import unicode_literals
2
3 import base64
4 import boto3
5 import collections
6 import datetime
7 import importlib
8 import inspect
9 import json
10 import logging
11 import os
12 import sys
13 import traceback
14 import zipfile
15
16 from builtins import str
17 from werkzeug.wrappers import Response
18
19 # This file may be copied into a project's root,
20 # so handle both scenarios.
21 try:
22 from zappa.middleware import ZappaWSGIMiddleware
23 from zappa.wsgi import create_wsgi_request, common_log
24 from zappa.utilities import parse_s3_url
25 except ImportError as e: # pragma: no cover
26 from .middleware import ZappaWSGIMiddleware
27 from .wsgi import create_wsgi_request, common_log
28 from .utilities import parse_s3_url
29
30
31 # Set up logging
32 logging.basicConfig()
33 logger = logging.getLogger()
34 logger.setLevel(logging.INFO)
35
36
37 class LambdaHandler(object):
38 """
39 Singleton for avoiding duplicate setup.
40
41 Pattern provided by @benbangert.
42 """
43
44 __instance = None
45 settings = None
46 settings_name = None
47 session = None
48
49 # Application
50 app_module = None
51 wsgi_app = None
52 trailing_slash = False
53
54 def __new__(cls, settings_name="zappa_settings", session=None):
55 """Singleton instance to avoid repeat setup"""
56 if LambdaHandler.__instance is None:
57 if sys.version_info[0] < 3:
58 LambdaHandler.__instance = object.__new__(cls, settings_name, session)
59 else:
60 print("Instancing..")
61 LambdaHandler.__instance = object.__new__(cls)
62 return LambdaHandler.__instance
63
64 def __init__(self, settings_name="zappa_settings", session=None):
65
66 # We haven't cached our settings yet, load the settings and app.
67 if not self.settings:
68 # Loading settings from a python module
69 self.settings = importlib.import_module(settings_name)
70 self.settings_name = settings_name
71 self.session = session
72
73 # Custom log level
74 if self.settings.LOG_LEVEL:
75 level = logging.getLevelName(self.settings.LOG_LEVEL)
76 logger.setLevel(level)
77
78 remote_env = getattr(self.settings, 'REMOTE_ENV', None)
79 remote_bucket, remote_file = parse_s3_url(remote_env)
80
81 if remote_bucket and remote_file:
82 self.load_remote_settings(remote_bucket, remote_file)
83
84 # Let the system know that this will be a Lambda/Zappa/Stack
85 os.environ["SERVERTYPE"] = "AWS Lambda"
86 os.environ["FRAMEWORK"] = "Zappa"
87 try:
88 os.environ["PROJECT"] = self.settings.PROJECT_NAME
89 os.environ["STAGE"] = self.settings.API_STAGE
90 except Exception: # pragma: no cover
91 pass
92
93 # Set any locally defined env vars
94 # Environement variable keys can't be Unicode
95 # https://github.com/Miserlou/Zappa/issues/604
96 for key in self.settings.ENVIRONMENT_VARIABLES.keys():
97 os.environ[str(key)] = self.settings.ENVIRONMENT_VARIABLES[key]
98
99 # Pulling from S3 if given a zip path
100 project_zip_path = getattr(self.settings, 'ZIP_PATH', None)
101 if project_zip_path:
102 self.load_remote_project_zip(project_zip_path)
103
104
105 # Load compliled library to the PythonPath
106 # checks if we are the slim_handler since this is not needed otherwise
107 # https://github.com/Miserlou/Zappa/issues/776
108 is_slim_handler = getattr(self.settings, 'SLIM_HANDLER', False)
109 if is_slim_handler:
110 included_libraries = getattr(self.settings, 'INCLUDE', ['libmysqlclient.so.18'])
111 try:
112 from ctypes import cdll, util
113 for library in included_libraries:
114 try:
115 cdll.LoadLibrary(os.path.join(os.getcwd(), library))
116 except OSError:
117 print ("Failed to find library...right filename?")
118 except ImportError:
119 print ("Failed to import cytpes library")
120
121 # This is a non-WSGI application
122 # https://github.com/Miserlou/Zappa/pull/748
123 if not hasattr(self.settings, 'APP_MODULE') and not self.settings.DJANGO_SETTINGS:
124 self.app_module = None
125 wsgi_app_function = None
126 # This is probably a normal WSGI app
127 elif not self.settings.DJANGO_SETTINGS:
128 # The app module
129 self.app_module = importlib.import_module(self.settings.APP_MODULE)
130
131 # The application
132 wsgi_app_function = getattr(self.app_module, self.settings.APP_FUNCTION)
133 self.trailing_slash = False
134 # Django gets special treatment.
135 else:
136
137 try: # Support both for tests
138 from zappa.ext.django_zappa import get_django_wsgi
139 except ImportError: # pragma: no cover
140 from django_zappa_app import get_django_wsgi
141
142 # Get the Django WSGI app from our extension
143 wsgi_app_function = get_django_wsgi(self.settings.DJANGO_SETTINGS)
144 self.trailing_slash = True
145
146 self.wsgi_app = ZappaWSGIMiddleware(wsgi_app_function)
147
148 def load_remote_project_zip(self, project_zip_path):
149 """
150 Puts the project files from S3 in /tmp and adds to path
151 """
152 project_folder = '/tmp/{0!s}'.format(self.settings.PROJECT_NAME)
153 if not os.path.isdir(project_folder):
154 # The project folder doesn't exist in this cold lambda, get it from S3
155 if not self.session:
156 boto_session = boto3.Session()
157 else:
158 boto_session = self.session
159
160 # Download the zip
161 remote_bucket, remote_file = project_zip_path.lstrip('s3://').split('/', 1)
162 s3 = boto_session.resource('s3')
163
164 zip_path = '/tmp/{0!s}'.format(remote_file)
165 s3.Object(remote_bucket, remote_file).download_file(zip_path)
166
167 # Unzip contents to project folder
168 with zipfile.ZipFile(zip_path, 'r') as z:
169 z.extractall(path=project_folder)
170
171 # Add to project path
172 sys.path.insert(0, project_folder)
173
174 # Change working directory to project folder
175 # Related: https://github.com/Miserlou/Zappa/issues/702
176 os.chdir(project_folder)
177 return True
178
179 def load_remote_settings(self, remote_bucket, remote_file):
180 """
181 Attempt to read a file from s3 containing a flat json object. Adds each
182 key->value pair as environment variables. Helpful for keeping
183 sensitiZve or stage-specific configuration variables in s3 instead of
184 version control.
185 """
186 if not self.session:
187 boto_session = boto3.Session()
188 else:
189 boto_session = self.session
190
191 s3 = boto_session.resource('s3')
192 try:
193 remote_env_object = s3.Object(remote_bucket, remote_file).get()
194 except Exception as e: # pragma: no cover
195 # catch everything aws might decide to raise
196 print('Could not load remote settings file.', e)
197 return
198
199 try:
200 content = remote_env_object['Body'].read()
201 except Exception as e: # pragma: no cover
202 # catch everything aws might decide to raise
203 print('Exception while reading remote settings file.', e)
204 return
205
206 try:
207 settings_dict = json.loads(content)
208 except (ValueError, TypeError): # pragma: no cover
209 print('Failed to parse remote settings!')
210 return
211
212 # add each key-value to environment - overwrites existing keys!
213 for key, value in settings_dict.items():
214 if self.settings.LOG_LEVEL == "DEBUG":
215 print('Adding {} -> {} to environment'.format(
216 key,
217 value
218 ))
219 # Environement variable keys can't be Unicode
220 # https://github.com/Miserlou/Zappa/issues/604
221 try:
222 os.environ[str(key)] = value
223 except Exception:
224 if self.settings.LOG_LEVEL == "DEBUG":
225 print("Environment variable keys must be non-unicode!")
226
227 @staticmethod
228 def import_module_and_get_function(whole_function):
229 """
230 Given a modular path to a function, import that module
231 and return the function.
232 """
233 module, function = whole_function.rsplit('.', 1)
234 app_module = importlib.import_module(module)
235 app_function = getattr(app_module, function)
236 return app_function
237
238 @classmethod
239 def lambda_handler(cls, event, context): # pragma: no cover
240 handler = cls()
241 exception_handler = handler.settings.EXCEPTION_HANDLER
242 try:
243 return handler.handler(event, context)
244 except Exception as ex:
245 exception_processed = cls._process_exception(exception_handler=exception_handler,
246 event=event, context=context, exception=ex)
247 if not exception_processed:
248 # Only re-raise exception if handler directed so. Allows handler to control if lambda has to retry
249 # an event execution in case of failure.
250 raise
251
252 @classmethod
253 def _process_exception(cls, exception_handler, event, context, exception):
254 exception_processed = False
255 if exception_handler:
256 try:
257 handler_function = cls.import_module_and_get_function(exception_handler)
258 exception_processed = handler_function(exception, event, context)
259 except Exception as cex:
260 logger.error(msg='Failed to process exception via custom handler.')
261 print(cex)
262 return exception_processed
263
264 @staticmethod
265 def run_function(app_function, event, context):
266 """
267 Given a function and event context,
268 detect signature and execute, returning any result.
269 """
270 args, varargs, keywords, defaults = inspect.getargspec(app_function)
271 num_args = len(args)
272 if num_args == 0:
273 result = app_function(event, context) if varargs else app_function()
274 elif num_args == 1:
275 result = app_function(event, context) if varargs else app_function(event)
276 elif num_args == 2:
277 result = app_function(event, context)
278 else:
279 raise RuntimeError("Function signature is invalid. Expected a function that accepts at most "
280 "2 arguments or varargs.")
281 return result
282
283 def get_function_for_aws_event(self, record):
284 """
285 Get the associated function to execute for a triggered AWS event
286
287 Support S3, SNS, DynamoDB and kinesis events
288 """
289 if 's3' in record:
290 return record['s3']['configurationId'].split(':')[-1]
291
292 arn = None
293 if 'Sns' in record:
294 try:
295 message = json.loads(record['Sns']['Message'])
296 if message.get('command'):
297 return message['command']
298 except ValueError:
299 pass
300 arn = record['Sns'].get('TopicArn')
301 elif 'dynamodb' in record or 'kinesis' in record:
302 arn = record.get('eventSourceARN')
303
304 if arn:
305 return self.settings.AWS_EVENT_MAPPING.get(arn)
306
307 return None
308
309 def handler(self, event, context):
310 """
311 An AWS Lambda function which parses specific API Gateway input into a
312 WSGI request, feeds it to our WSGI app, procceses the response, and returns
313 that back to the API Gateway.
314
315 """
316 settings = self.settings
317
318 # If in DEBUG mode, log all raw incoming events.
319 if settings.DEBUG:
320 logger.debug('Zappa Event: {}'.format(event))
321
322 # This is the result of a keep alive, recertify
323 # or scheduled event.
324 if event.get('detail-type') == u'Scheduled Event':
325
326 whole_function = event['resources'][0].split('/')[-1].split('-')[-1]
327
328 # This is a scheduled function.
329 if '.' in whole_function:
330 app_function = self.import_module_and_get_function(whole_function)
331
332 # Execute the function!
333 return self.run_function(app_function, event, context)
334
335 # Else, let this execute as it were.
336
337 # This is a direct command invocation.
338 elif event.get('command', None):
339
340 whole_function = event['command']
341 app_function = self.import_module_and_get_function(whole_function)
342 result = self.run_function(app_function, event, context)
343 print("Result of %s:" % whole_function)
344 print(result)
345 return result
346
347 # This is a direct, raw python invocation.
348 # It's _extremely_ important we don't allow this event source
349 # to be overriden by unsanitized, non-admin user input.
350 elif event.get('raw_command', None):
351
352 raw_command = event['raw_command']
353 exec(raw_command)
354 return
355
356 # This is a Django management command invocation.
357 elif event.get('manage', None):
358
359 from django.core import management
360
361 try: # Support both for tests
362 from zappa.ext.django_zappa import get_django_wsgi
363 except ImportError as e: # pragma: no cover
364 from django_zappa_app import get_django_wsgi
365
366 # Get the Django WSGI app from our extension
367 # We don't actually need the function,
368 # but we do need to do all of the required setup for it.
369 app_function = get_django_wsgi(self.settings.DJANGO_SETTINGS)
370
371 # Couldn't figure out how to get the value into stdout with StringIO..
372 # Read the log for now. :[]
373 management.call_command(*event['manage'].split(' '))
374 return {}
375
376 # This is an AWS-event triggered invokation.
377 elif event.get('Records', None):
378
379 records = event.get('Records')
380 result = None
381 whole_function = self.get_function_for_aws_event(records[0])
382 if whole_function:
383 app_function = self.import_module_and_get_function(whole_function)
384 result = self.run_function(app_function, event, context)
385 logger.debug(result)
386 else:
387 logger.error("Cannot find a function to process the triggered event.")
388 return result
389
390 # This is an API Gateway authorizer event
391 elif event.get('type') == u'TOKEN':
392 whole_function = self.settings.AUTHORIZER_FUNCTION
393 if whole_function:
394 app_function = self.import_module_and_get_function(whole_function)
395 policy = self.run_function(app_function, event, context)
396 return policy
397 else:
398 logger.error("Cannot find a function to process the authorization request.")
399 raise Exception('Unauthorized')
400
401 # Normal web app flow
402 try:
403 # Timing
404 time_start = datetime.datetime.now()
405
406 # This is a normal HTTP request
407 if event.get('httpMethod', None):
408
409 if settings.DOMAIN:
410 # If we're on a domain, we operate normally
411 script_name = ''
412 else:
413 # But if we're not, then our base URL
414 # will be something like
415 # https://blahblahblah.execute-api.us-east-1.amazonaws.com/dev
416 # So, we need to make sure the WSGI app knows this.
417 script_name = '/' + settings.API_STAGE
418
419 # Create the environment for WSGI and handle the request
420 environ = create_wsgi_request(
421 event,
422 script_name=script_name,
423 trailing_slash=self.trailing_slash,
424 binary_support=settings.BINARY_SUPPORT
425 )
426
427 # We are always on https on Lambda, so tell our wsgi app that.
428 environ['HTTPS'] = 'on'
429 environ['wsgi.url_scheme'] = 'https'
430 environ['lambda.context'] = context
431
432 # Execute the application
433 response = Response.from_app(self.wsgi_app, environ)
434
435 # This is the object we're going to return.
436 # Pack the WSGI response into our special dictionary.
437 zappa_returndict = dict()
438
439 if response.data:
440 if settings.BINARY_SUPPORT:
441 if not response.mimetype.startswith("text/") \
442 or response.mimetype != "application/json":
443 zappa_returndict['body'] = base64.b64encode(response.data).decode('utf-8')
444 zappa_returndict["isBase64Encoded"] = "true"
445 else:
446 zappa_returndict['body'] = response.data
447 else:
448 zappa_returndict['body'] = response.data
449
450 zappa_returndict['statusCode'] = response.status_code
451 zappa_returndict['headers'] = {}
452 for key, value in response.headers:
453 zappa_returndict['headers'][key] = value
454
455 # Calculate the total response time,
456 # and log it in the Common Log format.
457 time_end = datetime.datetime.now()
458 delta = time_end - time_start
459 response_time_ms = delta.total_seconds() * 1000
460 response.content = response.data
461 common_log(environ, response, response_time=response_time_ms)
462
463 return zappa_returndict
464 except Exception as e: # pragma: no cover
465
466 # Print statements are visible in the logs either way
467 print(e)
468 exc_info = sys.exc_info()
469 message = ('An uncaught exception happened while servicing this request. '
470 'You can investigate this with the `zappa tail` command.')
471
472 # If we didn't even build an app_module, just raise.
473 if not settings.DJANGO_SETTINGS:
474 try:
475 self.app_module
476 except NameError as ne:
477 message = 'Failed to import module: {}'.format(ne.message)
478
479 # Return this unspecified exception as a 500, using template that API Gateway expects.
480 content = collections.OrderedDict()
481 content['statusCode'] = 500
482 body = {'message': message}
483 if settings.DEBUG: # only include traceback if debug is on.
484 body['traceback'] = traceback.format_exception(*exc_info) # traceback as a list for readability.
485 content['body'] = json.dumps(str(body), sort_keys=True, indent=4)
486 return content
487
488
489 def lambda_handler(event, context): # pragma: no cover
490 return LambdaHandler.lambda_handler(event, context)
491
492
493 def keep_warm_callback(event, context):
494 """Method is triggered by the CloudWatch event scheduled when keep_warm setting is set to true."""
495 lambda_handler(event={}, context=context) # overriding event with an empty one so that web app initialization will
496 # be triggered.
497
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/zappa/handler.py b/zappa/handler.py
--- a/zappa/handler.py
+++ b/zappa/handler.py
@@ -158,7 +158,7 @@
boto_session = self.session
# Download the zip
- remote_bucket, remote_file = project_zip_path.lstrip('s3://').split('/', 1)
+ remote_bucket, remote_file = parse_s3_url(project_zip_path)
s3 = boto_session.resource('s3')
zip_path = '/tmp/{0!s}'.format(remote_file)
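The replacement works around a classic `str.lstrip` pitfall: its argument is treated as a set of characters to remove, not as a literal prefix, so any bucket or key that happens to start with `s`, `3`, `:` or `/` loses those leading characters too. A minimal standalone sketch (the bucket and key names are invented for illustration, and the prefix-slice shown is just one prefix-aware alternative, not necessarily what `parse_s3_url` does internally):

```python
# str.lstrip("s3://") strips the character set {'s', '3', ':', '/'}, not the prefix "s3://".
path = "s3://saffron-deploys/saffron_current_project.tar.gz"

broken = path.lstrip("s3://")
print(broken)  # 'affron-deploys/saffron_current_project.tar.gz' -- the bucket's leading 's' is gone

# A prefix-aware split keeps the bucket and key intact.
bucket, key = path[len("s3://"):].split("/", 1)
print(bucket)  # 'saffron-deploys'
print(key)     # 'saffron_current_project.tar.gz'
```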
| {"golden_diff": "diff --git a/zappa/handler.py b/zappa/handler.py\n--- a/zappa/handler.py\n+++ b/zappa/handler.py\n@@ -158,7 +158,7 @@\n boto_session = self.session\n \n # Download the zip\n- remote_bucket, remote_file = project_zip_path.lstrip('s3://').split('/', 1)\n+ remote_bucket, remote_file = parse_s3_url(project_zip_path)\n s3 = boto_session.resource('s3')\n \n zip_path = '/tmp/{0!s}'.format(remote_file)\n", "issue": "Bug using slim_handler when project name begins with an 's'\n<!--- Provide a general summary of the issue in the Title above -->\r\n## Context\r\n<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->\r\nWhen a project name starts with a lowercase 's' and the project uses Zappa's slim_handler, Zappa tries to fetch a non-existent file from S3\r\n<!--- Also, please make sure that you are running Zappa _from a virtual environment_ and are using Python 2.7 --> \r\n\r\n## Expected Behavior\r\n<!--- Tell us what should happen -->\r\nZappa's slim_handler should download zip files that begin with 's' (ex: saffron_current_project.zip)\r\n## Actual Behavior\r\n<!--- Tell us what happens instead -->\r\nZappa strips the leading 's' and attempts to pull 'affron_current_project.zip' from S3\r\n\r\n## Possible Fix\r\n<!--- Not obligatory, but suggest a fix or reason for the bug -->\r\nAt handler.py:161, instead of using lstrip (which will strip based on the individual characters within 's3://') we could try using a regular expression \r\n`remote_bucket, remote_file = re.sub('^s3://', '', project_zip_path).split('/', 1)`\r\n\r\n## Steps to Reproduce\r\n<!--- Provide a link to a live example, or an unambiguous set of steps to -->\r\n<!--- reproduce this bug include code to reproduce, if relevant -->\r\n1. Enable slim_handler\r\n2. Make first character in project name a lowercase 's'\r\n3. 
Attempt to load the remote zip by calling the lambda function\r\n\r\n## Your Environment\r\n<!--- Include as many relevant details about the environment you experienced the bug in -->\r\n* Zappa version used: 0.42.0\r\n* Operating System and Python version: Linux/Python3.6\r\n* The output of `pip freeze`:\r\nappdirs==1.4.3\r\nargcomplete==1.8.2\r\nawscli==1.11.91\r\nbase58==0.2.4\r\nboto==2.47.0\r\nboto3==1.4.4\r\nbotocore==1.5.40\r\nclick==6.7\r\ncolorama==0.3.7\r\ndefusedxml==0.5.0\r\nDjango==1.11.1\r\ndjango-allauth==0.32.0\r\ndjango-filter==1.0.4\r\ndjango-redis==4.8.0\r\ndjango-rest-auth==0.9.1\r\ndjango-storages==1.5.2\r\ndjangorestframework==3.6.3\r\ndocutils==0.13.1\r\ndurationpy==0.4\r\nfactory-boy==2.8.1\r\nFaker==0.7.12\r\nfuture==0.16.0\r\nfutures==3.1.1\r\nhjson==2.0.2\r\njmespath==0.9.2\r\nkappa==0.6.0\r\nlambda-packages==0.15.0\r\nMarkdown==2.6.8\r\noauthlib==2.0.2\r\npackaging==16.8\r\nplacebo==0.8.1\r\npsycopg2==2.7.1\r\npyasn1==0.2.3\r\npyparsing==2.2.0\r\npython-dateutil==2.6.0\r\npython-slugify==1.2.4\r\npython3-openid==3.1.0\r\npytz==2017.2\r\nPyYAML==3.12\r\nredis==2.10.5\r\nrequests==2.14.2\r\nrequests-oauthlib==0.8.0\r\nrsa==3.4.2\r\ns3transfer==0.1.10\r\nsix==1.10.0\r\ntoml==0.9.2\r\ntqdm==4.11.2\r\ntroposphere==1.9.3\r\nUnidecode==0.4.20\r\nWerkzeug==0.12\r\nwsgi-request-logger==0.4.6\r\nzappa==0.42.0\r\n\r\n* Link to your project (optional):\r\n* Your `zappa_settings.py`: \r\n{\r\n \"production\": {\r\n \"slim_handler\": true,\r\n \"exclude\": [\"*.gz\", \"*.rar\", \"deploy*\", \"lib64*\"],\r\n \"django_settings\": \"saffron.settings\",\r\n \"s3_bucket\": \"mybucket\",\r\n \"aws_region\": \"us-east-1\",\r\n \"project_name\": \"saffron\",\r\n \"debug\": \"true\",\r\n \"runtime\": \"python3.6\",\r\n }\r\n}\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport base64\nimport boto3\nimport collections\nimport datetime\nimport importlib\nimport inspect\nimport json\nimport logging\nimport os\nimport sys\nimport traceback\nimport zipfile\n\nfrom builtins import str\nfrom werkzeug.wrappers import Response\n\n# This file may be copied into a project's root,\n# so handle both scenarios.\ntry:\n from zappa.middleware import ZappaWSGIMiddleware\n from zappa.wsgi import create_wsgi_request, common_log\n from zappa.utilities import parse_s3_url\nexcept ImportError as e: # pragma: no cover\n from .middleware import ZappaWSGIMiddleware\n from .wsgi import create_wsgi_request, common_log\n from .utilities import parse_s3_url\n\n\n# Set up logging\nlogging.basicConfig()\nlogger = logging.getLogger()\nlogger.setLevel(logging.INFO)\n\n\nclass LambdaHandler(object):\n \"\"\"\n Singleton for avoiding duplicate setup.\n\n Pattern provided by @benbangert.\n \"\"\"\n\n __instance = None\n settings = None\n settings_name = None\n session = None\n\n # Application\n app_module = None\n wsgi_app = None\n trailing_slash = False\n\n def __new__(cls, settings_name=\"zappa_settings\", session=None):\n \"\"\"Singleton instance to avoid repeat setup\"\"\"\n if LambdaHandler.__instance is None:\n if sys.version_info[0] < 3:\n LambdaHandler.__instance = object.__new__(cls, settings_name, session)\n else:\n print(\"Instancing..\")\n LambdaHandler.__instance = object.__new__(cls)\n return LambdaHandler.__instance\n\n def __init__(self, settings_name=\"zappa_settings\", session=None):\n\n # We haven't cached our settings yet, load the settings and app.\n if not self.settings:\n # Loading settings from a python module\n self.settings = 
importlib.import_module(settings_name)\n self.settings_name = settings_name\n self.session = session\n\n # Custom log level\n if self.settings.LOG_LEVEL:\n level = logging.getLevelName(self.settings.LOG_LEVEL)\n logger.setLevel(level)\n\n remote_env = getattr(self.settings, 'REMOTE_ENV', None)\n remote_bucket, remote_file = parse_s3_url(remote_env)\n\n if remote_bucket and remote_file:\n self.load_remote_settings(remote_bucket, remote_file)\n\n # Let the system know that this will be a Lambda/Zappa/Stack\n os.environ[\"SERVERTYPE\"] = \"AWS Lambda\"\n os.environ[\"FRAMEWORK\"] = \"Zappa\"\n try:\n os.environ[\"PROJECT\"] = self.settings.PROJECT_NAME\n os.environ[\"STAGE\"] = self.settings.API_STAGE\n except Exception: # pragma: no cover\n pass\n\n # Set any locally defined env vars\n # Environement variable keys can't be Unicode\n # https://github.com/Miserlou/Zappa/issues/604\n for key in self.settings.ENVIRONMENT_VARIABLES.keys():\n os.environ[str(key)] = self.settings.ENVIRONMENT_VARIABLES[key]\n\n # Pulling from S3 if given a zip path\n project_zip_path = getattr(self.settings, 'ZIP_PATH', None)\n if project_zip_path:\n self.load_remote_project_zip(project_zip_path)\n\n\n # Load compliled library to the PythonPath\n # checks if we are the slim_handler since this is not needed otherwise\n # https://github.com/Miserlou/Zappa/issues/776\n is_slim_handler = getattr(self.settings, 'SLIM_HANDLER', False)\n if is_slim_handler:\n included_libraries = getattr(self.settings, 'INCLUDE', ['libmysqlclient.so.18'])\n try:\n from ctypes import cdll, util\n for library in included_libraries:\n try:\n cdll.LoadLibrary(os.path.join(os.getcwd(), library))\n except OSError:\n print (\"Failed to find library...right filename?\")\n except ImportError:\n print (\"Failed to import cytpes library\")\n\n # This is a non-WSGI application\n # https://github.com/Miserlou/Zappa/pull/748\n if not hasattr(self.settings, 'APP_MODULE') and not self.settings.DJANGO_SETTINGS:\n self.app_module = None\n wsgi_app_function = None\n # This is probably a normal WSGI app\n elif not self.settings.DJANGO_SETTINGS:\n # The app module\n self.app_module = importlib.import_module(self.settings.APP_MODULE)\n\n # The application\n wsgi_app_function = getattr(self.app_module, self.settings.APP_FUNCTION)\n self.trailing_slash = False\n # Django gets special treatment.\n else:\n\n try: # Support both for tests\n from zappa.ext.django_zappa import get_django_wsgi\n except ImportError: # pragma: no cover\n from django_zappa_app import get_django_wsgi\n\n # Get the Django WSGI app from our extension\n wsgi_app_function = get_django_wsgi(self.settings.DJANGO_SETTINGS)\n self.trailing_slash = True\n\n self.wsgi_app = ZappaWSGIMiddleware(wsgi_app_function)\n\n def load_remote_project_zip(self, project_zip_path):\n \"\"\"\n Puts the project files from S3 in /tmp and adds to path\n \"\"\"\n project_folder = '/tmp/{0!s}'.format(self.settings.PROJECT_NAME)\n if not os.path.isdir(project_folder):\n # The project folder doesn't exist in this cold lambda, get it from S3\n if not self.session:\n boto_session = boto3.Session()\n else:\n boto_session = self.session\n\n # Download the zip\n remote_bucket, remote_file = project_zip_path.lstrip('s3://').split('/', 1)\n s3 = boto_session.resource('s3')\n\n zip_path = '/tmp/{0!s}'.format(remote_file)\n s3.Object(remote_bucket, remote_file).download_file(zip_path)\n\n # Unzip contents to project folder\n with zipfile.ZipFile(zip_path, 'r') as z:\n z.extractall(path=project_folder)\n\n # Add to project path\n 
sys.path.insert(0, project_folder)\n\n # Change working directory to project folder\n # Related: https://github.com/Miserlou/Zappa/issues/702\n os.chdir(project_folder)\n return True\n\n def load_remote_settings(self, remote_bucket, remote_file):\n \"\"\"\n Attempt to read a file from s3 containing a flat json object. Adds each\n key->value pair as environment variables. Helpful for keeping\n sensitiZve or stage-specific configuration variables in s3 instead of\n version control.\n \"\"\"\n if not self.session:\n boto_session = boto3.Session()\n else:\n boto_session = self.session\n\n s3 = boto_session.resource('s3')\n try:\n remote_env_object = s3.Object(remote_bucket, remote_file).get()\n except Exception as e: # pragma: no cover\n # catch everything aws might decide to raise\n print('Could not load remote settings file.', e)\n return\n\n try:\n content = remote_env_object['Body'].read()\n except Exception as e: # pragma: no cover\n # catch everything aws might decide to raise\n print('Exception while reading remote settings file.', e)\n return\n\n try:\n settings_dict = json.loads(content)\n except (ValueError, TypeError): # pragma: no cover\n print('Failed to parse remote settings!')\n return\n\n # add each key-value to environment - overwrites existing keys!\n for key, value in settings_dict.items():\n if self.settings.LOG_LEVEL == \"DEBUG\":\n print('Adding {} -> {} to environment'.format(\n key,\n value\n ))\n # Environement variable keys can't be Unicode\n # https://github.com/Miserlou/Zappa/issues/604\n try:\n os.environ[str(key)] = value\n except Exception:\n if self.settings.LOG_LEVEL == \"DEBUG\":\n print(\"Environment variable keys must be non-unicode!\")\n\n @staticmethod\n def import_module_and_get_function(whole_function):\n \"\"\"\n Given a modular path to a function, import that module\n and return the function.\n \"\"\"\n module, function = whole_function.rsplit('.', 1)\n app_module = importlib.import_module(module)\n app_function = getattr(app_module, function)\n return app_function\n\n @classmethod\n def lambda_handler(cls, event, context): # pragma: no cover\n handler = cls()\n exception_handler = handler.settings.EXCEPTION_HANDLER\n try:\n return handler.handler(event, context)\n except Exception as ex:\n exception_processed = cls._process_exception(exception_handler=exception_handler,\n event=event, context=context, exception=ex)\n if not exception_processed:\n # Only re-raise exception if handler directed so. 
Allows handler to control if lambda has to retry\n # an event execution in case of failure.\n raise\n\n @classmethod\n def _process_exception(cls, exception_handler, event, context, exception):\n exception_processed = False\n if exception_handler:\n try:\n handler_function = cls.import_module_and_get_function(exception_handler)\n exception_processed = handler_function(exception, event, context)\n except Exception as cex:\n logger.error(msg='Failed to process exception via custom handler.')\n print(cex)\n return exception_processed\n\n @staticmethod\n def run_function(app_function, event, context):\n \"\"\"\n Given a function and event context,\n detect signature and execute, returning any result.\n \"\"\"\n args, varargs, keywords, defaults = inspect.getargspec(app_function)\n num_args = len(args)\n if num_args == 0:\n result = app_function(event, context) if varargs else app_function()\n elif num_args == 1:\n result = app_function(event, context) if varargs else app_function(event)\n elif num_args == 2:\n result = app_function(event, context)\n else:\n raise RuntimeError(\"Function signature is invalid. Expected a function that accepts at most \"\n \"2 arguments or varargs.\")\n return result\n\n def get_function_for_aws_event(self, record):\n \"\"\"\n Get the associated function to execute for a triggered AWS event\n\n Support S3, SNS, DynamoDB and kinesis events\n \"\"\"\n if 's3' in record:\n return record['s3']['configurationId'].split(':')[-1]\n\n arn = None\n if 'Sns' in record:\n try:\n message = json.loads(record['Sns']['Message'])\n if message.get('command'):\n return message['command']\n except ValueError:\n pass\n arn = record['Sns'].get('TopicArn')\n elif 'dynamodb' in record or 'kinesis' in record:\n arn = record.get('eventSourceARN')\n\n if arn:\n return self.settings.AWS_EVENT_MAPPING.get(arn)\n\n return None\n\n def handler(self, event, context):\n \"\"\"\n An AWS Lambda function which parses specific API Gateway input into a\n WSGI request, feeds it to our WSGI app, procceses the response, and returns\n that back to the API Gateway.\n\n \"\"\"\n settings = self.settings\n\n # If in DEBUG mode, log all raw incoming events.\n if settings.DEBUG:\n logger.debug('Zappa Event: {}'.format(event))\n\n # This is the result of a keep alive, recertify\n # or scheduled event.\n if event.get('detail-type') == u'Scheduled Event':\n\n whole_function = event['resources'][0].split('/')[-1].split('-')[-1]\n\n # This is a scheduled function.\n if '.' 
in whole_function:\n app_function = self.import_module_and_get_function(whole_function)\n\n # Execute the function!\n return self.run_function(app_function, event, context)\n\n # Else, let this execute as it were.\n\n # This is a direct command invocation.\n elif event.get('command', None):\n\n whole_function = event['command']\n app_function = self.import_module_and_get_function(whole_function)\n result = self.run_function(app_function, event, context)\n print(\"Result of %s:\" % whole_function)\n print(result)\n return result\n\n # This is a direct, raw python invocation.\n # It's _extremely_ important we don't allow this event source\n # to be overriden by unsanitized, non-admin user input.\n elif event.get('raw_command', None):\n\n raw_command = event['raw_command']\n exec(raw_command)\n return\n\n # This is a Django management command invocation.\n elif event.get('manage', None):\n\n from django.core import management\n\n try: # Support both for tests\n from zappa.ext.django_zappa import get_django_wsgi\n except ImportError as e: # pragma: no cover\n from django_zappa_app import get_django_wsgi\n\n # Get the Django WSGI app from our extension\n # We don't actually need the function,\n # but we do need to do all of the required setup for it.\n app_function = get_django_wsgi(self.settings.DJANGO_SETTINGS)\n\n # Couldn't figure out how to get the value into stdout with StringIO..\n # Read the log for now. :[]\n management.call_command(*event['manage'].split(' '))\n return {}\n\n # This is an AWS-event triggered invokation.\n elif event.get('Records', None):\n\n records = event.get('Records')\n result = None\n whole_function = self.get_function_for_aws_event(records[0])\n if whole_function:\n app_function = self.import_module_and_get_function(whole_function)\n result = self.run_function(app_function, event, context)\n logger.debug(result)\n else:\n logger.error(\"Cannot find a function to process the triggered event.\")\n return result\n\n # This is an API Gateway authorizer event\n elif event.get('type') == u'TOKEN':\n whole_function = self.settings.AUTHORIZER_FUNCTION\n if whole_function:\n app_function = self.import_module_and_get_function(whole_function)\n policy = self.run_function(app_function, event, context)\n return policy\n else:\n logger.error(\"Cannot find a function to process the authorization request.\")\n raise Exception('Unauthorized')\n\n # Normal web app flow\n try:\n # Timing\n time_start = datetime.datetime.now()\n\n # This is a normal HTTP request\n if event.get('httpMethod', None):\n\n if settings.DOMAIN:\n # If we're on a domain, we operate normally\n script_name = ''\n else:\n # But if we're not, then our base URL\n # will be something like\n # https://blahblahblah.execute-api.us-east-1.amazonaws.com/dev\n # So, we need to make sure the WSGI app knows this.\n script_name = '/' + settings.API_STAGE\n\n # Create the environment for WSGI and handle the request\n environ = create_wsgi_request(\n event,\n script_name=script_name,\n trailing_slash=self.trailing_slash,\n binary_support=settings.BINARY_SUPPORT\n )\n\n # We are always on https on Lambda, so tell our wsgi app that.\n environ['HTTPS'] = 'on'\n environ['wsgi.url_scheme'] = 'https'\n environ['lambda.context'] = context\n\n # Execute the application\n response = Response.from_app(self.wsgi_app, environ)\n\n # This is the object we're going to return.\n # Pack the WSGI response into our special dictionary.\n zappa_returndict = dict()\n\n if response.data:\n if settings.BINARY_SUPPORT:\n if not 
response.mimetype.startswith(\"text/\") \\\n or response.mimetype != \"application/json\":\n zappa_returndict['body'] = base64.b64encode(response.data).decode('utf-8')\n zappa_returndict[\"isBase64Encoded\"] = \"true\"\n else:\n zappa_returndict['body'] = response.data\n else:\n zappa_returndict['body'] = response.data\n\n zappa_returndict['statusCode'] = response.status_code\n zappa_returndict['headers'] = {}\n for key, value in response.headers:\n zappa_returndict['headers'][key] = value\n\n # Calculate the total response time,\n # and log it in the Common Log format.\n time_end = datetime.datetime.now()\n delta = time_end - time_start\n response_time_ms = delta.total_seconds() * 1000\n response.content = response.data\n common_log(environ, response, response_time=response_time_ms)\n\n return zappa_returndict\n except Exception as e: # pragma: no cover\n\n # Print statements are visible in the logs either way\n print(e)\n exc_info = sys.exc_info()\n message = ('An uncaught exception happened while servicing this request. '\n 'You can investigate this with the `zappa tail` command.')\n\n # If we didn't even build an app_module, just raise.\n if not settings.DJANGO_SETTINGS:\n try:\n self.app_module\n except NameError as ne:\n message = 'Failed to import module: {}'.format(ne.message)\n\n # Return this unspecified exception as a 500, using template that API Gateway expects.\n content = collections.OrderedDict()\n content['statusCode'] = 500\n body = {'message': message}\n if settings.DEBUG: # only include traceback if debug is on.\n body['traceback'] = traceback.format_exception(*exc_info) # traceback as a list for readability.\n content['body'] = json.dumps(str(body), sort_keys=True, indent=4)\n return content\n\n\ndef lambda_handler(event, context): # pragma: no cover\n return LambdaHandler.lambda_handler(event, context)\n\n\ndef keep_warm_callback(event, context):\n \"\"\"Method is triggered by the CloudWatch event scheduled when keep_warm setting is set to true.\"\"\"\n lambda_handler(event={}, context=context) # overriding event with an empty one so that web app initialization will\n # be triggered.\n", "path": "zappa/handler.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nimport base64\nimport boto3\nimport collections\nimport datetime\nimport importlib\nimport inspect\nimport json\nimport logging\nimport os\nimport sys\nimport traceback\nimport zipfile\n\nfrom builtins import str\nfrom werkzeug.wrappers import Response\n\n# This file may be copied into a project's root,\n# so handle both scenarios.\ntry:\n from zappa.middleware import ZappaWSGIMiddleware\n from zappa.wsgi import create_wsgi_request, common_log\n from zappa.utilities import parse_s3_url\nexcept ImportError as e: # pragma: no cover\n from .middleware import ZappaWSGIMiddleware\n from .wsgi import create_wsgi_request, common_log\n from .utilities import parse_s3_url\n\n\n# Set up logging\nlogging.basicConfig()\nlogger = logging.getLogger()\nlogger.setLevel(logging.INFO)\n\n\nclass LambdaHandler(object):\n \"\"\"\n Singleton for avoiding duplicate setup.\n\n Pattern provided by @benbangert.\n \"\"\"\n\n __instance = None\n settings = None\n settings_name = None\n session = None\n\n # Application\n app_module = None\n wsgi_app = None\n trailing_slash = False\n\n def __new__(cls, settings_name=\"zappa_settings\", session=None):\n \"\"\"Singleton instance to avoid repeat setup\"\"\"\n if LambdaHandler.__instance is None:\n if sys.version_info[0] < 3:\n LambdaHandler.__instance = 
object.__new__(cls, settings_name, session)\n else:\n print(\"Instancing..\")\n LambdaHandler.__instance = object.__new__(cls)\n return LambdaHandler.__instance\n\n def __init__(self, settings_name=\"zappa_settings\", session=None):\n\n # We haven't cached our settings yet, load the settings and app.\n if not self.settings:\n # Loading settings from a python module\n self.settings = importlib.import_module(settings_name)\n self.settings_name = settings_name\n self.session = session\n\n # Custom log level\n if self.settings.LOG_LEVEL:\n level = logging.getLevelName(self.settings.LOG_LEVEL)\n logger.setLevel(level)\n\n remote_env = getattr(self.settings, 'REMOTE_ENV', None)\n remote_bucket, remote_file = parse_s3_url(remote_env)\n\n if remote_bucket and remote_file:\n self.load_remote_settings(remote_bucket, remote_file)\n\n # Let the system know that this will be a Lambda/Zappa/Stack\n os.environ[\"SERVERTYPE\"] = \"AWS Lambda\"\n os.environ[\"FRAMEWORK\"] = \"Zappa\"\n try:\n os.environ[\"PROJECT\"] = self.settings.PROJECT_NAME\n os.environ[\"STAGE\"] = self.settings.API_STAGE\n except Exception: # pragma: no cover\n pass\n\n # Set any locally defined env vars\n # Environement variable keys can't be Unicode\n # https://github.com/Miserlou/Zappa/issues/604\n for key in self.settings.ENVIRONMENT_VARIABLES.keys():\n os.environ[str(key)] = self.settings.ENVIRONMENT_VARIABLES[key]\n\n # Pulling from S3 if given a zip path\n project_zip_path = getattr(self.settings, 'ZIP_PATH', None)\n if project_zip_path:\n self.load_remote_project_zip(project_zip_path)\n\n\n # Load compliled library to the PythonPath\n # checks if we are the slim_handler since this is not needed otherwise\n # https://github.com/Miserlou/Zappa/issues/776\n is_slim_handler = getattr(self.settings, 'SLIM_HANDLER', False)\n if is_slim_handler:\n included_libraries = getattr(self.settings, 'INCLUDE', ['libmysqlclient.so.18'])\n try:\n from ctypes import cdll, util\n for library in included_libraries:\n try:\n cdll.LoadLibrary(os.path.join(os.getcwd(), library))\n except OSError:\n print (\"Failed to find library...right filename?\")\n except ImportError:\n print (\"Failed to import cytpes library\")\n\n # This is a non-WSGI application\n # https://github.com/Miserlou/Zappa/pull/748\n if not hasattr(self.settings, 'APP_MODULE') and not self.settings.DJANGO_SETTINGS:\n self.app_module = None\n wsgi_app_function = None\n # This is probably a normal WSGI app\n elif not self.settings.DJANGO_SETTINGS:\n # The app module\n self.app_module = importlib.import_module(self.settings.APP_MODULE)\n\n # The application\n wsgi_app_function = getattr(self.app_module, self.settings.APP_FUNCTION)\n self.trailing_slash = False\n # Django gets special treatment.\n else:\n\n try: # Support both for tests\n from zappa.ext.django_zappa import get_django_wsgi\n except ImportError: # pragma: no cover\n from django_zappa_app import get_django_wsgi\n\n # Get the Django WSGI app from our extension\n wsgi_app_function = get_django_wsgi(self.settings.DJANGO_SETTINGS)\n self.trailing_slash = True\n\n self.wsgi_app = ZappaWSGIMiddleware(wsgi_app_function)\n\n def load_remote_project_zip(self, project_zip_path):\n \"\"\"\n Puts the project files from S3 in /tmp and adds to path\n \"\"\"\n project_folder = '/tmp/{0!s}'.format(self.settings.PROJECT_NAME)\n if not os.path.isdir(project_folder):\n # The project folder doesn't exist in this cold lambda, get it from S3\n if not self.session:\n boto_session = boto3.Session()\n else:\n boto_session = self.session\n\n # 
Download the zip\n remote_bucket, remote_file = parse_s3_url(project_zip_path)\n s3 = boto_session.resource('s3')\n\n zip_path = '/tmp/{0!s}'.format(remote_file)\n s3.Object(remote_bucket, remote_file).download_file(zip_path)\n\n # Unzip contents to project folder\n with zipfile.ZipFile(zip_path, 'r') as z:\n z.extractall(path=project_folder)\n\n # Add to project path\n sys.path.insert(0, project_folder)\n\n # Change working directory to project folder\n # Related: https://github.com/Miserlou/Zappa/issues/702\n os.chdir(project_folder)\n return True\n\n def load_remote_settings(self, remote_bucket, remote_file):\n \"\"\"\n Attempt to read a file from s3 containing a flat json object. Adds each\n key->value pair as environment variables. Helpful for keeping\n sensitiZve or stage-specific configuration variables in s3 instead of\n version control.\n \"\"\"\n if not self.session:\n boto_session = boto3.Session()\n else:\n boto_session = self.session\n\n s3 = boto_session.resource('s3')\n try:\n remote_env_object = s3.Object(remote_bucket, remote_file).get()\n except Exception as e: # pragma: no cover\n # catch everything aws might decide to raise\n print('Could not load remote settings file.', e)\n return\n\n try:\n content = remote_env_object['Body'].read()\n except Exception as e: # pragma: no cover\n # catch everything aws might decide to raise\n print('Exception while reading remote settings file.', e)\n return\n\n try:\n settings_dict = json.loads(content)\n except (ValueError, TypeError): # pragma: no cover\n print('Failed to parse remote settings!')\n return\n\n # add each key-value to environment - overwrites existing keys!\n for key, value in settings_dict.items():\n if self.settings.LOG_LEVEL == \"DEBUG\":\n print('Adding {} -> {} to environment'.format(\n key,\n value\n ))\n # Environement variable keys can't be Unicode\n # https://github.com/Miserlou/Zappa/issues/604\n try:\n os.environ[str(key)] = value\n except Exception:\n if self.settings.LOG_LEVEL == \"DEBUG\":\n print(\"Environment variable keys must be non-unicode!\")\n\n @staticmethod\n def import_module_and_get_function(whole_function):\n \"\"\"\n Given a modular path to a function, import that module\n and return the function.\n \"\"\"\n module, function = whole_function.rsplit('.', 1)\n app_module = importlib.import_module(module)\n app_function = getattr(app_module, function)\n return app_function\n\n @classmethod\n def lambda_handler(cls, event, context): # pragma: no cover\n handler = cls()\n exception_handler = handler.settings.EXCEPTION_HANDLER\n try:\n return handler.handler(event, context)\n except Exception as ex:\n exception_processed = cls._process_exception(exception_handler=exception_handler,\n event=event, context=context, exception=ex)\n if not exception_processed:\n # Only re-raise exception if handler directed so. 
Allows handler to control if lambda has to retry\n # an event execution in case of failure.\n raise\n\n @classmethod\n def _process_exception(cls, exception_handler, event, context, exception):\n exception_processed = False\n if exception_handler:\n try:\n handler_function = cls.import_module_and_get_function(exception_handler)\n exception_processed = handler_function(exception, event, context)\n except Exception as cex:\n logger.error(msg='Failed to process exception via custom handler.')\n print(cex)\n return exception_processed\n\n @staticmethod\n def run_function(app_function, event, context):\n \"\"\"\n Given a function and event context,\n detect signature and execute, returning any result.\n \"\"\"\n args, varargs, keywords, defaults = inspect.getargspec(app_function)\n num_args = len(args)\n if num_args == 0:\n result = app_function(event, context) if varargs else app_function()\n elif num_args == 1:\n result = app_function(event, context) if varargs else app_function(event)\n elif num_args == 2:\n result = app_function(event, context)\n else:\n raise RuntimeError(\"Function signature is invalid. Expected a function that accepts at most \"\n \"2 arguments or varargs.\")\n return result\n\n def get_function_for_aws_event(self, record):\n \"\"\"\n Get the associated function to execute for a triggered AWS event\n\n Support S3, SNS, DynamoDB and kinesis events\n \"\"\"\n if 's3' in record:\n return record['s3']['configurationId'].split(':')[-1]\n\n arn = None\n if 'Sns' in record:\n try:\n message = json.loads(record['Sns']['Message'])\n if message.get('command'):\n return message['command']\n except ValueError:\n pass\n arn = record['Sns'].get('TopicArn')\n elif 'dynamodb' in record or 'kinesis' in record:\n arn = record.get('eventSourceARN')\n\n if arn:\n return self.settings.AWS_EVENT_MAPPING.get(arn)\n\n return None\n\n def handler(self, event, context):\n \"\"\"\n An AWS Lambda function which parses specific API Gateway input into a\n WSGI request, feeds it to our WSGI app, procceses the response, and returns\n that back to the API Gateway.\n\n \"\"\"\n settings = self.settings\n\n # If in DEBUG mode, log all raw incoming events.\n if settings.DEBUG:\n logger.debug('Zappa Event: {}'.format(event))\n\n # This is the result of a keep alive, recertify\n # or scheduled event.\n if event.get('detail-type') == u'Scheduled Event':\n\n whole_function = event['resources'][0].split('/')[-1].split('-')[-1]\n\n # This is a scheduled function.\n if '.' 
in whole_function:\n app_function = self.import_module_and_get_function(whole_function)\n\n # Execute the function!\n return self.run_function(app_function, event, context)\n\n # Else, let this execute as it were.\n\n # This is a direct command invocation.\n elif event.get('command', None):\n\n whole_function = event['command']\n app_function = self.import_module_and_get_function(whole_function)\n result = self.run_function(app_function, event, context)\n print(\"Result of %s:\" % whole_function)\n print(result)\n return result\n\n # This is a direct, raw python invocation.\n # It's _extremely_ important we don't allow this event source\n # to be overriden by unsanitized, non-admin user input.\n elif event.get('raw_command', None):\n\n raw_command = event['raw_command']\n exec(raw_command)\n return\n\n # This is a Django management command invocation.\n elif event.get('manage', None):\n\n from django.core import management\n\n try: # Support both for tests\n from zappa.ext.django_zappa import get_django_wsgi\n except ImportError as e: # pragma: no cover\n from django_zappa_app import get_django_wsgi\n\n # Get the Django WSGI app from our extension\n # We don't actually need the function,\n # but we do need to do all of the required setup for it.\n app_function = get_django_wsgi(self.settings.DJANGO_SETTINGS)\n\n # Couldn't figure out how to get the value into stdout with StringIO..\n # Read the log for now. :[]\n management.call_command(*event['manage'].split(' '))\n return {}\n\n # This is an AWS-event triggered invokation.\n elif event.get('Records', None):\n\n records = event.get('Records')\n result = None\n whole_function = self.get_function_for_aws_event(records[0])\n if whole_function:\n app_function = self.import_module_and_get_function(whole_function)\n result = self.run_function(app_function, event, context)\n logger.debug(result)\n else:\n logger.error(\"Cannot find a function to process the triggered event.\")\n return result\n\n # This is an API Gateway authorizer event\n elif event.get('type') == u'TOKEN':\n whole_function = self.settings.AUTHORIZER_FUNCTION\n if whole_function:\n app_function = self.import_module_and_get_function(whole_function)\n policy = self.run_function(app_function, event, context)\n return policy\n else:\n logger.error(\"Cannot find a function to process the authorization request.\")\n raise Exception('Unauthorized')\n\n # Normal web app flow\n try:\n # Timing\n time_start = datetime.datetime.now()\n\n # This is a normal HTTP request\n if event.get('httpMethod', None):\n\n if settings.DOMAIN:\n # If we're on a domain, we operate normally\n script_name = ''\n else:\n # But if we're not, then our base URL\n # will be something like\n # https://blahblahblah.execute-api.us-east-1.amazonaws.com/dev\n # So, we need to make sure the WSGI app knows this.\n script_name = '/' + settings.API_STAGE\n\n # Create the environment for WSGI and handle the request\n environ = create_wsgi_request(\n event,\n script_name=script_name,\n trailing_slash=self.trailing_slash,\n binary_support=settings.BINARY_SUPPORT\n )\n\n # We are always on https on Lambda, so tell our wsgi app that.\n environ['HTTPS'] = 'on'\n environ['wsgi.url_scheme'] = 'https'\n environ['lambda.context'] = context\n\n # Execute the application\n response = Response.from_app(self.wsgi_app, environ)\n\n # This is the object we're going to return.\n # Pack the WSGI response into our special dictionary.\n zappa_returndict = dict()\n\n if response.data:\n if settings.BINARY_SUPPORT:\n if not 
response.mimetype.startswith(\"text/\") \\\n or response.mimetype != \"application/json\":\n zappa_returndict['body'] = base64.b64encode(response.data).decode('utf-8')\n zappa_returndict[\"isBase64Encoded\"] = \"true\"\n else:\n zappa_returndict['body'] = response.data\n else:\n zappa_returndict['body'] = response.data\n\n zappa_returndict['statusCode'] = response.status_code\n zappa_returndict['headers'] = {}\n for key, value in response.headers:\n zappa_returndict['headers'][key] = value\n\n # Calculate the total response time,\n # and log it in the Common Log format.\n time_end = datetime.datetime.now()\n delta = time_end - time_start\n response_time_ms = delta.total_seconds() * 1000\n response.content = response.data\n common_log(environ, response, response_time=response_time_ms)\n\n return zappa_returndict\n except Exception as e: # pragma: no cover\n\n # Print statements are visible in the logs either way\n print(e)\n exc_info = sys.exc_info()\n message = ('An uncaught exception happened while servicing this request. '\n 'You can investigate this with the `zappa tail` command.')\n\n # If we didn't even build an app_module, just raise.\n if not settings.DJANGO_SETTINGS:\n try:\n self.app_module\n except NameError as ne:\n message = 'Failed to import module: {}'.format(ne.message)\n\n # Return this unspecified exception as a 500, using template that API Gateway expects.\n content = collections.OrderedDict()\n content['statusCode'] = 500\n body = {'message': message}\n if settings.DEBUG: # only include traceback if debug is on.\n body['traceback'] = traceback.format_exception(*exc_info) # traceback as a list for readability.\n content['body'] = json.dumps(str(body), sort_keys=True, indent=4)\n return content\n\n\ndef lambda_handler(event, context): # pragma: no cover\n return LambdaHandler.lambda_handler(event, context)\n\n\ndef keep_warm_callback(event, context):\n \"\"\"Method is triggered by the CloudWatch event scheduled when keep_warm setting is set to true.\"\"\"\n lambda_handler(event={}, context=context) # overriding event with an empty one so that web app initialization will\n # be triggered.\n", "path": "zappa/handler.py"}]} |
gh_patches_debug_1194 | rasdani/github-patches | git_diff | conda__conda-3335 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
URLs with :: are OK and should not raise assertion errors
For conda-build's perl skeleton generator, we can end up with URLs like:
http://api.metacpan.org/v0/module/Test::More
Unfortunately, conda prevents us from actually using those URLs:
```
File "/Users/msarahan/miniconda2/lib/python2.7/site-packages/conda/fetch.py", line 354, in download
assert "::" not in str(url), url
AssertionError: http://api.metacpan.org/v0/module/Test::More
```
Please partially revert https://github.com/conda/conda/commit/39605e01ccd05b5af5ebeceeacaafe652f4b32e4
--- END ISSUE ---
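The premise of the report — that `::` is perfectly legal inside a URL path — can be checked with the standard library alone; the snippet below (Python 3, illustrative only, not part of conda) splits the MetaCPAN URL from the traceback and shows the `::` confined to the path component:

```python
# "::" may legitimately appear in the path of a URL, e.g. Perl module names on MetaCPAN.
from urllib.parse import urlsplit

url = "http://api.metacpan.org/v0/module/Test::More"
parts = urlsplit(url)
print(parts.scheme)  # 'http'
print(parts.netloc)  # 'api.metacpan.org'
print(parts.path)    # '/v0/module/Test::More' -- the '::' lives only in the path
```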
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `conda/fetch.py`
Content:
```
1 # (c) 2012-2015 Continuum Analytics, Inc. / http://continuum.io
2 # All Rights Reserved
3 #
4 # conda is distributed under the terms of the BSD 3-clause license.
5 # Consult LICENSE.txt or http://opensource.org/licenses/BSD-3-Clause.
6 from __future__ import print_function, division, absolute_import, unicode_literals
7
8 import bz2
9 import getpass
10 import hashlib
11 import json
12 import os
13 import requests
14 import shutil
15 import tempfile
16 import warnings
17 from functools import wraps
18 from logging import getLogger, DEBUG
19 from os.path import basename, dirname, join
20 from requests.packages.urllib3.connectionpool import InsecureRequestWarning
21
22 from ._vendor.auxlib.logz import stringify
23 from .base.context import context
24 from .common.url import add_username_and_pass_to_url, url_to_path
25 from .compat import itervalues, input, iteritems
26 from .connection import CondaSession, RETRIES
27 from .models.channel import Channel, offline_keep
28 from .exceptions import (ProxyError, CondaRuntimeError, CondaSignatureError, CondaHTTPError,
29 MD5MismatchError)
30 from .install import add_cached_package, find_new_location, package_cache, dist2pair, rm_rf
31 from .lock import FileLock
32 from .utils import exp_backoff_fn, memoized
33
34 log = getLogger(__name__)
35 dotlog = getLogger('dotupdate')
36 stdoutlog = getLogger('stdoutlog')
37 stderrlog = getLogger('stderrlog')
38
39 fail_unknown_host = False
40
41
42 def create_cache_dir():
43 cache_dir = join(context.pkgs_dirs[0], 'cache')
44 try:
45 os.makedirs(cache_dir)
46 except OSError:
47 pass
48 return cache_dir
49
50
51 def cache_fn_url(url):
52 md5 = hashlib.md5(url.encode('utf-8')).hexdigest()
53 return '%s.json' % (md5[:8],)
54
55
56 def add_http_value_to_dict(resp, http_key, d, dict_key):
57 value = resp.headers.get(http_key)
58 if value:
59 d[dict_key] = value
60
61 # We need a decorator so that the dot gets printed *after* the repodata is fetched
62 class dotlog_on_return(object):
63 def __init__(self, msg):
64 self.msg = msg
65
66 def __call__(self, f):
67 @wraps(f)
68 def func(*args, **kwargs):
69 res = f(*args, **kwargs)
70 dotlog.debug("%s args %s kwargs %s" % (self.msg, args, kwargs))
71 return res
72 return func
73
74
75 @dotlog_on_return("fetching repodata:")
76 def fetch_repodata(url, cache_dir=None, use_cache=False, session=None):
77 if not offline_keep(url):
78 return {'packages': {}}
79 cache_path = join(cache_dir or create_cache_dir(), cache_fn_url(url))
80 try:
81 log.debug("Opening repodata cache for %s at %s", url, cache_path)
82 with open(cache_path) as f:
83 cache = json.load(f)
84 except (IOError, ValueError):
85 cache = {'packages': {}}
86
87 if use_cache:
88 return cache
89
90 if not context.ssl_verify:
91 warnings.simplefilter('ignore', InsecureRequestWarning)
92
93 session = session or CondaSession()
94
95 headers = {}
96 if "_etag" in cache:
97 headers["If-None-Match"] = cache["_etag"]
98 if "_mod" in cache:
99 headers["If-Modified-Since"] = cache["_mod"]
100
101 if 'repo.continuum.io' in url or url.startswith("file://"):
102 filename = 'repodata.json.bz2'
103 headers['Accept-Encoding'] = 'identity'
104 else:
105 headers['Accept-Encoding'] = 'gzip, deflate, compress, identity'
106 headers['Content-Type'] = 'application/json'
107 filename = 'repodata.json'
108
109 try:
110 resp = session.get(url + filename, headers=headers, proxies=session.proxies,
111 timeout=(3.05, 60))
112 if log.isEnabledFor(DEBUG):
113 log.debug(stringify(resp))
114 resp.raise_for_status()
115
116 if resp.status_code != 304:
117 def get_json_str(filename, resp_content):
118 if filename.endswith('.bz2'):
119 return bz2.decompress(resp_content).decode('utf-8')
120 else:
121 return resp_content.decode('utf-8')
122
123 if url.startswith('file://'):
124 file_path = url_to_path(url)
125 with FileLock(dirname(file_path)):
126 json_str = get_json_str(filename, resp.content)
127 else:
128 json_str = get_json_str(filename, resp.content)
129
130 cache = json.loads(json_str)
131 add_http_value_to_dict(resp, 'Etag', cache, '_etag')
132 add_http_value_to_dict(resp, 'Last-Modified', cache, '_mod')
133
134 except ValueError as e:
135 raise CondaRuntimeError("Invalid index file: {0}{1}: {2}"
136 .format(url, filename, e))
137
138 except requests.exceptions.HTTPError as e:
139 if e.response.status_code == 407: # Proxy Authentication Required
140 handle_proxy_407(url, session)
141 # Try again
142 return fetch_repodata(url, cache_dir=cache_dir, use_cache=use_cache, session=session)
143
144 if e.response.status_code == 404:
145 if url.endswith('/noarch/'): # noarch directory might not exist
146 return None
147 msg = 'Could not find URL: %s' % url
148 elif e.response.status_code == 403 and url.endswith('/noarch/'):
149 return None
150
151 elif e.response.status_code == 401 and context.channel_alias in url:
152 # Note, this will not trigger if the binstar configured url does
153 # not match the conda configured one.
154 msg = ("Warning: you may need to login to anaconda.org again with "
155 "'anaconda login' to access private packages(%s, %s)" %
156 (url, e))
157 stderrlog.info(msg)
158 return fetch_repodata(url, cache_dir=cache_dir, use_cache=use_cache, session=session)
159
160 else:
161 msg = "HTTPError: %s: %s\n" % (e, url)
162
163 log.debug(msg)
164 raise CondaHTTPError(msg)
165
166 except requests.exceptions.SSLError as e:
167 msg = "SSL Error: %s\n" % e
168 stderrlog.info("SSL verification error: %s\n" % e)
169 log.debug(msg)
170
171 except requests.exceptions.ConnectionError as e:
172 # requests isn't so nice here. For whatever reason, https gives this
173 # error and http gives the above error. Also, there is no status_code
174 # attribute here. We have to just check if it looks like 407. See
175 # https://github.com/kennethreitz/requests/issues/2061.
176 if "407" in str(e): # Proxy Authentication Required
177 handle_proxy_407(url, session)
178 # Try again
179 return fetch_repodata(url, cache_dir=cache_dir, use_cache=use_cache, session=session)
180 msg = "Connection error: %s: %s\n" % (e, url)
181 stderrlog.info('Could not connect to %s\n' % url)
182 log.debug(msg)
183 if fail_unknown_host:
184 raise CondaRuntimeError(msg)
185
186 raise CondaRuntimeError(msg)
187 cache['_url'] = url
188 try:
189 with open(cache_path, 'w') as fo:
190 json.dump(cache, fo, indent=2, sort_keys=True)
191 except IOError:
192 pass
193
194 return cache or None
195
196
197 def handle_proxy_407(url, session):
198 """
199 Prompts the user for the proxy username and password and modifies the
200 proxy in the session object to include it.
201 """
202 # We could also use HTTPProxyAuth, but this does not work with https
203 # proxies (see https://github.com/kennethreitz/requests/issues/2061).
204 scheme = requests.packages.urllib3.util.url.parse_url(url).scheme
205 if scheme not in session.proxies:
206 raise ProxyError("""Could not find a proxy for %r. See
207 http://conda.pydata.org/docs/html#configure-conda-for-use-behind-a-proxy-server
208 for more information on how to configure proxies.""" % scheme)
209 username, passwd = get_proxy_username_and_pass(scheme)
210 session.proxies[scheme] = add_username_and_pass_to_url(
211 session.proxies[scheme], username, passwd)
212
213
214 @memoized
215 def get_proxy_username_and_pass(scheme):
216 username = input("\n%s proxy username: " % scheme)
217 passwd = getpass.getpass("Password:")
218 return username, passwd
219
220 def add_unknown(index, priorities):
221 priorities = {p[0]: p[1] for p in itervalues(priorities)}
222 maxp = max(itervalues(priorities)) + 1 if priorities else 1
223 for dist, info in iteritems(package_cache()):
224 schannel, dname = dist2pair(dist)
225 fname = dname + '.tar.bz2'
226 fkey = dist + '.tar.bz2'
227 if fkey in index or not info['dirs']:
228 continue
229 try:
230 with open(join(info['dirs'][0], 'info', 'index.json')) as fi:
231 meta = json.load(fi)
232 except IOError:
233 continue
234 if info['urls']:
235 url = info['urls'][0]
236 elif meta.get('url'):
237 url = meta['url']
238 elif meta.get('channel'):
239 url = meta['channel'].rstrip('/') + '/' + fname
240 else:
241 url = '<unknown>/' + fname
242 if url.rsplit('/', 1)[-1] != fname:
243 continue
244 channel, schannel2 = Channel(url).url_channel_wtf
245 if schannel2 != schannel:
246 continue
247 priority = priorities.get(schannel, maxp)
248 if 'link' in meta:
249 del meta['link']
250 meta.update({'fn': fname, 'url': url, 'channel': channel,
251 'schannel': schannel, 'priority': priority})
252 meta.setdefault('depends', [])
253 log.debug("adding cached pkg to index: %s" % fkey)
254 index[fkey] = meta
255
256 def add_pip_dependency(index):
257 for info in itervalues(index):
258 if (info['name'] == 'python' and
259 info['version'].startswith(('2.', '3.'))):
260 info.setdefault('depends', []).append('pip')
261
262 def fetch_index(channel_urls, use_cache=False, unknown=False, index=None):
263 log.debug('channel_urls=' + repr(channel_urls))
264 # pool = ThreadPool(5)
265 if index is None:
266 index = {}
267 stdoutlog.info("Fetching package metadata ...")
268 # if not isinstance(channel_urls, dict):
269 # channel_urls = prioritize_channels(channel_urls)
270
271 urls = tuple(filter(offline_keep, channel_urls))
272 try:
273 import concurrent.futures
274 executor = concurrent.futures.ThreadPoolExecutor(10)
275 except (ImportError, RuntimeError) as e:
276 # concurrent.futures is only available in Python >= 3.2 or if futures is installed
277 # RuntimeError is thrown if number of threads are limited by OS
278 log.debug(repr(e))
279 session = CondaSession()
280 repodatas = [(url, fetch_repodata(url, use_cache=use_cache, session=session))
281 for url in urls]
282 else:
283 try:
284 futures = tuple(executor.submit(fetch_repodata, url, use_cache=use_cache,
285 session=CondaSession()) for url in urls)
286 repodatas = [(u, f.result()) for u, f in zip(urls, futures)]
287 except RuntimeError as e:
288 # Cannot start new thread, then give up parallel execution
289 log.debug(repr(e))
290 session = CondaSession()
291 repodatas = [(url, fetch_repodata(url, use_cache=use_cache, session=session))
292 for url in urls]
293 finally:
294 executor.shutdown(wait=True)
295
296 for channel, repodata in repodatas:
297 if repodata is None:
298 continue
299 new_index = repodata['packages']
300 url_s, priority = channel_urls[channel]
301 channel = channel.rstrip('/')
302 for fn, info in iteritems(new_index):
303 info['fn'] = fn
304 info['schannel'] = url_s
305 info['channel'] = channel
306 info['priority'] = priority
307 info['url'] = channel + '/' + fn
308 key = url_s + '::' + fn if url_s != 'defaults' else fn
309 index[key] = info
310
311 stdoutlog.info('\n')
312 if unknown:
313 add_unknown(index, channel_urls)
314 if context.add_pip_as_python_dependency:
315 add_pip_dependency(index)
316 return index
317
318
319 def fetch_pkg(info, dst_dir=None, session=None):
320 '''
321 fetch a package given by `info` and store it into `dst_dir`
322 '''
323
324 session = session or CondaSession()
325
326 fn = info['fn']
327 url = info.get('url')
328 if url is None:
329 url = info['channel'] + '/' + fn
330 log.debug("url=%r" % url)
331 if dst_dir is None:
332 dst_dir = find_new_location(fn[:-8])[0]
333 path = join(dst_dir, fn)
334
335 download(url, path, session=session, md5=info['md5'], urlstxt=True)
336 if info.get('sig'):
337 from .signature import verify
338
339 fn2 = fn + '.sig'
340 url = (info['channel'] if info['sig'] == '.' else
341 info['sig'].rstrip('/')) + '/' + fn2
342 log.debug("signature url=%r" % url)
343 download(url, join(dst_dir, fn2), session=session)
344 try:
345 if verify(path):
346 return
347 except CondaSignatureError:
348 raise
349
350 raise CondaSignatureError("Error: Signature for '%s' is invalid." % (basename(path)))
351
352
353 def download(url, dst_path, session=None, md5=None, urlstxt=False, retries=None):
354 assert "::" not in str(url), url
355 assert "::" not in str(dst_path), str(dst_path)
356 if not offline_keep(url):
357 raise RuntimeError("Cannot download in offline mode: %s" % (url,))
358
359 pp = dst_path + '.part'
360 dst_dir = dirname(dst_path)
361 session = session or CondaSession()
362
363 if not context.ssl_verify:
364 try:
365 from requests.packages.urllib3.connectionpool import InsecureRequestWarning
366 except ImportError:
367 pass
368 else:
369 warnings.simplefilter('ignore', InsecureRequestWarning)
370
371 if retries is None:
372 retries = RETRIES
373
374 with FileLock(dst_path):
375 rm_rf(dst_path)
376 try:
377 resp = session.get(url, stream=True, proxies=session.proxies, timeout=(3.05, 27))
378 resp.raise_for_status()
379 except requests.exceptions.HTTPError as e:
380 if e.response.status_code == 407: # Proxy Authentication Required
381 handle_proxy_407(url, session)
382 # Try again
383 return download(url, dst_path, session=session, md5=md5,
384 urlstxt=urlstxt, retries=retries)
385 msg = "HTTPError: %s: %s\n" % (e, url)
386 log.debug(msg)
387 raise CondaRuntimeError(msg)
388
389 except requests.exceptions.ConnectionError as e:
390 # requests isn't so nice here. For whatever reason, https gives
391 # this error and http gives the above error. Also, there is no
392 # status_code attribute here. We have to just check if it looks
393 # like 407.
394 # See: https://github.com/kennethreitz/requests/issues/2061.
395 if "407" in str(e): # Proxy Authentication Required
396 handle_proxy_407(url, session)
397 # try again
398 return download(url, dst_path, session=session, md5=md5,
399 urlstxt=urlstxt, retries=retries)
400 msg = "Connection error: %s: %s\n" % (e, url)
401 stderrlog.info('Could not connect to %s\n' % url)
402 log.debug(msg)
403 raise CondaRuntimeError(msg)
404
405 except IOError as e:
406 raise CondaRuntimeError("Could not open '%s': %s" % (url, e))
407
408 size = resp.headers.get('Content-Length')
409 if size:
410 size = int(size)
411 fn = basename(dst_path)
412 getLogger('fetch.start').info((fn[:14], size))
413
414 if md5:
415 h = hashlib.new('md5')
416 try:
417 with open(pp, 'wb') as fo:
418 index = 0
419 for chunk in resp.iter_content(2**14):
420 index += len(chunk)
421 try:
422 fo.write(chunk)
423 except IOError:
424 raise CondaRuntimeError("Failed to write to %r." % pp)
425
426 if md5:
427 h.update(chunk)
428
429 if size and 0 <= index <= size:
430 getLogger('fetch.update').info(index)
431
432 except IOError as e:
433 if e.errno == 104 and retries: # Connection reset by pee
434 # try again
435 log.debug("%s, trying again" % e)
436 return download(url, dst_path, session=session, md5=md5,
437 urlstxt=urlstxt, retries=retries - 1)
438 raise CondaRuntimeError("Could not open %r for writing (%s)." % (pp, e))
439
440 if size:
441 getLogger('fetch.stop').info(None)
442
443 if md5 and h.hexdigest() != md5:
444 if retries:
445 # try again
446 log.debug("MD5 sums mismatch for download: %s (%s != %s), "
447 "trying again" % (url, h.hexdigest(), md5))
448 return download(url, dst_path, session=session, md5=md5,
449 urlstxt=urlstxt, retries=retries - 1)
450 raise MD5MismatchError("MD5 sums mismatch for download: %s (%s != %s)"
451 % (url, h.hexdigest(), md5))
452
453 try:
454 exp_backoff_fn(os.rename, pp, dst_path)
455 except OSError as e:
456 raise CondaRuntimeError("Could not rename %r to %r: %r" %
457 (pp, dst_path, e))
458
459 if urlstxt:
460 add_cached_package(dst_dir, url, overwrite=True, urlstxt=True)
461
462
463 class TmpDownload(object):
464 """
465 Context manager to handle downloads to a tempfile
466 """
467 def __init__(self, url, verbose=True):
468 self.url = url
469 self.verbose = verbose
470
471 def __enter__(self):
472 if '://' not in self.url:
473 # if we provide the file itself, no tmp dir is created
474 self.tmp_dir = None
475 return self.url
476 else:
477 if self.verbose:
478 from .console import setup_handlers
479 setup_handlers()
480 self.tmp_dir = tempfile.mkdtemp()
481 dst = join(self.tmp_dir, basename(self.url))
482 download(self.url, dst)
483 return dst
484
485 def __exit__(self, exc_type, exc_value, traceback):
486 if self.tmp_dir:
487 shutil.rmtree(self.tmp_dir)
488
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/conda/fetch.py b/conda/fetch.py
--- a/conda/fetch.py
+++ b/conda/fetch.py
@@ -351,7 +351,6 @@
def download(url, dst_path, session=None, md5=None, urlstxt=False, retries=None):
- assert "::" not in str(url), url
assert "::" not in str(dst_path), str(dst_path)
if not offline_keep(url):
raise RuntimeError("Cannot download in offline mode: %s" % (url,))
| {"golden_diff": "diff --git a/conda/fetch.py b/conda/fetch.py\n--- a/conda/fetch.py\n+++ b/conda/fetch.py\n@@ -351,7 +351,6 @@\n \n \n def download(url, dst_path, session=None, md5=None, urlstxt=False, retries=None):\n- assert \"::\" not in str(url), url\n assert \"::\" not in str(dst_path), str(dst_path)\n if not offline_keep(url):\n raise RuntimeError(\"Cannot download in offline mode: %s\" % (url,))\n", "issue": "URLs with :: are OK and should not raise assertion errors\nFor conda-build's perl skeleton generator, we can end up with URLs like:\n\nhttp://api.metacpan.org/v0/module/Test::More\n\nUnfortunately, conda prevents us from actually using those URLs:\n\n```\n File \"/Users/msarahan/miniconda2/lib/python2.7/site-packages/conda/fetch.py\", line 354, in download\n assert \"::\" not in str(url), url\nAssertionError: http://api.metacpan.org/v0/module/Test::More\n```\n\nPlease partially revert https://github.com/conda/conda/commit/39605e01ccd05b5af5ebeceeacaafe652f4b32e4\n\n", "before_files": [{"content": "# (c) 2012-2015 Continuum Analytics, Inc. / http://continuum.io\n# All Rights Reserved\n#\n# conda is distributed under the terms of the BSD 3-clause license.\n# Consult LICENSE.txt or http://opensource.org/licenses/BSD-3-Clause.\nfrom __future__ import print_function, division, absolute_import, unicode_literals\n\nimport bz2\nimport getpass\nimport hashlib\nimport json\nimport os\nimport requests\nimport shutil\nimport tempfile\nimport warnings\nfrom functools import wraps\nfrom logging import getLogger, DEBUG\nfrom os.path import basename, dirname, join\nfrom requests.packages.urllib3.connectionpool import InsecureRequestWarning\n\nfrom ._vendor.auxlib.logz import stringify\nfrom .base.context import context\nfrom .common.url import add_username_and_pass_to_url, url_to_path\nfrom .compat import itervalues, input, iteritems\nfrom .connection import CondaSession, RETRIES\nfrom .models.channel import Channel, offline_keep\nfrom .exceptions import (ProxyError, CondaRuntimeError, CondaSignatureError, CondaHTTPError,\n MD5MismatchError)\nfrom .install import add_cached_package, find_new_location, package_cache, dist2pair, rm_rf\nfrom .lock import FileLock\nfrom .utils import exp_backoff_fn, memoized\n\nlog = getLogger(__name__)\ndotlog = getLogger('dotupdate')\nstdoutlog = getLogger('stdoutlog')\nstderrlog = getLogger('stderrlog')\n\nfail_unknown_host = False\n\n\ndef create_cache_dir():\n cache_dir = join(context.pkgs_dirs[0], 'cache')\n try:\n os.makedirs(cache_dir)\n except OSError:\n pass\n return cache_dir\n\n\ndef cache_fn_url(url):\n md5 = hashlib.md5(url.encode('utf-8')).hexdigest()\n return '%s.json' % (md5[:8],)\n\n\ndef add_http_value_to_dict(resp, http_key, d, dict_key):\n value = resp.headers.get(http_key)\n if value:\n d[dict_key] = value\n\n# We need a decorator so that the dot gets printed *after* the repodata is fetched\nclass dotlog_on_return(object):\n def __init__(self, msg):\n self.msg = msg\n\n def __call__(self, f):\n @wraps(f)\n def func(*args, **kwargs):\n res = f(*args, **kwargs)\n dotlog.debug(\"%s args %s kwargs %s\" % (self.msg, args, kwargs))\n return res\n return func\n\n\n@dotlog_on_return(\"fetching repodata:\")\ndef fetch_repodata(url, cache_dir=None, use_cache=False, session=None):\n if not offline_keep(url):\n return {'packages': {}}\n cache_path = join(cache_dir or create_cache_dir(), cache_fn_url(url))\n try:\n log.debug(\"Opening repodata cache for %s at %s\", url, cache_path)\n with open(cache_path) as f:\n cache = json.load(f)\n except (IOError, 
ValueError):\n cache = {'packages': {}}\n\n if use_cache:\n return cache\n\n if not context.ssl_verify:\n warnings.simplefilter('ignore', InsecureRequestWarning)\n\n session = session or CondaSession()\n\n headers = {}\n if \"_etag\" in cache:\n headers[\"If-None-Match\"] = cache[\"_etag\"]\n if \"_mod\" in cache:\n headers[\"If-Modified-Since\"] = cache[\"_mod\"]\n\n if 'repo.continuum.io' in url or url.startswith(\"file://\"):\n filename = 'repodata.json.bz2'\n headers['Accept-Encoding'] = 'identity'\n else:\n headers['Accept-Encoding'] = 'gzip, deflate, compress, identity'\n headers['Content-Type'] = 'application/json'\n filename = 'repodata.json'\n\n try:\n resp = session.get(url + filename, headers=headers, proxies=session.proxies,\n timeout=(3.05, 60))\n if log.isEnabledFor(DEBUG):\n log.debug(stringify(resp))\n resp.raise_for_status()\n\n if resp.status_code != 304:\n def get_json_str(filename, resp_content):\n if filename.endswith('.bz2'):\n return bz2.decompress(resp_content).decode('utf-8')\n else:\n return resp_content.decode('utf-8')\n\n if url.startswith('file://'):\n file_path = url_to_path(url)\n with FileLock(dirname(file_path)):\n json_str = get_json_str(filename, resp.content)\n else:\n json_str = get_json_str(filename, resp.content)\n\n cache = json.loads(json_str)\n add_http_value_to_dict(resp, 'Etag', cache, '_etag')\n add_http_value_to_dict(resp, 'Last-Modified', cache, '_mod')\n\n except ValueError as e:\n raise CondaRuntimeError(\"Invalid index file: {0}{1}: {2}\"\n .format(url, filename, e))\n\n except requests.exceptions.HTTPError as e:\n if e.response.status_code == 407: # Proxy Authentication Required\n handle_proxy_407(url, session)\n # Try again\n return fetch_repodata(url, cache_dir=cache_dir, use_cache=use_cache, session=session)\n\n if e.response.status_code == 404:\n if url.endswith('/noarch/'): # noarch directory might not exist\n return None\n msg = 'Could not find URL: %s' % url\n elif e.response.status_code == 403 and url.endswith('/noarch/'):\n return None\n\n elif e.response.status_code == 401 and context.channel_alias in url:\n # Note, this will not trigger if the binstar configured url does\n # not match the conda configured one.\n msg = (\"Warning: you may need to login to anaconda.org again with \"\n \"'anaconda login' to access private packages(%s, %s)\" %\n (url, e))\n stderrlog.info(msg)\n return fetch_repodata(url, cache_dir=cache_dir, use_cache=use_cache, session=session)\n\n else:\n msg = \"HTTPError: %s: %s\\n\" % (e, url)\n\n log.debug(msg)\n raise CondaHTTPError(msg)\n\n except requests.exceptions.SSLError as e:\n msg = \"SSL Error: %s\\n\" % e\n stderrlog.info(\"SSL verification error: %s\\n\" % e)\n log.debug(msg)\n\n except requests.exceptions.ConnectionError as e:\n # requests isn't so nice here. For whatever reason, https gives this\n # error and http gives the above error. Also, there is no status_code\n # attribute here. We have to just check if it looks like 407. 
See\n # https://github.com/kennethreitz/requests/issues/2061.\n if \"407\" in str(e): # Proxy Authentication Required\n handle_proxy_407(url, session)\n # Try again\n return fetch_repodata(url, cache_dir=cache_dir, use_cache=use_cache, session=session)\n msg = \"Connection error: %s: %s\\n\" % (e, url)\n stderrlog.info('Could not connect to %s\\n' % url)\n log.debug(msg)\n if fail_unknown_host:\n raise CondaRuntimeError(msg)\n\n raise CondaRuntimeError(msg)\n cache['_url'] = url\n try:\n with open(cache_path, 'w') as fo:\n json.dump(cache, fo, indent=2, sort_keys=True)\n except IOError:\n pass\n\n return cache or None\n\n\ndef handle_proxy_407(url, session):\n \"\"\"\n Prompts the user for the proxy username and password and modifies the\n proxy in the session object to include it.\n \"\"\"\n # We could also use HTTPProxyAuth, but this does not work with https\n # proxies (see https://github.com/kennethreitz/requests/issues/2061).\n scheme = requests.packages.urllib3.util.url.parse_url(url).scheme\n if scheme not in session.proxies:\n raise ProxyError(\"\"\"Could not find a proxy for %r. See\nhttp://conda.pydata.org/docs/html#configure-conda-for-use-behind-a-proxy-server\nfor more information on how to configure proxies.\"\"\" % scheme)\n username, passwd = get_proxy_username_and_pass(scheme)\n session.proxies[scheme] = add_username_and_pass_to_url(\n session.proxies[scheme], username, passwd)\n\n\n@memoized\ndef get_proxy_username_and_pass(scheme):\n username = input(\"\\n%s proxy username: \" % scheme)\n passwd = getpass.getpass(\"Password:\")\n return username, passwd\n\ndef add_unknown(index, priorities):\n priorities = {p[0]: p[1] for p in itervalues(priorities)}\n maxp = max(itervalues(priorities)) + 1 if priorities else 1\n for dist, info in iteritems(package_cache()):\n schannel, dname = dist2pair(dist)\n fname = dname + '.tar.bz2'\n fkey = dist + '.tar.bz2'\n if fkey in index or not info['dirs']:\n continue\n try:\n with open(join(info['dirs'][0], 'info', 'index.json')) as fi:\n meta = json.load(fi)\n except IOError:\n continue\n if info['urls']:\n url = info['urls'][0]\n elif meta.get('url'):\n url = meta['url']\n elif meta.get('channel'):\n url = meta['channel'].rstrip('/') + '/' + fname\n else:\n url = '<unknown>/' + fname\n if url.rsplit('/', 1)[-1] != fname:\n continue\n channel, schannel2 = Channel(url).url_channel_wtf\n if schannel2 != schannel:\n continue\n priority = priorities.get(schannel, maxp)\n if 'link' in meta:\n del meta['link']\n meta.update({'fn': fname, 'url': url, 'channel': channel,\n 'schannel': schannel, 'priority': priority})\n meta.setdefault('depends', [])\n log.debug(\"adding cached pkg to index: %s\" % fkey)\n index[fkey] = meta\n\ndef add_pip_dependency(index):\n for info in itervalues(index):\n if (info['name'] == 'python' and\n info['version'].startswith(('2.', '3.'))):\n info.setdefault('depends', []).append('pip')\n\ndef fetch_index(channel_urls, use_cache=False, unknown=False, index=None):\n log.debug('channel_urls=' + repr(channel_urls))\n # pool = ThreadPool(5)\n if index is None:\n index = {}\n stdoutlog.info(\"Fetching package metadata ...\")\n # if not isinstance(channel_urls, dict):\n # channel_urls = prioritize_channels(channel_urls)\n\n urls = tuple(filter(offline_keep, channel_urls))\n try:\n import concurrent.futures\n executor = concurrent.futures.ThreadPoolExecutor(10)\n except (ImportError, RuntimeError) as e:\n # concurrent.futures is only available in Python >= 3.2 or if futures is installed\n # RuntimeError is thrown if number of 
threads are limited by OS\n log.debug(repr(e))\n session = CondaSession()\n repodatas = [(url, fetch_repodata(url, use_cache=use_cache, session=session))\n for url in urls]\n else:\n try:\n futures = tuple(executor.submit(fetch_repodata, url, use_cache=use_cache,\n session=CondaSession()) for url in urls)\n repodatas = [(u, f.result()) for u, f in zip(urls, futures)]\n except RuntimeError as e:\n # Cannot start new thread, then give up parallel execution\n log.debug(repr(e))\n session = CondaSession()\n repodatas = [(url, fetch_repodata(url, use_cache=use_cache, session=session))\n for url in urls]\n finally:\n executor.shutdown(wait=True)\n\n for channel, repodata in repodatas:\n if repodata is None:\n continue\n new_index = repodata['packages']\n url_s, priority = channel_urls[channel]\n channel = channel.rstrip('/')\n for fn, info in iteritems(new_index):\n info['fn'] = fn\n info['schannel'] = url_s\n info['channel'] = channel\n info['priority'] = priority\n info['url'] = channel + '/' + fn\n key = url_s + '::' + fn if url_s != 'defaults' else fn\n index[key] = info\n\n stdoutlog.info('\\n')\n if unknown:\n add_unknown(index, channel_urls)\n if context.add_pip_as_python_dependency:\n add_pip_dependency(index)\n return index\n\n\ndef fetch_pkg(info, dst_dir=None, session=None):\n '''\n fetch a package given by `info` and store it into `dst_dir`\n '''\n\n session = session or CondaSession()\n\n fn = info['fn']\n url = info.get('url')\n if url is None:\n url = info['channel'] + '/' + fn\n log.debug(\"url=%r\" % url)\n if dst_dir is None:\n dst_dir = find_new_location(fn[:-8])[0]\n path = join(dst_dir, fn)\n\n download(url, path, session=session, md5=info['md5'], urlstxt=True)\n if info.get('sig'):\n from .signature import verify\n\n fn2 = fn + '.sig'\n url = (info['channel'] if info['sig'] == '.' else\n info['sig'].rstrip('/')) + '/' + fn2\n log.debug(\"signature url=%r\" % url)\n download(url, join(dst_dir, fn2), session=session)\n try:\n if verify(path):\n return\n except CondaSignatureError:\n raise\n\n raise CondaSignatureError(\"Error: Signature for '%s' is invalid.\" % (basename(path)))\n\n\ndef download(url, dst_path, session=None, md5=None, urlstxt=False, retries=None):\n assert \"::\" not in str(url), url\n assert \"::\" not in str(dst_path), str(dst_path)\n if not offline_keep(url):\n raise RuntimeError(\"Cannot download in offline mode: %s\" % (url,))\n\n pp = dst_path + '.part'\n dst_dir = dirname(dst_path)\n session = session or CondaSession()\n\n if not context.ssl_verify:\n try:\n from requests.packages.urllib3.connectionpool import InsecureRequestWarning\n except ImportError:\n pass\n else:\n warnings.simplefilter('ignore', InsecureRequestWarning)\n\n if retries is None:\n retries = RETRIES\n\n with FileLock(dst_path):\n rm_rf(dst_path)\n try:\n resp = session.get(url, stream=True, proxies=session.proxies, timeout=(3.05, 27))\n resp.raise_for_status()\n except requests.exceptions.HTTPError as e:\n if e.response.status_code == 407: # Proxy Authentication Required\n handle_proxy_407(url, session)\n # Try again\n return download(url, dst_path, session=session, md5=md5,\n urlstxt=urlstxt, retries=retries)\n msg = \"HTTPError: %s: %s\\n\" % (e, url)\n log.debug(msg)\n raise CondaRuntimeError(msg)\n\n except requests.exceptions.ConnectionError as e:\n # requests isn't so nice here. For whatever reason, https gives\n # this error and http gives the above error. Also, there is no\n # status_code attribute here. 
We have to just check if it looks\n # like 407.\n # See: https://github.com/kennethreitz/requests/issues/2061.\n if \"407\" in str(e): # Proxy Authentication Required\n handle_proxy_407(url, session)\n # try again\n return download(url, dst_path, session=session, md5=md5,\n urlstxt=urlstxt, retries=retries)\n msg = \"Connection error: %s: %s\\n\" % (e, url)\n stderrlog.info('Could not connect to %s\\n' % url)\n log.debug(msg)\n raise CondaRuntimeError(msg)\n\n except IOError as e:\n raise CondaRuntimeError(\"Could not open '%s': %s\" % (url, e))\n\n size = resp.headers.get('Content-Length')\n if size:\n size = int(size)\n fn = basename(dst_path)\n getLogger('fetch.start').info((fn[:14], size))\n\n if md5:\n h = hashlib.new('md5')\n try:\n with open(pp, 'wb') as fo:\n index = 0\n for chunk in resp.iter_content(2**14):\n index += len(chunk)\n try:\n fo.write(chunk)\n except IOError:\n raise CondaRuntimeError(\"Failed to write to %r.\" % pp)\n\n if md5:\n h.update(chunk)\n\n if size and 0 <= index <= size:\n getLogger('fetch.update').info(index)\n\n except IOError as e:\n if e.errno == 104 and retries: # Connection reset by pee\n # try again\n log.debug(\"%s, trying again\" % e)\n return download(url, dst_path, session=session, md5=md5,\n urlstxt=urlstxt, retries=retries - 1)\n raise CondaRuntimeError(\"Could not open %r for writing (%s).\" % (pp, e))\n\n if size:\n getLogger('fetch.stop').info(None)\n\n if md5 and h.hexdigest() != md5:\n if retries:\n # try again\n log.debug(\"MD5 sums mismatch for download: %s (%s != %s), \"\n \"trying again\" % (url, h.hexdigest(), md5))\n return download(url, dst_path, session=session, md5=md5,\n urlstxt=urlstxt, retries=retries - 1)\n raise MD5MismatchError(\"MD5 sums mismatch for download: %s (%s != %s)\"\n % (url, h.hexdigest(), md5))\n\n try:\n exp_backoff_fn(os.rename, pp, dst_path)\n except OSError as e:\n raise CondaRuntimeError(\"Could not rename %r to %r: %r\" %\n (pp, dst_path, e))\n\n if urlstxt:\n add_cached_package(dst_dir, url, overwrite=True, urlstxt=True)\n\n\nclass TmpDownload(object):\n \"\"\"\n Context manager to handle downloads to a tempfile\n \"\"\"\n def __init__(self, url, verbose=True):\n self.url = url\n self.verbose = verbose\n\n def __enter__(self):\n if '://' not in self.url:\n # if we provide the file itself, no tmp dir is created\n self.tmp_dir = None\n return self.url\n else:\n if self.verbose:\n from .console import setup_handlers\n setup_handlers()\n self.tmp_dir = tempfile.mkdtemp()\n dst = join(self.tmp_dir, basename(self.url))\n download(self.url, dst)\n return dst\n\n def __exit__(self, exc_type, exc_value, traceback):\n if self.tmp_dir:\n shutil.rmtree(self.tmp_dir)\n", "path": "conda/fetch.py"}], "after_files": [{"content": "# (c) 2012-2015 Continuum Analytics, Inc. 
/ http://continuum.io\n# All Rights Reserved\n#\n# conda is distributed under the terms of the BSD 3-clause license.\n# Consult LICENSE.txt or http://opensource.org/licenses/BSD-3-Clause.\nfrom __future__ import print_function, division, absolute_import, unicode_literals\n\nimport bz2\nimport getpass\nimport hashlib\nimport json\nimport os\nimport requests\nimport shutil\nimport tempfile\nimport warnings\nfrom functools import wraps\nfrom logging import getLogger, DEBUG\nfrom os.path import basename, dirname, join\nfrom requests.packages.urllib3.connectionpool import InsecureRequestWarning\n\nfrom ._vendor.auxlib.logz import stringify\nfrom .base.context import context\nfrom .common.url import add_username_and_pass_to_url, url_to_path\nfrom .compat import itervalues, input, iteritems\nfrom .connection import CondaSession, RETRIES\nfrom .models.channel import Channel, offline_keep\nfrom .exceptions import (ProxyError, CondaRuntimeError, CondaSignatureError, CondaHTTPError,\n MD5MismatchError)\nfrom .install import add_cached_package, find_new_location, package_cache, dist2pair, rm_rf\nfrom .lock import FileLock\nfrom .utils import exp_backoff_fn, memoized\n\nlog = getLogger(__name__)\ndotlog = getLogger('dotupdate')\nstdoutlog = getLogger('stdoutlog')\nstderrlog = getLogger('stderrlog')\n\nfail_unknown_host = False\n\n\ndef create_cache_dir():\n cache_dir = join(context.pkgs_dirs[0], 'cache')\n try:\n os.makedirs(cache_dir)\n except OSError:\n pass\n return cache_dir\n\n\ndef cache_fn_url(url):\n md5 = hashlib.md5(url.encode('utf-8')).hexdigest()\n return '%s.json' % (md5[:8],)\n\n\ndef add_http_value_to_dict(resp, http_key, d, dict_key):\n value = resp.headers.get(http_key)\n if value:\n d[dict_key] = value\n\n# We need a decorator so that the dot gets printed *after* the repodata is fetched\nclass dotlog_on_return(object):\n def __init__(self, msg):\n self.msg = msg\n\n def __call__(self, f):\n @wraps(f)\n def func(*args, **kwargs):\n res = f(*args, **kwargs)\n dotlog.debug(\"%s args %s kwargs %s\" % (self.msg, args, kwargs))\n return res\n return func\n\n\n@dotlog_on_return(\"fetching repodata:\")\ndef fetch_repodata(url, cache_dir=None, use_cache=False, session=None):\n if not offline_keep(url):\n return {'packages': {}}\n cache_path = join(cache_dir or create_cache_dir(), cache_fn_url(url))\n try:\n log.debug(\"Opening repodata cache for %s at %s\", url, cache_path)\n with open(cache_path) as f:\n cache = json.load(f)\n except (IOError, ValueError):\n cache = {'packages': {}}\n\n if use_cache:\n return cache\n\n if not context.ssl_verify:\n warnings.simplefilter('ignore', InsecureRequestWarning)\n\n session = session or CondaSession()\n\n headers = {}\n if \"_etag\" in cache:\n headers[\"If-None-Match\"] = cache[\"_etag\"]\n if \"_mod\" in cache:\n headers[\"If-Modified-Since\"] = cache[\"_mod\"]\n\n if 'repo.continuum.io' in url or url.startswith(\"file://\"):\n filename = 'repodata.json.bz2'\n headers['Accept-Encoding'] = 'identity'\n else:\n headers['Accept-Encoding'] = 'gzip, deflate, compress, identity'\n headers['Content-Type'] = 'application/json'\n filename = 'repodata.json'\n\n try:\n resp = session.get(url + filename, headers=headers, proxies=session.proxies,\n timeout=(3.05, 60))\n if log.isEnabledFor(DEBUG):\n log.debug(stringify(resp))\n resp.raise_for_status()\n\n if resp.status_code != 304:\n def get_json_str(filename, resp_content):\n if filename.endswith('.bz2'):\n return bz2.decompress(resp_content).decode('utf-8')\n else:\n return resp_content.decode('utf-8')\n\n if 
url.startswith('file://'):\n file_path = url_to_path(url)\n with FileLock(dirname(file_path)):\n json_str = get_json_str(filename, resp.content)\n else:\n json_str = get_json_str(filename, resp.content)\n\n cache = json.loads(json_str)\n add_http_value_to_dict(resp, 'Etag', cache, '_etag')\n add_http_value_to_dict(resp, 'Last-Modified', cache, '_mod')\n\n except ValueError as e:\n raise CondaRuntimeError(\"Invalid index file: {0}{1}: {2}\"\n .format(url, filename, e))\n\n except requests.exceptions.HTTPError as e:\n if e.response.status_code == 407: # Proxy Authentication Required\n handle_proxy_407(url, session)\n # Try again\n return fetch_repodata(url, cache_dir=cache_dir, use_cache=use_cache, session=session)\n\n if e.response.status_code == 404:\n if url.endswith('/noarch/'): # noarch directory might not exist\n return None\n msg = 'Could not find URL: %s' % url\n elif e.response.status_code == 403 and url.endswith('/noarch/'):\n return None\n\n elif e.response.status_code == 401 and context.channel_alias in url:\n # Note, this will not trigger if the binstar configured url does\n # not match the conda configured one.\n msg = (\"Warning: you may need to login to anaconda.org again with \"\n \"'anaconda login' to access private packages(%s, %s)\" %\n (url, e))\n stderrlog.info(msg)\n return fetch_repodata(url, cache_dir=cache_dir, use_cache=use_cache, session=session)\n\n else:\n msg = \"HTTPError: %s: %s\\n\" % (e, url)\n\n log.debug(msg)\n raise CondaHTTPError(msg)\n\n except requests.exceptions.SSLError as e:\n msg = \"SSL Error: %s\\n\" % e\n stderrlog.info(\"SSL verification error: %s\\n\" % e)\n log.debug(msg)\n\n except requests.exceptions.ConnectionError as e:\n # requests isn't so nice here. For whatever reason, https gives this\n # error and http gives the above error. Also, there is no status_code\n # attribute here. We have to just check if it looks like 407. See\n # https://github.com/kennethreitz/requests/issues/2061.\n if \"407\" in str(e): # Proxy Authentication Required\n handle_proxy_407(url, session)\n # Try again\n return fetch_repodata(url, cache_dir=cache_dir, use_cache=use_cache, session=session)\n msg = \"Connection error: %s: %s\\n\" % (e, url)\n stderrlog.info('Could not connect to %s\\n' % url)\n log.debug(msg)\n if fail_unknown_host:\n raise CondaRuntimeError(msg)\n\n raise CondaRuntimeError(msg)\n cache['_url'] = url\n try:\n with open(cache_path, 'w') as fo:\n json.dump(cache, fo, indent=2, sort_keys=True)\n except IOError:\n pass\n\n return cache or None\n\n\ndef handle_proxy_407(url, session):\n \"\"\"\n Prompts the user for the proxy username and password and modifies the\n proxy in the session object to include it.\n \"\"\"\n # We could also use HTTPProxyAuth, but this does not work with https\n # proxies (see https://github.com/kennethreitz/requests/issues/2061).\n scheme = requests.packages.urllib3.util.url.parse_url(url).scheme\n if scheme not in session.proxies:\n raise ProxyError(\"\"\"Could not find a proxy for %r. 
See\nhttp://conda.pydata.org/docs/html#configure-conda-for-use-behind-a-proxy-server\nfor more information on how to configure proxies.\"\"\" % scheme)\n username, passwd = get_proxy_username_and_pass(scheme)\n session.proxies[scheme] = add_username_and_pass_to_url(\n session.proxies[scheme], username, passwd)\n\n\n@memoized\ndef get_proxy_username_and_pass(scheme):\n username = input(\"\\n%s proxy username: \" % scheme)\n passwd = getpass.getpass(\"Password:\")\n return username, passwd\n\ndef add_unknown(index, priorities):\n priorities = {p[0]: p[1] for p in itervalues(priorities)}\n maxp = max(itervalues(priorities)) + 1 if priorities else 1\n for dist, info in iteritems(package_cache()):\n schannel, dname = dist2pair(dist)\n fname = dname + '.tar.bz2'\n fkey = dist + '.tar.bz2'\n if fkey in index or not info['dirs']:\n continue\n try:\n with open(join(info['dirs'][0], 'info', 'index.json')) as fi:\n meta = json.load(fi)\n except IOError:\n continue\n if info['urls']:\n url = info['urls'][0]\n elif meta.get('url'):\n url = meta['url']\n elif meta.get('channel'):\n url = meta['channel'].rstrip('/') + '/' + fname\n else:\n url = '<unknown>/' + fname\n if url.rsplit('/', 1)[-1] != fname:\n continue\n channel, schannel2 = Channel(url).url_channel_wtf\n if schannel2 != schannel:\n continue\n priority = priorities.get(schannel, maxp)\n if 'link' in meta:\n del meta['link']\n meta.update({'fn': fname, 'url': url, 'channel': channel,\n 'schannel': schannel, 'priority': priority})\n meta.setdefault('depends', [])\n log.debug(\"adding cached pkg to index: %s\" % fkey)\n index[fkey] = meta\n\ndef add_pip_dependency(index):\n for info in itervalues(index):\n if (info['name'] == 'python' and\n info['version'].startswith(('2.', '3.'))):\n info.setdefault('depends', []).append('pip')\n\ndef fetch_index(channel_urls, use_cache=False, unknown=False, index=None):\n log.debug('channel_urls=' + repr(channel_urls))\n # pool = ThreadPool(5)\n if index is None:\n index = {}\n stdoutlog.info(\"Fetching package metadata ...\")\n # if not isinstance(channel_urls, dict):\n # channel_urls = prioritize_channels(channel_urls)\n\n urls = tuple(filter(offline_keep, channel_urls))\n try:\n import concurrent.futures\n executor = concurrent.futures.ThreadPoolExecutor(10)\n except (ImportError, RuntimeError) as e:\n # concurrent.futures is only available in Python >= 3.2 or if futures is installed\n # RuntimeError is thrown if number of threads are limited by OS\n log.debug(repr(e))\n session = CondaSession()\n repodatas = [(url, fetch_repodata(url, use_cache=use_cache, session=session))\n for url in urls]\n else:\n try:\n futures = tuple(executor.submit(fetch_repodata, url, use_cache=use_cache,\n session=CondaSession()) for url in urls)\n repodatas = [(u, f.result()) for u, f in zip(urls, futures)]\n except RuntimeError as e:\n # Cannot start new thread, then give up parallel execution\n log.debug(repr(e))\n session = CondaSession()\n repodatas = [(url, fetch_repodata(url, use_cache=use_cache, session=session))\n for url in urls]\n finally:\n executor.shutdown(wait=True)\n\n for channel, repodata in repodatas:\n if repodata is None:\n continue\n new_index = repodata['packages']\n url_s, priority = channel_urls[channel]\n channel = channel.rstrip('/')\n for fn, info in iteritems(new_index):\n info['fn'] = fn\n info['schannel'] = url_s\n info['channel'] = channel\n info['priority'] = priority\n info['url'] = channel + '/' + fn\n key = url_s + '::' + fn if url_s != 'defaults' else fn\n index[key] = info\n\n 
stdoutlog.info('\\n')\n if unknown:\n add_unknown(index, channel_urls)\n if context.add_pip_as_python_dependency:\n add_pip_dependency(index)\n return index\n\n\ndef fetch_pkg(info, dst_dir=None, session=None):\n '''\n fetch a package given by `info` and store it into `dst_dir`\n '''\n\n session = session or CondaSession()\n\n fn = info['fn']\n url = info.get('url')\n if url is None:\n url = info['channel'] + '/' + fn\n log.debug(\"url=%r\" % url)\n if dst_dir is None:\n dst_dir = find_new_location(fn[:-8])[0]\n path = join(dst_dir, fn)\n\n download(url, path, session=session, md5=info['md5'], urlstxt=True)\n if info.get('sig'):\n from .signature import verify\n\n fn2 = fn + '.sig'\n url = (info['channel'] if info['sig'] == '.' else\n info['sig'].rstrip('/')) + '/' + fn2\n log.debug(\"signature url=%r\" % url)\n download(url, join(dst_dir, fn2), session=session)\n try:\n if verify(path):\n return\n except CondaSignatureError:\n raise\n\n raise CondaSignatureError(\"Error: Signature for '%s' is invalid.\" % (basename(path)))\n\n\ndef download(url, dst_path, session=None, md5=None, urlstxt=False, retries=None):\n assert \"::\" not in str(dst_path), str(dst_path)\n if not offline_keep(url):\n raise RuntimeError(\"Cannot download in offline mode: %s\" % (url,))\n\n pp = dst_path + '.part'\n dst_dir = dirname(dst_path)\n session = session or CondaSession()\n\n if not context.ssl_verify:\n try:\n from requests.packages.urllib3.connectionpool import InsecureRequestWarning\n except ImportError:\n pass\n else:\n warnings.simplefilter('ignore', InsecureRequestWarning)\n\n if retries is None:\n retries = RETRIES\n\n with FileLock(dst_path):\n rm_rf(dst_path)\n try:\n resp = session.get(url, stream=True, proxies=session.proxies, timeout=(3.05, 27))\n resp.raise_for_status()\n except requests.exceptions.HTTPError as e:\n if e.response.status_code == 407: # Proxy Authentication Required\n handle_proxy_407(url, session)\n # Try again\n return download(url, dst_path, session=session, md5=md5,\n urlstxt=urlstxt, retries=retries)\n msg = \"HTTPError: %s: %s\\n\" % (e, url)\n log.debug(msg)\n raise CondaRuntimeError(msg)\n\n except requests.exceptions.ConnectionError as e:\n # requests isn't so nice here. For whatever reason, https gives\n # this error and http gives the above error. Also, there is no\n # status_code attribute here. 
We have to just check if it looks\n # like 407.\n # See: https://github.com/kennethreitz/requests/issues/2061.\n if \"407\" in str(e): # Proxy Authentication Required\n handle_proxy_407(url, session)\n # try again\n return download(url, dst_path, session=session, md5=md5,\n urlstxt=urlstxt, retries=retries)\n msg = \"Connection error: %s: %s\\n\" % (e, url)\n stderrlog.info('Could not connect to %s\\n' % url)\n log.debug(msg)\n raise CondaRuntimeError(msg)\n\n except IOError as e:\n raise CondaRuntimeError(\"Could not open '%s': %s\" % (url, e))\n\n size = resp.headers.get('Content-Length')\n if size:\n size = int(size)\n fn = basename(dst_path)\n getLogger('fetch.start').info((fn[:14], size))\n\n if md5:\n h = hashlib.new('md5')\n try:\n with open(pp, 'wb') as fo:\n index = 0\n for chunk in resp.iter_content(2**14):\n index += len(chunk)\n try:\n fo.write(chunk)\n except IOError:\n raise CondaRuntimeError(\"Failed to write to %r.\" % pp)\n\n if md5:\n h.update(chunk)\n\n if size and 0 <= index <= size:\n getLogger('fetch.update').info(index)\n\n except IOError as e:\n if e.errno == 104 and retries: # Connection reset by pee\n # try again\n log.debug(\"%s, trying again\" % e)\n return download(url, dst_path, session=session, md5=md5,\n urlstxt=urlstxt, retries=retries - 1)\n raise CondaRuntimeError(\"Could not open %r for writing (%s).\" % (pp, e))\n\n if size:\n getLogger('fetch.stop').info(None)\n\n if md5 and h.hexdigest() != md5:\n if retries:\n # try again\n log.debug(\"MD5 sums mismatch for download: %s (%s != %s), \"\n \"trying again\" % (url, h.hexdigest(), md5))\n return download(url, dst_path, session=session, md5=md5,\n urlstxt=urlstxt, retries=retries - 1)\n raise MD5MismatchError(\"MD5 sums mismatch for download: %s (%s != %s)\"\n % (url, h.hexdigest(), md5))\n\n try:\n exp_backoff_fn(os.rename, pp, dst_path)\n except OSError as e:\n raise CondaRuntimeError(\"Could not rename %r to %r: %r\" %\n (pp, dst_path, e))\n\n if urlstxt:\n add_cached_package(dst_dir, url, overwrite=True, urlstxt=True)\n\n\nclass TmpDownload(object):\n \"\"\"\n Context manager to handle downloads to a tempfile\n \"\"\"\n def __init__(self, url, verbose=True):\n self.url = url\n self.verbose = verbose\n\n def __enter__(self):\n if '://' not in self.url:\n # if we provide the file itself, no tmp dir is created\n self.tmp_dir = None\n return self.url\n else:\n if self.verbose:\n from .console import setup_handlers\n setup_handlers()\n self.tmp_dir = tempfile.mkdtemp()\n dst = join(self.tmp_dir, basename(self.url))\n download(self.url, dst)\n return dst\n\n def __exit__(self, exc_type, exc_value, traceback):\n if self.tmp_dir:\n shutil.rmtree(self.tmp_dir)\n", "path": "conda/fetch.py"}]} |
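The conda patch recorded above is a one-line removal: it drops the `assert "::" not in str(url), url` guard from `download()` in `conda/fetch.py` while keeping the corresponding check on the local destination path, because URLs can legitimately contain `::`. A minimal illustration of the failure mode from that report (the URL is taken from the issue; the destination path is a made-up example):

```python
# Reproduces the AssertionError described in the conda issue above (illustrative only).
url = "http://api.metacpan.org/v0/module/Test::More"   # legitimate URL containing '::'
dst_path = "/tmp/Test-More.json"                        # hypothetical local target path

assert "::" not in str(dst_path), str(dst_path)  # still reasonable for a local path
assert "::" not in str(url), url                 # the removed guard: fails for this valid URL
```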
gh_patches_debug_1195 | rasdani/github-patches | git_diff | kivy__kivy-2855 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Please distribute pxd files, expose c apis.
I'm writing some Kivy extension code, and I want to cimport Kivy's extension types, which is more efficient than going through the Python API, but Kivy doesn't distribute its pxd files to the installation directory.
I can set PYTHONPATH to Kivy's source directory and ship the Cython-compiled C file with my library, but it would be better if Kivy distributed the pxd files itself.
--- END ISSUE ---
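To make the request concrete, the kind of downstream module the reporter describes is a Cython source that cimports one of Kivy's extension types so attribute access and method calls happen at C level instead of going through Python attribute lookup. The sketch below is illustrative only: the class and attribute names are assumptions, and it can only compile if the corresponding `.pxd` files (here `kivy/_event.pxd`) are installed alongside the package.

```cython
# fast_counter.pyx -- hypothetical downstream extension, not part of Kivy itself.
from kivy._event cimport EventDispatcher   # resolvable only if _event.pxd is shipped

cdef class FastCounter(EventDispatcher):
    cdef public int count            # C-level storage, no per-access dict lookup

    cpdef bump(self, int step=1):    # callable from both C and Python callers
        self.count += step
```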
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #
2 # Kivy - Crossplatform NUI toolkit
3 # http://kivy.org/
4 #
5
6 import sys
7
8 from copy import deepcopy
9 import os
10 from os.path import join, dirname, sep, exists, basename
11 from os import walk, environ
12 from distutils.core import setup
13 from distutils.extension import Extension
14 from collections import OrderedDict
15
16 if sys.version > '3':
17
18 PY3 = True
19 else:
20 PY3 = False
21
22
23 def getoutput(cmd):
24 import subprocess
25 p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
26 return p.communicate()[0]
27
28
29 def pkgconfig(*packages, **kw):
30 flag_map = {'-I': 'include_dirs', '-L': 'library_dirs', '-l': 'libraries'}
31 cmd = 'pkg-config --libs --cflags {}'.format(' '.join(packages))
32 for token in getoutput(cmd).split():
33 ext = token[:2].decode('utf-8')
34 flag = flag_map.get(ext)
35 if not flag:
36 continue
37 kw.setdefault(flag, []).append(token[2:].decode('utf-8'))
38 return kw
39
40
41 # -----------------------------------------------------------------------------
42 # Determine on which platform we are
43
44 platform = sys.platform
45
46 # Detect 32/64bit for OSX (http://stackoverflow.com/a/1405971/798575)
47 if sys.platform == 'darwin':
48 if sys.maxsize > 2 ** 32:
49 osx_arch = 'x86_64'
50 else:
51 osx_arch = 'i386'
52
53 # Detect Python for android project (http://github.com/kivy/python-for-android)
54 ndkplatform = environ.get('NDKPLATFORM')
55 if ndkplatform is not None and environ.get('LIBLINK'):
56 platform = 'android'
57 kivy_ios_root = environ.get('KIVYIOSROOT', None)
58 if kivy_ios_root is not None:
59 platform = 'ios'
60 if exists('/opt/vc/include/bcm_host.h'):
61 platform = 'rpi'
62
63 # -----------------------------------------------------------------------------
64 # Detect options
65 #
66 c_options = OrderedDict()
67 c_options['use_rpi'] = platform == 'rpi'
68 c_options['use_opengl_es2'] = None
69 c_options['use_opengl_debug'] = False
70 c_options['use_glew'] = False
71 c_options['use_sdl'] = False
72 c_options['use_sdl2'] = False
73 c_options['use_ios'] = False
74 c_options['use_mesagl'] = False
75 c_options['use_x11'] = False
76 c_options['use_gstreamer'] = False
77 c_options['use_avfoundation'] = platform == 'darwin'
78 c_options['use_osx_frameworks'] = platform == 'darwin'
79
80 # now check if environ is changing the default values
81 for key in list(c_options.keys()):
82 ukey = key.upper()
83 if ukey in environ:
84 value = bool(int(environ[ukey]))
85 print('Environ change {0} -> {1}'.format(key, value))
86 c_options[key] = value
87
88 # -----------------------------------------------------------------------------
89 # Cython check
90 # on python-for-android and kivy-ios, cython usage is external
91 have_cython = False
92 if platform in ('ios', 'android'):
93 print('\nCython check avoided.')
94 else:
95 try:
96 # check for cython
97 from Cython.Distutils import build_ext
98 have_cython = True
99 except ImportError:
100 print('\nCython is missing, its required for compiling kivy !\n\n')
101 raise
102
103 if not have_cython:
104 from distutils.command.build_ext import build_ext
105
106 # -----------------------------------------------------------------------------
107 # Setup classes
108
109
110 class KivyBuildExt(build_ext):
111
112 def build_extensions(self):
113 print('Build configuration is:')
114 for opt, value in c_options.items():
115 print(' * {0} = {1}'.format(opt, value))
116 debug = bool(self.debug)
117 print(' * debug = {0}'.format(debug))
118 print('Generate config.h')
119 config_h_fn = expand('graphics', 'config.h')
120 config_h = '// Autogenerated file for Kivy C configuration\n'
121 config_h += '#define __PY3 {0}\n'.format(int(PY3))
122 for k, v in c_options.items():
123 config_h += '#define __{0} {1}\n'.format(k.upper(), int(v))
124 self.update_if_changed(config_h_fn, config_h)
125
126 print('Generate config.pxi')
127 config_pxi_fn = expand('graphics', 'config.pxi')
128 # update the pxi only if the content changed
129 config_pxi = '# Autogenerated file for Kivy Cython configuration\n'
130 config_pxi += 'DEF PY3 = {0}\n'.format(int(PY3))
131 for k, v in c_options.items():
132 config_pxi += 'DEF {0} = {1}\n'.format(k.upper(), int(v))
133 config_pxi += 'DEF DEBUG = {0}\n'.format(debug)
134 self.update_if_changed(config_pxi_fn, config_pxi)
135
136 print('Generate setupconfig.py')
137 config_py_fn = expand('setupconfig.py')
138 config_py = '# Autogenerated file for Kivy configuration\n'
139 config_py += 'PY3 = {0}\n'.format(int(PY3))
140 for k, v in c_options.items():
141 config_py += '{0} = {1}\n'.format(k.upper(), int(v))
142 config_py += 'DEBUG = {0}\n'.format(debug)
143 self.update_if_changed(config_py_fn, config_py)
144
145 c = self.compiler.compiler_type
146 print('Detected compiler is {}'.format(c))
147 if c != 'msvc':
148 for e in self.extensions:
149 e.extra_link_args += ['-lm']
150
151 build_ext.build_extensions(self)
152
153 def update_if_changed(self, fn, content):
154 need_update = True
155 if exists(fn):
156 with open(fn) as fd:
157 need_update = fd.read() != content
158 if need_update:
159 with open(fn, 'w') as fd:
160 fd.write(content)
161
162
163 # -----------------------------------------------------------------------------
164 # extract version (simulate doc generation, kivy will be not imported)
165 environ['KIVY_DOC_INCLUDE'] = '1'
166 import kivy
167
168 # extra build commands go in the cmdclass dict {'command-name': CommandClass}
169 # see tools.packaging.{platform}.build.py for custom build commands for
170 # portable packages. also e.g. we use build_ext command from cython if its
171 # installed for c extensions.
172 from kivy.tools.packaging.factory import FactoryBuild
173 cmdclass = {
174 'build_factory': FactoryBuild,
175 'build_ext': KivyBuildExt}
176
177 try:
178 # add build rules for portable packages to cmdclass
179 if platform == 'win32':
180 from kivy.tools.packaging.win32.build import WindowsPortableBuild
181 cmdclass['build_portable'] = WindowsPortableBuild
182 elif platform == 'darwin':
183 from kivy.tools.packaging.osx.build import OSXPortableBuild
184 cmdclass['build_portable'] = OSXPortableBuild
185 except ImportError:
186 print('User distribution detected, avoid portable command.')
187
188 # Detect which opengl version headers to use
189 if platform in ('android', 'darwin', 'ios', 'rpi'):
190 c_options['use_opengl_es2'] = True
191 elif platform == 'win32':
192 print('Windows platform detected, force GLEW usage.')
193 c_options['use_glew'] = True
194 c_options['use_opengl_es2'] = False
195 else:
196 if c_options['use_opengl_es2'] is None:
197 GLES = environ.get('GRAPHICS') == 'GLES'
198 OPENGL = environ.get('GRAPHICS') == 'OPENGL'
199 if GLES:
200 c_options['use_opengl_es2'] = True
201 elif OPENGL:
202 c_options['use_opengl_es2'] = False
203 else:
204 # auto detection of GLES headers
205 default_header_dirs = ['/usr/include', '/usr/local/include']
206 c_options['use_opengl_es2'] = False
207 for hdir in default_header_dirs:
208 filename = join(hdir, 'GLES2', 'gl2.h')
209 if exists(filename):
210 c_options['use_opengl_es2'] = True
211 print('NOTE: Found GLES 2.0 headers at {0}'.format(
212 filename))
213 break
214 if not c_options['use_opengl_es2']:
215 print('NOTE: Not found GLES 2.0 headers at: {}'.format(
216 default_header_dirs))
217 print(' Please contact us if your distribution '
218 'uses an alternative path for the headers.')
219
220 print('Using this graphics system: {}'.format(
221 ['OpenGL', 'OpenGL ES 2'][int(c_options['use_opengl_es2'] or False)]))
222
223 # check if we are in a kivy-ios build
224 if platform == 'ios':
225 print('Kivy-IOS project environment detect, use it.')
226 print('Kivy-IOS project located at {0}'.format(kivy_ios_root))
227 print('Activate SDL compilation.')
228 c_options['use_ios'] = True
229 c_options['use_sdl'] = True
230
231 # detect gstreamer/sdl2, only on desktop
232 sdl2_flags = {}
233 if platform not in ('ios', 'android'):
234
235 if c_options['use_osx_frameworks'] and platform == 'darwin':
236 # check the existence of frameworks
237 f_path = '/Library/Frameworks/GStreamer.framework'
238 if not exists(f_path):
239 c_options['use_gstreamer'] = False
240 print('Missing GStreamer framework {}'.format(f_path))
241 else:
242 c_options['use_gstreamer'] = True
243 gst_flags = {
244 'extra_link_args': [
245 '-Xlinker', '-headerpad',
246 '-Xlinker', '190',
247 '-framework', 'GStreamer'],
248 'include_dirs': [join(f_path, 'Headers')]}
249
250 sdl2_valid = True
251 sdl2_flags = {
252 'extra_link_args': [
253 '-Xlinker', '-headerpad',
254 '-Xlinker', '190'],
255 'include_dirs': []
256 }
257 for name in ('SDL2', 'SDL2_ttf', 'SDL2_image', 'SDL2_mixer'):
258 f_path = '/Library/Frameworks/{}.framework'.format(name)
259 if not exists(f_path):
260 print('Missing framework {}'.format(f_path))
261 sdl2_valid = False
262 continue
263 sdl2_flags['extra_link_args'] += ['-framework', name]
264 sdl2_flags['include_dirs'] += [join(f_path, 'Headers')]
265 print('Found sdl2 frameworks: {}'.format(f_path))
266
267 if not sdl2_valid:
268 c_options['use_sdl2'] = False
269 print('Deactivate SDL2 compilation due to missing frameworks')
270 else:
271 c_options['use_sdl2'] = True
272 print('Activate SDL2 compilation')
273
274 else:
275 # use pkg-config approach instead
276 gst_flags = pkgconfig('gstreamer-1.0')
277 if 'libraries' in gst_flags:
278 c_options['use_gstreamer'] = True
279 sdl2_flags = pkgconfig('sdl2', 'SDL2_ttf', 'SDL2_image', 'SDL2_mixer')
280 if 'libraries' in sdl2_flags:
281 c_options['use_sdl2'] = True
282
283 if c_options['use_sdl2']:
284 print('SDL2 compilation enabled, deactivate 1.x')
285 c_options['use_sdl'] = False
286
287
288 # -----------------------------------------------------------------------------
289 # declare flags
290
291
292 def get_modulename_from_file(filename):
293 filename = filename.replace(sep, '/')
294 pyx = '.'.join(filename.split('.')[:-1])
295 pyxl = pyx.split('/')
296 while pyxl[0] != 'kivy':
297 pyxl.pop(0)
298 if pyxl[1] == 'kivy':
299 pyxl.pop(0)
300 return '.'.join(pyxl)
301
302
303 def expand(*args):
304 return join(dirname(__file__), 'kivy', *args)
305
306
307 class CythonExtension(Extension):
308
309 def __init__(self, *args, **kwargs):
310 Extension.__init__(self, *args, **kwargs)
311 self.cython_directives = {
312 'c_string_encoding': 'utf-8',
313 'profile': 'USE_PROFILE' in environ,
314 'embedsignature': 'USE_EMBEDSIGNATURE' in environ}
315 # XXX with pip, setuptools is imported before distutils, and change
316 # our pyx to c, then, cythonize doesn't happen. So force again our
317 # sources
318 self.sources = args[1]
319
320
321 def merge(d1, *args):
322 d1 = deepcopy(d1)
323 for d2 in args:
324 for key, value in d2.items():
325 value = deepcopy(value)
326 if key in d1:
327 d1[key].extend(value)
328 else:
329 d1[key] = value
330 return d1
331
332
333 def determine_base_flags():
334 flags = {
335 'libraries': [],
336 'include_dirs': [],
337 'extra_link_args': [],
338 'extra_compile_args': []}
339 if c_options['use_ios']:
340 sysroot = environ.get('IOSSDKROOT', environ.get('SDKROOT'))
341 if not sysroot:
342 raise Exception('IOSSDKROOT is not set')
343 flags['include_dirs'] += [sysroot]
344 flags['extra_compile_args'] += ['-isysroot', sysroot]
345 flags['extra_link_args'] += ['-isysroot', sysroot]
346 elif platform == 'darwin':
347 v = os.uname()
348 if v[2] >= '13.0.0':
349 # use xcode-select to search on the right Xcode path
350 # XXX use the best SDK available instead of a specific one
351 import platform as _platform
352 xcode_dev = getoutput('xcode-select -p').splitlines()[0]
353 sdk_mac_ver = '.'.join(_platform.mac_ver()[0].split('.')[:2])
354 print('Xcode detected at {}, and using MacOSX{} sdk'.format(
355 xcode_dev, sdk_mac_ver))
356 sysroot = join(xcode_dev.decode('utf-8'),
357 'Platforms/MacOSX.platform/Developer/SDKs',
358 'MacOSX{}.sdk'.format(sdk_mac_ver),
359 'System/Library/Frameworks')
360 else:
361 sysroot = ('/System/Library/Frameworks/'
362 'ApplicationServices.framework/Frameworks')
363 flags['extra_compile_args'] += ['-F%s' % sysroot]
364 flags['extra_link_args'] += ['-F%s' % sysroot]
365 return flags
366
367
368 def determine_gl_flags():
369 flags = {'libraries': []}
370 if platform == 'win32':
371 flags['libraries'] = ['opengl32']
372 elif platform == 'ios':
373 flags['libraries'] = ['GLESv2']
374 flags['extra_link_args'] = ['-framework', 'OpenGLES']
375 elif platform == 'darwin':
376 flags['extra_link_args'] = ['-framework', 'OpenGL', '-arch', osx_arch]
377 flags['extra_compile_args'] = ['-arch', osx_arch]
378 elif platform.startswith('freebsd'):
379 flags['include_dirs'] = ['/usr/local/include']
380 flags['extra_link_args'] = ['-L', '/usr/local/lib']
381 flags['libraries'] = ['GL']
382 elif platform.startswith('openbsd'):
383 flags['include_dirs'] = ['/usr/X11R6/include']
384 flags['extra_link_args'] = ['-L', '/usr/X11R6/lib']
385 flags['libraries'] = ['GL']
386 elif platform == 'android':
387 flags['include_dirs'] = [join(ndkplatform, 'usr', 'include')]
388 flags['extra_link_args'] = ['-L', join(ndkplatform, 'usr', 'lib')]
389 flags['libraries'] = ['GLESv2']
390 elif platform == 'rpi':
391 flags['include_dirs'] = ['/opt/vc/include',
392 '/opt/vc/include/interface/vcos/pthreads',
393 '/opt/vc/include/interface/vmcs_host/linux']
394 flags['library_dirs'] = ['/opt/vc/lib']
395 flags['libraries'] = ['bcm_host', 'EGL', 'GLESv2']
396 else:
397 flags['libraries'] = ['GL']
398 if c_options['use_glew']:
399 if platform == 'win32':
400 flags['libraries'] += ['glew32']
401 else:
402 flags['libraries'] += ['GLEW']
403 return flags
404
405
406 def determine_sdl():
407 flags = {}
408 if not c_options['use_sdl']:
409 return flags
410
411 flags['libraries'] = ['SDL', 'SDL_ttf', 'freetype', 'z', 'bz2']
412 flags['include_dirs'] = []
413 flags['extra_link_args'] = []
414 flags['extra_compile_args'] = []
415
416 # Paths as per homebrew (modified formula to use hg checkout)
417 if c_options['use_ios']:
418 # Note: on IOS, SDL is already loaded by the launcher/main.m
419 # So if we add it here, it will just complain about duplicate
420 # symbol, cause libSDL.a would be included in main.m binary +
421 # text_sdlttf.so
422 # At the result, we are linking without SDL explicitly, and add
423 # -undefined dynamic_lookup
424 # (/tito)
425 flags['libraries'] = ['SDL_ttf', 'freetype', 'bz2']
426 flags['include_dirs'] += [
427 join(kivy_ios_root, 'build', 'include'),
428 join(kivy_ios_root, 'build', 'include', 'SDL'),
429 join(kivy_ios_root, 'build', 'include', 'freetype')]
430 flags['extra_link_args'] += [
431 '-L', join(kivy_ios_root, 'build', 'lib'),
432 '-undefined', 'dynamic_lookup']
433 else:
434 flags['include_dirs'] = ['/usr/local/include/SDL']
435 flags['extra_link_args'] += ['-L/usr/local/lib/']
436
437 if platform == 'ios':
438 flags['extra_link_args'] += [
439 '-framework', 'Foundation',
440 '-framework', 'UIKit',
441 '-framework', 'AudioToolbox',
442 '-framework', 'CoreGraphics',
443 '-framework', 'QuartzCore',
444 '-framework', 'MobileCoreServices',
445 '-framework', 'ImageIO']
446 elif platform == 'darwin':
447 flags['extra_link_args'] += [
448 '-framework', 'ApplicationServices']
449 return flags
450
451
452 def determine_sdl2():
453 flags = {}
454 if not c_options['use_sdl2']:
455 return flags
456
457 sdl2_path = environ.get('KIVY_SDL2_PATH', None)
458
459 if sdl2_flags and not sdl2_path:
460 return sdl2_flags
461
462 # no pkgconfig info, or we want to use a specific sdl2 path, so perform
463 # manual configuration
464 flags['libraries'] = ['SDL2', 'SDL2_ttf', 'SDL2_image', 'SDL2_mixer']
465 flags['include_dirs'] = ([sdl2_path] if sdl2_path else
466 ['/usr/local/include/SDL2', '/usr/include/SDL2'])
467
468 flags['extra_link_args'] = []
469 flags['extra_compile_args'] = []
470 flags['extra_link_args'] += (['-L' + sdl2_path] if sdl2_path else
471 ['-L/usr/local/lib/'])
472
473 # ensure headers for all the SDL2 and sub libraries are available
474 libs_to_check = ['SDL', 'SDL_mixer', 'SDL_ttf', 'SDL_image']
475 can_compile = True
476 for lib in libs_to_check:
477 found = False
478 for d in flags['include_dirs']:
479 fn = join(d, '{}.h'.format(lib))
480 if exists(fn):
481 found = True
482 print('SDL2: found {} header at {}'.format(lib, fn))
483 break
484
485 if not found:
486 print('SDL2: missing sub library {}'.format(lib))
487 can_compile = False
488
489 if not can_compile:
490 c_options['use_sdl2'] = False
491 return {}
492
493 return flags
494
495
496 base_flags = determine_base_flags()
497 gl_flags = determine_gl_flags()
498
499 # -----------------------------------------------------------------------------
500 # sources to compile
501 # all the dependencies have been found manually with:
502 # grep -inr -E '(cimport|include)' kivy/graphics/context_instructions.{pxd,pyx}
503 graphics_dependencies = {
504 'gl_redirect.h': ['common_subset.h'],
505 'c_opengl.pxd': ['config.pxi', 'gl_redirect.h'],
506 'buffer.pyx': ['common.pxi'],
507 'context.pxd': [
508 'instructions.pxd', 'texture.pxd', 'vbo.pxd',
509 'c_opengl.pxd', 'c_opengl_debug.pxd'],
510 'c_opengl_debug.pyx': ['common.pxi', 'c_opengl.pxd'],
511 'compiler.pxd': ['instructions.pxd'],
512 'compiler.pyx': ['context_instructions.pxd'],
513 'context_instructions.pxd': [
514 'transformation.pxd', 'instructions.pxd', 'texture.pxd'],
515 'fbo.pxd': ['c_opengl.pxd', 'instructions.pxd', 'texture.pxd'],
516 'fbo.pyx': [
517 'config.pxi', 'opcodes.pxi', 'transformation.pxd', 'context.pxd',
518 'c_opengl_debug.pxd'],
519 'gl_instructions.pyx': [
520 'config.pxi', 'opcodes.pxi', 'c_opengl.pxd', 'c_opengl_debug.pxd',
521 'instructions.pxd'],
522 'instructions.pxd': [
523 'vbo.pxd', 'context_instructions.pxd', 'compiler.pxd', 'shader.pxd',
524 'texture.pxd', '../_event.pxd'],
525 'instructions.pyx': [
526 'config.pxi', 'opcodes.pxi', 'c_opengl.pxd', 'c_opengl_debug.pxd',
527 'context.pxd', 'common.pxi', 'vertex.pxd', 'transformation.pxd'],
528 'opengl.pyx': ['config.pxi', 'common.pxi', 'c_opengl.pxd', 'gl_redirect.h'],
529 'opengl_utils.pyx': ['opengl_utils_def.pxi', 'c_opengl.pxd'],
530 'shader.pxd': ['c_opengl.pxd', 'transformation.pxd', 'vertex.pxd'],
531 'shader.pyx': [
532 'config.pxi', 'common.pxi', 'c_opengl.pxd', 'c_opengl_debug.pxd',
533 'vertex.pxd', 'transformation.pxd', 'context.pxd'],
534 'stencil_instructions.pxd': ['instructions.pxd'],
535 'stencil_instructions.pyx': [
536 'config.pxi', 'opcodes.pxi', 'c_opengl.pxd', 'c_opengl_debug.pxd'],
537 'svg.pyx': ['config.pxi', 'common.pxi', 'texture.pxd', 'instructions.pxd',
538 'vertex_instructions.pxd', 'tesselator.pxd'],
539 'texture.pxd': ['c_opengl.pxd'],
540 'texture.pyx': [
541 'config.pxi', 'common.pxi', 'opengl_utils_def.pxi', 'context.pxd',
542 'c_opengl.pxd', 'c_opengl_debug.pxd', 'opengl_utils.pxd',
543 'img_tools.pxi'],
544 'vbo.pxd': ['buffer.pxd', 'c_opengl.pxd', 'vertex.pxd'],
545 'vbo.pyx': [
546 'config.pxi', 'common.pxi', 'c_opengl_debug.pxd', 'context.pxd',
547 'instructions.pxd', 'shader.pxd'],
548 'vertex.pxd': ['c_opengl.pxd'],
549 'vertex.pyx': ['config.pxi', 'common.pxi'],
550 'vertex_instructions.pyx': [
551 'config.pxi', 'common.pxi', 'vbo.pxd', 'vertex.pxd', 'instructions.pxd',
552 'vertex_instructions.pxd',
553 'c_opengl.pxd', 'c_opengl_debug.pxd', 'texture.pxd',
554 'vertex_instructions_line.pxi'],
555 'vertex_instructions_line.pxi': ['stencil_instructions.pxd']}
556
557 sources = {
558 '_event.pyx': merge(base_flags, {'depends': ['properties.pxd']}),
559 'properties.pyx': merge(base_flags, {'depends': ['_event.pxd']}),
560 'graphics/buffer.pyx': base_flags,
561 'graphics/context.pyx': merge(base_flags, gl_flags),
562 'graphics/c_opengl_debug.pyx': merge(base_flags, gl_flags),
563 'graphics/compiler.pyx': merge(base_flags, gl_flags),
564 'graphics/context_instructions.pyx': merge(base_flags, gl_flags),
565 'graphics/fbo.pyx': merge(base_flags, gl_flags),
566 'graphics/gl_instructions.pyx': merge(base_flags, gl_flags),
567 'graphics/instructions.pyx': merge(base_flags, gl_flags),
568 'graphics/opengl.pyx': merge(base_flags, gl_flags),
569 'graphics/opengl_utils.pyx': merge(base_flags, gl_flags),
570 'graphics/shader.pyx': merge(base_flags, gl_flags),
571 'graphics/stencil_instructions.pyx': merge(base_flags, gl_flags),
572 'graphics/texture.pyx': merge(base_flags, gl_flags),
573 'graphics/transformation.pyx': merge(base_flags, gl_flags),
574 'graphics/vbo.pyx': merge(base_flags, gl_flags),
575 'graphics/vertex.pyx': merge(base_flags, gl_flags),
576 'graphics/vertex_instructions.pyx': merge(base_flags, gl_flags),
577 'core/text/text_layout.pyx': base_flags,
578 'graphics/tesselator.pyx': merge(base_flags, {
579 'include_dirs': ['kivy/lib/libtess2/Include'],
580 'c_depends': [
581 'lib/libtess2/Source/bucketalloc.c',
582 'lib/libtess2/Source/dict.c',
583 'lib/libtess2/Source/geom.c',
584 'lib/libtess2/Source/mesh.c',
585 'lib/libtess2/Source/priorityq.c',
586 'lib/libtess2/Source/sweep.c',
587 'lib/libtess2/Source/tess.c'
588 ]
589 }),
590 'graphics/svg.pyx': merge(base_flags, gl_flags)
591 }
592
593 if c_options['use_sdl']:
594 sdl_flags = determine_sdl()
595 sources['core/window/sdl.pyx'] = merge(
596 base_flags, gl_flags, sdl_flags)
597 sources['core/text/text_sdlttf.pyx'] = merge(
598 base_flags, gl_flags, sdl_flags)
599 sources['core/audio/audio_sdl.pyx'] = merge(
600 base_flags, sdl_flags)
601
602 if c_options['use_sdl2']:
603 sdl2_flags = determine_sdl2()
604 if sdl2_flags:
605 sources['core/window/_window_sdl2.pyx'] = merge(
606 base_flags, gl_flags, sdl2_flags)
607 sources['core/image/_img_sdl2.pyx'] = merge(
608 base_flags, gl_flags, sdl2_flags)
609 sources['core/text/_text_sdl2.pyx'] = merge(
610 base_flags, gl_flags, sdl2_flags)
611 sources['core/clipboard/_clipboard_sdl2.pyx'] = merge(
612 base_flags, gl_flags, sdl2_flags)
613
614 if platform in ('darwin', 'ios'):
615 # activate ImageIO provider for our core image
616 if platform == 'ios':
617 osx_flags = {'extra_link_args': [
618 '-framework', 'Foundation',
619 '-framework', 'UIKit',
620 '-framework', 'AudioToolbox',
621 '-framework', 'CoreGraphics',
622 '-framework', 'QuartzCore',
623 '-framework', 'ImageIO',
624 '-framework', 'Accelerate']}
625 else:
626 osx_flags = {'extra_link_args': [
627 '-framework', 'ApplicationServices']}
628 sources['core/image/img_imageio.pyx'] = merge(
629 base_flags, osx_flags)
630
631 if c_options['use_avfoundation']:
632 import platform as _platform
633 mac_ver = [int(x) for x in _platform.mac_ver()[0].split('.')[:2]]
634 if mac_ver >= [10, 7]:
635 osx_flags = {
636 'extra_link_args': ['-framework', 'AVFoundation'],
637 'extra_compile_args': ['-ObjC++'],
638 'depends': ['core/camera/camera_avfoundation_implem.m']}
639 sources['core/camera/camera_avfoundation.pyx'] = merge(
640 base_flags, osx_flags)
641 else:
642 print('AVFoundation cannot be used, OSX >= 10.7 is required')
643
644 if c_options['use_rpi']:
645 sources['lib/vidcore_lite/egl.pyx'] = merge(
646 base_flags, gl_flags)
647 sources['lib/vidcore_lite/bcm.pyx'] = merge(
648 base_flags, gl_flags)
649
650 if c_options['use_x11']:
651 sources['core/window/window_x11.pyx'] = merge(
652 base_flags, gl_flags, {
653 # FIXME add an option to depend on them but not compile them
654 # cause keytab is included in core, and core is included in
655 # window_x11
656 #
657 #'depends': [
658 # 'core/window/window_x11_keytab.c',
659 # 'core/window/window_x11_core.c'],
660 'libraries': ['Xrender', 'X11']})
661
662 if c_options['use_gstreamer']:
663 sources['lib/gstplayer/_gstplayer.pyx'] = merge(
664 base_flags, gst_flags, {
665 'depends': ['lib/gstplayer/_gstplayer.h']})
666
667
668 # -----------------------------------------------------------------------------
669 # extension modules
670
671 def get_dependencies(name, deps=None):
672 if deps is None:
673 deps = []
674 for dep in graphics_dependencies.get(name, []):
675 if dep not in deps:
676 deps.append(dep)
677 get_dependencies(dep, deps)
678 return deps
679
680
681 def resolve_dependencies(fn, depends):
682 fn = basename(fn)
683 deps = []
684 get_dependencies(fn, deps)
685 get_dependencies(fn.replace('.pyx', '.pxd'), deps)
686 return [expand('graphics', x) for x in deps]
687
688
689 def get_extensions_from_sources(sources):
690 ext_modules = []
691 if environ.get('KIVY_FAKE_BUILDEXT'):
692 print('Fake build_ext asked, will generate only .h/.c')
693 return ext_modules
694 for pyx, flags in sources.items():
695 is_graphics = pyx.startswith('graphics')
696 pyx = expand(pyx)
697 depends = [expand(x) for x in flags.pop('depends', [])]
698 c_depends = [expand(x) for x in flags.pop('c_depends', [])]
699 if not have_cython:
700 pyx = '%s.c' % pyx[:-4]
701 if is_graphics:
702 depends = resolve_dependencies(pyx, depends)
703 f_depends = [x for x in depends if x.rsplit('.', 1)[-1] in (
704 'c', 'cpp', 'm')]
705 module_name = get_modulename_from_file(pyx)
706 flags_clean = {'depends': depends}
707 for key, value in flags.items():
708 if len(value):
709 flags_clean[key] = value
710 ext_modules.append(CythonExtension(module_name,
711 [pyx] + f_depends + c_depends, **flags_clean))
712 return ext_modules
713
714 ext_modules = get_extensions_from_sources(sources)
715
716 # -----------------------------------------------------------------------------
717 # automatically detect data files
718 data_file_prefix = 'share/kivy-'
719 examples = {}
720 examples_allowed_ext = ('readme', 'py', 'wav', 'png', 'jpg', 'svg', 'json',
721 'avi', 'gif', 'txt', 'ttf', 'obj', 'mtl', 'kv')
722 for root, subFolders, files in walk('examples'):
723 for fn in files:
724 ext = fn.split('.')[-1].lower()
725 if ext not in examples_allowed_ext:
726 continue
727 filename = join(root, fn)
728 directory = '%s%s' % (data_file_prefix, dirname(filename))
729 if not directory in examples:
730 examples[directory] = []
731 examples[directory].append(filename)
732
733 # -----------------------------------------------------------------------------
734 # setup !
735 setup(
736 name='Kivy',
737 version=kivy.__version__,
738 author='Kivy Crew',
739 author_email='[email protected]',
740 url='http://kivy.org/',
741 license='MIT',
742 description=(
743 'A software library for rapid development of '
744 'hardware-accelerated multitouch applications.'),
745 ext_modules=ext_modules,
746 cmdclass=cmdclass,
747 packages=[
748 'kivy',
749 'kivy.adapters',
750 'kivy.core',
751 'kivy.core.audio',
752 'kivy.core.camera',
753 'kivy.core.clipboard',
754 'kivy.core.image',
755 'kivy.core.gl',
756 'kivy.core.spelling',
757 'kivy.core.text',
758 'kivy.core.video',
759 'kivy.core.window',
760 'kivy.effects',
761 'kivy.ext',
762 'kivy.graphics',
763 'kivy.garden',
764 'kivy.input',
765 'kivy.input.postproc',
766 'kivy.input.providers',
767 'kivy.lib',
768 'kivy.lib.osc',
769 'kivy.lib.gstplayer',
770 'kivy.lib.vidcore_lite',
771 'kivy.modules',
772 'kivy.network',
773 'kivy.storage',
774 'kivy.tools',
775 'kivy.tools.packaging',
776 'kivy.tools.packaging.pyinstaller_hooks',
777 'kivy.tools.highlight',
778 'kivy.extras',
779 'kivy.tools.extensions',
780 'kivy.uix', ],
781 package_dir={'kivy': 'kivy'},
782 package_data={'kivy': [
783 'data/*.kv',
784 'data/*.json',
785 'data/fonts/*.ttf',
786 'data/images/*.png',
787 'data/images/*.jpg',
788 'data/images/*.gif',
789 'data/images/*.atlas',
790 'data/keyboards/*.json',
791 'data/logo/*.png',
792 'data/glsl/*.png',
793 'data/glsl/*.vs',
794 'data/glsl/*.fs',
795 'tools/highlight/*.vim',
796 'tools/highlight/*.el',
797 'tools/packaging/README.txt',
798 'tools/packaging/win32/kivy.bat',
799 'tools/packaging/win32/kivyenv.sh',
800 'tools/packaging/win32/README.txt',
801 'tools/packaging/osx/Info.plist',
802 'tools/packaging/osx/InfoPlist.strings',
803 'tools/packaging/osx/kivy.sh']},
804 data_files=list(examples.items()),
805 classifiers=[
806 'Development Status :: 5 - Production/Stable',
807 'Environment :: MacOS X',
808 'Environment :: Win32 (MS Windows)',
809 'Environment :: X11 Applications',
810 'Intended Audience :: Developers',
811 'Intended Audience :: End Users/Desktop',
812 'Intended Audience :: Information Technology',
813 'Intended Audience :: Science/Research',
814 'License :: OSI Approved :: MIT License',
815 'Natural Language :: English',
816 'Operating System :: MacOS :: MacOS X',
817 'Operating System :: Microsoft :: Windows',
818 'Operating System :: POSIX :: BSD :: FreeBSD',
819 'Operating System :: POSIX :: Linux',
820 'Programming Language :: Python :: 2.7',
821 'Programming Language :: Python :: 3.3',
822 'Programming Language :: Python :: 3.4',
823 'Topic :: Artistic Software',
824 'Topic :: Games/Entertainment',
825 'Topic :: Multimedia :: Graphics :: 3D Rendering',
826 'Topic :: Multimedia :: Graphics :: Capture :: Digital Camera',
827 'Topic :: Multimedia :: Graphics :: Presentation',
828 'Topic :: Multimedia :: Graphics :: Viewers',
829 'Topic :: Multimedia :: Sound/Audio :: Players :: MP3',
830 'Topic :: Multimedia :: Video :: Display',
831 'Topic :: Scientific/Engineering :: Human Machine Interfaces',
832 'Topic :: Scientific/Engineering :: Visualization',
833 'Topic :: Software Development :: Libraries :: Application Frameworks',
834 'Topic :: Software Development :: User Interfaces'],
835 dependency_links=[
836 'https://github.com/kivy-garden/garden/archive/master.zip'],
837 install_requires=['Kivy-Garden==0.1.1'])
838
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -780,6 +780,14 @@
'kivy.uix', ],
package_dir={'kivy': 'kivy'},
package_data={'kivy': [
+ '*.pxd',
+ '*.pxi',
+ 'core/text/*.pxd',
+ 'core/text/*.pxi',
+ 'graphics/*.pxd',
+ 'graphics/*.pxi',
+ 'lib/vidcore_lite/*.pxd',
+ 'lib/vidcore_lite/*.pxi',
'data/*.kv',
'data/*.json',
'data/fonts/*.ttf',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -780,6 +780,14 @@\n 'kivy.uix', ],\n package_dir={'kivy': 'kivy'},\n package_data={'kivy': [\n+ '*.pxd',\n+ '*.pxi',\n+ 'core/text/*.pxd',\n+ 'core/text/*.pxi',\n+ 'graphics/*.pxd',\n+ 'graphics/*.pxi',\n+ 'lib/vidcore_lite/*.pxd',\n+ 'lib/vidcore_lite/*.pxi',\n 'data/*.kv',\n 'data/*.json',\n 'data/fonts/*.ttf',\n", "issue": "Please distribute pxd files, expose c apis.\nI'm writing some kivy extension code, and i want to cimport kivy's extension types, which is more efficient than python api, but kivy don't distribute pxd files to installation directory.\nI can set PYTHONPATH to kivy's source directory, and ship cython compiled c file with my library, but it would be better if kivy distribute pxd files with it.\n\n", "before_files": [{"content": "#\n# Kivy - Crossplatform NUI toolkit\n# http://kivy.org/\n#\n\nimport sys\n\nfrom copy import deepcopy\nimport os\nfrom os.path import join, dirname, sep, exists, basename\nfrom os import walk, environ\nfrom distutils.core import setup\nfrom distutils.extension import Extension\nfrom collections import OrderedDict\n\nif sys.version > '3':\n\n PY3 = True\nelse:\n PY3 = False\n\n\ndef getoutput(cmd):\n import subprocess\n p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)\n return p.communicate()[0]\n\n\ndef pkgconfig(*packages, **kw):\n flag_map = {'-I': 'include_dirs', '-L': 'library_dirs', '-l': 'libraries'}\n cmd = 'pkg-config --libs --cflags {}'.format(' '.join(packages))\n for token in getoutput(cmd).split():\n ext = token[:2].decode('utf-8')\n flag = flag_map.get(ext)\n if not flag:\n continue\n kw.setdefault(flag, []).append(token[2:].decode('utf-8'))\n return kw\n\n\n# -----------------------------------------------------------------------------\n# Determine on which platform we are\n\nplatform = sys.platform\n\n# Detect 32/64bit for OSX (http://stackoverflow.com/a/1405971/798575)\nif sys.platform == 'darwin':\n if sys.maxsize > 2 ** 32:\n osx_arch = 'x86_64'\n else:\n osx_arch = 'i386'\n\n# Detect Python for android project (http://github.com/kivy/python-for-android)\nndkplatform = environ.get('NDKPLATFORM')\nif ndkplatform is not None and environ.get('LIBLINK'):\n platform = 'android'\nkivy_ios_root = environ.get('KIVYIOSROOT', None)\nif kivy_ios_root is not None:\n platform = 'ios'\nif exists('/opt/vc/include/bcm_host.h'):\n platform = 'rpi'\n\n# -----------------------------------------------------------------------------\n# Detect options\n#\nc_options = OrderedDict()\nc_options['use_rpi'] = platform == 'rpi'\nc_options['use_opengl_es2'] = None\nc_options['use_opengl_debug'] = False\nc_options['use_glew'] = False\nc_options['use_sdl'] = False\nc_options['use_sdl2'] = False\nc_options['use_ios'] = False\nc_options['use_mesagl'] = False\nc_options['use_x11'] = False\nc_options['use_gstreamer'] = False\nc_options['use_avfoundation'] = platform == 'darwin'\nc_options['use_osx_frameworks'] = platform == 'darwin'\n\n# now check if environ is changing the default values\nfor key in list(c_options.keys()):\n ukey = key.upper()\n if ukey in environ:\n value = bool(int(environ[ukey]))\n print('Environ change {0} -> {1}'.format(key, value))\n c_options[key] = value\n\n# -----------------------------------------------------------------------------\n# Cython check\n# on python-for-android and kivy-ios, cython usage is external\nhave_cython = False\nif platform in ('ios', 'android'):\n print('\\nCython check avoided.')\nelse:\n try:\n # check for 
cython\n from Cython.Distutils import build_ext\n have_cython = True\n except ImportError:\n print('\\nCython is missing, its required for compiling kivy !\\n\\n')\n raise\n\nif not have_cython:\n from distutils.command.build_ext import build_ext\n\n# -----------------------------------------------------------------------------\n# Setup classes\n\n\nclass KivyBuildExt(build_ext):\n\n def build_extensions(self):\n print('Build configuration is:')\n for opt, value in c_options.items():\n print(' * {0} = {1}'.format(opt, value))\n debug = bool(self.debug)\n print(' * debug = {0}'.format(debug))\n print('Generate config.h')\n config_h_fn = expand('graphics', 'config.h')\n config_h = '// Autogenerated file for Kivy C configuration\\n'\n config_h += '#define __PY3 {0}\\n'.format(int(PY3))\n for k, v in c_options.items():\n config_h += '#define __{0} {1}\\n'.format(k.upper(), int(v))\n self.update_if_changed(config_h_fn, config_h)\n\n print('Generate config.pxi')\n config_pxi_fn = expand('graphics', 'config.pxi')\n # update the pxi only if the content changed\n config_pxi = '# Autogenerated file for Kivy Cython configuration\\n'\n config_pxi += 'DEF PY3 = {0}\\n'.format(int(PY3))\n for k, v in c_options.items():\n config_pxi += 'DEF {0} = {1}\\n'.format(k.upper(), int(v))\n config_pxi += 'DEF DEBUG = {0}\\n'.format(debug)\n self.update_if_changed(config_pxi_fn, config_pxi)\n\n print('Generate setupconfig.py')\n config_py_fn = expand('setupconfig.py')\n config_py = '# Autogenerated file for Kivy configuration\\n'\n config_py += 'PY3 = {0}\\n'.format(int(PY3))\n for k, v in c_options.items():\n config_py += '{0} = {1}\\n'.format(k.upper(), int(v))\n config_py += 'DEBUG = {0}\\n'.format(debug)\n self.update_if_changed(config_py_fn, config_py)\n\n c = self.compiler.compiler_type\n print('Detected compiler is {}'.format(c))\n if c != 'msvc':\n for e in self.extensions:\n e.extra_link_args += ['-lm']\n\n build_ext.build_extensions(self)\n\n def update_if_changed(self, fn, content):\n need_update = True\n if exists(fn):\n with open(fn) as fd:\n need_update = fd.read() != content\n if need_update:\n with open(fn, 'w') as fd:\n fd.write(content)\n\n\n# -----------------------------------------------------------------------------\n# extract version (simulate doc generation, kivy will be not imported)\nenviron['KIVY_DOC_INCLUDE'] = '1'\nimport kivy\n\n# extra build commands go in the cmdclass dict {'command-name': CommandClass}\n# see tools.packaging.{platform}.build.py for custom build commands for\n# portable packages. also e.g. 
we use build_ext command from cython if its\n# installed for c extensions.\nfrom kivy.tools.packaging.factory import FactoryBuild\ncmdclass = {\n 'build_factory': FactoryBuild,\n 'build_ext': KivyBuildExt}\n\ntry:\n # add build rules for portable packages to cmdclass\n if platform == 'win32':\n from kivy.tools.packaging.win32.build import WindowsPortableBuild\n cmdclass['build_portable'] = WindowsPortableBuild\n elif platform == 'darwin':\n from kivy.tools.packaging.osx.build import OSXPortableBuild\n cmdclass['build_portable'] = OSXPortableBuild\nexcept ImportError:\n print('User distribution detected, avoid portable command.')\n\n# Detect which opengl version headers to use\nif platform in ('android', 'darwin', 'ios', 'rpi'):\n c_options['use_opengl_es2'] = True\nelif platform == 'win32':\n print('Windows platform detected, force GLEW usage.')\n c_options['use_glew'] = True\n c_options['use_opengl_es2'] = False\nelse:\n if c_options['use_opengl_es2'] is None:\n GLES = environ.get('GRAPHICS') == 'GLES'\n OPENGL = environ.get('GRAPHICS') == 'OPENGL'\n if GLES:\n c_options['use_opengl_es2'] = True\n elif OPENGL:\n c_options['use_opengl_es2'] = False\n else:\n # auto detection of GLES headers\n default_header_dirs = ['/usr/include', '/usr/local/include']\n c_options['use_opengl_es2'] = False\n for hdir in default_header_dirs:\n filename = join(hdir, 'GLES2', 'gl2.h')\n if exists(filename):\n c_options['use_opengl_es2'] = True\n print('NOTE: Found GLES 2.0 headers at {0}'.format(\n filename))\n break\n if not c_options['use_opengl_es2']:\n print('NOTE: Not found GLES 2.0 headers at: {}'.format(\n default_header_dirs))\n print(' Please contact us if your distribution '\n 'uses an alternative path for the headers.')\n\nprint('Using this graphics system: {}'.format(\n ['OpenGL', 'OpenGL ES 2'][int(c_options['use_opengl_es2'] or False)]))\n\n# check if we are in a kivy-ios build\nif platform == 'ios':\n print('Kivy-IOS project environment detect, use it.')\n print('Kivy-IOS project located at {0}'.format(kivy_ios_root))\n print('Activate SDL compilation.')\n c_options['use_ios'] = True\n c_options['use_sdl'] = True\n\n# detect gstreamer/sdl2, only on desktop\nsdl2_flags = {}\nif platform not in ('ios', 'android'):\n\n if c_options['use_osx_frameworks'] and platform == 'darwin':\n # check the existence of frameworks\n f_path = '/Library/Frameworks/GStreamer.framework'\n if not exists(f_path):\n c_options['use_gstreamer'] = False\n print('Missing GStreamer framework {}'.format(f_path))\n else:\n c_options['use_gstreamer'] = True\n gst_flags = {\n 'extra_link_args': [\n '-Xlinker', '-headerpad',\n '-Xlinker', '190',\n '-framework', 'GStreamer'],\n 'include_dirs': [join(f_path, 'Headers')]}\n\n sdl2_valid = True\n sdl2_flags = {\n 'extra_link_args': [\n '-Xlinker', '-headerpad',\n '-Xlinker', '190'],\n 'include_dirs': []\n }\n for name in ('SDL2', 'SDL2_ttf', 'SDL2_image', 'SDL2_mixer'):\n f_path = '/Library/Frameworks/{}.framework'.format(name)\n if not exists(f_path):\n print('Missing framework {}'.format(f_path))\n sdl2_valid = False\n continue\n sdl2_flags['extra_link_args'] += ['-framework', name]\n sdl2_flags['include_dirs'] += [join(f_path, 'Headers')]\n print('Found sdl2 frameworks: {}'.format(f_path))\n\n if not sdl2_valid:\n c_options['use_sdl2'] = False\n print('Deactivate SDL2 compilation due to missing frameworks')\n else:\n c_options['use_sdl2'] = True\n print('Activate SDL2 compilation')\n\n else:\n # use pkg-config approach instead\n gst_flags = pkgconfig('gstreamer-1.0')\n if 
'libraries' in gst_flags:\n c_options['use_gstreamer'] = True\n sdl2_flags = pkgconfig('sdl2', 'SDL2_ttf', 'SDL2_image', 'SDL2_mixer')\n if 'libraries' in sdl2_flags:\n c_options['use_sdl2'] = True\n\n if c_options['use_sdl2']:\n print('SDL2 compilation enabled, deactivate 1.x')\n c_options['use_sdl'] = False\n\n\n# -----------------------------------------------------------------------------\n# declare flags\n\n\ndef get_modulename_from_file(filename):\n filename = filename.replace(sep, '/')\n pyx = '.'.join(filename.split('.')[:-1])\n pyxl = pyx.split('/')\n while pyxl[0] != 'kivy':\n pyxl.pop(0)\n if pyxl[1] == 'kivy':\n pyxl.pop(0)\n return '.'.join(pyxl)\n\n\ndef expand(*args):\n return join(dirname(__file__), 'kivy', *args)\n\n\nclass CythonExtension(Extension):\n\n def __init__(self, *args, **kwargs):\n Extension.__init__(self, *args, **kwargs)\n self.cython_directives = {\n 'c_string_encoding': 'utf-8',\n 'profile': 'USE_PROFILE' in environ,\n 'embedsignature': 'USE_EMBEDSIGNATURE' in environ}\n # XXX with pip, setuptools is imported before distutils, and change\n # our pyx to c, then, cythonize doesn't happen. So force again our\n # sources\n self.sources = args[1]\n\n\ndef merge(d1, *args):\n d1 = deepcopy(d1)\n for d2 in args:\n for key, value in d2.items():\n value = deepcopy(value)\n if key in d1:\n d1[key].extend(value)\n else:\n d1[key] = value\n return d1\n\n\ndef determine_base_flags():\n flags = {\n 'libraries': [],\n 'include_dirs': [],\n 'extra_link_args': [],\n 'extra_compile_args': []}\n if c_options['use_ios']:\n sysroot = environ.get('IOSSDKROOT', environ.get('SDKROOT'))\n if not sysroot:\n raise Exception('IOSSDKROOT is not set')\n flags['include_dirs'] += [sysroot]\n flags['extra_compile_args'] += ['-isysroot', sysroot]\n flags['extra_link_args'] += ['-isysroot', sysroot]\n elif platform == 'darwin':\n v = os.uname()\n if v[2] >= '13.0.0':\n # use xcode-select to search on the right Xcode path\n # XXX use the best SDK available instead of a specific one\n import platform as _platform\n xcode_dev = getoutput('xcode-select -p').splitlines()[0]\n sdk_mac_ver = '.'.join(_platform.mac_ver()[0].split('.')[:2])\n print('Xcode detected at {}, and using MacOSX{} sdk'.format(\n xcode_dev, sdk_mac_ver))\n sysroot = join(xcode_dev.decode('utf-8'),\n 'Platforms/MacOSX.platform/Developer/SDKs',\n 'MacOSX{}.sdk'.format(sdk_mac_ver),\n 'System/Library/Frameworks')\n else:\n sysroot = ('/System/Library/Frameworks/'\n 'ApplicationServices.framework/Frameworks')\n flags['extra_compile_args'] += ['-F%s' % sysroot]\n flags['extra_link_args'] += ['-F%s' % sysroot]\n return flags\n\n\ndef determine_gl_flags():\n flags = {'libraries': []}\n if platform == 'win32':\n flags['libraries'] = ['opengl32']\n elif platform == 'ios':\n flags['libraries'] = ['GLESv2']\n flags['extra_link_args'] = ['-framework', 'OpenGLES']\n elif platform == 'darwin':\n flags['extra_link_args'] = ['-framework', 'OpenGL', '-arch', osx_arch]\n flags['extra_compile_args'] = ['-arch', osx_arch]\n elif platform.startswith('freebsd'):\n flags['include_dirs'] = ['/usr/local/include']\n flags['extra_link_args'] = ['-L', '/usr/local/lib']\n flags['libraries'] = ['GL']\n elif platform.startswith('openbsd'):\n flags['include_dirs'] = ['/usr/X11R6/include']\n flags['extra_link_args'] = ['-L', '/usr/X11R6/lib']\n flags['libraries'] = ['GL']\n elif platform == 'android':\n flags['include_dirs'] = [join(ndkplatform, 'usr', 'include')]\n flags['extra_link_args'] = ['-L', join(ndkplatform, 'usr', 'lib')]\n flags['libraries'] = 
['GLESv2']\n elif platform == 'rpi':\n flags['include_dirs'] = ['/opt/vc/include',\n '/opt/vc/include/interface/vcos/pthreads',\n '/opt/vc/include/interface/vmcs_host/linux']\n flags['library_dirs'] = ['/opt/vc/lib']\n flags['libraries'] = ['bcm_host', 'EGL', 'GLESv2']\n else:\n flags['libraries'] = ['GL']\n if c_options['use_glew']:\n if platform == 'win32':\n flags['libraries'] += ['glew32']\n else:\n flags['libraries'] += ['GLEW']\n return flags\n\n\ndef determine_sdl():\n flags = {}\n if not c_options['use_sdl']:\n return flags\n\n flags['libraries'] = ['SDL', 'SDL_ttf', 'freetype', 'z', 'bz2']\n flags['include_dirs'] = []\n flags['extra_link_args'] = []\n flags['extra_compile_args'] = []\n\n # Paths as per homebrew (modified formula to use hg checkout)\n if c_options['use_ios']:\n # Note: on IOS, SDL is already loaded by the launcher/main.m\n # So if we add it here, it will just complain about duplicate\n # symbol, cause libSDL.a would be included in main.m binary +\n # text_sdlttf.so\n # At the result, we are linking without SDL explicitly, and add\n # -undefined dynamic_lookup\n # (/tito)\n flags['libraries'] = ['SDL_ttf', 'freetype', 'bz2']\n flags['include_dirs'] += [\n join(kivy_ios_root, 'build', 'include'),\n join(kivy_ios_root, 'build', 'include', 'SDL'),\n join(kivy_ios_root, 'build', 'include', 'freetype')]\n flags['extra_link_args'] += [\n '-L', join(kivy_ios_root, 'build', 'lib'),\n '-undefined', 'dynamic_lookup']\n else:\n flags['include_dirs'] = ['/usr/local/include/SDL']\n flags['extra_link_args'] += ['-L/usr/local/lib/']\n\n if platform == 'ios':\n flags['extra_link_args'] += [\n '-framework', 'Foundation',\n '-framework', 'UIKit',\n '-framework', 'AudioToolbox',\n '-framework', 'CoreGraphics',\n '-framework', 'QuartzCore',\n '-framework', 'MobileCoreServices',\n '-framework', 'ImageIO']\n elif platform == 'darwin':\n flags['extra_link_args'] += [\n '-framework', 'ApplicationServices']\n return flags\n\n\ndef determine_sdl2():\n flags = {}\n if not c_options['use_sdl2']:\n return flags\n\n sdl2_path = environ.get('KIVY_SDL2_PATH', None)\n\n if sdl2_flags and not sdl2_path:\n return sdl2_flags\n\n # no pkgconfig info, or we want to use a specific sdl2 path, so perform\n # manual configuration\n flags['libraries'] = ['SDL2', 'SDL2_ttf', 'SDL2_image', 'SDL2_mixer']\n flags['include_dirs'] = ([sdl2_path] if sdl2_path else\n ['/usr/local/include/SDL2', '/usr/include/SDL2'])\n\n flags['extra_link_args'] = []\n flags['extra_compile_args'] = []\n flags['extra_link_args'] += (['-L' + sdl2_path] if sdl2_path else\n ['-L/usr/local/lib/'])\n\n # ensure headers for all the SDL2 and sub libraries are available\n libs_to_check = ['SDL', 'SDL_mixer', 'SDL_ttf', 'SDL_image']\n can_compile = True\n for lib in libs_to_check:\n found = False\n for d in flags['include_dirs']:\n fn = join(d, '{}.h'.format(lib))\n if exists(fn):\n found = True\n print('SDL2: found {} header at {}'.format(lib, fn))\n break\n\n if not found:\n print('SDL2: missing sub library {}'.format(lib))\n can_compile = False\n\n if not can_compile:\n c_options['use_sdl2'] = False\n return {}\n\n return flags\n\n\nbase_flags = determine_base_flags()\ngl_flags = determine_gl_flags()\n\n# -----------------------------------------------------------------------------\n# sources to compile\n# all the dependencies have been found manually with:\n# grep -inr -E '(cimport|include)' kivy/graphics/context_instructions.{pxd,pyx}\ngraphics_dependencies = {\n 'gl_redirect.h': ['common_subset.h'],\n 'c_opengl.pxd': ['config.pxi', 
'gl_redirect.h'],\n 'buffer.pyx': ['common.pxi'],\n 'context.pxd': [\n 'instructions.pxd', 'texture.pxd', 'vbo.pxd',\n 'c_opengl.pxd', 'c_opengl_debug.pxd'],\n 'c_opengl_debug.pyx': ['common.pxi', 'c_opengl.pxd'],\n 'compiler.pxd': ['instructions.pxd'],\n 'compiler.pyx': ['context_instructions.pxd'],\n 'context_instructions.pxd': [\n 'transformation.pxd', 'instructions.pxd', 'texture.pxd'],\n 'fbo.pxd': ['c_opengl.pxd', 'instructions.pxd', 'texture.pxd'],\n 'fbo.pyx': [\n 'config.pxi', 'opcodes.pxi', 'transformation.pxd', 'context.pxd',\n 'c_opengl_debug.pxd'],\n 'gl_instructions.pyx': [\n 'config.pxi', 'opcodes.pxi', 'c_opengl.pxd', 'c_opengl_debug.pxd',\n 'instructions.pxd'],\n 'instructions.pxd': [\n 'vbo.pxd', 'context_instructions.pxd', 'compiler.pxd', 'shader.pxd',\n 'texture.pxd', '../_event.pxd'],\n 'instructions.pyx': [\n 'config.pxi', 'opcodes.pxi', 'c_opengl.pxd', 'c_opengl_debug.pxd',\n 'context.pxd', 'common.pxi', 'vertex.pxd', 'transformation.pxd'],\n 'opengl.pyx': ['config.pxi', 'common.pxi', 'c_opengl.pxd', 'gl_redirect.h'],\n 'opengl_utils.pyx': ['opengl_utils_def.pxi', 'c_opengl.pxd'],\n 'shader.pxd': ['c_opengl.pxd', 'transformation.pxd', 'vertex.pxd'],\n 'shader.pyx': [\n 'config.pxi', 'common.pxi', 'c_opengl.pxd', 'c_opengl_debug.pxd',\n 'vertex.pxd', 'transformation.pxd', 'context.pxd'],\n 'stencil_instructions.pxd': ['instructions.pxd'],\n 'stencil_instructions.pyx': [\n 'config.pxi', 'opcodes.pxi', 'c_opengl.pxd', 'c_opengl_debug.pxd'],\n 'svg.pyx': ['config.pxi', 'common.pxi', 'texture.pxd', 'instructions.pxd',\n 'vertex_instructions.pxd', 'tesselator.pxd'],\n 'texture.pxd': ['c_opengl.pxd'],\n 'texture.pyx': [\n 'config.pxi', 'common.pxi', 'opengl_utils_def.pxi', 'context.pxd',\n 'c_opengl.pxd', 'c_opengl_debug.pxd', 'opengl_utils.pxd',\n 'img_tools.pxi'],\n 'vbo.pxd': ['buffer.pxd', 'c_opengl.pxd', 'vertex.pxd'],\n 'vbo.pyx': [\n 'config.pxi', 'common.pxi', 'c_opengl_debug.pxd', 'context.pxd',\n 'instructions.pxd', 'shader.pxd'],\n 'vertex.pxd': ['c_opengl.pxd'],\n 'vertex.pyx': ['config.pxi', 'common.pxi'],\n 'vertex_instructions.pyx': [\n 'config.pxi', 'common.pxi', 'vbo.pxd', 'vertex.pxd', 'instructions.pxd',\n 'vertex_instructions.pxd',\n 'c_opengl.pxd', 'c_opengl_debug.pxd', 'texture.pxd',\n 'vertex_instructions_line.pxi'],\n 'vertex_instructions_line.pxi': ['stencil_instructions.pxd']}\n\nsources = {\n '_event.pyx': merge(base_flags, {'depends': ['properties.pxd']}),\n 'properties.pyx': merge(base_flags, {'depends': ['_event.pxd']}),\n 'graphics/buffer.pyx': base_flags,\n 'graphics/context.pyx': merge(base_flags, gl_flags),\n 'graphics/c_opengl_debug.pyx': merge(base_flags, gl_flags),\n 'graphics/compiler.pyx': merge(base_flags, gl_flags),\n 'graphics/context_instructions.pyx': merge(base_flags, gl_flags),\n 'graphics/fbo.pyx': merge(base_flags, gl_flags),\n 'graphics/gl_instructions.pyx': merge(base_flags, gl_flags),\n 'graphics/instructions.pyx': merge(base_flags, gl_flags),\n 'graphics/opengl.pyx': merge(base_flags, gl_flags),\n 'graphics/opengl_utils.pyx': merge(base_flags, gl_flags),\n 'graphics/shader.pyx': merge(base_flags, gl_flags),\n 'graphics/stencil_instructions.pyx': merge(base_flags, gl_flags),\n 'graphics/texture.pyx': merge(base_flags, gl_flags),\n 'graphics/transformation.pyx': merge(base_flags, gl_flags),\n 'graphics/vbo.pyx': merge(base_flags, gl_flags),\n 'graphics/vertex.pyx': merge(base_flags, gl_flags),\n 'graphics/vertex_instructions.pyx': merge(base_flags, gl_flags),\n 'core/text/text_layout.pyx': base_flags,\n 
'graphics/tesselator.pyx': merge(base_flags, {\n 'include_dirs': ['kivy/lib/libtess2/Include'],\n 'c_depends': [\n 'lib/libtess2/Source/bucketalloc.c',\n 'lib/libtess2/Source/dict.c',\n 'lib/libtess2/Source/geom.c',\n 'lib/libtess2/Source/mesh.c',\n 'lib/libtess2/Source/priorityq.c',\n 'lib/libtess2/Source/sweep.c',\n 'lib/libtess2/Source/tess.c'\n ]\n }),\n 'graphics/svg.pyx': merge(base_flags, gl_flags)\n}\n\nif c_options['use_sdl']:\n sdl_flags = determine_sdl()\n sources['core/window/sdl.pyx'] = merge(\n base_flags, gl_flags, sdl_flags)\n sources['core/text/text_sdlttf.pyx'] = merge(\n base_flags, gl_flags, sdl_flags)\n sources['core/audio/audio_sdl.pyx'] = merge(\n base_flags, sdl_flags)\n\nif c_options['use_sdl2']:\n sdl2_flags = determine_sdl2()\n if sdl2_flags:\n sources['core/window/_window_sdl2.pyx'] = merge(\n base_flags, gl_flags, sdl2_flags)\n sources['core/image/_img_sdl2.pyx'] = merge(\n base_flags, gl_flags, sdl2_flags)\n sources['core/text/_text_sdl2.pyx'] = merge(\n base_flags, gl_flags, sdl2_flags)\n sources['core/clipboard/_clipboard_sdl2.pyx'] = merge(\n base_flags, gl_flags, sdl2_flags)\n\nif platform in ('darwin', 'ios'):\n # activate ImageIO provider for our core image\n if platform == 'ios':\n osx_flags = {'extra_link_args': [\n '-framework', 'Foundation',\n '-framework', 'UIKit',\n '-framework', 'AudioToolbox',\n '-framework', 'CoreGraphics',\n '-framework', 'QuartzCore',\n '-framework', 'ImageIO',\n '-framework', 'Accelerate']}\n else:\n osx_flags = {'extra_link_args': [\n '-framework', 'ApplicationServices']}\n sources['core/image/img_imageio.pyx'] = merge(\n base_flags, osx_flags)\n\nif c_options['use_avfoundation']:\n import platform as _platform\n mac_ver = [int(x) for x in _platform.mac_ver()[0].split('.')[:2]]\n if mac_ver >= [10, 7]:\n osx_flags = {\n 'extra_link_args': ['-framework', 'AVFoundation'],\n 'extra_compile_args': ['-ObjC++'],\n 'depends': ['core/camera/camera_avfoundation_implem.m']}\n sources['core/camera/camera_avfoundation.pyx'] = merge(\n base_flags, osx_flags)\n else:\n print('AVFoundation cannot be used, OSX >= 10.7 is required')\n\nif c_options['use_rpi']:\n sources['lib/vidcore_lite/egl.pyx'] = merge(\n base_flags, gl_flags)\n sources['lib/vidcore_lite/bcm.pyx'] = merge(\n base_flags, gl_flags)\n\nif c_options['use_x11']:\n sources['core/window/window_x11.pyx'] = merge(\n base_flags, gl_flags, {\n # FIXME add an option to depend on them but not compile them\n # cause keytab is included in core, and core is included in\n # window_x11\n #\n #'depends': [\n # 'core/window/window_x11_keytab.c',\n # 'core/window/window_x11_core.c'],\n 'libraries': ['Xrender', 'X11']})\n\nif c_options['use_gstreamer']:\n sources['lib/gstplayer/_gstplayer.pyx'] = merge(\n base_flags, gst_flags, {\n 'depends': ['lib/gstplayer/_gstplayer.h']})\n\n\n# -----------------------------------------------------------------------------\n# extension modules\n\ndef get_dependencies(name, deps=None):\n if deps is None:\n deps = []\n for dep in graphics_dependencies.get(name, []):\n if dep not in deps:\n deps.append(dep)\n get_dependencies(dep, deps)\n return deps\n\n\ndef resolve_dependencies(fn, depends):\n fn = basename(fn)\n deps = []\n get_dependencies(fn, deps)\n get_dependencies(fn.replace('.pyx', '.pxd'), deps)\n return [expand('graphics', x) for x in deps]\n\n\ndef get_extensions_from_sources(sources):\n ext_modules = []\n if environ.get('KIVY_FAKE_BUILDEXT'):\n print('Fake build_ext asked, will generate only .h/.c')\n return ext_modules\n for pyx, flags in 
sources.items():\n is_graphics = pyx.startswith('graphics')\n pyx = expand(pyx)\n depends = [expand(x) for x in flags.pop('depends', [])]\n c_depends = [expand(x) for x in flags.pop('c_depends', [])]\n if not have_cython:\n pyx = '%s.c' % pyx[:-4]\n if is_graphics:\n depends = resolve_dependencies(pyx, depends)\n f_depends = [x for x in depends if x.rsplit('.', 1)[-1] in (\n 'c', 'cpp', 'm')]\n module_name = get_modulename_from_file(pyx)\n flags_clean = {'depends': depends}\n for key, value in flags.items():\n if len(value):\n flags_clean[key] = value\n ext_modules.append(CythonExtension(module_name,\n [pyx] + f_depends + c_depends, **flags_clean))\n return ext_modules\n\next_modules = get_extensions_from_sources(sources)\n\n# -----------------------------------------------------------------------------\n# automatically detect data files\ndata_file_prefix = 'share/kivy-'\nexamples = {}\nexamples_allowed_ext = ('readme', 'py', 'wav', 'png', 'jpg', 'svg', 'json',\n 'avi', 'gif', 'txt', 'ttf', 'obj', 'mtl', 'kv')\nfor root, subFolders, files in walk('examples'):\n for fn in files:\n ext = fn.split('.')[-1].lower()\n if ext not in examples_allowed_ext:\n continue\n filename = join(root, fn)\n directory = '%s%s' % (data_file_prefix, dirname(filename))\n if not directory in examples:\n examples[directory] = []\n examples[directory].append(filename)\n\n# -----------------------------------------------------------------------------\n# setup !\nsetup(\n name='Kivy',\n version=kivy.__version__,\n author='Kivy Crew',\n author_email='[email protected]',\n url='http://kivy.org/',\n license='MIT',\n description=(\n 'A software library for rapid development of '\n 'hardware-accelerated multitouch applications.'),\n ext_modules=ext_modules,\n cmdclass=cmdclass,\n packages=[\n 'kivy',\n 'kivy.adapters',\n 'kivy.core',\n 'kivy.core.audio',\n 'kivy.core.camera',\n 'kivy.core.clipboard',\n 'kivy.core.image',\n 'kivy.core.gl',\n 'kivy.core.spelling',\n 'kivy.core.text',\n 'kivy.core.video',\n 'kivy.core.window',\n 'kivy.effects',\n 'kivy.ext',\n 'kivy.graphics',\n 'kivy.garden',\n 'kivy.input',\n 'kivy.input.postproc',\n 'kivy.input.providers',\n 'kivy.lib',\n 'kivy.lib.osc',\n 'kivy.lib.gstplayer',\n 'kivy.lib.vidcore_lite',\n 'kivy.modules',\n 'kivy.network',\n 'kivy.storage',\n 'kivy.tools',\n 'kivy.tools.packaging',\n 'kivy.tools.packaging.pyinstaller_hooks',\n 'kivy.tools.highlight',\n 'kivy.extras',\n 'kivy.tools.extensions',\n 'kivy.uix', ],\n package_dir={'kivy': 'kivy'},\n package_data={'kivy': [\n 'data/*.kv',\n 'data/*.json',\n 'data/fonts/*.ttf',\n 'data/images/*.png',\n 'data/images/*.jpg',\n 'data/images/*.gif',\n 'data/images/*.atlas',\n 'data/keyboards/*.json',\n 'data/logo/*.png',\n 'data/glsl/*.png',\n 'data/glsl/*.vs',\n 'data/glsl/*.fs',\n 'tools/highlight/*.vim',\n 'tools/highlight/*.el',\n 'tools/packaging/README.txt',\n 'tools/packaging/win32/kivy.bat',\n 'tools/packaging/win32/kivyenv.sh',\n 'tools/packaging/win32/README.txt',\n 'tools/packaging/osx/Info.plist',\n 'tools/packaging/osx/InfoPlist.strings',\n 'tools/packaging/osx/kivy.sh']},\n data_files=list(examples.items()),\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: MacOS X',\n 'Environment :: Win32 (MS Windows)',\n 'Environment :: X11 Applications',\n 'Intended Audience :: Developers',\n 'Intended Audience :: End Users/Desktop',\n 'Intended Audience :: Information Technology',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: MIT License',\n 'Natural Language :: 
English',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX :: BSD :: FreeBSD',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Topic :: Artistic Software',\n 'Topic :: Games/Entertainment',\n 'Topic :: Multimedia :: Graphics :: 3D Rendering',\n 'Topic :: Multimedia :: Graphics :: Capture :: Digital Camera',\n 'Topic :: Multimedia :: Graphics :: Presentation',\n 'Topic :: Multimedia :: Graphics :: Viewers',\n 'Topic :: Multimedia :: Sound/Audio :: Players :: MP3',\n 'Topic :: Multimedia :: Video :: Display',\n 'Topic :: Scientific/Engineering :: Human Machine Interfaces',\n 'Topic :: Scientific/Engineering :: Visualization',\n 'Topic :: Software Development :: Libraries :: Application Frameworks',\n 'Topic :: Software Development :: User Interfaces'],\n dependency_links=[\n 'https://github.com/kivy-garden/garden/archive/master.zip'],\n install_requires=['Kivy-Garden==0.1.1'])\n", "path": "setup.py"}], "after_files": [{"content": "#\n# Kivy - Crossplatform NUI toolkit\n# http://kivy.org/\n#\n\nimport sys\n\nfrom copy import deepcopy\nimport os\nfrom os.path import join, dirname, sep, exists, basename\nfrom os import walk, environ\nfrom distutils.core import setup\nfrom distutils.extension import Extension\nfrom collections import OrderedDict\n\nif sys.version > '3':\n\n PY3 = True\nelse:\n PY3 = False\n\n\ndef getoutput(cmd):\n import subprocess\n p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)\n return p.communicate()[0]\n\n\ndef pkgconfig(*packages, **kw):\n flag_map = {'-I': 'include_dirs', '-L': 'library_dirs', '-l': 'libraries'}\n cmd = 'pkg-config --libs --cflags {}'.format(' '.join(packages))\n for token in getoutput(cmd).split():\n ext = token[:2].decode('utf-8')\n flag = flag_map.get(ext)\n if not flag:\n continue\n kw.setdefault(flag, []).append(token[2:].decode('utf-8'))\n return kw\n\n\n# -----------------------------------------------------------------------------\n# Determine on which platform we are\n\nplatform = sys.platform\n\n# Detect 32/64bit for OSX (http://stackoverflow.com/a/1405971/798575)\nif sys.platform == 'darwin':\n if sys.maxsize > 2 ** 32:\n osx_arch = 'x86_64'\n else:\n osx_arch = 'i386'\n\n# Detect Python for android project (http://github.com/kivy/python-for-android)\nndkplatform = environ.get('NDKPLATFORM')\nif ndkplatform is not None and environ.get('LIBLINK'):\n platform = 'android'\nkivy_ios_root = environ.get('KIVYIOSROOT', None)\nif kivy_ios_root is not None:\n platform = 'ios'\nif exists('/opt/vc/include/bcm_host.h'):\n platform = 'rpi'\n\n# -----------------------------------------------------------------------------\n# Detect options\n#\nc_options = OrderedDict()\nc_options['use_rpi'] = platform == 'rpi'\nc_options['use_opengl_es2'] = None\nc_options['use_opengl_debug'] = False\nc_options['use_glew'] = False\nc_options['use_sdl'] = False\nc_options['use_sdl2'] = False\nc_options['use_ios'] = False\nc_options['use_mesagl'] = False\nc_options['use_x11'] = False\nc_options['use_gstreamer'] = False\nc_options['use_avfoundation'] = platform == 'darwin'\nc_options['use_osx_frameworks'] = platform == 'darwin'\n\n# now check if environ is changing the default values\nfor key in list(c_options.keys()):\n ukey = key.upper()\n if ukey in environ:\n value = bool(int(environ[ukey]))\n print('Environ change {0} -> {1}'.format(key, value))\n c_options[key] = 
value\n\n# -----------------------------------------------------------------------------\n# Cython check\n# on python-for-android and kivy-ios, cython usage is external\nhave_cython = False\nif platform in ('ios', 'android'):\n print('\\nCython check avoided.')\nelse:\n try:\n # check for cython\n from Cython.Distutils import build_ext\n have_cython = True\n except ImportError:\n print('\\nCython is missing, its required for compiling kivy !\\n\\n')\n raise\n\nif not have_cython:\n from distutils.command.build_ext import build_ext\n\n# -----------------------------------------------------------------------------\n# Setup classes\n\n\nclass KivyBuildExt(build_ext):\n\n def build_extensions(self):\n print('Build configuration is:')\n for opt, value in c_options.items():\n print(' * {0} = {1}'.format(opt, value))\n debug = bool(self.debug)\n print(' * debug = {0}'.format(debug))\n print('Generate config.h')\n config_h_fn = expand('graphics', 'config.h')\n config_h = '// Autogenerated file for Kivy C configuration\\n'\n config_h += '#define __PY3 {0}\\n'.format(int(PY3))\n for k, v in c_options.items():\n config_h += '#define __{0} {1}\\n'.format(k.upper(), int(v))\n self.update_if_changed(config_h_fn, config_h)\n\n print('Generate config.pxi')\n config_pxi_fn = expand('graphics', 'config.pxi')\n # update the pxi only if the content changed\n config_pxi = '# Autogenerated file for Kivy Cython configuration\\n'\n config_pxi += 'DEF PY3 = {0}\\n'.format(int(PY3))\n for k, v in c_options.items():\n config_pxi += 'DEF {0} = {1}\\n'.format(k.upper(), int(v))\n config_pxi += 'DEF DEBUG = {0}\\n'.format(debug)\n self.update_if_changed(config_pxi_fn, config_pxi)\n\n print('Generate setupconfig.py')\n config_py_fn = expand('setupconfig.py')\n config_py = '# Autogenerated file for Kivy configuration\\n'\n config_py += 'PY3 = {0}\\n'.format(int(PY3))\n for k, v in c_options.items():\n config_py += '{0} = {1}\\n'.format(k.upper(), int(v))\n config_py += 'DEBUG = {0}\\n'.format(debug)\n self.update_if_changed(config_py_fn, config_py)\n\n c = self.compiler.compiler_type\n print('Detected compiler is {}'.format(c))\n if c != 'msvc':\n for e in self.extensions:\n e.extra_link_args += ['-lm']\n\n build_ext.build_extensions(self)\n\n def update_if_changed(self, fn, content):\n need_update = True\n if exists(fn):\n with open(fn) as fd:\n need_update = fd.read() != content\n if need_update:\n with open(fn, 'w') as fd:\n fd.write(content)\n\n\n# -----------------------------------------------------------------------------\n# extract version (simulate doc generation, kivy will be not imported)\nenviron['KIVY_DOC_INCLUDE'] = '1'\nimport kivy\n\n# extra build commands go in the cmdclass dict {'command-name': CommandClass}\n# see tools.packaging.{platform}.build.py for custom build commands for\n# portable packages. also e.g. 
we use build_ext command from cython if its\n# installed for c extensions.\nfrom kivy.tools.packaging.factory import FactoryBuild\ncmdclass = {\n 'build_factory': FactoryBuild,\n 'build_ext': KivyBuildExt}\n\ntry:\n # add build rules for portable packages to cmdclass\n if platform == 'win32':\n from kivy.tools.packaging.win32.build import WindowsPortableBuild\n cmdclass['build_portable'] = WindowsPortableBuild\n elif platform == 'darwin':\n from kivy.tools.packaging.osx.build import OSXPortableBuild\n cmdclass['build_portable'] = OSXPortableBuild\nexcept ImportError:\n print('User distribution detected, avoid portable command.')\n\n# Detect which opengl version headers to use\nif platform in ('android', 'darwin', 'ios', 'rpi'):\n c_options['use_opengl_es2'] = True\nelif platform == 'win32':\n print('Windows platform detected, force GLEW usage.')\n c_options['use_glew'] = True\n c_options['use_opengl_es2'] = False\nelse:\n if c_options['use_opengl_es2'] is None:\n GLES = environ.get('GRAPHICS') == 'GLES'\n OPENGL = environ.get('GRAPHICS') == 'OPENGL'\n if GLES:\n c_options['use_opengl_es2'] = True\n elif OPENGL:\n c_options['use_opengl_es2'] = False\n else:\n # auto detection of GLES headers\n default_header_dirs = ['/usr/include', '/usr/local/include']\n c_options['use_opengl_es2'] = False\n for hdir in default_header_dirs:\n filename = join(hdir, 'GLES2', 'gl2.h')\n if exists(filename):\n c_options['use_opengl_es2'] = True\n print('NOTE: Found GLES 2.0 headers at {0}'.format(\n filename))\n break\n if not c_options['use_opengl_es2']:\n print('NOTE: Not found GLES 2.0 headers at: {}'.format(\n default_header_dirs))\n print(' Please contact us if your distribution '\n 'uses an alternative path for the headers.')\n\nprint('Using this graphics system: {}'.format(\n ['OpenGL', 'OpenGL ES 2'][int(c_options['use_opengl_es2'] or False)]))\n\n# check if we are in a kivy-ios build\nif platform == 'ios':\n print('Kivy-IOS project environment detect, use it.')\n print('Kivy-IOS project located at {0}'.format(kivy_ios_root))\n print('Activate SDL compilation.')\n c_options['use_ios'] = True\n c_options['use_sdl'] = True\n\n# detect gstreamer/sdl2, only on desktop\nsdl2_flags = {}\nif platform not in ('ios', 'android'):\n\n if c_options['use_osx_frameworks'] and platform == 'darwin':\n # check the existence of frameworks\n f_path = '/Library/Frameworks/GStreamer.framework'\n if not exists(f_path):\n c_options['use_gstreamer'] = False\n print('Missing GStreamer framework {}'.format(f_path))\n else:\n c_options['use_gstreamer'] = True\n gst_flags = {\n 'extra_link_args': [\n '-Xlinker', '-headerpad',\n '-Xlinker', '190',\n '-framework', 'GStreamer'],\n 'include_dirs': [join(f_path, 'Headers')]}\n\n sdl2_valid = True\n sdl2_flags = {\n 'extra_link_args': [\n '-Xlinker', '-headerpad',\n '-Xlinker', '190'],\n 'include_dirs': []\n }\n for name in ('SDL2', 'SDL2_ttf', 'SDL2_image', 'SDL2_mixer'):\n f_path = '/Library/Frameworks/{}.framework'.format(name)\n if not exists(f_path):\n print('Missing framework {}'.format(f_path))\n sdl2_valid = False\n continue\n sdl2_flags['extra_link_args'] += ['-framework', name]\n sdl2_flags['include_dirs'] += [join(f_path, 'Headers')]\n print('Found sdl2 frameworks: {}'.format(f_path))\n\n if not sdl2_valid:\n c_options['use_sdl2'] = False\n print('Deactivate SDL2 compilation due to missing frameworks')\n else:\n c_options['use_sdl2'] = True\n print('Activate SDL2 compilation')\n\n else:\n # use pkg-config approach instead\n gst_flags = pkgconfig('gstreamer-1.0')\n if 
'libraries' in gst_flags:\n c_options['use_gstreamer'] = True\n sdl2_flags = pkgconfig('sdl2', 'SDL2_ttf', 'SDL2_image', 'SDL2_mixer')\n if 'libraries' in sdl2_flags:\n c_options['use_sdl2'] = True\n\n if c_options['use_sdl2']:\n print('SDL2 compilation enabled, deactivate 1.x')\n c_options['use_sdl'] = False\n\n\n# -----------------------------------------------------------------------------\n# declare flags\n\n\ndef get_modulename_from_file(filename):\n filename = filename.replace(sep, '/')\n pyx = '.'.join(filename.split('.')[:-1])\n pyxl = pyx.split('/')\n while pyxl[0] != 'kivy':\n pyxl.pop(0)\n if pyxl[1] == 'kivy':\n pyxl.pop(0)\n return '.'.join(pyxl)\n\n\ndef expand(*args):\n return join(dirname(__file__), 'kivy', *args)\n\n\nclass CythonExtension(Extension):\n\n def __init__(self, *args, **kwargs):\n Extension.__init__(self, *args, **kwargs)\n self.cython_directives = {\n 'c_string_encoding': 'utf-8',\n 'profile': 'USE_PROFILE' in environ,\n 'embedsignature': 'USE_EMBEDSIGNATURE' in environ}\n # XXX with pip, setuptools is imported before distutils, and change\n # our pyx to c, then, cythonize doesn't happen. So force again our\n # sources\n self.sources = args[1]\n\n\ndef merge(d1, *args):\n d1 = deepcopy(d1)\n for d2 in args:\n for key, value in d2.items():\n value = deepcopy(value)\n if key in d1:\n d1[key].extend(value)\n else:\n d1[key] = value\n return d1\n\n\ndef determine_base_flags():\n flags = {\n 'libraries': [],\n 'include_dirs': [],\n 'extra_link_args': [],\n 'extra_compile_args': []}\n if c_options['use_ios']:\n sysroot = environ.get('IOSSDKROOT', environ.get('SDKROOT'))\n if not sysroot:\n raise Exception('IOSSDKROOT is not set')\n flags['include_dirs'] += [sysroot]\n flags['extra_compile_args'] += ['-isysroot', sysroot]\n flags['extra_link_args'] += ['-isysroot', sysroot]\n elif platform == 'darwin':\n v = os.uname()\n if v[2] >= '13.0.0':\n # use xcode-select to search on the right Xcode path\n # XXX use the best SDK available instead of a specific one\n import platform as _platform\n xcode_dev = getoutput('xcode-select -p').splitlines()[0]\n sdk_mac_ver = '.'.join(_platform.mac_ver()[0].split('.')[:2])\n print('Xcode detected at {}, and using MacOSX{} sdk'.format(\n xcode_dev, sdk_mac_ver))\n sysroot = join(xcode_dev.decode('utf-8'),\n 'Platforms/MacOSX.platform/Developer/SDKs',\n 'MacOSX{}.sdk'.format(sdk_mac_ver),\n 'System/Library/Frameworks')\n else:\n sysroot = ('/System/Library/Frameworks/'\n 'ApplicationServices.framework/Frameworks')\n flags['extra_compile_args'] += ['-F%s' % sysroot]\n flags['extra_link_args'] += ['-F%s' % sysroot]\n return flags\n\n\ndef determine_gl_flags():\n flags = {'libraries': []}\n if platform == 'win32':\n flags['libraries'] = ['opengl32']\n elif platform == 'ios':\n flags['libraries'] = ['GLESv2']\n flags['extra_link_args'] = ['-framework', 'OpenGLES']\n elif platform == 'darwin':\n flags['extra_link_args'] = ['-framework', 'OpenGL', '-arch', osx_arch]\n flags['extra_compile_args'] = ['-arch', osx_arch]\n elif platform.startswith('freebsd'):\n flags['include_dirs'] = ['/usr/local/include']\n flags['extra_link_args'] = ['-L', '/usr/local/lib']\n flags['libraries'] = ['GL']\n elif platform.startswith('openbsd'):\n flags['include_dirs'] = ['/usr/X11R6/include']\n flags['extra_link_args'] = ['-L', '/usr/X11R6/lib']\n flags['libraries'] = ['GL']\n elif platform == 'android':\n flags['include_dirs'] = [join(ndkplatform, 'usr', 'include')]\n flags['extra_link_args'] = ['-L', join(ndkplatform, 'usr', 'lib')]\n flags['libraries'] = 
['GLESv2']\n elif platform == 'rpi':\n flags['include_dirs'] = ['/opt/vc/include',\n '/opt/vc/include/interface/vcos/pthreads',\n '/opt/vc/include/interface/vmcs_host/linux']\n flags['library_dirs'] = ['/opt/vc/lib']\n flags['libraries'] = ['bcm_host', 'EGL', 'GLESv2']\n else:\n flags['libraries'] = ['GL']\n if c_options['use_glew']:\n if platform == 'win32':\n flags['libraries'] += ['glew32']\n else:\n flags['libraries'] += ['GLEW']\n return flags\n\n\ndef determine_sdl():\n flags = {}\n if not c_options['use_sdl']:\n return flags\n\n flags['libraries'] = ['SDL', 'SDL_ttf', 'freetype', 'z', 'bz2']\n flags['include_dirs'] = []\n flags['extra_link_args'] = []\n flags['extra_compile_args'] = []\n\n # Paths as per homebrew (modified formula to use hg checkout)\n if c_options['use_ios']:\n # Note: on IOS, SDL is already loaded by the launcher/main.m\n # So if we add it here, it will just complain about duplicate\n # symbol, cause libSDL.a would be included in main.m binary +\n # text_sdlttf.so\n # At the result, we are linking without SDL explicitly, and add\n # -undefined dynamic_lookup\n # (/tito)\n flags['libraries'] = ['SDL_ttf', 'freetype', 'bz2']\n flags['include_dirs'] += [\n join(kivy_ios_root, 'build', 'include'),\n join(kivy_ios_root, 'build', 'include', 'SDL'),\n join(kivy_ios_root, 'build', 'include', 'freetype')]\n flags['extra_link_args'] += [\n '-L', join(kivy_ios_root, 'build', 'lib'),\n '-undefined', 'dynamic_lookup']\n else:\n flags['include_dirs'] = ['/usr/local/include/SDL']\n flags['extra_link_args'] += ['-L/usr/local/lib/']\n\n if platform == 'ios':\n flags['extra_link_args'] += [\n '-framework', 'Foundation',\n '-framework', 'UIKit',\n '-framework', 'AudioToolbox',\n '-framework', 'CoreGraphics',\n '-framework', 'QuartzCore',\n '-framework', 'MobileCoreServices',\n '-framework', 'ImageIO']\n elif platform == 'darwin':\n flags['extra_link_args'] += [\n '-framework', 'ApplicationServices']\n return flags\n\n\ndef determine_sdl2():\n flags = {}\n if not c_options['use_sdl2']:\n return flags\n\n sdl2_path = environ.get('KIVY_SDL2_PATH', None)\n\n if sdl2_flags and not sdl2_path:\n return sdl2_flags\n\n # no pkgconfig info, or we want to use a specific sdl2 path, so perform\n # manual configuration\n flags['libraries'] = ['SDL2', 'SDL2_ttf', 'SDL2_image', 'SDL2_mixer']\n flags['include_dirs'] = ([sdl2_path] if sdl2_path else\n ['/usr/local/include/SDL2', '/usr/include/SDL2'])\n\n flags['extra_link_args'] = []\n flags['extra_compile_args'] = []\n flags['extra_link_args'] += (['-L' + sdl2_path] if sdl2_path else\n ['-L/usr/local/lib/'])\n\n # ensure headers for all the SDL2 and sub libraries are available\n libs_to_check = ['SDL', 'SDL_mixer', 'SDL_ttf', 'SDL_image']\n can_compile = True\n for lib in libs_to_check:\n found = False\n for d in flags['include_dirs']:\n fn = join(d, '{}.h'.format(lib))\n if exists(fn):\n found = True\n print('SDL2: found {} header at {}'.format(lib, fn))\n break\n\n if not found:\n print('SDL2: missing sub library {}'.format(lib))\n can_compile = False\n\n if not can_compile:\n c_options['use_sdl2'] = False\n return {}\n\n return flags\n\n\nbase_flags = determine_base_flags()\ngl_flags = determine_gl_flags()\n\n# -----------------------------------------------------------------------------\n# sources to compile\n# all the dependencies have been found manually with:\n# grep -inr -E '(cimport|include)' kivy/graphics/context_instructions.{pxd,pyx}\ngraphics_dependencies = {\n 'gl_redirect.h': ['common_subset.h'],\n 'c_opengl.pxd': ['config.pxi', 
'gl_redirect.h'],\n 'buffer.pyx': ['common.pxi'],\n 'context.pxd': [\n 'instructions.pxd', 'texture.pxd', 'vbo.pxd',\n 'c_opengl.pxd', 'c_opengl_debug.pxd'],\n 'c_opengl_debug.pyx': ['common.pxi', 'c_opengl.pxd'],\n 'compiler.pxd': ['instructions.pxd'],\n 'compiler.pyx': ['context_instructions.pxd'],\n 'context_instructions.pxd': [\n 'transformation.pxd', 'instructions.pxd', 'texture.pxd'],\n 'fbo.pxd': ['c_opengl.pxd', 'instructions.pxd', 'texture.pxd'],\n 'fbo.pyx': [\n 'config.pxi', 'opcodes.pxi', 'transformation.pxd', 'context.pxd',\n 'c_opengl_debug.pxd'],\n 'gl_instructions.pyx': [\n 'config.pxi', 'opcodes.pxi', 'c_opengl.pxd', 'c_opengl_debug.pxd',\n 'instructions.pxd'],\n 'instructions.pxd': [\n 'vbo.pxd', 'context_instructions.pxd', 'compiler.pxd', 'shader.pxd',\n 'texture.pxd', '../_event.pxd'],\n 'instructions.pyx': [\n 'config.pxi', 'opcodes.pxi', 'c_opengl.pxd', 'c_opengl_debug.pxd',\n 'context.pxd', 'common.pxi', 'vertex.pxd', 'transformation.pxd'],\n 'opengl.pyx': ['config.pxi', 'common.pxi', 'c_opengl.pxd', 'gl_redirect.h'],\n 'opengl_utils.pyx': ['opengl_utils_def.pxi', 'c_opengl.pxd'],\n 'shader.pxd': ['c_opengl.pxd', 'transformation.pxd', 'vertex.pxd'],\n 'shader.pyx': [\n 'config.pxi', 'common.pxi', 'c_opengl.pxd', 'c_opengl_debug.pxd',\n 'vertex.pxd', 'transformation.pxd', 'context.pxd'],\n 'stencil_instructions.pxd': ['instructions.pxd'],\n 'stencil_instructions.pyx': [\n 'config.pxi', 'opcodes.pxi', 'c_opengl.pxd', 'c_opengl_debug.pxd'],\n 'svg.pyx': ['config.pxi', 'common.pxi', 'texture.pxd', 'instructions.pxd',\n 'vertex_instructions.pxd', 'tesselator.pxd'],\n 'texture.pxd': ['c_opengl.pxd'],\n 'texture.pyx': [\n 'config.pxi', 'common.pxi', 'opengl_utils_def.pxi', 'context.pxd',\n 'c_opengl.pxd', 'c_opengl_debug.pxd', 'opengl_utils.pxd',\n 'img_tools.pxi'],\n 'vbo.pxd': ['buffer.pxd', 'c_opengl.pxd', 'vertex.pxd'],\n 'vbo.pyx': [\n 'config.pxi', 'common.pxi', 'c_opengl_debug.pxd', 'context.pxd',\n 'instructions.pxd', 'shader.pxd'],\n 'vertex.pxd': ['c_opengl.pxd'],\n 'vertex.pyx': ['config.pxi', 'common.pxi'],\n 'vertex_instructions.pyx': [\n 'config.pxi', 'common.pxi', 'vbo.pxd', 'vertex.pxd', 'instructions.pxd',\n 'vertex_instructions.pxd',\n 'c_opengl.pxd', 'c_opengl_debug.pxd', 'texture.pxd',\n 'vertex_instructions_line.pxi'],\n 'vertex_instructions_line.pxi': ['stencil_instructions.pxd']}\n\nsources = {\n '_event.pyx': merge(base_flags, {'depends': ['properties.pxd']}),\n 'properties.pyx': merge(base_flags, {'depends': ['_event.pxd']}),\n 'graphics/buffer.pyx': base_flags,\n 'graphics/context.pyx': merge(base_flags, gl_flags),\n 'graphics/c_opengl_debug.pyx': merge(base_flags, gl_flags),\n 'graphics/compiler.pyx': merge(base_flags, gl_flags),\n 'graphics/context_instructions.pyx': merge(base_flags, gl_flags),\n 'graphics/fbo.pyx': merge(base_flags, gl_flags),\n 'graphics/gl_instructions.pyx': merge(base_flags, gl_flags),\n 'graphics/instructions.pyx': merge(base_flags, gl_flags),\n 'graphics/opengl.pyx': merge(base_flags, gl_flags),\n 'graphics/opengl_utils.pyx': merge(base_flags, gl_flags),\n 'graphics/shader.pyx': merge(base_flags, gl_flags),\n 'graphics/stencil_instructions.pyx': merge(base_flags, gl_flags),\n 'graphics/texture.pyx': merge(base_flags, gl_flags),\n 'graphics/transformation.pyx': merge(base_flags, gl_flags),\n 'graphics/vbo.pyx': merge(base_flags, gl_flags),\n 'graphics/vertex.pyx': merge(base_flags, gl_flags),\n 'graphics/vertex_instructions.pyx': merge(base_flags, gl_flags),\n 'core/text/text_layout.pyx': base_flags,\n 
'graphics/tesselator.pyx': merge(base_flags, {\n 'include_dirs': ['kivy/lib/libtess2/Include'],\n 'c_depends': [\n 'lib/libtess2/Source/bucketalloc.c',\n 'lib/libtess2/Source/dict.c',\n 'lib/libtess2/Source/geom.c',\n 'lib/libtess2/Source/mesh.c',\n 'lib/libtess2/Source/priorityq.c',\n 'lib/libtess2/Source/sweep.c',\n 'lib/libtess2/Source/tess.c'\n ]\n }),\n 'graphics/svg.pyx': merge(base_flags, gl_flags)\n}\n\nif c_options['use_sdl']:\n sdl_flags = determine_sdl()\n sources['core/window/sdl.pyx'] = merge(\n base_flags, gl_flags, sdl_flags)\n sources['core/text/text_sdlttf.pyx'] = merge(\n base_flags, gl_flags, sdl_flags)\n sources['core/audio/audio_sdl.pyx'] = merge(\n base_flags, sdl_flags)\n\nif c_options['use_sdl2']:\n sdl2_flags = determine_sdl2()\n if sdl2_flags:\n sources['core/window/_window_sdl2.pyx'] = merge(\n base_flags, gl_flags, sdl2_flags)\n sources['core/image/_img_sdl2.pyx'] = merge(\n base_flags, gl_flags, sdl2_flags)\n sources['core/text/_text_sdl2.pyx'] = merge(\n base_flags, gl_flags, sdl2_flags)\n sources['core/clipboard/_clipboard_sdl2.pyx'] = merge(\n base_flags, gl_flags, sdl2_flags)\n\nif platform in ('darwin', 'ios'):\n # activate ImageIO provider for our core image\n if platform == 'ios':\n osx_flags = {'extra_link_args': [\n '-framework', 'Foundation',\n '-framework', 'UIKit',\n '-framework', 'AudioToolbox',\n '-framework', 'CoreGraphics',\n '-framework', 'QuartzCore',\n '-framework', 'ImageIO',\n '-framework', 'Accelerate']}\n else:\n osx_flags = {'extra_link_args': [\n '-framework', 'ApplicationServices']}\n sources['core/image/img_imageio.pyx'] = merge(\n base_flags, osx_flags)\n\nif c_options['use_avfoundation']:\n import platform as _platform\n mac_ver = [int(x) for x in _platform.mac_ver()[0].split('.')[:2]]\n if mac_ver >= [10, 7]:\n osx_flags = {\n 'extra_link_args': ['-framework', 'AVFoundation'],\n 'extra_compile_args': ['-ObjC++'],\n 'depends': ['core/camera/camera_avfoundation_implem.m']}\n sources['core/camera/camera_avfoundation.pyx'] = merge(\n base_flags, osx_flags)\n else:\n print('AVFoundation cannot be used, OSX >= 10.7 is required')\n\nif c_options['use_rpi']:\n sources['lib/vidcore_lite/egl.pyx'] = merge(\n base_flags, gl_flags)\n sources['lib/vidcore_lite/bcm.pyx'] = merge(\n base_flags, gl_flags)\n\nif c_options['use_x11']:\n sources['core/window/window_x11.pyx'] = merge(\n base_flags, gl_flags, {\n # FIXME add an option to depend on them but not compile them\n # cause keytab is included in core, and core is included in\n # window_x11\n #\n #'depends': [\n # 'core/window/window_x11_keytab.c',\n # 'core/window/window_x11_core.c'],\n 'libraries': ['Xrender', 'X11']})\n\nif c_options['use_gstreamer']:\n sources['lib/gstplayer/_gstplayer.pyx'] = merge(\n base_flags, gst_flags, {\n 'depends': ['lib/gstplayer/_gstplayer.h']})\n\n\n# -----------------------------------------------------------------------------\n# extension modules\n\ndef get_dependencies(name, deps=None):\n if deps is None:\n deps = []\n for dep in graphics_dependencies.get(name, []):\n if dep not in deps:\n deps.append(dep)\n get_dependencies(dep, deps)\n return deps\n\n\ndef resolve_dependencies(fn, depends):\n fn = basename(fn)\n deps = []\n get_dependencies(fn, deps)\n get_dependencies(fn.replace('.pyx', '.pxd'), deps)\n return [expand('graphics', x) for x in deps]\n\n\ndef get_extensions_from_sources(sources):\n ext_modules = []\n if environ.get('KIVY_FAKE_BUILDEXT'):\n print('Fake build_ext asked, will generate only .h/.c')\n return ext_modules\n for pyx, flags in 
sources.items():\n is_graphics = pyx.startswith('graphics')\n pyx = expand(pyx)\n depends = [expand(x) for x in flags.pop('depends', [])]\n c_depends = [expand(x) for x in flags.pop('c_depends', [])]\n if not have_cython:\n pyx = '%s.c' % pyx[:-4]\n if is_graphics:\n depends = resolve_dependencies(pyx, depends)\n f_depends = [x for x in depends if x.rsplit('.', 1)[-1] in (\n 'c', 'cpp', 'm')]\n module_name = get_modulename_from_file(pyx)\n flags_clean = {'depends': depends}\n for key, value in flags.items():\n if len(value):\n flags_clean[key] = value\n ext_modules.append(CythonExtension(module_name,\n [pyx] + f_depends + c_depends, **flags_clean))\n return ext_modules\n\next_modules = get_extensions_from_sources(sources)\n\n# -----------------------------------------------------------------------------\n# automatically detect data files\ndata_file_prefix = 'share/kivy-'\nexamples = {}\nexamples_allowed_ext = ('readme', 'py', 'wav', 'png', 'jpg', 'svg', 'json',\n 'avi', 'gif', 'txt', 'ttf', 'obj', 'mtl', 'kv')\nfor root, subFolders, files in walk('examples'):\n for fn in files:\n ext = fn.split('.')[-1].lower()\n if ext not in examples_allowed_ext:\n continue\n filename = join(root, fn)\n directory = '%s%s' % (data_file_prefix, dirname(filename))\n if not directory in examples:\n examples[directory] = []\n examples[directory].append(filename)\n\n# -----------------------------------------------------------------------------\n# setup !\nsetup(\n name='Kivy',\n version=kivy.__version__,\n author='Kivy Crew',\n author_email='[email protected]',\n url='http://kivy.org/',\n license='MIT',\n description=(\n 'A software library for rapid development of '\n 'hardware-accelerated multitouch applications.'),\n ext_modules=ext_modules,\n cmdclass=cmdclass,\n packages=[\n 'kivy',\n 'kivy.adapters',\n 'kivy.core',\n 'kivy.core.audio',\n 'kivy.core.camera',\n 'kivy.core.clipboard',\n 'kivy.core.image',\n 'kivy.core.gl',\n 'kivy.core.spelling',\n 'kivy.core.text',\n 'kivy.core.video',\n 'kivy.core.window',\n 'kivy.effects',\n 'kivy.ext',\n 'kivy.graphics',\n 'kivy.garden',\n 'kivy.input',\n 'kivy.input.postproc',\n 'kivy.input.providers',\n 'kivy.lib',\n 'kivy.lib.osc',\n 'kivy.lib.gstplayer',\n 'kivy.lib.vidcore_lite',\n 'kivy.modules',\n 'kivy.network',\n 'kivy.storage',\n 'kivy.tools',\n 'kivy.tools.packaging',\n 'kivy.tools.packaging.pyinstaller_hooks',\n 'kivy.tools.highlight',\n 'kivy.extras',\n 'kivy.tools.extensions',\n 'kivy.uix', ],\n package_dir={'kivy': 'kivy'},\n package_data={'kivy': [\n '*.pxd',\n '*.pxi',\n 'core/text/*.pxd',\n 'core/text/*.pxi',\n 'graphics/*.pxd',\n 'graphics/*.pxi',\n 'lib/vidcore_lite/*.pxd',\n 'lib/vidcore_lite/*.pxi',\n 'data/*.kv',\n 'data/*.json',\n 'data/fonts/*.ttf',\n 'data/images/*.png',\n 'data/images/*.jpg',\n 'data/images/*.gif',\n 'data/images/*.atlas',\n 'data/keyboards/*.json',\n 'data/logo/*.png',\n 'data/glsl/*.png',\n 'data/glsl/*.vs',\n 'data/glsl/*.fs',\n 'tools/highlight/*.vim',\n 'tools/highlight/*.el',\n 'tools/packaging/README.txt',\n 'tools/packaging/win32/kivy.bat',\n 'tools/packaging/win32/kivyenv.sh',\n 'tools/packaging/win32/README.txt',\n 'tools/packaging/osx/Info.plist',\n 'tools/packaging/osx/InfoPlist.strings',\n 'tools/packaging/osx/kivy.sh']},\n data_files=list(examples.items()),\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: MacOS X',\n 'Environment :: Win32 (MS Windows)',\n 'Environment :: X11 Applications',\n 'Intended Audience :: Developers',\n 'Intended Audience :: End Users/Desktop',\n 
'Intended Audience :: Information Technology',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: MIT License',\n 'Natural Language :: English',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX :: BSD :: FreeBSD',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Topic :: Artistic Software',\n 'Topic :: Games/Entertainment',\n 'Topic :: Multimedia :: Graphics :: 3D Rendering',\n 'Topic :: Multimedia :: Graphics :: Capture :: Digital Camera',\n 'Topic :: Multimedia :: Graphics :: Presentation',\n 'Topic :: Multimedia :: Graphics :: Viewers',\n 'Topic :: Multimedia :: Sound/Audio :: Players :: MP3',\n 'Topic :: Multimedia :: Video :: Display',\n 'Topic :: Scientific/Engineering :: Human Machine Interfaces',\n 'Topic :: Scientific/Engineering :: Visualization',\n 'Topic :: Software Development :: Libraries :: Application Frameworks',\n 'Topic :: Software Development :: User Interfaces'],\n dependency_links=[\n 'https://github.com/kivy-garden/garden/archive/master.zip'],\n install_requires=['Kivy-Garden==0.1.1'])\n", "path": "setup.py"}]} |
gh_patches_debug_1196 | rasdani/github-patches | git_diff | lutris__lutris-4948 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Feature Request] Make Locale dropdown list searchable
On Fedora and OpenSUSE, all locales are enabled by default. There are at least 100 of them, probably more. If you want to play some games in a certain locale, like English, and other games in Japanese, you can't set a global preference. It would be a lot easier if you could just search the locale dropdown to find the locale you need.
There is also no way that I know of to remove locales in OpenSUSE. `locale-gen` isn't even installed, there is no `/etc/locale.gen` file, and I only have a single language configured in YaST Sysconfig. I really don't think you're meant to disable locales at all in OpenSUSE.
Here's an idea of what the screen looks like on OpenSUSE:

--- END ISSUE ---
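As a rough sense of scale for what the reporter describes, the length of that dropdown can be checked directly on the host. The snippet below is only an illustration and is not Lutris code: it assumes a glibc-style `locale -a` is available, whereas Lutris itself builds its list through `system.get_locale_list()` in the file shown further down.

```python
# Illustration only (not Lutris code): count the locales the host exposes.
# Assumes `locale -a` exists (glibc systems); Lutris goes through
# lutris.util.system.get_locale_list() instead.
import subprocess

locales = subprocess.run(
    ["locale", "-a"], capture_output=True, text=True, check=True
).stdout.split()
print(f"{len(locales)} locales reported by the system")
print([loc for loc in locales if "utf8" in loc][:10])  # a few UTF-8 entries
```

On the distributions mentioned above this typically reports well over a hundred entries, which is what makes an unsearchable dropdown painful.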
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `lutris/sysoptions.py`
Content:
```
1 """Options list for system config."""
2 import os
3 from collections import OrderedDict
4 from gettext import gettext as _
5
6 from lutris import runners
7 from lutris.util import linux, system
8 from lutris.util.display import DISPLAY_MANAGER, SCREEN_SAVER_INHIBITOR, USE_DRI_PRIME
9 from lutris.util.system import get_vk_icd_file_sets, get_vulkan_gpu_name
10
11
12 def get_resolution_choices():
13 """Return list of available resolutions as label, value tuples
14 suitable for inclusion in drop-downs.
15 """
16 resolutions = DISPLAY_MANAGER.get_resolutions()
17 resolution_choices = list(zip(resolutions, resolutions))
18 resolution_choices.insert(0, (_("Keep current"), "off"))
19 return resolution_choices
20
21
22 def get_locale_choices():
23 """Return list of available locales as label, value tuples
24 suitable for inclusion in drop-downs.
25 """
26 locales = system.get_locale_list()
27
28 # adds "(recommended)" string to utf8 locales
29 locales_humanized = locales.copy()
30 for index, locale in enumerate(locales_humanized):
31 if "utf8" in locale:
32 locales_humanized[index] += " " + _("(recommended)")
33
34 locale_choices = list(zip(locales_humanized, locales))
35 locale_choices.insert(0, (_("System"), ""))
36
37 return locale_choices
38
39
40 def get_output_choices():
41 """Return list of outputs for drop-downs"""
42 displays = DISPLAY_MANAGER.get_display_names()
43 output_choices = list(zip(displays, displays))
44 output_choices.insert(0, (_("Off"), "off"))
45 output_choices.insert(1, (_("Primary"), "primary"))
46 return output_choices
47
48
49 def get_output_list():
50 """Return a list of output with their index.
51 This is used to indicate to SDL 1.2 which monitor to use.
52 """
53 choices = [(_("Off"), "off")]
54 displays = DISPLAY_MANAGER.get_display_names()
55 for index, output in enumerate(displays):
56 # Display name can't be used because they might not be in the right order
57 # Using DISPLAYS to get the number of connected monitors
58 choices.append((output, str(index)))
59 return choices
60
61
62 def get_optirun_choices():
63 """Return menu choices (label, value) for Optimus"""
64 choices = [(_("Off"), "off")]
65 if system.find_executable("primusrun"):
66 choices.append(("primusrun", "primusrun"))
67 if system.find_executable("optirun"):
68 choices.append(("optirun/virtualgl", "optirun"))
69 if system.find_executable("pvkrun"):
70 choices.append(("primus vk", "pvkrun"))
71 return choices
72
73
74 def get_vk_icd_choices():
75 """Return available Vulkan ICD loaders"""
76 # fallback in case any ICDs don't match a known type
77 icd_file_sets = get_vk_icd_file_sets()
78
79 intel_files = ":".join(icd_file_sets["intel"])
80 amdradv_files = ":".join(icd_file_sets["amdradv"])
81 nvidia_files = ":".join(icd_file_sets["nvidia"])
82 amdvlk_files = ":".join(icd_file_sets["amdvlk"])
83 amdvlkpro_files = ":".join(icd_file_sets["amdvlkpro"])
84 unknown_files = ":".join(icd_file_sets["unknown"])
85
86 # default choice should always be blank so the env var gets left as is
87 # This ensures Lutris doesn't change the vulkan loader behavior unless you select
88 # a specific ICD from the list, to avoid surprises
89 choices = [("Unspecified", "")]
90
91 if intel_files:
92 choices.append(("Intel Open Source (MESA: ANV)", intel_files))
93 if amdradv_files:
94 choices.append(("AMD RADV Open Source (MESA: RADV)", amdradv_files))
95 if nvidia_files:
96 choices.append(("Nvidia Proprietary", nvidia_files))
97 if amdvlk_files:
98 if not amdvlkpro_files:
99 choices.append(("AMDVLK/AMDGPU-PRO Proprietary", amdvlk_files))
100 else:
101 choices.append(("AMDVLK Open source", amdvlk_files))
102 if amdvlkpro_files:
103 choices.append(("AMDGPU-PRO Proprietary", amdvlkpro_files))
104 if unknown_files:
105 choices.append(("Unknown Vendor", unknown_files))
106
107 choices = [(prefix + ": " + get_vulkan_gpu_name(files, USE_DRI_PRIME), files) for prefix, files in choices]
108
109 return choices
110
111
112 system_options = [ # pylint: disable=invalid-name
113 {
114 "section": "Lutris",
115 "option": "game_path",
116 "type": "directory_chooser",
117 "label": _("Default installation folder"),
118 "default": os.path.expanduser("~/Games"),
119 "scope": ["runner", "system"],
120 "help": _("The default folder where you install your games.")
121 },
122 {
123 "section": "Lutris",
124 "option": "disable_runtime",
125 "type": "bool",
126 "label": _("Disable Lutris Runtime"),
127 "default": False,
128 "help": _("The Lutris Runtime loads some libraries before running the "
129 "game, which can cause some incompatibilities in some cases. "
130 "Check this option to disable it."),
131 },
132 {
133 "section": "Lutris",
134 "option": "prefer_system_libs",
135 "type": "bool",
136 "label": _("Prefer system libraries"),
137 "default": True,
138 "help": _("When the runtime is enabled, prioritize the system libraries"
139 " over the provided ones."),
140 },
141
142 {
143 "section": "Gamescope",
144 "option": "gamescope",
145 "type": "bool",
146 "label": _("Enable Gamescope"),
147 "default": False,
148 "condition": bool(system.find_executable("gamescope")) and linux.LINUX_SYSTEM.nvidia_gamescope_support(),
149 "help": _("Use gamescope to draw the game window isolated from your desktop.\n"
150 "Toggle fullscreen: Super + F"),
151 },
152 {
153 "section": "Gamescope",
154 "option": "gamescope_force_grab_cursor",
155 "type": "bool",
156 "label": _("Relative Mouse Mode"),
157 "advanced": True,
158 "default": False,
159 "condition": bool(system.find_executable("gamescope")),
160 "help": _("Always use relative mouse mode instead of flipping\n"
161 "dependent on cursor visibility (--force-grab-cursor).\n"
162 "(Since gamescope git commit 054458f, Jan 12, 2023)"),
163 },
164 {
165 "section": "Gamescope",
166 "option": "gamescope_output_res",
167 "type": "choice_with_entry",
168 "label": _("Output Resolution"),
169 "choices": DISPLAY_MANAGER.get_resolutions,
170 "advanced": True,
171 "condition": bool(system.find_executable("gamescope")),
172 "help": _("Set the resolution used by gamescope (-W, -H).\n"
173 "Resizing the gamescope window will update these settings.\n"
174 "\n"
175 "<b>Empty string:</b> Disabled\n"
176 "<b>Custom Resolutions:</b> (width)x(height)"),
177 },
178 {
179 "section": "Gamescope",
180 "option": "gamescope_game_res",
181 "type": "choice_with_entry",
182 "label": _("Game Resolution"),
183 "advanced": True,
184 "choices": DISPLAY_MANAGER.get_resolutions,
185 "condition": bool(system.find_executable("gamescope")),
186 "help": _("Set the maximum resolution used by the game (-w, -h).\n"
187 "\n"
188 "<b>Empty string:</b> Disabled\n"
189 "<b>Custom Resolutions:</b> (width)x(height)"),
190 },
191 {
192 "section": "Gamescope",
193 "option": "gamescope_window_mode",
194 "label": _("Window Mode"),
195 "advanced": True,
196 "type": "choice",
197 "choices": (
198 (_("Fullscreen"), "-f"),
199 (_("Windowed"), ""),
200 (_("Borderless"), "-b"),
201 ),
202 "default": "-f",
203 "condition": bool(system.find_executable("gamescope")),
204 "help": _("Run gamescope in fullscreen, windowed or borderless mode\n"
205 "Toggle fullscreen (-f) : Super + F"),
206 },
207 {
208 "section": "Gamescope",
209 "option": "gamescope_fsr_sharpness",
210 "label": _("FSR Level"),
211 "advanced": True,
212 "type": "string",
213 "condition": bool(system.find_executable("gamescope")),
214 "help": _("Use AMD FidelityFX™ Super Resolution 1.0 for upscaling (-U).\n"
215 "Upscaler sharpness from 0 (max) to 20 (min).\n"
216 "\n"
217 "<b>Empty string:</b> Disabled"),
218 },
219 {
220 "section": "Gamescope",
221 "option": "gamescope_fps_limiter",
222 "label": _("FPS Limiter"),
223 "advanced": True,
224 "type": "string",
225 "condition": bool(system.find_executable("gamescope")),
226 "help": _("Set a frame-rate limit for gamescope specified in frames per second (-r).\n"
227 "\n"
228 "<b>Empty string:</b> Disabled"),
229 },
230 {
231 "section": "Gamescope",
232 "option": "gamescope_flags",
233 "label": _("Custom Settings"),
234 "advanced": True,
235 "type": "string",
236 "condition": bool(system.find_executable("gamescope")),
237 "help": _("Set additional flags for gamescope (if available).\n"
238 "See 'gamescope --help' for a full list of options.\n"
239 "\n"
240 "<b>Empty String:</b> Disabled"),
241 },
242 {
243 "section": "CPU",
244 "option": "single_cpu",
245 "type": "bool",
246 "label": _("Restrict number of cores used"),
247 "default": False,
248 "help": _("Restrict the game to a maximum number of CPU cores."),
249 },
250 {
251 "section": "CPU",
252 "option": "limit_cpu_count",
253 "type": "string",
254 "label": _("Restrict number of cores to"),
255 "default": "1",
256 "help": _("Maximum number of CPU cores to be used, if 'Restrict number of cores used' is turned on."),
257 },
258 {
259 "section": "CPU",
260 "option": "gamemode",
261 "type": "bool",
262 "default": linux.LINUX_SYSTEM.gamemode_available(),
263 "condition": linux.LINUX_SYSTEM.gamemode_available(),
264 "label": _("Enable Feral GameMode"),
265 "help": _("Request a set of optimisations be temporarily applied to the host OS"),
266 },
267 {
268 "section": "Display",
269 "option": "mangohud",
270 "type": "bool",
271 "label": _("FPS counter (MangoHud)"),
272 "default": False,
273 "condition": bool(system.find_executable("mangohud")),
274 "help": _("Display the game's FPS + other information. Requires MangoHud to be installed."),
275 },
276 {
277 "section": "Display",
278 "option": "reset_desktop",
279 "type": "bool",
280 "label": _("Restore resolution on game exit"),
281 "default": False,
282 "help": _("Some games don't restore your screen resolution when \n"
283 "closed or when they crash. This is when this option comes \n"
284 "into play to save your bacon."),
285 },
286 {
287 "section": "Display",
288 "option": "restore_gamma",
289 "type": "bool",
290 "default": False,
291 "label": _("Restore gamma on game exit"),
292 "advanced": True,
293 "help": _("Some games don't correctly restores gamma on exit, making "
294 "your display too bright. Select this option to correct it."),
295 },
296 {
297 "section": "Display",
298 "option": "disable_compositor",
299 "label": _("Disable desktop effects"),
300 "type": "bool",
301 "default": False,
302 "advanced": True,
303 "help": _("Disable desktop effects while game is running, "
304 "reducing stuttering and increasing performance"),
305 },
306 {
307 "section": "Display",
308 "option": "disable_screen_saver",
309 "label": _("Disable screen saver"),
310 "type": "bool",
311 "default": SCREEN_SAVER_INHIBITOR is not None,
312 "advanced": False,
313 "condition": SCREEN_SAVER_INHIBITOR is not None,
314 "help": _("Disable the screen saver while a game is running. "
315 "Requires the screen saver's functionality "
316 "to be exposed over DBus."),
317 },
318
319 {
320 "section": "Display",
321 "option": "fps_limit",
322 "type": "string",
323 "size": "small",
324 "label": _("FPS limit"),
325 "condition": bool(system.find_executable("strangle")),
326 "help": _("Limit the game's FPS using libstrangle"),
327 },
328 {
329 "section": "Display",
330 "option": "sdl_video_fullscreen",
331 "type": "choice",
332 "label": _("SDL 1.2 Fullscreen Monitor"),
333 "choices": get_output_list,
334 "default": "off",
335 "advanced": True,
336 "help": _("Hint SDL 1.2 games to use a specific monitor when going "
337 "fullscreen by setting the SDL_VIDEO_FULLSCREEN "
338 "environment variable"),
339 },
340 {
341 "section": "Display",
342 "option": "display",
343 "type": "choice",
344 "label": _("Turn off monitors except"),
345 "choices": get_output_choices,
346 "default": "off",
347 "advanced": True,
348 "help": _("Only keep the selected screen active while the game is "
349 "running. \n"
350 "This is useful if you have a dual-screen setup, and are \n"
351 "having display issues when running a game in fullscreen."),
352 },
353 {
354 "section": "Display",
355 "option": "resolution",
356 "type": "choice",
357 "label": _("Switch resolution to"),
358 "choices": get_resolution_choices,
359 "default": "off",
360 "help": _("Switch to this screen resolution while the game is running."),
361 },
362 {
363 "section": "Audio",
364 "option": "reset_pulse",
365 "type": "bool",
366 "label": _("Reset PulseAudio"),
367 "default": False,
368 "advanced": True,
369 "condition": system.find_executable("pulseaudio"),
370 "help": _("Restart PulseAudio before launching the game."),
371 },
372 {
373 "section": "Audio",
374 "option": "pulse_latency",
375 "type": "bool",
376 "label": _("Reduce PulseAudio latency"),
377 "default": False,
378 "advanced": True,
379 "condition": system.find_executable("pulseaudio") or system.find_executable("pipewire-pulse"),
380 "help": _("Set the environment variable PULSE_LATENCY_MSEC=60 "
381 "to improve audio quality on some games"),
382 },
383 {
384 "section": "Input",
385 "option": "use_us_layout",
386 "type": "bool",
387 "label": _("Switch to US keyboard layout"),
388 "default": False,
389 "advanced": True,
390 "help": _("Switch to US keyboard QWERTY layout while game is running"),
391 },
392 {
393 "section": "Input",
394 "option": "antimicro_config",
395 "type": "file",
396 "label": _("AntiMicroX Profile"),
397 "advanced": True,
398 "help": _("Path to an AntiMicroX profile file"),
399 },
400
401 {
402 "section": "Input",
403 "option": "sdl_gamecontrollerconfig",
404 "type": "string",
405 "label": _("SDL2 gamepad mapping"),
406 "advanced": True,
407 "help": _("SDL_GAMECONTROLLERCONFIG mapping string or path to a custom "
408 "gamecontrollerdb.txt file containing mappings."),
409 },
410 {
411 "section": "Multi-GPU",
412 "option": "prime",
413 "type": "bool",
414 "default": False,
415 "condition": True,
416 "label": _("Enable NVIDIA Prime Render Offload"),
417 "help": _("If you have the latest NVIDIA driver and the properly patched xorg-server (see "
418 "https://download.nvidia.com/XFree86/Linux-x86_64/435.17/README/primerenderoffload.html"
419 "), you can launch a game on your NVIDIA GPU by toggling this switch. This will apply "
420 "__NV_PRIME_RENDER_OFFLOAD=1 and "
421 "__GLX_VENDOR_LIBRARY_NAME=nvidia environment variables.")
422 },
423 {
424 "section": "Multi-GPU",
425 "option": "dri_prime",
426 "type": "bool",
427 "default": USE_DRI_PRIME,
428 "condition": USE_DRI_PRIME,
429 "label": _("Use discrete graphics"),
430 "advanced": True,
431 "help": _("If you have open source graphic drivers (Mesa), selecting this "
432 "option will run the game with the 'DRI_PRIME=1' environment variable, "
433 "activating your discrete graphic chip for high 3D "
434 "performance."),
435 },
436 {
437 "section": "Multi-GPU",
438 "option": "optimus",
439 "type": "choice",
440 "default": "off",
441 "choices": get_optirun_choices,
442 "label": _("Optimus launcher (NVIDIA Optimus laptops)"),
443 "advanced": True,
444 "help": _("If you have installed the primus or bumblebee packages, "
445 "select what launcher will run the game with the command, "
446 "activating your NVIDIA graphic chip for high 3D "
447 "performance. primusrun normally has better performance, but"
448 "optirun/virtualgl works better for more games."
449 "Primus VK provide vulkan support under bumblebee."),
450 },
451 {
452 "section": "Multi-GPU",
453 "option": "vk_icd",
454 "type": "choice",
455 # Default is "" which does not set the VK_ICD_FILENAMES env var
456 # (Matches "Unspecified" in dropdown)
457 "default": "",
458 "choices": get_vk_icd_choices,
459 "label": _("Vulkan ICD loader"),
460 "advanced": True,
461 "help": _("The ICD loader is a library that is placed between a Vulkan "
462 "application and any number of Vulkan drivers, in order to support "
463 "multiple drivers and the instance-level functionality that works "
464 "across these drivers.")
465 },
466 {
467 "section": "Text based games",
468 "option": "terminal",
469 "label": _("CLI mode"),
470 "type": "bool",
471 "default": False,
472 "advanced": True,
473 "help": _("Enable a terminal for text-based games. "
474 "Only useful for ASCII based games. May cause issues with graphical games."),
475 },
476 {
477 "section": "Text based games",
478 "option": "terminal_app",
479 "label": _("Text based games emulator"),
480 "type": "choice_with_entry",
481 "choices": linux.get_terminal_apps,
482 "default": linux.get_default_terminal(),
483 "advanced": True,
484 "help": _("The terminal emulator used with the CLI mode. "
485 "Choose from the list of detected terminal apps or enter "
486 "the terminal's command or path."),
487 },
488 {
489 "section": "Game execution",
490 "option": "env",
491 "type": "mapping",
492 "label": _("Environment variables"),
493 "help": _("Environment variables loaded at run time"),
494 },
495 {
496 "section": "Game execution",
497 "option": "locale",
498 "type": "choice",
499 "label": _("Locale"),
500 "choices": (
501 get_locale_choices()
502 ),
503 "default": "",
504 "advanced": False,
505 "help": _("Can be used to force certain locale for an app. Fixes encoding issues in legacy software."),
506 },
507 {
508 "section": "Game execution",
509 "option": "prefix_command",
510 "type": "string",
511 "label": _("Command prefix"),
512 "advanced": True,
513 "help": _("Command line instructions to add in front of the game's "
514 "execution command."),
515 },
516 {
517 "section": "Game execution",
518 "option": "manual_command",
519 "type": "file",
520 "label": _("Manual script"),
521 "advanced": True,
522 "help": _("Script to execute from the game's contextual menu"),
523 },
524 {
525 "section": "Game execution",
526 "option": "prelaunch_command",
527 "type": "file",
528 "label": _("Pre-launch script"),
529 "advanced": True,
530 "help": _("Script to execute before the game starts"),
531 },
532 {
533 "section": "Game execution",
534 "option": "prelaunch_wait",
535 "type": "bool",
536 "label": _("Wait for pre-launch script completion"),
537 "advanced": True,
538 "default": False,
539 "help": _("Run the game only once the pre-launch script has exited"),
540 },
541 {
542 "section": "Game execution",
543 "option": "postexit_command",
544 "type": "file",
545 "label": _("Post-exit script"),
546 "advanced": True,
547 "help": _("Script to execute when the game exits"),
548 },
549 {
550 "section": "Game execution",
551 "option": "include_processes",
552 "type": "string",
553 "label": _("Include processes"),
554 "advanced": True,
555 "help": _("What processes to include in process monitoring. "
556 "This is to override the built-in exclude list.\n"
557 "Space-separated list, processes including spaces "
558 "can be wrapped in quotation marks."),
559 },
560 {
561 "section": "Game execution",
562 "option": "exclude_processes",
563 "type": "string",
564 "label": _("Exclude processes"),
565 "advanced": True,
566 "help": _("What processes to exclude in process monitoring. "
567 "For example background processes that stick around "
568 "after the game has been closed.\n"
569 "Space-separated list, processes including spaces "
570 "can be wrapped in quotation marks."),
571 },
572 {
573 "section": "Game execution",
574 "option": "killswitch",
575 "type": "string",
576 "label": _("Killswitch file"),
577 "advanced": True,
578 "help": _("Path to a file which will stop the game when deleted \n"
579 "(usually /dev/input/js0 to stop the game on joystick "
580 "unplugging)"),
581 },
582
583 {
584 "section": "Xephyr (Deprecated, use Gamescope)",
585 "option": "xephyr",
586 "label": _("Use Xephyr"),
587 "type": "choice",
588 "choices": (
589 (_("Off"), "off"),
590 (_("8BPP (256 colors)"), "8bpp"),
591 (_("16BPP (65536 colors)"), "16bpp"),
592 (_("24BPP (16M colors)"), "24bpp"),
593 ),
594 "default": "off",
595 "advanced": True,
596 "help": _("Run program in Xephyr to support 8BPP and 16BPP color modes"),
597 },
598 {
599 "section": "Xephyr (Deprecated, use Gamescope)",
600 "option": "xephyr_resolution",
601 "type": "string",
602 "label": _("Xephyr resolution"),
603 "advanced": True,
604 "help": _("Screen resolution of the Xephyr server"),
605 },
606 {
607 "section": "Xephyr (Deprecated, use Gamescope)",
608 "option": "xephyr_fullscreen",
609 "type": "bool",
610 "label": _("Xephyr Fullscreen"),
611 "default": True,
612 "advanced": True,
613 "help": _("Open Xephyr in fullscreen (at the desktop resolution)"),
614 },
615 ]
616
617
618 def with_runner_overrides(runner_slug):
619 """Return system options updated with overrides from given runner."""
620 options = system_options
621 try:
622 runner = runners.import_runner(runner_slug)
623 except runners.InvalidRunner:
624 return options
625 if not getattr(runner, "system_options_override"):
626 runner = runner()
627 if runner.system_options_override:
628 opts_dict = OrderedDict((opt["option"], opt) for opt in options)
629 for option in runner.system_options_override:
630 key = option["option"]
631 if opts_dict.get(key):
632 opts_dict[key] = opts_dict[key].copy()
633 opts_dict[key].update(option)
634 else:
635 opts_dict[key] = option
636 options = list(opts_dict.values())
637 return options
638
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
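The fix shown in the golden diff below is two small changes to the `locale` option: its widget type becomes `choice_with_search`, and `choices` is given the `get_locale_choices` callable instead of its return value, matching how the file's other dropdowns (for example `get_output_choices`) pass callables, presumably so the list is produced when the option widget is built. As a rough illustration of the idea behind a searchable choice widget (assuming GTK 3 via PyGObject; this is not the actual Lutris `choice_with_search` implementation), a minimal sketch could look like this:

```python
# Illustrative sketch only: a searchable combo box in GTK 3 (PyGObject assumed).
# This is NOT the real Lutris "choice_with_search" widget.
import gi

gi.require_version("Gtk", "3.0")
from gi.repository import Gtk


def build_searchable_combo(get_choices):
    """Return a combo box whose entry filters (label, value) pairs as you type.

    `get_choices` is a callable, mirroring the patch, so the (possibly long)
    locale list is only computed when the widget is actually built.
    """
    store = Gtk.ListStore(str, str)  # columns: label, value
    for label, value in get_choices():
        store.append([label, value])

    combo = Gtk.ComboBox.new_with_model_and_entry(store)
    combo.set_entry_text_column(0)  # display the label column in the entry

    completion = Gtk.EntryCompletion()
    completion.set_model(store)
    completion.set_text_column(0)
    combo.get_child().set_completion(completion)  # an entry combo's child is a Gtk.Entry
    return combo
```

Something like `build_searchable_combo(get_locale_choices)` would then give a combo whose entry narrows the hundred-plus locales as the user types.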
| diff --git a/lutris/sysoptions.py b/lutris/sysoptions.py
--- a/lutris/sysoptions.py
+++ b/lutris/sysoptions.py
@@ -495,10 +495,10 @@
{
"section": "Game execution",
"option": "locale",
- "type": "choice",
+ "type": "choice_with_search",
"label": _("Locale"),
"choices": (
- get_locale_choices()
+ get_locale_choices
),
"default": "",
"advanced": False,
| {"golden_diff": "diff --git a/lutris/sysoptions.py b/lutris/sysoptions.py\n--- a/lutris/sysoptions.py\n+++ b/lutris/sysoptions.py\n@@ -495,10 +495,10 @@\n {\n \"section\": \"Game execution\",\n \"option\": \"locale\",\n- \"type\": \"choice\",\n+ \"type\": \"choice_with_search\",\n \"label\": _(\"Locale\"),\n \"choices\": (\n- get_locale_choices()\n+ get_locale_choices\n ),\n \"default\": \"\",\n \"advanced\": False,\n", "issue": "[Feature Request] Make Locale dropdown list searchable\nOn Fedora and OpenSUSE, all locales are enabled by default. There are at least 100 of them, probably more. If you want to play some games in a certain locale, like English, and other games in Japanese, you can't set a global preference. It would be a lot easier if you could just search the locale dropdown to find the locale you need.\r\n\r\nThere is also no way that I know of to remove locales in OpenSUSE. `locale-gen` isn't even installed, there is no `/etc/locale.gen` file, and I only have a single language configured in YaST Sysconfig. I really don't think you're meant to disable locales at all in OpenSUSE.\r\n\r\nHere's an idea of what the screen looks like on OpenSUSE:\r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"Options list for system config.\"\"\"\nimport os\nfrom collections import OrderedDict\nfrom gettext import gettext as _\n\nfrom lutris import runners\nfrom lutris.util import linux, system\nfrom lutris.util.display import DISPLAY_MANAGER, SCREEN_SAVER_INHIBITOR, USE_DRI_PRIME\nfrom lutris.util.system import get_vk_icd_file_sets, get_vulkan_gpu_name\n\n\ndef get_resolution_choices():\n \"\"\"Return list of available resolutions as label, value tuples\n suitable for inclusion in drop-downs.\n \"\"\"\n resolutions = DISPLAY_MANAGER.get_resolutions()\n resolution_choices = list(zip(resolutions, resolutions))\n resolution_choices.insert(0, (_(\"Keep current\"), \"off\"))\n return resolution_choices\n\n\ndef get_locale_choices():\n \"\"\"Return list of available locales as label, value tuples\n suitable for inclusion in drop-downs.\n \"\"\"\n locales = system.get_locale_list()\n\n # adds \"(recommended)\" string to utf8 locales\n locales_humanized = locales.copy()\n for index, locale in enumerate(locales_humanized):\n if \"utf8\" in locale:\n locales_humanized[index] += \" \" + _(\"(recommended)\")\n\n locale_choices = list(zip(locales_humanized, locales))\n locale_choices.insert(0, (_(\"System\"), \"\"))\n\n return locale_choices\n\n\ndef get_output_choices():\n \"\"\"Return list of outputs for drop-downs\"\"\"\n displays = DISPLAY_MANAGER.get_display_names()\n output_choices = list(zip(displays, displays))\n output_choices.insert(0, (_(\"Off\"), \"off\"))\n output_choices.insert(1, (_(\"Primary\"), \"primary\"))\n return output_choices\n\n\ndef get_output_list():\n \"\"\"Return a list of output with their index.\n This is used to indicate to SDL 1.2 which monitor to use.\n \"\"\"\n choices = [(_(\"Off\"), \"off\")]\n displays = DISPLAY_MANAGER.get_display_names()\n for index, output in enumerate(displays):\n # Display name can't be used because they might not be in the right order\n # Using DISPLAYS to get the number of connected monitors\n choices.append((output, str(index)))\n return choices\n\n\ndef get_optirun_choices():\n \"\"\"Return menu choices (label, value) for Optimus\"\"\"\n choices = [(_(\"Off\"), \"off\")]\n if system.find_executable(\"primusrun\"):\n choices.append((\"primusrun\", \"primusrun\"))\n if system.find_executable(\"optirun\"):\n 
choices.append((\"optirun/virtualgl\", \"optirun\"))\n if system.find_executable(\"pvkrun\"):\n choices.append((\"primus vk\", \"pvkrun\"))\n return choices\n\n\ndef get_vk_icd_choices():\n \"\"\"Return available Vulkan ICD loaders\"\"\"\n # fallback in case any ICDs don't match a known type\n icd_file_sets = get_vk_icd_file_sets()\n\n intel_files = \":\".join(icd_file_sets[\"intel\"])\n amdradv_files = \":\".join(icd_file_sets[\"amdradv\"])\n nvidia_files = \":\".join(icd_file_sets[\"nvidia\"])\n amdvlk_files = \":\".join(icd_file_sets[\"amdvlk\"])\n amdvlkpro_files = \":\".join(icd_file_sets[\"amdvlkpro\"])\n unknown_files = \":\".join(icd_file_sets[\"unknown\"])\n\n # default choice should always be blank so the env var gets left as is\n # This ensures Lutris doesn't change the vulkan loader behavior unless you select\n # a specific ICD from the list, to avoid surprises\n choices = [(\"Unspecified\", \"\")]\n\n if intel_files:\n choices.append((\"Intel Open Source (MESA: ANV)\", intel_files))\n if amdradv_files:\n choices.append((\"AMD RADV Open Source (MESA: RADV)\", amdradv_files))\n if nvidia_files:\n choices.append((\"Nvidia Proprietary\", nvidia_files))\n if amdvlk_files:\n if not amdvlkpro_files:\n choices.append((\"AMDVLK/AMDGPU-PRO Proprietary\", amdvlk_files))\n else:\n choices.append((\"AMDVLK Open source\", amdvlk_files))\n if amdvlkpro_files:\n choices.append((\"AMDGPU-PRO Proprietary\", amdvlkpro_files))\n if unknown_files:\n choices.append((\"Unknown Vendor\", unknown_files))\n\n choices = [(prefix + \": \" + get_vulkan_gpu_name(files, USE_DRI_PRIME), files) for prefix, files in choices]\n\n return choices\n\n\nsystem_options = [ # pylint: disable=invalid-name\n {\n \"section\": \"Lutris\",\n \"option\": \"game_path\",\n \"type\": \"directory_chooser\",\n \"label\": _(\"Default installation folder\"),\n \"default\": os.path.expanduser(\"~/Games\"),\n \"scope\": [\"runner\", \"system\"],\n \"help\": _(\"The default folder where you install your games.\")\n },\n {\n \"section\": \"Lutris\",\n \"option\": \"disable_runtime\",\n \"type\": \"bool\",\n \"label\": _(\"Disable Lutris Runtime\"),\n \"default\": False,\n \"help\": _(\"The Lutris Runtime loads some libraries before running the \"\n \"game, which can cause some incompatibilities in some cases. 
\"\n \"Check this option to disable it.\"),\n },\n {\n \"section\": \"Lutris\",\n \"option\": \"prefer_system_libs\",\n \"type\": \"bool\",\n \"label\": _(\"Prefer system libraries\"),\n \"default\": True,\n \"help\": _(\"When the runtime is enabled, prioritize the system libraries\"\n \" over the provided ones.\"),\n },\n\n {\n \"section\": \"Gamescope\",\n \"option\": \"gamescope\",\n \"type\": \"bool\",\n \"label\": _(\"Enable Gamescope\"),\n \"default\": False,\n \"condition\": bool(system.find_executable(\"gamescope\")) and linux.LINUX_SYSTEM.nvidia_gamescope_support(),\n \"help\": _(\"Use gamescope to draw the game window isolated from your desktop.\\n\"\n \"Toggle fullscreen: Super + F\"),\n },\n {\n \"section\": \"Gamescope\",\n \"option\": \"gamescope_force_grab_cursor\",\n \"type\": \"bool\",\n \"label\": _(\"Relative Mouse Mode\"),\n \"advanced\": True,\n \"default\": False,\n \"condition\": bool(system.find_executable(\"gamescope\")),\n \"help\": _(\"Always use relative mouse mode instead of flipping\\n\"\n \"dependent on cursor visibility (--force-grab-cursor).\\n\"\n \"(Since gamescope git commit 054458f, Jan 12, 2023)\"),\n },\n {\n \"section\": \"Gamescope\",\n \"option\": \"gamescope_output_res\",\n \"type\": \"choice_with_entry\",\n \"label\": _(\"Output Resolution\"),\n \"choices\": DISPLAY_MANAGER.get_resolutions,\n \"advanced\": True,\n \"condition\": bool(system.find_executable(\"gamescope\")),\n \"help\": _(\"Set the resolution used by gamescope (-W, -H).\\n\"\n \"Resizing the gamescope window will update these settings.\\n\"\n \"\\n\"\n \"<b>Empty string:</b> Disabled\\n\"\n \"<b>Custom Resolutions:</b> (width)x(height)\"),\n },\n {\n \"section\": \"Gamescope\",\n \"option\": \"gamescope_game_res\",\n \"type\": \"choice_with_entry\",\n \"label\": _(\"Game Resolution\"),\n \"advanced\": True,\n \"choices\": DISPLAY_MANAGER.get_resolutions,\n \"condition\": bool(system.find_executable(\"gamescope\")),\n \"help\": _(\"Set the maximum resolution used by the game (-w, -h).\\n\"\n \"\\n\"\n \"<b>Empty string:</b> Disabled\\n\"\n \"<b>Custom Resolutions:</b> (width)x(height)\"),\n },\n {\n \"section\": \"Gamescope\",\n \"option\": \"gamescope_window_mode\",\n \"label\": _(\"Window Mode\"),\n \"advanced\": True,\n \"type\": \"choice\",\n \"choices\": (\n (_(\"Fullscreen\"), \"-f\"),\n (_(\"Windowed\"), \"\"),\n (_(\"Borderless\"), \"-b\"),\n ),\n \"default\": \"-f\",\n \"condition\": bool(system.find_executable(\"gamescope\")),\n \"help\": _(\"Run gamescope in fullscreen, windowed or borderless mode\\n\"\n \"Toggle fullscreen (-f) : Super + F\"),\n },\n {\n \"section\": \"Gamescope\",\n \"option\": \"gamescope_fsr_sharpness\",\n \"label\": _(\"FSR Level\"),\n \"advanced\": True,\n \"type\": \"string\",\n \"condition\": bool(system.find_executable(\"gamescope\")),\n \"help\": _(\"Use AMD FidelityFX\u2122 Super Resolution 1.0 for upscaling (-U).\\n\"\n \"Upscaler sharpness from 0 (max) to 20 (min).\\n\"\n \"\\n\"\n \"<b>Empty string:</b> Disabled\"),\n },\n {\n \"section\": \"Gamescope\",\n \"option\": \"gamescope_fps_limiter\",\n \"label\": _(\"FPS Limiter\"),\n \"advanced\": True,\n \"type\": \"string\",\n \"condition\": bool(system.find_executable(\"gamescope\")),\n \"help\": _(\"Set a frame-rate limit for gamescope specified in frames per second (-r).\\n\"\n \"\\n\"\n \"<b>Empty string:</b> Disabled\"),\n },\n {\n \"section\": \"Gamescope\",\n \"option\": \"gamescope_flags\",\n \"label\": _(\"Custom Settings\"),\n \"advanced\": True,\n \"type\": \"string\",\n 
\"condition\": bool(system.find_executable(\"gamescope\")),\n \"help\": _(\"Set additional flags for gamescope (if available).\\n\"\n \"See 'gamescope --help' for a full list of options.\\n\"\n \"\\n\"\n \"<b>Empty String:</b> Disabled\"),\n },\n {\n \"section\": \"CPU\",\n \"option\": \"single_cpu\",\n \"type\": \"bool\",\n \"label\": _(\"Restrict number of cores used\"),\n \"default\": False,\n \"help\": _(\"Restrict the game to a maximum number of CPU cores.\"),\n },\n {\n \"section\": \"CPU\",\n \"option\": \"limit_cpu_count\",\n \"type\": \"string\",\n \"label\": _(\"Restrict number of cores to\"),\n \"default\": \"1\",\n \"help\": _(\"Maximum number of CPU cores to be used, if 'Restrict number of cores used' is turned on.\"),\n },\n {\n \"section\": \"CPU\",\n \"option\": \"gamemode\",\n \"type\": \"bool\",\n \"default\": linux.LINUX_SYSTEM.gamemode_available(),\n \"condition\": linux.LINUX_SYSTEM.gamemode_available(),\n \"label\": _(\"Enable Feral GameMode\"),\n \"help\": _(\"Request a set of optimisations be temporarily applied to the host OS\"),\n },\n {\n \"section\": \"Display\",\n \"option\": \"mangohud\",\n \"type\": \"bool\",\n \"label\": _(\"FPS counter (MangoHud)\"),\n \"default\": False,\n \"condition\": bool(system.find_executable(\"mangohud\")),\n \"help\": _(\"Display the game's FPS + other information. Requires MangoHud to be installed.\"),\n },\n {\n \"section\": \"Display\",\n \"option\": \"reset_desktop\",\n \"type\": \"bool\",\n \"label\": _(\"Restore resolution on game exit\"),\n \"default\": False,\n \"help\": _(\"Some games don't restore your screen resolution when \\n\"\n \"closed or when they crash. This is when this option comes \\n\"\n \"into play to save your bacon.\"),\n },\n {\n \"section\": \"Display\",\n \"option\": \"restore_gamma\",\n \"type\": \"bool\",\n \"default\": False,\n \"label\": _(\"Restore gamma on game exit\"),\n \"advanced\": True,\n \"help\": _(\"Some games don't correctly restores gamma on exit, making \"\n \"your display too bright. Select this option to correct it.\"),\n },\n {\n \"section\": \"Display\",\n \"option\": \"disable_compositor\",\n \"label\": _(\"Disable desktop effects\"),\n \"type\": \"bool\",\n \"default\": False,\n \"advanced\": True,\n \"help\": _(\"Disable desktop effects while game is running, \"\n \"reducing stuttering and increasing performance\"),\n },\n {\n \"section\": \"Display\",\n \"option\": \"disable_screen_saver\",\n \"label\": _(\"Disable screen saver\"),\n \"type\": \"bool\",\n \"default\": SCREEN_SAVER_INHIBITOR is not None,\n \"advanced\": False,\n \"condition\": SCREEN_SAVER_INHIBITOR is not None,\n \"help\": _(\"Disable the screen saver while a game is running. 
\"\n \"Requires the screen saver's functionality \"\n \"to be exposed over DBus.\"),\n },\n\n {\n \"section\": \"Display\",\n \"option\": \"fps_limit\",\n \"type\": \"string\",\n \"size\": \"small\",\n \"label\": _(\"FPS limit\"),\n \"condition\": bool(system.find_executable(\"strangle\")),\n \"help\": _(\"Limit the game's FPS using libstrangle\"),\n },\n {\n \"section\": \"Display\",\n \"option\": \"sdl_video_fullscreen\",\n \"type\": \"choice\",\n \"label\": _(\"SDL 1.2 Fullscreen Monitor\"),\n \"choices\": get_output_list,\n \"default\": \"off\",\n \"advanced\": True,\n \"help\": _(\"Hint SDL 1.2 games to use a specific monitor when going \"\n \"fullscreen by setting the SDL_VIDEO_FULLSCREEN \"\n \"environment variable\"),\n },\n {\n \"section\": \"Display\",\n \"option\": \"display\",\n \"type\": \"choice\",\n \"label\": _(\"Turn off monitors except\"),\n \"choices\": get_output_choices,\n \"default\": \"off\",\n \"advanced\": True,\n \"help\": _(\"Only keep the selected screen active while the game is \"\n \"running. \\n\"\n \"This is useful if you have a dual-screen setup, and are \\n\"\n \"having display issues when running a game in fullscreen.\"),\n },\n {\n \"section\": \"Display\",\n \"option\": \"resolution\",\n \"type\": \"choice\",\n \"label\": _(\"Switch resolution to\"),\n \"choices\": get_resolution_choices,\n \"default\": \"off\",\n \"help\": _(\"Switch to this screen resolution while the game is running.\"),\n },\n {\n \"section\": \"Audio\",\n \"option\": \"reset_pulse\",\n \"type\": \"bool\",\n \"label\": _(\"Reset PulseAudio\"),\n \"default\": False,\n \"advanced\": True,\n \"condition\": system.find_executable(\"pulseaudio\"),\n \"help\": _(\"Restart PulseAudio before launching the game.\"),\n },\n {\n \"section\": \"Audio\",\n \"option\": \"pulse_latency\",\n \"type\": \"bool\",\n \"label\": _(\"Reduce PulseAudio latency\"),\n \"default\": False,\n \"advanced\": True,\n \"condition\": system.find_executable(\"pulseaudio\") or system.find_executable(\"pipewire-pulse\"),\n \"help\": _(\"Set the environment variable PULSE_LATENCY_MSEC=60 \"\n \"to improve audio quality on some games\"),\n },\n {\n \"section\": \"Input\",\n \"option\": \"use_us_layout\",\n \"type\": \"bool\",\n \"label\": _(\"Switch to US keyboard layout\"),\n \"default\": False,\n \"advanced\": True,\n \"help\": _(\"Switch to US keyboard QWERTY layout while game is running\"),\n },\n {\n \"section\": \"Input\",\n \"option\": \"antimicro_config\",\n \"type\": \"file\",\n \"label\": _(\"AntiMicroX Profile\"),\n \"advanced\": True,\n \"help\": _(\"Path to an AntiMicroX profile file\"),\n },\n\n {\n \"section\": \"Input\",\n \"option\": \"sdl_gamecontrollerconfig\",\n \"type\": \"string\",\n \"label\": _(\"SDL2 gamepad mapping\"),\n \"advanced\": True,\n \"help\": _(\"SDL_GAMECONTROLLERCONFIG mapping string or path to a custom \"\n \"gamecontrollerdb.txt file containing mappings.\"),\n },\n {\n \"section\": \"Multi-GPU\",\n \"option\": \"prime\",\n \"type\": \"bool\",\n \"default\": False,\n \"condition\": True,\n \"label\": _(\"Enable NVIDIA Prime Render Offload\"),\n \"help\": _(\"If you have the latest NVIDIA driver and the properly patched xorg-server (see \"\n \"https://download.nvidia.com/XFree86/Linux-x86_64/435.17/README/primerenderoffload.html\"\n \"), you can launch a game on your NVIDIA GPU by toggling this switch. 
This will apply \"\n \"__NV_PRIME_RENDER_OFFLOAD=1 and \"\n \"__GLX_VENDOR_LIBRARY_NAME=nvidia environment variables.\")\n },\n {\n \"section\": \"Multi-GPU\",\n \"option\": \"dri_prime\",\n \"type\": \"bool\",\n \"default\": USE_DRI_PRIME,\n \"condition\": USE_DRI_PRIME,\n \"label\": _(\"Use discrete graphics\"),\n \"advanced\": True,\n \"help\": _(\"If you have open source graphic drivers (Mesa), selecting this \"\n \"option will run the game with the 'DRI_PRIME=1' environment variable, \"\n \"activating your discrete graphic chip for high 3D \"\n \"performance.\"),\n },\n {\n \"section\": \"Multi-GPU\",\n \"option\": \"optimus\",\n \"type\": \"choice\",\n \"default\": \"off\",\n \"choices\": get_optirun_choices,\n \"label\": _(\"Optimus launcher (NVIDIA Optimus laptops)\"),\n \"advanced\": True,\n \"help\": _(\"If you have installed the primus or bumblebee packages, \"\n \"select what launcher will run the game with the command, \"\n \"activating your NVIDIA graphic chip for high 3D \"\n \"performance. primusrun normally has better performance, but\"\n \"optirun/virtualgl works better for more games.\"\n \"Primus VK provide vulkan support under bumblebee.\"),\n },\n {\n \"section\": \"Multi-GPU\",\n \"option\": \"vk_icd\",\n \"type\": \"choice\",\n # Default is \"\" which does not set the VK_ICD_FILENAMES env var\n # (Matches \"Unspecified\" in dropdown)\n \"default\": \"\",\n \"choices\": get_vk_icd_choices,\n \"label\": _(\"Vulkan ICD loader\"),\n \"advanced\": True,\n \"help\": _(\"The ICD loader is a library that is placed between a Vulkan \"\n \"application and any number of Vulkan drivers, in order to support \"\n \"multiple drivers and the instance-level functionality that works \"\n \"across these drivers.\")\n },\n {\n \"section\": \"Text based games\",\n \"option\": \"terminal\",\n \"label\": _(\"CLI mode\"),\n \"type\": \"bool\",\n \"default\": False,\n \"advanced\": True,\n \"help\": _(\"Enable a terminal for text-based games. \"\n \"Only useful for ASCII based games. May cause issues with graphical games.\"),\n },\n {\n \"section\": \"Text based games\",\n \"option\": \"terminal_app\",\n \"label\": _(\"Text based games emulator\"),\n \"type\": \"choice_with_entry\",\n \"choices\": linux.get_terminal_apps,\n \"default\": linux.get_default_terminal(),\n \"advanced\": True,\n \"help\": _(\"The terminal emulator used with the CLI mode. \"\n \"Choose from the list of detected terminal apps or enter \"\n \"the terminal's command or path.\"),\n },\n {\n \"section\": \"Game execution\",\n \"option\": \"env\",\n \"type\": \"mapping\",\n \"label\": _(\"Environment variables\"),\n \"help\": _(\"Environment variables loaded at run time\"),\n },\n {\n \"section\": \"Game execution\",\n \"option\": \"locale\",\n \"type\": \"choice\",\n \"label\": _(\"Locale\"),\n \"choices\": (\n get_locale_choices()\n ),\n \"default\": \"\",\n \"advanced\": False,\n \"help\": _(\"Can be used to force certain locale for an app. 
Fixes encoding issues in legacy software.\"),\n },\n {\n \"section\": \"Game execution\",\n \"option\": \"prefix_command\",\n \"type\": \"string\",\n \"label\": _(\"Command prefix\"),\n \"advanced\": True,\n \"help\": _(\"Command line instructions to add in front of the game's \"\n \"execution command.\"),\n },\n {\n \"section\": \"Game execution\",\n \"option\": \"manual_command\",\n \"type\": \"file\",\n \"label\": _(\"Manual script\"),\n \"advanced\": True,\n \"help\": _(\"Script to execute from the game's contextual menu\"),\n },\n {\n \"section\": \"Game execution\",\n \"option\": \"prelaunch_command\",\n \"type\": \"file\",\n \"label\": _(\"Pre-launch script\"),\n \"advanced\": True,\n \"help\": _(\"Script to execute before the game starts\"),\n },\n {\n \"section\": \"Game execution\",\n \"option\": \"prelaunch_wait\",\n \"type\": \"bool\",\n \"label\": _(\"Wait for pre-launch script completion\"),\n \"advanced\": True,\n \"default\": False,\n \"help\": _(\"Run the game only once the pre-launch script has exited\"),\n },\n {\n \"section\": \"Game execution\",\n \"option\": \"postexit_command\",\n \"type\": \"file\",\n \"label\": _(\"Post-exit script\"),\n \"advanced\": True,\n \"help\": _(\"Script to execute when the game exits\"),\n },\n {\n \"section\": \"Game execution\",\n \"option\": \"include_processes\",\n \"type\": \"string\",\n \"label\": _(\"Include processes\"),\n \"advanced\": True,\n \"help\": _(\"What processes to include in process monitoring. \"\n \"This is to override the built-in exclude list.\\n\"\n \"Space-separated list, processes including spaces \"\n \"can be wrapped in quotation marks.\"),\n },\n {\n \"section\": \"Game execution\",\n \"option\": \"exclude_processes\",\n \"type\": \"string\",\n \"label\": _(\"Exclude processes\"),\n \"advanced\": True,\n \"help\": _(\"What processes to exclude in process monitoring. 
\"\n \"For example background processes that stick around \"\n \"after the game has been closed.\\n\"\n \"Space-separated list, processes including spaces \"\n \"can be wrapped in quotation marks.\"),\n },\n {\n \"section\": \"Game execution\",\n \"option\": \"killswitch\",\n \"type\": \"string\",\n \"label\": _(\"Killswitch file\"),\n \"advanced\": True,\n \"help\": _(\"Path to a file which will stop the game when deleted \\n\"\n \"(usually /dev/input/js0 to stop the game on joystick \"\n \"unplugging)\"),\n },\n\n {\n \"section\": \"Xephyr (Deprecated, use Gamescope)\",\n \"option\": \"xephyr\",\n \"label\": _(\"Use Xephyr\"),\n \"type\": \"choice\",\n \"choices\": (\n (_(\"Off\"), \"off\"),\n (_(\"8BPP (256 colors)\"), \"8bpp\"),\n (_(\"16BPP (65536 colors)\"), \"16bpp\"),\n (_(\"24BPP (16M colors)\"), \"24bpp\"),\n ),\n \"default\": \"off\",\n \"advanced\": True,\n \"help\": _(\"Run program in Xephyr to support 8BPP and 16BPP color modes\"),\n },\n {\n \"section\": \"Xephyr (Deprecated, use Gamescope)\",\n \"option\": \"xephyr_resolution\",\n \"type\": \"string\",\n \"label\": _(\"Xephyr resolution\"),\n \"advanced\": True,\n \"help\": _(\"Screen resolution of the Xephyr server\"),\n },\n {\n \"section\": \"Xephyr (Deprecated, use Gamescope)\",\n \"option\": \"xephyr_fullscreen\",\n \"type\": \"bool\",\n \"label\": _(\"Xephyr Fullscreen\"),\n \"default\": True,\n \"advanced\": True,\n \"help\": _(\"Open Xephyr in fullscreen (at the desktop resolution)\"),\n },\n]\n\n\ndef with_runner_overrides(runner_slug):\n \"\"\"Return system options updated with overrides from given runner.\"\"\"\n options = system_options\n try:\n runner = runners.import_runner(runner_slug)\n except runners.InvalidRunner:\n return options\n if not getattr(runner, \"system_options_override\"):\n runner = runner()\n if runner.system_options_override:\n opts_dict = OrderedDict((opt[\"option\"], opt) for opt in options)\n for option in runner.system_options_override:\n key = option[\"option\"]\n if opts_dict.get(key):\n opts_dict[key] = opts_dict[key].copy()\n opts_dict[key].update(option)\n else:\n opts_dict[key] = option\n options = list(opts_dict.values())\n return options\n", "path": "lutris/sysoptions.py"}], "after_files": [{"content": "\"\"\"Options list for system config.\"\"\"\nimport os\nfrom collections import OrderedDict\nfrom gettext import gettext as _\n\nfrom lutris import runners\nfrom lutris.util import linux, system\nfrom lutris.util.display import DISPLAY_MANAGER, SCREEN_SAVER_INHIBITOR, USE_DRI_PRIME\nfrom lutris.util.system import get_vk_icd_file_sets, get_vulkan_gpu_name\n\n\ndef get_resolution_choices():\n \"\"\"Return list of available resolutions as label, value tuples\n suitable for inclusion in drop-downs.\n \"\"\"\n resolutions = DISPLAY_MANAGER.get_resolutions()\n resolution_choices = list(zip(resolutions, resolutions))\n resolution_choices.insert(0, (_(\"Keep current\"), \"off\"))\n return resolution_choices\n\n\ndef get_locale_choices():\n \"\"\"Return list of available locales as label, value tuples\n suitable for inclusion in drop-downs.\n \"\"\"\n locales = system.get_locale_list()\n\n # adds \"(recommended)\" string to utf8 locales\n locales_humanized = locales.copy()\n for index, locale in enumerate(locales_humanized):\n if \"utf8\" in locale:\n locales_humanized[index] += \" \" + _(\"(recommended)\")\n\n locale_choices = list(zip(locales_humanized, locales))\n locale_choices.insert(0, (_(\"System\"), \"\"))\n\n return locale_choices\n\n\ndef get_output_choices():\n \"\"\"Return list 
of outputs for drop-downs\"\"\"\n displays = DISPLAY_MANAGER.get_display_names()\n output_choices = list(zip(displays, displays))\n output_choices.insert(0, (_(\"Off\"), \"off\"))\n output_choices.insert(1, (_(\"Primary\"), \"primary\"))\n return output_choices\n\n\ndef get_output_list():\n \"\"\"Return a list of output with their index.\n This is used to indicate to SDL 1.2 which monitor to use.\n \"\"\"\n choices = [(_(\"Off\"), \"off\")]\n displays = DISPLAY_MANAGER.get_display_names()\n for index, output in enumerate(displays):\n # Display name can't be used because they might not be in the right order\n # Using DISPLAYS to get the number of connected monitors\n choices.append((output, str(index)))\n return choices\n\n\ndef get_optirun_choices():\n \"\"\"Return menu choices (label, value) for Optimus\"\"\"\n choices = [(_(\"Off\"), \"off\")]\n if system.find_executable(\"primusrun\"):\n choices.append((\"primusrun\", \"primusrun\"))\n if system.find_executable(\"optirun\"):\n choices.append((\"optirun/virtualgl\", \"optirun\"))\n if system.find_executable(\"pvkrun\"):\n choices.append((\"primus vk\", \"pvkrun\"))\n return choices\n\n\ndef get_vk_icd_choices():\n \"\"\"Return available Vulkan ICD loaders\"\"\"\n # fallback in case any ICDs don't match a known type\n icd_file_sets = get_vk_icd_file_sets()\n\n intel_files = \":\".join(icd_file_sets[\"intel\"])\n amdradv_files = \":\".join(icd_file_sets[\"amdradv\"])\n nvidia_files = \":\".join(icd_file_sets[\"nvidia\"])\n amdvlk_files = \":\".join(icd_file_sets[\"amdvlk\"])\n amdvlkpro_files = \":\".join(icd_file_sets[\"amdvlkpro\"])\n unknown_files = \":\".join(icd_file_sets[\"unknown\"])\n\n # default choice should always be blank so the env var gets left as is\n # This ensures Lutris doesn't change the vulkan loader behavior unless you select\n # a specific ICD from the list, to avoid surprises\n choices = [(\"Unspecified\", \"\")]\n\n if intel_files:\n choices.append((\"Intel Open Source (MESA: ANV)\", intel_files))\n if amdradv_files:\n choices.append((\"AMD RADV Open Source (MESA: RADV)\", amdradv_files))\n if nvidia_files:\n choices.append((\"Nvidia Proprietary\", nvidia_files))\n if amdvlk_files:\n if not amdvlkpro_files:\n choices.append((\"AMDVLK/AMDGPU-PRO Proprietary\", amdvlk_files))\n else:\n choices.append((\"AMDVLK Open source\", amdvlk_files))\n if amdvlkpro_files:\n choices.append((\"AMDGPU-PRO Proprietary\", amdvlkpro_files))\n if unknown_files:\n choices.append((\"Unknown Vendor\", unknown_files))\n\n choices = [(prefix + \": \" + get_vulkan_gpu_name(files, USE_DRI_PRIME), files) for prefix, files in choices]\n\n return choices\n\n\nsystem_options = [ # pylint: disable=invalid-name\n {\n \"section\": \"Lutris\",\n \"option\": \"game_path\",\n \"type\": \"directory_chooser\",\n \"label\": _(\"Default installation folder\"),\n \"default\": os.path.expanduser(\"~/Games\"),\n \"scope\": [\"runner\", \"system\"],\n \"help\": _(\"The default folder where you install your games.\")\n },\n {\n \"section\": \"Lutris\",\n \"option\": \"disable_runtime\",\n \"type\": \"bool\",\n \"label\": _(\"Disable Lutris Runtime\"),\n \"default\": False,\n \"help\": _(\"The Lutris Runtime loads some libraries before running the \"\n \"game, which can cause some incompatibilities in some cases. 
\"\n \"Check this option to disable it.\"),\n },\n {\n \"section\": \"Lutris\",\n \"option\": \"prefer_system_libs\",\n \"type\": \"bool\",\n \"label\": _(\"Prefer system libraries\"),\n \"default\": True,\n \"help\": _(\"When the runtime is enabled, prioritize the system libraries\"\n \" over the provided ones.\"),\n },\n\n {\n \"section\": \"Gamescope\",\n \"option\": \"gamescope\",\n \"type\": \"bool\",\n \"label\": _(\"Enable Gamescope\"),\n \"default\": False,\n \"condition\": bool(system.find_executable(\"gamescope\")) and linux.LINUX_SYSTEM.nvidia_gamescope_support(),\n \"help\": _(\"Use gamescope to draw the game window isolated from your desktop.\\n\"\n \"Toggle fullscreen: Super + F\"),\n },\n {\n \"section\": \"Gamescope\",\n \"option\": \"gamescope_force_grab_cursor\",\n \"type\": \"bool\",\n \"label\": _(\"Relative Mouse Mode\"),\n \"advanced\": True,\n \"default\": False,\n \"condition\": bool(system.find_executable(\"gamescope\")),\n \"help\": _(\"Always use relative mouse mode instead of flipping\\n\"\n \"dependent on cursor visibility (--force-grab-cursor).\\n\"\n \"(Since gamescope git commit 054458f, Jan 12, 2023)\"),\n },\n {\n \"section\": \"Gamescope\",\n \"option\": \"gamescope_output_res\",\n \"type\": \"choice_with_entry\",\n \"label\": _(\"Output Resolution\"),\n \"choices\": DISPLAY_MANAGER.get_resolutions,\n \"advanced\": True,\n \"condition\": bool(system.find_executable(\"gamescope\")),\n \"help\": _(\"Set the resolution used by gamescope (-W, -H).\\n\"\n \"Resizing the gamescope window will update these settings.\\n\"\n \"\\n\"\n \"<b>Empty string:</b> Disabled\\n\"\n \"<b>Custom Resolutions:</b> (width)x(height)\"),\n },\n {\n \"section\": \"Gamescope\",\n \"option\": \"gamescope_game_res\",\n \"type\": \"choice_with_entry\",\n \"label\": _(\"Game Resolution\"),\n \"advanced\": True,\n \"choices\": DISPLAY_MANAGER.get_resolutions,\n \"condition\": bool(system.find_executable(\"gamescope\")),\n \"help\": _(\"Set the maximum resolution used by the game (-w, -h).\\n\"\n \"\\n\"\n \"<b>Empty string:</b> Disabled\\n\"\n \"<b>Custom Resolutions:</b> (width)x(height)\"),\n },\n {\n \"section\": \"Gamescope\",\n \"option\": \"gamescope_window_mode\",\n \"label\": _(\"Window Mode\"),\n \"advanced\": True,\n \"type\": \"choice\",\n \"choices\": (\n (_(\"Fullscreen\"), \"-f\"),\n (_(\"Windowed\"), \"\"),\n (_(\"Borderless\"), \"-b\"),\n ),\n \"default\": \"-f\",\n \"condition\": bool(system.find_executable(\"gamescope\")),\n \"help\": _(\"Run gamescope in fullscreen, windowed or borderless mode\\n\"\n \"Toggle fullscreen (-f) : Super + F\"),\n },\n {\n \"section\": \"Gamescope\",\n \"option\": \"gamescope_fsr_sharpness\",\n \"label\": _(\"FSR Level\"),\n \"advanced\": True,\n \"type\": \"string\",\n \"condition\": bool(system.find_executable(\"gamescope\")),\n \"help\": _(\"Use AMD FidelityFX\u2122 Super Resolution 1.0 for upscaling (-U).\\n\"\n \"Upscaler sharpness from 0 (max) to 20 (min).\\n\"\n \"\\n\"\n \"<b>Empty string:</b> Disabled\"),\n },\n {\n \"section\": \"Gamescope\",\n \"option\": \"gamescope_fps_limiter\",\n \"label\": _(\"FPS Limiter\"),\n \"advanced\": True,\n \"type\": \"string\",\n \"condition\": bool(system.find_executable(\"gamescope\")),\n \"help\": _(\"Set a frame-rate limit for gamescope specified in frames per second (-r).\\n\"\n \"\\n\"\n \"<b>Empty string:</b> Disabled\"),\n },\n {\n \"section\": \"Gamescope\",\n \"option\": \"gamescope_flags\",\n \"label\": _(\"Custom Settings\"),\n \"advanced\": True,\n \"type\": \"string\",\n 
\"condition\": bool(system.find_executable(\"gamescope\")),\n \"help\": _(\"Set additional flags for gamescope (if available).\\n\"\n \"See 'gamescope --help' for a full list of options.\\n\"\n \"\\n\"\n \"<b>Empty String:</b> Disabled\"),\n },\n {\n \"section\": \"CPU\",\n \"option\": \"single_cpu\",\n \"type\": \"bool\",\n \"label\": _(\"Restrict number of cores used\"),\n \"default\": False,\n \"help\": _(\"Restrict the game to a maximum number of CPU cores.\"),\n },\n {\n \"section\": \"CPU\",\n \"option\": \"limit_cpu_count\",\n \"type\": \"string\",\n \"label\": _(\"Restrict number of cores to\"),\n \"default\": \"1\",\n \"help\": _(\"Maximum number of CPU cores to be used, if 'Restrict number of cores used' is turned on.\"),\n },\n {\n \"section\": \"CPU\",\n \"option\": \"gamemode\",\n \"type\": \"bool\",\n \"default\": linux.LINUX_SYSTEM.gamemode_available(),\n \"condition\": linux.LINUX_SYSTEM.gamemode_available(),\n \"label\": _(\"Enable Feral GameMode\"),\n \"help\": _(\"Request a set of optimisations be temporarily applied to the host OS\"),\n },\n {\n \"section\": \"Display\",\n \"option\": \"mangohud\",\n \"type\": \"bool\",\n \"label\": _(\"FPS counter (MangoHud)\"),\n \"default\": False,\n \"condition\": bool(system.find_executable(\"mangohud\")),\n \"help\": _(\"Display the game's FPS + other information. Requires MangoHud to be installed.\"),\n },\n {\n \"section\": \"Display\",\n \"option\": \"reset_desktop\",\n \"type\": \"bool\",\n \"label\": _(\"Restore resolution on game exit\"),\n \"default\": False,\n \"help\": _(\"Some games don't restore your screen resolution when \\n\"\n \"closed or when they crash. This is when this option comes \\n\"\n \"into play to save your bacon.\"),\n },\n {\n \"section\": \"Display\",\n \"option\": \"restore_gamma\",\n \"type\": \"bool\",\n \"default\": False,\n \"label\": _(\"Restore gamma on game exit\"),\n \"advanced\": True,\n \"help\": _(\"Some games don't correctly restores gamma on exit, making \"\n \"your display too bright. Select this option to correct it.\"),\n },\n {\n \"section\": \"Display\",\n \"option\": \"disable_compositor\",\n \"label\": _(\"Disable desktop effects\"),\n \"type\": \"bool\",\n \"default\": False,\n \"advanced\": True,\n \"help\": _(\"Disable desktop effects while game is running, \"\n \"reducing stuttering and increasing performance\"),\n },\n {\n \"section\": \"Display\",\n \"option\": \"disable_screen_saver\",\n \"label\": _(\"Disable screen saver\"),\n \"type\": \"bool\",\n \"default\": SCREEN_SAVER_INHIBITOR is not None,\n \"advanced\": False,\n \"condition\": SCREEN_SAVER_INHIBITOR is not None,\n \"help\": _(\"Disable the screen saver while a game is running. 
\"\n \"Requires the screen saver's functionality \"\n \"to be exposed over DBus.\"),\n },\n\n {\n \"section\": \"Display\",\n \"option\": \"fps_limit\",\n \"type\": \"string\",\n \"size\": \"small\",\n \"label\": _(\"FPS limit\"),\n \"condition\": bool(system.find_executable(\"strangle\")),\n \"help\": _(\"Limit the game's FPS using libstrangle\"),\n },\n {\n \"section\": \"Display\",\n \"option\": \"sdl_video_fullscreen\",\n \"type\": \"choice\",\n \"label\": _(\"SDL 1.2 Fullscreen Monitor\"),\n \"choices\": get_output_list,\n \"default\": \"off\",\n \"advanced\": True,\n \"help\": _(\"Hint SDL 1.2 games to use a specific monitor when going \"\n \"fullscreen by setting the SDL_VIDEO_FULLSCREEN \"\n \"environment variable\"),\n },\n {\n \"section\": \"Display\",\n \"option\": \"display\",\n \"type\": \"choice\",\n \"label\": _(\"Turn off monitors except\"),\n \"choices\": get_output_choices,\n \"default\": \"off\",\n \"advanced\": True,\n \"help\": _(\"Only keep the selected screen active while the game is \"\n \"running. \\n\"\n \"This is useful if you have a dual-screen setup, and are \\n\"\n \"having display issues when running a game in fullscreen.\"),\n },\n {\n \"section\": \"Display\",\n \"option\": \"resolution\",\n \"type\": \"choice\",\n \"label\": _(\"Switch resolution to\"),\n \"choices\": get_resolution_choices,\n \"default\": \"off\",\n \"help\": _(\"Switch to this screen resolution while the game is running.\"),\n },\n {\n \"section\": \"Audio\",\n \"option\": \"reset_pulse\",\n \"type\": \"bool\",\n \"label\": _(\"Reset PulseAudio\"),\n \"default\": False,\n \"advanced\": True,\n \"condition\": system.find_executable(\"pulseaudio\"),\n \"help\": _(\"Restart PulseAudio before launching the game.\"),\n },\n {\n \"section\": \"Audio\",\n \"option\": \"pulse_latency\",\n \"type\": \"bool\",\n \"label\": _(\"Reduce PulseAudio latency\"),\n \"default\": False,\n \"advanced\": True,\n \"condition\": system.find_executable(\"pulseaudio\") or system.find_executable(\"pipewire-pulse\"),\n \"help\": _(\"Set the environment variable PULSE_LATENCY_MSEC=60 \"\n \"to improve audio quality on some games\"),\n },\n {\n \"section\": \"Input\",\n \"option\": \"use_us_layout\",\n \"type\": \"bool\",\n \"label\": _(\"Switch to US keyboard layout\"),\n \"default\": False,\n \"advanced\": True,\n \"help\": _(\"Switch to US keyboard QWERTY layout while game is running\"),\n },\n {\n \"section\": \"Input\",\n \"option\": \"antimicro_config\",\n \"type\": \"file\",\n \"label\": _(\"AntiMicroX Profile\"),\n \"advanced\": True,\n \"help\": _(\"Path to an AntiMicroX profile file\"),\n },\n\n {\n \"section\": \"Input\",\n \"option\": \"sdl_gamecontrollerconfig\",\n \"type\": \"string\",\n \"label\": _(\"SDL2 gamepad mapping\"),\n \"advanced\": True,\n \"help\": _(\"SDL_GAMECONTROLLERCONFIG mapping string or path to a custom \"\n \"gamecontrollerdb.txt file containing mappings.\"),\n },\n {\n \"section\": \"Multi-GPU\",\n \"option\": \"prime\",\n \"type\": \"bool\",\n \"default\": False,\n \"condition\": True,\n \"label\": _(\"Enable NVIDIA Prime Render Offload\"),\n \"help\": _(\"If you have the latest NVIDIA driver and the properly patched xorg-server (see \"\n \"https://download.nvidia.com/XFree86/Linux-x86_64/435.17/README/primerenderoffload.html\"\n \"), you can launch a game on your NVIDIA GPU by toggling this switch. 
This will apply \"\n \"__NV_PRIME_RENDER_OFFLOAD=1 and \"\n \"__GLX_VENDOR_LIBRARY_NAME=nvidia environment variables.\")\n },\n {\n \"section\": \"Multi-GPU\",\n \"option\": \"dri_prime\",\n \"type\": \"bool\",\n \"default\": USE_DRI_PRIME,\n \"condition\": USE_DRI_PRIME,\n \"label\": _(\"Use discrete graphics\"),\n \"advanced\": True,\n \"help\": _(\"If you have open source graphic drivers (Mesa), selecting this \"\n \"option will run the game with the 'DRI_PRIME=1' environment variable, \"\n \"activating your discrete graphic chip for high 3D \"\n \"performance.\"),\n },\n {\n \"section\": \"Multi-GPU\",\n \"option\": \"optimus\",\n \"type\": \"choice\",\n \"default\": \"off\",\n \"choices\": get_optirun_choices,\n \"label\": _(\"Optimus launcher (NVIDIA Optimus laptops)\"),\n \"advanced\": True,\n \"help\": _(\"If you have installed the primus or bumblebee packages, \"\n \"select what launcher will run the game with the command, \"\n \"activating your NVIDIA graphic chip for high 3D \"\n \"performance. primusrun normally has better performance, but\"\n \"optirun/virtualgl works better for more games.\"\n \"Primus VK provide vulkan support under bumblebee.\"),\n },\n {\n \"section\": \"Multi-GPU\",\n \"option\": \"vk_icd\",\n \"type\": \"choice\",\n # Default is \"\" which does not set the VK_ICD_FILENAMES env var\n # (Matches \"Unspecified\" in dropdown)\n \"default\": \"\",\n \"choices\": get_vk_icd_choices,\n \"label\": _(\"Vulkan ICD loader\"),\n \"advanced\": True,\n \"help\": _(\"The ICD loader is a library that is placed between a Vulkan \"\n \"application and any number of Vulkan drivers, in order to support \"\n \"multiple drivers and the instance-level functionality that works \"\n \"across these drivers.\")\n },\n {\n \"section\": \"Text based games\",\n \"option\": \"terminal\",\n \"label\": _(\"CLI mode\"),\n \"type\": \"bool\",\n \"default\": False,\n \"advanced\": True,\n \"help\": _(\"Enable a terminal for text-based games. \"\n \"Only useful for ASCII based games. May cause issues with graphical games.\"),\n },\n {\n \"section\": \"Text based games\",\n \"option\": \"terminal_app\",\n \"label\": _(\"Text based games emulator\"),\n \"type\": \"choice_with_entry\",\n \"choices\": linux.get_terminal_apps,\n \"default\": linux.get_default_terminal(),\n \"advanced\": True,\n \"help\": _(\"The terminal emulator used with the CLI mode. \"\n \"Choose from the list of detected terminal apps or enter \"\n \"the terminal's command or path.\"),\n },\n {\n \"section\": \"Game execution\",\n \"option\": \"env\",\n \"type\": \"mapping\",\n \"label\": _(\"Environment variables\"),\n \"help\": _(\"Environment variables loaded at run time\"),\n },\n {\n \"section\": \"Game execution\",\n \"option\": \"locale\",\n \"type\": \"choice_with_search\",\n \"label\": _(\"Locale\"),\n \"choices\": (\n get_locale_choices\n ),\n \"default\": \"\",\n \"advanced\": False,\n \"help\": _(\"Can be used to force certain locale for an app. 
Fixes encoding issues in legacy software.\"),\n },\n {\n \"section\": \"Game execution\",\n \"option\": \"prefix_command\",\n \"type\": \"string\",\n \"label\": _(\"Command prefix\"),\n \"advanced\": True,\n \"help\": _(\"Command line instructions to add in front of the game's \"\n \"execution command.\"),\n },\n {\n \"section\": \"Game execution\",\n \"option\": \"manual_command\",\n \"type\": \"file\",\n \"label\": _(\"Manual script\"),\n \"advanced\": True,\n \"help\": _(\"Script to execute from the game's contextual menu\"),\n },\n {\n \"section\": \"Game execution\",\n \"option\": \"prelaunch_command\",\n \"type\": \"file\",\n \"label\": _(\"Pre-launch script\"),\n \"advanced\": True,\n \"help\": _(\"Script to execute before the game starts\"),\n },\n {\n \"section\": \"Game execution\",\n \"option\": \"prelaunch_wait\",\n \"type\": \"bool\",\n \"label\": _(\"Wait for pre-launch script completion\"),\n \"advanced\": True,\n \"default\": False,\n \"help\": _(\"Run the game only once the pre-launch script has exited\"),\n },\n {\n \"section\": \"Game execution\",\n \"option\": \"postexit_command\",\n \"type\": \"file\",\n \"label\": _(\"Post-exit script\"),\n \"advanced\": True,\n \"help\": _(\"Script to execute when the game exits\"),\n },\n {\n \"section\": \"Game execution\",\n \"option\": \"include_processes\",\n \"type\": \"string\",\n \"label\": _(\"Include processes\"),\n \"advanced\": True,\n \"help\": _(\"What processes to include in process monitoring. \"\n \"This is to override the built-in exclude list.\\n\"\n \"Space-separated list, processes including spaces \"\n \"can be wrapped in quotation marks.\"),\n },\n {\n \"section\": \"Game execution\",\n \"option\": \"exclude_processes\",\n \"type\": \"string\",\n \"label\": _(\"Exclude processes\"),\n \"advanced\": True,\n \"help\": _(\"What processes to exclude in process monitoring. 
\"\n \"For example background processes that stick around \"\n \"after the game has been closed.\\n\"\n \"Space-separated list, processes including spaces \"\n \"can be wrapped in quotation marks.\"),\n },\n {\n \"section\": \"Game execution\",\n \"option\": \"killswitch\",\n \"type\": \"string\",\n \"label\": _(\"Killswitch file\"),\n \"advanced\": True,\n \"help\": _(\"Path to a file which will stop the game when deleted \\n\"\n \"(usually /dev/input/js0 to stop the game on joystick \"\n \"unplugging)\"),\n },\n\n {\n \"section\": \"Xephyr (Deprecated, use Gamescope)\",\n \"option\": \"xephyr\",\n \"label\": _(\"Use Xephyr\"),\n \"type\": \"choice\",\n \"choices\": (\n (_(\"Off\"), \"off\"),\n (_(\"8BPP (256 colors)\"), \"8bpp\"),\n (_(\"16BPP (65536 colors)\"), \"16bpp\"),\n (_(\"24BPP (16M colors)\"), \"24bpp\"),\n ),\n \"default\": \"off\",\n \"advanced\": True,\n \"help\": _(\"Run program in Xephyr to support 8BPP and 16BPP color modes\"),\n },\n {\n \"section\": \"Xephyr (Deprecated, use Gamescope)\",\n \"option\": \"xephyr_resolution\",\n \"type\": \"string\",\n \"label\": _(\"Xephyr resolution\"),\n \"advanced\": True,\n \"help\": _(\"Screen resolution of the Xephyr server\"),\n },\n {\n \"section\": \"Xephyr (Deprecated, use Gamescope)\",\n \"option\": \"xephyr_fullscreen\",\n \"type\": \"bool\",\n \"label\": _(\"Xephyr Fullscreen\"),\n \"default\": True,\n \"advanced\": True,\n \"help\": _(\"Open Xephyr in fullscreen (at the desktop resolution)\"),\n },\n]\n\n\ndef with_runner_overrides(runner_slug):\n \"\"\"Return system options updated with overrides from given runner.\"\"\"\n options = system_options\n try:\n runner = runners.import_runner(runner_slug)\n except runners.InvalidRunner:\n return options\n if not getattr(runner, \"system_options_override\"):\n runner = runner()\n if runner.system_options_override:\n opts_dict = OrderedDict((opt[\"option\"], opt) for opt in options)\n for option in runner.system_options_override:\n key = option[\"option\"]\n if opts_dict.get(key):\n opts_dict[key] = opts_dict[key].copy()\n opts_dict[key].update(option)\n else:\n opts_dict[key] = option\n options = list(opts_dict.values())\n return options\n", "path": "lutris/sysoptions.py"}]} |
gh_patches_debug_1197 | rasdani/github-patches | git_diff | Lightning-AI__pytorch-lightning-1425 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
DistributedSampler is not automatically added for DDP training
## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
In 0.72, even if we don't set a sampler, pytorch_lightning will not add a DistributedSampler for us.
### To Reproduce
The reason is that in PyTorch, if we don't set a sampler, PyTorch adds a default sampler for us.
In PyTorch's dataloader.py:
```
if sampler is None: # give default samplers
if self._dataset_kind == _DatasetKind.Iterable:
# See NOTE [ Custom Samplers and IterableDataset ]
sampler = _InfiniteConstantSampler()
else: # map-style
if shuffle:
sampler = RandomSampler(dataset)
else:
sampler = SequentialSampler(dataset)
```
But in pytorch_lightning we check whether the sampler is None to decide whether to add one,
in the data_loading.py function auto_add_sampler:
```
no_sampler_added = dataloader.sampler is None
```
Because PyTorch has already filled in a default sampler, which is not None, pytorch_lightning will never automatically add the DistributedSampler.
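A minimal sketch of why the `is None` check never fires (the toy dataset is only illustrative):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, SequentialSampler

# Hypothetical toy dataset, used only to inspect the default sampler behaviour.
dataset = TensorDataset(torch.arange(10).float())

# No sampler is passed in, yet DataLoader fills one in for map-style datasets.
loader = DataLoader(dataset, batch_size=2)

print(loader.sampler)                                  # a SequentialSampler instance, not None
print(loader.sampler is None)                          # False -> the `is None` check never triggers
print(isinstance(loader.sampler, SequentialSampler))   # True
```

So the decision to wrap the dataset in a DistributedSampler should not depend on `dataloader.sampler is None`; it only needs to depend on whether distributed training is in use.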
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `pytorch_lightning/trainer/data_loading.py`
Content:
```
1 import warnings
2 from abc import ABC, abstractmethod
3 from typing import Union, List, Tuple, Callable
4
5 import torch.distributed as torch_distrib
6 from torch.utils.data import DataLoader
7 from torch.utils.data.distributed import DistributedSampler
8
9 from pytorch_lightning.core import LightningModule
10 from pytorch_lightning.utilities.exceptions import MisconfigurationException
11
12 try:
13 from apex import amp
14 except ImportError:
15 APEX_AVAILABLE = False
16 else:
17 APEX_AVAILABLE = True
18
19 try:
20 import torch_xla
21 import torch_xla.core.xla_model as xm
22 import torch_xla.distributed.xla_multiprocessing as xmp
23 except ImportError:
24 XLA_AVAILABLE = False
25 else:
26 XLA_AVAILABLE = True
27
28
29 def _has_len(dataloader: DataLoader) -> bool:
30 """ Checks if a given Dataloader has __len__ method implemented i.e. if
31 it is a finite dataloader or infinite dataloader """
32 try:
33 # try getting the length
34 if len(dataloader) == 0:
35 raise ValueError('Dataloader returned 0 length. Please make sure'
36 ' that your Dataloader atleast returns 1 batch')
37 return True
38 except TypeError:
39 return False
40
41
42 class TrainerDataLoadingMixin(ABC):
43
44 # this is just a summary on variables used in this abstract class,
45 # the proper values/initialisation should be done in child class
46 proc_rank: int
47 use_ddp: bool
48 use_ddp2: bool
49 shown_warnings: ...
50 val_check_interval: float
51 use_tpu: bool
52 tpu_local_core_rank: int
53 train_dataloader: DataLoader
54 num_training_batches: Union[int, float]
55 val_check_batch: ...
56 val_dataloaders: List[DataLoader]
57 num_val_batches: Union[int, float]
58 test_dataloaders: List[DataLoader]
59 num_test_batches: Union[int, float]
60 train_percent_check: float
61 val_percent_check: float
62 test_percent_check: float
63
64 @abstractmethod
65 def is_overriden(self, *args):
66 """Warning: this is just empty shell for code implemented in other class."""
67
68 def _percent_range_check(self, name: str) -> None:
69 value = getattr(self, name)
70 msg = f'`{name}` must lie in the range [0.0, 1.0], but got {value:.3f}.'
71 if name == 'val_check_interval':
72 msg += ' If you want to disable validation set `val_percent_check` to 0.0 instead.'
73
74 if not 0. <= value <= 1.:
75 raise ValueError(msg)
76
77 def _worker_check(self, dataloader: DataLoader, name: str) -> None:
78 if isinstance(dataloader, DataLoader) and dataloader.num_workers <= 2:
79 warnings.warn(f'The dataloader, {name}, does not have many workers which may be a bottleneck.'
80 ' Consider increasing the value of the `num_workers` argument`'
81 ' in the `DataLoader` init to improve performance.')
82
83 def auto_add_sampler(self, dataloader: DataLoader, train: bool) -> DataLoader:
84
85 # don't do anything if it's not a dataloader
86 if not isinstance(dataloader, DataLoader):
87 return dataloader
88
89 need_dist_sampler = self.use_ddp or self.use_ddp2 or self.use_tpu
90 no_sampler_added = dataloader.sampler is None
91
92 if need_dist_sampler and no_sampler_added:
93
94 skip_keys = ['sampler', 'batch_sampler', 'dataset_kind']
95
96 dl_args = {
97 k: v for k, v in dataloader.__dict__.items() if not k.startswith('_') and k not in skip_keys
98 }
99
100 if self.use_tpu:
101 sampler = DistributedSampler(
102 dataloader.dataset,
103 num_replicas=xm.xrt_world_size(),
104 rank=xm.get_ordinal()
105 )
106 else:
107 sampler = DistributedSampler(dataloader.dataset)
108
109 dl_args['sampler'] = sampler
110 dataloader = type(dataloader)(**dl_args)
111
112 return dataloader
113
114 def reset_train_dataloader(self, model: LightningModule) -> None:
115 """Resets the train dataloader and initialises required variables
116 (number of batches, when to validate, etc.).
117
118 Args:
119 model: The current `LightningModule`
120 """
121 self.train_dataloader = self.request_dataloader(model.train_dataloader)
122
123 self.num_training_batches = 0
124
125 # automatically add samplers
126 self.train_dataloader = self.auto_add_sampler(self.train_dataloader, train=True)
127
128 self._worker_check(self.train_dataloader, 'train dataloader')
129 self._percent_range_check('train_percent_check')
130
131 if not _has_len(self.train_dataloader):
132 self.num_training_batches = float('inf')
133 else:
134 # try getting the length
135 self.num_training_batches = len(self.train_dataloader)
136 self.num_training_batches = int(self.num_training_batches * self.train_percent_check)
137
138 # determine when to check validation
139 # if int passed in, val checks that often
140 # otherwise, it checks in [0, 1.0] % range of a training epoch
141 if isinstance(self.val_check_interval, int):
142 self.val_check_batch = self.val_check_interval
143 if self.val_check_batch > self.num_training_batches:
144 raise ValueError(
145 f'`val_check_interval` ({self.val_check_interval}) must be less than or equal '
146 f'to the number of the training batches ({self.num_training_batches}). '
147 'If you want to disable validation set `val_percent_check` to 0.0 instead.')
148 else:
149 if not _has_len(self.train_dataloader):
150 if self.val_check_interval == 1.0:
151 self.val_check_batch = float('inf')
152 else:
153 raise MisconfigurationException(
154 'When using an infinite DataLoader (e.g. with an IterableDataset or when '
155 'DataLoader does not implement `__len__`) for `train_dataloader`, '
156 '`Trainer(val_check_interval)` must be `1.0` or an int. An int k specifies '
157 'checking validation every k training batches.')
158 else:
159 self._percent_range_check('val_check_interval')
160
161 self.val_check_batch = int(self.num_training_batches * self.val_check_interval)
162 self.val_check_batch = max(1, self.val_check_batch)
163
164 def _reset_eval_dataloader(self, model: LightningModule,
165 mode: str) -> Tuple[int, List[DataLoader]]:
166 """Generic method to reset a dataloader for evaluation.
167
168 Args:
169 model: The current `LightningModule`
170 mode: Either `'val'` or `'test'`
171
172 Returns:
173 Tuple (num_batches, dataloaders)
174 """
175 dataloaders = self.request_dataloader(getattr(model, f'{mode}_dataloader'))
176
177 if not isinstance(dataloaders, list):
178 dataloaders = [dataloaders]
179
180 # add samplers
181 dataloaders = [self.auto_add_sampler(dl, train=False) for dl in dataloaders if dl]
182
183 num_batches = 0
184
185 # determine number of batches
186 # datasets could be none, 1 or 2+
187 if len(dataloaders) != 0:
188 for i, dataloader in enumerate(dataloaders):
189 self._worker_check(dataloader, f'{mode} dataloader {i}')
190 if not _has_len(dataloader):
191 num_batches = float('inf')
192
193 percent_check = getattr(self, f'{mode}_percent_check')
194
195 if num_batches != float('inf'):
196 self._percent_range_check(f'{mode}_percent_check')
197
198 num_batches = sum(len(dataloader) for dataloader in dataloaders)
199 num_batches = int(num_batches * percent_check)
200 elif percent_check not in (0.0, 1.0):
201 raise MisconfigurationException(
202 'When using an infinite DataLoader (e.g. with an IterableDataset or when '
203 f'DataLoader does not implement `__len__`) for `{mode}_dataloader`, '
204 f'`Trainer({mode}_percent_check)` must be `0.0` or `1.0`.')
205 return num_batches, dataloaders
206
207 def reset_val_dataloader(self, model: LightningModule) -> None:
208 """Resets the validation dataloader and determines the number of batches.
209
210 Args:
211 model: The current `LightningModule`
212 """
213 if self.is_overriden('validation_step'):
214 self.num_val_batches, self.val_dataloaders =\
215 self._reset_eval_dataloader(model, 'val')
216
217 def reset_test_dataloader(self, model) -> None:
218 """Resets the validation dataloader and determines the number of batches.
219
220 Args:
221 model: The current `LightningModule`
222 """
223 if self.is_overriden('test_step'):
224 self.num_test_batches, self.test_dataloaders =\
225 self._reset_eval_dataloader(model, 'test')
226
227 def request_dataloader(self, dataloader_fx: Callable) -> DataLoader:
228 """Handles downloading data in the GPU or TPU case.
229
230 Args:
231 dataloader_fx: The bound dataloader getter
232
233 Returns:
234 The dataloader
235 """
236 dataloader = dataloader_fx()
237
238 # get the function we'll use to get data
239 if self.use_ddp or self.use_ddp2:
240 # all processes wait until data download has happened
241 torch_distrib.barrier()
242
243 # data download/load on TPU
244 elif self.use_tpu and XLA_AVAILABLE:
245 # all processes wait until data download has happened
246 torch_xla.core.xla_model.rendezvous('pl.TrainerDataLoadingMixin.get_dataloaders')
247
248 return dataloader
249
250 def determine_data_use_amount(self, train_percent_check: float, val_percent_check: float,
251 test_percent_check: float, overfit_pct: float) -> None:
252 """Use less data for debugging purposes
253 """
254 self.train_percent_check = train_percent_check
255 self.val_percent_check = val_percent_check
256 self.test_percent_check = test_percent_check
257 if overfit_pct > 0:
258 if overfit_pct > 1:
259 raise ValueError(
260 f'`overfit_pct` must be not greater than 1.0, but got {overfit_pct:.3f}.')
261
262 self.train_percent_check = overfit_pct
263 self.val_percent_check = overfit_pct
264 self.test_percent_check = overfit_pct
265
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/pytorch_lightning/trainer/data_loading.py b/pytorch_lightning/trainer/data_loading.py
--- a/pytorch_lightning/trainer/data_loading.py
+++ b/pytorch_lightning/trainer/data_loading.py
@@ -87,9 +87,8 @@
return dataloader
need_dist_sampler = self.use_ddp or self.use_ddp2 or self.use_tpu
- no_sampler_added = dataloader.sampler is None
- if need_dist_sampler and no_sampler_added:
+ if need_dist_sampler:
skip_keys = ['sampler', 'batch_sampler', 'dataset_kind']
| {"golden_diff": "diff --git a/pytorch_lightning/trainer/data_loading.py b/pytorch_lightning/trainer/data_loading.py\n--- a/pytorch_lightning/trainer/data_loading.py\n+++ b/pytorch_lightning/trainer/data_loading.py\n@@ -87,9 +87,8 @@\n return dataloader\n \n need_dist_sampler = self.use_ddp or self.use_ddp2 or self.use_tpu\n- no_sampler_added = dataloader.sampler is None\n \n- if need_dist_sampler and no_sampler_added:\n+ if need_dist_sampler:\n \n skip_keys = ['sampler', 'batch_sampler', 'dataset_kind']\n", "issue": "Not auto add DistributedSampler for DDP training\n## \ud83d\udc1b Bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\nin 0.72, even if we don't set sampler, pytorch_lightning will not add DistributedSampler for us.\r\n### To Reproduce\r\nthe reason is in pytorch, if we don't set sampler, pytorch will add a sampler for us.\r\nin pytorch's dataloader.py:\r\n```\r\n if sampler is None: # give default samplers\r\n if self._dataset_kind == _DatasetKind.Iterable:\r\n # See NOTE [ Custom Samplers and IterableDataset ]\r\n sampler = _InfiniteConstantSampler()\r\n else: # map-style\r\n if shuffle:\r\n sampler = RandomSampler(dataset)\r\n else:\r\n sampler = SequentialSampler(dataset)\r\n```\r\n\r\nbut in pytorch_lightning we check whether sampler is None to decide to add sampler \r\nin data_loading.py funciton auto_add_sampler:\r\n```\r\n no_sampler_added = dataloader.sampler is None\r\n```\r\n\r\nbecause pytorch have default sampler for us, which is not None, pytorch_lighting will not automatically add sampler.\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "import warnings\nfrom abc import ABC, abstractmethod\nfrom typing import Union, List, Tuple, Callable\n\nimport torch.distributed as torch_distrib\nfrom torch.utils.data import DataLoader\nfrom torch.utils.data.distributed import DistributedSampler\n\nfrom pytorch_lightning.core import LightningModule\nfrom pytorch_lightning.utilities.exceptions import MisconfigurationException\n\ntry:\n from apex import amp\nexcept ImportError:\n APEX_AVAILABLE = False\nelse:\n APEX_AVAILABLE = True\n\ntry:\n import torch_xla\n import torch_xla.core.xla_model as xm\n import torch_xla.distributed.xla_multiprocessing as xmp\nexcept ImportError:\n XLA_AVAILABLE = False\nelse:\n XLA_AVAILABLE = True\n\n\ndef _has_len(dataloader: DataLoader) -> bool:\n \"\"\" Checks if a given Dataloader has __len__ method implemented i.e. if\n it is a finite dataloader or infinite dataloader \"\"\"\n try:\n # try getting the length\n if len(dataloader) == 0:\n raise ValueError('Dataloader returned 0 length. 
Please make sure'\n ' that your Dataloader atleast returns 1 batch')\n return True\n except TypeError:\n return False\n\n\nclass TrainerDataLoadingMixin(ABC):\n\n # this is just a summary on variables used in this abstract class,\n # the proper values/initialisation should be done in child class\n proc_rank: int\n use_ddp: bool\n use_ddp2: bool\n shown_warnings: ...\n val_check_interval: float\n use_tpu: bool\n tpu_local_core_rank: int\n train_dataloader: DataLoader\n num_training_batches: Union[int, float]\n val_check_batch: ...\n val_dataloaders: List[DataLoader]\n num_val_batches: Union[int, float]\n test_dataloaders: List[DataLoader]\n num_test_batches: Union[int, float]\n train_percent_check: float\n val_percent_check: float\n test_percent_check: float\n\n @abstractmethod\n def is_overriden(self, *args):\n \"\"\"Warning: this is just empty shell for code implemented in other class.\"\"\"\n\n def _percent_range_check(self, name: str) -> None:\n value = getattr(self, name)\n msg = f'`{name}` must lie in the range [0.0, 1.0], but got {value:.3f}.'\n if name == 'val_check_interval':\n msg += ' If you want to disable validation set `val_percent_check` to 0.0 instead.'\n\n if not 0. <= value <= 1.:\n raise ValueError(msg)\n\n def _worker_check(self, dataloader: DataLoader, name: str) -> None:\n if isinstance(dataloader, DataLoader) and dataloader.num_workers <= 2:\n warnings.warn(f'The dataloader, {name}, does not have many workers which may be a bottleneck.'\n ' Consider increasing the value of the `num_workers` argument`'\n ' in the `DataLoader` init to improve performance.')\n\n def auto_add_sampler(self, dataloader: DataLoader, train: bool) -> DataLoader:\n\n # don't do anything if it's not a dataloader\n if not isinstance(dataloader, DataLoader):\n return dataloader\n\n need_dist_sampler = self.use_ddp or self.use_ddp2 or self.use_tpu\n no_sampler_added = dataloader.sampler is None\n\n if need_dist_sampler and no_sampler_added:\n\n skip_keys = ['sampler', 'batch_sampler', 'dataset_kind']\n\n dl_args = {\n k: v for k, v in dataloader.__dict__.items() if not k.startswith('_') and k not in skip_keys\n }\n\n if self.use_tpu:\n sampler = DistributedSampler(\n dataloader.dataset,\n num_replicas=xm.xrt_world_size(),\n rank=xm.get_ordinal()\n )\n else:\n sampler = DistributedSampler(dataloader.dataset)\n\n dl_args['sampler'] = sampler\n dataloader = type(dataloader)(**dl_args)\n\n return dataloader\n\n def reset_train_dataloader(self, model: LightningModule) -> None:\n \"\"\"Resets the train dataloader and initialises required variables\n (number of batches, when to validate, etc.).\n\n Args:\n model: The current `LightningModule`\n \"\"\"\n self.train_dataloader = self.request_dataloader(model.train_dataloader)\n\n self.num_training_batches = 0\n\n # automatically add samplers\n self.train_dataloader = self.auto_add_sampler(self.train_dataloader, train=True)\n\n self._worker_check(self.train_dataloader, 'train dataloader')\n self._percent_range_check('train_percent_check')\n\n if not _has_len(self.train_dataloader):\n self.num_training_batches = float('inf')\n else:\n # try getting the length\n self.num_training_batches = len(self.train_dataloader)\n self.num_training_batches = int(self.num_training_batches * self.train_percent_check)\n\n # determine when to check validation\n # if int passed in, val checks that often\n # otherwise, it checks in [0, 1.0] % range of a training epoch\n if isinstance(self.val_check_interval, int):\n self.val_check_batch = self.val_check_interval\n if 
self.val_check_batch > self.num_training_batches:\n raise ValueError(\n f'`val_check_interval` ({self.val_check_interval}) must be less than or equal '\n f'to the number of the training batches ({self.num_training_batches}). '\n 'If you want to disable validation set `val_percent_check` to 0.0 instead.')\n else:\n if not _has_len(self.train_dataloader):\n if self.val_check_interval == 1.0:\n self.val_check_batch = float('inf')\n else:\n raise MisconfigurationException(\n 'When using an infinite DataLoader (e.g. with an IterableDataset or when '\n 'DataLoader does not implement `__len__`) for `train_dataloader`, '\n '`Trainer(val_check_interval)` must be `1.0` or an int. An int k specifies '\n 'checking validation every k training batches.')\n else:\n self._percent_range_check('val_check_interval')\n\n self.val_check_batch = int(self.num_training_batches * self.val_check_interval)\n self.val_check_batch = max(1, self.val_check_batch)\n\n def _reset_eval_dataloader(self, model: LightningModule,\n mode: str) -> Tuple[int, List[DataLoader]]:\n \"\"\"Generic method to reset a dataloader for evaluation.\n\n Args:\n model: The current `LightningModule`\n mode: Either `'val'` or `'test'`\n\n Returns:\n Tuple (num_batches, dataloaders)\n \"\"\"\n dataloaders = self.request_dataloader(getattr(model, f'{mode}_dataloader'))\n\n if not isinstance(dataloaders, list):\n dataloaders = [dataloaders]\n\n # add samplers\n dataloaders = [self.auto_add_sampler(dl, train=False) for dl in dataloaders if dl]\n\n num_batches = 0\n\n # determine number of batches\n # datasets could be none, 1 or 2+\n if len(dataloaders) != 0:\n for i, dataloader in enumerate(dataloaders):\n self._worker_check(dataloader, f'{mode} dataloader {i}')\n if not _has_len(dataloader):\n num_batches = float('inf')\n\n percent_check = getattr(self, f'{mode}_percent_check')\n\n if num_batches != float('inf'):\n self._percent_range_check(f'{mode}_percent_check')\n\n num_batches = sum(len(dataloader) for dataloader in dataloaders)\n num_batches = int(num_batches * percent_check)\n elif percent_check not in (0.0, 1.0):\n raise MisconfigurationException(\n 'When using an infinite DataLoader (e.g. 
with an IterableDataset or when '\n f'DataLoader does not implement `__len__`) for `{mode}_dataloader`, '\n f'`Trainer({mode}_percent_check)` must be `0.0` or `1.0`.')\n return num_batches, dataloaders\n\n def reset_val_dataloader(self, model: LightningModule) -> None:\n \"\"\"Resets the validation dataloader and determines the number of batches.\n\n Args:\n model: The current `LightningModule`\n \"\"\"\n if self.is_overriden('validation_step'):\n self.num_val_batches, self.val_dataloaders =\\\n self._reset_eval_dataloader(model, 'val')\n\n def reset_test_dataloader(self, model) -> None:\n \"\"\"Resets the validation dataloader and determines the number of batches.\n\n Args:\n model: The current `LightningModule`\n \"\"\"\n if self.is_overriden('test_step'):\n self.num_test_batches, self.test_dataloaders =\\\n self._reset_eval_dataloader(model, 'test')\n\n def request_dataloader(self, dataloader_fx: Callable) -> DataLoader:\n \"\"\"Handles downloading data in the GPU or TPU case.\n\n Args:\n dataloader_fx: The bound dataloader getter\n\n Returns:\n The dataloader\n \"\"\"\n dataloader = dataloader_fx()\n\n # get the function we'll use to get data\n if self.use_ddp or self.use_ddp2:\n # all processes wait until data download has happened\n torch_distrib.barrier()\n\n # data download/load on TPU\n elif self.use_tpu and XLA_AVAILABLE:\n # all processes wait until data download has happened\n torch_xla.core.xla_model.rendezvous('pl.TrainerDataLoadingMixin.get_dataloaders')\n\n return dataloader\n\n def determine_data_use_amount(self, train_percent_check: float, val_percent_check: float,\n test_percent_check: float, overfit_pct: float) -> None:\n \"\"\"Use less data for debugging purposes\n \"\"\"\n self.train_percent_check = train_percent_check\n self.val_percent_check = val_percent_check\n self.test_percent_check = test_percent_check\n if overfit_pct > 0:\n if overfit_pct > 1:\n raise ValueError(\n f'`overfit_pct` must be not greater than 1.0, but got {overfit_pct:.3f}.')\n\n self.train_percent_check = overfit_pct\n self.val_percent_check = overfit_pct\n self.test_percent_check = overfit_pct\n", "path": "pytorch_lightning/trainer/data_loading.py"}], "after_files": [{"content": "import warnings\nfrom abc import ABC, abstractmethod\nfrom typing import Union, List, Tuple, Callable\n\nimport torch.distributed as torch_distrib\nfrom torch.utils.data import DataLoader\nfrom torch.utils.data.distributed import DistributedSampler\n\nfrom pytorch_lightning.core import LightningModule\nfrom pytorch_lightning.utilities.exceptions import MisconfigurationException\n\ntry:\n from apex import amp\nexcept ImportError:\n APEX_AVAILABLE = False\nelse:\n APEX_AVAILABLE = True\n\ntry:\n import torch_xla\n import torch_xla.core.xla_model as xm\n import torch_xla.distributed.xla_multiprocessing as xmp\nexcept ImportError:\n XLA_AVAILABLE = False\nelse:\n XLA_AVAILABLE = True\n\n\ndef _has_len(dataloader: DataLoader) -> bool:\n \"\"\" Checks if a given Dataloader has __len__ method implemented i.e. if\n it is a finite dataloader or infinite dataloader \"\"\"\n try:\n # try getting the length\n if len(dataloader) == 0:\n raise ValueError('Dataloader returned 0 length. 
Please make sure'\n ' that your Dataloader atleast returns 1 batch')\n return True\n except TypeError:\n return False\n\n\nclass TrainerDataLoadingMixin(ABC):\n\n # this is just a summary on variables used in this abstract class,\n # the proper values/initialisation should be done in child class\n proc_rank: int\n use_ddp: bool\n use_ddp2: bool\n shown_warnings: ...\n val_check_interval: float\n use_tpu: bool\n tpu_local_core_rank: int\n train_dataloader: DataLoader\n num_training_batches: Union[int, float]\n val_check_batch: ...\n val_dataloaders: List[DataLoader]\n num_val_batches: Union[int, float]\n test_dataloaders: List[DataLoader]\n num_test_batches: Union[int, float]\n train_percent_check: float\n val_percent_check: float\n test_percent_check: float\n\n @abstractmethod\n def is_overriden(self, *args):\n \"\"\"Warning: this is just empty shell for code implemented in other class.\"\"\"\n\n def _percent_range_check(self, name: str) -> None:\n value = getattr(self, name)\n msg = f'`{name}` must lie in the range [0.0, 1.0], but got {value:.3f}.'\n if name == 'val_check_interval':\n msg += ' If you want to disable validation set `val_percent_check` to 0.0 instead.'\n\n if not 0. <= value <= 1.:\n raise ValueError(msg)\n\n def _worker_check(self, dataloader: DataLoader, name: str) -> None:\n if isinstance(dataloader, DataLoader) and dataloader.num_workers <= 2:\n warnings.warn(f'The dataloader, {name}, does not have many workers which may be a bottleneck.'\n ' Consider increasing the value of the `num_workers` argument`'\n ' in the `DataLoader` init to improve performance.')\n\n def auto_add_sampler(self, dataloader: DataLoader, train: bool) -> DataLoader:\n\n # don't do anything if it's not a dataloader\n if not isinstance(dataloader, DataLoader):\n return dataloader\n\n need_dist_sampler = self.use_ddp or self.use_ddp2 or self.use_tpu\n\n if need_dist_sampler:\n\n skip_keys = ['sampler', 'batch_sampler', 'dataset_kind']\n\n dl_args = {\n k: v for k, v in dataloader.__dict__.items() if not k.startswith('_') and k not in skip_keys\n }\n\n if self.use_tpu:\n sampler = DistributedSampler(\n dataloader.dataset,\n num_replicas=xm.xrt_world_size(),\n rank=xm.get_ordinal()\n )\n else:\n sampler = DistributedSampler(dataloader.dataset)\n\n dl_args['sampler'] = sampler\n dataloader = type(dataloader)(**dl_args)\n\n return dataloader\n\n def reset_train_dataloader(self, model: LightningModule) -> None:\n \"\"\"Resets the train dataloader and initialises required variables\n (number of batches, when to validate, etc.).\n\n Args:\n model: The current `LightningModule`\n \"\"\"\n self.train_dataloader = self.request_dataloader(model.train_dataloader)\n\n self.num_training_batches = 0\n\n # automatically add samplers\n self.train_dataloader = self.auto_add_sampler(self.train_dataloader, train=True)\n\n self._worker_check(self.train_dataloader, 'train dataloader')\n self._percent_range_check('train_percent_check')\n\n if not _has_len(self.train_dataloader):\n self.num_training_batches = float('inf')\n else:\n # try getting the length\n self.num_training_batches = len(self.train_dataloader)\n self.num_training_batches = int(self.num_training_batches * self.train_percent_check)\n\n # determine when to check validation\n # if int passed in, val checks that often\n # otherwise, it checks in [0, 1.0] % range of a training epoch\n if isinstance(self.val_check_interval, int):\n self.val_check_batch = self.val_check_interval\n if self.val_check_batch > self.num_training_batches:\n raise ValueError(\n 
f'`val_check_interval` ({self.val_check_interval}) must be less than or equal '\n f'to the number of the training batches ({self.num_training_batches}). '\n 'If you want to disable validation set `val_percent_check` to 0.0 instead.')\n else:\n if not _has_len(self.train_dataloader):\n if self.val_check_interval == 1.0:\n self.val_check_batch = float('inf')\n else:\n raise MisconfigurationException(\n 'When using an infinite DataLoader (e.g. with an IterableDataset or when '\n 'DataLoader does not implement `__len__`) for `train_dataloader`, '\n '`Trainer(val_check_interval)` must be `1.0` or an int. An int k specifies '\n 'checking validation every k training batches.')\n else:\n self._percent_range_check('val_check_interval')\n\n self.val_check_batch = int(self.num_training_batches * self.val_check_interval)\n self.val_check_batch = max(1, self.val_check_batch)\n\n def _reset_eval_dataloader(self, model: LightningModule,\n mode: str) -> Tuple[int, List[DataLoader]]:\n \"\"\"Generic method to reset a dataloader for evaluation.\n\n Args:\n model: The current `LightningModule`\n mode: Either `'val'` or `'test'`\n\n Returns:\n Tuple (num_batches, dataloaders)\n \"\"\"\n dataloaders = self.request_dataloader(getattr(model, f'{mode}_dataloader'))\n\n if not isinstance(dataloaders, list):\n dataloaders = [dataloaders]\n\n # add samplers\n dataloaders = [self.auto_add_sampler(dl, train=False) for dl in dataloaders if dl]\n\n num_batches = 0\n\n # determine number of batches\n # datasets could be none, 1 or 2+\n if len(dataloaders) != 0:\n for i, dataloader in enumerate(dataloaders):\n self._worker_check(dataloader, f'{mode} dataloader {i}')\n if not _has_len(dataloader):\n num_batches = float('inf')\n\n percent_check = getattr(self, f'{mode}_percent_check')\n\n if num_batches != float('inf'):\n self._percent_range_check(f'{mode}_percent_check')\n\n num_batches = sum(len(dataloader) for dataloader in dataloaders)\n num_batches = int(num_batches * percent_check)\n elif percent_check not in (0.0, 1.0):\n raise MisconfigurationException(\n 'When using an infinite DataLoader (e.g. 
with an IterableDataset or when '\n f'DataLoader does not implement `__len__`) for `{mode}_dataloader`, '\n f'`Trainer({mode}_percent_check)` must be `0.0` or `1.0`.')\n return num_batches, dataloaders\n\n def reset_val_dataloader(self, model: LightningModule) -> None:\n \"\"\"Resets the validation dataloader and determines the number of batches.\n\n Args:\n model: The current `LightningModule`\n \"\"\"\n if self.is_overriden('validation_step'):\n self.num_val_batches, self.val_dataloaders =\\\n self._reset_eval_dataloader(model, 'val')\n\n def reset_test_dataloader(self, model) -> None:\n \"\"\"Resets the validation dataloader and determines the number of batches.\n\n Args:\n model: The current `LightningModule`\n \"\"\"\n if self.is_overriden('test_step'):\n self.num_test_batches, self.test_dataloaders =\\\n self._reset_eval_dataloader(model, 'test')\n\n def request_dataloader(self, dataloader_fx: Callable) -> DataLoader:\n \"\"\"Handles downloading data in the GPU or TPU case.\n\n Args:\n dataloader_fx: The bound dataloader getter\n\n Returns:\n The dataloader\n \"\"\"\n dataloader = dataloader_fx()\n\n # get the function we'll use to get data\n if self.use_ddp or self.use_ddp2:\n # all processes wait until data download has happened\n torch_distrib.barrier()\n\n # data download/load on TPU\n elif self.use_tpu and XLA_AVAILABLE:\n # all processes wait until data download has happened\n torch_xla.core.xla_model.rendezvous('pl.TrainerDataLoadingMixin.get_dataloaders')\n\n return dataloader\n\n def determine_data_use_amount(self, train_percent_check: float, val_percent_check: float,\n test_percent_check: float, overfit_pct: float) -> None:\n \"\"\"Use less data for debugging purposes\n \"\"\"\n self.train_percent_check = train_percent_check\n self.val_percent_check = val_percent_check\n self.test_percent_check = test_percent_check\n if overfit_pct > 0:\n if overfit_pct > 1:\n raise ValueError(\n f'`overfit_pct` must be not greater than 1.0, but got {overfit_pct:.3f}.')\n\n self.train_percent_check = overfit_pct\n self.val_percent_check = overfit_pct\n self.test_percent_check = overfit_pct\n", "path": "pytorch_lightning/trainer/data_loading.py"}]} |
gh_patches_debug_1198 | rasdani/github-patches | git_diff | numpy__numpy-15425 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
dir(numpy) returns a duplicate "testing" entry
<!-- Please describe the issue in detail here, and fill in the fields below -->
### Reproducing code example:
<!-- A short code example that reproduces the problem/missing feature. It should be
self-contained, i.e., possible to run as-is via 'python myproblem.py' -->
```python
import numpy as np
>>> np.__version__
'1.18.1'
>>> len(dir(np))
620
>>> np.testing
<module 'numpy.testing' from 'C:\\Python\\Python38\\lib\\site-packages\\numpy\\testing\\__init__.py'>
>>> len(dir(np))
621
>>> [i for i in dir(np) if i == "testing"]
['testing', 'testing']
```
### Error:
"testing" appears twice in dir(np)
### Numpy/Python version information:
<!-- Output from 'import sys, numpy; print(numpy.__version__, sys.version)' -->
Python 3.8.0 (tags/v3.8.0:fa919fd, Oct 14 2019, 19:37:50) [MSC v.1916 64 bit (AMD64)] on win32
>>> np.__version__
'1.18.1'
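A simplified stand-in for what seems to be going on (hypothetical names, not the actual numpy source): the lazy module-level `__getattr__` imports `numpy.testing` on first access, and that import also binds `testing` into the package's globals, so a `__dir__` built by plain list concatenation lists it twice, while a set union would not:

```python
# Hypothetical minimal model of numpy.__dir__ after np.testing has been accessed once.
globals_after_access = ['ones', 'zeros', 'testing']   # 'testing' is now a real attribute

def dir_with_concat():
    # mirrors the current behaviour: plain concatenation duplicates 'testing'
    return list(globals_after_access) + ['Tester', 'testing']

def dir_with_union():
    # deduplicated variant built from a set union
    return list(set(globals_after_access) | {'Tester', 'testing'})

print(dir_with_concat().count('testing'))  # 2
print(dir_with_union().count('testing'))   # 1
```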
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `numpy/__init__.py`
Content:
```
1 """
2 NumPy
3 =====
4
5 Provides
6 1. An array object of arbitrary homogeneous items
7 2. Fast mathematical operations over arrays
8 3. Linear Algebra, Fourier Transforms, Random Number Generation
9
10 How to use the documentation
11 ----------------------------
12 Documentation is available in two forms: docstrings provided
13 with the code, and a loose standing reference guide, available from
14 `the NumPy homepage <https://www.scipy.org>`_.
15
16 We recommend exploring the docstrings using
17 `IPython <https://ipython.org>`_, an advanced Python shell with
18 TAB-completion and introspection capabilities. See below for further
19 instructions.
20
21 The docstring examples assume that `numpy` has been imported as `np`::
22
23 >>> import numpy as np
24
25 Code snippets are indicated by three greater-than signs::
26
27 >>> x = 42
28 >>> x = x + 1
29
30 Use the built-in ``help`` function to view a function's docstring::
31
32 >>> help(np.sort)
33 ... # doctest: +SKIP
34
35 For some objects, ``np.info(obj)`` may provide additional help. This is
36 particularly true if you see the line "Help on ufunc object:" at the top
37 of the help() page. Ufuncs are implemented in C, not Python, for speed.
38 The native Python help() does not know how to view their help, but our
39 np.info() function does.
40
41 To search for documents containing a keyword, do::
42
43 >>> np.lookfor('keyword')
44 ... # doctest: +SKIP
45
46 General-purpose documents like a glossary and help on the basic concepts
47 of numpy are available under the ``doc`` sub-module::
48
49 >>> from numpy import doc
50 >>> help(doc)
51 ... # doctest: +SKIP
52
53 Available subpackages
54 ---------------------
55 doc
56 Topical documentation on broadcasting, indexing, etc.
57 lib
58 Basic functions used by several sub-packages.
59 random
60 Core Random Tools
61 linalg
62 Core Linear Algebra Tools
63 fft
64 Core FFT routines
65 polynomial
66 Polynomial tools
67 testing
68 NumPy testing tools
69 f2py
70 Fortran to Python Interface Generator.
71 distutils
72 Enhancements to distutils with support for
73 Fortran compilers support and more.
74
75 Utilities
76 ---------
77 test
78 Run numpy unittests
79 show_config
80 Show numpy build configuration
81 dual
82 Overwrite certain functions with high-performance Scipy tools
83 matlib
84 Make everything matrices.
85 __version__
86 NumPy version string
87
88 Viewing documentation using IPython
89 -----------------------------------
90 Start IPython with the NumPy profile (``ipython -p numpy``), which will
91 import `numpy` under the alias `np`. Then, use the ``cpaste`` command to
92 paste examples into the shell. To see which functions are available in
93 `numpy`, type ``np.<TAB>`` (where ``<TAB>`` refers to the TAB key), or use
94 ``np.*cos*?<ENTER>`` (where ``<ENTER>`` refers to the ENTER key) to narrow
95 down the list. To view the docstring for a function, use
96 ``np.cos?<ENTER>`` (to view the docstring) and ``np.cos??<ENTER>`` (to view
97 the source code).
98
99 Copies vs. in-place operation
100 -----------------------------
101 Most of the functions in `numpy` return a copy of the array argument
102 (e.g., `np.sort`). In-place versions of these functions are often
103 available as array methods, i.e. ``x = np.array([1,2,3]); x.sort()``.
104 Exceptions to this rule are documented.
105
106 """
107 import sys
108 import warnings
109
110 from ._globals import ModuleDeprecationWarning, VisibleDeprecationWarning
111 from ._globals import _NoValue
112
113 # We first need to detect if we're being called as part of the numpy setup
114 # procedure itself in a reliable manner.
115 try:
116 __NUMPY_SETUP__
117 except NameError:
118 __NUMPY_SETUP__ = False
119
120 if __NUMPY_SETUP__:
121 sys.stderr.write('Running from numpy source directory.\n')
122 else:
123 try:
124 from numpy.__config__ import show as show_config
125 except ImportError:
126 msg = """Error importing numpy: you should not try to import numpy from
127 its source directory; please exit the numpy source tree, and relaunch
128 your python interpreter from there."""
129 raise ImportError(msg)
130
131 from .version import git_revision as __git_revision__
132 from .version import version as __version__
133
134 __all__ = ['ModuleDeprecationWarning',
135 'VisibleDeprecationWarning']
136
137 # Allow distributors to run custom init code
138 from . import _distributor_init
139
140 from . import core
141 from .core import *
142 from . import compat
143 from . import lib
144 # FIXME: why have numpy.lib if everything is imported here??
145 from .lib import *
146
147 from . import linalg
148 from . import fft
149 from . import polynomial
150 from . import random
151 from . import ctypeslib
152 from . import ma
153 from . import matrixlib as _mat
154 from .matrixlib import *
155 from .compat import long
156
157 # Make these accessible from numpy name-space
158 # but not imported in from numpy import *
159 # TODO[gh-6103]: Deprecate these
160 if sys.version_info[0] >= 3:
161 from builtins import bool, int, float, complex, object, str
162 unicode = str
163 else:
164 from __builtin__ import bool, int, float, complex, object, unicode, str
165
166 from .core import round, abs, max, min
167 # now that numpy modules are imported, can initialize limits
168 core.getlimits._register_known_types()
169
170 __all__.extend(['__version__', 'show_config'])
171 __all__.extend(core.__all__)
172 __all__.extend(_mat.__all__)
173 __all__.extend(lib.__all__)
174 __all__.extend(['linalg', 'fft', 'random', 'ctypeslib', 'ma'])
175
176 # These are added by `from .core import *` and `core.__all__`, but we
177 # overwrite them above with builtins we do _not_ want to export.
178 __all__.remove('long')
179 __all__.remove('unicode')
180
181 # Remove things that are in the numpy.lib but not in the numpy namespace
182 # Note that there is a test (numpy/tests/test_public_api.py:test_numpy_namespace)
183 # that prevents adding more things to the main namespace by accident.
184 # The list below will grow until the `from .lib import *` fixme above is
185 # taken care of
186 __all__.remove('Arrayterator')
187 del Arrayterator
188
189 # Filter out Cython harmless warnings
190 warnings.filterwarnings("ignore", message="numpy.dtype size changed")
191 warnings.filterwarnings("ignore", message="numpy.ufunc size changed")
192 warnings.filterwarnings("ignore", message="numpy.ndarray size changed")
193
194 # oldnumeric and numarray were removed in 1.9. In case some packages import
195 # but do not use them, we define them here for backward compatibility.
196 oldnumeric = 'removed'
197 numarray = 'removed'
198
199 if sys.version_info[:2] >= (3, 7):
200 # Importing Tester requires importing all of UnitTest which is not a
201 # cheap import Since it is mainly used in test suits, we lazy import it
202 # here to save on the order of 10 ms of import time for most users
203 #
204 # The previous way Tester was imported also had a side effect of adding
205 # the full `numpy.testing` namespace
206 #
207 # module level getattr is only supported in 3.7 onwards
208 # https://www.python.org/dev/peps/pep-0562/
209 def __getattr__(attr):
210 if attr == 'testing':
211 import numpy.testing as testing
212 return testing
213 elif attr == 'Tester':
214 from .testing import Tester
215 return Tester
216 else:
217 raise AttributeError("module {!r} has no attribute "
218 "{!r}".format(__name__, attr))
219
220 def __dir__():
221 return list(globals().keys()) + ['Tester', 'testing']
222
223 else:
224 # We don't actually use this ourselves anymore, but I'm not 100% sure that
225 # no-one else in the world is using it (though I hope not)
226 from .testing import Tester
227
228 # Pytest testing
229 from numpy._pytesttester import PytestTester
230 test = PytestTester(__name__)
231 del PytestTester
232
233
234 def _sanity_check():
235 """
236 Quick sanity checks for common bugs caused by environment.
237 There are some cases e.g. with wrong BLAS ABI that cause wrong
238 results under specific runtime conditions that are not necessarily
239 achieved during test suite runs, and it is useful to catch those early.
240
241 See https://github.com/numpy/numpy/issues/8577 and other
242 similar bug reports.
243
244 """
245 try:
246 x = ones(2, dtype=float32)
247 if not abs(x.dot(x) - 2.0) < 1e-5:
248 raise AssertionError()
249 except AssertionError:
250 msg = ("The current Numpy installation ({!r}) fails to "
251 "pass simple sanity checks. This can be caused for example "
252 "by incorrect BLAS library being linked in, or by mixing "
253 "package managers (pip, conda, apt, ...). Search closed "
254 "numpy issues for similar problems.")
255 raise RuntimeError(msg.format(__file__))
256
257 _sanity_check()
258 del _sanity_check
259
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/numpy/__init__.py b/numpy/__init__.py
--- a/numpy/__init__.py
+++ b/numpy/__init__.py
@@ -218,7 +218,7 @@
"{!r}".format(__name__, attr))
def __dir__():
- return list(globals().keys()) + ['Tester', 'testing']
+ return list(globals().keys() | {'Tester', 'testing'})
else:
# We don't actually use this ourselves anymore, but I'm not 100% sure that
| {"golden_diff": "diff --git a/numpy/__init__.py b/numpy/__init__.py\n--- a/numpy/__init__.py\n+++ b/numpy/__init__.py\n@@ -218,7 +218,7 @@\n \"{!r}\".format(__name__, attr))\n \n def __dir__():\n- return list(globals().keys()) + ['Tester', 'testing']\n+ return list(globals().keys() | {'Tester', 'testing'})\n \n else:\n # We don't actually use this ourselves anymore, but I'm not 100% sure that\n", "issue": "dir(numpy) returns duplicate \"testing\"\n<!-- Please describe the issue in detail here, and fill in the fields below -->\r\n\r\n### Reproducing code example:\r\n\r\n<!-- A short code example that reproduces the problem/missing feature. It should be\r\nself-contained, i.e., possible to run as-is via 'python myproblem.py' -->\r\n\r\n```python\r\nimport numpy as np\r\n>>> np.__version__\r\n'1.18.1'\r\n>>> len(dir(np))\r\n620\r\n>>> np.testing\r\n<module 'numpy.testing' from 'C:\\\\Python\\\\Python38\\\\lib\\\\site-packages\\\\numpy\\\\testing\\\\__init__.py'>\r\n>>> len(dir(np))\r\n621\r\n>>> [i for i in dir(np) if i == \"testing\"]\r\n['testing', 'testing']\r\n```\r\n### Error:\r\n\"testing\" appears twice in dir(np)\r\n\r\n\r\n### Numpy/Python version information:\r\n\r\n<!-- Output from 'import sys, numpy; print(numpy.__version__, sys.version)' -->\r\nPython 3.8.0 (tags/v3.8.0:fa919fd, Oct 14 2019, 19:37:50) [MSC v.1916 64 bit (AMD64)] on win32\r\n>>> np.__version__\r\n'1.18.1'\n", "before_files": [{"content": "\"\"\"\nNumPy\n=====\n\nProvides\n 1. An array object of arbitrary homogeneous items\n 2. Fast mathematical operations over arrays\n 3. Linear Algebra, Fourier Transforms, Random Number Generation\n\nHow to use the documentation\n----------------------------\nDocumentation is available in two forms: docstrings provided\nwith the code, and a loose standing reference guide, available from\n`the NumPy homepage <https://www.scipy.org>`_.\n\nWe recommend exploring the docstrings using\n`IPython <https://ipython.org>`_, an advanced Python shell with\nTAB-completion and introspection capabilities. See below for further\ninstructions.\n\nThe docstring examples assume that `numpy` has been imported as `np`::\n\n >>> import numpy as np\n\nCode snippets are indicated by three greater-than signs::\n\n >>> x = 42\n >>> x = x + 1\n\nUse the built-in ``help`` function to view a function's docstring::\n\n >>> help(np.sort)\n ... # doctest: +SKIP\n\nFor some objects, ``np.info(obj)`` may provide additional help. This is\nparticularly true if you see the line \"Help on ufunc object:\" at the top\nof the help() page. Ufuncs are implemented in C, not Python, for speed.\nThe native Python help() does not know how to view their help, but our\nnp.info() function does.\n\nTo search for documents containing a keyword, do::\n\n >>> np.lookfor('keyword')\n ... # doctest: +SKIP\n\nGeneral-purpose documents like a glossary and help on the basic concepts\nof numpy are available under the ``doc`` sub-module::\n\n >>> from numpy import doc\n >>> help(doc)\n ... 
# doctest: +SKIP\n\nAvailable subpackages\n---------------------\ndoc\n Topical documentation on broadcasting, indexing, etc.\nlib\n Basic functions used by several sub-packages.\nrandom\n Core Random Tools\nlinalg\n Core Linear Algebra Tools\nfft\n Core FFT routines\npolynomial\n Polynomial tools\ntesting\n NumPy testing tools\nf2py\n Fortran to Python Interface Generator.\ndistutils\n Enhancements to distutils with support for\n Fortran compilers support and more.\n\nUtilities\n---------\ntest\n Run numpy unittests\nshow_config\n Show numpy build configuration\ndual\n Overwrite certain functions with high-performance Scipy tools\nmatlib\n Make everything matrices.\n__version__\n NumPy version string\n\nViewing documentation using IPython\n-----------------------------------\nStart IPython with the NumPy profile (``ipython -p numpy``), which will\nimport `numpy` under the alias `np`. Then, use the ``cpaste`` command to\npaste examples into the shell. To see which functions are available in\n`numpy`, type ``np.<TAB>`` (where ``<TAB>`` refers to the TAB key), or use\n``np.*cos*?<ENTER>`` (where ``<ENTER>`` refers to the ENTER key) to narrow\ndown the list. To view the docstring for a function, use\n``np.cos?<ENTER>`` (to view the docstring) and ``np.cos??<ENTER>`` (to view\nthe source code).\n\nCopies vs. in-place operation\n-----------------------------\nMost of the functions in `numpy` return a copy of the array argument\n(e.g., `np.sort`). In-place versions of these functions are often\navailable as array methods, i.e. ``x = np.array([1,2,3]); x.sort()``.\nExceptions to this rule are documented.\n\n\"\"\"\nimport sys\nimport warnings\n\nfrom ._globals import ModuleDeprecationWarning, VisibleDeprecationWarning\nfrom ._globals import _NoValue\n\n# We first need to detect if we're being called as part of the numpy setup\n# procedure itself in a reliable manner.\ntry:\n __NUMPY_SETUP__\nexcept NameError:\n __NUMPY_SETUP__ = False\n\nif __NUMPY_SETUP__:\n sys.stderr.write('Running from numpy source directory.\\n')\nelse:\n try:\n from numpy.__config__ import show as show_config\n except ImportError:\n msg = \"\"\"Error importing numpy: you should not try to import numpy from\n its source directory; please exit the numpy source tree, and relaunch\n your python interpreter from there.\"\"\"\n raise ImportError(msg)\n\n from .version import git_revision as __git_revision__\n from .version import version as __version__\n\n __all__ = ['ModuleDeprecationWarning',\n 'VisibleDeprecationWarning']\n\n # Allow distributors to run custom init code\n from . import _distributor_init\n\n from . import core\n from .core import *\n from . import compat\n from . import lib\n # FIXME: why have numpy.lib if everything is imported here??\n from .lib import *\n\n from . import linalg\n from . import fft\n from . import polynomial\n from . import random\n from . import ctypeslib\n from . import ma\n from . 
import matrixlib as _mat\n from .matrixlib import *\n from .compat import long\n\n # Make these accessible from numpy name-space\n # but not imported in from numpy import *\n # TODO[gh-6103]: Deprecate these\n if sys.version_info[0] >= 3:\n from builtins import bool, int, float, complex, object, str\n unicode = str\n else:\n from __builtin__ import bool, int, float, complex, object, unicode, str\n\n from .core import round, abs, max, min\n # now that numpy modules are imported, can initialize limits\n core.getlimits._register_known_types()\n\n __all__.extend(['__version__', 'show_config'])\n __all__.extend(core.__all__)\n __all__.extend(_mat.__all__)\n __all__.extend(lib.__all__)\n __all__.extend(['linalg', 'fft', 'random', 'ctypeslib', 'ma'])\n\n # These are added by `from .core import *` and `core.__all__`, but we\n # overwrite them above with builtins we do _not_ want to export.\n __all__.remove('long')\n __all__.remove('unicode')\n\n # Remove things that are in the numpy.lib but not in the numpy namespace\n # Note that there is a test (numpy/tests/test_public_api.py:test_numpy_namespace)\n # that prevents adding more things to the main namespace by accident.\n # The list below will grow until the `from .lib import *` fixme above is\n # taken care of\n __all__.remove('Arrayterator')\n del Arrayterator\n\n # Filter out Cython harmless warnings\n warnings.filterwarnings(\"ignore\", message=\"numpy.dtype size changed\")\n warnings.filterwarnings(\"ignore\", message=\"numpy.ufunc size changed\")\n warnings.filterwarnings(\"ignore\", message=\"numpy.ndarray size changed\")\n\n # oldnumeric and numarray were removed in 1.9. In case some packages import\n # but do not use them, we define them here for backward compatibility.\n oldnumeric = 'removed'\n numarray = 'removed'\n\n if sys.version_info[:2] >= (3, 7):\n # Importing Tester requires importing all of UnitTest which is not a\n # cheap import Since it is mainly used in test suits, we lazy import it\n # here to save on the order of 10 ms of import time for most users\n #\n # The previous way Tester was imported also had a side effect of adding\n # the full `numpy.testing` namespace\n #\n # module level getattr is only supported in 3.7 onwards\n # https://www.python.org/dev/peps/pep-0562/\n def __getattr__(attr):\n if attr == 'testing':\n import numpy.testing as testing\n return testing\n elif attr == 'Tester':\n from .testing import Tester\n return Tester\n else:\n raise AttributeError(\"module {!r} has no attribute \"\n \"{!r}\".format(__name__, attr))\n\n def __dir__():\n return list(globals().keys()) + ['Tester', 'testing']\n\n else:\n # We don't actually use this ourselves anymore, but I'm not 100% sure that\n # no-one else in the world is using it (though I hope not)\n from .testing import Tester\n\n # Pytest testing\n from numpy._pytesttester import PytestTester\n test = PytestTester(__name__)\n del PytestTester\n\n\n def _sanity_check():\n \"\"\"\n Quick sanity checks for common bugs caused by environment.\n There are some cases e.g. with wrong BLAS ABI that cause wrong\n results under specific runtime conditions that are not necessarily\n achieved during test suite runs, and it is useful to catch those early.\n\n See https://github.com/numpy/numpy/issues/8577 and other\n similar bug reports.\n\n \"\"\"\n try:\n x = ones(2, dtype=float32)\n if not abs(x.dot(x) - 2.0) < 1e-5:\n raise AssertionError()\n except AssertionError:\n msg = (\"The current Numpy installation ({!r}) fails to \"\n \"pass simple sanity checks. 
This can be caused for example \"\n \"by incorrect BLAS library being linked in, or by mixing \"\n \"package managers (pip, conda, apt, ...). Search closed \"\n \"numpy issues for similar problems.\")\n raise RuntimeError(msg.format(__file__))\n\n _sanity_check()\n del _sanity_check\n", "path": "numpy/__init__.py"}], "after_files": [{"content": "\"\"\"\nNumPy\n=====\n\nProvides\n 1. An array object of arbitrary homogeneous items\n 2. Fast mathematical operations over arrays\n 3. Linear Algebra, Fourier Transforms, Random Number Generation\n\nHow to use the documentation\n----------------------------\nDocumentation is available in two forms: docstrings provided\nwith the code, and a loose standing reference guide, available from\n`the NumPy homepage <https://www.scipy.org>`_.\n\nWe recommend exploring the docstrings using\n`IPython <https://ipython.org>`_, an advanced Python shell with\nTAB-completion and introspection capabilities. See below for further\ninstructions.\n\nThe docstring examples assume that `numpy` has been imported as `np`::\n\n >>> import numpy as np\n\nCode snippets are indicated by three greater-than signs::\n\n >>> x = 42\n >>> x = x + 1\n\nUse the built-in ``help`` function to view a function's docstring::\n\n >>> help(np.sort)\n ... # doctest: +SKIP\n\nFor some objects, ``np.info(obj)`` may provide additional help. This is\nparticularly true if you see the line \"Help on ufunc object:\" at the top\nof the help() page. Ufuncs are implemented in C, not Python, for speed.\nThe native Python help() does not know how to view their help, but our\nnp.info() function does.\n\nTo search for documents containing a keyword, do::\n\n >>> np.lookfor('keyword')\n ... # doctest: +SKIP\n\nGeneral-purpose documents like a glossary and help on the basic concepts\nof numpy are available under the ``doc`` sub-module::\n\n >>> from numpy import doc\n >>> help(doc)\n ... # doctest: +SKIP\n\nAvailable subpackages\n---------------------\ndoc\n Topical documentation on broadcasting, indexing, etc.\nlib\n Basic functions used by several sub-packages.\nrandom\n Core Random Tools\nlinalg\n Core Linear Algebra Tools\nfft\n Core FFT routines\npolynomial\n Polynomial tools\ntesting\n NumPy testing tools\nf2py\n Fortran to Python Interface Generator.\ndistutils\n Enhancements to distutils with support for\n Fortran compilers support and more.\n\nUtilities\n---------\ntest\n Run numpy unittests\nshow_config\n Show numpy build configuration\ndual\n Overwrite certain functions with high-performance Scipy tools\nmatlib\n Make everything matrices.\n__version__\n NumPy version string\n\nViewing documentation using IPython\n-----------------------------------\nStart IPython with the NumPy profile (``ipython -p numpy``), which will\nimport `numpy` under the alias `np`. Then, use the ``cpaste`` command to\npaste examples into the shell. To see which functions are available in\n`numpy`, type ``np.<TAB>`` (where ``<TAB>`` refers to the TAB key), or use\n``np.*cos*?<ENTER>`` (where ``<ENTER>`` refers to the ENTER key) to narrow\ndown the list. To view the docstring for a function, use\n``np.cos?<ENTER>`` (to view the docstring) and ``np.cos??<ENTER>`` (to view\nthe source code).\n\nCopies vs. in-place operation\n-----------------------------\nMost of the functions in `numpy` return a copy of the array argument\n(e.g., `np.sort`). In-place versions of these functions are often\navailable as array methods, i.e. 
``x = np.array([1,2,3]); x.sort()``.\nExceptions to this rule are documented.\n\n\"\"\"\nimport sys\nimport warnings\n\nfrom ._globals import ModuleDeprecationWarning, VisibleDeprecationWarning\nfrom ._globals import _NoValue\n\n# We first need to detect if we're being called as part of the numpy setup\n# procedure itself in a reliable manner.\ntry:\n __NUMPY_SETUP__\nexcept NameError:\n __NUMPY_SETUP__ = False\n\nif __NUMPY_SETUP__:\n sys.stderr.write('Running from numpy source directory.\\n')\nelse:\n try:\n from numpy.__config__ import show as show_config\n except ImportError:\n msg = \"\"\"Error importing numpy: you should not try to import numpy from\n its source directory; please exit the numpy source tree, and relaunch\n your python interpreter from there.\"\"\"\n raise ImportError(msg)\n\n from .version import git_revision as __git_revision__\n from .version import version as __version__\n\n __all__ = ['ModuleDeprecationWarning',\n 'VisibleDeprecationWarning']\n\n # Allow distributors to run custom init code\n from . import _distributor_init\n\n from . import core\n from .core import *\n from . import compat\n from . import lib\n # FIXME: why have numpy.lib if everything is imported here??\n from .lib import *\n\n from . import linalg\n from . import fft\n from . import polynomial\n from . import random\n from . import ctypeslib\n from . import ma\n from . import matrixlib as _mat\n from .matrixlib import *\n from .compat import long\n\n # Make these accessible from numpy name-space\n # but not imported in from numpy import *\n # TODO[gh-6103]: Deprecate these\n if sys.version_info[0] >= 3:\n from builtins import bool, int, float, complex, object, str\n unicode = str\n else:\n from __builtin__ import bool, int, float, complex, object, unicode, str\n\n from .core import round, abs, max, min\n # now that numpy modules are imported, can initialize limits\n core.getlimits._register_known_types()\n\n __all__.extend(['__version__', 'show_config'])\n __all__.extend(core.__all__)\n __all__.extend(_mat.__all__)\n __all__.extend(lib.__all__)\n __all__.extend(['linalg', 'fft', 'random', 'ctypeslib', 'ma'])\n\n # These are added by `from .core import *` and `core.__all__`, but we\n # overwrite them above with builtins we do _not_ want to export.\n __all__.remove('long')\n __all__.remove('unicode')\n\n # Remove things that are in the numpy.lib but not in the numpy namespace\n # Note that there is a test (numpy/tests/test_public_api.py:test_numpy_namespace)\n # that prevents adding more things to the main namespace by accident.\n # The list below will grow until the `from .lib import *` fixme above is\n # taken care of\n __all__.remove('Arrayterator')\n del Arrayterator\n\n # Filter out Cython harmless warnings\n warnings.filterwarnings(\"ignore\", message=\"numpy.dtype size changed\")\n warnings.filterwarnings(\"ignore\", message=\"numpy.ufunc size changed\")\n warnings.filterwarnings(\"ignore\", message=\"numpy.ndarray size changed\")\n\n # oldnumeric and numarray were removed in 1.9. 
In case some packages import\n # but do not use them, we define them here for backward compatibility.\n oldnumeric = 'removed'\n numarray = 'removed'\n\n if sys.version_info[:2] >= (3, 7):\n # Importing Tester requires importing all of UnitTest which is not a\n # cheap import Since it is mainly used in test suits, we lazy import it\n # here to save on the order of 10 ms of import time for most users\n #\n # The previous way Tester was imported also had a side effect of adding\n # the full `numpy.testing` namespace\n #\n # module level getattr is only supported in 3.7 onwards\n # https://www.python.org/dev/peps/pep-0562/\n def __getattr__(attr):\n if attr == 'testing':\n import numpy.testing as testing\n return testing\n elif attr == 'Tester':\n from .testing import Tester\n return Tester\n else:\n raise AttributeError(\"module {!r} has no attribute \"\n \"{!r}\".format(__name__, attr))\n\n def __dir__():\n return list(globals().keys() | {'Tester', 'testing'})\n\n else:\n # We don't actually use this ourselves anymore, but I'm not 100% sure that\n # no-one else in the world is using it (though I hope not)\n from .testing import Tester\n\n # Pytest testing\n from numpy._pytesttester import PytestTester\n test = PytestTester(__name__)\n del PytestTester\n\n\n def _sanity_check():\n \"\"\"\n Quick sanity checks for common bugs caused by environment.\n There are some cases e.g. with wrong BLAS ABI that cause wrong\n results under specific runtime conditions that are not necessarily\n achieved during test suite runs, and it is useful to catch those early.\n\n See https://github.com/numpy/numpy/issues/8577 and other\n similar bug reports.\n\n \"\"\"\n try:\n x = ones(2, dtype=float32)\n if not abs(x.dot(x) - 2.0) < 1e-5:\n raise AssertionError()\n except AssertionError:\n msg = (\"The current Numpy installation ({!r}) fails to \"\n \"pass simple sanity checks. This can be caused for example \"\n \"by incorrect BLAS library being linked in, or by mixing \"\n \"package managers (pip, conda, apt, ...). Search closed \"\n \"numpy issues for similar problems.\")\n raise RuntimeError(msg.format(__file__))\n\n _sanity_check()\n del _sanity_check\n", "path": "numpy/__init__.py"}]} |
gh_patches_debug_1199 | rasdani/github-patches | git_diff | instadeepai__Mava-595 | We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[TEST] Jax Trainer Init
### What do you want to test?
Jax trainer init components
### Outline of test structure
* Unit tests
* Test components and hooks
### Definition of done
Passing checks, cover all hooks, edge cases considered
### Mandatory checklist before making a PR
* [ ] The success criteria laid down in “Definition of done” are met.
* [ ] Test code is documented - docstrings for methods and classes, static types for arguments.
* [ ] Documentation is updated - README, CONTRIBUTING, or other documentation.
--- END ISSUE ---
Below are some code segments, each from a relevant file. One or more of these files may contain bugs.
--- BEGIN FILES ---
Path: `mava/components/jax/training/trainer.py`
Content:
```
1 # python3
2 # Copyright 2021 InstaDeep Ltd. All rights reserved.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 """Trainer components for system builders."""
17
18 import abc
19 from dataclasses import dataclass, field
20 from types import SimpleNamespace
21 from typing import Any, Callable, Dict, Optional
22
23 from mava.components.jax import Component
24 from mava.core_jax import SystemBuilder, SystemTrainer
25 from mava.utils.sort_utils import sort_str_num
26
27
28 class BaseTrainerInit(Component):
29 @abc.abstractmethod
30 def __init__(
31 self,
32 config: Any,
33 ):
34 """Initialise system init components.
35
36 Args:
37 config : a dataclass specifying the component parameters.
38 """
39 self.config = config
40
41 @abc.abstractmethod
42 def on_building_init_end(self, builder: SystemBuilder) -> None:
43 """Summary."""
44 pass
45
46 @abc.abstractmethod
47 def on_training_utility_fns(self, trainer: SystemTrainer) -> None:
48 """Summary."""
49 pass
50
51 @staticmethod
52 def name() -> str:
53 """Component name."""
54
55 return "trainer_init"
56
57
58 class SingleTrainerInit(BaseTrainerInit):
59 def __init__(self, config: SimpleNamespace = SimpleNamespace()):
60 """Initialises a single trainer.
61
62 Single trainer is used to train all networks.
63
64 Args:
65 config : a dataclass specifying the component parameters.
66 """
67 self.config = config
68
69 def on_building_init_end(self, builder: SystemBuilder) -> None:
70 """Assigns trainers to networks for training.
71
72 Args:
73 builder : the system builder
74 Raises:
75 ValueError: Raises an error when trainer_networks is not
76 set to single_trainer.
77 """
78 unique_net_keys = builder.store.unique_net_keys
79
80 # Setup trainer_networks
81
82 builder.store.trainer_networks = {"trainer": unique_net_keys}
83
84 # Get all the unique trainer network keys
85 all_trainer_net_keys = []
86 for trainer_nets in builder.store.trainer_networks.values():
87 all_trainer_net_keys.extend(trainer_nets)
88 unique_trainer_net_keys = sort_str_num(list(set(all_trainer_net_keys)))
89
90 # Check that all agent_net_keys are in trainer_networks
91 assert unique_net_keys == unique_trainer_net_keys
92 # Setup specs for each network
93 builder.store.net_spec_keys = {}
94 for i in range(len(unique_net_keys)):
95 builder.store.net_spec_keys[unique_net_keys[i]] = builder.store.agents[
96 i % len(builder.store.agents)
97 ]
98
99 # Setup table_network_config
100 builder.store.table_network_config = {}
101 for trainer_key in builder.store.trainer_networks.keys():
102 most_matches = 0
103 trainer_nets = builder.store.trainer_networks[trainer_key]
104 for sample in builder.store.network_sampling_setup:
105 matches = 0
106 for entry in sample:
107 if entry in trainer_nets:
108 matches += 1
109 if most_matches < matches:
110 matches = most_matches
111 builder.store.table_network_config[trainer_key] = sample
112
113 builder.store.networks = builder.store.network_factory()
114
115 def on_training_utility_fns(self, trainer: SystemTrainer) -> None:
116 """_summary_"""
117 # Convert network keys for the trainer.
118 trainer.store.trainer_table_entry = trainer.store.table_network_config[
119 trainer.store.trainer_id
120 ]
121 trainer.store.trainer_agents = trainer.store.agents[
122 : len(trainer.store.trainer_table_entry)
123 ]
124 trainer.store.trainer_agent_net_keys = {
125 agent: trainer.store.trainer_table_entry[a_i]
126 for a_i, agent in enumerate(trainer.store.trainer_agents)
127 }
128
129
130 class OneTrainerPerNetworkInit(BaseTrainerInit):
131 def __init__(self, config: SimpleNamespace = SimpleNamespace()):
132 """Initialises a multiple trainers.
133
134 Different trainer will be dedicated to training each network.
135
136 Args:
137 config : a dataclass specifying the component parameters.
138 """
139 self.config = config
140
141 def on_building_init_end(self, builder: SystemBuilder) -> None:
142 """.
143
144 Args:
145 builder : the system builder
146 Raises:
147 ValueError: Raises an error when trainer_networks is not
148 set to one_trainer_per_network.
149 """
150 unique_net_keys = builder.store.unique_net_keys
151
152 # Setup trainer_networks
153 builder.store.trainer_networks = {
154 f"trainer_{i}": [unique_net_keys[i]] for i in range(len(unique_net_keys))
155 }
156
157 # Get all the unique trainer network keys
158 all_trainer_net_keys = []
159 for trainer_nets in builder.store.trainer_networks.values():
160 all_trainer_net_keys.extend(trainer_nets)
161 unique_trainer_net_keys = sort_str_num(list(set(all_trainer_net_keys)))
162
163 # Check that all agent_net_keys are in trainer_networks
164 assert unique_net_keys == unique_trainer_net_keys
165 # Setup specs for each network
166 builder.store.net_spec_keys = {}
167 for i in range(len(unique_net_keys)):
168 builder.store.net_spec_keys[unique_net_keys[i]] = builder.store.agents[
169 i % len(builder.store.agents)
170 ]
171
172 # Setup table_network_config
173 builder.store.table_network_config = {}
174 for trainer_key in builder.store.trainer_networks.keys():
175 most_matches = 0
176 trainer_nets = builder.store.trainer_networks[trainer_key]
177 for sample in builder.store.network_sampling_setup:
178 matches = 0
179 for entry in sample:
180 if entry in trainer_nets:
181 matches += 1
182 if most_matches < matches:
183 matches = most_matches
184 builder.store.table_network_config[trainer_key] = sample
185
186 builder.store.networks = builder.store.network_factory()
187
188 def on_training_utility_fns(self, trainer: SystemTrainer) -> None:
189 """_summary_"""
190 # Convert network keys for the trainer.
191 trainer.store.trainer_table_entry = trainer.store.table_network_config[
192 trainer.store.trainer_id
193 ]
194 trainer.store.trainer_agents = trainer.store.agents[
195 : len(trainer.store.trainer_table_entry)
196 ]
197 trainer.store.trainer_agent_net_keys = {
198 agent: trainer.store.trainer_table_entry[a_i]
199 for a_i, agent in enumerate(trainer.store.trainer_agents)
200 }
201
202
203 @dataclass
204 class CustomTrainerInitConfig:
205 trainer_networks: Dict = field(default_factory=lambda: {})
206
207
208 class CustomTrainerInit(BaseTrainerInit):
209 def __init__(self, config: CustomTrainerInitConfig = CustomTrainerInitConfig()):
210 """Initialises custom trainers.
211
212 Custom trainer network configuration can be given as a dictionary
213 assigning specific trainers to specific networks.
214
215 Args:
216 config : a dataclass specifying the component parameters.
217 """
218
219 self.config = config
220
221 def on_building_init_end(self, builder: SystemBuilder) -> None:
222 """Assigns trainers to networks for training.
223
224 Args:
225 builder : the system builder
226 Raises:
227 ValueError: Raises an error when trainer_networks is not
228 passed in as a dictionary.
229 """
230 trainer_networks = self.config.trainer_networks
231 unique_net_keys = builder.store.unique_net_keys
232
233 # Setup trainer_networks
234 if not isinstance(trainer_networks, dict) or trainer_networks == {}:
235
236 raise ValueError("trainer_networks must be a dictionary.")
237
238 builder.store.trainer_networks = trainer_networks
239
240 # Get all the unique trainer network keys
241 all_trainer_net_keys = []
242 for trainer_nets in builder.store.trainer_networks.values():
243 all_trainer_net_keys.extend(trainer_nets)
244 unique_trainer_net_keys = sort_str_num(list(set(all_trainer_net_keys)))
245
246 # Check that all agent_net_keys are in trainer_networks
247 assert unique_net_keys == unique_trainer_net_keys
248 # Setup specs for each network
249 builder.store.net_spec_keys = {}
250 for i in range(len(unique_net_keys)):
251 builder.store.net_spec_keys[unique_net_keys[i]] = builder.store.agents[
252 i % len(builder.store.agents)
253 ]
254
255 # Setup table_network_config
256 builder.store.table_network_config = {}
257 for trainer_key in builder.store.trainer_networks.keys():
258 most_matches = 0
259 trainer_nets = builder.store.trainer_networks[trainer_key]
260 for sample in builder.store.network_sampling_setup:
261 matches = 0
262 for entry in sample:
263 if entry in trainer_nets:
264 matches += 1
265 if most_matches < matches:
266 matches = most_matches
267 builder.store.table_network_config[trainer_key] = sample
268
269 builder.store.networks = builder.store.network_factory()
270
271 def on_training_utility_fns(self, trainer: SystemTrainer) -> None:
272 """_summary_"""
273 # Convert network keys for the trainer.
274 trainer.store.trainer_table_entry = trainer.store.table_network_config[
275 trainer.store.trainer_id
276 ]
277 trainer.store.trainer_agents = trainer.store.agents[
278 : len(trainer.store.trainer_table_entry)
279 ]
280 trainer.store.trainer_agent_net_keys = {
281 agent: trainer.store.trainer_table_entry[a_i]
282 for a_i, agent in enumerate(trainer.store.trainer_agents)
283 }
284
285 @staticmethod
286 def config_class() -> Optional[Callable]:
287 """Config class used for component.
288
289 Returns:
290 config class/dataclass for component.
291 """
292 return CustomTrainerInitConfig
293
```
--- END FILES ---
Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:
```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
if __name__ == "__main__":
- asyncio.run(run_async_server("."), debug=True)
+ asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
if __name__ == "__main__":
- server = run_sync_server(".")
+ server = run_sync_server()
server.shutdown()
```
| diff --git a/mava/components/jax/training/trainer.py b/mava/components/jax/training/trainer.py
--- a/mava/components/jax/training/trainer.py
+++ b/mava/components/jax/training/trainer.py
@@ -233,7 +233,7 @@
# Setup trainer_networks
if not isinstance(trainer_networks, dict) or trainer_networks == {}:
- raise ValueError("trainer_networks must be a dictionary.")
+ raise ValueError("trainer_networks must be a non-empty dictionary.")
builder.store.trainer_networks = trainer_networks
| {"golden_diff": "diff --git a/mava/components/jax/training/trainer.py b/mava/components/jax/training/trainer.py\n--- a/mava/components/jax/training/trainer.py\n+++ b/mava/components/jax/training/trainer.py\n@@ -233,7 +233,7 @@\n # Setup trainer_networks\n if not isinstance(trainer_networks, dict) or trainer_networks == {}:\n \n- raise ValueError(\"trainer_networks must be a dictionary.\")\n+ raise ValueError(\"trainer_networks must be a non-empty dictionary.\")\n \n builder.store.trainer_networks = trainer_networks\n", "issue": "[TEST] Jax Trainer Init\n### What do you want to test?\r\nJax trainer init components\r\n\r\n### Outline of test structure\r\n* Unit tests\r\n* Test components and hooks\r\n\r\n### Definition of done\r\nPassing checks, cover all hooks, edge cases considered\r\n\r\n### Mandatory checklist before making a PR\r\n* [ ] The success criteria laid down in \u201cDefinition of done\u201d are met.\r\n* [ ] Test code is documented - docstrings for methods and classes, static types for arguments.\r\n* [ ] Documentation is updated - README, CONTRIBUTING, or other documentation.\n", "before_files": [{"content": "# python3\n# Copyright 2021 InstaDeep Ltd. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Trainer components for system builders.\"\"\"\n\nimport abc\nfrom dataclasses import dataclass, field\nfrom types import SimpleNamespace\nfrom typing import Any, Callable, Dict, Optional\n\nfrom mava.components.jax import Component\nfrom mava.core_jax import SystemBuilder, SystemTrainer\nfrom mava.utils.sort_utils import sort_str_num\n\n\nclass BaseTrainerInit(Component):\n @abc.abstractmethod\n def __init__(\n self,\n config: Any,\n ):\n \"\"\"Initialise system init components.\n\n Args:\n config : a dataclass specifying the component parameters.\n \"\"\"\n self.config = config\n\n @abc.abstractmethod\n def on_building_init_end(self, builder: SystemBuilder) -> None:\n \"\"\"Summary.\"\"\"\n pass\n\n @abc.abstractmethod\n def on_training_utility_fns(self, trainer: SystemTrainer) -> None:\n \"\"\"Summary.\"\"\"\n pass\n\n @staticmethod\n def name() -> str:\n \"\"\"Component name.\"\"\"\n\n return \"trainer_init\"\n\n\nclass SingleTrainerInit(BaseTrainerInit):\n def __init__(self, config: SimpleNamespace = SimpleNamespace()):\n \"\"\"Initialises a single trainer.\n\n Single trainer is used to train all networks.\n\n Args:\n config : a dataclass specifying the component parameters.\n \"\"\"\n self.config = config\n\n def on_building_init_end(self, builder: SystemBuilder) -> None:\n \"\"\"Assigns trainers to networks for training.\n\n Args:\n builder : the system builder\n Raises:\n ValueError: Raises an error when trainer_networks is not\n set to single_trainer.\n \"\"\"\n unique_net_keys = builder.store.unique_net_keys\n\n # Setup trainer_networks\n\n builder.store.trainer_networks = {\"trainer\": unique_net_keys}\n\n # Get all the unique trainer network keys\n all_trainer_net_keys = []\n for trainer_nets in builder.store.trainer_networks.values():\n 
all_trainer_net_keys.extend(trainer_nets)\n unique_trainer_net_keys = sort_str_num(list(set(all_trainer_net_keys)))\n\n # Check that all agent_net_keys are in trainer_networks\n assert unique_net_keys == unique_trainer_net_keys\n # Setup specs for each network\n builder.store.net_spec_keys = {}\n for i in range(len(unique_net_keys)):\n builder.store.net_spec_keys[unique_net_keys[i]] = builder.store.agents[\n i % len(builder.store.agents)\n ]\n\n # Setup table_network_config\n builder.store.table_network_config = {}\n for trainer_key in builder.store.trainer_networks.keys():\n most_matches = 0\n trainer_nets = builder.store.trainer_networks[trainer_key]\n for sample in builder.store.network_sampling_setup:\n matches = 0\n for entry in sample:\n if entry in trainer_nets:\n matches += 1\n if most_matches < matches:\n matches = most_matches\n builder.store.table_network_config[trainer_key] = sample\n\n builder.store.networks = builder.store.network_factory()\n\n def on_training_utility_fns(self, trainer: SystemTrainer) -> None:\n \"\"\"_summary_\"\"\"\n # Convert network keys for the trainer.\n trainer.store.trainer_table_entry = trainer.store.table_network_config[\n trainer.store.trainer_id\n ]\n trainer.store.trainer_agents = trainer.store.agents[\n : len(trainer.store.trainer_table_entry)\n ]\n trainer.store.trainer_agent_net_keys = {\n agent: trainer.store.trainer_table_entry[a_i]\n for a_i, agent in enumerate(trainer.store.trainer_agents)\n }\n\n\nclass OneTrainerPerNetworkInit(BaseTrainerInit):\n def __init__(self, config: SimpleNamespace = SimpleNamespace()):\n \"\"\"Initialises a multiple trainers.\n\n Different trainer will be dedicated to training each network.\n\n Args:\n config : a dataclass specifying the component parameters.\n \"\"\"\n self.config = config\n\n def on_building_init_end(self, builder: SystemBuilder) -> None:\n \"\"\".\n\n Args:\n builder : the system builder\n Raises:\n ValueError: Raises an error when trainer_networks is not\n set to one_trainer_per_network.\n \"\"\"\n unique_net_keys = builder.store.unique_net_keys\n\n # Setup trainer_networks\n builder.store.trainer_networks = {\n f\"trainer_{i}\": [unique_net_keys[i]] for i in range(len(unique_net_keys))\n }\n\n # Get all the unique trainer network keys\n all_trainer_net_keys = []\n for trainer_nets in builder.store.trainer_networks.values():\n all_trainer_net_keys.extend(trainer_nets)\n unique_trainer_net_keys = sort_str_num(list(set(all_trainer_net_keys)))\n\n # Check that all agent_net_keys are in trainer_networks\n assert unique_net_keys == unique_trainer_net_keys\n # Setup specs for each network\n builder.store.net_spec_keys = {}\n for i in range(len(unique_net_keys)):\n builder.store.net_spec_keys[unique_net_keys[i]] = builder.store.agents[\n i % len(builder.store.agents)\n ]\n\n # Setup table_network_config\n builder.store.table_network_config = {}\n for trainer_key in builder.store.trainer_networks.keys():\n most_matches = 0\n trainer_nets = builder.store.trainer_networks[trainer_key]\n for sample in builder.store.network_sampling_setup:\n matches = 0\n for entry in sample:\n if entry in trainer_nets:\n matches += 1\n if most_matches < matches:\n matches = most_matches\n builder.store.table_network_config[trainer_key] = sample\n\n builder.store.networks = builder.store.network_factory()\n\n def on_training_utility_fns(self, trainer: SystemTrainer) -> None:\n \"\"\"_summary_\"\"\"\n # Convert network keys for the trainer.\n trainer.store.trainer_table_entry = trainer.store.table_network_config[\n 
trainer.store.trainer_id\n ]\n trainer.store.trainer_agents = trainer.store.agents[\n : len(trainer.store.trainer_table_entry)\n ]\n trainer.store.trainer_agent_net_keys = {\n agent: trainer.store.trainer_table_entry[a_i]\n for a_i, agent in enumerate(trainer.store.trainer_agents)\n }\n\n\n@dataclass\nclass CustomTrainerInitConfig:\n trainer_networks: Dict = field(default_factory=lambda: {})\n\n\nclass CustomTrainerInit(BaseTrainerInit):\n def __init__(self, config: CustomTrainerInitConfig = CustomTrainerInitConfig()):\n \"\"\"Initialises custom trainers.\n\n Custom trainer network configuration can be given as a dictionary\n assigning specific trainers to specific networks.\n\n Args:\n config : a dataclass specifying the component parameters.\n \"\"\"\n\n self.config = config\n\n def on_building_init_end(self, builder: SystemBuilder) -> None:\n \"\"\"Assigns trainers to networks for training.\n\n Args:\n builder : the system builder\n Raises:\n ValueError: Raises an error when trainer_networks is not\n passed in as a dictionary.\n \"\"\"\n trainer_networks = self.config.trainer_networks\n unique_net_keys = builder.store.unique_net_keys\n\n # Setup trainer_networks\n if not isinstance(trainer_networks, dict) or trainer_networks == {}:\n\n raise ValueError(\"trainer_networks must be a dictionary.\")\n\n builder.store.trainer_networks = trainer_networks\n\n # Get all the unique trainer network keys\n all_trainer_net_keys = []\n for trainer_nets in builder.store.trainer_networks.values():\n all_trainer_net_keys.extend(trainer_nets)\n unique_trainer_net_keys = sort_str_num(list(set(all_trainer_net_keys)))\n\n # Check that all agent_net_keys are in trainer_networks\n assert unique_net_keys == unique_trainer_net_keys\n # Setup specs for each network\n builder.store.net_spec_keys = {}\n for i in range(len(unique_net_keys)):\n builder.store.net_spec_keys[unique_net_keys[i]] = builder.store.agents[\n i % len(builder.store.agents)\n ]\n\n # Setup table_network_config\n builder.store.table_network_config = {}\n for trainer_key in builder.store.trainer_networks.keys():\n most_matches = 0\n trainer_nets = builder.store.trainer_networks[trainer_key]\n for sample in builder.store.network_sampling_setup:\n matches = 0\n for entry in sample:\n if entry in trainer_nets:\n matches += 1\n if most_matches < matches:\n matches = most_matches\n builder.store.table_network_config[trainer_key] = sample\n\n builder.store.networks = builder.store.network_factory()\n\n def on_training_utility_fns(self, trainer: SystemTrainer) -> None:\n \"\"\"_summary_\"\"\"\n # Convert network keys for the trainer.\n trainer.store.trainer_table_entry = trainer.store.table_network_config[\n trainer.store.trainer_id\n ]\n trainer.store.trainer_agents = trainer.store.agents[\n : len(trainer.store.trainer_table_entry)\n ]\n trainer.store.trainer_agent_net_keys = {\n agent: trainer.store.trainer_table_entry[a_i]\n for a_i, agent in enumerate(trainer.store.trainer_agents)\n }\n\n @staticmethod\n def config_class() -> Optional[Callable]:\n \"\"\"Config class used for component.\n\n Returns:\n config class/dataclass for component.\n \"\"\"\n return CustomTrainerInitConfig\n", "path": "mava/components/jax/training/trainer.py"}], "after_files": [{"content": "# python3\n# Copyright 2021 InstaDeep Ltd. 
All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Trainer components for system builders.\"\"\"\n\nimport abc\nfrom dataclasses import dataclass, field\nfrom types import SimpleNamespace\nfrom typing import Any, Callable, Dict, Optional\n\nfrom mava.components.jax import Component\nfrom mava.core_jax import SystemBuilder, SystemTrainer\nfrom mava.utils.sort_utils import sort_str_num\n\n\nclass BaseTrainerInit(Component):\n @abc.abstractmethod\n def __init__(\n self,\n config: Any,\n ):\n \"\"\"Initialise system init components.\n\n Args:\n config : a dataclass specifying the component parameters.\n \"\"\"\n self.config = config\n\n @abc.abstractmethod\n def on_building_init_end(self, builder: SystemBuilder) -> None:\n \"\"\"Summary.\"\"\"\n pass\n\n @abc.abstractmethod\n def on_training_utility_fns(self, trainer: SystemTrainer) -> None:\n \"\"\"Summary.\"\"\"\n pass\n\n @staticmethod\n def name() -> str:\n \"\"\"Component name.\"\"\"\n\n return \"trainer_init\"\n\n\nclass SingleTrainerInit(BaseTrainerInit):\n def __init__(self, config: SimpleNamespace = SimpleNamespace()):\n \"\"\"Initialises a single trainer.\n\n Single trainer is used to train all networks.\n\n Args:\n config : a dataclass specifying the component parameters.\n \"\"\"\n self.config = config\n\n def on_building_init_end(self, builder: SystemBuilder) -> None:\n \"\"\"Assigns trainers to networks for training.\n\n Args:\n builder : the system builder\n Raises:\n ValueError: Raises an error when trainer_networks is not\n set to single_trainer.\n \"\"\"\n unique_net_keys = builder.store.unique_net_keys\n\n # Setup trainer_networks\n\n builder.store.trainer_networks = {\"trainer\": unique_net_keys}\n\n # Get all the unique trainer network keys\n all_trainer_net_keys = []\n for trainer_nets in builder.store.trainer_networks.values():\n all_trainer_net_keys.extend(trainer_nets)\n unique_trainer_net_keys = sort_str_num(list(set(all_trainer_net_keys)))\n\n # Check that all agent_net_keys are in trainer_networks\n assert unique_net_keys == unique_trainer_net_keys\n # Setup specs for each network\n builder.store.net_spec_keys = {}\n for i in range(len(unique_net_keys)):\n builder.store.net_spec_keys[unique_net_keys[i]] = builder.store.agents[\n i % len(builder.store.agents)\n ]\n\n # Setup table_network_config\n builder.store.table_network_config = {}\n for trainer_key in builder.store.trainer_networks.keys():\n most_matches = 0\n trainer_nets = builder.store.trainer_networks[trainer_key]\n for sample in builder.store.network_sampling_setup:\n matches = 0\n for entry in sample:\n if entry in trainer_nets:\n matches += 1\n if most_matches < matches:\n matches = most_matches\n builder.store.table_network_config[trainer_key] = sample\n\n builder.store.networks = builder.store.network_factory()\n\n def on_training_utility_fns(self, trainer: SystemTrainer) -> None:\n \"\"\"_summary_\"\"\"\n # Convert network keys for the trainer.\n trainer.store.trainer_table_entry = trainer.store.table_network_config[\n 
trainer.store.trainer_id\n ]\n trainer.store.trainer_agents = trainer.store.agents[\n : len(trainer.store.trainer_table_entry)\n ]\n trainer.store.trainer_agent_net_keys = {\n agent: trainer.store.trainer_table_entry[a_i]\n for a_i, agent in enumerate(trainer.store.trainer_agents)\n }\n\n\nclass OneTrainerPerNetworkInit(BaseTrainerInit):\n def __init__(self, config: SimpleNamespace = SimpleNamespace()):\n \"\"\"Initialises a multiple trainers.\n\n Different trainer will be dedicated to training each network.\n\n Args:\n config : a dataclass specifying the component parameters.\n \"\"\"\n self.config = config\n\n def on_building_init_end(self, builder: SystemBuilder) -> None:\n \"\"\".\n\n Args:\n builder : the system builder\n Raises:\n ValueError: Raises an error when trainer_networks is not\n set to one_trainer_per_network.\n \"\"\"\n unique_net_keys = builder.store.unique_net_keys\n\n # Setup trainer_networks\n builder.store.trainer_networks = {\n f\"trainer_{i}\": [unique_net_keys[i]] for i in range(len(unique_net_keys))\n }\n\n # Get all the unique trainer network keys\n all_trainer_net_keys = []\n for trainer_nets in builder.store.trainer_networks.values():\n all_trainer_net_keys.extend(trainer_nets)\n unique_trainer_net_keys = sort_str_num(list(set(all_trainer_net_keys)))\n\n # Check that all agent_net_keys are in trainer_networks\n assert unique_net_keys == unique_trainer_net_keys\n # Setup specs for each network\n builder.store.net_spec_keys = {}\n for i in range(len(unique_net_keys)):\n builder.store.net_spec_keys[unique_net_keys[i]] = builder.store.agents[\n i % len(builder.store.agents)\n ]\n\n # Setup table_network_config\n builder.store.table_network_config = {}\n for trainer_key in builder.store.trainer_networks.keys():\n most_matches = 0\n trainer_nets = builder.store.trainer_networks[trainer_key]\n for sample in builder.store.network_sampling_setup:\n matches = 0\n for entry in sample:\n if entry in trainer_nets:\n matches += 1\n if most_matches < matches:\n matches = most_matches\n builder.store.table_network_config[trainer_key] = sample\n\n builder.store.networks = builder.store.network_factory()\n\n def on_training_utility_fns(self, trainer: SystemTrainer) -> None:\n \"\"\"_summary_\"\"\"\n # Convert network keys for the trainer.\n trainer.store.trainer_table_entry = trainer.store.table_network_config[\n trainer.store.trainer_id\n ]\n trainer.store.trainer_agents = trainer.store.agents[\n : len(trainer.store.trainer_table_entry)\n ]\n trainer.store.trainer_agent_net_keys = {\n agent: trainer.store.trainer_table_entry[a_i]\n for a_i, agent in enumerate(trainer.store.trainer_agents)\n }\n\n\n@dataclass\nclass CustomTrainerInitConfig:\n trainer_networks: Dict = field(default_factory=lambda: {})\n\n\nclass CustomTrainerInit(BaseTrainerInit):\n def __init__(self, config: CustomTrainerInitConfig = CustomTrainerInitConfig()):\n \"\"\"Initialises custom trainers.\n\n Custom trainer network configuration can be given as a dictionary\n assigning specific trainers to specific networks.\n\n Args:\n config : a dataclass specifying the component parameters.\n \"\"\"\n\n self.config = config\n\n def on_building_init_end(self, builder: SystemBuilder) -> None:\n \"\"\"Assigns trainers to networks for training.\n\n Args:\n builder : the system builder\n Raises:\n ValueError: Raises an error when trainer_networks is not\n passed in as a dictionary.\n \"\"\"\n trainer_networks = self.config.trainer_networks\n unique_net_keys = builder.store.unique_net_keys\n\n # Setup trainer_networks\n if 
not isinstance(trainer_networks, dict) or trainer_networks == {}:\n\n raise ValueError(\"trainer_networks must be a non-empty dictionary.\")\n\n builder.store.trainer_networks = trainer_networks\n\n # Get all the unique trainer network keys\n all_trainer_net_keys = []\n for trainer_nets in builder.store.trainer_networks.values():\n all_trainer_net_keys.extend(trainer_nets)\n unique_trainer_net_keys = sort_str_num(list(set(all_trainer_net_keys)))\n\n # Check that all agent_net_keys are in trainer_networks\n assert unique_net_keys == unique_trainer_net_keys\n # Setup specs for each network\n builder.store.net_spec_keys = {}\n for i in range(len(unique_net_keys)):\n builder.store.net_spec_keys[unique_net_keys[i]] = builder.store.agents[\n i % len(builder.store.agents)\n ]\n\n # Setup table_network_config\n builder.store.table_network_config = {}\n for trainer_key in builder.store.trainer_networks.keys():\n most_matches = 0\n trainer_nets = builder.store.trainer_networks[trainer_key]\n for sample in builder.store.network_sampling_setup:\n matches = 0\n for entry in sample:\n if entry in trainer_nets:\n matches += 1\n if most_matches < matches:\n matches = most_matches\n builder.store.table_network_config[trainer_key] = sample\n\n builder.store.networks = builder.store.network_factory()\n\n def on_training_utility_fns(self, trainer: SystemTrainer) -> None:\n \"\"\"_summary_\"\"\"\n # Convert network keys for the trainer.\n trainer.store.trainer_table_entry = trainer.store.table_network_config[\n trainer.store.trainer_id\n ]\n trainer.store.trainer_agents = trainer.store.agents[\n : len(trainer.store.trainer_table_entry)\n ]\n trainer.store.trainer_agent_net_keys = {\n agent: trainer.store.trainer_table_entry[a_i]\n for a_i, agent in enumerate(trainer.store.trainer_agents)\n }\n\n @staticmethod\n def config_class() -> Optional[Callable]:\n \"\"\"Config class used for component.\n\n Returns:\n config class/dataclass for component.\n \"\"\"\n return CustomTrainerInitConfig\n", "path": "mava/components/jax/training/trainer.py"}]} |